Dataset schema:
content — string (length 86 to 88.9k)
title — string (length 0 to 150)
question — string (length 1 to 35.8k)
answers — list
answers_scores — list
non_answers — list
non_answers_scores — list
tags — list
name — string (length 30 to 130)
Q: Solve a math operation in a string without using the eval function (python) Solve a math operation in a string based on operation priority, without using the eval function. For example, (3*(72/2)+2-1(32%2)) should be solved without eval. I couldn't get the parenthesized operations to take priority. A: No need to reinvent the wheel; there is a very straightforward way to do this by using the PCPP package, a C/C++ preprocessor written in Python: from pcpp import Evaluator eval = Evaluator() result = eval("(3*(72/2)+2-(32%2))") print(result.value()) Note that for this case I had to manually remove the 1 in -1(32%2)), as it makes no sense and makes the library (or any other processor) crash. PCPP also allows you to add custom variables, flags and functions to evaluate an expression, which is very useful. Output: 110
Solve a math operation in a string without using the eval function (python)
Solve a math operation in a string based on operation priority, without using the eval function. For example, (3*(72/2)+2-1(32%2)) should be solved without eval. I couldn't get the parenthesized operations to take priority.
[ "No need to reinvent the wheel; there is a very straightforward way to do this by using the PCPP package, a C/C++ preprocessor written in Python:\nfrom pcpp import Evaluator\n\neval = Evaluator()\nresult = eval(\"(3*(72/2)+2-(32%2))\")\nprint(result.value())\n\nNote that for this case I had to manually remove the 1 in -1(32%2)), as it makes no sense and makes the library (or any other processor) crash.\nPCPP also allows you to add custom variables, flags and functions to evaluate an expression, which is very useful.\nOutput:\n110\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074659443_python.txt
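A note on the record above: pcpp is a third-party package, and the same expression can also be evaluated with only the Python standard library by parsing it into an AST and evaluating a whitelisted set of nodes — the parser itself resolves parentheses and operator priority. The sketch below is illustrative (the helper name safe_eval and the operator whitelist are our choices, not part of the original answer):

import ast
import operator

# Whitelist of binary operators the evaluator will accept.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Mod: operator.mod,
}

def safe_eval(expr):
    """Evaluate a purely arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("unsupported syntax")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("(3*(72/2)+2-(32%2))"))  # 110.0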
Q: Reactive kafka stream example with spring boot webflux I am trying to build a Spring Boot WebFlux application which consumes events from one Kafka topic, does some processing on them (this would involve a reactive DB lookup), and sends them to another Kafka topic. Currently the application uses WebFlux and Spring Cloud Data Streams for Kafka. Can someone please suggest whether it is fine to use the Spring Cloud Stream Kafka Streams API, or should I use something else to achieve end-to-end reactive behaviour? Tried with Spring Boot WebFlux, Spring Cloud Data Stream Kafka Streams. Processor example here A: It does not look like you are using the Spring Cloud Stream Kafka Streams binder, but you mention Spring Cloud Stream Kafka Stream API. I assume you are using the regular Kafka binder in Spring Cloud Stream based on message channels since nothing in the code you provided points to any Kafka Streams API. Assuming this is the case, the best way to achieve full end-to-end reactive behavior is to use the new reactive Kafka binder (not yet released, but part of the 4.0.0 release, which is presently in the RC stage). More details on this reactive Kafka binder are available here. This binder implementation is based on the reactor Kafka project from Project Reactor. The regular message channel-based Kafka binder is not based on reactive models and could block when consuming/producing. Therefore, if you are looking for an end-to-end reactive paradigm, use the reactive Kafka binder.
Reactive kafka stream example with spring boot webflux
I am trying to build a Spring Boot WebFlux application which consumes events from one Kafka topic, does some processing on them (this would involve a reactive DB lookup), and sends them to another Kafka topic. Currently the application uses WebFlux and Spring Cloud Data Streams for Kafka. Can someone please suggest whether it is fine to use the Spring Cloud Stream Kafka Streams API, or should I use something else to achieve end-to-end reactive behaviour? Tried with Spring Boot WebFlux, Spring Cloud Data Stream Kafka Streams. Processor example here
[ "It does not look like you are using the Spring Cloud Stream Kafka Streams binder, but you mention Spring Cloud Stream Kafka Stream API. I assume you are using the regular Kafka binder in Spring Cloud Stream based on message channels since nothing in the code you provided points to any Kafka Streams API. Assuming this is the case, the best way to achieve full end-to-end reactive behavior is to use the new reactive Kafka binder (not yet released, but part of the 4.0.0 release, which is presently in the RC stage). More details on this reactive Kafka binder are available here. This binder implementation is based on the reactor Kafka project from Project Reactor. The regular message channel-based Kafka binder is not based on reactive models and could block when consuming/producing. Therefore, if you are looking for an end-to-end reactive paradigm, use the reactive Kafka binder.\n" ]
[ 0 ]
[]
[]
[ "apache_kafka", "microservices", "spring_cloud_stream" ]
stackoverflow_0074653093_apache_kafka_microservices_spring_cloud_stream.txt
Q: How to convert numpy.ndarray image to discord.File? I found a similar question, but about PIL: How can I upload a PIL Image object to a Discord chat without saving the image?, and using it results in AttributeError: 'numpy.ndarray' object has no attribute 'save', which is surely because I use OpenCV and not PIL. The question is how to convert this numpy.ndarray to discord.File (using binary or otherwise)? A: In case anybody else gets this problem, here is a function that takes a cv2 image (which is basically a numpy.ndarray) and returns a discord.File: import cv2 import numpy as np import discord from io import BytesIO def cv2discordfile(img): img_encode = cv2.imencode('.png', img)[1] data_encode = np.array(img_encode) byte_encode = data_encode.tobytes() byteImage = BytesIO(byte_encode) image = discord.File(byteImage, filename='image.png') return image
How to convert numpy.ndarray image to discord.File?
I found a similar question, but about PIL: How can I upload a PIL Image object to a Discord chat without saving the image?, and using it results in AttributeError: 'numpy.ndarray' object has no attribute 'save', which is surely because I use OpenCV and not PIL. The question is how to convert this numpy.ndarray to discord.File (using binary or otherwise)?
[ "In case anybody else gets this problem, here is a function that takes a cv2 image (which is basically a numpy.ndarray) and returns a discord.File:\nimport cv2\nimport numpy as np\nimport discord\nfrom io import BytesIO\n\ndef cv2discordfile(img):\n img_encode = cv2.imencode('.png', img)[1]\n data_encode = np.array(img_encode)\n byte_encode = data_encode.tobytes()\n byteImage = BytesIO(byte_encode)\n image = discord.File(byteImage, filename='image.png')\n return image\n\n" ]
[ 0 ]
[]
[]
[ "discord.py", "numpy_ndarray", "python" ]
stackoverflow_0074657948_discord.py_numpy_ndarray_python.txt
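As a usage note: the helper above produces a discord.File, which is typically passed to a send call. A minimal, hypothetical discord.py command using it might look like this (the bot setup, command name, and image path are placeholders, and it assumes cv2discordfile plus its imports from the answer):

import cv2
import discord
from discord.ext import commands

bot = commands.Bot(command_prefix="!", intents=discord.Intents.default())

@bot.command()
async def snapshot(ctx):
    img = cv2.imread("photo.png")  # any BGR numpy.ndarray works here
    await ctx.send(file=cv2discordfile(img))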
Q: pygame android presplash image stays forever I am facing a strange problem with my small game written in Python. I am pushing it to Android using this doc: http://pygame.renpy.org/index.html But my android-presplash.jpg stays forever on the screen, even though I am blitting a different introduction screen. (http://pygame.renpy.org/android-advanced.html) I tried with all my virtual devices and my LG mobile with Android 2.3. Please suggest some solution to resolve this. Below is part of my code: import pygame class Game: def __init__(self): pygame.init() self.window = pygame.display.set_mode((1300,500)) def checkRectCollidePoint(self,rect,pos,x): NewRect=pygame.Rect(rect.left+x[0], rect.top+x[1], rect.width, rect.height) if NewRect.collidepoint(pos): return True def loadingScreen(self): loadingScreenSprite=pygame.image.load('Intro.jpg').convert_alpha() loadingScreenSprite = loadingScreenSprite.convert_alpha() myfont = pygame.font.Font("comic.ttf", 40) ExitText = myfont.render("Exit :", 1,[0,0,128]) dirty=[] dirty.append(self.window.blit(loadingScreenSprite,(0,0))) dirty.append(self.window.blit(ExitText, [120,300])) pygame.display.update(dirty) needgoodVarRunning=True while needgoodVarRunning: for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() if self.checkRectCollidePoint(ExitText.get_rect(),pos,[120,300]): needgoodVarRunning=False def main(): game = Game() game.loadingScreen() if __name__ == "__main__": main() A: It is most likely a problem with a driver. pygame.init() initializes all pygame modules. This may take some time on some systems. If you don't need all pygame modules, just leave it out. For the display and event handling it is not necessary to call pygame.init() at all. If you need other modules, you can initialize them separately (e.g. pygame.font.init(), pygame.mixer.init()). See pygame.init() Initialize all imported pygame modules. No exceptions will be raised if a module fails, but the total number of successful and failed inits will be returned as a tuple. You can always initialize individual modules manually, but pygame.init() initializing all imported pygame modules is a convenient way to get everything started. The init() functions for individual modules will raise exceptions when they fail. You may want to initialize the different modules separately to speed up your program or to not use modules your game does not require.
pygame android presplash image stays forever
I am facing a strange problem with my small game written in Python. I am pushing it to Android using this doc: http://pygame.renpy.org/index.html But my android-presplash.jpg stays forever on the screen, even though I am blitting a different introduction screen. (http://pygame.renpy.org/android-advanced.html) I tried with all my virtual devices and my LG mobile with Android 2.3. Please suggest some solution to resolve this. Below is part of my code: import pygame class Game: def __init__(self): pygame.init() self.window = pygame.display.set_mode((1300,500)) def checkRectCollidePoint(self,rect,pos,x): NewRect=pygame.Rect(rect.left+x[0], rect.top+x[1], rect.width, rect.height) if NewRect.collidepoint(pos): return True def loadingScreen(self): loadingScreenSprite=pygame.image.load('Intro.jpg').convert_alpha() loadingScreenSprite = loadingScreenSprite.convert_alpha() myfont = pygame.font.Font("comic.ttf", 40) ExitText = myfont.render("Exit :", 1,[0,0,128]) dirty=[] dirty.append(self.window.blit(loadingScreenSprite,(0,0))) dirty.append(self.window.blit(ExitText, [120,300])) pygame.display.update(dirty) needgoodVarRunning=True while needgoodVarRunning: for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() if self.checkRectCollidePoint(ExitText.get_rect(),pos,[120,300]): needgoodVarRunning=False def main(): game = Game() game.loadingScreen() if __name__ == "__main__": main()
[ "It is most likely a problem with a driver. pygame.init() initializes all pygame modules. This may take some time on some systems. If you don't need all pygame modules, just leave it out. For the display and event handling it is not necessary to call pygame.init() at all. If you need other modules, you can initialize them separately (e.g. pygame.font.init(), pygame.mixer.init()).\nSee pygame.init()\n\nInitialize all imported pygame modules. No exceptions will be raised if a module fails, but the total number of successful and failed inits will be returned as a tuple. You can always initialize individual modules manually, but pygame.init() initializing all imported pygame modules is a convenient way to get everything started. The init() functions for individual modules will raise exceptions when they fail.\nYou may want to initialize the different modules separately to speed up your program or to not use modules your game does not require.\n\n" ]
[ 0 ]
[]
[]
[ "android", "pygame" ]
stackoverflow_0014913000_android_pygame.txt
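To make the answer's suggestion concrete, here is a small sketch of initializing only the modules this particular game appears to need, instead of calling pygame.init() — the module choice is inferred from the question's code, so treat it as illustrative:

import pygame

pygame.display.init()  # needed for set_mode() and display.update()
pygame.font.init()     # needed because the game renders text with pygame.font.Font
# pygame.mixer.init() is deliberately skipped: the game plays no sound.

window = pygame.display.set_mode((1300, 500))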
Q: How to optimize hyper-parameters of a PPO for a gym environment training I would like to use an optimization algorithm (hyperOptSearch) using ray.tune. On the official documentation, they use this syntax: tuner = tune.Tuner( objective, tune_config=tune.TuneConfig( metric="mean_loss", mode="min", search_alg=algo, num_samples=num_samples, ), param_space=search_config, ) results = tuner.fit() where objective is a function to minimize (or maximize) defined as: def evaluate(step, width, height): time.sleep(0.1) return (0.1 + width * step / 100) ** (-1) + height * 0.1 def objective(config): for step in range(config["steps"]): score = evaluate(step, config["width"], config["height"]) session.report({"iterations": step, "mean_loss": score}) I would like to use this syntax, but with an 'evaluate' function evaluating the episode_reward_mean of my gym environment, which is a LunarLander-v2 env. I recently used this config: config = { "env": "LunarLander-v2", "sgd_minibatch_size": 5000, "num_sgd_iter": 50, "lr": 5e-5, "lambda": 0.8, "vf_loss_coeff": 0.7, "kl_target": 0.01, "kl_coeff": 0.6, "entropy_coeff": 0.001, "clip_param": 0.38, "train_batch_size": 25000, # "monitor": True, # "model": {"free_log_std": True}, "num_workers": 1, "num_gpus": 0, # "batch_mode": "complete_episodes" }, and this syntax to train the model: analysis = tune.Tuner( "PPO", # AI algorithm used tune_config=tune.TuneConfig( metric="episode_reward_mean", mode="max", search_alg=HyperOptSearch(metric="episode_reward_mean", mode="max"), # num_samples will repeat the entire config 10 times. num_samples=10, ), param_space=config, # local_dir="res_LunarLander" ) results = analysis.fit() What could I do to solve my problem? I used to train my model without using any optimization algorithm. I would like to use one to improve my parameters. A: You need to modify your config to make use of Tune search space distributions (https://docs.ray.io/en/latest/tune/tutorials/tune-search-spaces.html), which will let you specify lower and upper bounds for possible values in your search space. Without them (as it is in your case), you will only have constant values and thus identical configurations for each trial. For example, if you want "lr" to be sampled from a logarithmic distribution between 5e-6 and 5e-4, you'd specify it as "lr": tune.loguniform(5e-6, 5e-4).
How to optimize hyper-parameters of a PPO for a gym environment training
I would like to use an optimization algorithm (hyperOptSearch) using ray.tune. On the official documentation, they use this syntax: tuner = tune.Tuner( objective, tune_config=tune.TuneConfig( metric="mean_loss", mode="min", search_alg=algo, num_samples=num_samples, ), param_space=search_config, ) results = tuner.fit() where objective is a function to minimize (or maximize) defined as: def evaluate(step, width, height): time.sleep(0.1) return (0.1 + width * step / 100) ** (-1) + height * 0.1 def objective(config): for step in range(config["steps"]): score = evaluate(step, config["width"], config["height"]) session.report({"iterations": step, "mean_loss": score}) I would like to use this syntax, but with an 'evaluate' function evaluating the episode_reward_mean of my gym environment, which is a LunarLander-v2 env. I recently used this config: config = { "env": "LunarLander-v2", "sgd_minibatch_size": 5000, "num_sgd_iter": 50, "lr": 5e-5, "lambda": 0.8, "vf_loss_coeff": 0.7, "kl_target": 0.01, "kl_coeff": 0.6, "entropy_coeff": 0.001, "clip_param": 0.38, "train_batch_size": 25000, # "monitor": True, # "model": {"free_log_std": True}, "num_workers": 1, "num_gpus": 0, # "batch_mode": "complete_episodes" }, and this syntax to train the model: analysis = tune.Tuner( "PPO", # AI algorithm used tune_config=tune.TuneConfig( metric="episode_reward_mean", mode="max", search_alg=HyperOptSearch(metric="episode_reward_mean", mode="max"), # num_samples will repeat the entire config 10 times. num_samples=10, ), param_space=config, # local_dir="res_LunarLander" ) results = analysis.fit() What could I do to solve my problem? I used to train my model without using any optimization algorithm. I would like to use one to improve my parameters.
[ "You need to modify your config to make use of Tune search space distributions (https://docs.ray.io/en/latest/tune/tutorials/tune-search-spaces.html), which will let you specify lower and upper bounds for possible values in your search space. Without them (as it is in your case), you will only have constant values and thus identical configurations for each trial.\nFor example, if you want \"lr\" to be sampled from a logarithmic distribution between 5e-6 and 5e-4, you'd specify it as \"lr\": tune.loguniform(5e-6, 5e-4).\n" ]
[ 0 ]
[]
[]
[ "hyperparameters", "openai_gym", "python", "ray", "reinforcement_learning" ]
stackoverflow_0074635036_hyperparameters_openai_gym_python_ray_reinforcement_learning.txt
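Applying the answer to the config from the question might look roughly like the sketch below — the constants keep their plain values while the tunable parameters get Tune search-space distributions. The ranges shown are illustrative guesses, not tuned recommendations:

from ray import tune

config = {
    "env": "LunarLander-v2",
    "num_workers": 1,
    "num_gpus": 0,
    # Tunable parameters now have ranges for HyperOptSearch to sample from:
    "lr": tune.loguniform(5e-6, 5e-4),
    "lambda": tune.uniform(0.7, 1.0),
    "clip_param": tune.uniform(0.1, 0.4),
    "entropy_coeff": tune.loguniform(1e-4, 1e-2),
}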
Q: How do I specify dynamic template's version when calling v3 api? I assume the dynamic template's version is to specify the template's id and version when sending the email. How do I specify the dynamic template's version when calling the v3 api? I'm using the C# client library. A: Just click the version and you will see it in the address bar of your browser, e.g. https://mc.sendgrid.com/dynamic-templates/YOUR_TEMPLATE_ID/version/YOUR_VERSION_ID/editor And you can activate the version following the official document. After the version has been activated, you will be able to send the active version of your template. Note: You might need to switch your API key from Restricted Access to Full Access. A: Just to expand on Chuan's answer: you can't explicitly use a specific version from the template. SendGrid always sends the active version. There might be some workarounds if you use the marketing/test/send_email endpoint, but controlling the template's personalisation might be limited. More info here: https://docs.sendgrid.com/api-reference/send-test-e-mail/send-a-test-marketing-email
How do I specify dynamic template's version when calling v3 api?
I assume the dynamic template's version is to specify the template's id and version when sending the email. How do I specify the dynamic template's version when calling the v3 api? I'm using the C# client library.
[ "Just click the version and you will see it in the address bar of your browser, e.g.\nhttps://mc.sendgrid.com/dynamic-templates/YOUR_TEMPLATE_ID/version/YOUR_VERSION_ID/editor\nAnd you can activate the version following the official document.\nAfter the version has been activated, you will be able to send the active version of your template.\nNote: You might need to switch your API key from Restricted Access to Full Access.\n", "Just to expand on Chuan's answer: you can't explicitly use a specific version from the template. SendGrid always sends the active version.\nThere might be some workarounds if you use the marketing/test/send_email endpoint, but controlling the template's personalisation might be limited.\nMore info here:\nhttps://docs.sendgrid.com/api-reference/send-test-e-mail/send-a-test-marketing-email\n" ]
[ 0, 0 ]
[]
[]
[ "sendgrid", "sendgrid_api_v3", "sendgrid_templates" ]
stackoverflow_0070268753_sendgrid_sendgrid_api_v3_sendgrid_templates.txt
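For reference, this is why the answers say only the active version can be sent: the v3 mail-send payload has a template_id field but no version field. A hedged sketch with SendGrid's Python helper library (the question uses the C# client, but the underlying v3 request is the same; all IDs and addresses below are placeholders):

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

message = Mail(from_email="from@example.com", to_emails="to@example.com")
message.template_id = "d-0123456789abcdef0123456789abcdef"  # no version field exists
message.dynamic_template_data = {"first_name": "Ada"}  # fills {{first_name}} handlebars

SendGridAPIClient("SG.placeholder-api-key").send(message)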
Q: (Stochastic) Gradient Descent implementation in Python I am trying to do (preferably Stochastic) Gradient Descent to minimize a custom loss function. I tried using the scikit-learn SGDRegressor class. However, SGDRegressor doesn't seem to allow me to minimize a custom loss function without data, and if I can use a custom loss function, I can only use it for regression, to fit data with the fit() method. Is there a way to use the scikit implementation or any other Python implementation of stochastic gradient descent to minimize a custom function without data? A: Implementation of Basic Gradient Descent Now that you know how the basic gradient descent works, you can implement it in Python. You’ll use only plain Python and NumPy, which enables you to write concise code when working with arrays (or vectors) and gain a performance boost. This is a basic implementation of the algorithm that starts with an arbitrary point, start, iteratively moves it toward the minimum, and returns a point that is hopefully at or near the minimum: def gradient_descent(gradient, start, learn_rate, n_iter): vector = start for _ in range(n_iter): diff = -learn_rate * gradient(vector) vector += diff return vector gradient_descent() takes four arguments: gradient is the function or any Python callable object that takes a vector and returns the gradient of the function you’re trying to minimize. start is the point where the algorithm starts its search, given as a sequence (tuple, list, NumPy array, and so on) or scalar (in the case of a one-dimensional problem). learn_rate is the learning rate that controls the magnitude of the vector update. n_iter is the number of iterations. This function does exactly what’s described above: it takes a starting point (line 2), iteratively updates it according to the learning rate and the value of the gradient (lines 3 to 5), and finally returns the last position found. Before you apply gradient_descent(), you can add another termination criterion: import numpy as np def gradient_descent( gradient, start, learn_rate, n_iter=50, tolerance=1e-06): vector = start for _ in range(n_iter): diff = -learn_rate * gradient(vector) if np.all(np.abs(diff) <= tolerance): break vector += diff return vector You now have the additional parameter tolerance (line 4), which specifies the minimal allowed movement in each iteration. You’ve also defined the default values for tolerance and n_iter, so you don’t have to specify them each time you call gradient_descent(). Lines 9 and 10 enable gradient_descent() to stop iterating and return the result before n_iter is reached if the vector update in the current iteration is less than or equal to tolerance. This often happens near the minimum, where gradients are usually very small. Unfortunately, it can also happen near a local minimum or a saddle point. Line 9 uses the convenient NumPy functions numpy.all() and numpy.abs() to compare the absolute values of diff and tolerance in a single statement. That’s why you import numpy on line 1. Now that you have the first version of gradient_descent(), it’s time to test your function. You’ll start with a small example and find the minimum of the function C = v². This function has only one independent variable (v), and its gradient is the derivative 2v. It’s a differentiable convex function, and the analytical way to find its minimum is straightforward. However, in practice, analytical differentiation can be difficult or even impossible and is often approximated with numerical methods. 
You need only one statement to test your gradient descent implementation: >>> gradient_descent( ... gradient=lambda v: 2 * v, start=10.0, learn_rate=0.2) 2.210739197207331e-06 You use the lambda function lambda v: 2 * v to provide the gradient of v². You start from the value 10.0 and set the learning rate to 0.2. You get a result that’s very close to zero, which is the correct minimum. The figure in the original answer shows the movement of the solution through the iterations: you start from the rightmost green dot (v = 10) and move toward the minimum (v = 0). The updates are larger at first because the value of the gradient (and slope) is higher. As you approach the minimum, they become lower. Improvement of the Code You can make gradient_descent() more robust, comprehensive, and better-looking without modifying its core functionality: import numpy as np def gradient_descent( gradient, x, y, start, learn_rate=0.1, n_iter=50, tolerance=1e-06, dtype="float64"): # Checking if the gradient is callable if not callable(gradient): raise TypeError("'gradient' must be callable") # Setting up the data type for NumPy arrays dtype_ = np.dtype(dtype) # Converting x and y to NumPy arrays x, y = np.array(x, dtype=dtype_), np.array(y, dtype=dtype_) if x.shape[0] != y.shape[0]: raise ValueError("'x' and 'y' lengths do not match") # Initializing the values of the variables vector = np.array(start, dtype=dtype_) # Setting up and checking the learning rate learn_rate = np.array(learn_rate, dtype=dtype_) if np.any(learn_rate <= 0): raise ValueError("'learn_rate' must be greater than zero") # Setting up and checking the maximal number of iterations n_iter = int(n_iter) if n_iter <= 0: raise ValueError("'n_iter' must be greater than zero") # Setting up and checking the tolerance tolerance = np.array(tolerance, dtype=dtype_) if np.any(tolerance <= 0): raise ValueError("'tolerance' must be greater than zero") # Performing the gradient descent loop for _ in range(n_iter): # Recalculating the difference diff = -learn_rate * np.array(gradient(x, y, vector), dtype_) # Checking if the absolute difference is small enough if np.all(np.abs(diff) <= tolerance): break # Updating the values of the variables vector += diff return vector if vector.shape else vector.item() A: Yes, you can use scikit-learn's SGDRegressor class to minimize a custom loss function without data. The SGDRegressor class allows you to specify a custom loss function using the loss parameter. For example, suppose you have a custom loss function called custom_loss_function that you want to minimize using stochastic gradient descent. You can do this using the following code: from sklearn.linear_model import SGDRegressor # Define your custom loss function def custom_loss_function(y_true, y_pred): # Your custom loss function implementation goes here pass # Create an SGDRegressor object with the custom loss function sgd_regressor = SGDRegressor(loss=custom_loss_function) # Use the fit() method to minimize the custom loss function without data sgd_regressor.fit(X=None, y=None) In this code, the SGDRegressor object is created with the custom_loss_function as the loss function. Then, the fit() method is used to minimize the custom loss function without data. Note that the X and y arguments to the fit() method are set to None because we are not using any data. Please note that the custom_loss_function should be implemented according to the scikit-learn loss function API. 
This means that the custom_loss_function should take two arguments: y_true and y_pred, and should return a scalar value representing the loss. You can find more details about the loss function API in the scikit-learn documentation: https://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator
(Stochastic) Gradient Descent implementation in Python
I am trying to do (preferably Stochastic) Gradient Descent to minimize a custom loss function. I tried using the scikit-learn SGDRegressor class. However, SGDRegressor doesn't seem to allow me to minimize a custom loss function without data, and if I can use a custom loss function, I can only use it for regression, to fit data with the fit() method. Is there a way to use the scikit implementation or any other Python implementation of stochastic gradient descent to minimize a custom function without data?
[ "Implementation of Basic Gradient Descent\nNow that you know how the basic gradient descent works, you can implement it in Python. You’ll use only plain Python and NumPy, which enables you to write concise code when working with arrays (or vectors) and gain a performance boost.\nThis is a basic implementation of the algorithm that starts with an arbitrary point, start, iteratively moves it toward the minimum, and returns a point that is hopefully at or near the minimum:\ndef gradient_descent(gradient, start, learn_rate, n_iter):\n vector = start\n for _ in range(n_iter):\n diff = -learn_rate * gradient(vector)\n vector += diff\n return vector\n\ngradient_descent() takes four arguments:\ngradient is the function or any Python callable object that takes a vector and returns the gradient of the function you’re trying to minimize.\nstart is the point where the algorithm starts its search, given as a sequence (tuple, list, NumPy array, and so on) or scalar (in the case of a one-dimensional problem).\nlearn_rate is the learning rate that controls the magnitude of the vector update.\nn_iter is the number of iterations.\nThis function does exactly what’s described above: it takes a starting point (line 2), iteratively updates it according to the learning rate and the value of the gradient (lines 3 to 5), and finally returns the last position found.\nBefore you apply gradient_descent(), you can add another termination criterion:\nimport numpy as np\n\ndef gradient_descent(\n gradient, start, learn_rate, n_iter=50, tolerance=1e-06):\n vector = start\n for _ in range(n_iter):\n diff = -learn_rate * gradient(vector)\n if np.all(np.abs(diff) <= tolerance):\n break\n vector += diff\n return vector\n\nYou now have the additional parameter tolerance (line 4), which specifies the minimal allowed movement in each iteration. You’ve also defined the default values for tolerance and n_iter, so you don’t have to specify them each time you call gradient_descent().\nLines 9 and 10 enable gradient_descent() to stop iterating and return the result before n_iter is reached if the vector update in the current iteration is less than or equal to tolerance. This often happens near the minimum, where gradients are usually very small. Unfortunately, it can also happen near a local minimum or a saddle point.\nLine 9 uses the convenient NumPy functions numpy.all() and numpy.abs() to compare the absolute values of diff and tolerance in a single statement. That’s why you import numpy on line 1.\nNow that you have the first version of gradient_descent(), it’s time to test your function. You’ll start with a small example and find the minimum of the function C = v².\nThis function has only one independent variable (v), and its gradient is the derivative 2v. It’s a differentiable convex function, and the analytical way to find its minimum is straightforward. However, in practice, analytical differentiation can be difficult or even impossible and is often approximated with numerical methods.\nYou need only one statement to test your gradient descent implementation:\n>>> gradient_descent(\n... gradient=lambda v: 2 * v, start=10.0, learn_rate=0.2)\n2.210739197207331e-06\n\nYou use the lambda function lambda v: 2 * v to provide the gradient of v². You start from the value 10.0 and set the learning rate to 0.2. 
You get a result that’s very close to zero, which is the correct minimum.\nThe figure in the original answer shows the movement of the solution through the iterations: you start from the rightmost green dot (v = 10) and move toward the minimum (v = 0). The updates are larger at first because the value of the gradient (and slope) is higher. As you approach the minimum, they become lower.\nImprovement of the Code\nYou can make gradient_descent() more robust, comprehensive, and better-looking without modifying its core functionality:\nimport numpy as np\n\ndef gradient_descent(\n gradient, x, y, start, learn_rate=0.1, n_iter=50, tolerance=1e-06,\n dtype=\"float64\"):\n # Checking if the gradient is callable\n if not callable(gradient):\n raise TypeError(\"'gradient' must be callable\")\n\n # Setting up the data type for NumPy arrays\n dtype_ = np.dtype(dtype)\n\n # Converting x and y to NumPy arrays\n x, y = np.array(x, dtype=dtype_), np.array(y, dtype=dtype_)\n if x.shape[0] != y.shape[0]:\n raise ValueError(\"'x' and 'y' lengths do not match\")\n\n # Initializing the values of the variables\n vector = np.array(start, dtype=dtype_)\n\n # Setting up and checking the learning rate\n learn_rate = np.array(learn_rate, dtype=dtype_)\n if np.any(learn_rate <= 0):\n raise ValueError(\"'learn_rate' must be greater than zero\")\n\n # Setting up and checking the maximal number of iterations\n n_iter = int(n_iter)\n if n_iter <= 0:\n raise ValueError(\"'n_iter' must be greater than zero\")\n\n # Setting up and checking the tolerance\n tolerance = np.array(tolerance, dtype=dtype_)\n if np.any(tolerance <= 0):\n raise ValueError(\"'tolerance' must be greater than zero\")\n\n # Performing the gradient descent loop\n for _ in range(n_iter):\n # Recalculating the difference\n diff = -learn_rate * np.array(gradient(x, y, vector), dtype_)\n\n # Checking if the absolute difference is small enough\n if np.all(np.abs(diff) <= tolerance):\n break\n\n # Updating the values of the variables\n vector += diff\n\n return vector if vector.shape else vector.item()\n\n", "Yes, you can use scikit-learn's SGDRegressor class to minimize a custom loss function without data. The SGDRegressor class allows you to specify a custom loss function using the loss parameter.\nFor example, suppose you have a custom loss function called custom_loss_function that you want to minimize using stochastic gradient descent. You can do this using the following code:\nfrom sklearn.linear_model import SGDRegressor\n\n# Define your custom loss function\ndef custom_loss_function(y_true, y_pred):\n # Your custom loss function implementation goes here\n pass\n\n# Create an SGDRegressor object with the custom loss function\nsgd_regressor = SGDRegressor(loss=custom_loss_function)\n\n# Use the fit() method to minimize the custom loss function without data\nsgd_regressor.fit(X=None, y=None)\n\nIn this code, the SGDRegressor object is created with the custom_loss_function as the loss function. Then, the fit() method is used to minimize the custom loss function without data. Note that the X and y arguments to the fit() method are set to None because we are not using any data.\nPlease note that the custom_loss_function should be implemented according to the scikit-learn loss function API. This means that the custom_loss_function should take two arguments: y_true and y_pred, and should return a scalar value representing the loss. 
You can find more details about the loss function API in the scikit-learn documentation:\nhttps://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator\n" ]
[ 1, 0 ]
[]
[]
[ "gradient_descent", "keras", "python", "scikit_learn", "tensorflow" ]
stackoverflow_0074631492_gradient_descent_keras_python_scikit_learn_tensorflow.txt
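Tying the answers back to the original question — minimizing a custom function with no data — one option is to reuse the basic loop from the first answer with a numerically approximated gradient, so only the loss function itself is required. A toy sketch (the loss, step size, and names are all illustrative):

def custom_loss(v):
    return (v - 3.0) ** 2 + 1.0  # toy function with its minimum at v = 3

def numerical_gradient(f, v, eps=1e-6):
    return (f(v + eps) - f(v - eps)) / (2 * eps)  # central difference

vector = 10.0  # arbitrary starting point, as in the tutorial answer
for _ in range(200):
    diff = -0.1 * numerical_gradient(custom_loss, vector)
    if abs(diff) <= 1e-8:
        break
    vector += diff

print(round(vector, 4))  # approximately 3.0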
Q: How do I find out how many days have passed since the date saved in the SQLite table? I'm writing my first Android app. I save the data to my SQLite database like this: ContentValues values = new ContentValues(); values.put("mydate", System.currentTimeMillis()); mydatabase.update("mytable", values, "id = ?", new String[]{String.valueOf(w.Id())}); Then I need to get how many days have passed since that day. I tried this: SELECT ((julianday('now') - 2440587.5)*86400000 - mydate)/1000 * 60 * 60 * 24 AS days FROM mytable And I get some absurd numbers. Because of the absence of the julianday() function in old Java versions, I cannot get this value with the simple SELECT julianday('now') - mydate AS days How can I get this in old Java? A: I found it! I just save every date in my database as 'YYYY-MM-DD'. Then simply: SELECT (strftime('%s', 'now') - strftime('%s', mydate)) / 86400.0 from mytable
How do I find out how many days have passed since the date saved in the SQLite table?
I'm writing my first Android app. I save the data to my SQLite database like this: ContentValues values = new ContentValues(); values.put("mydate", System.currentTimeMillis()); mydatabase.update("mytable", values, "id = ?", new String[]{String.valueOf(w.Id())}); Then I need to get how many days have passed since that day. I tried this: SELECT ((julianday('now') - 2440587.5)*86400000 - mydate)/1000 * 60 * 60 * 24 AS days FROM mytable And I get some absurd numbers. Because of the absence of the julianday() function in old Java versions, I cannot get this value with the simple SELECT julianday('now') - mydate AS days How can I get this in old Java?
[ "I found it!\nI just save every date in my database as 'YYYY-MM-DD'. Then simply:\nSELECT (strftime('%s', 'now') - strftime('%s', mydate)) / 86400.0 from mytable\n" ]
[ 0 ]
[]
[]
[ "android", "java", "julian_date", "sqlite" ]
stackoverflow_0074658743_android_java_julian_date_sqlite.txt
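The accepted formula can be sanity-checked quickly with Python's built-in sqlite3 module (the Android/Java code would issue the same SQL); the table, column, and sample date below mirror the question and are otherwise arbitrary:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER, mydate TEXT)")
con.execute("INSERT INTO mytable VALUES (1, '2022-11-01')")

days = con.execute(
    "SELECT (strftime('%s', 'now') - strftime('%s', mydate)) / 86400.0 "
    "FROM mytable WHERE id = 1"
).fetchone()[0]
print(days)  # e.g. about 31.5 if 'now' falls in early December 2022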
Q: System.loadLibrary() can't find a shared library on linux My Linux (Ubuntu 22.04) has the FluidSynth library package installed: lrwxrwxrwx 1 root root 22 févr. 3 2022 /lib/x86_64-linux-gnu/libfluidsynth.so.3 -> libfluidsynth.so.3.0.5 -rw-r--r-- 1 root root 551240 févr. 3 2022 /lib/x86_64-linux-gnu/libfluidsynth.so.3.0.5 My java 17 program uses System.loadLibrary("fluidsynth") to load the shared library. It does not find the library using the default java search path: no fluidsynth in java.library.path: /usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib I tried running my program with -Djava.library.path=/lib/x86_64-linux-gnu, but same problem: no fluidsynth in java.library.path: /lib/x86_64-linux-gnu I checked that the fluidsynth lib is in the ldconfig cache: > ldconfig -p | grep fluid libfluidsynth.so.3 (libc6,x86-64) => /lib/x86_64-linux-gnu/libfluidsynth.so.3 I'm lost... Note: I was able to make it work using the default java search path and manually copying a libfluidsynth.so file in /usr/lib. But I can't ask the program users to do this kind of hack, it should work out of the box on any Linux, as long as the fluidsynth library is installed. A: I ended up using System.load("/lib/x86_64-linux-gnu/libfluidsynth.so.3"); which works and loads the lib plus all its dependencies. It's not perfect though, because I need to locate the library file first. It's still not clear to me why Java does not accept *.so.version for the initial lib, but accepts it for its dependencies.
System.loadLibrary() can't find a shared library on linux
My Linux (Ubuntu 22.04) has the FluidSynth library package installed: lrwxrwxrwx 1 root root 22 févr. 3 2022 /lib/x86_64-linux-gnu/libfluidsynth.so.3 -> libfluidsynth.so.3.0.5 -rw-r--r-- 1 root root 551240 févr. 3 2022 /lib/x86_64-linux-gnu/libfluidsynth.so.3.0.5 My java 17 program uses System.loadLibrary("fluidsynth") to load the shared library. It does not find the library using the default java search path: no fluidsynth in java.library.path: /usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib I tried running my program with -Djava.library.path=/lib/x86_64-linux-gnu, but same problem: no fluidsynth in java.library.path: /lib/x86_64-linux-gnu I checked that the fluidsynth lib is in the ldconfig cache: > ldconfig -p | grep fluid libfluidsynth.so.3 (libc6,x86-64) => /lib/x86_64-linux-gnu/libfluidsynth.so.3 I'm lost... Note: I was able to make it work using the default java search path and manually copying a libfluidsynth.so file in /usr/lib. But I can't ask the program users to do this kind of hack, it should work out of the box on any Linux, as long as the fluidsynth library is installed.
[ "I ended up using System.load(\"/lib/x86_64-linux-gnu/libfluidsynth.so.3\"); which works and loads the lib plus all its dependencies. It's not perfect though, because I need to locate the library file first.\nIt's still not clear to me why Java does not accept *.so.version for the initial lib, but accepts it for its dependencies.\n" ]
[ 0 ]
[]
[]
[ "java", "linux", "shared_libraries" ]
stackoverflow_0074604651_java_linux_shared_libraries.txt
Q: NET 6: Controller method is not reachable I have a simple NET 6 application. I added a controller and am trying to test it. When I run it, I see the method on the Swagger page; executing the method in Swagger returns 200, but it does not return "Hello World". Then I added logger output to the controller constructor and to the method - no outputs. What could be the reason for the problem? My Program file using MTApp.Infra; var builder = WebApplication.CreateBuilder(args); builder.Services.AddControllers(); builder.Services.AddMvc(); builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); builder.Services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>(); builder.Services.AddDistributedMemoryCache(); builder.Services.AddSession(options => { options.IdleTimeout = TimeSpan.FromSeconds(1800); options.Cookie.HttpOnly = true; options.Cookie.IsEssential = true; }); //builder.Services.AddSession();// ! var app = builder.Build(); app.UseSession(); //<--- add this line // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(c => { c.SwaggerEndpoint("/swagger/v1/swagger.json", "MTApp API V1"); }); } app.UseTenant(); app.UseRouting(); app.UseAuthorization(); app.UseEndpoints(endpoints => { endpoints.MapControllerRoute(name: "default", pattern: "{controller=Home}/{action=Index}/{id?}"); }); app.UseDeveloperExceptionPage(); app.UseHttpsRedirection(); app.MapControllers(); app.Run(); The extension: public static class TenantSecurityMiddlewareExtension { public static IApplicationBuilder UseTenant(this IApplicationBuilder app) { app.UseMiddleware<TenantSecurityMiddleware>(); return app; } } and my controller using Microsoft.Extensions.Logging; namespace MTApp.Controllers { [ApiController] [Route("api/[controller]")] public class DobedoController : ControllerBase { private readonly ILogger<DobedoController> _logger; public DobedoController(ILogger<DobedoController> logger) { _logger = logger; _logger.LogInformation("DobedoController"); } [HttpGet("GetHW2")] public string GetHW2() { _logger.LogInformation("DobedoController:HW2"); return "Hello World2"; } } } A: What is app.UseTenant(); supposed to do? Because it works for me after: Creating a new .NET 7 web api using dotnet new webapi. Replacing the template controller with yours (missing using Microsoft.AspNetCore.Mvc; line at the top, but you probably have that in the global usings). Replacing the template Program.cs with yours. Commenting the line with app.UseTenant() (since it comes from MTApp.Infra).
NET 6: Controller method is not reachable
I have a simple NET 6 application. I added a controller and am trying to test it. When I run it, I see the method on the Swagger page; executing the method in Swagger returns 200, but it does not return "Hello World". Then I added logger output to the controller constructor and to the method - no outputs. What could be the reason for the problem? My Program file using MTApp.Infra; var builder = WebApplication.CreateBuilder(args); builder.Services.AddControllers(); builder.Services.AddMvc(); builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); builder.Services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>(); builder.Services.AddDistributedMemoryCache(); builder.Services.AddSession(options => { options.IdleTimeout = TimeSpan.FromSeconds(1800); options.Cookie.HttpOnly = true; options.Cookie.IsEssential = true; }); //builder.Services.AddSession();// ! var app = builder.Build(); app.UseSession(); //<--- add this line // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(c => { c.SwaggerEndpoint("/swagger/v1/swagger.json", "MTApp API V1"); }); } app.UseTenant(); app.UseRouting(); app.UseAuthorization(); app.UseEndpoints(endpoints => { endpoints.MapControllerRoute(name: "default", pattern: "{controller=Home}/{action=Index}/{id?}"); }); app.UseDeveloperExceptionPage(); app.UseHttpsRedirection(); app.MapControllers(); app.Run(); The extension: public static class TenantSecurityMiddlewareExtension { public static IApplicationBuilder UseTenant(this IApplicationBuilder app) { app.UseMiddleware<TenantSecurityMiddleware>(); return app; } } and my controller using Microsoft.Extensions.Logging; namespace MTApp.Controllers { [ApiController] [Route("api/[controller]")] public class DobedoController : ControllerBase { private readonly ILogger<DobedoController> _logger; public DobedoController(ILogger<DobedoController> logger) { _logger = logger; _logger.LogInformation("DobedoController"); } [HttpGet("GetHW2")] public string GetHW2() { _logger.LogInformation("DobedoController:HW2"); return "Hello World2"; } } }
[ "What is app.UseTenant(); supposed to do?\nBecause it works for me after:\n\nCreating a new .NET 7 web api using dotnet new webapi.\n\nReplacing the template controller with yours (missing using Microsoft.AspNetCore.Mvc; line at the top, but you probably have that in the global usings).\n\nReplacing the template Program.cs with yours.\n\nCommenting the line with app.UseTenant() (since it comes from MTApp.Infra).\n\n\n" ]
[ 1 ]
[]
[]
[ ".net", ".net_6.0", "asp.net_core_webapi" ]
stackoverflow_0074658278_.net_.net_6.0_asp.net_core_webapi.txt
Q: How can I replace in Excel multiple values in one cell with values in a column using a macro? I have zero knowledge of VBA and I'm trying to create a macro that will replace "/1", "/2", etc. in cell C17 with the values in the column AF. So far I have this, but it only replaces the first value (i.e. "/1") and stops there. Considering that I know close to nothing about VBA, I'm surprised that this even did anything. Any help will be much appreciated. Thanks!! Sub questiontext() Dim InputRng As Range, ReplaceRng As Range Set InputRng = Range("AJ3:AJ52") Set ReplaceRng = Range("AF3:AF52") Range("C17").Replace What:=InputRng, Replacement:=ReplaceRng, SearchOrder:=xlByRows, LookAt:=xlPart End Sub A: So you want to loop down a column and perform that line of code; you can use a loop to replace within a formula: Dim i As Long: For i = 3 To 52 Range("C17").Formula = Replace(Range("C17").Formula, Range("AJ" & i).Value, Range("AF" & i).Value) Next i Depending on your intent, you may want to save your base formula for C17 so you can reference and replace later, otherwise you'll keep modifying the same formula and possibly remove items, e.g., /1 and /17 each have "/1" and may have unintended consequences; looping in reverse would benefit this scenario, e.g.: Dim i As Long: For i = 52 To 3 Step -1 Range("C17").Formula = Replace(Range("C17").Formula, Range("AJ" & i).Value, Range("AF" & i).Value) Next i
How can I replace in Excel multiple values in one cell with values in a column using a macro?
I have zero knowledge of VBA and I'm trying to create a macro that will replace "/1", "/2", etc. in cell C17 with the values in the column AF. So far I have this, but it only replaces the first value (i.e. "/1") and stops there. Considering that I know close to nothing about VBA, I'm surprised that this even did anything. Any help will be much appreciated. Thanks!! Sub questiontext() Dim InputRng As Range, ReplaceRng As Range Set InputRng = Range("AJ3:AJ52") Set ReplaceRng = Range("AF3:AF52") Range("C17").Replace What:=InputRng, Replacement:=ReplaceRng, SearchOrder:=xlByRows, LookAt:=xlPart End Sub
[ "So you want to loop down a column and perform that line of code; you can use a loop to replace within a formula:\nDim i As Long: For i = 3 To 52\n Range(\"C17\").Formula = Replace(Range(\"C17\").Formula, Range(\"AJ\" & i).Value, Range(\"AF\" & i).Value)\nNext i\n\nDepending on your intent, you may want to save your base formula for C17 so you can reference and replace later, otherwise you'll keep modifying the same formula and possibly remove items, e.g., /1 and /17 each have \"/1\" and may have unintended consequences; looping in reverse would benefit this scenario, e.g.:\nDim i As Long: For i = 52 To 3 Step -1\n Range(\"C17\").Formula = Replace(Range(\"C17\").Formula, Range(\"AJ\" & i).Value, Range(\"AF\" & i).Value)\nNext i\n\n" ]
[ 0 ]
[]
[]
[ "excel", "vba" ]
stackoverflow_0074658359_excel_vba.txt
Q: Typescript Sudoku problem outputting incomplete boards I've managed to convert a js sudoku generator into a ts generator for some practice and the only problem I'm having is getting it to output only complete boards. Right now, it's outputting boards regardless if they're complete and I have to refresh until one board is correct. I'm not sure how to write the following function so that it only outputs full boards: function fillBoard(puzzleArray: number[][]): number[][] { if (nextEmptyCell(puzzleArray).colIndex === -1) return puzzleArray; let emptyCell = nextEmptyCell(puzzleArray); for (var num in shuffle(numArray)) { if (safeToPlace(puzzleArray, emptyCell, numArray[num])) { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num]; fillBoard(puzzleArray); } } return puzzleArray; } Here is all my code: import { Box } from "./Box"; export function Board() { let BLANK_BOARD = [ [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], ]; let NEW_BOARD = [ [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], ]; let counter: number = 0; let check: number[]; const numArray: number[] = [1, 2, 3, 4, 5, 6, 7, 8, 9]; function rowSafe( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { return puzzleArray[emptyCell.rowIndex].indexOf(num) == -1; } function colSafe( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { let test = puzzleArray.flat(); for (let i = emptyCell.colIndex; i < test.length; i += 9) { if (test[i] === num) { return false; } } return true; } function regionSafe( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { const rowStart: number = emptyCell.rowIndex - (emptyCell.rowIndex % 3); const colStart: number = emptyCell.colIndex - (emptyCell.colIndex % 3); for (let i = 0; i < 3; i++) { for (let j = 0; j < 3; j++) { if (puzzleArray[rowStart + i][colStart + j] === num) { return false; } } } return true; } console.log(rowSafe(BLANK_BOARD, { rowIndex: 4, colIndex: 6 }, 5)); console.log(colSafe(BLANK_BOARD, { rowIndex: 2, colIndex: 3 }, 4)); console.log(regionSafe(BLANK_BOARD, { rowIndex: 5, colIndex: 6 }, 5)); function safeToPlace( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { return ( regionSafe(puzzleArray, emptyCell, num) && rowSafe(puzzleArray, emptyCell, num) && colSafe(puzzleArray, emptyCell, num) ); } console.log(safeToPlace(BLANK_BOARD, { rowIndex: 5, colIndex: 6 }, 5)); function nextEmptyCell(puzzleArray: number[][]): { colIndex: number; rowIndex: number; } { let emptyCell = { rowIndex: -1, colIndex: -1 }; for (let i = 0; i < 9; i++) { for (let j = 0; j < 9; j++) { if (puzzleArray[i][j] === 0) { return { rowIndex: i, colIndex: j }; } } } return emptyCell; } function shuffle(array: number[]): number[] { // using Array sort and Math.random let shuffledArr = array.sort(() => 0.5 - Math.random()); return shuffledArr; } function fillBoard(puzzleArray: number[][]): number[][] { if (nextEmptyCell(puzzleArray).colIndex === -1) return puzzleArray; let emptyCell = 
nextEmptyCell(puzzleArray); for (var num in shuffle(numArray)) { if (safeToPlace(puzzleArray, emptyCell, numArray[num])) { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num]; fillBoard(puzzleArray); } else { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = 0; } } return puzzleArray; } console.log(nextEmptyCell(BLANK_BOARD)); NEW_BOARD = fillBoard(BLANK_BOARD); function fullBoard(puzzleArray: number[][]): boolean { return puzzleArray.every((row) => row.every((col) => col !== 0)); } return ( <div style={{ height: "450px", width: "450px", display: "inline-grid", gap: "10px", gridTemplateColumns: "repeat(9,50px)", gridTemplateRows: "repeat(9,50px)", position: "absolute", top: "30px", left: "0px", right: "0px", marginLeft: "auto", marginRight: "auto", }} > {NEW_BOARD.flat().map((item) => ( <Box i={item} /> ))} </div> ); } A: The function will return an incomplete board when it turns out there is no way to add a digit in a free cell that will be valid. To solve that, your function should: implement proper backtracking, i.e. taking back a move that didn't work out Have the function return a boolean (for success/failure) -- there is no need for it to return puzzleArray, since that array is mutated inplace, so the caller has access to the changes. This also means that NEW_BOARD = fillBoard(BLANK_BOARD); is having as side effect that NEW_BOARD and BLANK_BOARD reference the same board, and it is not blank any more (so the name is misleading). break/return out of the loop when there is success. Here is the adapted implementation: function fillBoard(puzzleArray: number[][]): boolean { if (nextEmptyCell(puzzleArray).colIndex === -1) return true; // boolean let emptyCell = nextEmptyCell(puzzleArray); for (var num in shuffle(numArray)) { if (safeToPlace(puzzleArray, emptyCell, numArray[num])) { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num]; if (fillBoard(puzzleArray)) return true; // exit upon success } } puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = 0; // undo return false; // boolean - no success } The caller should check the return value, but if you start with a blank board, it is guaranteed to get true as return value. So you could do: const puzzleArray = BLANK_BOARD.map(row => [...row]); // deep copy const success = fillBoard(puzzleArray); // returns a boolean // ... do something with puzzleArray ... A: To output only complete boards in your TypeScript Sudoku generator, you can add a check at the end of the fillBoard() function to determine if the board is complete. If the board is complete, you can return it. Otherwise, you can return an empty array to indicate that the board is incomplete. Here is an example of how you can modify the fillBoard() function to return only complete boards: function fillBoard(puzzleArray: number[][]): number[][] { if (nextEmptyCell(puzzleArray).colIndex === -1) { // Check if the board is complete if (isBoardComplete(puzzleArray)) { // Return the completed board return puzzleArray; } else { // Return an empty array if the board is not complete return []; } } let emptyCell = nextEmptyCell(puzzleArray); for (var num in shuffle(numArray)) { if (safeToPlace(puzzleArray, emptyCell, numArray[num])) { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num]; fillBoard(puzzleArray); } } return puzzleArray; } In this example, the fillBoard() function is modified to include a check for whether the board is complete using the isBoardComplete() function. If the board is complete, it is returned. 
Otherwise, an empty array is returned. You will need to implement the isBoardComplete() function to determine if a given Sudoku board is complete. This function should check that all cells in the board are filled with a valid number (1-9). If all cells are filled, the board is complete and the function should return true. Otherwise, it should return false.
Typescript Sudoku problem outputting incomplete boards
I've managed to convert a js sudoku generator into a ts generator for some practice and the only problem I'm having is getting it to output only complete boards. Right now, it's outputting boards regardless if they're complete and I have to refresh until one board is correct. I'm not sure how to write the following function so that it only outputs full boards: function fillBoard(puzzleArray: number[][]): number[][] { if (nextEmptyCell(puzzleArray).colIndex === -1) return puzzleArray; let emptyCell = nextEmptyCell(puzzleArray); for (var num in shuffle(numArray)) { if (safeToPlace(puzzleArray, emptyCell, numArray[num])) { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num]; fillBoard(puzzleArray); } } return puzzleArray; } Here is all my code: import { Box } from "./Box"; export function Board() { let BLANK_BOARD = [ [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], ]; let NEW_BOARD = [ [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], ]; let counter: number = 0; let check: number[]; const numArray: number[] = [1, 2, 3, 4, 5, 6, 7, 8, 9]; function rowSafe( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { return puzzleArray[emptyCell.rowIndex].indexOf(num) == -1; } function colSafe( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { let test = puzzleArray.flat(); for (let i = emptyCell.colIndex; i < test.length; i += 9) { if (test[i] === num) { return false; } } return true; } function regionSafe( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { const rowStart: number = emptyCell.rowIndex - (emptyCell.rowIndex % 3); const colStart: number = emptyCell.colIndex - (emptyCell.colIndex % 3); for (let i = 0; i < 3; i++) { for (let j = 0; j < 3; j++) { if (puzzleArray[rowStart + i][colStart + j] === num) { return false; } } } return true; } console.log(rowSafe(BLANK_BOARD, { rowIndex: 4, colIndex: 6 }, 5)); console.log(colSafe(BLANK_BOARD, { rowIndex: 2, colIndex: 3 }, 4)); console.log(regionSafe(BLANK_BOARD, { rowIndex: 5, colIndex: 6 }, 5)); function safeToPlace( puzzleArray: number[][], emptyCell: { rowIndex: number; colIndex: number }, num: number ): boolean { return ( regionSafe(puzzleArray, emptyCell, num) && rowSafe(puzzleArray, emptyCell, num) && colSafe(puzzleArray, emptyCell, num) ); } console.log(safeToPlace(BLANK_BOARD, { rowIndex: 5, colIndex: 6 }, 5)); function nextEmptyCell(puzzleArray: number[][]): { colIndex: number; rowIndex: number; } { let emptyCell = { rowIndex: -1, colIndex: -1 }; for (let i = 0; i < 9; i++) { for (let j = 0; j < 9; j++) { if (puzzleArray[i][j] === 0) { return { rowIndex: i, colIndex: j }; } } } return emptyCell; } function shuffle(array: number[]): number[] { // using Array sort and Math.random let shuffledArr = array.sort(() => 0.5 - Math.random()); return shuffledArr; } function fillBoard(puzzleArray: number[][]): number[][] { if (nextEmptyCell(puzzleArray).colIndex === -1) return puzzleArray; let emptyCell = nextEmptyCell(puzzleArray); for (var num in shuffle(numArray)) { if 
(safeToPlace(puzzleArray, emptyCell, numArray[num])) { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num]; fillBoard(puzzleArray); } else { puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = 0; } } return puzzleArray; } console.log(nextEmptyCell(BLANK_BOARD)); NEW_BOARD = fillBoard(BLANK_BOARD); function fullBoard(puzzleArray: number[][]): boolean { return puzzleArray.every((row) => row.every((col) => col !== 0)); } return ( <div style={{ height: "450px", width: "450px", display: "inline-grid", gap: "10px", gridTemplateColumns: "repeat(9,50px)", gridTemplateRows: "repeat(9,50px)", position: "absolute", top: "30px", left: "0px", right: "0px", marginLeft: "auto", marginRight: "auto", }} > {NEW_BOARD.flat().map((item) => ( <Box i={item} /> ))} </div> ); }
[ "The function will return an incomplete board when it turns out there is no way to add a digit in a free cell that will be valid.\nTo solve that, your function should:\n\nimplement proper backtracking, i.e. taking back a move that didn't work out\nHave the function return a boolean (for success/failure) -- there is no need for it to return puzzleArray, since that array is mutated inplace, so the caller has access to the changes.\n\nThis also means that NEW_BOARD = fillBoard(BLANK_BOARD); is having as side effect that NEW_BOARD and BLANK_BOARD reference the same board, and it is not blank any more (so the name is misleading).\n\n\nbreak/return out of the loop when there is success.\n\nHere is the adapted implementation:\nfunction fillBoard(puzzleArray: number[][]): boolean {\n if (nextEmptyCell(puzzleArray).colIndex === -1) return true; // boolean\n\n let emptyCell = nextEmptyCell(puzzleArray);\n\n for (var num in shuffle(numArray)) {\n if (safeToPlace(puzzleArray, emptyCell, numArray[num])) {\n puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num];\n if fillBoard(puzzleArray) return true; // exit upon success\n } \n }\n puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = 0; // undo\n return false; // boolean - no success\n}\n\nThe caller should check the return value, but if you start with a blank board, it is guaranteed to get true as return value. So you could do:\nconst puzzleArray = BLANK_BOARD.map(row => [...row]); // deep copy\nconst success = fillBoard(puzzleArray); // returns a boolean\n// ... do something with puzzleArray ...\n\n", "To output only complete boards in your TypeScript Sudoku generator, you can add a check at the end of the fillBoard() function to determine if the board is complete. If the board is complete, you can return it. Otherwise, you can return an empty array to indicate that the board is incomplete.\nHere is an example of how you can modify the fillBoard() function to return only complete boards:\nfunction fillBoard(puzzleArray: number[][]): number[][] {\n if (nextEmptyCell(puzzleArray).colIndex === -1) {\n // Check if the board is complete\n if (isBoardComplete(puzzleArray)) {\n // Return the completed board\n return puzzleArray;\n } else {\n // Return an empty array if the board is not complete\n return [];\n }\n }\n\n let emptyCell = nextEmptyCell(puzzleArray);\n\n for (var num in shuffle(numArray)) {\n if (safeToPlace(puzzleArray, emptyCell, numArray[num])) {\n puzzleArray[emptyCell.rowIndex][emptyCell.colIndex] = numArray[num];\n fillBoard(puzzleArray);\n } \n }\n\n return puzzleArray;\n}\n\nIn this example, the fillBoard() function is modified to include a check for whether the board is complete using the isBoardComplete() function. If the board is complete, it is returned. Otherwise, an empty array is returned.\nYou will need to implement the isBoardComplete() function to determine if a given Sudoku board is complete. This function should check that all cells in the board are filled with a valid number (1-9). If all cells are filled, the board is complete and the function should return true. Otherwise, it should return false.\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "reactjs", "typescript" ]
stackoverflow_0074659589_javascript_reactjs_typescript.txt
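A side note on the second answer in the record above: it leaves isBoardComplete() unimplemented. A minimal sketch, assuming the 9x9 number[][] board shape used in the question (essentially the fullBoard() helper the question already defines, tightened to require digits 1-9):

    function isBoardComplete(puzzleArray: number[][]): boolean {
      // complete means every cell holds a digit in the 1-9 range
      return puzzleArray.every((row) =>
        row.every((cell) => cell >= 1 && cell <= 9)
      );
    }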
Q: what is the connection string using alwaysdata with scala I am using Scala with Play and Slick. scalaVersion := "2.13.10" libraryDependencies += guice libraryDependencies += "org.scalatestplus.play" %% "scalatestplus-play" % "5.1.0" % Test libraryDependencies += "org.postgresql" % "postgresql" % "42.5.0" libraryDependencies += "com.typesafe.play" %% "play-slick" % "5.1.0" and in my application.conf: slick.dbs.default.profile = "slick.jdbc.PostgresProfile$" slick.dbs.default.db.driver = "org.postgresql.Driver" slick.dbs.default.db.url = "postgresql-stevapp.alwaysdata.net:5432" slick.dbs.default.db.user = "XXXX" slick.dbs.default.db.password="YYYYY" I got this error A: The URL should be a JDBC URL in the form jdbc:postgresql://postgresql-stevapp.alwaysdata.net:5432/mydb?someParam=someValue.
what is the connection string using alwaysdata with scala
I am using Scala with Play and Slick. scalaVersion := "2.13.10" libraryDependencies += guice libraryDependencies += "org.scalatestplus.play" %% "scalatestplus-play" % "5.1.0" % Test libraryDependencies += "org.postgresql" % "postgresql" % "42.5.0" libraryDependencies += "com.typesafe.play" %% "play-slick" % "5.1.0" and in my application.conf: slick.dbs.default.profile = "slick.jdbc.PostgresProfile$" slick.dbs.default.db.driver = "org.postgresql.Driver" slick.dbs.default.db.url = "postgresql-stevapp.alwaysdata.net:5432" slick.dbs.default.db.user = "XXXX" slick.dbs.default.db.password="YYYYY" I got this error
[ "The URL should be a JDBC URL in the form jdbc:postgres://postgresql-stevapp.alwaysdata.net:5432/mydb?someParam=someValue.\n" ]
[ 2 ]
[]
[]
[ "playframework", "scala", "slick" ]
stackoverflow_0074657504_playframework_scala_slick.txt
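Spelling the accepted answer out against the application.conf from the question: only the url key changes, and it must use the jdbc:postgresql scheme. The database name below is a placeholder, since alwaysdata assigns its own; substitute yours:

    slick.dbs.default.profile = "slick.jdbc.PostgresProfile$"
    slick.dbs.default.db.driver = "org.postgresql.Driver"
    slick.dbs.default.db.url = "jdbc:postgresql://postgresql-stevapp.alwaysdata.net:5432/your_database_name"
    slick.dbs.default.db.user = "XXXX"
    slick.dbs.default.db.password = "YYYYY"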
Q: React testing with Jest and React Testing Library not renderin text displayed on the webpage I have a NextJS app that I am using Jest and React Testing Library to test. I have a card component that is passed data (id, image url, text, and name) that is rendered on the card. This works correctly on the webpage. When I run the test, the test cannot find any text on the page. Here is the component: import React from "react"; import Image from "next/image"; import styles from "./testCard.module.css"; export default function TestCard(data) { const card = data.data; return ( <> <div className={styles.cardContainer}> <div className={styles.cardTop}> <div className={styles.cardImg}> <Image src={card.imgUrl} alt="" height={150} width={150} loading="lazy" className={styles.circular} /> </div> </div> <div className={styles.cardBottom}> <div className={styles.cardText}> <p>&quot;{card.text}&quot;</p> </div> <div className={styles.cardName}> <p>-{card.name}</p> </div> </div> </div> </> ); } Here is the test file: import React from "react"; import "@testing-library/jest-dom"; import { render, screen } from "@testing-library/react"; import TestCard from "./testCard"; import { testimonialMock } from "../../__mocks__/next/testimonialMock"; describe("TestCard Component", () => { it("renders the component", () => { render(<TestCard data={testimonialMock} />); }); it("renders the component unchanged", () => { const { containter } = render(<TestCard data={testimonialMock} />); expect(containter).toMatchSnapshot(); }); it("renders the passed in data", () => { render(<TestCard data={testimonialMock} />); screen.getByRole('p', {name: /test text/i}); }); }); And here is the testimonialMock.js file: export const testimonialMock = [ { id: 0, imgUrl: "/img/mock.png", text: "test text", name: "test name", }, ]; Here is the result I am getting: TestCard Component ✓ renders the component (12 ms) ✓ renders the component unchanged (5 ms) ✕ renders the passed in data (15 ms) ● TestCard Component › renders the passed in data TestingLibraryElementError: Unable to find an element with the text: test text. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible. Ignored nodes: comments, script, style <body> <div> <div class="cardContainer" > <div class="cardTop" > <div class="cardImg" /> </div> <div class="cardBottom" > <div class="cardText" > <p> " " </p> </div> <div class="cardName" > <p> - </p> </div> </div> </div> </div> </body> 17 | it("renders the passed in data", () => { 18 | render(<TestCard data={testimonialMock} />); > 19 | expect(screen.getByText("test text")).toBeInTheDocument(); | ^ 20 | }); 21 | }); 22 | at Object.getElementError (node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/config.js:40:19) at node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/query-helpers.js:90:38 at node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/query-helpers.js:62:17 at node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/query-helpers.js:111:19 at Object.getByText (components/testCard/testCard.test.js:19:19) Test Suites: 1 failed, 1 total Tests: 1 failed, 2 passed, 3 total Snapshots: 1 passed, 1 total Time: 0.725 s, estimated 1 s Ran all test suites matching /testCard.test.js/i. I have tried using different forms of passing in the data and different queries, all to no avail. 
A: In your test, you are trying to use screen.getByRole to get an element with the role "p" and the name "test text". However, it looks like the elements in your component do not have any roles assigned to them. Instead of using getByRole, you could try using getByText to search for the text that you are looking for in the component. Here is an example of how you could use getByText in your test: it("renders the passed in data", () => { render(<TestCard data={testimonialMock} />); screen.getByText(/test text/i); }); In this example, getByText will search for any element that contains the text "test text", ignoring case. If it finds an element that contains this text, the test will pass. If it does not find any elements that match, the test will fail. A: If you are still having trouble getting the text to appear in your test, there are a few things that you can try to troubleshoot the issue. First, make sure that the text you are searching for is actually present in the rendered component. You can do this by logging the result of screen.debug() to the console, which will show the HTML of the rendered component. Then, check to see if the text you are looking for is included in the HTML output. If the text is present in the rendered HTML, but the getByText assertion is still failing, you can try using a different selector to search for the element. For example, instead of using getByText, you could try using getByTestId to search for an element with a specific data-testid attribute. Here is an example of how you could use getByTestId in your test: it("renders the passed in data", () => { render(<TestCard data={testimonialMock} />); expect(screen.getByTestId("test-text")).toBeInTheDocument(); }); In this example, the getByTestId method will search for an element with the attribute data-testid="test-text". If it finds an element that matches this criteria, the test will pass. If it does not find any elements that match, the test will fail. You will need to add the data-testid attribute to the elements in your component in order for this approach to work. For example, you could add a data-testid attribute to the p element that contains the "test text" like this: <p data-testid="test-text">"{card.text}"</p> A: Use a single object as the mock (no data prop inside it), but fix the confusing use of data instead of props for the component arguments. Then use the getByText instead of the getByRole so export default function TestCard(props) { const card = props.data; .... } then in your mock file export const testimonialMock = { id: 0, imgUrl: "/img/mock.png", text: "test text", name: "test name", }; and finally in the test screen.getByText(/test text/i);
React testing with Jest and React Testing Library not rendering text displayed on the webpage
I have a NextJS app that I am using Jest and React Testing Library to test. I have a card component that is passed data (id, image url, text, and name) that is rendered on the card. This works correctly on the webpage. When I run the test, the test cannot find any text on the page. Here is the component: import React from "react"; import Image from "next/image"; import styles from "./testCard.module.css"; export default function TestCard(data) { const card = data.data; return ( <> <div className={styles.cardContainer}> <div className={styles.cardTop}> <div className={styles.cardImg}> <Image src={card.imgUrl} alt="" height={150} width={150} loading="lazy" className={styles.circular} /> </div> </div> <div className={styles.cardBottom}> <div className={styles.cardText}> <p>&quot;{card.text}&quot;</p> </div> <div className={styles.cardName}> <p>-{card.name}</p> </div> </div> </div> </> ); } Here is the test file: import React from "react"; import "@testing-library/jest-dom"; import { render, screen } from "@testing-library/react"; import TestCard from "./testCard"; import { testimonialMock } from "../../__mocks__/next/testimonialMock"; describe("TestCard Component", () => { it("renders the component", () => { render(<TestCard data={testimonialMock} />); }); it("renders the component unchanged", () => { const { containter } = render(<TestCard data={testimonialMock} />); expect(containter).toMatchSnapshot(); }); it("renders the passed in data", () => { render(<TestCard data={testimonialMock} />); screen.getByRole('p', {name: /test text/i}); }); }); And here is the testimonialMock.js file: export const testimonialMock = [ { id: 0, imgUrl: "/img/mock.png", text: "test text", name: "test name", }, ]; Here is the result I am getting: TestCard Component ✓ renders the component (12 ms) ✓ renders the component unchanged (5 ms) ✕ renders the passed in data (15 ms) ● TestCard Component › renders the passed in data TestingLibraryElementError: Unable to find an element with the text: test text. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible. Ignored nodes: comments, script, style <body> <div> <div class="cardContainer" > <div class="cardTop" > <div class="cardImg" /> </div> <div class="cardBottom" > <div class="cardText" > <p> " " </p> </div> <div class="cardName" > <p> - </p> </div> </div> </div> </div> </body> 17 | it("renders the passed in data", () => { 18 | render(<TestCard data={testimonialMock} />); > 19 | expect(screen.getByText("test text")).toBeInTheDocument(); | ^ 20 | }); 21 | }); 22 | at Object.getElementError (node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/config.js:40:19) at node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/query-helpers.js:90:38 at node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/query-helpers.js:62:17 at node_modules/.pnpm/@[email protected]/node_modules/@testing-library/dom/dist/query-helpers.js:111:19 at Object.getByText (components/testCard/testCard.test.js:19:19) Test Suites: 1 failed, 1 total Tests: 1 failed, 2 passed, 3 total Snapshots: 1 passed, 1 total Time: 0.725 s, estimated 1 s Ran all test suites matching /testCard.test.js/i. I have tried using different forms of passing in the data and different queries, all to no avail.
[ "In your test, you are trying to use screen.getByRole to get an element with the role \"p\" and the name \"test text\". However, it looks like the elements in your component do not have any roles assigned to them. Instead of using getByRole, you could try using getByText to search for the text that you are looking for in the component.\nHere is an example of how you could use getByText in your test:\nit(\"renders the passed in data\", () => {\n render(<TestCard data={testimonialMock} />);\n screen.getByText(/test text/i);\n});\n\nIn this example, getByText will search for any element that contains the text \"test text\", ignoring case. If it finds an element that contains this text, the test will pass. If it does not find any elements that match, the test will fail.\n", "If you are still having trouble getting the text to appear in your test, there are a few things that you can try to troubleshoot the issue.\nFirst, make sure that the text you are searching for is actually present in the rendered component. You can do this by logging the result of screen.debug() to the console, which will show the HTML of the rendered component. Then, check to see if the text you are looking for is included in the HTML output.\nIf the text is present in the rendered HTML, but the getByText assertion is still failing, you can try using a different selector to search for the element. For example, instead of using getByText, you could try using getByTestId to search for an element with a specific data-testid attribute.\nHere is an example of how you could use getByTestId in your test:\nit(\"renders the passed in data\", () => {\n render(<TestCard data={testimonialMock} />);\n expect(screen.getByTestId(\"test-text\")).toBeInTheDocument();\n});\n\nIn this example, the getByTestId method will search for an element with the attribute data-testid=\"test-text\". If it finds an element that matches this criteria, the test will pass. If it does not find any elements that match, the test will fail.\nYou will need to add the data-testid attribute to the elements in your component in order for this approach to work. For example, you could add a data-testid attribute to the p element that contains the \"test text\" like this:\n<p data-testid=\"test-text\">\"{card.text}\"</p>\n\n", "Use a single object as the mock (no data prop inside it), but fix the confusing use of data instead of props for the component arguments.\nThen use the getByText instead of the getByRole\nso\nexport default function TestCard(props) {\n const card = props.data;\n ....\n}\n\nthen in your mock file\nexport const testimonialMock = {\n id: 0,\n imgUrl: \"/img/mock.png\",\n text: \"test text\",\n name: \"test name\",\n };\n\nand finally in the test\n screen.getByText(/test text/i);\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "jestjs", "next.js", "react_testing_library", "reactjs" ]
stackoverflow_0074659463_jestjs_next.js_react_testing_library_reactjs.txt
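Combining the suggestions from the answers above (a single-object mock plus a getByText assertion), the failing test should end up looking roughly like this; the regex matcher is used because the rendered text is wrapped in quote characters, and the sketch assumes TestCard keeps reading its prop via data.data as in the question:

    // __mocks__/next/testimonialMock.js: a single object, not an array
    export const testimonialMock = {
      id: 0,
      imgUrl: "/img/mock.png",
      text: "test text",
      name: "test name",
    };

    // testCard.test.js
    it("renders the passed in data", () => {
      render(<TestCard data={testimonialMock} />);
      expect(screen.getByText(/test text/i)).toBeInTheDocument();
    });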
Q: Terraform ACM doesn't validate Route 53 DNS I've created a Route 53 DNS using Terraform and assigned a certificate with ACM.. although when trying to verify the code is stuck in a loop aws_acm_certificate_validation.verify: Still creating... [27m31s elapsed] main.tf # ACM Certificate resource "aws_acm_certificate" "ssl" { domain_name = "modules.cclab.cloud-castles.com" validation_method = "DNS" tags = { Environment = "test" } lifecycle { create_before_destroy = true } } # Route53 Zone resource "aws_route53_zone" "selected" { name = "modules.cclab.cloud-castles.com" } data "aws_route53_zone" "selected" { private_zone = false vpc_id = aws_vpc.main.id zone_id = aws_route53_zone.selected.zone_id } # Route53 Record resource "aws_route53_record" "www" { for_each = { for dvo in aws_acm_certificate.ssl.domain_validation_options : dvo.domain_name => { name = dvo.resource_record_name record = dvo.resource_record_value type = dvo.resource_record_type } } allow_overwrite = true name = each.value.name records = [each.value.record] ttl = 60 type = each.value.type zone_id = data.aws_route53_zone.selected.zone_id } # ACM Validation resource "aws_acm_certificate_validation" "verify" { certificate_arn = aws_acm_certificate.ssl.arn validation_record_fqdns = [for record in aws_route53_record.www : record.fqdn] } # ALB Listener resource "aws_lb_listener" "alb-listener" { load_balancer_arn = aws_lb.alb.arn port = "80" protocol = "HTTP" ssl_policy = "ELBSecurityPolicy-2016-08" certificate_arn = aws_acm_certificate_validation.verify.certificate_arn default_action { type = "forward" target_group_arn = aws_lb_target_group.alb-target.arn } } I verified there's a hosted zone using the aws cli aws route53 get-hosted-zone --id Z03171471QBEVDH2KPJ6W { "HostedZone": { "Id": "/hostedzone/Z03171471QBEVDH2KPJ6W", "Name": "modules.cclab.cloud-castles.com.", "CallerReference": "terraform-20221202175826093600000001", "Config": { "Comment": "Managed by Terraform", "PrivateZone": false }, }, "DelegationSet": { "NameServers": [ "ns-566.awsdns-06.net", "ns-1336.awsdns-39.org", "ns-212.awsdns-26.com", "ns-1559.awsdns-02.co.uk" ] } } aws route53 list-resource-record-sets --hosted-zone-id Z03171471QBEVDH2KPJ6W aws route53 list-resource-record-sets --hosted-zone-id Z03171471QBEVDH2KPJ6W { "ResourceRecordSets": [ { "Name": "modules.cclab.cloud-castles.com.", "Type": "NS", "TTL": 172800, "ResourceRecords": [ { "Value": "ns-566.awsdns-06.net." }, { "Value": "ns-1336.awsdns-39.org." }, { "Value": "ns-212.awsdns-26.com." }, { "Value": "ns-1559.awsdns-02.co.uk." } ] }, { "Name": "modules.cclab.cloud-castles.com.", Following this documentation this documentation with a slight change, It didnt work when I tried to provide only an aws_route53_zone of data type so I added a resource and pointed the data to it. I literally tried everything in the extent of my knowledge and in need of help. I stumbled upon other posts on stackoverflow with the same problem but none had a proper anwser. A: If the DNS validation process is stuck in a loop, it may be because the required DNS records were not created correctly, or they are not being propagated correctly. To troubleshoot this issue, you can try the following steps: Verify that the DNS records for your domain are correctly configured in Route 53. You can find more information about the required DNS records in the AWS documentation. Check the status of the DNS records in Route 53. 
Run the aws route53 get-hosted-zone and aws route53 list-resource-record-sets commands to check the status of your hosted zone and the DNS records for your domain. Make sure that the records are in the "INSYNC" status, which indicates that the records are correctly propagated and available to the ACM certificate validation process. Try running the terraform apply command again
Terraform ACM doesn't validate Route 53 DNS
I've created a Route 53 DNS using Terraform and assigned a certificate with ACM.. although when trying to verify the code is stuck in a loop aws_acm_certificate_validation.verify: Still creating... [27m31s elapsed] main.tf # ACM Certificate resource "aws_acm_certificate" "ssl" { domain_name = "modules.cclab.cloud-castles.com" validation_method = "DNS" tags = { Environment = "test" } lifecycle { create_before_destroy = true } } # Route53 Zone resource "aws_route53_zone" "selected" { name = "modules.cclab.cloud-castles.com" } data "aws_route53_zone" "selected" { private_zone = false vpc_id = aws_vpc.main.id zone_id = aws_route53_zone.selected.zone_id } # Route53 Record resource "aws_route53_record" "www" { for_each = { for dvo in aws_acm_certificate.ssl.domain_validation_options : dvo.domain_name => { name = dvo.resource_record_name record = dvo.resource_record_value type = dvo.resource_record_type } } allow_overwrite = true name = each.value.name records = [each.value.record] ttl = 60 type = each.value.type zone_id = data.aws_route53_zone.selected.zone_id } # ACM Validation resource "aws_acm_certificate_validation" "verify" { certificate_arn = aws_acm_certificate.ssl.arn validation_record_fqdns = [for record in aws_route53_record.www : record.fqdn] } # ALB Listener resource "aws_lb_listener" "alb-listener" { load_balancer_arn = aws_lb.alb.arn port = "80" protocol = "HTTP" ssl_policy = "ELBSecurityPolicy-2016-08" certificate_arn = aws_acm_certificate_validation.verify.certificate_arn default_action { type = "forward" target_group_arn = aws_lb_target_group.alb-target.arn } } I verified there's a hosted zone using the aws cli aws route53 get-hosted-zone --id Z03171471QBEVDH2KPJ6W { "HostedZone": { "Id": "/hostedzone/Z03171471QBEVDH2KPJ6W", "Name": "modules.cclab.cloud-castles.com.", "CallerReference": "terraform-20221202175826093600000001", "Config": { "Comment": "Managed by Terraform", "PrivateZone": false }, }, "DelegationSet": { "NameServers": [ "ns-566.awsdns-06.net", "ns-1336.awsdns-39.org", "ns-212.awsdns-26.com", "ns-1559.awsdns-02.co.uk" ] } } aws route53 list-resource-record-sets --hosted-zone-id Z03171471QBEVDH2KPJ6W aws route53 list-resource-record-sets --hosted-zone-id Z03171471QBEVDH2KPJ6W { "ResourceRecordSets": [ { "Name": "modules.cclab.cloud-castles.com.", "Type": "NS", "TTL": 172800, "ResourceRecords": [ { "Value": "ns-566.awsdns-06.net." }, { "Value": "ns-1336.awsdns-39.org." }, { "Value": "ns-212.awsdns-26.com." }, { "Value": "ns-1559.awsdns-02.co.uk." } ] }, { "Name": "modules.cclab.cloud-castles.com.", Following this documentation this documentation with a slight change, It didnt work when I tried to provide only an aws_route53_zone of data type so I added a resource and pointed the data to it. I literally tried everything in the extent of my knowledge and in need of help. I stumbled upon other posts on stackoverflow with the same problem but none had a proper anwser.
[ "If the DNS validation process is stuck in a loop, it may be because the required DNS records were not created correctly, or they are not being propagated correctly. To troubleshoot this issue, you can try the following steps:\n\nVerify that the DNS records for your domain are correctly configured in Route 53. You can find more information about the required DNS records in the AWS documentation.\nCheck the status of the DNS records in Route 53. Run the aws route53 get-hosted-zone and aws route53 list-resource-record-sets commands to check the status of your hosted zone and the DNS records for your domain. Make sure that the records are in the \"INSYNC\" status, which indicates that the records are correctly propagated and available to the ACM certificate validation process.\nTry running the terraform apply command again\n\n" ]
[ 1 ]
[]
[]
[ "terraform" ]
stackoverflow_0074659853_terraform.txt
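One cause the answer above does not cover, and a common one when Terraform has just created the hosted zone itself: the new zone's four NS records must also be delegated from the parent zone (here cclab.cloud-castles.com), otherwise the validation CNAME is never publicly resolvable and aws_acm_certificate_validation simply waits until it times out. A quick check from any machine; the validation record name is a placeholder, so copy the real one from the certificate's domain_validation_options:

    # should return the four awsdns name servers shown in the question
    dig +short NS modules.cclab.cloud-castles.com

    # empty output here suggests the delegation (or the record) is missing
    dig +short CNAME _example-hash.modules.cclab.cloud-castles.com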
Q: handle multiple radiobuttons from same json and treat them separately I fetch from my db multiple modifiergroups, for example, "have gluten", "size", and inside this modifiergroups you can choose an option, for example from "size" => 15oz, 20oz, 30oz. Example of what I want to do. Here is what I got. I can't treat size and gluten differently so the user can choose. My JSON fetched from db { "id": "cl9t45hry002xvpr91nppx7rx", "image": "https://madre-cafe.com/wp-content/uploads/2021/11/logo-madre-cafe-header.svg", "name": "Especiales Item #1", "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit.", "price": "291", "available": true, "menuCategoryId": "cl9t45hqu000tvpr9fjw67ype", "modifierGroups": [ { "id": "cl9x4v8xr0005vpuax56i2z7r", "name": "Size", "minSelectionAllowed": null, "maxSelectionAllowed": null, "isMandatory": true, "menuItemId": "cl9t45hry002xvpr91nppx7rx", "orderItemId": null, "modifiers": [ { "id": "cl9x4wm7l0007vpualwl5zyac", "type": "15 oz", "mandatorySelected": true, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9x4v8xr0005vpuax56i2z7r" }, { "id": "cl9x4uual0003vpuax7rtny7b", "type": "20 oz", "mandatorySelected": true, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9x4v8xr0005vpuax56i2z7r" } ] }, { "id": "cl9xm0x2b000hvpuaxzyc7oht", "name": "Gluten?", "minSelectionAllowed": null, "maxSelectionAllowed": null, "isMandatory": false, "menuItemId": "cl9t45hry002xvpr91nppx7rx", "orderItemId": null, "modifiers": [ { "id": "cl9xm1l27000jvpuajisqm3hl", "type": "Without gluten", "mandatorySelected": null, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9xm0x2b000hvpuaxzyc7oht" }, { "id": "cl9xm1u8z000lvpuazch4ui1s", "type": "With Gluten", "mandatorySelected": null, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9xm0x2b000hvpuaxzyc7oht" } ] } ] } And here is my code {modifierGroups.map((mGroup, index) => { let { modifiers } = mGroup; return ( <div key={mGroup.id} className="bg-red-200"> <label> {mGroup.name}</label> <input type="radio" name="modifier" value={modifiers.map((a) => a.type)} /> {modifiers.map((m, i) => { return ( <div key={m.key} className="bg-green-200"> <input type="radio" id={m.id} name="modifier" onChange={(e) => setRadioButton({ ...radioButton, items: e.target.value })} value={modifiers} /> <button className="ring-1 ring-black h-3 w-3 active:bg-blue-200 focus:bg-slate-600" /> <label htmlFor={m.id}>{m.type}</label> </div> ); })} A: You can use check-boxes for "Size" and "Gluten". And radiobuttons for the nested options. A: Above issue is occurring because same name is used for all the input radio boxes (modifier). In Html you can select only one value for the same name radio input. To resolve this issue assign different name to different category of radio input. 
I have attached stackblitz link Updated your code below {modifierGroups.map((mGroup, index) => { let { name, modifiers } = mGroup; return ( <div key={mGroup.id} className="bg-red-200"> <label> {name}</label> {modifiers.map((m, i) => { return ( <div key={m.key} className="bg-green-200"> <input type="radio" id={m.id} name={`modifier_${name}`} // this line provides unique name for different categories onChange={(e) => console.log(e)} value={modifiers} /> <button className="ring-1 ring-black h-3 w-3 active:bg-blue-200 focus:bg-slate-600" /> <label htmlFor={m.id}>{m.type}</label> </div> ); })} </div> ); })} Output image
handle multiple radiobuttons from same json and treat them separately
I fetch from my db multiple modifiergroups, for example, "have gluten", "size", and inside this modifiergroups you can choose an option, for example from "size" => 15oz, 20oz, 30oz. Example of what I want to do. Here is what I got. I can't treat size and gluten differently so the user can choose. My JSON fetched from db { "id": "cl9t45hry002xvpr91nppx7rx", "image": "https://madre-cafe.com/wp-content/uploads/2021/11/logo-madre-cafe-header.svg", "name": "Especiales Item #1", "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit.", "price": "291", "available": true, "menuCategoryId": "cl9t45hqu000tvpr9fjw67ype", "modifierGroups": [ { "id": "cl9x4v8xr0005vpuax56i2z7r", "name": "Size", "minSelectionAllowed": null, "maxSelectionAllowed": null, "isMandatory": true, "menuItemId": "cl9t45hry002xvpr91nppx7rx", "orderItemId": null, "modifiers": [ { "id": "cl9x4wm7l0007vpualwl5zyac", "type": "15 oz", "mandatorySelected": true, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9x4v8xr0005vpuax56i2z7r" }, { "id": "cl9x4uual0003vpuax7rtny7b", "type": "20 oz", "mandatorySelected": true, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9x4v8xr0005vpuax56i2z7r" } ] }, { "id": "cl9xm0x2b000hvpuaxzyc7oht", "name": "Gluten?", "minSelectionAllowed": null, "maxSelectionAllowed": null, "isMandatory": false, "menuItemId": "cl9t45hry002xvpr91nppx7rx", "orderItemId": null, "modifiers": [ { "id": "cl9xm1l27000jvpuajisqm3hl", "type": "Without gluten", "mandatorySelected": null, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9xm0x2b000hvpuaxzyc7oht" }, { "id": "cl9xm1u8z000lvpuazch4ui1s", "type": "With Gluten", "mandatorySelected": null, "onlyOne": null, "multiSelect": null, "mandatoryOneMultiSelect": null, "modifierGroupId": "cl9xm0x2b000hvpuaxzyc7oht" } ] } ] } And here is my code {modifierGroups.map((mGroup, index) => { let { modifiers } = mGroup; return ( <div key={mGroup.id} className="bg-red-200"> <label> {mGroup.name}</label> <input type="radio" name="modifier" value={modifiers.map((a) => a.type)} /> {modifiers.map((m, i) => { return ( <div key={m.key} className="bg-green-200"> <input type="radio" id={m.id} name="modifier" onChange={(e) => setRadioButton({ ...radioButton, items: e.target.value })} value={modifiers} /> <button className="ring-1 ring-black h-3 w-3 active:bg-blue-200 focus:bg-slate-600" /> <label htmlFor={m.id}>{m.type}</label> </div> ); })}
[ "You can use check-boxes for \"Size\" and \"Gluten\". And radiobuttons for the nested options.\n", "Above issue is occurring because same name is used for all the input radio boxes (modifier). In Html you can select only one value for the same name radio input. To resolve this issue assign different name to different category of radio input.\nI have attached stackblitz link\nUpdated your code below\n{modifierGroups.map((mGroup, index) => {\n let { name, modifiers } = mGroup;\n\n return (\n <div key={mGroup.id} className=\"bg-red-200\">\n <label> {name}</label>\n {modifiers.map((m, i) => {\n return (\n <div key={m.key} className=\"bg-green-200\">\n <input\n type=\"radio\"\n id={m.id}\n name={`modifier_${name}`} // this line provides unique name for different categories\n onChange={(e) => console.log(e)}\n value={modifiers}\n />\n <button className=\"ring-1 ring-black h-3 w-3 active:bg-blue-200 focus:bg-slate-600\" />\n <label htmlFor={m.id}>{m.type}</label>\n </div>\n );\n })}\n </div>\n );\n })}\n\n\nOutput image\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "javascript", "json", "reactjs" ]
stackoverflow_0074625027_arrays_javascript_json_reactjs.txt
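To actually capture the user's choice per modifier group (the second answer above only logs the change event), one hedged extension is to key a piece of state by group name; selections is a made-up name here, and useState comes from React:

    import { useState } from "react";

    const [selections, setSelections] = useState({});

    // inside modifiers.map(...), replacing the onChange from the answer
    <input
      type="radio"
      id={m.id}
      name={`modifier_${name}`}
      value={m.id}
      checked={selections[name] === m.id}
      onChange={() => setSelections({ ...selections, [name]: m.id })}
    />

selections then maps each group name ("Size", "Gluten?") to the id of the chosen modifier.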
Q: How to group pandas series by values and return dict of list of indices for those values, without explicitly transforming the series first? I have a pandas series that looks like this: import numpy as np import string import pandas as pd np.random.seed(0) data = np.random.randint(1,6,10) index = list(string.ascii_lowercase)[:10] a = pd.Series(data=data,index=index,name='apple') a >>> a 5 b 1 c 4 d 4 e 4 f 2 g 4 h 3 i 5 j 1 Name: apple, dtype: int32 I want to group the series by its values and return a dict of of list of indices for those values i.e. this result: {1: ['b', 'j'], 2: ['f'], 3: ['h'], 4: ['c', 'd', 'e', 'g'], 5: ['a', 'i']} Here is how I achieve that at the moment: b = a.reset_index().set_index('apple').squeeze() grouped = b.groupby(level=0).apply(list).to_dict() grouped >>> {1: ['b', 'j'], 2: ['f'], 3: ['h'], 4: ['c', 'd', 'e', 'g'], 5: ['a', 'i']} However, it does not feel particularly pythonic to explicitly transform the series first so that I can get to the result. Is there a way to do this directly by applying a single function (ideally) or combination of functions in one line to achieve the same result? Thanks! A: You can use the groupby function and apply a lambda expression to it in order to get the desired result in one line: grouped = a.groupby(a.values).apply(lambda x: list(x.index)).to_dict() Alternatively, you could use the following: grouped = dict(a.groupby(a.values).apply(lambda x: x.index.get_level_values(0))) grouped = dict(a.groupby(a.values).apply(lambda x: x.index.tolist()))
How to group pandas series by values and return dict of list of indices for those values, without explicitly transforming the series first?
I have a pandas series that looks like this: import numpy as np import string import pandas as pd np.random.seed(0) data = np.random.randint(1,6,10) index = list(string.ascii_lowercase)[:10] a = pd.Series(data=data,index=index,name='apple') a >>> a 5 b 1 c 4 d 4 e 4 f 2 g 4 h 3 i 5 j 1 Name: apple, dtype: int32 I want to group the series by its values and return a dict of of list of indices for those values i.e. this result: {1: ['b', 'j'], 2: ['f'], 3: ['h'], 4: ['c', 'd', 'e', 'g'], 5: ['a', 'i']} Here is how I achieve that at the moment: b = a.reset_index().set_index('apple').squeeze() grouped = b.groupby(level=0).apply(list).to_dict() grouped >>> {1: ['b', 'j'], 2: ['f'], 3: ['h'], 4: ['c', 'd', 'e', 'g'], 5: ['a', 'i']} However, it does not feel particularly pythonic to explicitly transform the series first so that I can get to the result. Is there a way to do this directly by applying a single function (ideally) or combination of functions in one line to achieve the same result? Thanks!
[ "You can use the groupby function and apply a lambda expression to it in order to get the desired result in one line:\ngrouped = a.groupby(a.values).apply(lambda x: list(x.index)).to_dict()\n\nAlternatively, you could use the following:\ngrouped = dict(a.groupby(a.values).apply(lambda x: x.index.get_level_values(0)))\ngrouped = dict(a.groupby(a.values).apply(lambda x: x.index.tolist()))\n\n" ]
[ 1 ]
[]
[]
[ "group_by", "python_3.x", "series" ]
stackoverflow_0074659667_group_by_python_3.x_series.txt
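pandas also exposes this mapping directly: the .groups attribute of a GroupBy object already maps each value to the Index of matching labels, so the whole transformation reduces to a dict comprehension with no apply at all:

    grouped = {k: list(v) for k, v in a.groupby(a).groups.items()}
    # {1: ['b', 'j'], 2: ['f'], 3: ['h'], 4: ['c', 'd', 'e', 'g'], 5: ['a', 'i']}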
Q: java.lang.ArithmeticException: / by zero in an unordered map implementation how do I fix a java.lang.ArithmeticException: / by zero error inside of this method I have tried almost every solution possible and I keep on getting the java.lang.ArithmeticException: / by zero How would I fix this error? boolean put(KEY key, T val) { if (entrysize / tablesize >= 2) { rehash(); } if (!containsKey(key)) { int index = key.hashCode(); index = Math.floorMod(index, tablesize); //this is what i have tried so far //if (tablesize == 0) { //throw new ArithmaticException(); //} Entry newEntry = new Entry(key, val, table.get(index)); table.set(index, newEntry); entrysize++; return true; } else { return false; } } I have tried almost every solution possible and I keep on getting the java.lang.ArithmeticException: / by zero How would I fix this error? I have tried setting the entrysize and table size to zero and returning null but it still calls the arithmatic exception A: Preferred approach package rollbar; public class DivisionByZero { public static void main(String... args) { int a = 50, b = 0; if (b != 0) { int c = divideAndSquare(a, b); System.out.println(c); } else { System.out.println("undefined (division by zero)"); } } static int divideAndSquare(int x, int y) { int z = x / y; return z * z; } } Output: undefined (division by zero) Alternative approach: using try-catch package rollbar; public class DivisionByZero { public static void main(String... args) { int a = 50, b = 0; try { int c = divideAndSquare(a, b); System.out.println(c); } catch (ArithmeticException e) { System.out.println(e.getMessage()); } } static int divideAndSquare(int x, int y) { int z = x / y; return z * z; } } Output: / by zero
java.lang.ArithmeticException: / by zero in an unordered map implementation
how do I fix a java.lang.ArithmeticException: / by zero error inside of this method I have tried almost every solution possible and I keep on getting the java.lang.ArithmeticException: / by zero How would I fix this error? boolean put(KEY key, T val) { if (entrysize / tablesize >= 2) { rehash(); } if (!containsKey(key)) { int index = key.hashCode(); index = Math.floorMod(index, tablesize); //this is what i have tried so far //if (tablesize == 0) { //throw new ArithmaticException(); //} Entry newEntry = new Entry(key, val, table.get(index)); table.set(index, newEntry); entrysize++; return true; } else { return false; } } I have tried almost every solution possible and I keep on getting the java.lang.ArithmeticException: / by zero How would I fix this error? I have tried setting the entrysize and table size to zero and returning null but it still calls the arithmatic exception
[ "Preferred approach\npackage rollbar;\n\npublic class DivisionByZero {\n\n public static void main(String... args) {\n int a = 50, b = 0;\n if (b != 0) {\n int c = divideAndSquare(a, b);\n System.out.println(c);\n } else {\n System.out.println(\"undefined (division by zero)\");\n }\n }\n\n static int divideAndSquare(int x, int y) {\n int z = x / y;\n return z * z;\n }\n}\n\nOutput:\nundefined (division by zero)\n\nAlternative approach: using try-catch\npackage rollbar;\n\npublic class DivisionByZero {\n\n public static void main(String... args) {\n int a = 50, b = 0;\n try {\n int c = divideAndSquare(a, b);\n System.out.println(c);\n } catch (ArithmeticException e) {\n System.out.println(e.getMessage());\n }\n }\n\n static int divideAndSquare(int x, int y) {\n int z = x / y;\n return z * z;\n }\n}\n\nOutput:\n/ by zero\n\n" ]
[ 1 ]
[]
[]
[ "java" ]
stackoverflow_0074659855_java.txt
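The answer above shows the generic division-by-zero pattern, but the division that actually throws in the posted put() is entrysize / tablesize, which fails whenever the table was constructed with size 0. A minimal guard, assuming rehash() grows the table to a non-zero size (that method is not shown in the question):

    boolean put(KEY key, T val) {
        // rehash first when the table is empty, so the load-factor division is safe
        if (tablesize == 0 || entrysize / tablesize >= 2) {
            rehash();
        }
        // ... rest of the method unchanged ...
    }

Alternatively, initialize tablesize to a positive default in the constructor so the division can never see zero.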
Q: nuget doesn't recognize installed packages I have a C# project on Git that uses libraries from NuGet. Everything works fine, but when I pull on a fresh new machine and open the solution in Visual Studio it won't compile because of broken references. If I click on the references under the project I can see the classic warning sign with the yellow exclamation mark. Nuget restore won't do anything (and I still haven't found any use of this feature...), files repositories.config are fine. If I right click on solution and then 'Manage NuGet packages for solution' no installed package is shown. To this day I solved it this way: run Install-Package package_name it says: 'package_name' already installed. My_project already has a reference to 'package_name'. ...and after that it shows the packages on the Manager, already assigned to the correct project. NOTHING HAS BEEN CHANGED in the code ANYWHERE, I can see that because there are no differences on Git. I have to do it only once on new machines, but it's really annoying. Any idea? NuGet version: 2.8.60318.667 UPDATE 27/07 I tried the procedure from scratch on another PC, with windows 10, and everything works... same version of Visual Studio, NuGet, etc... I'm baffled A: This is probably because of the incorrect path of the .dll in your .csproj. The package restore downloads the packages to the local directory. It doesn't change the reference path of the assembly in the .csproj, meaning that the project will still try to locate dlls on the local directory. The yellow mark means the project is unable to locate the assembly. Unload the project, right click on project and select "Edit .csproj", and verify the path of missing dlls. For example - If you have NUnit, <Reference Include="nunit.framework"> <HintPath>..\packages\NUnit.3.6.1\lib\net45\nunit.framework.dll</HintPath> </Reference> verify if the dll is present inside "\packages\NUnit.3.6.1\lib\net45" directory. A: From the top of my head I can think of a few reasons the packages are not being downloaded, ideally you would have to share a few more details. First the install-package command won't work. Your packages are already installed VS is just unable to download them, so it makes sense that you are getting that error. First and foremost is this a public package hosted in nuget.org like System.MVC.Web? Because if this is in a new machine, using a private nuget server, you need to configure that source in Tools > Options > Nuget Package Manager > Package Sources. (See https://learn.microsoft.com/en-us/nuget/tools/package-manager-ui for more details) Check if you have added the folders to your git repo but at the same time set the exclusion for its contents. To check that when you do a clean checkout see if the folders exist but are empty. If thats the case just remove the folders, the git ignore should do its job from now on, and everyone new clone will do the proper check. If the two above which are the most likely ones to be the reason do not work. Try and restore the packages from the Package Manager Console and update your post with the details. 
You can open the Package Manager Console and type: Update-Package -reinstall or Update-Package -reinstall -Project YourProjectName FYI there's comprehensive document from Microsoft - https://learn.microsoft.com/en-us/nuget/consume-packages/package-restore - on the multiple ways of restoring nuget packages A: try removing your package from below nuget cache folder so that NUGET is forced to download from source C:\Users\<< your user name >>\.nuget\packages A: I experienced this issue today, and upgrading to the latest version of VS 2017 (15.8.7) didn't help at all. Check your packages.config file(s) to see if your packages tag looks like this: <packages xmlns="urn:packages"> If it does, remove the xmlns attribute so it's just: <packages> That fixed it for me! A: I have solved this problem. Follow this steps In Visual Studio, click Tools > Extension and Updates. Navigate to Online, search for "NuGet Package Manager for Visual Studio" and click Update. (If there is no button Update, navigate to Updates > Visual Studio Gallery, find the "NuGet Package Manager for Visual Studio" and click Update.) Then restart Visual Studio. A: Even after you've installed it at the Solution level, depending on your default you may have to pick which projects you want it to be available in. That was my problem.
nuget doesn't recognize installed packages
I have a C# project on Git that uses libraries from NuGet. Everything works fine, but when I pull on a fresh new machine and open the solution in Visual Studio it won't compile because of broken references. If I click on the references under the project I can see the classic warning sign with the yellow exclamation mark. Nuget restore won't do anything (and I still haven't found any use of this feature...), files repositories.config are fine. If I right click on solution and then 'Manage NuGet packages for solution' no installed package is shown. To this day I solved it this way: run Install-Package package_name it says: 'package_name' already installed. My_project already has a reference to 'package_name'. ...and after that it shows the packages on the Manager, already assigned to the correct project. NOTHING HAS BEEN CHANGED in the code ANYWHERE, I can see that because there are no differences on Git. I have to do it only once on new machines, but it's really annoying. Any idea? NuGet version: 2.8.60318.667 UPDATE 27/07 I tried the procedure from scratch on another PC, with windows 10, and everything works... same version of Visual Studio, NuGet, etc... I'm baffled
[ "This is probably because of the incorrect path of the .dll in your .csproj. The package restore downloads the packages to the local directory. It doesn't change the reference path of the assembly in the .csproj, meaning that the project will still try to locate dlls on the local directory. The yellow mark means the project is unable to locate the assembly.\nUnload the project, right click on project and select \"Edit .csproj\", and verify the path of missing dlls. For example - If you have NUnit,\n<Reference Include=\"nunit.framework\">\n <HintPath>..\\packages\\NUnit.3.6.1\\lib\\net45\\nunit.framework.dll</HintPath>\n</Reference>\n\nverify if the dll is present inside \"\\packages\\NUnit.3.6.1\\lib\\net45\" directory.\n", "From the top of my head I can think of a few reasons the packages are not being downloaded, ideally you would have to share a few more details.\nFirst the install-package command won't work. Your packages are already installed VS is just unable to download them, so it makes sense that you are getting that error. \n\nFirst and foremost is this a public package hosted in nuget.org like\nSystem.MVC.Web? Because if this is in a new machine, using a private nuget server, you need to\nconfigure that source in Tools > Options > Nuget Package Manager >\nPackage Sources. (See https://learn.microsoft.com/en-us/nuget/tools/package-manager-ui for more details)\nCheck if you have added the folders to your git repo but at the same\ntime set the exclusion for its contents. To check that when you do a\nclean checkout see if the folders exist but are empty. If thats the\ncase just remove the folders, the git ignore should do its job from\nnow on, and everyone new clone will do the proper check.\nIf the two above which are the most likely ones to be the reason do\nnot work. Try and restore the packages from the Package Manager\nConsole and update your post with the details.\n\nYou can open the Package Manager Console and type:\nUpdate-Package -reinstall\n\nor \nUpdate-Package -reinstall -Project YourProjectName\n\nFYI there's comprehensive document from Microsoft - https://learn.microsoft.com/en-us/nuget/consume-packages/package-restore - on the multiple ways of restoring nuget packages\n", "try removing your package from below nuget cache folder so that NUGET is forced to download from source\nC:\\Users\\<< your user name >>\\.nuget\\packages\n", "I experienced this issue today, and upgrading to the latest version of VS 2017 (15.8.7) didn't help at all.\nCheck your packages.config file(s) to see if your packages tag looks like this:\n<packages xmlns=\"urn:packages\">\n\nIf it does, remove the xmlns attribute so it's just:\n<packages>\n\nThat fixed it for me!\n", "I have solved this problem. Follow this steps\n\nIn Visual Studio, click Tools > Extension and Updates.\nNavigate to Online, search for \"NuGet Package Manager for Visual\nStudio\" and click Update.\n(If there is no button Update, navigate to Updates > Visual\nStudio Gallery, find the \"NuGet Package Manager for Visual\nStudio\" and click Update.)\nThen restart Visual Studio.\n\n", "Even after you've installed it at the Solution level, depending on your default you may have to pick which projects you want it to be available in. That was my problem.\n\n" ]
[ 8, 7, 2, 2, 1, 0 ]
[]
[]
[ ".net", "c#", "nuget", "visual_studio", "visual_studio_2012" ]
stackoverflow_0045104616_.net_c#_nuget_visual_studio_visual_studio_2012.txt
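The manual cache deletion suggested in one of the answers has a CLI equivalent. It needs a newer nuget.exe than the 2.8 version mentioned in the question, and the solution filename below is a placeholder:

    :: clear every local NuGet cache, including %USERPROFILE%\.nuget\packages
    nuget locals all -clear

    :: re-download the packages referenced by the solution
    nuget restore MySolution.sln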
Q: Bizarre case of one change making canvas event not listen I started re-writing Windows! The code below draws 10 canvas objects you can move around. I made them random sizes. When you click on one it has thicker outline and can be dragged around. When I added one line to change the initial top position of the canvases the move stopped working. Its very odd, I only wanted to stack the windows in increasing 10px steps rather then them all starting at the same spot. Why could this minor change stop the rest working? var n = 0, canv = [], ct = [] var body = document.getElementsByTagName("body")[0]; var targ, wind = 10, i, offx, offy = 0 for (n = 0; n < wind; n++) { canv[n] = document.createElement('canvas'); canv[n].id = "C" + n; canv[n].width = rnd(30 * n); canv[n].height = rnd(30 * n); //canv[n].style.top=10*n <- This line stops the drag effect working canv[n].style.zIndex = n; canv[n].style.position = "absolute"; canv[n].style.border = "2px solid"; body.appendChild(canv[n]); targ=canv[0] // added now to stop errors before first click ct[n] = canv[n].getContext("2d"); ct[n].fillText(n, canv[n].width / 2, canv[n].height / 2) canv[n].addEventListener('mousedown', function(e) { targ = e.currentTarget if (targ.style.border == "2px solid") { for (i = 0; i < wind; i++) { canv[i].style.border = "2px solid" } targ.style.border = "5px solid"; offy = e.y - targ.style.top } else { targ.style.border = "2px solid" } }); canv[n].addEventListener('mousemove', function(e) { if (targ.style.border == "5px solid") { move(e.x, e.y) } }); } function move(x, y) { targ.style.top = y - offy } function rnd(m) { return Math.floor(Math.random() * m) } A: You are using a global variable for when it is active. That is not set until a click happens so you get tons of errors. When you are setting a position, it needs some sort of units. Basic idea with the code cleaned up a bit. const canv = []; const ct = []; const body = document.getElementsByTagName("body")[0]; let targ, wind = 10, i, offx, offy = 0; for (let n = 0; n < wind; n++) { const canvas = document.createElement('canvas'); canv[n] = canvas; canvas.id = "C" + n; const width = rnd(30 * n); const height = rnd(30 * n); canvas.style.width = width + "px"; canvas.style.height = height + "px"; canvas.style.top = (10 * n) + "px"; canvas.style.left = (10 * n) + "px"; canvas.style.zIndex = n; body.appendChild(canvas); const ctx = canvas.getContext("2d"); ct[n] = ctx; ctx.fillText(n, width / 2, height / 2) canvas.addEventListener('mousedown', function(e) { document.querySelectorAll("canvas.active").forEach(elem => { elem.classList.remove("active"); }); targ = e.currentTarget; targ.classList.add("active"); }); canvas.addEventListener('mousemove', function(e) { if (targ && targ.classList.contains('active')) { move(e.x, e.y) } }); } function move(x, y) { targ.style.top = y - offy + "px" } function rnd(m) { return Math.floor(Math.random() * m) } canvas { position: absolute; min-width: 30px; min-height: 30px; border: 2px solid black; z-index: 1; } canvas.active { border: 5px solid black; }
Bizarre case of one change making canvas event not listen
I started re-writing Windows! The code below draws 10 canvas objects you can move around. I made them random sizes. When you click on one it has thicker outline and can be dragged around. When I added one line to change the initial top position of the canvases the move stopped working. Its very odd, I only wanted to stack the windows in increasing 10px steps rather then them all starting at the same spot. Why could this minor change stop the rest working? var n = 0, canv = [], ct = [] var body = document.getElementsByTagName("body")[0]; var targ, wind = 10, i, offx, offy = 0 for (n = 0; n < wind; n++) { canv[n] = document.createElement('canvas'); canv[n].id = "C" + n; canv[n].width = rnd(30 * n); canv[n].height = rnd(30 * n); //canv[n].style.top=10*n <- This line stops the drag effect working canv[n].style.zIndex = n; canv[n].style.position = "absolute"; canv[n].style.border = "2px solid"; body.appendChild(canv[n]); targ=canv[0] // added now to stop errors before first click ct[n] = canv[n].getContext("2d"); ct[n].fillText(n, canv[n].width / 2, canv[n].height / 2) canv[n].addEventListener('mousedown', function(e) { targ = e.currentTarget if (targ.style.border == "2px solid") { for (i = 0; i < wind; i++) { canv[i].style.border = "2px solid" } targ.style.border = "5px solid"; offy = e.y - targ.style.top } else { targ.style.border = "2px solid" } }); canv[n].addEventListener('mousemove', function(e) { if (targ.style.border == "5px solid") { move(e.x, e.y) } }); } function move(x, y) { targ.style.top = y - offy } function rnd(m) { return Math.floor(Math.random() * m) }
[ "You are using a global variable for when it is active. That is not set until a click happens so you get tons of errors.\nWhen you are setting a position, it needs some sort of units.\nBasic idea with the code cleaned up a bit.\n\n\nconst canv = [];\nconst ct = [];\n\nconst body = document.getElementsByTagName(\"body\")[0];\n\nlet targ, wind = 10,\n i, offx, offy = 0;\n\nfor (let n = 0; n < wind; n++) {\n const canvas = document.createElement('canvas');\n canv[n] = canvas;\n \n canvas.id = \"C\" + n;\n\n const width = rnd(30 * n);\n const height = rnd(30 * n);\n \n canvas.style.width = width + \"px\";\n canvas.style.height = height + \"px\";\n canvas.style.top = (10 * n) + \"px\";\n canvas.style.left = (10 * n) + \"px\";\n canvas.style.zIndex = n;\n \n body.appendChild(canvas);\n\n const ctx = canvas.getContext(\"2d\");\n ct[n] = ctx;\n ctx.fillText(n, width / 2, height / 2)\n\n canvas.addEventListener('mousedown', function(e) {\n document.querySelectorAll(\"canvas.active\").forEach(elem => {\n elem.classList.remove(\"active\");\n });\n targ = e.currentTarget;\n targ.classList.add(\"active\");\n });\n\n canvas.addEventListener('mousemove', function(e) {\n if (targ && targ.classList.contains('active')) {\n move(e.x, e.y)\n }\n });\n}\n\nfunction move(x, y) {\n targ.style.top = y - offy + \"px\"\n}\n\nfunction rnd(m) {\n return Math.floor(Math.random() * m)\n}\ncanvas {\n position: absolute;\n min-width: 30px;\n min-height: 30px;\n border: 2px solid black;\n z-index: 1;\n}\n\ncanvas.active {\n border: 5px solid black;\n}\n\n\n\n" ]
[ 1 ]
[]
[]
[ "css", "html", "html5_canvas", "javascript" ]
stackoverflow_0074659627_css_html_html5_canvas_javascript.txt
Q: Django migration error :you cannot alter to or from M2M fields, or add or remove through= on M2M fields I'm trying to modify a M2M field to a ForeignKey field. The command validate shows me no issues and when I run syncdb : ValueError: Cannot alter field xxx into yyy they are not compatible types (you cannot alter to or from M2M fields, or add or remove through= on M2M fields) So I can't make the migration. class InstituteStaff(Person): user = models.OneToOneField(User, blank=True, null=True) investigation_area = models.ManyToManyField(InvestigationArea, blank=True,) investigation_group = models.ManyToManyField(InvestigationGroup, blank=True) council_group = models.ForeignKey(CouncilGroup, null=True, blank=True) #profiles = models.ManyToManyField(Profiles, null = True, blank = True) profiles = models.ForeignKey(Profiles, null = True, blank = True) Any suggestions? A: I stumbled upon this and although I didn't care about my data much, I still didn't want to delete the whole DB. So I opened the migration file and changed the AlterField() command to a RemoveField() and an AddField() command that worked well. I lost my data on the specific field, but nothing else. I.e. migrations.AlterField( model_name='player', name='teams', field=models.ManyToManyField(related_name='players', through='players.TeamPlayer', to='players.Team'), ), to migrations.RemoveField( model_name='player', name='teams', ), migrations.AddField( model_name='player', name='teams', field=models.ManyToManyField(related_name='players', through='players.TeamPlayer', to='players.Team'), ), A: NO DATA LOSS EXAMPLE I would say: If machine cannot do something for us, then let's help it! Because the problem that OP put here can have multiple mutations, I will try to explain how to struggle with that kind of problem in a simple way. Let's assume we have a model (in the app called users) like this: from django.db import models class Person(models.Model): name = models.CharField(max_length=128) def __str__(self): return self.name class Group(models.Model): name = models.CharField(max_length=128) members = models.ManyToManyField(Person) def __str__(self): return self.name but after some while we need to add a date of a member join. So we want this: class Group(models.Model): name = models.CharField(max_length=128) members = models.ManyToManyField(Person, through='Membership') # <-- through model def __str__(self): return self.name # and through Model itself class Membership(models.Model): person = models.ForeignKey(Person, on_delete=models.CASCADE) group = models.ForeignKey(Group, on_delete=models.CASCADE) date_joined = models.DateField() Now, normally you will hit the same problem as OP wrote. To solve it, follow these steps: start from this point: from django.db import models class Person(models.Model): name = models.CharField(max_length=128) def __str__(self): return self.name class Group(models.Model): name = models.CharField(max_length=128) members = models.ManyToManyField(Person) def __str__(self): return self.name create through model and run python manage.py makemigrations (but don't put through property in the Group.members field yet): from django.db import models class Person(models.Model): name = models.CharField(max_length=128) def __str__(self): return self.name class Group(models.Model): name = models.CharField(max_length=128) members = models.ManyToManyField(Person) # <-- no through property yet! 
def __str__(self): return self.name class Membership(models.Model): # <--- through model person = models.ForeignKey(Person, on_delete=models.CASCADE) group = models.ForeignKey(Group, on_delete=models.CASCADE) date_joined = models.DateField() create an empty migration using python manage.py makemigrations users --empty command and create conversion script in python (more about the python migrations here) which creates new relations (Membership) for an old field (Group.members). It could look like this: # Generated by Django A.B on YYYY-MM-DD HH:MM import datetime from django.db import migrations def create_through_relations(apps, schema_editor): Group = apps.get_model('users', 'Group') Membership = apps.get_model('users', 'Membership') for group in Group.objects.all(): for member in group.members.all(): Membership( person=member, group=group, date_joined=datetime.date.today() ).save() class Migration(migrations.Migration): dependencies = [ ('myapp', '0005_create_models'), ] operations = [ migrations.RunPython(create_through_relations, reverse_code=migrations.RunPython.noop), ] remove members field in the Group model and run python manage.py makemigrations, so our Group will look like this: class Group(models.Model): name = models.CharField(max_length=128) add members field the the Group model, but now with through property and run python manage.py makemigrations: class Group(models.Model): name = models.CharField(max_length=128) members = models.ManyToManyField(Person, through='Membership') and that's it! Now you need to change creation of members in a new way in your code - by through model. More about here. You can also optionally tidy it up, by squashing these migrations. A: Potential workarounds: Create a new field with the ForeignKey relationship called profiles1 and DO NOT modify profiles. Make and run the migration. You might need a related_name parameter to prevent conflicts. Do a subsequent migration that drops the original field. Then do another migration that renames profiles1 back to profiles. Obviously, you won't have data in the new ForeignKey field. Write a custom migration: https://docs.djangoproject.com/en/1.7/ref/migration-operations/ You might want to use makemigration and migration rather than syncdb. Does your InstituteStaff have data that you want to retain? A: If you're still developing the application, and don't need to preserve your existing data, you can get around this issue by doing the following: Delete and re-create the db. go to your project/app/migrations folder Delete everything in that folder with the exception of the init.py file. Make sure you also delete the pycache dir. Run syncdb, makemigrations, and migrate. A: Another approach that worked for me: Delete the existing M2M field and run migrations. Add the FK field and run migrations again. FK field added in this case has no relation to the previously used M2M field and hence should not create any problems. A: This link helps you resolve all problems related to this The one which worked for me is python3 backend/manage.py migrate --fake "app_name" A: I literally had the same error for days and i had tried everything i saw here but still didn'y work. This is what worked for me: I deleted all the files in migrations folder exceps init.py I also deleted my database in my case it was the preinstalled db.sqlite3 After this, i wrote my models from the scratch, although i didn't change anything but i did write it again. Apply migrations then on the models and this time it worked and no errors. 
A: First delete the migrations in your app (the folders/ files under 'migrations' folder) Showing the 'migrations' folder Then delete the 'db.sqlite3' file Showing the 'db.sqlite3' file And run python manage.py makemigrations name_of_app Finally run python manage.py migrate A: I had the same problem and found this How to Migrate a ‘through’ to a many to many relation in Django article which is really really helped me to solve this problem. Please have a look. I'll summarize his answer here, There is three model and one(CollectionProduct) is going to connect as many-to-many relationship. This is the final output, class Product(models.Model): pass class Collection(models.Model): products = models.ManyToManyField( Product, blank=True, related_name="collections", through="CollectionProduct", through_fields=["collection", "product"], ) class CollectionProduct(models.Model): collection = models.ForeignKey(Collection, on_delete=models.CASCADE) product = models.ForeignKey(Product, on_delete=models.CASCADE) class Meta: db_table = "product_collection_products" and here is the solution, The solution Take your app label (the package name, e.g. ‘product’) and your M2M field name, and combine them together with and underscore: APPLABEL + _ + M2M TABLE NAME + _ + M2M FIELD NAME For example in our case, it’s this: product_collection_products This is your M2M’s through database table name. Now you need to edit your M2M’s through model to this: Also found another solution in In Django you cannot add or remove through= on M2M fields article which is going to edit migration files. I didn't try this, but have a look if you don't have any other solution. A: This worked for Me as well Delete last migrations run command python manage.py migrate --fake <application name> run command 'python manage.py makemigrations ' run command 'python manage.py migrate' Hope this will solve your problem with deleting database/migrations A: this happens when adding 'through' attribute to an existing M2M field: as M2M fields are by default handled by model they are defined in (if through is set). although when through is set to new model the M2M field is handled by that new model, hence the error in alter solutions:- you can reset db or remove those m2m fields and run migration as explained above then create them again A: *IF YOU ARE IN THE INITIAL STAGES OF DEVELOPMENT AND CAN AFFORD TO LOOSE DATA :) delete all the migration files except init.py then apply the migrations. python manage.py makemigrations python manage.py migrate this will create new tables.
Django migration error :you cannot alter to or from M2M fields, or add or remove through= on M2M fields
I'm trying to modify a M2M field to a ForeignKey field. The command validate shows me no issues and when I run syncdb : ValueError: Cannot alter field xxx into yyy they are not compatible types (you cannot alter to or from M2M fields, or add or remove through= on M2M fields) So I can't make the migration. class InstituteStaff(Person): user = models.OneToOneField(User, blank=True, null=True) investigation_area = models.ManyToManyField(InvestigationArea, blank=True,) investigation_group = models.ManyToManyField(InvestigationGroup, blank=True) council_group = models.ForeignKey(CouncilGroup, null=True, blank=True) #profiles = models.ManyToManyField(Profiles, null = True, blank = True) profiles = models.ForeignKey(Profiles, null = True, blank = True) Any suggestions?
[ "I stumbled upon this and although I didn't care about my data much, I still didn't want to delete the whole DB. So I opened the migration file and changed the AlterField() command to a RemoveField() and an AddField() command that worked well. I lost my data on the specific field, but nothing else.\nI.e.\nmigrations.AlterField(\n model_name='player',\n name='teams',\n field=models.ManyToManyField(related_name='players', through='players.TeamPlayer', to='players.Team'),\n),\n\nto\nmigrations.RemoveField(\n model_name='player',\n name='teams',\n),\nmigrations.AddField(\n model_name='player',\n name='teams',\n field=models.ManyToManyField(related_name='players', through='players.TeamPlayer', to='players.Team'),\n),\n\n", "NO DATA LOSS EXAMPLE\n\nI would say: If machine cannot do something for us, then let's help it!\nBecause the problem that OP put here can have multiple mutations, I will try to explain how to struggle with that kind of problem in a simple way. \nLet's assume we have a model (in the app called users) like this:\nfrom django.db import models\n\n\nclass Person(models.Model):\n name = models.CharField(max_length=128)\n\n def __str__(self):\n return self.name\n\nclass Group(models.Model):\n name = models.CharField(max_length=128)\n members = models.ManyToManyField(Person)\n\n def __str__(self):\n return self.name\n\nbut after some while we need to add a date of a member join. So we want this:\nclass Group(models.Model):\n name = models.CharField(max_length=128)\n members = models.ManyToManyField(Person, through='Membership') # <-- through model\n\n def __str__(self):\n return self.name\n\n# and through Model itself\nclass Membership(models.Model):\n person = models.ForeignKey(Person, on_delete=models.CASCADE)\n group = models.ForeignKey(Group, on_delete=models.CASCADE)\n date_joined = models.DateField()\n\nNow, normally you will hit the same problem as OP wrote. To solve it, follow these steps:\n\nstart from this point:\nfrom django.db import models\n\n\nclass Person(models.Model):\n name = models.CharField(max_length=128)\n\n def __str__(self):\n return self.name\n\nclass Group(models.Model):\n name = models.CharField(max_length=128)\n members = models.ManyToManyField(Person)\n\n def __str__(self):\n return self.name\n\ncreate through model and run python manage.py makemigrations (but don't put through property in the Group.members field yet):\nfrom django.db import models\n\n\nclass Person(models.Model):\n name = models.CharField(max_length=128)\n\n def __str__(self):\n return self.name\n\nclass Group(models.Model):\n name = models.CharField(max_length=128)\n members = models.ManyToManyField(Person) # <-- no through property yet!\n\n def __str__(self):\n return self.name\n\nclass Membership(models.Model): # <--- through model\n person = models.ForeignKey(Person, on_delete=models.CASCADE)\n group = models.ForeignKey(Group, on_delete=models.CASCADE)\n date_joined = models.DateField()\n\ncreate an empty migration using python manage.py makemigrations users --empty command and create conversion script in python (more about the python migrations here) which creates new relations (Membership) for an old field (Group.members). 
It could look like this:\n# Generated by Django A.B on YYYY-MM-DD HH:MM\nimport datetime\n\nfrom django.db import migrations\n\n\ndef create_through_relations(apps, schema_editor):\n Group = apps.get_model('users', 'Group')\n Membership = apps.get_model('users', 'Membership')\n for group in Group.objects.all():\n for member in group.members.all():\n Membership(\n person=member,\n group=group,\n date_joined=datetime.date.today()\n ).save()\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('myapp', '0005_create_models'),\n ]\n\n operations = [\n migrations.RunPython(create_through_relations, reverse_code=migrations.RunPython.noop),\n ]\n\nremove members field in the Group model and run python manage.py makemigrations, so our Group will look like this:\nclass Group(models.Model):\n name = models.CharField(max_length=128)\n\nadd members field the the Group model, but now with through property and run python manage.py makemigrations: \nclass Group(models.Model):\n name = models.CharField(max_length=128)\n members = models.ManyToManyField(Person, through='Membership')\n\n\nand that's it! \nNow you need to change creation of members in a new way in your code - by through model. More about here.\nYou can also optionally tidy it up, by squashing these migrations.\n", "Potential workarounds:\n\nCreate a new field with the ForeignKey relationship called profiles1 and DO NOT modify profiles. Make and run the migration. You might need a related_name parameter to prevent conflicts. Do a subsequent migration that drops the original field. Then do another migration that renames profiles1 back to profiles. Obviously, you won't have data in the new ForeignKey field.\nWrite a custom migration: https://docs.djangoproject.com/en/1.7/ref/migration-operations/\n\nYou might want to use makemigration and migration rather than syncdb. \nDoes your InstituteStaff have data that you want to retain?\n", "If you're still developing the application, and don't need to preserve your existing data, you can get around this issue by doing the following:\n\nDelete and re-create the db.\ngo to your project/app/migrations folder\nDelete everything in that folder with the exception of the init.py file. 
Make sure you also delete the pycache dir.\nRun syncdb, makemigrations, and migrate.\n\n", "Another approach that worked for me:\n\nDelete the existing M2M field and run migrations.\nAdd the FK field and run migrations again.\n\nFK field added in this case has no relation to the previously used M2M field and hence should not create any problems.\n", "This link helps you resolve all problems related to this \nThe one which worked for me is python3 backend/manage.py migrate --fake \"app_name\"\n", "I literally had the same error for days and i had tried everything i saw here but still didn'y work.\nThis is what worked for me:\n\nI deleted all the files in migrations folder exceps init.py\nI also deleted my database in my case it was the preinstalled db.sqlite3\nAfter this, i wrote my models from the scratch, although i didn't change anything but i did write it again.\nApply migrations then on the models and this time it worked and no errors.\n\n", "\nFirst delete the migrations in your app (the folders/ files under 'migrations'\nfolder)\nShowing the 'migrations' folder\n\nThen delete the 'db.sqlite3' file\nShowing the 'db.sqlite3' file\n\nAnd run python manage.py makemigrations name_of_app\n\nFinally run python manage.py migrate\n\n\n", "I had the same problem and found this How to Migrate a ‘through’ to a many to many relation in Django article which is really really helped me to solve this problem. Please have a look. I'll summarize his answer here,\nThere is three model and one(CollectionProduct) is going to connect as many-to-many relationship.\n\nThis is the final output,\nclass Product(models.Model):\n pass\n\n\nclass Collection(models.Model):\n products = models.ManyToManyField(\n Product,\n blank=True,\n related_name=\"collections\",\n through=\"CollectionProduct\",\n through_fields=[\"collection\", \"product\"],\n )\n\n\nclass CollectionProduct(models.Model):\n collection = models.ForeignKey(Collection, on_delete=models.CASCADE)\n product = models.ForeignKey(Product, on_delete=models.CASCADE)\n\n class Meta:\n db_table = \"product_collection_products\"\n\nand here is the solution,\n\nThe solution\nTake your app label (the package name, e.g. ‘product’) and your M2M field name, and combine them together with and underscore:\nAPPLABEL + _ + M2M TABLE NAME + _ + M2M FIELD NAME\nFor example in our case, it’s this:\nproduct_collection_products\nThis is your M2M’s through database table name. Now you need to edit your M2M’s through model to this:\n\n\nAlso found another solution in In Django you cannot add or remove through= on M2M fields article which is going to edit migration files. 
I didn't try this, but have a look if you don't have any other solution.\n", "This worked for Me as well\n\nDelete last migrations\nrun command python manage.py migrate --fake <application name>\nrun command 'python manage.py makemigrations '\nrun command 'python manage.py migrate'\n\nHope this will solve your problem with deleting database/migrations\n", "this happens when adding 'through' attribute to an existing M2M field:\nas M2M fields are by default handled by model they are defined in (if through is set).\nalthough when through is set to new model the M2M field is handled by that new model, hence the error in alter\nsolutions:-\nyou can reset db or\nremove those m2m fields and run migration as explained above then create them again\n", "*IF YOU ARE IN THE INITIAL STAGES OF DEVELOPMENT AND CAN AFFORD TO LOOSE DATA :)\ndelete all the migration files except init.py\nthen apply the migrations.\npython manage.py makemigrations\npython manage.py migrate\n\nthis will create new tables.\n" ]
[ 118, 80, 24, 21, 4, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "django", "foreign_keys", "manytomanyfield", "migration" ]
stackoverflow_0026927705_django_foreign_keys_manytomanyfield_migration.txt
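For the question's actual change (turning the profiles ManyToManyField into a ForeignKey) without losing data, the same three-step pattern from the "NO DATA LOSS" answer applies: add the FK alongside the M2M, copy data in a RunPython migration, then drop the M2M. A hedged sketch (the app label, migration name, field names, and the choice to keep only the first related profile are all assumptions, not from the question):

from django.db import migrations

def copy_first_profile(apps, schema_editor):
    InstituteStaff = apps.get_model('staff', 'InstituteStaff')
    for member in InstituteStaff.objects.all():
        first = member.profiles_m2m.first()  # old M2M field, temporarily renamed
        if first is not None:
            member.profiles = first          # new ForeignKey field
            member.save(update_fields=['profiles'])

class Migration(migrations.Migration):

    dependencies = [
        ('staff', '0002_add_profiles_fk'),
    ]

    operations = [
        migrations.RunPython(copy_first_profile, reverse_code=migrations.RunPython.noop),
    ]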
Q: .Net core custom middleware - How to come out with Forbidden error and goto Error controller I have custom authentication and authorization handlers, but there is still a custom middleware to check a few other things in another scenario. Here is some code for the exception handler
 app.UseExceptionHandler("/Error/{0}");
 app.UseHsts();
 }
 app.UseStatusCodePagesWithReExecute("/Error/{0}");

The custom middleware code is below. This is test code. I want to come out of the middleware on some conditions. The code below does not work (it won't go to the error controller). When I use response.Redirect(), it works, but then it goes into infinite redirects. I've thought of return Forbid(), return StatusCodeResult(403), but the return type is Task.
public async Task Invoke(HttpContext context)
{
 context.Response.StatusCode = 403;
 await _next(context);
 return;

A: I tried the following and it worked. I think the complete call will stop it from executing the rest of the middleware and go back to the exception handler registered at the top. When I tried to set the StatusCode after the await _next(context) call, it interestingly had the response from the page that was requested appended by the response from the error handler.
context.Response.StatusCode = 403;
await context.Response.CompleteAsync();

//await _next(context);
return;
.Net core custom middleware - How to come out with Forbidden error and goto Error controller
I have custom authentication and authorization handlers, but there is still a custom middleware to check a few other things in another scenario. Here is some code for the exception handler
 app.UseExceptionHandler("/Error/{0}");
 app.UseHsts();
 }
 app.UseStatusCodePagesWithReExecute("/Error/{0}");

The custom middleware code is below. This is test code. I want to come out of the middleware on some conditions. The code below does not work (it won't go to the error controller). When I use response.Redirect(), it works, but then it goes into infinite redirects. I've thought of return Forbid(), return StatusCodeResult(403), but the return type is Task.
public async Task Invoke(HttpContext context)
{
 context.Response.StatusCode = 403;
 await _next(context);
 return;
[ "I tried the following and it worked. I think the complete call will stop it from executing the rest of the middleware and go back to the exceptionhandler registered at the top. When I tried to set the StatusCode after the await _next(context) call, it interestingly had the response from the page that was requested appended by the response form the error handler.\ncontext.Response.StatusCode = 403;\nawait context.Response.CompleteAsync();\n\n//await _next(context);\nreturn;\n\n" ]
[ 0 ]
[]
[]
[ ".net_core", "asp.net_core", "asp.net_mvc", "c#" ]
stackoverflow_0074656143_.net_core_asp.net_core_asp.net_mvc_c#.txt
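For reference, a minimal sketch of the pattern the accepted answer converges on: when the check fails, set the status code and return without calling _next, so UseStatusCodePagesWithReExecute can re-execute the pipeline to /Error/403. The header check below is a placeholder condition, not from the question:

public class ForbiddenCheckMiddleware
{
    private readonly RequestDelegate _next;

    public ForbiddenCheckMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        if (!context.Request.Headers.ContainsKey("X-Allowed")) // placeholder condition
        {
            // Short-circuit: no body is written and _next is not called,
            // so the status-code pages middleware can take over.
            context.Response.StatusCode = StatusCodes.Status403Forbidden;
            return;
        }

        await _next(context);
    }
}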
Q: Android - call "finish()" does not trigger onDestroy I have 3 activities. And I want to do something like this: A -> on button press -> B -> on button press -> (destroy A, B) create C Everything works, but I noticed some "strange" behaviour in the logs of Android Studio. When I press the button on activity A this is done: I/System.out: The A Activity is onStart. I/System.out: The A Activity is onResume. I/System.out: The A Activity is onPause. I/System.out: The B Activity is onResume. I/System.out: The A Activity is stopped. Then I press the button again on activity B: I/System.out: The B Activity is paused. I/System.out: The B Activity is stopped. and I enter in activity C. But on the console is not written that activity A and B are destroyed, even if I call the method finish() in activity B. Just, when from the activity C I press the back button this is executed. I/System.out: The B Activity is destroyed. I/System.out: The A Activity is destroyed. And I automatically exit the app. My question is: Why this was not triggered when I pressed the button on the activity B? B class: startActivity(new Intent(this, C.class)); B on stop: @Override protected void onStop() { Intent returnIntent = getIntent(); setResult(Activity.RESULT_CANCELED, returnIntent); finish(); super.onStop(); System.out.println("The B Activity is stopped."); } A class: @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if(requestCode == 1){ finish(); } } EDIT: Another strange thing, when I press the button in the activity B, I go in the activity C, and the activity B onStop is executed. But: As I said finish() does not trigger setResult() does not trigger activities A "onActivityResult" When is setResult() triggered? together with onDestroy()? EDIT 2 If you read the comments, another interesting question came up. Why the onDetroy() method does not get called when finish() gets called from onStop()?; and when the finish() method is out of onStop(), the onDestroy() gets normally called. Is it normal/by design? A: Activities are on a stack so when you start a new activity from one, the old is not destroyed, it will be kept on the stack cause normally you will come back to it in the lifecycle of your app, by pressing "back-button". When you want a result of a triggered Activity, you must call startActivityForResult onDestroy is triggered when the activity is no longer needed which is a decision of the android framework and you can not really know, when this happens, since android guess its a good chance you will come back to it in your apps lifecycle. It depends on used memory and the distance to navigate to it, just an optimization of constructing activities to avoid laggy behavior. A: Regarding your "Edit 2", the onDestroy() actually does get called when you leave the screen till it gets black, or when you simply press shut down botton to turn the screen off. As @Henning Luther mentioned, the answer is maybe somewhere deep in Android, and have possibly to do also with the surface and other things also... If I put finish() out of onStop(), it gets normally destroyed. *it is not a completly answer but since I cannot put comments I did it in this way, cause wanted to share it A: I had some issues also, that i found that after calling finish on activity onDestroy and also listener ActivityLifecycleCallbacks.onActivityDestroyed where not called. 
However, when I checked out the docs, it looks like they should be. The Activity.onDestroy documentation says: Perform any final cleanup before an activity is destroyed. This can happen either because the activity is finishing (someone called finish() on it), or because the system is temporarily destroying this instance of the activity to save space. You can distinguish between these two scenarios with the isFinishing() method.... But I found that isFinishing was also not true. The solution on my end was simply to wait before running the code that relies on the activity's onDestroy being called, by using Handler.postDelayed running on the main thread; you can also use View.post(Runnable action), which does the same thing.
Android - call "finish()" does not trigger onDestroy
I have 3 activities. And I want to do something like this: A -> on button press -> B -> on button press -> (destroy A, B) create C Everything works, but I noticed some "strange" behaviour in the logs of Android Studio. When I press the button on activity A this is done: I/System.out: The A Activity is onStart. I/System.out: The A Activity is onResume. I/System.out: The A Activity is onPause. I/System.out: The B Activity is onResume. I/System.out: The A Activity is stopped. Then I press the button again on activity B: I/System.out: The B Activity is paused. I/System.out: The B Activity is stopped. and I enter in activity C. But on the console is not written that activity A and B are destroyed, even if I call the method finish() in activity B. Just, when from the activity C I press the back button this is executed. I/System.out: The B Activity is destroyed. I/System.out: The A Activity is destroyed. And I automatically exit the app. My question is: Why this was not triggered when I pressed the button on the activity B? B class: startActivity(new Intent(this, C.class)); B on stop: @Override protected void onStop() { Intent returnIntent = getIntent(); setResult(Activity.RESULT_CANCELED, returnIntent); finish(); super.onStop(); System.out.println("The B Activity is stopped."); } A class: @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if(requestCode == 1){ finish(); } } EDIT: Another strange thing, when I press the button in the activity B, I go in the activity C, and the activity B onStop is executed. But: As I said finish() does not trigger setResult() does not trigger activities A "onActivityResult" When is setResult() triggered? together with onDestroy()? EDIT 2 If you read the comments, another interesting question came up. Why the onDetroy() method does not get called when finish() gets called from onStop()?; and when the finish() method is out of onStop(), the onDestroy() gets normally called. Is it normal/by design?
[ "\nActivities are on a stack so when you start a new activity from one, the old is not destroyed, it will be kept on the stack cause normally you will come back to it in the lifecycle of your app, by pressing \"back-button\". \nWhen you want a result of a triggered Activity, you must call startActivityForResult\nonDestroy is triggered when the activity is no longer needed which is a decision of the android framework and you can not really know, when this happens, since android guess its a good chance you will come back to it in your apps lifecycle. It depends on used memory and the distance to navigate to it, just an optimization of constructing activities to avoid laggy behavior.\n\n", "Regarding your \"Edit 2\", the onDestroy() actually does get called when you leave the screen till it gets black, or when you simply press shut down botton to turn the screen off.\nAs @Henning Luther mentioned, the answer is maybe somewhere deep in Android, and have possibly to do also with the surface and other things also... \nIf I put finish() out of onStop(), it gets normally destroyed.\n*it is not a completly answer but since I cannot put comments I did it in this way, cause wanted to share it \n", "I had some issues also, that i found that after calling finish on activity onDestroy and also listener ActivityLifecycleCallbacks.onActivityDestroyed where not called. How-where when I checked out docs it looks like they should:\nActivity.onDestory documentations says:\n\nPerform any final cleanup before an activity is destroyed. This can\nhappen either because the activity is finishing (someone called\nfinish() on it), or because the system is temporarily destroying this\ninstance of the activity to save space. You can distinguish between\nthese two scenarios with the isFinishing() method....\n\nBut I found that isFinishing also was not true.\nSolution on my end was simply to wait with code that rely on activity onDestroy to be called by using: Handler.postDelayed running on main thread, also you can use view post(Runnable action) that do the same thing.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "android", "android_intent", "android_lifecycle", "java" ]
stackoverflow_0035972726_android_android_intent_android_lifecycle_java.txt
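One point the answers circle around but never show: in the question, setResult() and finish() are called from inside onStop(), which runs while B is already being backgrounded for C, so the framework may defer B's destruction and A's callback. Moving them into the button's click handler behaves as the asker expected. A hypothetical sketch (the request code and click wiring are illustrative, not from the question):

// In activity A, start B expecting a result:
startActivityForResult(new Intent(this, B.class), 1);

// In activity B, inside the button's OnClickListener:
setResult(Activity.RESULT_CANCELED, getIntent());
startActivity(new Intent(B.this, C.class));
finish(); // B can now be destroyed normally, and A receives onActivityResult(1, ...)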
Q: Regular expression to capture alias in a string to be used in an oracle query without capturing characters that are the same as the alias I have a string that will be used in an oracle query that I need to parse and change the alias of the columns in the string to a different value and I need to use a regular expression to do so. Here is an example of the sort of string I will be dealing with: ph.activity = ''ph.activity.1'' AND ((ph.activity = ''ph.act.1'' AND ph.activity = ''test.ph.act.1'' AND ph.activity = ''ph.test.23'')) I need to change all the ph. in the string where the ph. appears at the start of the string, where it appears after a ( character and where it appears after a whitespace and not any others. So far for my regular expression I have this: (^ph\.)|( ph.)|(\(ph.) This doesn't work for me as it also captures the whitespace character preceding the ph. as well as the ( character preceding its ph. How can I modify this expression to capture the ph. without also capturing the ( and whitespace character? A: You could change the pattern to have a single capturing group that allows start-anchor, opening parenthesis or space: '(^|\(| )ph\.' and in the replace string, prefix your value with \1 so it includes whatever that matched. For example: select regexp_replace( 'ph.activity = ''ph.activity.1'' AND ((ph.activity = ''ph.act.1'' AND ph.activity = ''test.ph.act.1'' AND ph.activity = ''ph.test.23''))', '(^|\(| )ph\.', '\1TEST.') as result from dual RESULT TEST.activity = 'ph.activity.1' AND ((TEST.activity = 'ph.act.1' AND TEST.activity = 'test.ph.act.1' AND TEST.activity = 'ph.test.23')) fiddle
Regular expression to capture alias in a string to be used in an oracle query without capturing characters that are the same as the alias
I have a string that will be used in an oracle query that I need to parse and change the alias of the columns in the string to a different value and I need to use a regular expression to do so. Here is an example of the sort of string I will be dealing with: ph.activity = ''ph.activity.1'' AND ((ph.activity = ''ph.act.1'' AND ph.activity = ''test.ph.act.1'' AND ph.activity = ''ph.test.23'')) I need to change all the ph. in the string where the ph. appears at the start of the string, where it appears after a ( character and where it appears after a whitespace and not any others. So far for my regular expression I have this: (^ph\.)|( ph.)|(\(ph.) This doesn't work for me as it also captures the whitespace character preceding the ph. as well as the ( character preceding its ph. How can I modify this expression to capture the ph. without also capturing the ( and whitespace character?
[ "You could change the pattern to have a single capturing group that allows start-anchor, opening parenthesis or space:\n'(^|\\(| )ph\\.'\n\nand in the replace string, prefix your value with \\1 so it includes whatever that matched.\nFor example:\nselect regexp_replace(\n 'ph.activity = ''ph.activity.1'' AND ((ph.activity = ''ph.act.1'' AND ph.activity = ''test.ph.act.1'' AND ph.activity = ''ph.test.23''))',\n '(^|\\(| )ph\\.',\n '\\1TEST.') as result\nfrom dual\n\n\n\n\n\nRESULT\n\n\n\n\nTEST.activity = 'ph.activity.1' AND ((TEST.activity = 'ph.act.1' AND TEST.activity = 'test.ph.act.1' AND TEST.activity = 'ph.test.23'))\n\n\n\n\nfiddle\n" ]
[ 0 ]
[]
[]
[ "expression", "oracle", "replace", "string" ]
stackoverflow_0074659765_expression_oracle_replace_string.txt
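One caveat on the accepted pattern: (^|\(| ) only treats start-of-string, ( and a space as valid boundaries, so an alias written directly after another delimiter (for example a comma) would be missed. Whether that can occur depends on how the strings are generated; if it can, a slightly wider character class is a possible variant (untested sketch):

SELECT REGEXP_REPLACE(
         'ph.a = 1 AND fn(ph.b,ph.c) = 2',
         '(^|[(, ])ph\.',
         '\1TEST.') AS result
FROM dual;

-- expected: TEST.a = 1 AND fn(TEST.b,TEST.c) = 2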
Q: How to initialize two components sequentially on a Blazor page? I'm using Blazor server-side (with pre-render disabled) with .NET 6. My page looks like: <MyComponentA /> <MyComponentB /> and the components both look like: @inject MyDbService db ... @code { protected override async Task OnParametersSetAsync() { var v = await db.SomeSelect(); // ... use v } } MyDbService is a Scoped service that uses EFCore and has a DbContext member, so there is only one active database connection. It's an external requirement for this task to only have a single database connection -- I don't want to use a Transient service instead. This code causes a runtime error because the DbContext throws an exception if two components both try to use it concurrently. (i.e. DbContext is not re-entrant). aside If I understand correctly, the flow is that db.SomeSelect in MyComponentA initiates, and await means the execution of the Page continues and eventually reaches db.SomeSelect in MyComponentB, which makes a second request on the same thread that still has the first call in progress. My question is: Is there a tidy way to make MyComponentB not make its database call until MyComponentA has finished initializing? One option would be to make MyComponentB do nothing until a specific function is called; and pass that function as parameter to MyComponentA to call when it's finished loading. But that feels pretty clunky and spaghetti so I wonder if there is a better way. A: Here's a lightweight scheduler that uses TaskCompletionSource objects to manually control Tasks passed back to the caller. This is a simple demo, getting time with a configurable delay from a single source that throws an exception if you try and run it in parallel. You should be able to apply this pattern to schedule sequential requests into your data pipeline. 
The Scoped Queue Service: public class GetTheTime : IAsyncDisposable { private bool _processing; private Queue<TimeRequest> _timeQueue = new(); private Task _queueTask = Task.CompletedTask; private bool _disposing; // only way to get the time public Task<string> GetTime(int delay) { var value = new TimeRequest(delay); // Queues the request _timeQueue.Enqueue(value); // Checks if the queue service is running and if not run it // lock it while we check and potentially start it to ensure thread safety lock (_queueTask) { if (_queueTask.IsCompleted) _queueTask = this.QueueService(); } // returns the maunally controlled Task to the caller who can await it return value.CompletionSource.Task; } private async Task QueueService() { // loop thro the queue and run the enqueued requests till it's empty while (_timeQueue.Count > 0) { if (_disposing) break; var value = _timeQueue.Dequeue(); // do the work and wait for it to complete var result = await _getTime(value.Delay); value.CompletionSource.TrySetResult(result); } } private async Task<string> _getTime(int delay) { // If more than one of me is running go BANG if (_processing) throw new Exception("Bang!"); _processing = true; // Emulate an async database call await Task.Delay(delay); _processing = false; return DateTime.Now.ToLongTimeString(); } public async ValueTask DisposeAsync() { _disposing = true; await _queueTask; } private readonly struct TimeRequest { public int Delay { get; } public TaskCompletionSource<string> CompletionSource { get; } = new TaskCompletionSource<string>(); public TimeRequest(int delay) => Delay = delay; } } A simple Component: @inject GetTheTime service <div class="alert alert-info"> @this.message </div> @code { [Parameter] public int Delay { get; set; } = 1000; private string message = "Not Started"; protected async override Task OnInitializedAsync() { message = "Processing"; message = await service.GetTime(this.Delay); } } And a demo page: @page "/" <PageTitle>Index</PageTitle> <h1>Hello, world!</h1> Welcome to your new app. <Component Delay="3000" /> <Component Delay="2000" /> <Component Delay="1000" />
How to initialize two components sequentially on a Blazor page?
I'm using Blazor server-side (with pre-render disabled) with .NET 6. My page looks like: <MyComponentA /> <MyComponentB /> and the components both look like: @inject MyDbService db ... @code { protected override async Task OnParametersSetAsync() { var v = await db.SomeSelect(); // ... use v } } MyDbService is a Scoped service that uses EFCore and has a DbContext member, so there is only one active database connection. It's an external requirement for this task to only have a single database connection -- I don't want to use a Transient service instead. This code causes a runtime error because the DbContext throws an exception if two components both try to use it concurrently. (i.e. DbContext is not re-entrant). aside If I understand correctly, the flow is that db.SomeSelect in MyComponentA initiates, and await means the execution of the Page continues and eventually reaches db.SomeSelect in MyComponentB, which makes a second request on the same thread that still has the first call in progress. My question is: Is there a tidy way to make MyComponentB not make its database call until MyComponentA has finished initializing? One option would be to make MyComponentB do nothing until a specific function is called; and pass that function as parameter to MyComponentA to call when it's finished loading. But that feels pretty clunky and spaghetti so I wonder if there is a better way.
[ "Here's a lightweight scheduler that uses TaskCompletionSource objects to manually control Tasks passed back to the caller.\nThis is a simple demo, getting time with a configurable delay from a single source that throws an exception if you try and run it in parallel.\nYou should be able to apply this pattern to schedule sequential requests into your data pipeline.\nThe Scoped Queue Service:\npublic class GetTheTime : IAsyncDisposable\n{\n private bool _processing;\n\n private Queue<TimeRequest> _timeQueue = new();\n private Task _queueTask = Task.CompletedTask;\n private bool _disposing;\n\n // only way to get the time\n public Task<string> GetTime(int delay)\n {\n var value = new TimeRequest(delay);\n\n // Queues the request\n _timeQueue.Enqueue(value);\n\n // Checks if the queue service is running and if not run it\n // lock it while we check and potentially start it to ensure thread safety\n lock (_queueTask)\n {\n if (_queueTask.IsCompleted)\n _queueTask = this.QueueService();\n }\n\n // returns the maunally controlled Task to the caller who can await it\n return value.CompletionSource.Task;\n }\n\n private async Task QueueService()\n {\n // loop thro the queue and run the enqueued requests till it's empty\n while (_timeQueue.Count > 0)\n {\n if (_disposing)\n break;\n\n var value = _timeQueue.Dequeue();\n\n // do the work and wait for it to complete\n var result = await _getTime(value.Delay);\n\n value.CompletionSource.TrySetResult(result);\n }\n }\n\n private async Task<string> _getTime(int delay)\n {\n // If more than one of me is running go BANG\n if (_processing)\n throw new Exception(\"Bang!\");\n\n _processing = true;\n // Emulate an async database call\n await Task.Delay(delay);\n _processing = false;\n\n return DateTime.Now.ToLongTimeString();\n }\n\n public async ValueTask DisposeAsync()\n {\n _disposing = true;\n await _queueTask;\n }\n\n private readonly struct TimeRequest\n {\n public int Delay { get; }\n public TaskCompletionSource<string> CompletionSource { get; } = new TaskCompletionSource<string>();\n\n public TimeRequest(int delay)\n => Delay = delay;\n }\n}\n\nA simple Component:\n@inject GetTheTime service\n<div class=\"alert alert-info\">\n @this.message\n</div>\n\n@code {\n [Parameter] public int Delay { get; set; } = 1000;\n\n private string message = \"Not Started\";\n\n protected async override Task OnInitializedAsync()\n {\n message = \"Processing\";\n message = await service.GetTime(this.Delay);\n }\n}\n\nAnd a demo page:\n@page \"/\"\n\n<PageTitle>Index</PageTitle>\n\n<h1>Hello, world!</h1>\n\nWelcome to your new app.\n\n<Component Delay=\"3000\" />\n<Component Delay=\"2000\" />\n<Component Delay=\"1000\" />\n\n" ]
[ 1 ]
[]
[]
[ "blazor", "c#", "entity_framework_core" ]
stackoverflow_0074651951_blazor_c#_entity_framework_core.txt
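If strict FIFO ordering of the components' queries is not required, a simpler alternative to the hand-rolled queue is to serialize access to the scoped DbContext with a SemaphoreSlim inside the service, so whichever component initializes second just waits its turn. A rough sketch; the entity and DbSet names are placeholders, not from the question:

public class MyDbService
{
    private readonly MyDbContext _db;
    private readonly SemaphoreSlim _gate = new(1, 1);

    public MyDbService(MyDbContext db) => _db = db;

    public async Task<List<Item>> SomeSelect()
    {
        await _gate.WaitAsync();   // only one query at a time on this DbContext
        try
        {
            return await _db.Items.ToListAsync();
        }
        finally
        {
            _gate.Release();
        }
    }
}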
Q: SQL update set and select issue I have this code:
SELECT ZONE_NAME,
 ORG_ID,
 ORG_SHORT_NAME,
 XMSSN_USE
FROM TABLEZ
;
UPDATE TABLEZ
SET TABLEZ.BILL_CYCLE = '1-DEC-2020 4'
WHERE (SELECT TABLEZ.XMSSN_USE WHERE TABLEZ.XMSSN_USE=224)

But it doesn't seem to execute. What could be the problem?

A: You have 2 different query statements. You should use ; after the first query and then write the update query.
Also, you need to use a field in the WHERE clause; your WHERE is empty.
SQL update set and select issue
I have this code: SELECT ZONE_NAME, ORG_ID, ORG_SHORT_NAME, XMSSN_USE FROM TABLEZ ; UPDATE TABLEZ SET TABLEZ.BILL_CYCLE = '1-DEC-2020 4' WHERE (SELECT TABLEZ.XMSSN_USE WHERE TABLEZ.XMSSN_USE=224) But it doesn't seem to execute. What could be the problem?
[ "You have 2 different query sentences. You should use ; after first query and then write the update query.\nAlso you need to use some field on the where clause, your where is empty.\n" ]
[ 0 ]
[]
[]
[ "oracle", "sql" ]
stackoverflow_0074659864_oracle_sql.txt
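For completeness: the WHERE clause in the question is not valid Oracle syntax, since a bare subquery cannot act as a condition, and '1-DEC-2020 4' is a string being assigned to what is presumably a DATE column. If the intent was to update rows whose XMSSN_USE is 224, a direct form would be as below (the format mask is an assumption, and the trailing " 4" in the original literal is ambiguous, so it is omitted here):

UPDATE tablez
SET    bill_cycle = TO_DATE('01-DEC-2020', 'DD-MON-YYYY')
WHERE  xmssn_use = 224;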
Q: Java - from Array to linked ArrayList why it doesn't affect all int values? I am studing Java, i have simple Array linked to ArrayList, it is fixed size i can change values inside array or list without change length. So i tried to change all elements of the Array to see changes into ArrayList (it doesn't work). I saw that if i change single value into Array my list would change too (it works). If i change my List values into array wuold changed. If i change List or Array length would throw exception. String[] nameListLinkedToArrayFixedSize = {"Jhonny","Joe","Jhoseph"}; List<String> nameListLinkedToArray = Arrays.asList(nameListLinkedToArrayFixedSize); nameListLinkedToArrayFixedSize[1] = "J.Joe"; // this change my list nameListLinkedToArrayFixedSize = new String[]{"ead","sda","eps"}; //change my array but non change my list System.out.println(nameListLinkedToArray) // is same as first array why? nameListLinkedToArray.set(2, "J.Jhoseph"); //[Jhonny, J.Joe, J.Jhoseph] I need to understand how works linked arrays, i suppose this is not go well without point new array to new linked list? Why single operation on array change list? What is pointer of linked list after i change all element of array? Why my list continues update old values of array? Where to find specific documentation? A: You have to dig into the code for Arrays.asList(). When you do, you will see that it uses the array you pass to it as the backing store. If you make changes to items in that array going forward in your code, those changes will also be seen in elements of the list. However, if you assign your original array to a new array, you have effectively de-linked your external reference to the backing store, hence changes to elements of the new array are not reflected in the list. import java.util.Arrays; import java.util.List; public class ArraysAsListSideEffects { public static void main(String[] args) { codeWithSideEffects(); codeWithoutSideEffects(); } private static void codeWithoutSideEffects() { System.out.println("\n\nCode without side effects: "); String[] originalArray = { "Jhonny", "Joe", "Jhoseph" }; List<String> listFromArray = List.of(originalArray); System.out.println(listFromArray); originalArray[1] = "J.Joe"; System.out.println("After updating original array: " + listFromArray); } protected static void codeWithSideEffects() { System.out.println("Side effects of using Arrays.asList()"); String[] originalArray = { "Jhonny", "Joe", "Jhoseph" }; List<String> listFromArray = Arrays.asList(originalArray); // Because Arrays.asList uses your original array as the // backing store, changing an element in the original // array changes the element in the list. originalArray[1] = "J.Joe"; // this change my list // change my array but non change my list originalArray = new String[] { "ead", "sda", "eps" }; // Since you assigned original array to a new array, // changes to it will not affect the backing store reference // used by listFromArray. System.out.println(listFromArray); // is same as first array why? // Yes because Arrays.asList stores a reference to that // list. listFromArray.set(2, "J.Jhoseph"); // [Jhonny, J.Joe, J.Jhoseph]} // This is correct because of how Arrays.asList // creates an ArrayList with your original array // as backing store. System.out.println(listFromArray); } } Output: Side effects of using Arrays.asList() [Jhonny, J.Joe, Jhoseph] [Jhonny, J.Joe, J.Jhoseph] Code without side effects: [Jhonny, Joe, Jhoseph] After updating original array: [Jhonny, Joe, Jhoseph]
Java - from Array to linked ArrayList why it doesn't affect all int values?
I am studying Java. I have a simple Array linked to an ArrayList; it is a fixed size, and I can change values inside the array or the list without changing the length. So I tried to change all the elements of the Array to see the changes in the ArrayList (it doesn't work). I saw that if I change a single value in the Array my list changes too (it works). If I change my List values, the array changes as well. If I change the List or Array length, it throws an exception.
String[] nameListLinkedToArrayFixedSize = {"Jhonny","Joe","Jhoseph"};
List<String> nameListLinkedToArray = Arrays.asList(nameListLinkedToArrayFixedSize);

 nameListLinkedToArrayFixedSize[1] = "J.Joe"; // this change my list
 nameListLinkedToArrayFixedSize = new String[]{"ead","sda","eps"}; //change my array but non change my list
 System.out.println(nameListLinkedToArray) // is same as first array why?
 nameListLinkedToArray.set(2, "J.Jhoseph"); //[Jhonny, J.Joe, J.Jhoseph]

I need to understand how linked arrays work; I suppose this does not work without pointing a new array to a new linked list? Why does a single operation on the array change the list? What does the linked list point to after I change all the elements of the array? Why does my list keep showing the old values of the array? Where can I find specific documentation?
[ "You have to dig into the code for Arrays.asList(). When you do, you will see that it uses the array you pass to it as the backing store. If you make changes to items in that array going forward in your code, those changes will also be seen in elements of the list.\nHowever, if you assign your original array to a new array, you have effectively de-linked your external reference to the backing store, hence changes to elements of the new array are not reflected in the list.\nimport java.util.Arrays;\nimport java.util.List;\n\npublic class ArraysAsListSideEffects {\n\n public static void main(String[] args) {\n\n codeWithSideEffects();\n\n codeWithoutSideEffects();\n }\n\n private static void codeWithoutSideEffects() {\n System.out.println(\"\\n\\nCode without side effects: \");\n\n String[] originalArray = { \"Jhonny\", \"Joe\", \"Jhoseph\" };\n List<String> listFromArray = List.of(originalArray);\n System.out.println(listFromArray);\n\n originalArray[1] = \"J.Joe\";\n System.out.println(\"After updating original array: \" + listFromArray);\n }\n\n protected static void codeWithSideEffects() {\n System.out.println(\"Side effects of using Arrays.asList()\");\n String[] originalArray = { \"Jhonny\", \"Joe\", \"Jhoseph\" };\n List<String> listFromArray = Arrays.asList(originalArray);\n\n // Because Arrays.asList uses your original array as the\n // backing store, changing an element in the original\n // array changes the element in the list.\n originalArray[1] = \"J.Joe\"; // this change my list\n\n // change my array but non change my list\n originalArray = new String[] { \"ead\", \"sda\", \"eps\" };\n // Since you assigned original array to a new array,\n // changes to it will not affect the backing store reference\n // used by listFromArray.\n\n System.out.println(listFromArray); // is same as first array why?\n // Yes because Arrays.asList stores a reference to that\n // list.\n\n listFromArray.set(2, \"J.Jhoseph\"); // [Jhonny, J.Joe, J.Jhoseph]} \n // This is correct because of how Arrays.asList\n // creates an ArrayList with your original array\n // as backing store.\n System.out.println(listFromArray);\n }\n}\n\nOutput:\nSide effects of using Arrays.asList()\n[Jhonny, J.Joe, Jhoseph]\n[Jhonny, J.Joe, J.Jhoseph]\n\n\nCode without side effects: \n[Jhonny, Joe, Jhoseph]\nAfter updating original array: [Jhonny, Joe, Jhoseph]\n\n" ]
[ 0 ]
[ "I think you understand that the list array value is defined upper but when you change the string[] it doesn't affect le list so you have to change it again\n" ]
[ -1 ]
[ "arraylist", "arrays", "java", "list", "object" ]
stackoverflow_0074659098_arraylist_arrays_java_list_object.txt
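A small addendum to the demo above: when the list must not track later changes to the array at all, copying the elements at construction breaks the link that Arrays.asList() keeps. Sketch:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CopyExample {
    public static void main(String[] args) {
        String[] names = {"Jhonny", "Joe", "Jhoseph"};

        // new ArrayList copies the elements instead of wrapping the array
        List<String> independent = new ArrayList<>(Arrays.asList(names));

        names[1] = "J.Joe";
        System.out.println(independent); // [Jhonny, Joe, Jhoseph] - unchanged
    }
}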
Q: Pyspark : TypeError: _api() takes 1 positional argument I use this code for the pivot on the dataframe : df2 = df.groupBy("id").pivot("status").count("status") df = df.join(df2, on="id", how='left') But I obtain this error : TypeError: _api() takes 1 positional argument but 2 were given Please, can we help me ?? A: When you would to count aggregated values, you could import spark sql functions: from pyspark.sql import functions as spark_sql_functions For example, let's you have next dataframe: df.show() +--------------------+-----------------+------------------+ | country| name| subcountry| +--------------------+-----------------+------------------+ | Andorra| les Escaldes|Escaldes-Engordany| | Andorra| Andorra la Vella| Andorra la Vella| |United Arab Emirates| Umm al Qaywayn| Umm al Qaywayn| |United Arab Emirates| Ras al-Khaimah| Raʼs al Khaymah| |United Arab Emirates| Khawr Fakkān| Ash Shāriqah| |United Arab Emirates| Dubai| Dubai| ... ... Use the agg function with spark sql functions: df_cities = df.groupBy('country', 'subcountry').agg( spark_sql_functions.count('name').alias('cities')).groupBy('country').agg( spark_sql_functions.count('subcountry').alias('subcountry'), spark_sql_functions.sum('cities').alias('cnt')).sort(spark_sql_functions.desc('cnt')) df_cities.show() => +--------------+----------+----+ | country|subcountry| cnt| +--------------+----------+----+ | United States| 51|2699| | India| 35|2443| | Brazil| 27|1200| | Russia| 82|1093| | Germany| 16|1055| | China| 31| 799| | Japan| 47| 736| | France| 13| 633| | Italy| 20| 571| ... I hope this example be useful.
Pyspark : TypeError: _api() takes 1 positional argument
I use this code for the pivot on the dataframe:
df2 = df.groupBy("id").pivot("status").count("status")
df = df.join(df2, on="id", how='left')

But I obtain this error:
TypeError: _api() takes 1 positional argument but 2 were given

Please, can you help me?
[ "When you would to count aggregated values, you could import spark sql functions:\nfrom pyspark.sql import functions as spark_sql_functions\n\nFor example, let's you have next dataframe:\ndf.show()\n\n+--------------------+-----------------+------------------+\n| country| name| subcountry|\n+--------------------+-----------------+------------------+\n| Andorra| les Escaldes|Escaldes-Engordany|\n| Andorra| Andorra la Vella| Andorra la Vella|\n|United Arab Emirates| Umm al Qaywayn| Umm al Qaywayn|\n|United Arab Emirates| Ras al-Khaimah| Raʼs al Khaymah|\n|United Arab Emirates| Khawr Fakkān| Ash Shāriqah|\n|United Arab Emirates| Dubai| Dubai|\n...\n...\n\nUse the agg function with spark sql functions:\ndf_cities = df.groupBy('country', 'subcountry').agg(\n spark_sql_functions.count('name').alias('cities')).groupBy('country').agg(\n spark_sql_functions.count('subcountry').alias('subcountry'),\n spark_sql_functions.sum('cities').alias('cnt')).sort(spark_sql_functions.desc('cnt'))\ndf_cities.show()\n=>\n+--------------+----------+----+\n| country|subcountry| cnt|\n+--------------+----------+----+\n| United States| 51|2699|\n| India| 35|2443|\n| Brazil| 27|1200|\n| Russia| 82|1093|\n| Germany| 16|1055|\n| China| 31| 799|\n| Japan| 47| 736|\n| France| 13| 633|\n| Italy| 20| 571|\n...\n\nI hope this example be useful.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pyspark" ]
stackoverflow_0071268154_dataframe_pyspark.txt
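Worth noting: the answer above sidesteps the actual TypeError. After pivot(), the grouped count() takes no argument; passing "status" is exactly what raises TypeError: _api() takes 1 positional argument but 2 were given, because the post-pivot aggregate methods are generated wrappers that accept only self. A sketch of the presumably intended call, with column names taken from the question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# df is assumed to already exist with columns "id" and "status"
df2 = df.groupBy("id").pivot("status").count()   # count() with no argument
df = df.join(df2, on="id", how="left")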
Q: Why does PDO SELECT work but BASIC INSERT fail? I have connected to the db and am able to update a record. I have a variable named "action" that is either "update" or "add". I use it in a switch statement to set my query to either "SELECT" or "INSERT". The SELECT statement works. The INSERT statement does not. I get this error on $pdo->execute($data).
PHP Fatal error: Uncaught PDOException: SQLSTATE[HY093]: Invalid parameter number: parameter was not defined in ... PDOStatement->execute(Array)

The error is thrown by the PDOStatement
Here is what I have tried; it seems pretty straightforward, but I'm struggling with it.
$data = [
 'firstName'=> $firstName,
 'lastName'=> $lastName,
 'badge'=> $badge,
 'department'=> $department,
 'image'=> $image,
 'active'=> $active,
 'stars'=> $stars,
 'email'=> $email,
 'primary_key'=> $primaryKey,
 ];

$sql = "INSERT INTO `team` (`primary_key`,`firstName`, `lastName`, `badge`, `department`, `image`, `active`, `stars`, `email`) 
VALUES (NULL, :firstName, :lastName, :badge, :department, :image, :active, :stars, :email)";

$pdo->prepare($sql);
$pdo->execute($data); <- error is here

When I simply echo my $data array to see if there is something odd, I don't see anything, based on all the sites I've read.
//$data array
DATA
 primary_key = 
 firstName = test
 lastName = test
 badge = 9000
 department = marketing
 image = 9000.jpg
 active = 1
 stars = 0
 email = [email protected]

primary_key in db is auto-increment
primary_key is $_POST[] on the update query and NULL on the insert query (auto-increment db column)
Any errors that would prevent this INSERT query from working that you can see? I'm stuck.
I know the array has 9 variables, there are 9 fields to insert, and 9 values listed.
A: 
I know the array has 9 variables, there are 9 fields to insert, and 9 values listed.

Count the parameters. There are 8 of them. The array includes a value called primary_key for which there is no parameter in the query.

primary_key in db is auto-increment

Then don't insert a value for it:
$sql = "INSERT INTO `team`
(`firstName`, `lastName`, `badge`, `department`, `image`, `active`, `stars`, `email`)
VALUES
(:firstName, :lastName, :badge, :department, :image, :active, :stars, :email)";

And remove primary_key from the $data array.
Why does PDO SELECT work but BASIC INSERT fail?
I have connected to the db and am able to update a record. I have a variable named "action" that is either "update" or "add". I use it in a switch statement to set my query to either "SELECT" or "INSERT". The SELECT statement works. The INSERT statement does not. I get this error on $pdo->execute($data).
PHP Fatal error: Uncaught PDOException: SQLSTATE[HY093]: Invalid parameter number: parameter was not defined in ... PDOStatement->execute(Array)

The error is thrown by the PDOStatement
Here is what I have tried; it seems pretty straightforward, but I'm struggling with it.
$data = [
 'firstName'=> $firstName,
 'lastName'=> $lastName,
 'badge'=> $badge,
 'department'=> $department,
 'image'=> $image,
 'active'=> $active,
 'stars'=> $stars,
 'email'=> $email,
 'primary_key'=> $primaryKey,
 ];

$sql = "INSERT INTO `team` (`primary_key`,`firstName`, `lastName`, `badge`, `department`, `image`, `active`, `stars`, `email`) 
VALUES (NULL, :firstName, :lastName, :badge, :department, :image, :active, :stars, :email)";

$pdo->prepare($sql);
$pdo->execute($data); <- error is here

When I simply echo my $data array to see if there is something odd, I don't see anything, based on all the sites I've read.
//$data array
DATA
 primary_key = 
 firstName = test
 lastName = test
 badge = 9000
 department = marketing
 image = 9000.jpg
 active = 1
 stars = 0
 email = [email protected]

primary_key in db is auto-increment
primary_key is $_POST[] on the update query and NULL on the insert query (auto-increment db column)
Any errors that would prevent this INSERT query from working that you can see? I'm stuck.
I know the array has 9 variables, there are 9 fields to insert, and 9 values listed.
[ "\nI know it the array has 9 variables, there are 9 fields to insert, and 9 values listed.\n\nCount the parameters. There are 8 of them. The array includes a value called primary_key for which there is no parameter in the query.\n\nprimary_key in db is auto-increment\n\nThen don't insert a value for it:\n$sql = \"INSERT INTO `team`\n(`firstName`, `lastName`, `badge`, `department`, `image`, `active`, `stars`, `email`)\nVALUES\n(:firstName, :lastName, :badge, :department, :image, :active, :stars, :email)\";\n\nAnd remove primary_key from the $data array.\n" ]
[ 0 ]
[]
[]
[ "pdo", "php" ]
stackoverflow_0074659888_pdo_php.txt
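Separately from the parameter count, the snippet as pasted in the question calls execute() on the PDO connection instead of on the statement returned by prepare(); PDO itself has no execute() method, so that would fail even with the parameters fixed. The question's error trace mentions PDOStatement->execute, so this may just be a transcription slip, but for reference a minimal corrected sketch:

<?php
$stmt = $pdo->prepare(
    "INSERT INTO `team`
     (`firstName`, `lastName`, `badge`, `department`, `image`, `active`, `stars`, `email`)
     VALUES
     (:firstName, :lastName, :badge, :department, :image, :active, :stars, :email)"
);
$stmt->execute($data); // $data without the 'primary_key' entry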
Q: Docker installation on Mac M1 I am trying to install Docker Desktop on a Mac M1, but after installation Docker asks me to execute the following command.
docker run -d -p 80:80 docker/getting-started

But it gives the following error:
Unable to find image 'docker/getting-started:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": read tcp 192.168.65.4:58764->192.168.65.5:3128: read: connection reset by peer.
See 'docker run --help'.

Why is it not pulling the Docker image?
Docker installation on Mac M1
I am trying to install Docker Desktop on a Mac M1, but after installation Docker asks me to execute the following command.
docker run -d -p 80:80 docker/getting-started

But it gives the following error:
Unable to find image 'docker/getting-started:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": read tcp 192.168.65.4:58764->192.168.65.5:3128: read: connection reset by peer.
See 'docker run --help'.

Why is it not pulling the Docker image?
[]
[]
[ "(Sorry for the miss... going to try this again)\nTry docker exec command before your command.\nLike this docker exec docker run -d -p 80:80 docker/getting-started\n\"Tried using the docker exec command and it appears to have worked OK with two different ubuntu instances. Did not try Docker Desktop.\nIt kind of looks like there is a problem with Docker Desktop manipulating Terminal.app.\nI’m using the macOS default zshell.\"\nhttps://forums.docker.com/t/problems-getting-started/116487/9\n" ]
[ -1 ]
[ "apple_m1", "docker", "macos" ]
stackoverflow_0074659859_apple_m1_docker_macos.txt
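The recorded non-answer above was downvoted; as a hedged aside (an assumption, not a confirmed fix), the ':3128' in the error message is a common HTTP-proxy port, so a first diagnostic step could be checking Docker Desktop's proxy configuration:

# Sketch of a diagnostic, assuming the 192.168.65.5:3128 target is a misconfigured proxy:
docker info | grep -i proxy   # prints any HTTP/HTTPS proxy Docker is configured to use
# If a proxy shows up, review Docker Desktop > Settings > Resources > Proxies and fix or clear it.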
Q: Can't call a function from CountDownTimer object. What am I missing? I'm going through Googles Kotlin Compose tutorials. One of the tasks is to build a game where you unscramble words. After completing it, I tried to improve the game on my own, and just can't figure out how to add a countdown timer to it. I want the program to skip a word when time runs out. I'm a programming noob, it's not quite clear to me yet how classes and objects work and how they differ from functions. The code for the timer at the moment: object Timer: CountDownTimer(60000, 1000) { override fun onTick(millisUntilFinished: Long) { TODO("Not yet implemented") } override fun onFinish() { skipWord() // <<----------- **Unresolved reference: skipWord** } } Elsewhere in my code I have: class GameViewModel : ViewModel() { //.... fun skipWord() { // <<---------------- Function that skips to the next word updateGameState(_uiState.value.score) updateUserGuess("") } //..... private fun pickRandomWordAndShuffle(): String { // Continue picking up a new random word until you get one that hasn't been used before if (currentLanguage == "English") { currentWord = engWords.random() } else { currentWord = finWords.random() } setPointAmount() Timer.start() // <<---------------Start a new countdown for a new word. if (usedWords.contains(currentWord)) { return pickRandomWordAndShuffle() } else { usedWords.add(currentWord) return shuffleCurrentWord(currentWord) } } } Also, a separate problem: the .random() always uses the same seed and picks the same words to unscramble. A: Change object Timer: CountDownTimer(60000, 1000) { ... to val timer = object: CountDownTimer(60000, 1000) { ... and put in into your GameViewModel class. To solve random issue provide some seed to Random object, like: var myRandom = Random(System.currentTimeMillis()) and then currentWord = engWords[myRandon.nextInt(engwords.lastIndex)]
Can't call a function from CountDownTimer object. What am I missing?
I'm going through Googles Kotlin Compose tutorials. One of the tasks is to build a game where you unscramble words. After completing it, I tried to improve the game on my own, and just can't figure out how to add a countdown timer to it. I want the program to skip a word when time runs out. I'm a programming noob, it's not quite clear to me yet how classes and objects work and how they differ from functions. The code for the timer at the moment: object Timer: CountDownTimer(60000, 1000) { override fun onTick(millisUntilFinished: Long) { TODO("Not yet implemented") } override fun onFinish() { skipWord() // <<----------- **Unresolved reference: skipWord** } } Elsewhere in my code I have: class GameViewModel : ViewModel() { //.... fun skipWord() { // <<---------------- Function that skips to the next word updateGameState(_uiState.value.score) updateUserGuess("") } //..... private fun pickRandomWordAndShuffle(): String { // Continue picking up a new random word until you get one that hasn't been used before if (currentLanguage == "English") { currentWord = engWords.random() } else { currentWord = finWords.random() } setPointAmount() Timer.start() // <<---------------Start a new countdown for a new word. if (usedWords.contains(currentWord)) { return pickRandomWordAndShuffle() } else { usedWords.add(currentWord) return shuffleCurrentWord(currentWord) } } } Also, a separate problem: the .random() always uses the same seed and picks the same words to unscramble.
[ "Change\nobject Timer: CountDownTimer(60000, 1000) {\n...\n\nto\nval timer = object: CountDownTimer(60000, 1000) {\n...\n\nand put in into your GameViewModel class.\nTo solve random issue provide some seed to Random object, like:\nvar myRandom = Random(System.currentTimeMillis())\n\nand then\ncurrentWord = engWords[myRandon.nextInt(engwords.lastIndex)]\n\n" ]
[ 0 ]
[]
[]
[ "class", "kotlin", "object", "random", "timer" ]
stackoverflow_0074656894_class_kotlin_object_random_timer.txt
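A minimal sketch of the suggestion above in context (assuming Android's CountDownTimer and the GameViewModel from the question; anything not shown in the question is illustrative):

// The timer lives inside the ViewModel, so onFinish() can resolve skipWord().
class GameViewModel : ViewModel() {

    private val timer = object : CountDownTimer(60000, 1000) {
        override fun onTick(millisUntilFinished: Long) {
            // no-op; the remaining time could be exposed to the UI state here
        }
        override fun onFinish() {
            skipWord() // resolves now, because we are inside GameViewModel's scope
        }
    }

    fun skipWord() { /* same body as in the question */ }

    private fun pickRandomWordAndShuffle(): String {
        // ...
        timer.cancel() // assumption: restart cleanly for each new word
        timer.start()
        // ...
        return "" // placeholder; the question's own shuffle logic goes here
    }
}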
Q: How to define the body of a method in Flutter I am actaully embarrassed to ask this question as I don't remember how it is done. I am trying to define a method that I can change its body whenever I need. It will be used for a floating action button that will have a certain function in each page of a PageView(). I didn't find anything about it, so can someone remind me of what and how this is done? A: If you want to define a method that can be changed or customized in different pages of a PageView in Flutter, you can use a closure or a lambda expression to create a function that can be passed to the floating action button. Here is an example of how you can define a floating action button with a lambda expression that can be customized in different pages of a PageView: class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( home: PageView( children: [ MyHomePage(), MySecondPage(), MyThirdPage(), ], ), ); } } class MyHomePage extends StatelessWidget { @override Widget build(BuildContext context) { return Scaffold( floatingActionButton: FloatingActionButton( onPressed: () => print('Button pressed on Home Page'), ), ); } } class MySecondPage extends StatelessWidget { @override Widget build(BuildContext context) { return Scaffold( floatingActionButton: FloatingActionButton( onPressed: () => print('Button pressed on Second Page'), ), ); } } class MyThirdPage extends StatelessWidget { @override Widget build(BuildContext context) { return Scaffold( floatingActionButton: FloatingActionButton( onPressed: () => print('Button pressed on Third Page'), ), ); } } In this example, the onPressed property of the floating action button is set to a lambda expression that defines the action to be performed when the button is pressed. This lambda expression is different for each page of the PageView, so the button will have a different function on each page.
How to define the body of a method in Flutter
I am actually embarrassed to ask this question, as I don't remember how it is done. I am trying to define a method whose body I can change whenever I need. It will be used for a floating action button that will have a certain function on each page of a PageView(). I didn't find anything about it, so can someone remind me of what it is and how it is done?
[ "If you want to define a method that can be changed or customized in different pages of a PageView in Flutter, you can use a closure or a lambda expression to create a function that can be passed to the floating action button.\nHere is an example of how you can define a floating action button with a lambda expression that can be customized in different pages of a PageView:\nclass MyApp extends StatelessWidget {\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: PageView(\n children: [\n MyHomePage(),\n MySecondPage(),\n MyThirdPage(),\n ],\n ),\n );\n }\n}\n\nclass MyHomePage extends StatelessWidget {\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n floatingActionButton: FloatingActionButton(\n onPressed: () => print('Button pressed on Home Page'),\n ),\n );\n }\n}\n\nclass MySecondPage extends StatelessWidget {\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n floatingActionButton: FloatingActionButton(\n onPressed: () => print('Button pressed on Second Page'),\n ),\n );\n }\n}\n\nclass MyThirdPage extends StatelessWidget {\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n floatingActionButton: FloatingActionButton(\n onPressed: () => print('Button pressed on Third Page'),\n ),\n );\n }\n}\n\nIn this example, the onPressed property of the floating action button is set to a lambda expression that defines the action to be performed when the button is pressed. This lambda expression is different for each page of the PageView, so the button will have a different function on each page.\n" ]
[ 2 ]
[]
[]
[ "flutter" ]
stackoverflow_0074659900_flutter.txt
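A complementary sketch (my addition, not part of the recorded answer): if the goal is one shared widget whose button behavior changes per page, a function-typed parameter expresses the "changeable method body" directly:

// Inject the FAB behavior instead of duplicating a page class per action.
class PageScaffold extends StatelessWidget {
  final VoidCallback onFabPressed; // the body that varies per page
  const PageScaffold({super.key, required this.onFabPressed});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      floatingActionButton: FloatingActionButton(onPressed: onFabPressed),
    );
  }
}

// Usage inside the PageView:
// PageScaffold(onFabPressed: () => print('Button pressed on Home Page'))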
Q: installing node lts with nvm on windows I've installed nvm on windows (from here), but running nvm install lts prints: lts.0.0 Node.js vlts.0.0 is only available in 32-bit. How do I install node lts on windows? A: To install and use the latest LTS release of Node.js: nvm install --lts nvm use --lts To install and use a specific LTS release of Node.js: nvm install lts/erbium nvm use lts/erbium A: Following this github comment, just run nvm list available and then nvm install x.y.z Don't forget to nvm use x.y.z after you're done. I used this manual - in case it helps. A: nvm install --lts.14.15.4 Downloading node.js version 14.15.4 (64-bit)... Complete A: Removing the prepended -- from lts worked for me (instead of nvm install --lts): nvm install lts
installing node lts with nvm on windows
I've installed nvm on windows (from here), but running nvm install lts prints: lts.0.0 Node.js vlts.0.0 is only available in 32-bit. How do I install node lts on windows?
[ "To install and use the latest LTS release of Node.js:\nnvm install --lts\nnvm use --lts\n\nTo install and use a specific LTS release of Node.js:\nnvm install lts/erbium\nnvm use lts/erbium\n\n", "Following this github comment, just run nvm list available and then nvm install x.y.z\nDon't forget to nvm use x.y.z after you're done. I used this manual - in case it helps.\n", "nvm install --lts.14.15.4\n\nDownloading node.js version 14.15.4 (64-bit)...\nComplete\n", "Removing the prepended -- from lts worked for me (instead of nvm install --lts):\nnvm install lts\n\n" ]
[ 41, 18, 4, 0 ]
[]
[]
[ "node.js", "nvm", "windows" ]
stackoverflow_0064002438_node.js_nvm_windows.txt
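A consolidated sketch of the nvm-windows flow from the second answer (the version number is only an example; pick one from the LTS column of the listing):

nvm list available   # nvm-windows prints a table with an LTS column
nvm install 18.12.1  # example value taken from that column
nvm use 18.12.1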
Q: How To cover remaining space between two division vertically Here is the CSS I have applied CSS to wrapper, header and footer, and I tried to cover the remaining part of the screen with the between container but I am not able to do it. .wrapper { height: 100vh; } .header { width: 100vw; display: flex; position: absolute; top: 0; } .between {} .footer { position: absolute; bottom: 0; width: 100vw; } <div class="wrapper"> <div class="header"></div> <div class="between"></div> <div class="footer"></div> </div> A: you can use display: flex for this, use flex:1 to the child you want to take up available space. .wrapper { height: 100vh; display: flex; flex-direction: column; } .header { background: red; height: 10px; } .between { background: green; flex: 1; } .footer { background: yellow; height: 10px; } <div class="wrapper"> <div class="header"></div> <div class="between"></div> <div class="footer"></div> </div> read more about css flexbox box here.
How To cover remaining space between two division vertically
Here is the CSS. I have applied CSS to the wrapper, header and footer, and I tried to cover the remaining part of the screen with the between container, but I am not able to do it. .wrapper { height: 100vh; } .header { width: 100vw; display: flex; position: absolute; top: 0; } .between {} .footer { position: absolute; bottom: 0; width: 100vw; } <div class="wrapper"> <div class="header"></div> <div class="between"></div> <div class="footer"></div> </div>
[ "you can use display: flex for this, use flex:1 to the child you want to take up available space.\n\n\n.wrapper {\n height: 100vh;\n display: flex;\n flex-direction: column;\n}\n\n.header {\n background: red;\n height: 10px;\n}\n\n.between {\n background: green;\n flex: 1;\n}\n\n.footer {\n background: yellow;\n height: 10px;\n}\n<div class=\"wrapper\">\n <div class=\"header\"></div>\n <div class=\"between\"></div>\n <div class=\"footer\"></div>\n</div>\n\n\n\nread more about css flexbox box here.\n" ]
[ 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074659540_css_html.txt
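An alternative sketch (my addition): CSS grid produces the same header/stretch/footer layout without flex:

.wrapper {
  height: 100vh;
  display: grid;
  grid-template-rows: auto 1fr auto; /* header, stretching middle, footer */
}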
Q: How do I reference the current page object in puppeteer once user moves from login to homepage? So I am trying to use puppeteer to automate some data entry functions in Oracle Cloud applications. As of now I am able to launch the cloud app login page, enter username and password credentials and click login button. Once login is successful, Oracle opens a homepage for the user. Once this happens if I take screenshot or execute a page.content the screenshot and the content html is from the login page not of the homepage. How do I always have a reference to the current page that the user is on? Here is the basic code so far. const puppeteer = require('puppeteer'); const fs = require('fs'); (async () => { const browser = await puppeteer.launch({headless: false}); let page = await browser.newPage(); await page.goto('oraclecloudloginurl', {waitUntil: 'networkidle2'}); await page.type('#userid', 'USERNAME', {delay : 10}); await page.type('#password', 'PASSWORD', {delay : 10}); await page.waitForSelector('#btnActive', {enabled : true}); page.click('#btnActive', {delay : 1000}).then(() => console.log('Login Button Clicked')); await page.waitForNavigation(); await page.screenshot({path: 'home.png'}); const html = await page.content(); await fs.writeFileSync('home.html', html); await page.waitFor(10000); await browser.close(); })(); With this the user logs in fine and the home page is displayed. But I get an error after that when I try to screenshot the homepage and render the html content. It seems to be the page has changed and I am referring to the old page. How can I refer to the context of the current page? Below is the error: (node:14393) UnhandledPromiseRejectionWarning: Error: Protocol error (Runtime.callFunctionOn): Cannot find context with specified id undefined A: This code looks problematic for two reasons: page.click('#btnActive', {delay : 1000}).then(() => console.log('Login Button Clicked')); await page.waitForNavigation(); The first problem is that the page.click().then() spins off a totally separate promise chain: page.click() --> .then(...) | v page.waitForNavigation() | v page.screenshot(...) | v ... This means the click that triggers the navigation and the navigation are running in parallel and can never be rejoined into the same promise chain. The usual solution here is to tie them into the same promise chain: // Note: this code is still broken; keep reading! await page.click('#btnActive', {delay : 1000}); console.log('Login Button Clicked'); await page.waitForNavigation(); This adheres to the principle of not mixing then and await unless you have good reason to. But the above code is still broken because Puppeteer requires the waitForNavigation() promise to be set before the event that triggers navigation is fired. The fix is: await Promise.all([ page.waitForNavigation(), page.click('#btnActive', {delay : 1000}), ]); or const navPromise = page.waitForNavigation(); // no await await page.click('#btnActive', {delay : 1000}); await navPromise; Following this pattern, Puppeteer should no longer be confused about its context. Minor notes: 'networkidle2' is slow and probably unnecessary, especially for a page you're soon going to be navigating away from. I'd default to 'domcontentloaded'. await page.waitFor(10000); is deprecated along with page.waitForTimeout(), although I realize this is an older post.
How do I reference the current page object in puppeteer once user moves from login to homepage?
So I am trying to use puppeteer to automate some data entry functions in Oracle Cloud applications. As of now I am able to launch the cloud app login page, enter username and password credentials and click login button. Once login is successful, Oracle opens a homepage for the user. Once this happens if I take screenshot or execute a page.content the screenshot and the content html is from the login page not of the homepage. How do I always have a reference to the current page that the user is on? Here is the basic code so far. const puppeteer = require('puppeteer'); const fs = require('fs'); (async () => { const browser = await puppeteer.launch({headless: false}); let page = await browser.newPage(); await page.goto('oraclecloudloginurl', {waitUntil: 'networkidle2'}); await page.type('#userid', 'USERNAME', {delay : 10}); await page.type('#password', 'PASSWORD', {delay : 10}); await page.waitForSelector('#btnActive', {enabled : true}); page.click('#btnActive', {delay : 1000}).then(() => console.log('Login Button Clicked')); await page.waitForNavigation(); await page.screenshot({path: 'home.png'}); const html = await page.content(); await fs.writeFileSync('home.html', html); await page.waitFor(10000); await browser.close(); })(); With this the user logs in fine and the home page is displayed. But I get an error after that when I try to screenshot the homepage and render the html content. It seems to be the page has changed and I am referring to the old page. How can I refer to the context of the current page? Below is the error: (node:14393) UnhandledPromiseRejectionWarning: Error: Protocol error (Runtime.callFunctionOn): Cannot find context with specified id undefined
[ "This code looks problematic for two reasons:\npage.click('#btnActive', {delay : 1000}).then(() => console.log('Login Button Clicked'));\n\nawait page.waitForNavigation();\n\nThe first problem is that the page.click().then() spins off a totally separate promise chain:\npage.click() --> .then(...)\n |\n v\npage.waitForNavigation()\n |\n v\npage.screenshot(...)\n |\n v\n ...\n\nThis means the click that triggers the navigation and the navigation are running in parallel and can never be rejoined into the same promise chain. The usual solution here is to tie them into the same promise chain:\n// Note: this code is still broken; keep reading!\nawait page.click('#btnActive', {delay : 1000});\nconsole.log('Login Button Clicked');\nawait page.waitForNavigation();\n\nThis adheres to the principle of not mixing then and await unless you have good reason to.\nBut the above code is still broken because Puppeteer requires the waitForNavigation() promise to be set before the event that triggers navigation is fired. The fix is:\nawait Promise.all([\n page.waitForNavigation(),\n page.click('#btnActive', {delay : 1000}),\n]);\n\nor\nconst navPromise = page.waitForNavigation(); // no await\nawait page.click('#btnActive', {delay : 1000});\nawait navPromise;\n\nFollowing this pattern, Puppeteer should no longer be confused about its context.\n\nMinor notes:\n\n'networkidle2' is slow and probably unnecessary, especially for a page you're soon going to be navigating away from. I'd default to 'domcontentloaded'.\nawait page.waitFor(10000); is deprecated along with page.waitForTimeout(), although I realize this is an older post.\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "puppeteer" ]
stackoverflow_0050658786_javascript_puppeteer.txt
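Applying the answer's Promise.all pattern back to the script in the question gives roughly the following (URL, credentials, and selectors are the question's placeholders; the question's {enabled: true} option is dropped, since as far as I know visible/hidden/timeout are the standard waitForSelector options):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({headless: false});
  const page = await browser.newPage();
  await page.goto('oraclecloudloginurl', {waitUntil: 'domcontentloaded'});

  await page.type('#userid', 'USERNAME', {delay: 10});
  await page.type('#password', 'PASSWORD', {delay: 10});
  await page.waitForSelector('#btnActive');

  // Arm the navigation wait BEFORE firing the click that triggers it.
  await Promise.all([
    page.waitForNavigation(),
    page.click('#btnActive', {delay: 1000}),
  ]);

  await page.screenshot({path: 'home.png'}); // now captures the post-login page
  await browser.close();
})();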
Q: Highlight / color text in a RichTextBox using a pattern I'm trying to create a colorized RichTextBox based on a pattern. The text is: Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color Each { } contains a color which is a style for next words: Hey, This IS Asample. The color is green color Hey, This default color. IS A should be Red color. sample. The should be Cyan color. color is green color should be green. Here this is my code: // Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color // shown text should be: // Hey, This IS A sample. The color is green const string OriginalText = "Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color"; const string ShownText = "Hey, This IS A sample. The color is green color"; const string Pattern = "(?<=\\{)(.*?)(?=\\})"; rtbMain.Text = ShownText; rtbMain.SelectAll(); rtbMain.SelectionColor = Color.Black; rtbMain.SelectionBackColor = Color.White; Regex regex = new(Pattern, RegexOptions.IgnoreCase); MatchCollection matches = regex.Matches(OriginalText); if (matches.Count > 0) { var rtbText = rtbMain.Text; var length = ShownText.Length; var allMatches = new List<Match>(); for (int i = 0; i < matches.Count; i++) { var m = matches[i]; allMatches.Add(m); Match nextMatch = null; if (matches.Count > i + 1) { nextMatch = matches[i + 1]; } var sum = GetSum(); var start = m.Index; var currentLength = m.Length; if (nextMatch != null) { var end = nextMatch.Index - start- sum; rtbMain.Select(start- 1, end); } else { var currentIndex = OriginalText.IndexOf(m.Value); rtbMain.Select(length - currentIndex, (length - currentIndex) - sum); } rtbMain.SelectionColor = GetColor(m.Value); } int GetSum() { return allMatches!.Select(m => m.Value.Length - 1).Sum(); } Color GetColor(string color) { return Color.FromName(color); } } else { Debug.WriteLine("No matches found"); } Since RichTextBox doesn't have the color tags, I don't know how to calculate the correct position of index/length. Screen shot: A: You could also match the end position of the string you're parsing, then, when looping the Matches collection, you just need to calculate the current position inside the string, considering the length of each match. With a slightly modified regex, the Index and Length of each Match refer to a matched tag (e.g., {green}), and each value in Group 1 is the name of a Color. Something like this: (note that just SelectionColor is used here, since I'm appending a new string to the Control on each iteration. The new string added is actually already a Selection, so there's no need to set the selection's length explicitly) string originalText = "Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color\n" + "plus other text {blue} and some more {orange}colors"; string pattern = @"\{(.*?)\}|$"; var matches = Regex.Matches(originalText, pattern, RegexOptions.IgnoreCase); int currentPos = 0; foreach (Match m in matches) { someRichTextBox.AppendText(originalText.Substring(currentPos, m.Index - currentPos)); currentPos = m.Index + m.Length; someRichTextBox.SelectionColor = Color.FromName(m.Groups[1].Value); }; someRichTextBox.SelectionColor = someRichTextBox.ForeColor; Resulting in:
Highlight / color text in a RichTextBox using a pattern
I'm trying to create a colorized RichTextBox based on a pattern. The text is: Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color Each { } contains a color which is a style for next words: Hey, This IS Asample. The color is green color Hey, This default color. IS A should be Red color. sample. The should be Cyan color. color is green color should be green. Here this is my code: // Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color // shown text should be: // Hey, This IS A sample. The color is green const string OriginalText = "Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color"; const string ShownText = "Hey, This IS A sample. The color is green color"; const string Pattern = "(?<=\\{)(.*?)(?=\\})"; rtbMain.Text = ShownText; rtbMain.SelectAll(); rtbMain.SelectionColor = Color.Black; rtbMain.SelectionBackColor = Color.White; Regex regex = new(Pattern, RegexOptions.IgnoreCase); MatchCollection matches = regex.Matches(OriginalText); if (matches.Count > 0) { var rtbText = rtbMain.Text; var length = ShownText.Length; var allMatches = new List<Match>(); for (int i = 0; i < matches.Count; i++) { var m = matches[i]; allMatches.Add(m); Match nextMatch = null; if (matches.Count > i + 1) { nextMatch = matches[i + 1]; } var sum = GetSum(); var start = m.Index; var currentLength = m.Length; if (nextMatch != null) { var end = nextMatch.Index - start- sum; rtbMain.Select(start- 1, end); } else { var currentIndex = OriginalText.IndexOf(m.Value); rtbMain.Select(length - currentIndex, (length - currentIndex) - sum); } rtbMain.SelectionColor = GetColor(m.Value); } int GetSum() { return allMatches!.Select(m => m.Value.Length - 1).Sum(); } Color GetColor(string color) { return Color.FromName(color); } } else { Debug.WriteLine("No matches found"); } Since RichTextBox doesn't have the color tags, I don't know how to calculate the correct position of index/length. Screen shot:
[ "You could also match the end position of the string you're parsing,\nthen, when looping the Matches collection, you just need to calculate the current position inside the string, considering the length of each match.\nWith a slightly modified regex, the Index and Length of each Match refer to a matched tag (e.g., {green}), and each value in Group 1 is the name of a Color.\nSomething like this:\n(note that just SelectionColor is used here, since I'm appending a new string to the Control on each iteration. The new string added is actually already a Selection, so there's no need to set the selection's length explicitly)\nstring originalText = \n \"Hey, This{Red} IS A {Cyan}sample. The {Green}color is green color\\n\" +\n \"plus other text {blue} and some more {orange}colors\";\n\nstring pattern = @\"\\{(.*?)\\}|$\";\nvar matches = Regex.Matches(originalText, pattern, RegexOptions.IgnoreCase);\nint currentPos = 0;\n\nforeach (Match m in matches) {\n someRichTextBox.AppendText(originalText.Substring(currentPos, m.Index - currentPos));\n\n currentPos = m.Index + m.Length;\n someRichTextBox.SelectionColor = Color.FromName(m.Groups[1].Value);\n};\nsomeRichTextBox.SelectionColor = someRichTextBox.ForeColor;\n\nResulting in:\n\n" ]
[ 1 ]
[]
[]
[ "c#", "richtextbox", "winforms" ]
stackoverflow_0074659270_c#_richtextbox_winforms.txt
Q: how to call trigger in procedure plsql Here i have created my trigger and it is working. how to call that trigger inside the procedure. please provide any solution for this. below is my trigger create or replace Trigger emp_trigger Before update on Required_table Begin delete from log_table; insert into log_table(employee_name,phone_number,company_name,location,currency) (select employee_name,phone_number,company_name,location,currency from Required_table); end; here is my procedure code and here i want to call that above trigger code create or replace procedure excercise_one is cursor test_cur is select employee_details.emp_name,employee_details.emp_mobile_no,company.company_name, location.area,currency.currency from employee_details, company, location, currency where employee_details.id = company.emp_no and company.location = location.country and location.location_id = currency.location; ename employee_details.emp_name%type; emp_mob Employee_Details.Emp_Mobile_No%type; cname company.company_name%type; l_area location.area%type; cur currency.currency%type; begin open test_cur; loop fetch test_cur into ename,emp_mob,cname,l_area,cur; if test_cur%Found Then insert into Required_table values (ename,emp_mob,cname,l_area,cur); else exit; end if; end loop; close test_cur; end; A: You can't call a trigger in the stored procedure. Triggers run automatically and are executed or fired when some events occur. For more details, please find the following link https://www.oracletutorial.com/plsql-tutorial/oracle-trigger/ A: It can not be call a trigger from stored procedure. But it can be call a stored procedure from a trigger. A: Your Trigger fires only on Update! For your Problem. Write a Procedure to manipulate your logtable. So you can call ist from a Procedure or a Trigger. MfG Ingo
how to call trigger in procedure plsql
Here i have created my trigger and it is working. how to call that trigger inside the procedure. please provide any solution for this. below is my trigger create or replace Trigger emp_trigger Before update on Required_table Begin delete from log_table; insert into log_table(employee_name,phone_number,company_name,location,currency) (select employee_name,phone_number,company_name,location,currency from Required_table); end; here is my procedure code and here i want to call that above trigger code create or replace procedure excercise_one is cursor test_cur is select employee_details.emp_name,employee_details.emp_mobile_no,company.company_name, location.area,currency.currency from employee_details, company, location, currency where employee_details.id = company.emp_no and company.location = location.country and location.location_id = currency.location; ename employee_details.emp_name%type; emp_mob Employee_Details.Emp_Mobile_No%type; cname company.company_name%type; l_area location.area%type; cur currency.currency%type; begin open test_cur; loop fetch test_cur into ename,emp_mob,cname,l_area,cur; if test_cur%Found Then insert into Required_table values (ename,emp_mob,cname,l_area,cur); else exit; end if; end loop; close test_cur; end;
[ "You can't call a trigger in the stored procedure. Triggers run automatically and are executed or fired when some events occur.\nFor more details, please find the following link\nhttps://www.oracletutorial.com/plsql-tutorial/oracle-trigger/\n", "It can not be call a trigger from stored procedure.\nBut it can be call a stored procedure from a trigger.\n", "Your Trigger fires only on Update!\nFor your Problem.\nWrite a Procedure to manipulate your logtable.\nSo you can call ist from a Procedure or a Trigger.\nMfG\nIngo\n" ]
[ 0, 0, 0 ]
[]
[]
[ "plsql" ]
stackoverflow_0074610345_plsql.txt
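A sketch of the third answer's suggestion (object names come from the question; the wrapper name is my assumption): move the logging into a procedure, then call it from the trigger or from any other PL/SQL block:

CREATE OR REPLACE PROCEDURE refresh_log_table IS
BEGIN
  -- Same body the trigger had: rebuild the log snapshot.
  DELETE FROM log_table;
  INSERT INTO log_table (employee_name, phone_number, company_name, location, currency)
  SELECT employee_name, phone_number, company_name, location, currency
  FROM required_table;
END;
/

CREATE OR REPLACE TRIGGER emp_trigger
BEFORE UPDATE ON required_table
BEGIN
  refresh_log_table; -- the trigger just delegates
END;
/

-- excercise_one can then call refresh_log_table directly after its inserts.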
Q: Limiting user to 1 comment per post in PHP and MYSQL so i just have a simple app that a user can comment on. However, the user needs to be able to post just 1 comment on particular post. This is the method i use to insert: public function insertComment($user_id, $id, $comment) { $sql = "INSERT INTO comments(user_id, post_id, comment_content) VALUES(:user_id, :id, :comment)"; $sqlArr = [ "user_id" => $user_id, "id" => $id, "comment" => $comment ]; $stmt = parent::connect()->prepare($sql); if ($stmt->execute($sqlArr)) { echo "success"; } else { echo "error"; } } This does get the job done, but how can i limit the user to be able to comment only 1 for particular post, if he tries to get an error, for example: if(commentExists()){ echo "You have already commented!"; }else{ //call insert method insertComment($user_id, $id, $comment); } A: To limit a user to only one comment per post in PHP and MySQL, you can use a combination of a unique constraint on the comments table and a check for the existence of a comment in the insertComment() method. First, you can add a unique constraint on the comments table to prevent duplicate entries with the same user ID and post ID. This can be done using the following SQL statement: ALTER TABLE comments ADD UNIQUE (user_id, post_id); This statement will add a unique constraint on the user_id and post_id columns in the comments table. This means that any subsequent inserts with the same user_id and post_id values will fail. Next, you can modify the insertComment() method to check for the existence of a comment with the given user_id and post_id values before inserting the comment. You can do this by adding a SELECT query to check if a comment with the same user_id and post_id exists, and only inserting the comment if it does not exist. Here is an example of how to do this: public function insertComment($user_id, $id, $comment) { // Check if a comment with the same user_id and post_id already exists $checkSql = "SELECT * FROM comments WHERE user_id = :user_id AND post_id = :id"; $checkStmt = parent::connect()->prepare($checkSql); $checkStmt->execute(["user_id" => $user_id, "id" => $id]); $commentExists = $checkStmt->fetch(); if ($commentExists) { // If a comment already exists, return an error message return "You have already commented on this post!"; } else { // If a comment does not exist, insert the comment $sql = "INSERT INTO comments(user_id, post_id, comment_content) VALUES(:user_id, :id, :comment)"; $sqlArr = [ "user_id" => $user_id, "id" => $id, "comment" => $comment ]; $stmt = parent::connect()->prepare($sql); if ($stmt->execute($sqlArr)) { return "success"; } else { return "error"; } } } In this example, the insertComment() method first checks for the existence of a comment with the same user_id and post_id values using a SELECT query. If a comment already exists, an error message is returned. Otherwise, the comment is inserted using the original INSERT statement. You can then call the insertComment() method in your code to insert a comment, and handle the returned value to display an error message if the user has already commented on the post. Here is an example of how to do this: // Call the insertComment() method $result = insertComment($user_id, $id, $comment); // Check the result and display an error message if necessary if ($result === "You have already commented on this post!") {
Limiting user to 1 comment per post in PHP and MYSQL
so i just have a simple app that a user can comment on. However, the user needs to be able to post just 1 comment on particular post. This is the method i use to insert: public function insertComment($user_id, $id, $comment) { $sql = "INSERT INTO comments(user_id, post_id, comment_content) VALUES(:user_id, :id, :comment)"; $sqlArr = [ "user_id" => $user_id, "id" => $id, "comment" => $comment ]; $stmt = parent::connect()->prepare($sql); if ($stmt->execute($sqlArr)) { echo "success"; } else { echo "error"; } } This does get the job done, but how can i limit the user to be able to comment only 1 for particular post, if he tries to get an error, for example: if(commentExists()){ echo "You have already commented!"; }else{ //call insert method insertComment($user_id, $id, $comment); }
[ "To limit a user to only one comment per post in PHP and MySQL, you can use a combination of a unique constraint on the comments table and a check for the existence of a comment in the insertComment() method.\nFirst, you can add a unique constraint on the comments table to prevent duplicate entries with the same user ID and post ID. This can be done using the following SQL statement:\nALTER TABLE comments\nADD UNIQUE (user_id, post_id);\n\nThis statement will add a unique constraint on the user_id and post_id columns in the comments table. This means that any subsequent inserts with the same user_id and post_id values will fail.\nNext, you can modify the insertComment() method to check for the existence of a comment with the given user_id and post_id values before inserting the comment. You can do this by adding a SELECT query to check if a comment with the same user_id and post_id exists, and only inserting the comment if it does not exist. Here is an example of how to do this:\npublic function insertComment($user_id, $id, $comment)\n{\n // Check if a comment with the same user_id and post_id already exists\n $checkSql = \"SELECT * FROM comments\n WHERE user_id = :user_id AND post_id = :id\";\n $checkStmt = parent::connect()->prepare($checkSql);\n $checkStmt->execute([\"user_id\" => $user_id, \"id\" => $id]);\n $commentExists = $checkStmt->fetch();\n\n if ($commentExists) {\n // If a comment already exists, return an error message\n return \"You have already commented on this post!\";\n } else {\n // If a comment does not exist, insert the comment\n $sql = \"INSERT INTO comments(user_id, post_id, comment_content)\n VALUES(:user_id, :id, :comment)\";\n $sqlArr = [\n \"user_id\" => $user_id,\n \"id\" => $id,\n \"comment\" => $comment\n ];\n $stmt = parent::connect()->prepare($sql);\n\n if ($stmt->execute($sqlArr)) {\n return \"success\";\n } else {\n return \"error\";\n }\n }\n}\n\nIn this example, the insertComment() method first checks for the existence of a comment with the same user_id and post_id values using a SELECT query. If a comment already exists, an error message is returned. Otherwise, the comment is inserted using the original INSERT statement.\nYou can then call the insertComment() method in your code to insert a comment, and handle the returned value to display an error message if the user has already commented on the post. Here is an example of how to do this:\n// Call the insertComment() method\n$result = insertComment($user_id, $id, $comment);\n\n// Check the result and display an error message if necessary\nif ($result === \"You have already commented on this post!\") {\n\n" ]
[ 3 ]
[]
[]
[ "php" ]
stackoverflow_0074659910_php.txt
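The final code block in the recorded answer above is cut off mid-statement; a plausible completion, inferred from the answer's own return values (an assumption, not the original text):

// Call the insertComment() method
$result = insertComment($user_id, $id, $comment);

// Check the result and display a message accordingly
if ($result === "You have already commented on this post!") {
    echo $result;              // duplicate comment: show the error
} elseif ($result === "success") {
    echo "Comment added!";
} else {
    echo "Something went wrong while saving your comment.";
}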
Q: Change PDF title in browser window I have a pdf file that I am putting on a website for a client. It is located here... http://www.optiphysicaltherapy.com/dev/wp-content/uploads/2014/02/OPTI_NewPatientForms.pdf The title should be OPTI New Patient Forms but if you look at the tab in the browser and the name at the top of the browser window it says "Coury And..." Where can I go to change this? The website is using Wordpress 3.8.1 and I am not sure if it is in Wordpress or in the actual pdf file. Thank you, Matt A: Ok, So I found out how to change the meta-data in a .pdf form here: http://help.adobe.com/en_US/acrobat/X/pro/using/WS58a04a822e3e50102bd615109794195ff-7c63.w.html (dead link; archived version here) Sure enough the Title in the Meta Data within the .pdf was "Coury And..." Once I changed this the Tab and the Title in Firefox web browser changed to have the title that I wanted. This shows us that the meta-data in the .pdf does show in Firefox as if it were the meta-title of the webpage when displaying a .pdf within the browser. A: If you have access to the Word document in which the PDF is based, you can define the title when you save the file. A: Open the PDF with Notepad++ and search (CTRL+F) for /Title Change title between brackets (and leave the brackets) For instance: Change "/Title (OLD TITLE)" into "/Title (This is my new title)" Save the PDF and Voila A: Whatever was on that link, I did it opening the PDF with a hex editor (HxD) and searching Title, so I found /Title (untitled) somewhere and just edited it (changed the value between parentheses, here untitled). A: no need to change in meta of pdf. just to following change in iframe url http://localhost:8080/getDataPDF//?patientId=145. use // to solve this problem it can hide your title. A: Open the PDF document in Adobe Acrobat Pro: (OR use google chrome extension) (1) Go to Select File > Properties (2) Select the Description tab to view the metadata in the document, including the document information dictionary (3) Modify the Title field to add or change the document's Title entry A: When you open pdf in chrome you can hit print and save as pdf. As file name write what you want as title in browser, it should be the same now. A: Open File > Properties, then in the box labeled 'Title', add your title. Click on the 'Initial View' tab, where it says Show:, make sure the drop down says 'Document Title' instead of 'File Name'. This works for Chrome, but sadly not IE yet. A: For change my pdf tittle I just open it on nano terminal, or with another text editor that open the raw, and I edit the Title field. A: The title can be changed inside MS Office or LibreOffice if you have access to the source by going to file/properties/description. A: As another answer suggested, printing as a PDF works here if you have the source document. What the other answer perhaps got wrong was that there is an option to add a title in the print dialog. A: You can also use this online pdf editor to change metadata of a pdf file.
Change PDF title in browser window
I have a pdf file that I am putting on a website for a client. It is located here... http://www.optiphysicaltherapy.com/dev/wp-content/uploads/2014/02/OPTI_NewPatientForms.pdf The title should be OPTI New Patient Forms but if you look at the tab in the browser and the name at the top of the browser window it says "Coury And..." Where can I go to change this? The website is using Wordpress 3.8.1 and I am not sure if it is in Wordpress or in the actual pdf file. Thank you, Matt
[ "Ok, So I found out how to change the meta-data in a .pdf form here: http://help.adobe.com/en_US/acrobat/X/pro/using/WS58a04a822e3e50102bd615109794195ff-7c63.w.html (dead link; archived version here)\nSure enough the Title in the Meta Data within the .pdf was \"Coury And...\"\nOnce I changed this the Tab and the Title in Firefox web browser changed to have the title that I wanted. \nThis shows us that the meta-data in the .pdf does show in Firefox as if it were the meta-title of the webpage when displaying a .pdf within the browser.\n", "If you have access to the Word document in which the PDF is based, you can define the title when you save the file.\n\n", "Open the PDF with Notepad++ and search (CTRL+F) for /Title\nChange title between brackets (and leave the brackets)\nFor instance:\nChange \"/Title (OLD TITLE)\" into \"/Title (This is my new title)\"\nSave the PDF and Voila\n", "Whatever was on that link, I did it opening the PDF with a hex editor (HxD) and searching Title, so I found /Title (untitled) somewhere and just edited it (changed the value between parentheses, here untitled).\n", "no need to change in meta of pdf. just to following change in iframe url \nhttp://localhost:8080/getDataPDF//?patientId=145. use // to solve this problem it can hide your title.\n", "Open the PDF document in Adobe Acrobat Pro: (OR use google chrome extension)\n\n(1) Go to Select File > Properties\n(2) Select the Description tab to view the metadata in the document, including the document information dictionary\n(3) Modify the Title field to add or change the document's Title entry\n\n", "When you open pdf in chrome you can hit print and save as pdf. As file name write what you want as title in browser, it should be the same now.\n", "Open File > Properties, then in the box labeled 'Title', add your title.\nClick on the 'Initial View' tab, where it says Show:, make sure the drop down says 'Document Title' instead of 'File Name'. This works for Chrome, but sadly not IE yet.\n", "For change my pdf tittle I just open it on nano terminal, or with another text editor that open the raw, and I edit the Title field. \n", "The title can be changed inside MS Office or LibreOffice if you have access to the source by going to file/properties/description.\n", "As another answer suggested, printing as a PDF works here if you have the source document. What the other answer perhaps got wrong was that there is an option to add a title in the print dialog.\n", "You can also use this online pdf editor to change metadata of a pdf file.\n" ]
[ 27, 24, 23, 12, 8, 5, 3, 3, 2, 2, 1, 0 ]
[ "The title does not come from the pdf. it comes from the word file you export it from.\nRight click on the word file, go to details. change the title and export again\nGood luck\n" ]
[ -1 ]
[ "browser", "pdf", "title" ]
stackoverflow_0022136043_browser_pdf_title.txt
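One more command-line sketch in the same spirit as the answers above (my addition; assumes the ExifTool utility is installed):

# Rewrite the PDF's Title metadata field, which browsers show as the tab title:
exiftool -Title="OPTI New Patient Forms" OPTI_NewPatientForms.pdf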
Q: SimpleSAMLPHP redirection loop we are trying to setup sso with custom mysql database but it is going into endless loop between below two requests. POST http://192.168.0.15/simplesaml/module.php/core/loginuserpass.php Set-Cookie PHPSESSID=d0eaabb959ffeb2a0dd20f4744945f8f; path=/; HttpOnly SimpleSAMLAuthToken=_297a91e9a4e14c61d247427063201a39587396c2e3; path=/; httponly http://192.168.0.15/simplesaml/module.php/core/loginuserpass.php?AuthState=_e3e75218660095b936b9582356bcbc7b1e26934876%3Ahttp%3A%2F%2F192.168.0.15%2Fsimplesaml%2Fmodule.php%2Fcore%2Fas_login.php%3FAuthId%3Dexample-sql%26ReturnTo%3Dhttp%253A%252F%252F192.168.0.2%252F%252Fver06%252Fapp.php Set-Cookie PHPSESSID=92688949c724d39e673eec73b0674de0; path=/; HttpOnly 192.168.0.15 is our sso server and 192.168.0.2 is the website which is requesting for sso. Are we missing anything? also is there any client and server separation of sso modules for ease of use. Also we are not getting log file generated. permissions verified on folder. A: Check following parameters in the config.php file. 'baseurlpath' => 'http[s]://YOUR_DOMAIN/simplesaml/', 'session.cookie.domain' => '.YOUR_DOMAIN', 'session.cookie.secure' => true, // ACCORDING TO YOUR REQUIREMENT 'session.phpsession.savepath' => '/PATH/TO/STORE/SESSION', // MAKE SURE THIS PATH IS WRITABLE BY WEB/APP SERVER 'session.phpsession.httponly' => true, // ACCORDING TO YOUR REQUIREMENT A: I got the same problem and for me, the reason was in NGINX configurations. The NGINX wasn't listening to the /simplesaml and didn't redirect it to the right file. location ^~ /simplesaml { alias /var/www/html/vendor/simplesamlphp/simplesamlphp/www/; location ~ \.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; fastcgi_param PATH_INFO $fastcgi_path_info; include fastcgi.conf; fastcgi_param QUERY_STRING $args; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_pass php; } } A: In my case, I had session.cookie.domain set to the wrong domain (which also triggered a redirection loop).
SimpleSAMLPHP redirection loop
we are trying to setup sso with custom mysql database but it is going into endless loop between below two requests. POST http://192.168.0.15/simplesaml/module.php/core/loginuserpass.php Set-Cookie PHPSESSID=d0eaabb959ffeb2a0dd20f4744945f8f; path=/; HttpOnly SimpleSAMLAuthToken=_297a91e9a4e14c61d247427063201a39587396c2e3; path=/; httponly http://192.168.0.15/simplesaml/module.php/core/loginuserpass.php?AuthState=_e3e75218660095b936b9582356bcbc7b1e26934876%3Ahttp%3A%2F%2F192.168.0.15%2Fsimplesaml%2Fmodule.php%2Fcore%2Fas_login.php%3FAuthId%3Dexample-sql%26ReturnTo%3Dhttp%253A%252F%252F192.168.0.2%252F%252Fver06%252Fapp.php Set-Cookie PHPSESSID=92688949c724d39e673eec73b0674de0; path=/; HttpOnly 192.168.0.15 is our sso server and 192.168.0.2 is the website which is requesting for sso. Are we missing anything? also is there any client and server separation of sso modules for ease of use. Also we are not getting log file generated. permissions verified on folder.
[ "Check following parameters in the config.php file.\n'baseurlpath' => 'http[s]://YOUR_DOMAIN/simplesaml/',\n'session.cookie.domain' => '.YOUR_DOMAIN',\n'session.cookie.secure' => true, // ACCORDING TO YOUR REQUIREMENT\n'session.phpsession.savepath' => '/PATH/TO/STORE/SESSION', // MAKE SURE THIS PATH IS WRITABLE BY WEB/APP SERVER \n'session.phpsession.httponly' => true, // ACCORDING TO YOUR REQUIREMENT\n\n", "I got the same problem and for me, the reason was in NGINX configurations.\nThe NGINX wasn't listening to the /simplesaml and didn't redirect it to the right file.\nlocation ^~ /simplesaml {\n alias /var/www/html/vendor/simplesamlphp/simplesamlphp/www/;\n location ~ \\.php(/|$) {\n fastcgi_split_path_info ^(.+?\\.php)(/.*)$;\n fastcgi_param PATH_INFO $fastcgi_path_info;\n include fastcgi.conf;\n fastcgi_param QUERY_STRING $args;\n fastcgi_param SCRIPT_FILENAME $request_filename;\n fastcgi_pass php;\n }\n}\n\n", "In my case, I had session.cookie.domain set to the wrong domain (which also triggered a redirection loop).\n" ]
[ 1, 0, 0 ]
[]
[]
[ "mysql", "php", "simplesamlphp", "single_sign_on" ]
stackoverflow_0041443704_mysql_php_simplesamlphp_single_sign_on.txt
Q: How to change the code template/layout in VS Code I have started to learn C#, so I have started to use VS Code as my editor and have also installed dotnet framework with it. The problem is that when I create a new file, my file has a completely different code setup/layout/template, unlike the one that I had seen in a video, and I was wondering if I could change that default template, to the one in the video, so that it automatically loads in when I create a new C# file. I tried to search through the VS settings and preferences, googling it, but can't seem to find this specific answer. Currently when I create a new file I get such looking code: look of the code that I get when I create the file but I am looking to find this type of code, so that I can follow up with the tutorial that I am following: look of code that I want to get when I create the file A: You can set template by following below steps
How to change the code template/layout in VS Code
I have started to learn C#, so I have started to use VS Code as my editor and have also installed dotnet framework with it. The problem is that when I create a new file, my file has a completely different code setup/layout/template, unlike the one that I had seen in a video, and I was wondering if I could change that default template, to the one in the video, so that it automatically loads in when I create a new C# file. I tried to search through the VS settings and preferences, googling it, but can't seem to find this specific answer. Currently when I create a new file I get such looking code: look of the code that I get when I create the file but I am looking to find this type of code, so that I can follow up with the tutorial that I am following: look of code that I want to get when I create the file
[ "You can set template by following below steps\n" ]
[ 1 ]
[]
[]
[ "visual_studio_2019" ]
stackoverflow_0074659623_visual_studio_2019.txt
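The recorded answer's steps were illustrated with a screenshot that is not preserved here; one concrete approach (an assumption, not necessarily what the screenshot showed) is a VS Code user snippet that stamps out the desired C# file template:

// File > Preferences > Configure User Snippets > csharp, e.g. in csharp.json:
{
  "Classic Program template": {
    "prefix": "classtemplate",
    "body": [
      "using System;",
      "",
      "namespace ${1:MyApp}",
      "{",
      "    class ${2:Program}",
      "    {",
      "        static void Main(string[] args)",
      "        {",
      "            $0",
      "        }",
      "    }",
      "}"
    ]
  }
}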
Q: function that takes two parameters of string type which are fractions with the same denominator and returns a sum expression and the sum result For example: >>> a_b = '1/3' >>> c_b = '5/3' >>> get_fractions(a_b, c_b) '1/3 + 5/3 = 6/3'` I'm trying to solve this but it won't work: def get_fractions(a_b: str, c_b: str) -> str: calculate = int(a_b) + int(c_b) return calculate A: First you will have to get the nominator and denominator for each argument. After that you convert the nominator of each argument from string to integer and add them. Then lastly convert the sum of nominators to str and concatenate it with '/' and any of the argument denominator. def get_fractions(a_b: str, c_b: str) -> str: a_b = a_b.split('/') a_n, a_d = a_b[0], a_b[1] c_b = c_b.split('/') c_n, c_d = c_b[0], c_b[1] n_sum = int(c_n) + int(a_n) out = f'{n_sum} / {a_d}' return out Output 6 / 3 A: def get_fractions(a_b: str, c_b: str) -> str: a_b = a_b.split('/') a_n, a_d = a_b[0], a_b[1] c_b = c_b.split('/') c_n, c_d = c_b[0], c_b[1] n_sum = int(c_n) + int(az_n) out = f'{n_sum} / {a_d}' return out a_b = '1/3' c_b = '5/3'
function that takes two parameters of string type which are fractions with the same denominator and returns a sum expression and the sum result
For example: >>> a_b = '1/3' >>> c_b = '5/3' >>> get_fractions(a_b, c_b) '1/3 + 5/3 = 6/3'` I'm trying to solve this but it won't work: def get_fractions(a_b: str, c_b: str) -> str: calculate = int(a_b) + int(c_b) return calculate
[ "First you will have to get the nominator and denominator for each argument. After that you convert the nominator of each argument from string to integer and add them. Then lastly convert the sum of nominators to str and concatenate it with '/' and any of the argument denominator.\ndef get_fractions(a_b: str, c_b: str) -> str:\n a_b = a_b.split('/')\n a_n, a_d = a_b[0], a_b[1]\n c_b = c_b.split('/')\n c_n, c_d = c_b[0], c_b[1]\n n_sum = int(c_n) + int(a_n)\n out = f'{n_sum} / {a_d}'\n return out\n\nOutput\n6 / 3\n\n", "def get_fractions(a_b: str, c_b: str) -> str:\na_b = a_b.split('/')\na_n, a_d = a_b[0], a_b[1]\nc_b = c_b.split('/')\nc_n, c_d = c_b[0], c_b[1]\nn_sum = int(c_n) + int(az_n)\nout = f'{n_sum} / {a_d}'\nreturn out\n\na_b = '1/3'\nc_b = '5/3'\n\n" ]
[ 1, 0 ]
[]
[]
[ "fractions", "integer", "python", "python_3.x", "string" ]
stackoverflow_0074235217_fractions_integer_python_python_3.x_string.txt
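Neither recorded answer returns the exact string shown in the question ('1/3 + 5/3 = 6/3'); a short sketch that does, assuming equal denominators as stated:

def get_fractions(a_b: str, c_b: str) -> str:
    a_n, d = a_b.split('/')  # numerator and the shared denominator
    c_n, _ = c_b.split('/')
    total = int(a_n) + int(c_n)
    return f'{a_b} + {c_b} = {total}/{d}'

print(get_fractions('1/3', '5/3'))  # 1/3 + 5/3 = 6/3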
Q: How to fix Nuget Provider "Find-Module" Installation error with PowerShell 7.3? I've been trying to run a PowerShell script, and upon doing so, I receive a message that NuGet Provider is required. NuGet provider is required to continue This version of PowerShellGet requires minimum version '2.8.5.201' of NuGet provider to publish an item to NuGet-based repositories. The NuGet provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or 'C:\Users\timothy.granata\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import the NuGet provider now? [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): If I input Y, an error is returned: Find-Module: NuGet provider is required to interact with NuGet-based repositories. Please ensure that '2.8.5.201' or newer version of NuGet provider is installed. If I try running Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force as it recommends, I also get an error: Install-PackageProvider: Unable to find repository with SourceLocation ''. Use Get-PSRepository to see all available repositories. And finally, if I run Get-PSRepository, that also errors: Get-PackageSource: Unable to find module providers (PowerShellGet). In the script I am trying to debug, the code that seems to trigger this prompt is Install-AWSToolsModule SecurityToken -Force. The surrounding code looks like: if (-not (Get-Module AWS.Tools.Installer -ListAvailable)) { Install-Module AWS.Tools.Installer -Force } Install-AWSToolsModule SecurityToken -Force Get-AWSCredential -ListProfileDetail | ForEach-Object { Remove-AWSCredentialProfile -ProfileName $_.ProfileName -Force } I have tried: Reinstalling PowerShell 7 Making sure I am using TLS 1.2 by running [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 Running PowerShell as Administrator Deleting the Modules folder found in my C:\Users<user>\Documents\WindowsPowerShell folder I'm unsure what else I can try at this point. How can I install the NuGet provider for use with PowerShell 7.3? A: Try $sourceArgs = @{ Name = 'nuget.org' Location = 'https://api.nuget.org/v3/index.json' ProviderName = 'NuGet' } Register-PackageSource @sourceArgs Get-PackageProvider | where name -eq 'nuget' | Install-PackageProvider EDIT Perhaps try Invoke-WebRequest 'https://www.powershellgallery.com/api/v2/package/PackageManagement/1.4.8.1' -OutFile $env:temp\nuget.zip And confirm you're able to download the nuget package. If so, then try Expand-Archive $env:temp\nuget.zip -DestinationPath 'C:\Program Files\PowerShell\7\Modules\PackageManagement' -Force Import-Module PackageManagement -Verbose -Force A: It seems something with OneDrive was indeed throwing this off, as I wondered in my one comment. I found this post, which stated (from some Microsoft documentation): The user-specific CurrentUser location on Windows is the PowerShell\Modules folder located in the Documents location in your user profile ... Microsoft OneDrive can also change the location of your Documents folder. I ran $env:PSModulePath and sure enough there was a OneDrive location. I ended up doing what the answer on that post suggested, and excluded the PowerShell directory from OneDrive. After doing this, my script seems to work now (it doesn't produce errors, or that prompt). Even after doing this, the OneDrive location still shows up from the $env:PSModulePath command, but I guess it falls back to the next modules location if it can't find a directory.
How to fix Nuget Provider "Find-Module" Installation error with PowerShell 7.3?
I've been trying to run a PowerShell script, and upon doing so, I receive a message that NuGet Provider is required. NuGet provider is required to continue This version of PowerShellGet requires minimum version '2.8.5.201' of NuGet provider to publish an item to NuGet-based repositories. The NuGet provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or 'C:\Users\timothy.granata\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import the NuGet provider now? [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): If I input Y, an error is returned: Find-Module: NuGet provider is required to interact with NuGet-based repositories. Please ensure that '2.8.5.201' or newer version of NuGet provider is installed. If I try running Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force as it recommends, I also get an error: Install-PackageProvider: Unable to find repository with SourceLocation ''. Use Get-PSRepository to see all available repositories. And finally, if I run Get-PSRepository, that also errors: Get-PackageSource: Unable to find module providers (PowerShellGet). In the script I am trying to debug, the code that seems to trigger this prompt is Install-AWSToolsModule SecurityToken -Force. The surrounding code looks like: if (-not (Get-Module AWS.Tools.Installer -ListAvailable)) { Install-Module AWS.Tools.Installer -Force } Install-AWSToolsModule SecurityToken -Force Get-AWSCredential -ListProfileDetail | ForEach-Object { Remove-AWSCredentialProfile -ProfileName $_.ProfileName -Force } I have tried: Reinstalling PowerShell 7 Making sure I am using TLS 1.2 by running [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 Running PowerShell as Administrator Deleting the Modules folder found in my C:\Users<user>\Documents\WindowsPowerShell folder I'm unsure what else I can try at this point. How can I install the NuGet provider for use with PowerShell 7.3?
[ "Try\n$sourceArgs = @{\n Name = 'nuget.org'\n Location = 'https://api.nuget.org/v3/index.json'\n ProviderName = 'NuGet'\n}\n\nRegister-PackageSource @sourceArgs\n\nGet-PackageProvider | where name -eq 'nuget' | Install-PackageProvider\n\nEDIT\nPerhaps try\nInvoke-WebRequest 'https://www.powershellgallery.com/api/v2/package/PackageManagement/1.4.8.1' -OutFile $env:temp\\nuget.zip\n\nAnd confirm you're able to download the nuget package. If so, then try\nExpand-Archive $env:temp\\nuget.zip -DestinationPath 'C:\\Program Files\\PowerShell\\7\\Modules\\PackageManagement' -Force\n\nImport-Module PackageManagement -Verbose -Force\n\n", "It seems something with OneDrive was indeed throwing this off as I wondered in my one comment. I found this post which stated (from some Microsoft Documentation):\n\nThe user-specific CurrentUser location on Windows is the PowerShell\\Modules folder located in the Documents location in your user profile ... Microsoft OneDrive can also change the location of your Documents folder.\n\nI ran $env:PSModulePath and sure enough their was a OneDrive location. I ended up doing what the answer on that post suggested, and excluded the PowerShell directory from OneDrive. After doing this, my script seems to work now (it doesn't produce errors, or that prompt). Ever after doing this, the OneDrive location still shows up from the $env:PSModulePath command, but I guess it falls back to the next modules location if it can't find a directory.\n" ]
[ 0, 0 ]
[]
[]
[ "nuget", "powershell", "powershell_7.0", "powershell_7.3" ]
stackoverflow_0074646757_nuget_powershell_powershell_7.0_powershell_7.3.txt
Q: Javascript querySelectorAll not working with class

I've written a simple piece of JavaScript code to show the time. I want to show this JavaScript value (output) in multiple places, which is why I followed the querySelectorAll method. But it's not working... Look at my code:

<p class="mm"></p>
<h1 class="mm"></h1>
<script type="text/javaScript">
  var date = new Date();
  document.querySelectorall(".mm").innerHTML = date;
</script>

How can I do it without getElementById? If I use querySelector, it only shows in one place, not in many places. querySelectorAll is not working. I want to show the date in multiple places.
A: Here's an example using a for...of loop to help you reproduce success. You can search MDN to learn about other JavaScript and web APIs.

const date = new Date();

for (const element of document.querySelectorAll(".mm")) {
  element.textContent = date.toLocaleString();
}

<p class="mm"></p>
<h1 class="mm"></h1>
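A quick variant of the same fix (a sketch, using the same .mm elements as above): NodeList also exposes forEach directly in modern browsers, so the loop can be written as

const now = new Date();
// every element matched by the class selector gets the same text
document.querySelectorAll(".mm").forEach((el) => {
  el.textContent = now.toLocaleString();
});

Note that the method is querySelectorAll with a capital A; the lowercase querySelectorall in the question would throw a TypeError because that function does not exist.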
Javascript querySelectorAll not working with class
I've written a simple piece of JavaScript code to show the time. I want to show this JavaScript value (output) in multiple places, which is why I followed the querySelectorAll method. But it's not working... Look at my code:

<p class="mm"></p>
<h1 class="mm"></h1>
<script type="text/javaScript">
  var date = new Date();
  document.querySelectorall(".mm").innerHTML = date;
</script>

How can I do it without getElementById? If I use querySelector, it only shows in one place, not in many places. querySelectorAll is not working. I want to show the date in multiple places.
[ "Here's an example using a for...of loop to help you reproduce success. You can search MDN to learn about other JavaScript and web APIs.\n\n\nconst date = new Date();\n\nfor (const element of document.querySelectorAll(\".mm\")) {\n element.textContent = date.toLocaleString();\n}\n<p class=\"mm\"></p>\n<h1 class=\"mm\"></h1>\n\n\n\n" ]
[ -2 ]
[]
[]
[ "html", "javascript", "selectors_api", "time" ]
stackoverflow_0074659931_html_javascript_selectors_api_time.txt
Q: R Loop function

I'm trying to write a function in R to compute some indicators within a data frame. Let's say my DF is:

df <- structure(list(Delt.1.arithmetic = c(0.002519607, 0.03247049,
0.01268653, 0.01105899, -0.003642582, -0.02468412, -0.04560344,
0.0501897, -0.01963724, -0.01068217, -0.1203641, 0.1604419, 0.001868874,
0.04664339, 0.01482009, 0.05694765, 0.2006065, 0.0187676, 0.02741049,
0.0339604)), class = "data.frame", row.names = c(NA, -20L))

I'd like to make packets of five values, starting with the first value until the fifth value. Thus, we have something like:

df[1:5,]

Ok now, I'd like to know, within this sample, what the sign of each observation is:

sign(rv_returns[1:5,])

We get this output:

[1]  1  1  1  1 -1

1 corresponds to a plus, -1 corresponds to a minus. Moving forward, I'd like to compute the probability of getting a (+) and a (-). I don't know how to automate this part; by looking we know for the previous output that P(1) = 4/5 and P(-1) = 1 - P(1), omitting the 0 value of the sign() function.

We know the probability of getting the sign of DF[1:5,]; we now look for the strength of the possible fluctuation. Basically, we compute the average of the most probable sign. As an example, the first series of five gives us a probability of 4/5 of getting a positive number. So we take all four positive numbers together and compute their mean. Manually we have something like (DF[1] + DF[2] + DF[3] + DF[4]) / 4, which equals x.

We now have the most probable sign of the data range, and its average strength. We'd like to compare this (x) value to the 5 + 1 value of the dataset. The sixth value, then. The sixth value is equal to -0.02468412; our little computations for steps 1 to 4 are equal to 0.01101861. I'd like to compare their signs. If DF[6]'s sign = Computed Sign, we have a boolean value like TRUE or CONFIRM. Otherwise we have a FALSE. Basically, we have + = + : TRUE // + = - : FALSE.

Finally, we want to repeat this function for the entire data set, taking a period of n previous values. In this case n = 5. So if my DF has 1000 observations, the last looped arguments will consider DF[994:999,] and compare the computed value with the 1000th value (step 5). The results of this function should be stored in a variable. An example of the resulting variable might be:

[6,] TRUE
[7,] FALSE
[8,] TRUE
[9,] FALSE
[10,] TRUE

Additional info: our compared value has to be considered in the next data range. For example, in the second looped function we should study DF[2:6,] over observation DF[7]. The third one will consider DF[3:7,] over observation DF[8] and return a TRUE/FALSE.

Thank you for your help, wish you all the best.

predictor <- function() {
  x <- 1
  y <- 5
  w <- x + 1
  z <- y + 1
  P1 <- rv_returns[x:y,]
  P2 <- rv_returns[w:z,]
  repeat {
    w <- w + 1
    z <- z + 1
    sign(rv_returns[w:z,])
  }
}

predictor()

A: If your final desired output is simply boolean values, you could combine the intermediate steps you described. I am not sure why you need to calculate the mean (if ultimately you just need the sign of the mean), so I am adding in the commented code to go that route if needed:

# Define n
n <- 5

# Define second column to store Boolean values
df[,2] <- NA

# Run `for` loop
for(xx in (n + 1):nrow(df)){
  # Find most common sign in previous five values
  sgn <- as.numeric(names(table(sign(df[(xx - n):(xx - 1),1]))[order(table(sign(df[(xx-n):(xx - 1),1])), decreasing = TRUE)][1]))

  # Determine if the sign of the current value equals the most common sign
  df[xx, 2] <- sgn == sign(df[xx,1])

  # or if need the means
  #mn <- mean(df[(xx-n):(xx - 1), 1][sign(df[(xx - n):(xx - 1),1]) == sgn])
  #df[xx,2] <- sign(mn) == sign(df[xx,1])
}

Output:

#    Delt.1.arithmetic    V2
# 1        0.002519607    NA
# 2        0.032470490    NA
# 3        0.012686530    NA
# 4        0.011058990    NA
# 5       -0.003642582    NA
# 6       -0.024684120 FALSE
# 7       -0.045603440 FALSE
# 8        0.050189700 FALSE
# 9       -0.019637240  TRUE
# 10      -0.010682170  TRUE
# 11      -0.120364100  TRUE
# 12       0.160441900 FALSE
# 13       0.001868874 FALSE
# 14       0.046643390 FALSE
# 15       0.014820090  TRUE
# 16       0.056947650  TRUE
# 17       0.200606500  TRUE
# 18       0.018767600  TRUE
# 19       0.027410490  TRUE
# 20       0.033960400  TRUE

Data:

df <- structure(list(Delt.1.arithmetic = c(0.002519607, 0.03247049,
0.01268653, 0.01105899, -0.003642582, -0.02468412, -0.04560344,
0.0501897, -0.01963724, -0.01068217, -0.1203641, 0.1604419, 0.001868874,
0.04664339, 0.01482009, 0.05694765, 0.2006065, 0.0187676, 0.02741049,
0.0339604)), class = "data.frame", row.names = c(NA, -20L))

EDIT
As requested, a more detailed breakdown explanation. In the above, everything is put together for more parsimonious code. Below is a step-by-step breakdown. I suggest running it line-by-line to see how it works. Also note this is a "brute-force" method and others may have more elegant means to achieve the same goal. Note that none of the intermediate steps are saved/indexed, so they are intentionally overwritten on each iteration of the loop (in contrast to the last step, which saves the final value in df by indexing at df[xx, 2]).
Hope this helps and happy coding!

for(xx in (n+1):nrow(df)){
  # get signs for previous n values
  signs_prevn <- sign(df[(xx-n):(xx - 1), 1])

  # create a table for these values
  tbl <- table(signs_prevn)

  # Order it where the most common (-1 or 1) is first
  tbl <- tbl[order(tbl, decreasing = TRUE)]

  # extract the first (most common) element from the table
  # this gives the frequency as an integer, with the name
  # being the sign (-1 or 1)
  tbl <- tbl[1]

  # Get the "name" of the most common sign (-1 or 1)
  # This is a character variable so convert to numeric
  sgn <- as.numeric(names(tbl))

  # See if the most common sign meets the sign of the current position
  df[xx, 2] <- sgn == sign(df[xx,1])
}
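To make the sign logic concrete, here is a small standalone check (a sketch in base R, using the df defined above and n = 5) for the first window, matching the manual walkthrough in the question:

n <- 5
window <- df[1:n, 1]                                # first five returns
signs <- sign(window)                               # 1 1 1 1 -1
freq <- table(signs)                                # counts per sign
most_common <- as.numeric(names(which.max(freq)))   # 1, seen 4 times out of 5
most_common == sign(df[n + 1, 1])                   # compare against the 6th value -> FALSE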
R Loop function
I'm trying to write a function in R to compute some indicators within a data frame. Let's say my DF is:

df <- structure(list(Delt.1.arithmetic = c(0.002519607, 0.03247049,
0.01268653, 0.01105899, -0.003642582, -0.02468412, -0.04560344,
0.0501897, -0.01963724, -0.01068217, -0.1203641, 0.1604419, 0.001868874,
0.04664339, 0.01482009, 0.05694765, 0.2006065, 0.0187676, 0.02741049,
0.0339604)), class = "data.frame", row.names = c(NA, -20L))

I'd like to make packets of five values, starting with the first value until the fifth value. Thus, we have something like:

df[1:5,]

Ok now, I'd like to know, within this sample, what the sign of each observation is:

sign(rv_returns[1:5,])

We get this output:

[1]  1  1  1  1 -1

1 corresponds to a plus, -1 corresponds to a minus. Moving forward, I'd like to compute the probability of getting a (+) and a (-). I don't know how to automate this part; by looking we know for the previous output that P(1) = 4/5 and P(-1) = 1 - P(1), omitting the 0 value of the sign() function.

We know the probability of getting the sign of DF[1:5,]; we now look for the strength of the possible fluctuation. Basically, we compute the average of the most probable sign. As an example, the first series of five gives us a probability of 4/5 of getting a positive number. So we take all four positive numbers together and compute their mean. Manually we have something like (DF[1] + DF[2] + DF[3] + DF[4]) / 4, which equals x.

We now have the most probable sign of the data range, and its average strength. We'd like to compare this (x) value to the 5 + 1 value of the dataset. The sixth value, then. The sixth value is equal to -0.02468412; our little computations for steps 1 to 4 are equal to 0.01101861. I'd like to compare their signs. If DF[6]'s sign = Computed Sign, we have a boolean value like TRUE or CONFIRM. Otherwise we have a FALSE. Basically, we have + = + : TRUE // + = - : FALSE.

Finally, we want to repeat this function for the entire data set, taking a period of n previous values. In this case n = 5. So if my DF has 1000 observations, the last looped arguments will consider DF[994:999,] and compare the computed value with the 1000th value (step 5). The results of this function should be stored in a variable. An example of the resulting variable might be:

[6,] TRUE
[7,] FALSE
[8,] TRUE
[9,] FALSE
[10,] TRUE

Additional info: our compared value has to be considered in the next data range. For example, in the second looped function we should study DF[2:6,] over observation DF[7]. The third one will consider DF[3:7,] over observation DF[8] and return a TRUE/FALSE.

Thank you for your help, wish you all the best.

predictor <- function() {
  x <- 1
  y <- 5
  w <- x + 1
  z <- y + 1
  P1 <- rv_returns[x:y,]
  P2 <- rv_returns[w:z,]
  repeat {
    w <- w + 1
    z <- z + 1
    sign(rv_returns[w:z,])
  }
}

predictor()
[ "If your final desired output is simply boolean values, you could combine the intermediate steps you described. I am not sure why you need to calculate the mean (if ultimately you just need the sign of the mean), so I am adding in the commented code to go that route if needed:\n# Define n\nn <- 5\n\n# Define second column to store Boolean values \ndf[,2] <- NA\n\n# Run `for` loop\nfor(xx in (n + 1):nrow(df)){\n # Find most common sign in previous five values\n sgn <- as.numeric(names(table(sign(df[(xx - n):(xx - 1),1]))[order(table(sign(df[(xx-n):(xx - 1),1])), decreasing = TRUE)][1]))\n \n # Determine if the sign of the current value equals the most common sign\n df[xx, 2] <- sgn == sign(df[xx,1])\n\n # or if need the means\n #mn <- mean(df[(xx-n):(xx - 1), 1][sign(df[(xx - n):(xx - 1),1]) == sgn])\n #df[xx,2] <- sign(mn) == sign(df[xx,1])\n}\n\nOutput:\n# Delt.1.arithmetic V2\n# 1 0.002519607 NA\n# 2 0.032470490 NA\n# 3 0.012686530 NA\n# 4 0.011058990 NA\n# 5 -0.003642582 NA\n# 6 -0.024684120 FALSE\n# 7 -0.045603440 FALSE\n# 8 0.050189700 FALSE\n# 9 -0.019637240 TRUE\n# 10 -0.010682170 TRUE\n# 11 -0.120364100 TRUE\n# 12 0.160441900 FALSE\n# 13 0.001868874 FALSE\n# 14 0.046643390 FALSE\n# 15 0.014820090 TRUE\n# 16 0.056947650 TRUE\n# 17 0.200606500 TRUE\n# 18 0.018767600 TRUE\n# 19 0.027410490 TRUE\n# 20 0.033960400 TRUE\n\nData:\ndf <- structure(list(Delt.1.arithmetic = c(0.002519607, 0.03247049, \n0.01268653, 0.01105899, -0.003642582, -0.02468412, -0.04560344, \n0.0501897, -0.01963724, -0.01068217, -0.1203641, 0.1604419, 0.001868874, \n0.04664339, 0.01482009, 0.05694765, 0.2006065, 0.0187676, 0.02741049, \n0.0339604)), class = \"data.frame\", row.names = c(NA, -20L))\n\nEDIT\nAs requested, a more detailed breakdown explanation. In the above, everything is put together for more parsimonious code. Below is a step-by-step breakdown. I suggest running it line-by-line to see how it works. Also note this is a \"brute-force\" method and other may have more elegant means to achieve the same goal. Note that none of the intermediate steps are saved/indexed, so are intentionally overwritten on each iteration of the loop (in contrast to the last step, which saves the final value in df by indexing at df[xx, 2])\nHope this helps and happy coding!\nfor(xx in (n+1):nrow(df)){\n \n # get signs for pervious n values\n signs_prevn <- sign(df[(xx-n):(xx - 1), 1])\n \n # create a table for these values\n tbl <- table(signs_prevn)\n \n # Order it where the most common (-1 or 1) is first\n tbl <- tbl[order(tbl, decreasing = TRUE)]\n \n # extract the first (most common) element from the table\n # this gives the frequency as an integer, with the name\n # being the sign (-1 or 1)\n tbl <- tbl[1]\n \n # Get the \"name\" of the most common sign (-1 or 1)\n # This is a character variable so convert to numeric\n sgn <- as.numeric(names(tbl))\n \n # See if the most common sign meets the sign of the current position\n df[xx, 2] <- sgn == sign(df[xx,1])\n}\n\n" ]
[ 0 ]
[]
[]
[ "function", "r", "time_series" ]
stackoverflow_0074659360_function_r_time_series.txt
Q: (node:17348) UnhandledPromiseRejectionWarning: MongoError: Performing an update on the path '_id' would modify the immutable field '_id'

Getting this error on trying to update... Mongoose is throwing this error when using the code below. I have tried using const product = {...} instead of new Product({...}), but no luck.

router.put("/:id", multer({storage: storage}).single("image"), (req, res, next) => {
  let imagePath = req.body.imagePath;
  if (req.file) {
    const url = req.protocol + "://" + req.get("host");
    imagePath = url + "/images/" + req.file.filename
  }
  const product = new Product({
    _id: req.body.id,
    title: req.body.title,
    cost: req.body.cost,
    description: req.body.description,
    imagePath: imagePath
  });
  console.log(product)
  Product.updateOne({_id: req.params.id}, product)
    .then(result => {
      console.log(result);
      res.status(200).json({ message: "Update Successful !!"});
    });
});

A: You need to remove the key _id from the object that is being used for updating the document.

const product = new Product({
  title: req.body.title,
  cost: req.body.cost,
  description: req.body.description,
  imagePath: imagePath
});

This will solve your problem. The thing is, while doing an update on the document whose _id matches the one you passed in your code,

Product.updateOne({_id: req.params.id}, product)

you are trying to update the value of the _id field as well. _id is immutable by default, which means it can not be updated, as it represents the uniqueness of the document.

Edit: If you need to update the _id of the document as well, then you can take a reference from here to do this. Though the _id can not be updated directly, there is a workaround for doing that.

How to update the _id of one MongoDB Document?

A: I'm also having this problem when updating a certain value

if (guildDB) {
  if (subCommandG === 'simsimi') {
    if (subCommand === 'set') {
      guildDB['settings'].simChannel == channel.id;
      await Guild.updateOne(guildDB);
      return interaction.reply({ content: `${client.emoji.success} Thiết lập thành công`, ephemeral: true });
    } else if (subCommand === 'reset') {
      guildDB['settings'].simChannel == null;
      await Guild.updateOne(guildDB);
      return interaction.reply({ content: `${client.emoji.success} Thao tác thành công`, ephemeral: true });
    }
  }
}
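For illustration of the first answer's point, the same update can also be written without constructing a Product instance at all; a sketch using standard Mongoose APIs (findByIdAndUpdate and $set both exist in Mongoose; the response shape below is an assumption):

Product.findByIdAndUpdate(
  req.params.id,
  { $set: {
      title: req.body.title,
      cost: req.body.cost,
      description: req.body.description,
      imagePath: imagePath
  } },
  { new: true } // resolve with the updated document instead of the old one
).then(updated => {
  res.status(200).json({ message: "Update Successful !!", product: updated });
});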
(node:17348) UnhandledPromiseRejectionWarning: MongoError: Performing an update on the path '_id' would modify the immutable field '_id'
Getting this error on trying to update... Mongoose is throwing this error when using the code below. I have tried using const product = {...} instead of new Product({...}), but no luck.

router.put("/:id", multer({storage: storage}).single("image"), (req, res, next) => {
  let imagePath = req.body.imagePath;
  if (req.file) {
    const url = req.protocol + "://" + req.get("host");
    imagePath = url + "/images/" + req.file.filename
  }
  const product = new Product({
    _id: req.body.id,
    title: req.body.title,
    cost: req.body.cost,
    description: req.body.description,
    imagePath: imagePath
  });
  console.log(product)
  Product.updateOne({_id: req.params.id}, product)
    .then(result => {
      console.log(result);
      res.status(200).json({ message: "Update Successful !!"});
    });
});
[ "You need to remove the key _id from the object that is being used for updating the document.\nconst product = new Product(\n {\n title: req.body.title,\n cost: req.body.cost,\n description: req.body.description,\n imagePath: imagePath\n }\n) ;\n\nThis will solve your problem.\nThing is, while doing an update on the document whose _id matches with the one you passed in your code,\n Product.updateOne({_id: req.params.id}, product)\n\nyou are trying to update the value of _id field as well. _id by default is immutable which means it can not be updated, as it represents the uniqueness of the document.\nEdit: If you need to update the _id of the document as well, then you can take a reference from here to do this. Though the _id can not be updated directly, but there is a workaround for doing that.\nHow to update the _id of one MongoDB Document?\n", "I'm also having this problem when updating a certain value\nif (guildDB) {\n if (subCommandG === 'simsimi') {\n if (subCommand === 'set') {\n guildDB['settings'].simChannel == channel.id;\n await Guild.updateOne(guildDB);\n return interaction.reply({ content: `${client.emoji.success} Thiết lập thành công`, ephemeral: true });\n } else if (subCommand === 'reset') {\n guildDB['settings'].simChannel == null;\n await Guild.updateOne(guildDB);\n return interaction.reply({ content: `${client.emoji.success} Thao tác thành công`, ephemeral: true });\n }\n }\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "angular", "mean", "mongodb", "mongoose", "node.js" ]
stackoverflow_0067670039_angular_mean_mongodb_mongoose_node.js.txt
Q: Proguard rules with Excel fail on release build - Android

Edit: Updated with dependencies

Dependencies:

implementation group: 'org.apache.poi', name: 'poi-ooxml', version: '3.17'
implementation group: 'org.apache.xmlbeans', name: 'xmlbeans', version: '3.1.0'
implementation 'com.fasterxml:aalto-xml:1.2.2'

In my proguard file I have the following:

-keep class org.apache.poi.** {*;}
-keep class org.apache.xmlbeans.** {*;}
-keep class com.fasterxml.** {*;}
-keep class com.microsoft.schemas.** {*;}
-keep class org.openxmlformats.** {*;}
-keep class org.openxmlformats.schemas.** {*;}
-keep class schemaorg_apache_xmlbeans.** {*;}

This works fine in the debug build, but causes a crash when accessing one of the methods in the release build. The full stack trace:

java.lang.ExceptionInInitializerError
 at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(Unknown Source:0)
 at j7.a.a(SourceFile:2)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook.<clinit>(SourceFile:1)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook$Factory.newInstance(Unknown Source:0)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.onWorkbookCreate(SourceFile:1)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:5)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:1)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.pointsToWorkBook(SourceFile:2)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.dataPointsToExcelFile(SourceFile:1)
 at f9.k.invokeSuspend(SourceFile:17)
 at sa.a.resumeWith(SourceFile:3)
 at qd.o0.run(SourceFile:18)
 at xd.a$a.run(SourceFile:14)
Caused by: java.lang.IllegalStateException: Cannot load nodeToCursor: verify that xbean.jar is on the classpath
 at org.apache.xmlbeans.XmlBeans.buildMethod(SourceFile:4)
 at org.apache.xmlbeans.XmlBeans.buildNodeMethod(Unknown Source:18)
 at org.apache.xmlbeans.XmlBeans.buildNodeToCursorMethod(Unknown Source:4)
 at org.apache.xmlbeans.XmlBeans.<clinit>(SourceFile:11)
 at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(Unknown Source:0)
 at j7.a.a(SourceFile:2)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook.<clinit>(SourceFile:1)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook$Factory.newInstance(Unknown Source:0)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.onWorkbookCreate(SourceFile:1)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:5)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:1)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.pointsToWorkBook(SourceFile:2)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.dataPointsToExcelFile(SourceFile:1)
 at f9.k.invokeSuspend(SourceFile:17)
 at sa.a.resumeWith(SourceFile:3)
 at qd.o0.run(SourceFile:18)
 at xd.a$a.run(SourceFile:14)
Caused by: java.lang.NoSuchMethodException: org.apache.xmlbeans.impl.store.Locale.nodeToCursor [interface org.w3c.dom.Node]
 at java.lang.Class.getMethod(Class.java:2103)
 at java.lang.Class.getMethod(Class.java:1724)
 at org.apache.xmlbeans.XmlBeans.buildMethod(SourceFile:1)
 at org.apache.xmlbeans.XmlBeans.buildNodeMethod(Unknown Source:18)
 at org.apache.xmlbeans.XmlBeans.buildNodeToCursorMethod(Unknown Source:4)
 at org.apache.xmlbeans.XmlBeans.<clinit>(SourceFile:11)
 at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(Unknown Source:0)
 at j7.a.a(SourceFile:2)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook.<clinit>(SourceFile:1)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook$Factory.newInstance(Unknown Source:0)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.onWorkbookCreate(SourceFile:1)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:5)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:1)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.pointsToWorkBook(SourceFile:2)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.dataPointsToExcelFile(SourceFile:1)
 at f9.k.invokeSuspend(SourceFile:17)
 at sa.a.resumeWith(SourceFile:3)
 at qd.o0.run(SourceFile:18)

Is there anything I am missing in my proguard file? I have tried various code snippets from Stack Overflow with no success.
A: You could use -keep class !com.your.package.** { *; } to keep everything outside the files of your own package.
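For completeness, a hedged sketch of how that suggestion might look in proguard-rules.pro; 'com.your.package' is a placeholder for the app's own package, and the -dontwarn lines are an assumption added here to silence warnings about POI's optional dependencies, not something from the original answer:

# Keep (and do not obfuscate or shrink) everything that is not in your own package
-keep class !com.your.package.** { *; }
# Optional: silence warnings about classes POI references but Android lacks
-dontwarn org.apache.**
-dontwarn org.openxmlformats.**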
Proguard rules with Excel fail on release build - Android
Edit: Updated with dependencies

Dependencies:

implementation group: 'org.apache.poi', name: 'poi-ooxml', version: '3.17'
implementation group: 'org.apache.xmlbeans', name: 'xmlbeans', version: '3.1.0'
implementation 'com.fasterxml:aalto-xml:1.2.2'

In my proguard file I have the following:

-keep class org.apache.poi.** {*;}
-keep class org.apache.xmlbeans.** {*;}
-keep class com.fasterxml.** {*;}
-keep class com.microsoft.schemas.** {*;}
-keep class org.openxmlformats.** {*;}
-keep class org.openxmlformats.schemas.** {*;}
-keep class schemaorg_apache_xmlbeans.** {*;}

This works fine in the debug build, but causes a crash when accessing one of the methods in the release build. The full stack trace:

java.lang.ExceptionInInitializerError
 at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(Unknown Source:0)
 at j7.a.a(SourceFile:2)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook.<clinit>(SourceFile:1)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook$Factory.newInstance(Unknown Source:0)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.onWorkbookCreate(SourceFile:1)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:5)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:1)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.pointsToWorkBook(SourceFile:2)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.dataPointsToExcelFile(SourceFile:1)
 at f9.k.invokeSuspend(SourceFile:17)
 at sa.a.resumeWith(SourceFile:3)
 at qd.o0.run(SourceFile:18)
 at xd.a$a.run(SourceFile:14)
Caused by: java.lang.IllegalStateException: Cannot load nodeToCursor: verify that xbean.jar is on the classpath
 at org.apache.xmlbeans.XmlBeans.buildMethod(SourceFile:4)
 at org.apache.xmlbeans.XmlBeans.buildNodeMethod(Unknown Source:18)
 at org.apache.xmlbeans.XmlBeans.buildNodeToCursorMethod(Unknown Source:4)
 at org.apache.xmlbeans.XmlBeans.<clinit>(SourceFile:11)
 at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(Unknown Source:0)
 at j7.a.a(SourceFile:2)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook.<clinit>(SourceFile:1)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook$Factory.newInstance(Unknown Source:0)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.onWorkbookCreate(SourceFile:1)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:5)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:1)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.pointsToWorkBook(SourceFile:2)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.dataPointsToExcelFile(SourceFile:1)
 at f9.k.invokeSuspend(SourceFile:17)
 at sa.a.resumeWith(SourceFile:3)
 at qd.o0.run(SourceFile:18)
 at xd.a$a.run(SourceFile:14)
Caused by: java.lang.NoSuchMethodException: org.apache.xmlbeans.impl.store.Locale.nodeToCursor [interface org.w3c.dom.Node]
 at java.lang.Class.getMethod(Class.java:2103)
 at java.lang.Class.getMethod(Class.java:1724)
 at org.apache.xmlbeans.XmlBeans.buildMethod(SourceFile:1)
 at org.apache.xmlbeans.XmlBeans.buildNodeMethod(Unknown Source:18)
 at org.apache.xmlbeans.XmlBeans.buildNodeToCursorMethod(Unknown Source:4)
 at org.apache.xmlbeans.XmlBeans.<clinit>(SourceFile:11)
 at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(Unknown Source:0)
 at j7.a.a(SourceFile:2)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook.<clinit>(SourceFile:1)
 at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorkbook$Factory.newInstance(Unknown Source:0)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.onWorkbookCreate(SourceFile:1)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:5)
 at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(SourceFile:1)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.pointsToWorkBook(SourceFile:2)
 at com.shervinkoushan.anyTracker.shared.utils.FileUtils.dataPointsToExcelFile(SourceFile:1)
 at f9.k.invokeSuspend(SourceFile:17)
 at sa.a.resumeWith(SourceFile:3)
 at qd.o0.run(SourceFile:18)

Is there anything I am missing in my proguard file? I have tried various code snippets from Stack Overflow with no success.
[ "You could use -keep class !com.your.package.** { *; } to keep everything outside the files of your own package.\n" ]
[ 0 ]
[]
[]
[ "android", "apache_poi", "proguard" ]
stackoverflow_0074575404_android_apache_poi_proguard.txt
Q: Fibonacci series in JavaScript

function fib(n) {
  const result = [0, 1];
  for (var i = 2; i <= n; i++) {
    const a = (i - 1);
    const b = (i - 2);
    result.push(a + b);
  }
  return result[n];
}

console.log(fib(8));

The output of the code above is 13. I don't understand the for loop part. In the very first iteration i = 2, but after the second iteration i = 3, so a = 2 and b = 1, and in the third iteration i = 4, so a = 3, b = 2, and so on... If it goes on, the final sequence will be: [0, 1, 1, 3, 5, 7, 9, 11], which is incorrect. The correct sequence would be [0, 1, 1, 2, 3, 5, 8, 13].

A: You were not using the previous two numbers that are already in the array to generate the new fibonacci number to be inserted into the array.

https://www.mathsisfun.com/numbers/fibonacci-sequence.html

Here I have used the sum of result[i-2] and result[i-1] to generate the new fibonacci number and pushed it into the array. Also, to generate n number of terms you need the condition to be i < n and not i <= n.

function fib(n) {
  const result = [0, 1];
  for (var i = 2; i < n; i++) {
    result.push(result[i-2] + result[i-1]);
  }
  return result; // or result[n-1] if you want to get the nth term
}

console.log(fib(8));

Return result[n-1] if you want to get the nth term.

A: My solution for the Fibonacci series:

const fibonacci = n =>
  [...Array(n)].reduce(
    (acc, val, i) => acc.concat(i > 1 ? acc[i - 1] + acc[i - 2] : i),
    []
  )

A: This function is incorrect. It can be checked by just adding the console.log call just before the function return:

function fib(n) {
  const result = [0, 1];
  for (var i = 2; i <= n; i++) {
    const a = (i - 1);
    const b = (i - 2);
    result.push(a + b);
  }
  console.log(result);
  return result[n];
}

console.log(fib(7));

As you can see, the sequence is wrong and (for n = 7) the return value is too. The possible change would be as follows:

function fib(n) {
  const result = [0, 1];
  for (var i = 2; i <= n; i++) {
    const a = result[i - 1];
    const b = result[i - 2];
    result.push(a + b);
  }
  console.log(result);
  return result[n];
}

console.log(fib(8));

These are the "classical" Fibonacci numbers; if you really want to use the first number of 0, not 1, then you should return result[n-1], since array indexes start from zero.

A: One approach you could take for the fibonacci sequence is recursion:

var fibonacci = {
  getSequenceNumber: function(n) {
    //base case to end recursive calls
    if (n === 0 || n === 1) {
      return this.cache[n];
    }
    //if we already have it in the cache, use it
    if (this.cache[n]) {
      return this.cache[n];
    }
    //calculate and store in the cache for future use
    else {
      //since the function calls itself it's called 'recursive'
      this.cache[n] = this.getSequenceNumber(n - 2) + this.getSequenceNumber(n - 1);
    }
    return this.cache[n];
  },
  cache: {
    0: 0,
    1: 1
  }
}
//find the 7th number in the fibonacci function
console.log(fibonacci.getSequenceNumber(7));
//see all the values we cached (preventing extra work)
console.log(fibonacci.cache);
//if you want to output the entire sequence as an array:
console.log(Object.values(fibonacci.cache));

The code above is also an example of a dynamic programming approach. You can see that I am storing each result in a cache object the first time it is calculated by the getSequenceNumber method. This way, the second time that getSequenceNumber is asked to find a given input, it doesn't have to do any actual work - just grab the value from cache and return it! This is an optimization technique that can be applied to functions like this where you may have to find the value of a particular input multiple times.

A:

const fib = n => {
  const array = Array(n);
  for (i = 0; i < array.length; i++) {
    if (i > 1) {
      array[i] = array[i - 1] + array[i - 2];
    } else {
      array[i] = 1;
    }
  }
  return array;
}

console.log(fib(5))

A: What you are doing wrong is adding the iterator index (i), whereas what you need to do is add the element in the result at that index.

function fib(n) {
  const result = [0, 1];
  for (let i = 2; i <= n; i++) {
    const a = result[(i - 1)];
    const b = result[(i - 2)];
    result.push(a + b);
  }
  console.log("Result Array: " + result);
  return result[n];
}

console.log("Fibonacci Series element at 8: " + fib(8));

A: There are two issues with the logic:

Variables a and b currently refer to i - 1 and i - 2. Instead they should refer to the elements of the result array, i.e. result[i - 1] and result[i - 2].
If you need the 8th element of the array, you need to call result[7]. So the returned value should be result[n - 1] instead of result[n].

function fib(n) {
  const result = [0, 1];
  for (var i = 2; i < n; i++) {
    const a = result[i - 1];
    const b = result[i - 2];
    result.push(a + b);
  }
  console.log(result);
  return result[n - 1];
}

console.log(fib(8));

A: Simple solution for the Fibonacci series:

function fib(n){
  var arr = [];
  for(var i = 0; i < n; i++ ){
    if(i == 0 || i == 1){
      arr.push(i);
    } else {
      var a = arr[i - 1];
      var b = arr[i - 2];
      arr.push(a + b);
    }
  }
  return arr
}
console.log(fib(8))

A: This is certainly one of those "more than one way to clean chicken" type situations; this JavaScript method below works for me.

function fibCalc(n) {
  var myArr = [];
  for (var i = 0; i < n; i++) {
    if(i < 2) {
      myArr.push(i);
    } else {
      myArr.push(myArr[i-2] + myArr[i-1]);
    }
  }
  return myArr;
}

fibCalc(8);

When called as above, this produces [0,1,1,2,3,5,8,13]. It allows me to have a sequence of fib numbers based on n.

A:

function fib(n) {
  const result = [0];
  if (n > 1) {
    result.push(1);
    for (var i = 2; i < n; i++) {
      const a = result[result.length - 1]
      const b = result[result.length - 2];
      result.push(a + b);
    }
  }
  console.log(result);
}

A: I came up with this solution to get the n index fibonacci value.

function findFac(n){
  if (n===1)
  {
    return [0, 1];
  }
  else
  {
    var s = findFac(n - 1);
    s.push(s[s.length - 1] + s[s.length - 2]);
    return s;
  }
}

function findFac0(n){
  var vv1 = findFac(n);
  return vv1[n-1];
}

console.log(findFac0(10));

A: Here you have it, with a few argument checks, without using exception handling:

function fibonacci(limit){
  if(typeof limit != "number"){return "Please enter a natural number";}
  if(limit <= 0){
    return "limit should be at least 1";
  }
  else if(limit == 1){
    return [0];
  }
  else{
    var series = [0, 1];
    for(var num = 1; num <= limit - 2; num++){
      series.push(series[series.length-1] + series[series.length-2]);
    }
    return series;
  }
}

A: I came up with this solution.

function fibonacci(n) {
  if (n == 0) {
    return [0];
  }
  if ( n == 1) {
    return [0, 1];
  } else {
    let fibo = fibonacci(n-1);
    let nextElement = fibo[n-1] + fibo[n-2];
    fibo.push(nextElement);
    return fibo;
  }
}
console.log(fibonacci(10));

A:

function fibonacciGenerator (n) {
  var output = [];
  if(n===1){
    output=[0];
  }else if(n===2){
    output=[0,1];
  }else{
    output=[0,1];
    for(var i=2; i<n; i++){
      output.push(output[output.length-2] + output[output.length-1]);
    }
  }
  return output;
}
output = fibonacciGenerator();
console.log(output);

A:

function fibonacci(end) {
  if (isNaN(end) === false && typeof (end) === "number") {
    var one = 0, res, two = 1;
    for (var i = 0; i < end; ++i) {
      res = one + two;
      one = two;
      two = res;
      console.log(res);
    }
  } else {
    console.error("One of the parameters is not correct!")
  }
}

fibonacci(5);

A:

var input = parseInt(prompt(""));
var a = 0;
var b = 1;
var x;
for(i = 0; i <= input; i++){
  document.write(a + "<br>")
  x = a + b;
  a = b;
  b = x;
}
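Distilling the corrected loops above into the smallest form, a sketch that returns just the nth element (0-indexed, so fibAt(7) === 13; fibAt is an illustrative name) without storing the whole array:

function fibAt(n) {
  let a = 0, b = 1;        // fib(0) and fib(1)
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];   // slide the pair one step forward
  }
  return a;
}

console.log(fibAt(7)); // 13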
Fibonacci series in JavaScript
function fib(n) {
  const result = [0, 1];
  for (var i = 2; i <= n; i++) {
    const a = (i - 1);
    const b = (i - 2);
    result.push(a + b);
  }
  return result[n];
}

console.log(fib(8));

The output of the code above is 13. I don't understand the for loop part. In the very first iteration i = 2, but after the second iteration i = 3, so a = 2 and b = 1, and in the third iteration i = 4, so a = 3, b = 2, and so on... If it goes on, the final sequence will be: [0, 1, 1, 3, 5, 7, 9, 11], which is incorrect. The correct sequence would be [0, 1, 1, 2, 3, 5, 8, 13].
[ "\nYou were not using the previous two numbers that are already in the array to > generate the new fibonacci number to be inserted into the array.\n\nhttps://www.mathsisfun.com/numbers/fibonacci-sequence.html\nHere I have used the sum of result[i-2] and result[i-1] to generate the new fibonacci number and pushed it into the array.\nAlso to generate n number of terms you need the condition to be i < n and not i <= n.\n\n\nfunction fib(n) {\r\n\r\n const result = [0, 1];\r\n for (var i = 2; i < n; i++) {\r\n result.push(result[i-2] + result[i-1]);\r\n }\r\n return result; // or result[n-1] if you want to get the nth term\r\n\r\n}\r\n\r\nconsole.log(fib(8)); \n\n\n\nReturn result[n-1] if you want to get the nth term.\n", "My solution for Fibonacci series:\n const fibonacci = n =>\n [...Array(n)].reduce(\n (acc, val, i) => acc.concat(i > 1 ? acc[i - 1] + acc[i - 2] : i),\n []\n )\n\n", "This function is incorrect. It cat be checked by just adding the console.log call just before the function return:\n\n\nfunction fib(n) {\r\n\r\n const result = [0, 1];\r\n for (var i = 2; i <= n; i++) {\r\n const a = (i - 1);\r\n const b = (i - 2);\r\n result.push(a + b);\r\n }\r\n console.log(result);\r\n return result[n];\r\n\r\n}\r\n\r\nconsole.log(fib(7));\n\n\n\nAs you can see, the sequence is wrong and (for n = 7) the return value is too.\nThe possible change would be as following:\n\n\nfunction fib(n) {\r\n\r\n const result = [0, 1];\r\n for (var i = 2; i <= n; i++) {\r\n const a = result[i - 1];\r\n const b = result[i - 2];\r\n result.push(a + b);\r\n }\r\n console.log(result);\r\n return result[n];\r\n\r\n}\r\n\r\nconsole.log(fib(8));\n\n\n\nThis is the \"classical\" Fibonacci numbers; if you really want to use the first number of 0, not 1, then you should return result[n-1], since array indexes start from zero.\n", "One approach you could take for fibonacci sequence is recursion:\n\n\nvar fibonacci = {\r\n getSequenceNumber: function(n) {\r\n //base case to end recursive calls\r\n if (n === 0 || n === 1) {\r\n return this.cache[n];\r\n }\r\n\r\n //if we already have it in the cache, use it\r\n if (this.cache[n]) {\r\n return this.cache[n];\r\n }\r\n //calculate and store in the cache for future use\r\n else {\r\n //since the function calls itself it's called 'recursive'\r\n this.cache[n] = this.getSequenceNumber(n - 2) + this.getSequenceNumber(n - 1);\r\n }\r\n\r\n return this.cache[n];\r\n },\r\n\r\n cache: {\r\n 0: 0,\r\n 1: 1\r\n }\r\n}\r\n//find the 7th number in the fibbonacci function\r\nconsole.log(fibonacci.getSequenceNumber(7));\r\n\r\n//see all the values we cached (preventing extra work)\r\nconsole.log(fibonacci.cache);\r\n\r\n//if you want to output the entire sequence as an array:\r\nconsole.log(Object.values(fibonacci.cache));\n\n\n\nThe code above is also an example of a dynamic programming approach. You can see that I am storing each result in a cache object the first time it is calculated by the getSequenceNumber method. This way, the second time that getSequenceNumber is asked to find a given input, it doesn't have to do any actual work - just grab the value from cache and return it! 
This is an optimization technique that can be applied to functions like this where you may have to find the value of a particular input multiple times.\n", "\n\nconst fib = n => {\r\n const array = Array(n);\r\n for (i = 0; i < array.length; i++) {\r\n if (i > 1) {\r\n array[i] = array[i - 1] + array[i - 2];\r\n } else {\r\n array[i] = 1;\r\n }\r\n }\r\n return array;\r\n}\r\n\r\nconsole.log(fib(5))\n\n\n\n", "What you are doing wrong is adding the iterator index (i), whereas what you need to do is add the element in the result at that index.\n\n\nfunction fib(n) {\r\n\r\n const result = [0, 1];\r\n\r\n for (let i = 2; i <= n; i++) {\r\n const a = result[(i - 1)];\r\n const b = result[(i - 2)];\r\n result.push(a + b);\r\n }\r\n console.log(\"Result Array: \" + result);\r\n return result[n];\r\n\r\n}\r\n\r\nconsole.log(\"Fibonacci Series element at 8: \" + fib(8));\n\n\n\n", "There are two issues with the logic:\n\nVariables a and b currently refer to i - 1 and i - 2. Instead they should refer to the elements of result array, i.e. result[i - 1] and result[i - 2].\nIf you need 8th element of the array, you need to call result[7]. So the returned value should be result[n - 1] instead of result[n].\n\n\n\nfunction fib(n) {\r\n\r\n const result = [0, 1];\r\n for (var i = 2; i < n; i++) {\r\n const a = result[i - 1];\r\n const b = result[i - 2];\r\n result.push(a + b);\r\n }\r\n \r\n console.log(result);\r\n return result[n - 1];\r\n}\r\n\r\nconsole.log(fib(8));\n\n\n\n", "simple solution for Fibonacci series:\nfunction fib(n){\n var arr = [];\n for(var i = 0; i <n; i++ ){\n if(i == 0 || i == 1){\n arr.push(i);\n } else {\n var a = arr[i - 1];\n var b = arr[i - 2];\n arr.push(a + b);\n }\n }\n return arr\n}\nconsole.log(fib(8))\n\n", "This is certainly one of those \"more than one way to clean chicken\" type situations, this JavaScript method below works for me.\nfunction fibCalc(n) {\n var myArr = [];\n\n for (var i = 0; i < n; i++) {\n if(i < 2) {\n myArr.push(i);\n } else {\n myArr.push(myArr[i-2] + myArr[i-1]);\n }\n } \n\n return myArr;\n}\n\nfibCalc(8);\n\nWhen called as above, this produces [0,1,1,2,3,5,8,13]\nIt allows me to have a sequence of fib numbers based on n.\n", "function fib(n) {\n const result = [0];\n\n if (n > 1) {\n result.push(1);\n\n for (var i = 2; i < n; i++) {\n const a = result[result.length - 1]\n const b = result[result.length - 2];\n result.push(a + b);\n }\n\n }\n console.log(result);\n}\n\n", "i came up with this solution to get the n index fibonacci value.\nfunction findFac(n){\nif (n===1) \n {\n return [0, 1];\n } \n else \n {\n var s = findFac(n - 1);\n s.push(s[s.length - 1] + s[s.length - 2]);\n return s;\n }\n}\n\nfunction findFac0(n){\nvar vv1 = findFac(n);\nreturn vv1[n-1];\n}\n\n\nconsole.log(findFac0(10));\n\n", "Here, you have it, with few argument check, without using exception handling\nfunction fibonacci(limit){\n if(typeof limit != \"number\"){return \"Please enter a natural number\";}\n\n if(limit <=0){\n return \"limit should be at least 1\";\n }\n else if(limit == 1){\n return [0];\n }\n else{\n var series = [0, 1];\n for(var num=1; num<=limit-2; num++){\n series.push(series[series.length-1]+series[series.length-2]);\n }\n return series;\n }\n}\n\n", "I came up with this solution.\nfunction fibonacci(n) {\n if (n == 0) {\n return [0];\n }\n if ( n == 1) {\n return [0, 1];\n } else {\n let fibo = fibonacci(n-1);\n let nextElement = fibo [n-1] + fibo [n-2];\n fibo.push(nextElement);\n return fibo;\n }\n}\nconsole.log(fibonacci(10));\n\n", 
"function fibonacciGenerator (n) {\n var output = [];\n if(n===1){\n output=[0];\n }else if(n===2){\n output=[0,1]; \n }else{\n output=[0,1];\n for(var i=2; i<n; i++){\n output.push(output[output.length-2] + output[output.length-1]);\n }\n }\n return output;\n}\noutput = fibonacciGenerator();\nconsole.log(output);\n\n", "function fibonacci(end) {\n if (isNaN(end) === false && typeof (end) === \"number\") {\n var one = 0, res, two = 1;\n for (var i = 0; i < end; ++i) {\n res = one + two;\n one = two;\n two = res;\n console.log(res);\n }\n } else {\n console.error(\"One of the parameters is not correct!\")\n }\n}\n\nfibonacci(5);\n\n", " var input = parseInt(prompt(\"\"));\n var a =0;\n var b=1;\n var x;\n for(i=0;i<=input;i++){\n document.write(a+\"<br>\")\n x = a+b;\n a =b;\n b= x;\n }\n\n" ]
[ 9, 5, 2, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "fibonacci", "javascript" ]
stackoverflow_0051111870_fibonacci_javascript.txt
Q: In SwiftUI, how to draw shadows with high performance?

I use the .shadow(color:, radius:, x:, y:) modifier to draw shadows in my application. This is the only way I know of drawing shadows in SwiftUI. I use the .sheet(isPresented:, content:) method to pop up a view, which contains a lot of shadows, and when I debug the view hierarchy, I see these warnings:

But I don't know how to set shadowPath, or pre-render the shadow into an image and put it under the layer in SwiftUI. Please help me.
A: This warning is not caused because your code is inherently bad, but as a way of telling you that there are much more performant ways of rendering shadows.

As your UI elements are currently written, SwiftUI is drawing the shadows around your view objects dynamically (at Runtime) based on wherever their positions and bounds are at the time, and that rendering will follow the view throughout its lifecycle. It's a math intensive process and involves many draw-calls to the GPU in the best of cases, and CPU bottlenecking as well in the worst of cases.

There are several different ways of rendering shadows in Swift. Most of them utilize frameworks OUTSIDE of SwiftUI (UIKit and CoreGraphics, usually, though Metal, Core Animation, and Core Image have been important in various applications.) This warning is probably not a big deal if you're not seeing performance problems in the UI layer on target hardware, but if you're very motivated to solve the problem, there are some options:

Option 1
The absolute easiest thing to do if you just want to make the error go away would be to force GPU rasterization for the view + shadow by adding

.drawingGroup()

somewhere after the .shadow view. Be advised, this will likely look like crap compared to dynamic shadows. If you're familiar with UIKit, this is similar to the layer.shouldRasterize property on UIView.

Option 2
Speaking of UIKit, an alternative would be to head over there and use either a UIViewRepresentable of your SwiftUI drawing logic, or a completely separate UIView. Either way:

myView = UIView()
myView.layer.shadowPath = UIBezierPath(rect: myView.bounds).cgPath

should get you started... the other shadow properties and stuff will help.

Option 3
You could render the shadow as an image, either programmatically (hard) or in an image editing application (annoying), load it as an image at a lower Z index than your view, and scale them to give the illusion of depth. This is the kind of hacky workaround that game developers used to do when they had crappy hardware but still wanted things to look good.

In the end... for MOST SwiftUI views this warning can likely be ignored. If you load the code in Instruments, you'll likely see that the dynamic rendering of drop shadows under a View is probably not impacting your view rendering performance significantly. This warning is usually only visible inside a UI Debug session. Hope this helps set you on the path to a solution.
A: I ran into this issue this week. And I think I figured out a way that is as performant as the shadowPath option in UIKit. I think that when you use Shape().shadow(...) it draws an efficient shadow based on the path. So IF you know the shape of the thing you're giving a shadow to, you can do it like this:

content
    // Use background so the shadow is the same size as the content
    .background(
        // I'm assuming rectangle but it can be anything like with rounded corners too
        Rectangle()
        // Add an efficient path-based shadow (radius value is up to you)
            .shadow(radius: 4)
        // Add an inset so you don't see the ugly inner edge of the shadow, it will be under your content
            .padding(1)
    )

This was much much faster than what I had before!
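To make Option 2 concrete, here is a hedged Swift sketch (ShadowedBox is an illustrative name and the shadow values are assumptions, not from the original answers) of wrapping a UIView with an explicit shadowPath for use inside SwiftUI:

struct ShadowedBox: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let view = UIView()
        view.backgroundColor = .white
        view.layer.shadowColor = UIColor.black.cgColor
        view.layer.shadowOpacity = 0.25
        view.layer.shadowRadius = 8
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        // The explicit shadowPath is what avoids the offscreen shadow pass the
        // debugger warns about. Note: bounds may still be .zero on the first
        // pass; a production version would refresh this when layout changes.
        uiView.layer.shadowPath = UIBezierPath(rect: uiView.bounds).cgPath
    }
}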
In SwiftUI, how to draw shadows with high performance?
I use the .shadow(color:, radius:, x:, y:) modifier to draw shadows in my application. This is the only way I know of drawing shadows in SwiftUI. I use the .sheet(isPresented:, content:) method to pop up a view, which contains a lot of shadows, and when I debug the view hierarchy, I see these warnings:

But I don't know how to set shadowPath, or pre-render the shadow into an image and put it under the layer in SwiftUI. Please help me.
[ "This warning is not caused because your code is inherently bad, but as a way of telling you that there are much more performant ways of rendering shadows.\nAs your UI elements are currently written, SwiftUI is drawing the shadows around your view objects dynamically (at Runtime) based on wherever their positions and bounds are at the time, and that rendering will follow the view throughout it's lifecycle.\nIt's a math intensive process and involves many draw-calls to the GPU in the best of cases, and CPU bottlenecking as well in the worst of cases.\nThere are several different ways of rendering shadows in Swift. Most of them utilize frameworks OUTSIDE of SwiftUI (UIKit and CoreGraphics, usually, though Metal, Core Animation, and Core Image have been important in various applications.)\nThis warning is probably not a big deal if you're not seeing performance problems in the UI layer on target hardware, but if you're very motivated to solve the problem, there are some options:\n\nOption 1\nThe absolute easiest thing to do if you just want to make the error go away would be to force a GPU call rasterization for the view + shadow by adding\n.drawingGroup()\n\nsomewhere after the .shadow view. Be advised, this will likely look like crap compared to dynamic shadows. If you're familiar with UIKit, this is similar to the layer.shouldRasterize property on UIView.\n\nOption 2\nSpeaking of UIKit, an alternative would be to head over there and use either a UIViewRepresentable of your SwiftUI drawing logic, or a completely separate UIView. Either way:\nmyView = UIView()\nmyView.layer.shadowPath = UIBezierPath(rect: myView.bounds.cgPath)\n\nshould get you started... the other shadow properties and stuff will help.\n\nOption 3\nYou could render the shadow as an image, either programatically (hard) or in an imaging editing application (annoying) and load is as an image at a lower Z index than your view, and scale them to give the illusion of depth.\nThis is the kind of hacky work around that game developers used to do when they had crappy hardware but still wanted things to look good.\n\nIn the end... for MOST SwiftUI views this warning likely can be ignored. If you load the code in Instruments, you'll likely see that the dynamic rendering of drop shadows under a View is probably not impacting your view rendering performance significantly. This warning is only usually visible inside a UI Debug session.\nHope this helps set you on the path to a solution.\n", "I ran into this this week. And I think I figured out a way that is as performant as the shadowPath option in UIKit. I think that when you use Shape().shadow(...) it draws an efficient shadow based on the path. So IF you know the shape of the thing you're giving a shadow to, you can do it like this:\ncontent\n // Use background so the shadow is the same size as the content\n .background(\n // I'm assuming rectangle but it can be anything like with rounded corners too\n Rectangle()\n // Add efficient shadow\n .shadow()\n // Add an inset so you don't see the ugly inner edge of the shadow, it will be under your content\n .padding(1)\n )\n\nThis was much much faster than what I had before!\n" ]
[ 19, 0 ]
[]
[]
[ "shadow", "swift", "swiftui" ]
stackoverflow_0064077689_shadow_swift_swiftui.txt
Q: How to log out in React Native

How do I implement sign-out in my project? Most of the approaches I have found don't work, because I send my data to Firebase through the REST API. Please help me.

This is Profile.js, where I need to implement the logOut function:

const Profile = () => {
  const [user] = useAuth()
  const navigation = useNavigation()

  const logOut = () => {
    This function
  };

  return (
    <View style={styles.container}>
      <View style={styles.header}>
        <Image source={require('../assets/images/Logo_small.png')} style={styles.logo_s} resizeMode='contain'/>
      </View>
      <View style={styles.main_cont}>
        <Profile_text marginBottom={28}/>
        <View style={styles.replace}>
          <TextInput style={styles.inp} placeholder={user.displayName} placeholderTextColor='rgba(207, 77, 79, 0.75)' editable={false}/>
          <Pressable onPress={''}>
            <Image source={require('../assets/images/replace_btn.png')} style={styles.replace_btn} resizeMode='contain'/>
          </Pressable>
        </View>
        <UnderLine marginBottom={500}/>
        <Pressable style={styles.result} onPress={logOut}>
          <Text style={styles.text_result}>Выйти из профиля</Text>
        </Pressable>
      </View>
      <Image source={require('../assets/images/bottom_menu.png')} style={styles.img_bg}/>
    </View>
  )
}

Here is App.js:

const Stack = createNativeStackNavigator();

const Navigator = () => {
  const [user] = useAuth()

  if(!user) {
    return (
      <Stack.Navigator screenOptions={{ headerShown: false }}>
        <Stack.Screen name="Login" component={LoginScreen} />
        <Stack.Screen name="Registration" component={RegistrationScreen} />
      </Stack.Navigator>
    )
  }

  return (
    <NavigationBar/>
  )
}

I also have a navigation bar:

<Tab.Navigator
  screenOptions={{
    headerShown: false,
    tabBarStyle: {
      height: 80,
      position: 'absolute',
      backgroundColor: 'transparent',
      elevation: 0,
    },
    tabBarLabelStyle: {color: '#000000'}
  }}
  tabBarOptions={{
    showLabel: false,
  }}>
  <Tab.Screen name="Profile" component={Profile} options={{
    tabBarIcon: ({focused}) => (
      <View>
        <Image source={focused
          ? require('../assets/images/profile_use.png')
          : require('../assets/images/profile.png')
        } style={{ width: 50, height: 50 }} resizeMode='contain'/>
      </View>
    ),
  }}/>
  <Tab.Screen name="Train" component={Train} options={{
    tabBarIcon: ({focused}) => (
      <View>
        <Image source={focused
          ? require('../assets/images/train_use.png')
          : require('../assets/images/train.png')
        } style={{ width: 50, height: 50 }} resizeMode='contain'/>
      </View>
    ),
  }}/>
  <Tab.Screen name="Statistics" component={Statistics} options={{
    tabBarIcon: ({focused}) => (
      <View>
        <Image source={focused
          ? require('../assets/images/statistic_use.png')
          : require('../assets/images/statistic.png')
        } style={{ width: 50, height: 50 }} resizeMode='contain'/>
      </View>
    ),
  }}/>
</Tab.Navigator>

Please help me, I'm just a beginner in React Native. Again: what is needed to implement logout? Maybe I can't do it because my authorization is in the Stack Navigator, but the Profile is in the Tab Navigator. Please help.
A: Try

const [user, setUser] = useAuth()

const logout = () => {
  setUser(false)
}
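Building on that idea: since the Firebase REST API has no server-side session, "logging out" just means discarding the stored ID token and clearing the user state that drives the navigators. A hedged sketch (the 'idToken' storage key and the setUser setter are assumptions about this app, not confirmed by the question):

import AsyncStorage from '@react-native-async-storage/async-storage';

const [user, setUser] = useAuth();

const logOut = async () => {
  await AsyncStorage.removeItem('idToken'); // wherever the REST token was saved at login
  setUser(null); // with user falsy, the Stack.Navigator login screens render again
};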
How to log out in React Native
How do I implement sign-out in my project? Most of the approaches I have found don't work, because I send my data to Firebase through the REST API. Please help me.

This is Profile.js, where I need to implement the logOut function:

const Profile = () => {
  const [user] = useAuth()
  const navigation = useNavigation()

  const logOut = () => {
    This function
  };

  return (
    <View style={styles.container}>
      <View style={styles.header}>
        <Image source={require('../assets/images/Logo_small.png')} style={styles.logo_s} resizeMode='contain'/>
      </View>
      <View style={styles.main_cont}>
        <Profile_text marginBottom={28}/>
        <View style={styles.replace}>
          <TextInput style={styles.inp} placeholder={user.displayName} placeholderTextColor='rgba(207, 77, 79, 0.75)' editable={false}/>
          <Pressable onPress={''}>
            <Image source={require('../assets/images/replace_btn.png')} style={styles.replace_btn} resizeMode='contain'/>
          </Pressable>
        </View>
        <UnderLine marginBottom={500}/>
        <Pressable style={styles.result} onPress={logOut}>
          <Text style={styles.text_result}>Выйти из профиля</Text>
        </Pressable>
      </View>
      <Image source={require('../assets/images/bottom_menu.png')} style={styles.img_bg}/>
    </View>
  )
}

Here is App.js:

const Stack = createNativeStackNavigator();

const Navigator = () => {
  const [user] = useAuth()

  if(!user) {
    return (
      <Stack.Navigator screenOptions={{ headerShown: false }}>
        <Stack.Screen name="Login" component={LoginScreen} />
        <Stack.Screen name="Registration" component={RegistrationScreen} />
      </Stack.Navigator>
    )
  }

  return (
    <NavigationBar/>
  )
}

I also have a navigation bar:

<Tab.Navigator
  screenOptions={{
    headerShown: false,
    tabBarStyle: {
      height: 80,
      position: 'absolute',
      backgroundColor: 'transparent',
      elevation: 0,
    },
    tabBarLabelStyle: {color: '#000000'}
  }}
  tabBarOptions={{
    showLabel: false,
  }}>
  <Tab.Screen name="Profile" component={Profile} options={{
    tabBarIcon: ({focused}) => (
      <View>
        <Image source={focused
          ? require('../assets/images/profile_use.png')
          : require('../assets/images/profile.png')
        } style={{ width: 50, height: 50 }} resizeMode='contain'/>
      </View>
    ),
  }}/>
  <Tab.Screen name="Train" component={Train} options={{
    tabBarIcon: ({focused}) => (
      <View>
        <Image source={focused
          ? require('../assets/images/train_use.png')
          : require('../assets/images/train.png')
        } style={{ width: 50, height: 50 }} resizeMode='contain'/>
      </View>
    ),
  }}/>
  <Tab.Screen name="Statistics" component={Statistics} options={{
    tabBarIcon: ({focused}) => (
      <View>
        <Image source={focused
          ? require('../assets/images/statistic_use.png')
          : require('../assets/images/statistic.png')
        } style={{ width: 50, height: 50 }} resizeMode='contain'/>
      </View>
    ),
  }}/>
</Tab.Navigator>

Please help me, I'm just a beginner in React Native. Again: what is needed to implement logout? Maybe I can't do it because my authorization is in the Stack Navigator, but the Profile is in the Tab Navigator. Please help.
[ "Try\nconst [user, setUser] = useAuth() \n\nlogout = () => {\nsetUser(false)\n}\n\n" ]
[ 1 ]
[]
[]
[ "javascript", "react_native" ]
stackoverflow_0074582763_javascript_react_native.txt
Q: Can someone explain to me what this syntax means?

I was searching to find how to make the camera fit a game object and I found this answer: Change the size of camera to fit a GameObject in Unity/C#, and I don't understand this part:

cam.orthographicSize = ((w > h * cam.aspect) ? (float)w / (float)cam.pixelWidth * cam.pixelHeight : h) / 2;

I want to understand how that part of the code works.
A: If you don't understand condition ? A : B, maybe this will help you:

if(w > h * cam.aspect){
  cam.orthographicSize = ((float)w / (float)cam.pixelWidth * cam.pixelHeight ) / 2
}
else{
  cam.orthographicSize = h / 2
}

A: It's just a matter of breaking it down piece by piece. Let's see:

cam.orthographicSize =
STEP 1: ((w > h * cam.aspect) ? (float)w / (float)cam.pixelWidth * cam.pixelHeight : h)
STEP 2: [STEP 1] / 2

Step 1 divides into the following:

STEP 1.1: h * cam.aspect (let's call this "JOHN")
STEP 1.2: (float)cam.pixelWidth * cam.pixelHeight (let's call this "WENDY")
STEP 1.3: Ternary operator (it's just an IF else written in a fancy way)

IF w is greater than "JOHN" THEN
  RETURN (float)w / "WENDY"
ELSE
  RETURN h

At the end, whatever comes from step 1 is divided by 2 (step 2). Here's more info about the ternary operator
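One more way to see it: the ternary is just an expression-valued if/else, so the original line can be unfolded mechanically (size is an illustrative local variable, not from the linked answer):

float size;
if (w > h * cam.aspect)
    size = (float)w / (float)cam.pixelWidth * cam.pixelHeight; // width-limited case
else
    size = h;                                                  // height-limited case
cam.orthographicSize = size / 2;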
Can someone explain to me what this syntax means?
I was searching for how to make the camera fit a game object and I found this answer: Change the size of camera to fit a GameObject in Unity/C#, and I don't understand this part: cam.orthographicSize = ((w > h * cam.aspect) ? (float)w / (float)cam.pixelWidth * cam.pixelHeight : h) / 2; I want to understand how that part of the code works.
[ "if you dont understand condition ? A : B\nmaybe this will help you\nif(w > h * cam.aspect){\n cam.orthographicSize = ((float)w / (float)cam.pixelWidth * cam.pixelHeight ) / 2\n}\nelse{\n cam.orthographicSize = h / 2\n}\n\n", "It's just a matter of breaking it down piece by piece. Let's see:\ncam.orthographicSize = \nSTEP 1: ((w > h * cam.aspect) ? (float)w / (float)cam.pixelWidth * cam.pixelHeight : h) \nSTEP 2: [STEP 1] / 2\nStep 1 divides into the following:\nSTEP 1.1: h * cam.aspect (let's call this \"JOHN\")\nSTEP 1.2: (float)cam.pixelWidth * cam.pixelHeight (let's call this \"WENDY\")\nSTEP 1.3: Ternary operator (it's just an IF else written in a fancy way)\nIF w is greater than \"JOHN\" THEN\n RETURN (float)w / \"WENDY\"\nELSE\n RETURN h\n\nAt the end whatever comes from step 1, is divided in 2 (step 2). Here's more info about the ternary operator\n" ]
[ 1, 1 ]
[]
[]
[ "c#", "unity3d" ]
stackoverflow_0074659607_c#_unity3d.txt
Q: Create multiple objects in one form Django I am trying to create a form in Django that can create one Student object with two Contact objects in the same form. The second Contact object must be optional to fill in (not required). Schematic view of the objects created in the single form: Contact 1 Student < Contact 2 (not required) I have the following models in models.py: class User(AbstractUser): is_student = models.BooleanField(default=False) is_teacher = models.BooleanField(default=False) class Student(models.Model): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) year = models.ForeignKey(Year, on_delete=models.SET_NULL, null=True) school = models.ForeignKey(School, on_delete=models.SET_NULL, null=True) student_email = models.EmailField() # named student_email because email conflicts with user email account_status = models.CharField(max_length=1, choices=ACCOUNT_STATUS_CHOICES) phone_number = models.CharField(max_length=50) homework_coach = models.ForeignKey(Teacher, on_delete=models.SET_NULL, null=True, blank=True, default='') user = models.OneToOneField(User, on_delete=models.CASCADE, null=True) plannings = models.ForeignKey(Planning, on_delete=models.SET_NULL, null=True) def __str__(self): return f"{self.first_name} {self.last_name}" class Contact(models.Model): student = models.ForeignKey(Student, on_delete=models.CASCADE) contact_first_name = models.CharField(max_length=50) contact_last_name = models.CharField(max_length=50) contact_phone_number = models.CharField(max_length=50) contact_email = models.EmailField() contact_street = models.CharField(max_length=100) contact_street_number = models.CharField(max_length=10) contact_zipcode = models.CharField(max_length=30) contact_city = models.CharField(max_length=100) def __str__(self): return f"{self.contact_first_name} {self.contact_last_name}" In forms.py, I have created two forms to register students and contacts. A student is also connected to a User object for login and authentication, but this is not relevant. Hence, when a user is created, the user is defined as the user. 
from django import forms from django.contrib.auth.models import User from django.contrib.auth.forms import UserCreationForm from django.db import transaction from .models import Student, Teacher, User, Year, School, Location, Contact class StudentSignUpForm(UserCreationForm): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) #student first_name = forms.CharField(max_length=50, required=True) last_name = forms.CharField(max_length=50, required=True) year = forms.ModelChoiceField(queryset=Year.objects.all(), required=False) school = forms.ModelChoiceField(queryset=School.objects.all(), required=False) # not required for new schools / years that are not yet in the database student_email = forms.EmailField(required=True) account_status = forms.ChoiceField(choices=ACCOUNT_STATUS_CHOICES) phone_number = forms.CharField(max_length=50, required=True) homework_coach = forms.ModelChoiceField(queryset=Teacher.objects.all(), required=False) class Meta(UserCreationForm.Meta): model = User fields = ( 'username', 'first_name', 'last_name', 'year', 'school', 'student_email', 'account_status', 'phone_number', 'homework_coach', 'password1', 'password2', ) @transaction.atomic def save( self, first_name, last_name, year, school, student_email, account_status, phone_number, homework_coach, ): user = super().save(commit=False) user.is_student = True user.save() Student.objects.create( # create student object user=user, first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach ) return user class ContactForm(forms.ModelForm): contact_first_name = forms.CharField(max_length=50, required=True) contact_last_name = forms.CharField(max_length=50, required=True) contact_phone_number = forms.CharField(max_length=50, required=False) contact_email = forms.EmailField(required=False) # not required because some students might not know contact information contact_street = forms.CharField(max_length=100, required=False) contact_street_number = forms.CharField(max_length=10, required=False) contact_zipcode = forms.CharField(max_length=10, required=False) contact_city = forms.CharField(max_length=100, required=False) class Meta: model = Contact fields = '__all__' In views.py, I have created a view that saves the data (so far only student data, not contact data). 
class StudentSignUpView(CreateView): model = User form_class = StudentSignUpForm template_name = 'registration/signup_form.html' def get_context_data(self, **kwargs): kwargs['user_type'] = 'student' return super().get_context_data(**kwargs) def form_valid(self, form): # student first_name = form.cleaned_data.get('first_name') last_name = form.cleaned_data.get('last_name') year = form.cleaned_data.get('year') school = form.cleaned_data.get('school') student_email = form.cleaned_data.get('student_email') account_status = form.cleaned_data.get('account_status') phone_number = form.cleaned_data.get('phone_number') homework_coach = form.cleaned_data.get('email') user = form.save( # student first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach, ) login(self.request, user) return redirect('home') And in registration/signup_form.html, the template is as follows: {% block content %} {% load crispy_forms_tags %} <form method="POST" enctype="multipart/form-data"> {{ formset.management_data }} {% csrf_token %} {{formset|crispy}} <input type="submit" value="Submit"> </form> {% endblock %} Urls.py: from .views import StudentSignUpView urlpatterns = [ path('', views.home, name='home'), path('signup/student/', StudentSignupView.as_view(), name='student_signup'), ] How can I create one view that has one form that creates 1 Student object and 2 Contact objects (of which the 2nd Contact is not required)? Things I have tried: Using formsets to create multiple contacts at once, but I only managed to create multiple Contacts and could not manage to add Students to that formset. I added this to views.py: def formset_view(request): context={} # creating the formset ContactFormSet = formset_factory(ContactForm, extra = 2) formset = ContactFormSet() # print formset data if it is valid if formset.is_valid(): for form in formset: print(form.cleaned_data) context['formset']=formset return render(request, 'registration/signup_form.html', context) Urls.py: urlpatterns = [ path('', views.home, name='home'), path('signup/student/', views.formset_view, name='student_signup'), ] But I only managed to create multiple Contacts and was not able to add a Student object through that form. I tried creating a ModelFormSet to add fields for the Student object, but that did not work either. A: What I'd try: I don't understand your StudentSignUpForm magic. However, if it's effectively the same as a ModelForm: class StudentSignUpForm(forms.Modelform): class Meta: model = Student fields = ('first_name', 'last_name', ...) then just add non-model fields contact1_first_name = forms.CharField(max_length=50, required=True) contact1_last_name = forms.CharField(max_length=50, required=True) contact1_phone_number = forms.CharField(max_length=50, required=False) ... contact2_first_name = forms.CharField(max_length=50, required=True) ... contact2_zipcode = forms.CharField(max_length=10, required=False) contact2_city = forms.CharField(max_length=100, required=False) And then put everything together in form_valid: @transaction.atomic def form_valid( self, form): student = form.save() contact1 = Contact( student = student, contact_first_name = form.cleaned_data['contact1_first_name'], contact_last_name = ... 
) contact1.save() if (form.cleaned_data['contact2_first_name'] and form.cleaned_data['contact2_last_name'] # blank if omitted ): contact2 = Contact( student=student, contact_first_name = form.cleaned_data['contact2_first_name'], ... ) contact2.save() return HttpResponseRedirect( ...) If you want to do further validation beyond what's easy in a form definition you can. (You may well want to check that if conatct2_first_name is specified, contact2_last_name must also be specified). def form_valid( self, form): # extra validations, add errors on fail n=0 if form.cleaned_data['contact2_first_name']: n+=1 if form.cleaned_data['contact2_last_name']: n+=1 if n==1: form.add_error('contact2_first_name', 'Must provide first and last names for contact2, or omit both for no second contact') form.add_error('contact2_last_name', 'Must provide first and last names for contact2, or omit both for no second contact') contact2_provided = (n != 0) ... if not form.is_valid(): return self.form_invalid( self, form) with transaction.atomic(): student = form.save() contact1 = ( ... # as before if contact2_provided: contact2 = ( ...
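Since the question specifically tried and gave up on formsets, a rough sketch of that route as well, using inlineformset_factory so the Student saves first and both Contact forms hang off it. Treat this as an outline under the question's model names, not drop-in code:

from django.forms import inlineformset_factory

# Two contact forms; require the first, leave the second optional.
ContactFormSet = inlineformset_factory(
    Student, Contact, form=ContactForm,
    extra=2, max_num=2, min_num=1, validate_min=True,
)

# In the view, roughly:
#   student = student_form.save()
#   formset = ContactFormSet(request.POST, instance=student)
#   if formset.is_valid():
#       formset.save()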
Create multiple objects in one form Django
I am trying to create a form in Django that can create one Student object with two Contact objects in the same form. The second Contact object must be optional to fill in (not required). Schematic view of the objects created in the single form: Contact 1 Student < Contact 2 (not required) I have the following models in models.py: class User(AbstractUser): is_student = models.BooleanField(default=False) is_teacher = models.BooleanField(default=False) class Student(models.Model): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) year = models.ForeignKey(Year, on_delete=models.SET_NULL, null=True) school = models.ForeignKey(School, on_delete=models.SET_NULL, null=True) student_email = models.EmailField() # named student_email because email conflicts with user email account_status = models.CharField(max_length=1, choices=ACCOUNT_STATUS_CHOICES) phone_number = models.CharField(max_length=50) homework_coach = models.ForeignKey(Teacher, on_delete=models.SET_NULL, null=True, blank=True, default='') user = models.OneToOneField(User, on_delete=models.CASCADE, null=True) plannings = models.ForeignKey(Planning, on_delete=models.SET_NULL, null=True) def __str__(self): return f"{self.first_name} {self.last_name}" class Contact(models.Model): student = models.ForeignKey(Student, on_delete=models.CASCADE) contact_first_name = models.CharField(max_length=50) contact_last_name = models.CharField(max_length=50) contact_phone_number = models.CharField(max_length=50) contact_email = models.EmailField() contact_street = models.CharField(max_length=100) contact_street_number = models.CharField(max_length=10) contact_zipcode = models.CharField(max_length=30) contact_city = models.CharField(max_length=100) def __str__(self): return f"{self.contact_first_name} {self.contact_last_name}" In forms.py, I have created two forms to register students and contacts. A student is also connected to a User object for login and authentication, but this is not relevant. Hence, when a user is created, the user is defined as the user. 
from django import forms from django.contrib.auth.models import User from django.contrib.auth.forms import UserCreationForm from django.db import transaction from .models import Student, Teacher, User, Year, School, Location, Contact class StudentSignUpForm(UserCreationForm): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) #student first_name = forms.CharField(max_length=50, required=True) last_name = forms.CharField(max_length=50, required=True) year = forms.ModelChoiceField(queryset=Year.objects.all(), required=False) school = forms.ModelChoiceField(queryset=School.objects.all(), required=False) # not required for new schools / years that are not yet in the database student_email = forms.EmailField(required=True) account_status = forms.ChoiceField(choices=ACCOUNT_STATUS_CHOICES) phone_number = forms.CharField(max_length=50, required=True) homework_coach = forms.ModelChoiceField(queryset=Teacher.objects.all(), required=False) class Meta(UserCreationForm.Meta): model = User fields = ( 'username', 'first_name', 'last_name', 'year', 'school', 'student_email', 'account_status', 'phone_number', 'homework_coach', 'password1', 'password2', ) @transaction.atomic def save( self, first_name, last_name, year, school, student_email, account_status, phone_number, homework_coach, ): user = super().save(commit=False) user.is_student = True user.save() Student.objects.create( # create student object user=user, first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach ) return user class ContactForm(forms.ModelForm): contact_first_name = forms.CharField(max_length=50, required=True) contact_last_name = forms.CharField(max_length=50, required=True) contact_phone_number = forms.CharField(max_length=50, required=False) contact_email = forms.EmailField(required=False) # not required because some students might not know contact information contact_street = forms.CharField(max_length=100, required=False) contact_street_number = forms.CharField(max_length=10, required=False) contact_zipcode = forms.CharField(max_length=10, required=False) contact_city = forms.CharField(max_length=100, required=False) class Meta: model = Contact fields = '__all__' In views.py, I have created a view that saves the data (so far only student data, not contact data). 
class StudentSignUpView(CreateView): model = User form_class = StudentSignUpForm template_name = 'registration/signup_form.html' def get_context_data(self, **kwargs): kwargs['user_type'] = 'student' return super().get_context_data(**kwargs) def form_valid(self, form): # student first_name = form.cleaned_data.get('first_name') last_name = form.cleaned_data.get('last_name') year = form.cleaned_data.get('year') school = form.cleaned_data.get('school') student_email = form.cleaned_data.get('student_email') account_status = form.cleaned_data.get('account_status') phone_number = form.cleaned_data.get('phone_number') homework_coach = form.cleaned_data.get('email') user = form.save( # student first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach, ) login(self.request, user) return redirect('home') And in registration/signup_form.html, the template is as follows: {% block content %} {% load crispy_forms_tags %} <form method="POST" enctype="multipart/form-data"> {{ formset.management_data }} {% csrf_token %} {{formset|crispy}} <input type="submit" value="Submit"> </form> {% endblock %} Urls.py: from .views import StudentSignUpView urlpatterns = [ path('', views.home, name='home'), path('signup/student/', StudentSignupView.as_view(), name='student_signup'), ] How can I create one view that has one form that creates 1 Student object and 2 Contact objects (of which the 2nd Contact is not required)? Things I have tried: Using formsets to create multiple contacts at once, but I only managed to create multiple Contacts and could not manage to add Students to that formset. I added this to views.py: def formset_view(request): context={} # creating the formset ContactFormSet = formset_factory(ContactForm, extra = 2) formset = ContactFormSet() # print formset data if it is valid if formset.is_valid(): for form in formset: print(form.cleaned_data) context['formset']=formset return render(request, 'registration/signup_form.html', context) Urls.py: urlpatterns = [ path('', views.home, name='home'), path('signup/student/', views.formset_view, name='student_signup'), ] But I only managed to create multiple Contacts and was not able to add a Student object through that form. I tried creating a ModelFormSet to add fields for the Student object, but that did not work either.
[ "What I'd try:\nI don't understand your StudentSignUpForm magic. However, if it's effectively the same as a ModelForm:\nclass StudentSignUpForm(forms.Modelform):\n class Meta:\n model = Student\n fields = ('first_name', 'last_name', ...)\n\nthen just add non-model fields\n contact1_first_name = forms.CharField(max_length=50, required=True)\n contact1_last_name = forms.CharField(max_length=50, required=True)\n contact1_phone_number = forms.CharField(max_length=50, required=False)\n ...\n contact2_first_name = forms.CharField(max_length=50, required=True)\n ...\n contact2_zipcode = forms.CharField(max_length=10, required=False)\n contact2_city = forms.CharField(max_length=100, required=False)\n\nAnd then put everything together in form_valid:\[email protected]\ndef form_valid( self, form):\n student = form.save()\n\n contact1 = Contact(\n student = student,\n contact_first_name = form.cleaned_data['contact1_first_name'],\n contact_last_name = ...\n )\n contact1.save()\n\n if (form.cleaned_data['contact2_first_name'] and \n form.cleaned_data['contact2_last_name'] # blank if omitted\n ):\n\n contact2 = Contact(\n student=student,\n contact_first_name = form.cleaned_data['contact2_first_name'],\n ...\n )\n contact2.save()\n\n return HttpResponseRedirect( ...)\n\nIf you want to do further validation beyond what's easy in a form definition you can. (You may well want to check that if conatct2_first_name is specified, contact2_last_name must also be specified).\ndef form_valid( self, form):\n\n # extra validations, add errors on fail\n\n n=0\n if form.cleaned_data['contact2_first_name']:\n n+=1\n if form.cleaned_data['contact2_last_name']:\n n+=1\n\n if n==1:\n form.add_error('contact2_first_name',\n 'Must provide first and last names for contact2, or omit both for no second contact') \n form.add_error('contact2_last_name',\n 'Must provide first and last names for contact2, or omit both for no second contact') \n contact2_provided = (n != 0)\n ...\n\n if not form.is_valid():\n return self.form_invalid( self, form)\n\n with transaction.atomic():\n student = form.save()\n contact1 = ( ... # as before\n\n if contact2_provided:\n contact2 = ( ...\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_forms", "django_models", "formset", "python" ]
stackoverflow_0074655177_django_django_forms_django_models_formset_python.txt
Q: SwiftUI ShareLink sheet does not have save to album option I'm using SwiftUI's new ShareLink with a photo, but the share sheet does not have a 'Save to album' option. Does anyone know how I could get that option? I need to save an image to the photo library. As you can see in the screenshot below, it only has an 'Add to shared album' option. struct Photo: Transferable { static var transferRepresentation: some TransferRepresentation { ProxyRepresentation(\.image) } public var image: Image public var caption: String } ShareLink(item: photo, preview: SharePreview(photo.caption, image: Image("miniIcon"))) { } A: Solved. For whoever encounters this problem: you need to add the photo library permission to the Info.plist.
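For reference, the key the answer is pointing at is the photo-library 'add' usage description. A minimal Info.plist entry would look like this (the description string is only an example):

<key>NSPhotoLibraryAddUsageDescription</key>
<string>Used to save shared images to your photo library.</string>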
SwiftUI ShareLink sheet does not have save to album option
I'm using SwiftUI's new ShareLink with a photo, but the share sheet does not have a 'Save to album' option. Does anyone know how I could get that option? I need to save an image to the photo library. As you can see in the screenshot below, it only has an 'Add to shared album' option. struct Photo: Transferable { static var transferRepresentation: some TransferRepresentation { ProxyRepresentation(\.image) } public var image: Image public var caption: String } ShareLink(item: photo, preview: SharePreview(photo.caption, image: Image("miniIcon"))) { }
[ "Solved. For whoever encounters this problem: you need to add the photo library permission to the Info.plist.\n" ]
[ 1 ]
[]
[]
[ "ios", "swift", "swiftui", "wwdc" ]
stackoverflow_0074659609_ios_swift_swiftui_wwdc.txt
Q: how to load bitmap directly with picasso library like following Picasso.with(context).load("url").into(imageView); Here instead of url i want bitmap how can i achieve this. like below- Picasso.with(context).load(bitmap).into(imageView); A: This should work for you. Use the returned URI with Picasso. (taken from: is there away to get uri of bitmap with out save it to sdcard?) public Uri getImageUri(Context inContext, Bitmap inImage) { ByteArrayOutputStream bytes = new ByteArrayOutputStream(); inImage.compress(Bitmap.CompressFormat.JPEG, 100, bytes); String path = MediaStore.Images.Media.insertImage(inContext.getContentResolver(), inImage, "Title", null); return Uri.parse(path); } A: My Kotlin solution create the bitmap from data val inputStream = getContentResolver().openInputStream(data.data) val bitmap = BitmapFactory.decodeStream(inputStream) val stream = ByteArrayOutputStream() bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream) IMPORTANT: if you don't need to store the image you can avoid Picasso and load the image right away imageView.setImageBitmap(bitmap) otherwise store the file and load it with Picasso val jpegData = stream.toByteArray() val file = File(cacheDir, "filename.jpg") file.createNewFile() val fileOS = FileOutputStream(file) fileOS.write(jpegData) fileOS.flush() fileOS.close() Picasso.get().load(Uri.parse(file.path)).into(imageView) A: private void loadBitmapByPicasso(Context pContext, Bitmap pBitmap, ImageView pImageView) { try { Uri uri = Uri.fromFile(File.createTempFile("temp_file_name", ".jpg", pContext.getCacheDir())); OutputStream outputStream = pContext.getContentResolver().openOutputStream(uri); pBitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream); outputStream.close(); Picasso.get().load(uri).into(pImageView); } catch (Exception e) { Log.e("LoadBitmapByPicasso", e.getMessage()); } } A: If you have bitmap, then no need to load using picasso, there is simple solution provided by android, you can use below code imageView.setImageBitmap(bimap);
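Tying the first answer back to the exact call the question asks for, assuming its getImageUri helper is in scope:

// Bitmap -> MediaStore Uri -> Picasso, using the helper from the first answer
Uri uri = getImageUri(context, bitmap);
Picasso.with(context).load(uri).into(imageView);

Note that insertImage writes the bitmap out to the device's media store, so this trades memory for storage and may require storage write permission on older Android versions.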
How to load a bitmap directly with the Picasso library, like the following
Picasso.with(context).load("url").into(imageView); Here, instead of a URL, I want to pass a bitmap. How can I achieve this? Like below: Picasso.with(context).load(bitmap).into(imageView);
[ "This should work for you.\nUse the returned URI with Picasso.\n(taken from: is there away to get uri of bitmap with out save it to sdcard?)\npublic Uri getImageUri(Context inContext, Bitmap inImage) {\n ByteArrayOutputStream bytes = new ByteArrayOutputStream();\n inImage.compress(Bitmap.CompressFormat.JPEG, 100, bytes);\n String path = MediaStore.Images.Media.insertImage(inContext.getContentResolver(), inImage, \"Title\", null);\n return Uri.parse(path);\n}\n\n", "My Kotlin solution\ncreate the bitmap from data\n val inputStream = getContentResolver().openInputStream(data.data)\n val bitmap = BitmapFactory.decodeStream(inputStream)\n val stream = ByteArrayOutputStream()\n bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream)\n\nIMPORTANT: if you don't need to store the image you can avoid Picasso and load the image right away\n imageView.setImageBitmap(bitmap)\n\notherwise store the file and load it with Picasso\n val jpegData = stream.toByteArray()\n\n val file = File(cacheDir, \"filename.jpg\")\n file.createNewFile()\n\n val fileOS = FileOutputStream(file)\n fileOS.write(jpegData)\n fileOS.flush()\n fileOS.close()\n\n Picasso.get().load(Uri.parse(file.path)).into(imageView)\n\n", "private void loadBitmapByPicasso(Context pContext, Bitmap pBitmap, ImageView pImageView) {\n try {\n Uri uri = Uri.fromFile(File.createTempFile(\"temp_file_name\", \".jpg\", pContext.getCacheDir()));\n OutputStream outputStream = pContext.getContentResolver().openOutputStream(uri);\n pBitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream);\n outputStream.close();\n Picasso.get().load(uri).into(pImageView);\n } catch (Exception e) {\n Log.e(\"LoadBitmapByPicasso\", e.getMessage());\n }\n}\n\n", "If you have bitmap, then no need to load using picasso, there is simple solution provided by android, you can use below code\nimageView.setImageBitmap(bimap);\n\n" ]
[ 22, 8, 3, 0 ]
[ "You are using an old version of Picasso. \nUpdate the version in your Gradle file:\nimplementation 'com.squareup.picasso:picasso:2.71828'\n\nJava:\nPicasso.get().load(R.drawable.landing_screen).into(imageView1);\nPicasso.get().load(\"file:///android_asset/DvpvklR.png\").into(imageView2);\nPicasso.get().load(new File(...)).into(imageView3);\n\nSee more details on the Picasso website\n" ]
[ -3 ]
[ "java", "picasso" ]
stackoverflow_0034629424_java_picasso.txt
Q: I want to use rabbitmq in an Azure function project I want to use rabbitmq in an Azure function project The code I put in the startup.cs is as follows public class Startup : FunctionsStartup { public override void Configure(IFunctionsHostBuilder builder) { builder.Services.AddHttpClient(); builder.Services.AddApplication(); builder.Services.AddInfrastructure(); builder.Services.AddTransient<ReporterAuthenticationRequestHandler>(); builder.Services.AddHttpClient<IReporterApiService, ReporterApiService>() .AddHttpMessageHandler<ReporterAuthenticationRequestHandler>(); #region Cap builder.Services.AddCap(options => { options.UseMongoDB("mongodb://localhost:27017"); options.UseRabbitMQ(x => { x.HostName = "localhost"; x.UserName = "guest"; x.Password = "guest"; x.Port = 5672; }); }); #endregion } } But I get an error: A host error has occurred during startup operation '7b42aad5-f38c-4062-b255-4ccaa12754ea'.[2022-12-01T06:57:44.400Z] func: Invalid host services. Microsoft.Azure.WebJobs.Script.WebHost: The following service registrations did not match the expected services:[2022-12-01T06:57:44.400Z] [Invalid] ServiceType: Microsoft.Extensions.Hosting.IHostedService, Lifetime: Singleton, ImplementationFactory: System.Func2[System.IServiceProvider,DotNetCore.CAP.Internal.Bootstrapper].Value cannot be null. (Parameter 'provider') Do you know what I should do? A: It looks like the error you are seeing is due to a null value being passed to the Microsoft.Extensions.Hosting.IHostedService service registration in your code. The error message indicates that the provider parameter is null, which suggests that the service provider is not being properly initialized or passed to the AddCap() method. To fix this error, you can try the following steps: Make sure that the Microsoft.Azure.Functions.Extensions NuGet package is installed in your project. This package provides the FunctionsStartup class that you are extending in your Startup class, as well as the IFunctionsHostBuilder interface that you are using to configure the services. In the Configure() method of your Startup class, initialize the service provider using the builder.Services property and pass it to the AddCap() method. This will ensure that the service provider is properly initialized and passed to the AddCap() method. Here is an example of how to do this: public override void Configure(IFunctionsHostBuilder builder) { // Initialize the service provider var services = builder.Services; // Add other services and dependencies as needed // Configure Cap services.AddCap(options => { options.UseMongoDB("mongodb://localhost:27017"); options.UseRabbitMQ(x => { x.HostName = "localhost"; x.UserName = "guest"; x.Password = "guest"; x.Port = 5672; }); }); } After making these changes, your Startup class should be able to initialize the service provider and properly configure Cap for use in your Azure Functions project. You can then use Cap to publish and consume messages using RabbitMQ in your Azure Functions.
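If CAP's hosted-service bootstrapper cannot be made to start under the Functions host, a pragmatic fallback is publishing directly with the RabbitMQ.Client package. A minimal sketch reusing the broker settings from the question (the queue name is made up, and this bypasses CAP's outbox semantics entirely):

using RabbitMQ.Client;
using System.Text;

var factory = new ConnectionFactory
{
    HostName = "localhost",
    UserName = "guest",
    Password = "guest",
    Port = 5672,
};
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.QueueDeclare("my-queue", durable: true, exclusive: false,
                     autoDelete: false, arguments: null);
channel.BasicPublish(exchange: "", routingKey: "my-queue",
                     body: Encoding.UTF8.GetBytes("hello"));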
I want to use rabbitmq in an Azure function project
I want to use rabbitmq in an Azure function project The code I put in the startup.cs is as follows public class Startup : FunctionsStartup { public override void Configure(IFunctionsHostBuilder builder) { builder.Services.AddHttpClient(); builder.Services.AddApplication(); builder.Services.AddInfrastructure(); builder.Services.AddTransient<ReporterAuthenticationRequestHandler>(); builder.Services.AddHttpClient<IReporterApiService, ReporterApiService>() .AddHttpMessageHandler<ReporterAuthenticationRequestHandler>(); #region Cap builder.Services.AddCap(options => { options.UseMongoDB("mongodb://localhost:27017"); options.UseRabbitMQ(x => { x.HostName = "localhost"; x.UserName = "guest"; x.Password = "guest"; x.Port = 5672; }); }); #endregion } } But I get an error: A host error has occurred during startup operation '7b42aad5-f38c-4062-b255-4ccaa12754ea'.[2022-12-01T06:57:44.400Z] func: Invalid host services. Microsoft.Azure.WebJobs.Script.WebHost: The following service registrations did not match the expected services:[2022-12-01T06:57:44.400Z] [Invalid] ServiceType: Microsoft.Extensions.Hosting.IHostedService, Lifetime: Singleton, ImplementationFactory: System.Func2[System.IServiceProvider,DotNetCore.CAP.Internal.Bootstrapper].Value cannot be null. (Parameter 'provider') Do you know what I should do?
[ "It looks like the error you are seeing is due to a null value being passed to the Microsoft.Extensions.Hosting.IHostedService service registration in your code. The error message indicates that the provider parameter is null, which suggests that the service provider is not being properly initialized or passed to the AddCap() method.\nTo fix this error, you can try the following steps:\nMake sure that the Microsoft.Azure.Functions.Extensions NuGet package is installed in your project. This package provides the FunctionsStartup class that you are extending in your Startup class, as well as the IFunctionsHostBuilder interface that you are using to configure the services.\nIn the Configure() method of your Startup class, initialize the service provider using the builder.Services property and pass it to the AddCap() method. This will ensure that the service provider is properly initialized and passed to the AddCap() method. Here is an example of how to do this:\npublic override void Configure(IFunctionsHostBuilder builder)\n{\n // Initialize the service provider\n var services = builder.Services;\n\n // Add other services and dependencies as needed\n\n // Configure Cap\n services.AddCap(options =>\n {\n options.UseMongoDB(\"mongodb://localhost:27017\");\n\n options.UseRabbitMQ(x =>\n {\n x.HostName = \"localhost\";\n x.UserName = \"guest\";\n x.Password = \"guest\";\n x.Port = 5672;\n });\n });\n}\n\nAfter making these changes, your Startup class should be able to initialize the service provider and properly configure Cap for use in your Azure Functions project. You can then use Cap to publish and consume messages using RabbitMQ in your Azure Functions.\n" ]
[ 0 ]
[]
[]
[ "asp.net_core", "azure_functions", "cap", "rabbitmq" ]
stackoverflow_0074639672_asp.net_core_azure_functions_cap_rabbitmq.txt
Q: Context menus do not work in a cog - discord.py I wanted to create a context menu in my bot. For example, I took the code from the documentation: @app_commands.context_menu(name='react') async def react_(self, interaction: discord.Interaction, message: discord.Message): await interaction.response.send_message('Very cool message!', ephemeral=True) But when I launched the code, the following error appeared in the console: TypeError: context menus cannot be defined inside a class. How can I fix this? A: You cannot use the decorator to create a context menu in Cogs, as explained by Danny here. The quick way to create them in cogs is to create them by using the app_commands.ContextMenu class. Like so: class MyCog(commands.Cog): def __init__(self, bot: commands.Bot) -> None: self.bot = bot self.ctx_menu = app_commands.ContextMenu( name='Cool Command Name', callback=self.my_cool_context_menu, # set the callback of the context menu to "my_cool_context_menu" ) self.bot.tree.add_command(self.ctx_menu) # add the context menu to the tree async def my_cool_context_menu(self, interaction, message): ... You can check out Danny's explanation for more information.
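One detail worth adding: if the cog can be unloaded or reloaded, the manually added menu should be removed from the tree again. The pattern, taken from the same explanation the answer links to, is roughly:

    async def cog_unload(self) -> None:
        # take the context menu back out of the tree when the cog goes away
        self.bot.tree.remove_command(self.ctx_menu.name, type=self.ctx_menu.type)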
Context menus do not work in a cog - discord.py
I wanted to create a context menu in my bot. For example, I took the code from the documentation: @app_commands.context_menu(name='react') async def react_(self, interaction: discord.Interaction, message: discord.Message): await interaction.response.send_message('Very cool message!', ephemeral=True) But when I launched the code, the following error appeared in the console: TypeError: context menus cannot be defined inside a class. How can I fix this?
[ "You cannot use the decorator to create a context menu in Cogs, as explained by Danny here.\nThe quick way to create them in cogs is to create them by using the app_commands.ContextMenu class. Like so:\nclass MyCog(commands.Cog):\n def __init__(self, bot: commands.Bot) -> None:\n self.bot = bot\n self.ctx_menu = app_commands.ContextMenu(\n name='Cool Command Name',\n callback=self.my_cool_context_menu, # set the callback of the context menu to \"my_cool_context_menu\"\n )\n self.bot.tree.add_command(self.ctx_menu) # add the context menu to the tree\n\n async def my_cool_context_menu(self, interaction, message):\n ...\n\nYou can check out Danny's explanation for more information.\n" ]
[ 1 ]
[]
[]
[ "discord.py" ]
stackoverflow_0074659339_discord.py.txt
Q: Chapel loop variable undeclared When I try to compile the following program, the compiler complains that j and row are undeclared, which surprised me because a similar construction in Chapel - Ranges defined using bounds of type 'range(int(64),bounded,false)' are not currently supported was legal. When I declare row, j it prints 0 for both while the loop starts at 1. Declarations auto-initialise to zero, so is the row from forall row in 1..mat.m somehow different from the declaration above? use BlockDist; record CSRMatrix { /* The matrix is m x n */ var m: int; var n: int; /* Number of non-zeroes */ var nz: int; /* Stores the non-zeroes */ var val: [1..nz] real; /* The first non-zero of row k is the row_ptr(k)th non-zero overall. * row_ptr(m + 1) is the total number of non-zeroes. */ var row_ptr: [1..m + 1] int; /* The kth non-zero has column-index col_ind(k) */ var col_ind: [1..nz] int; } /* We assume some global sparse matrix A has been distributed blockwise along the first dimension and that mat is the local part of the global matrix A(s). Is not distributed from Chapel's POV! vec is distributed blockwise, is distributed from Chapel's POV! Returns matmul(A(s), vec), the local part of matmul(A, vec). */ proc spmv(mat: CSRMatrix, vec: [1..mat.n] real) { var result: [1..mat.m] real = 0.0; // var row: int; // var j: int; forall row in 1..mat.m do for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do writeln(row); writeln(j); result(row) += mat.val(j) * vec(mat.col_ind(j)); return result; } A: @StrugglingProgrammer's response in the notes is correctly diagnosing the problem here, but to capture the explanation with more rationale and commentary: In Chapel, the index variables in for-, foreach-, forall-, and coforall-loops are new variables defined for the scope of the loop's body, as you expected. Thus for i in myIterand declares a new variable i that takes on the values yielded by myIterand—e.g., an integer in the common case of iterating over a range like for i in 1..n and your loops. The problem here is that Chapel is not a whitespace-sensitive language (as Python is, say), and its loops have two syntactic forms: A keyword-based form in which the body must be a single statement following do And a form in which the body is a compound statement using curly brackets Thus, when your code says: forall row in 1..mat.m do for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do writeln(row); writeln(j); result(row) += mat.val(j) * vec(mat.col_ind(j)); to get the effect desired, you would need to write this as: forall row in 1..mat.m do for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 { writeln(row); writeln(j); result(row) += mat.val(j) * vec(mat.col_ind(j)); } Note that the first forall loop does not require curly brackets because its body is a single statement—the for-loop, which happens to have its own multi-statement body. 
Because of the possibility of this class of errors, particularly as code is evolving, defensive Chapel programmers tend to prefer always using curly brackets to avoid the possibility of this mistake: forall row in 1..mat.m { for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 { writeln(row); writeln(j); result(row) += mat.val(j) * vec(mat.col_ind(j)); } } And to be pedantic, your original code, indented to reflect the actual control flow would have been as follows (and a good Chapel-aware editor mode would ideally help you by indenting it in this way): forall row in 1..mat.m do for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do writeln(row); writeln(j); result(row) += mat.val(j) * vec(mat.col_ind(j)); Seeing the code written this way, you can see why the compiler complains about j and row in the final two statements, but not about row on the first one, since it is within the nested loop as expected. It's also why, when you introduce explicit variables for j and row they evaluate to zero in those lines. Essentially, your loops are introducing new variables that temporarily shadow the explicitly-defined ones, and the two statements that were outside the loop nest refer to those original variables rather than the loops' index variables as expected. As you note, since Chapel variables are default initialized, they have the value 0. As one final piece of trivia, since compound statements { ... } are themselves single statements in Chapel, it is legal to write loops with both do and { ... }: forall row in 1..mat.m do { for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do { writeln(row); writeln(j); result(row) += mat.val(j) * vec(mat.col_ind(j)); } } However, this is not at all necessary, so I personally tend to discourage its use, considering it overkill and likely to lead to further misunderstandings.
Chapel loop variable undeclared
When I try to compile the following program, the compiler complains that j and row are undeclared, which surprised me because a similar construction in Chapel - Ranges defined using bounds of type 'range(int(64),bounded,false)' are not currently supported was legal. When I declare row, j it prints 0 for both while the loop starts at 1. Declarations auto-initialise to zero, so is the row from forall row in 1..mat.m somehow different from the declaration above? use BlockDist; record CSRMatrix { /* The matrix is m x n */ var m: int; var n: int; /* Number of non-zeroes */ var nz: int; /* Stores the non-zeroes */ var val: [1..nz] real; /* The first non-zero of row k is the row_ptr(k)th non-zero overall. * row_ptr(m + 1) is the total number of non-zeroes. */ var row_ptr: [1..m + 1] int; /* The kth non-zero has column-index col_ind(k) */ var col_ind: [1..nz] int; } /* We assume some global sparse matrix A has been distributed blockwise along the first dimension and that mat is the local part of the global matrix A(s). Is not distributed from Chapel's POV! vec is distributed blockwise, is distributed from Chapel's POV! Returns matmul(A(s), vec), the local part of matmul(A, vec). */ proc spmv(mat: CSRMatrix, vec: [1..mat.n] real) { var result: [1..mat.m] real = 0.0; // var row: int; // var j: int; forall row in 1..mat.m do for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do writeln(row); writeln(j); result(row) += mat.val(j) * vec(mat.col_ind(j)); return result; }
[ "@StrugglingProgrammer's response in the notes is correctly diagnosing the problem here, but to capture the explanation with more rationale and commentary:\nIn Chapel, the index variables in for-, foreach-, forall-, and coforall-loops are new variables defined for the scope of the loop's body, as you expected. Thus for i in myIterand declares a new variable i that takes on the values yielded by myIterand—e.g., an integer in the common case of iterating over a range like for i in 1..n and your loops.\nThe problem here is that Chapel is not a whitespace-sensitive language (as Python is, say), and its loops have two syntactic forms:\n\nA keyword-based form in which the body must be a single statement following do\nAnd a form in which the body is a compound statement using curly brackets\n\nThus, when your code says:\nforall row in 1..mat.m do\n for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do\n writeln(row);\n writeln(j);\n result(row) += mat.val(j) * vec(mat.col_ind(j));\n\nto get the effect desired, you would need to write this as:\nforall row in 1..mat.m do\n for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 {\n writeln(row);\n writeln(j);\n result(row) += mat.val(j) * vec(mat.col_ind(j));\n }\n\nNote that the first forall loop does not require curly brackets because its body is a single statement—the for-loop, which happens to have its own multi-statement body.\nBecause of the possibility of this class of errors, particularly as code is evolving, defensive Chapel programmers tend to prefer always using curly brackets to avoid the possibility of this mistake:\nforall row in 1..mat.m {\n for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 {\n writeln(row);\n writeln(j);\n result(row) += mat.val(j) * vec(mat.col_ind(j));\n }\n}\n\nAnd to be pedantic, your original code, indented to reflect the actual control flow would have been as follows (and a good Chapel-aware editor mode would ideally help you by indenting it in this way):\nforall row in 1..mat.m do\n for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do\n writeln(row);\nwriteln(j);\nresult(row) += mat.val(j) * vec(mat.col_ind(j));\n\nSeeing the code written this way, you can see why the compiler complains about j and row in the final two statements, but not about row on the first one, since it is within the nested loop as expected.\nIt's also why, when you introduce explicit variables for j and row they evaluate to zero in those lines. Essentially, your loops are introducing new variables that temporarily shadow the explicitly-defined ones, and the two statements that were outside the loop nest refer to those original variables rather than the loops' index variables as expected. As you note, since Chapel variables are default initialized, they have the value 0.\nAs one final piece of trivia, since compound statements { ... } are themselves single statements in Chapel, it is legal to write loops with both do and { ... }:\nforall row in 1..mat.m do {\n for j in mat.row_ptr(row)..mat.row_ptr(row + 1) - 1 do {\n writeln(row);\n writeln(j);\n result(row) += mat.val(j) * vec(mat.col_ind(j));\n }\n}\n\nHowever, this is not at all necessary, so I personally tend to discourage its use, considering it overkill and likely to lead to further misunderstandings.\n" ]
[ 2 ]
[]
[]
[ "chapel", "sparse_matrix" ]
stackoverflow_0074656857_chapel_sparse_matrix.txt
Q: What is the Web Service Endpoint for the 'Close Financial Periods' screen (AP506000)? What is the Web Service Endpoint for the 'Close Financial Periods' screen (AP506000)? I can't seem to find it (or anything similar) in the list of endpoints. I need to select the oldest open period from the list of periods displayed on that screen. Also - what's the best way to determine the name of the endpoint for a given screen, other than just looking for a similar name as the screen itself in the list of endpoints? Thanks... A: For the oldest open period, I would create a generic inquiry for the PX.Objects.GL.FinPeriods.MasterFinPeriod table. You can see ClosedInAP, ClosedInAr, etc. From there, you could make a filter to show only open periods, sorted ascending. Once you have it worked out, you could expose it to odata, or even extend your web service endpoint to get the data out. Here is an example of periods that are not closed in AP, but closed in AR (and unclosed).
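Once such a generic inquiry is exposed through OData, the oldest open period can be fetched with a single ordered query. Schematically, with a placeholder instance name and a placeholder inquiry called OpenFinPeriods (the exact URL shape depends on the Acumatica version and OData endpoint in use):

GET https://<instance>/odata/OpenFinPeriods?$orderby=FinPeriodID asc&$top=1

The inquiry itself does the 'open only' filtering; $orderby plus $top=1 then returns just the oldest row.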
What is the Web Service Endpoint for the 'Close Financial Periods' screen (AP506000)?
What is the Web Service Endpoint for the 'Close Financial Periods' screen (AP506000)? I can't seem to find it (or anything similar) in the list of endpoints. I need to select the oldest open period from the list of periods displayed on that screen. Also - what's the best way to determine the name of the endpoint for a given screen, other than just looking for a similar name as the screen itself in the list of endpoints? Thanks...
[ "For the oldest open period, I would create a generic inquiry for the PX.Objects.GL.FinPeriods.MasterFinPeriod table. You can see ClosedInAP, ClosedInAr, etc. From there, you could make a filter to show only open periods, sorted ascending. Once you have it worked out, you could expose it to odata, or even extend your web service endpoint to get the data out.\nHere is an example of periods that are not closed in AP, but closed in AR (and unclosed).\n\n" ]
[ 0 ]
[]
[]
[ "acumatica" ]
stackoverflow_0074645211_acumatica.txt
Q: Could not load file or assembly 'Microsoft.Identity.Client' I want to install the GitHub Copilot extension in Visual Studio 2022. When I install it, I close all VS 2022 instances as instructed, and then I am supposed to receive a message saying that it was done. However, I keep getting an error from the VSIX Installer saying: Could not load file or assembly 'Microsoft.Identity.Client, Version=4.30.0.0, Culture=neutral, PublicKeyToken=0a613f4dd989e8ae' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040). I get the same message when I try to update any of the extensions I have in Visual Studio 2022. PS: I already installed Copilot in VS Code and it is working perfectly. Thank you! A: Try logging in from a clean state: Close Visual Studio 2022. Delete C:\Users\<username>\AppData\Local\github-copilot\hosts.json Log out of GitHub in the web browser. Open Visual Studio.
Could not load file or assembly 'Microsoft.Identity.Client'
I want to install the GitHub Copilot extension in Visual Studio 2022. When I install it, I close all VS 2022 instances as instructed, and then I am supposed to receive a message saying that it was done. However, I keep getting an error from the VSIX Installer saying: Could not load file or assembly 'Microsoft.Identity.Client, Version=4.30.0.0, Culture=neutral, PublicKeyToken=0a613f4dd989e8ae' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040). I get the same message when I try to update any of the extensions I have in Visual Studio 2022. PS: I already installed Copilot in VS Code and it is working perfectly. Thank you!
[ "Try logging in from a clean state:\n\nClose Visual Studio 2022.\nDelete C:\\Users\\<username>\\AppData\\Local\\github-copilot\\hosts.json\nLog out of GitHub in the web browser.\nOpen Visual Studio.\n\n" ]
[ 0 ]
[]
[]
[ "github_copilot", "visual_studio_2022", "visual_studio_extensions" ]
stackoverflow_0074648097_github_copilot_visual_studio_2022_visual_studio_extensions.txt
Q: Is the behaviour of std::get on rvalue-reference tuples dangerous? The following code: #include <tuple> int main () { auto f = [] () -> decltype (auto) { return std::get<0> (std::make_tuple (0)); }; return f (); } (Silently) generates code with undefined behaviour - the temporary rvalue returned by make_tuple is propagated through the std::get<> and through the decltype(auto) onto the return type. So it ends up returning a reference to a temporary that has gone out of scope. See it here https://godbolt.org/g/X1UhSw. Now, you could argue that my use of decltype(auto) is at fault. But in my generic code (where the type of the tuple might be std::tuple<Foo &>) I don't want to always make a copy. I really do want to extract the exact value or reference from the tuple. My feeling is that this overload of std::get is dangerous: template< std::size_t I, class... Types > constexpr std::tuple_element_t<I, tuple<Types...> >&& get( tuple<Types...>&& t ) noexcept; Whilst propagating lvalue references onto tuple elements is probably sensible, I don't think that holds for rvalue references. I'm sure the standards committee thought this through very carefully, but can anyone explain to me why this was considered the best option? A: Consider the following example: void consume(foo&&); template <typename Tuple> void consume_tuple_first(Tuple&& t) { consume(std::get<0>(std::forward<Tuple>(t))); } int main() { consume_tuple_first(std::tuple{foo{}}); } In this case, we know that std::tuple{foo{}} is a temporary and that it will live for the entire duration of the consume_tuple_first(std::tuple{foo{}}) expression. We want to avoid any unnecessary copy and move, but still propagate the temporarity of foo{} to consume. The only way of doing that is by having std::get return an rvalue reference when invoked with a temporary std::tuple instance. live example on wandbox Changing std::get<0>(std::forward<Tuple>(t)) to std::get<0>(t) produces a compilation error (as expected) (on wandbox). Having a get alternative that returns by value results in an additional unnecessary move: template <typename Tuple> auto myget(Tuple&& t) { return std::get<0>(std::forward<Tuple>(t)); } template <typename Tuple> void consume_tuple_first(Tuple&& t) { consume(myget(std::forward<Tuple>(t))); } live example on wandbox but can anyone explain to me why this was considered the best option? Because it enables optional generic code that seamlessly propagates temporaries rvalue references when accessing tuples. The alternative of returning by value might result in unnecessary move operations. A: IMHO this is dangerous and quite sad since it defeats the purpose of the "most important const": Normally, a temporary object lasts only until the end of the full expression in which it appears. However, C++ deliberately specifies that binding a temporary object to a reference to const on the stack lengthens the lifetime of the temporary to the lifetime of the reference itself, and thus avoids what would otherwise be a common dangling-reference error. In light of the quote above, for many years C++ programmers have learned that this was OK: X const& x = f( /* ... */ ); Now, consider this code: struct X { void hello() const { puts("hello"); } ~X() { puts("~X"); } }; auto make() { return std::variant<X>{}; } int main() { auto const& x = std::get<X>(make()); // #1 x.hello(); } I believe anyone should be forgiven for thinking that line #1 is OK. 
However, since std::get returns a reference to an object that is going to be destroyed, x is a dangling reference. The code above outputs: ~X hello which shows that the object that x binds to is destroyed before hello() is called. Clang gives a warning about the issue but gcc and msvc don't. The same issue happens if (as in the OP) we use std::tuple instead of std::variant but, sadly enough, clang doesn't issue a warning for this case. A similar issue happens with std::optional and this value overload: constexpr T&& value() &&; This code, which uses the same X above, illustrates the issue: auto make() { return std::optional{X{}}; } int main() { auto const& x = make().value(); x.hello(); } The output is: ~X ~X hello Brace yourself for more of the same with C++23's std::expected and its methods value() and error(): constexpr T&& value() &&; constexpr E&& error() && noexcept; I'd rather pay the price of the move explained in Vittorio Romeo's post. Sure, I can avoid the issue by removing & from lines #1 and #2. My point is that the rule for the "most important const" just became more complicated and we need to consider if the expression involves std::get, std::optional::value, std::expected::value, std::expected::error, ...
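The simplest defensive pattern, and the one the last paragraph hints at, is to drop the & and take the element by value, which materializes a move instead of leaving a dangling reference:

auto x = std::get<X>(make()); // moves the X out of the dying temporary
x.hello();                    // safe: x owns the object now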
Is the behaviour of std::get on rvalue-reference tuples dangerous?
The following code: #include <tuple> int main () { auto f = [] () -> decltype (auto) { return std::get<0> (std::make_tuple (0)); }; return f (); } (Silently) generates code with undefined behaviour - the temporary rvalue returned by make_tuple is propagated through the std::get<> and through the decltype(auto) onto the return type. So it ends up returning a reference to a temporary that has gone out of scope. See it here https://godbolt.org/g/X1UhSw. Now, you could argue that my use of decltype(auto) is at fault. But in my generic code (where the type of the tuple might be std::tuple<Foo &>) I don't want to always make a copy. I really do want to extract the exact value or reference from the tuple. My feeling is that this overload of std::get is dangerous: template< std::size_t I, class... Types > constexpr std::tuple_element_t<I, tuple<Types...> >&& get( tuple<Types...>&& t ) noexcept; Whilst propagating lvalue references onto tuple elements is probably sensible, I don't think that holds for rvalue references. I'm sure the standards committee thought this through very carefully, but can anyone explain to me why this was considered the best option?
[ "Consider the following example:\nvoid consume(foo&&);\n\ntemplate <typename Tuple>\nvoid consume_tuple_first(Tuple&& t)\n{\n consume(std::get<0>(std::forward<Tuple>(t)));\n}\n\nint main()\n{\n consume_tuple_first(std::tuple{foo{}});\n}\n\nIn this case, we know that std::tuple{foo{}} is a temporary and that it will live for the entire duration of the consume_tuple_first(std::tuple{foo{}}) expression.\nWe want to avoid any unnecessary copy and move, but still propagate the temporarity of foo{} to consume. \nThe only way of doing that is by having std::get return an rvalue reference when invoked with a temporary std::tuple instance. \nlive example on wandbox\n\nChanging std::get<0>(std::forward<Tuple>(t)) to std::get<0>(t) produces a compilation error (as expected) (on wandbox).\n\nHaving a get alternative that returns by value results in an additional unnecessary move:\ntemplate <typename Tuple>\nauto myget(Tuple&& t)\n{\n return std::get<0>(std::forward<Tuple>(t));\n}\n\ntemplate <typename Tuple>\nvoid consume_tuple_first(Tuple&& t)\n{\n consume(myget(std::forward<Tuple>(t)));\n}\n\nlive example on wandbox\n\n\nbut can anyone explain to me why this was considered the best option?\n\nBecause it enables optional generic code that seamlessly propagates temporaries rvalue references when accessing tuples. The alternative of returning by value might result in unnecessary move operations.\n", "IMHO this is dangerous and quite sad since it defeats the purpose of the \"most important const\":\n\nNormally, a temporary object lasts only until the end of the full expression in which it appears. However, C++ deliberately specifies that binding a temporary object to a reference to const on the stack lengthens the lifetime of the temporary to the lifetime of the reference itself, and thus avoids what would otherwise be a common dangling-reference error.\n\nIn light of the quote above, for many years C++ programmers have learned that this was OK:\nX const& x = f( /* ... */ );\n\nNow, consider this code:\nstruct X {\n void hello() const { puts(\"hello\"); }\n ~X() { puts(\"~X\"); }\n};\n\nauto make() {\n return std::variant<X>{};\n}\n\nint main() {\n auto const& x = std::get<X>(make()); // #1\n x.hello();\n}\n\nI believe anyone should be forgiven for thinking that line #1 is OK. However, since std::get returns a reference to an object that is going to be destroyed, x is a dangling reference. The code above outputs:\n~X\nhello\n\nwhich shows that the object that x binds to is destroyed before hello() is called. Clang gives a warning about the issue but gcc and msvc don't. The same issue happens if (as in the OP) we use std::tuple instead of std::variant but, sadly enough, clang doesn't issues a warning for this case.\nA similar issue happens with std::optional and this value overload:\nconstexpr T&& value() &&;\n\nThis code, which uses the same X above, illustrates the issue:\nauto make() {\n return std::optional{X{}};\n}\n\nint main() {\n auto const& x = make().value();\n x.hello();\n}\n\nThe output is:\n~X\n~X\nhello\n\nBrace yourself for more of the same with C++23's std::except and its methods value() and error():\nconstexpr T&& value() &&;\nconstexpr E&& error() && noexcept;\n\nI'd rather pay the price of the move explained in Vittorio Romeo's post. Sure, I can avoid the issue by removing & from lines #1 and #2. 
My point is that the rule for the \"most important const\" just became more complicated and we need to consider if the expression involves std::get, std::optional::value, std::expected::value, std::expected::error, ...\n" ]
[ 3, 0 ]
[]
[]
[ "c++", "c++11", "c++14", "rvalue_reference", "stdtuple" ]
stackoverflow_0047905333_c++_c++11_c++14_rvalue_reference_stdtuple.txt
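A short sketch of the safe alternative hinted at in the second answer above: taking the element by value copies or moves it out of the temporary before the full expression ends. This reuses the hypothetical X and make() from that answer and is not part of the original posts.

```cpp
#include <cstdio>
#include <variant>

struct X {
    void hello() const { std::puts("hello"); }
    ~X() { std::puts("~X"); }
};

auto make() { return std::variant<X>{}; }

int main() {
    // By value: the element is copied/moved out of the temporary variant
    // before the full expression ends, so there is no dangling reference.
    auto x = std::get<X>(make());
    x.hello(); // prints "hello" after the temporary's "~X"
}
```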
Q: __init__.py issue in django test I have an issue with running a test in my Django project, using the command python manage.py test. It shows: user:~/workspace/connector$ docker-compose run --rm app sh -c "python manage.py test" Creating connector_app_run ... done Found 0 test(s). System check identified no issues (0 silenced). ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I was debugging it and I think it's probably an "init.py" issue. If I delete the init.py file from app.app (I have read somewhere that it can help), then I receive an error: ====================================================================== ERROR: app.tests.test_secrets (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: app.tests.test_secrets Traceback (most recent call last): File "/usr/local/lib/python3.9/unittest/loader.py", line 436, in _find_test_path module = self._get_module_from_name(name) File "/usr/local/lib/python3.9/unittest/loader.py", line 377, in _get_module_from_name __import__(name) File "/app/app/tests/test_secrets.py", line 12, in <module> from app.app import secrets ModuleNotFoundError: No module named 'app.app' Why does this error occur? PyCharm resolves the import normally, and as far as I know, since Python 3.3 it's not obligatory to put init.py into folders to make a package. This is the github link: https://github.com/MrHarvvey/connector.git Can you explain what I'm doing wrong here? A: So as per your project file structure, I changed from app.app import secrets to from app import secrets and then found the test cases were also failing, so I fixed them as well; you can review the changes here: https://github.com/MrHarvvey/connector/pull/1 Please let me know if you need anything else.
__init__.py issue in django test
I have an issue with running a test in my Django project, using the command python manage.py test. It shows: user:~/workspace/connector$ docker-compose run --rm app sh -c "python manage.py test" Creating connector_app_run ... done Found 0 test(s). System check identified no issues (0 silenced). ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I was debugging it and I think it's probably an "init.py" issue. If I delete the init.py file from app.app (I have read somewhere that it can help), then I receive an error: ====================================================================== ERROR: app.tests.test_secrets (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: app.tests.test_secrets Traceback (most recent call last): File "/usr/local/lib/python3.9/unittest/loader.py", line 436, in _find_test_path module = self._get_module_from_name(name) File "/usr/local/lib/python3.9/unittest/loader.py", line 377, in _get_module_from_name __import__(name) File "/app/app/tests/test_secrets.py", line 12, in <module> from app.app import secrets ModuleNotFoundError: No module named 'app.app' Why does this error occur? PyCharm resolves the import normally, and as far as I know, since Python 3.3 it's not obligatory to put init.py into folders to make a package. This is the github link: https://github.com/MrHarvvey/connector.git Can you explain what I'm doing wrong here?
[ "So as per your project file structure, I changed from app.app import secrets to from app import secrets and then found test cases are also failing, so I fixed them also, you can review the changes here:\nhttps://github.com/MrHarvvey/connector/pull/1\nPlease let me know you if you wanted something else.\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074658535_django_python.txt
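The fix in the record above comes down to making the import path match how manage.py test discovers packages: the test runner starts at the project root, so the module is app.secrets, not app.app.secrets. A minimal sketch, with file names taken from the question's traceback and a hypothetical test body:

```python
# Layout implied by the traceback (sketch):
#
#   /app
#   |-- manage.py
#   `-- app/
#       |-- __init__.py
#       |-- secrets.py
#       `-- tests/
#           |-- __init__.py
#           `-- test_secrets.py
#
# app/tests/test_secrets.py
from django.test import SimpleTestCase

from app import secrets  # not "from app.app import secrets"


class SecretsTests(SimpleTestCase):
    def test_secrets_module_is_importable(self):
        self.assertTrue(hasattr(secrets, "__name__"))
```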
Q: NameNode Format error "failure to login for principal: X from keytab Y: Unable to obtain password from user" with Kerberos in a Hadoop cluster I've been setting up Kerberos with my Hadoop cluster on Ubuntu 20.04.1 LTS and when I try to reformat the namenode in command line after changing all config files and setting everything up (including principals and keytabs), I'm being met by the error: Exiting with status 1: org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: hdfs/[email protected] from keytab /etc/security/keytabs/hdfs.service.keytab javax.security.auth.login.LoginException: Unable to obtain password from user This is taking place on my master node, with host name "hadoopmaster". Keytabs are stored in /etc/security/keytabs and when checking the keytabs using klist -t -k -e, the keytab has the correct principal "hdfs/hadoopmaster.406bigdata.com@406BIGDATA" My hdfs-site.xml file consists of the following properties (includes more, but not included in code below as shouldn't be relevant to the error): <property> <name>dfs.namenode.keytab.file</name> <value>/etc/security/keytabs/hdfs.service.keytab</value> </property <property> <name>dfs.namenode.kerberos.principal</name> <value>hdfs/[email protected]</value> </property> I also have yarn setup with keytabs and principals and that starts up fine (log files have been checked and no errors) and can access the WebUI. Tried changing filepaths of the keytabs outside of the root directory, double checked /etc/hosts file, the file has correct permissions and ownerships but nothing has helped fix the issue. A: What happens when you su hdfs and try and use the keytab? --> does hdfs user have permissions to access the file?
NameNode Format error "failure to login for principal: X from keytab Y: Unable to obtain password from user" with Kerberos in a Hadoop cluster
I've been setting up Kerberos with my Hadoop cluster on Ubuntu 20.04.1 LTS and when I try to reformat the namenode in command line after changing all config files and setting everything up (including principals and keytabs), I'm being met by the error: Exiting with status 1: org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: hdfs/[email protected] from keytab /etc/security/keytabs/hdfs.service.keytab javax.security.auth.login.LoginException: Unable to obtain password from user This is taking place on my master node, with host name "hadoopmaster". Keytabs are stored in /etc/security/keytabs and when checking the keytabs using klist -t -k -e, the keytab has the correct principal "hdfs/hadoopmaster.406bigdata.com@406BIGDATA" My hdfs-site.xml file consists of the following properties (includes more, but not included in code below as shouldn't be relevant to the error): <property> <name>dfs.namenode.keytab.file</name> <value>/etc/security/keytabs/hdfs.service.keytab</value> </property <property> <name>dfs.namenode.kerberos.principal</name> <value>hdfs/[email protected]</value> </property> I also have yarn setup with keytabs and principals and that starts up fine (log files have been checked and no errors) and can access the WebUI. Tried changing filepaths of the keytabs outside of the root directory, double checked /etc/hosts file, the file has correct permissions and ownerships but nothing has helped fix the issue.
[ "What happens when you su hdfs and try and use the keytab? --> does hdfs user have permissions to access the file?\n" ]
[ 0 ]
[]
[]
[ "hadoop", "hdfs", "kerberos", "namenode", "ubuntu_20.04" ]
stackoverflow_0074641048_hadoop_hdfs_kerberos_namenode_ubuntu_20.04.txt
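A quick way to run the check suggested in the answer above, sketched as shell commands. The keytab path and principal are copied from the question; that the NameNode runs as the hdfs user is an assumption.

```sh
# Check that the keytab is readable by the user running the NameNode
ls -l /etc/security/keytabs/hdfs.service.keytab

# As that user, try to obtain a ticket directly from the keytab
sudo -u hdfs kinit -kt /etc/security/keytabs/hdfs.service.keytab \
    hdfs/hadoopmaster.406bigdata.com@406BIGDATA

klist   # should now show a valid TGT for that principal
```

If kinit fails here with the same "Unable to obtain password" message, the problem is file permissions or a principal/keytab mismatch rather than the Hadoop configuration.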
Q: Test if class extends specific class in ArchUnit Is there a way to test that a class extends a specific class in an ArchUnit test? I have 3 classes: ClassA ClassB which extends ClassA ClassC which extends ClassB I need to validate that ClassC extends ClassA. The following test ArchRuleDefinition.classes() .that() .haveSimpleName("ClassC") .should() .beAssignableTo("ClassA") .check(classes); fails with violation error Architecture Violation [Priority: MEDIUM] - Rule 'classes that have simple name 'ClassC' should be assignable to ClassA' was violated (1 times): Class <ClassC> is not assignable to ClassA in ... A: You can only use beAssignableTo("ClassA") if ClassA resides in the default package. In general, you have to use fully qualified class name, i.e. ClassA.class.getName(). If ClassA is available on the classpath, I'd use beAssignableTo(ClassA.class).
Test if class extends specific class in ArchUnit
Is there a way to test that a class extends a specific class in an ArchUnit test? I have 3 classes: ClassA ClassB which extends ClassA ClassC which extends ClassB I need to validate that ClassC extends ClassA. The following test ArchRuleDefinition.classes() .that() .haveSimpleName("ClassC") .should() .beAssignableTo("ClassA") .check(classes); fails with violation error Architecture Violation [Priority: MEDIUM] - Rule 'classes that have simple name 'ClassC' should be assignable to ClassA' was violated (1 times): Class <ClassC> is not assignable to ClassA in ...
[ "You can only use beAssignableTo(\"ClassA\") if ClassA resides in the default package.\nIn general, you have to use fully qualified class name, i.e. ClassA.class.getName().\nIf ClassA is available on the classpath, I'd use beAssignableTo(ClassA.class).\n" ]
[ 0 ]
[]
[]
[ "archunit", "java" ]
stackoverflow_0074657679_archunit_java.txt
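Applying the answer above to the rule from the question, the fix is one line. The package prefix com.example is an assumption, since the question does not show the real package.

```java
// Hedged sketch: "com.example" stands in for the actual package of ClassA.
ArchRuleDefinition.classes()
        .that().haveSimpleName("ClassC")
        .should().beAssignableTo("com.example.ClassA") // fully qualified name
        .check(classes);

// Preferable when ClassA is on the test classpath:
ArchRuleDefinition.classes()
        .that().haveSimpleName("ClassC")
        .should().beAssignableTo(ClassA.class)
        .check(classes);
```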
Q: RANKX DAX not updating other graph I've got the following DAX Measure that provides me with the ability to filter down to the last XX of activity from an individual. I can only seem to add the measure to the Filter on Visual, so when choosing to filter down on, say, the last 10, this does not update other visuals in the report. What can I do so that I am able to view the last 10 activities, but for the other visuals to update? Rank = RANKX (ALLEXCEPT(Sheet1,Sheet1[Name]), CALCULATE(MAX(Sheet1[Date])),, DESC) A: It's likely you are applying that filter to only one visual. It's better to implement the logic in the DAX calculation. DAX Calculation Rank = VAR _Calc = RANKX ( ALLEXCEPT ( Sheet1, Sheet1[Name] ), CALCULATE ( MAX ( Sheet1[Date] ) ), , DESC ) RETURN IF ( _Calc <= 10, _Calc )
RANKX DAX not updating other graph
I've got the following DAX Measure that provides me with the ability to filter down to the last XX of activity from an individual. I can only seem to add the measure to the Filter on Visual, so when choosing to filter down on, say, the last 10, this does not update other visuals in the report. What can I do so that I am able to view the last 10 activities, but for the other visuals to update? Rank = RANKX (ALLEXCEPT(Sheet1,Sheet1[Name]), CALCULATE(MAX(Sheet1[Date])),, DESC)
[ "It's likely you are applying that filter to only one visual.\nIt's better to implement the logic in the DAX calculation.\nDAX Calculation\nRank =\nVAR _Calc =\n RANKX (\n ALLEXCEPT ( Sheet1, Sheet1[Name] ),\n CALCULATE ( MAX ( Sheet1[Date] ) ),\n ,\n DESC\n )\nRETURN\n IF ( _Calc <= 10, _Calc )\n\n" ]
[ 0 ]
[]
[]
[ "dax", "measure", "powerbi" ]
stackoverflow_0074657787_dax_measure_powerbi.txt
Q: Stripe Connect refund with transfer reversal: insufficient funds We are currently using Stripe Connect to accept payments on behalf of external platforms. The payment process works fine (we are using Transfers to transfer funds directly on payment to the connected account), like this: PaymentIntentCreateParams.Builder paramsBuilder = PaymentIntentCreateParams .builder() .setAmount(getFinalPurchasePrice()) .setCustomer(customerStripeId) .setPaymentMethod(getStripePaymentMethodId()) .setConfirm(true) .setOffSession(true) .setOnBehalfOf(stripeConnectedAccountId) .setTransferData(PaymentIntentCreateParams.TransferData.builder() .setAmount(getFinalTransferPrice()) .setDestination(stripeConnectedAccountId) .build()) .setCurrency(getCurrency().toString().toLowerCase()); Now we are facing the issue of refunds. In test mode they worked fine (and as expected). But in live mode we are getting "insufficient funds". This is our request: RefundCreateParams refundCreateParams = RefundCreateParams.builder() .setReverseTransfer(true) .setCharge(charge.getId()) .setAmount(amount) .setReason(RefundCreateParams.Reason.REQUESTED_BY_CUSTOMER) .build(); Refund.create(refundCreateParams, requestOptions); And this is the response: "error": { "message": "Insufficient funds in your Stripe balance to refund this amount.", "request_log_url": "xxxx", "type": "invalid_request_error" } } The charge used was a successful charge and was for more than the amount specified here. The connected account's balance also is high enough to cover the refund. Our account currently does not have enough balance to cover the amount specified in the refund, but since we are using transfer reversal I would have assumed that the reversed transfer is responsible for covering this. Am I wrong here, or are we doing something wrong? A: Answering my own question after chatting with Stripe support. As I said, we use Stripe Connect with transfers to immediately transfer a certain amount to the connected account and keep a part for ourselves. If we start a refund with transfer reversal it takes this "cut" into account and calculates what it needs to take from the connected account and what part it takes from the platform account. This led to the "insufficient funds" error, since our platform account did not have enough funds to cover that part at this time.
Stripe Connect refund with transfer reversal: insufficient funds
We are currently using Stripe Connect to accept payments on behalf of external platforms. The payment process works fine (we are using Transfers to transfer funds directly on payment to the connected account), like this: PaymentIntentCreateParams.Builder paramsBuilder = PaymentIntentCreateParams .builder() .setAmount(getFinalPurchasePrice()) .setCustomer(customerStripeId) .setPaymentMethod(getStripePaymentMethodId()) .setConfirm(true) .setOffSession(true) .setOnBehalfOf(stripeConnectedAccountId) .setTransferData(PaymentIntentCreateParams.TransferData.builder() .setAmount(getFinalTransferPrice()) .setDestination(stripeConnectedAccountId) .build()) .setCurrency(getCurrency().toString().toLowerCase()); Now we are facing the issue of refunds. In test mode they worked fine (and as expected). But in live mode we are getting "insufficient funds". This is our request: RefundCreateParams refundCreateParams = RefundCreateParams.builder() .setReverseTransfer(true) .setCharge(charge.getId()) .setAmount(amount) .setReason(RefundCreateParams.Reason.REQUESTED_BY_CUSTOMER) .build(); Refund.create(refundCreateParams, requestOptions); And this is the response: "error": { "message": "Insufficient funds in your Stripe balance to refund this amount.", "request_log_url": "xxxx", "type": "invalid_request_error" } } The charge used was a successful charge and was for more than the amount specified here. The connected account's balance also is high enough to cover the refund. Our account currently does not have enough balance to cover the amount specified in the refund, but since we are using transfer reversal I would have assumed that the reversed transfer is responsible for covering this. Am I wrong here, or are we doing something wrong?
[ "Answering my own question after chatting with Stripe support. As i said we use Stripe Connect with transfers to immediatly transfer a certain amount to the connected account and keep a part for ourselves.\nIf we start a refund with transfer reversal it takes this \"cut\" into account and calculates what it needs to take from the connected account and what part it takes from the platform account. This lead the \"insufficient funds\" error, since our platform account did not have enough funds to cover that part at this time.\n" ]
[ 0 ]
[]
[]
[ "java", "stripe_payments" ]
stackoverflow_0074589854_java_stripe_payments.txt
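Given the explanation in that self-answer, one defensive option is to check the platform's available balance before attempting the refund. A sketch with stripe-java; the currency literal and the reuse of amount/requestOptions from the question are assumptions.

```java
// Sketch: skip or delay the refund when the platform balance cannot
// cover the platform's share of the reversal. Uses com.stripe.model.Balance.
Balance balance = Balance.retrieve(requestOptions);
boolean covered = balance.getAvailable().stream()
        .anyMatch(b -> "eur".equals(b.getCurrency()) // currency is assumed
                && b.getAmount() >= amount);
if (!covered) {
    // top up the platform balance or retry later instead of failing the refund
}
```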
Q: Google OAuth 2.0: refresh_token expiry, how many days? I posted the following nearly 3 years ago Google OAuth 2.0: Refresh access_token and New refresh_token I need an update: do Google refresh_tokens expire? If they do, then how many days are their expiry? If Google refresh_tokens do expire, then do they expire differently per Google service? Some implementations of OAuth 2.0 authentication allow requests for refresh_token expiry as access_token expiry is provided by new access_token. Does Google OAuth 2.0 provide the same? Thank you A: Do Google refresh_tokens expire? If they do, then how many days are their expiry? You should check Refresh token. App set to production: if your app is set to production, then the following will cause a refresh token to expire: The user has revoked your app's access. The refresh token has not been used for six months. The user changed passwords and the refresh token contains Gmail scopes. The user account has exceeded a maximum number of granted (live) refresh tokens. The user belongs to a Google Cloud Platform organization that has session control policies in effect. Note for nr 4: There is currently a limit of 100 refresh tokens per Google Account per OAuth 2.0 client ID. If the limit is reached, creating a new refresh token automatically invalidates the oldest refresh token without warning. App set to test: if your app is in the testing phase, then the following is true. A Google Cloud Platform project with an OAuth consent screen configured for an external user type and a publishing status of "Testing" is issued a refresh token expiring in 7 days. If Google refresh_tokens do expire, then do they expire differently per Google service? Not sure what you mean by different service, this should be across the board. Some implementations of OAuth 2.0 authentication allow requests for refresh_token expiry as access_token expiry is provided by new access_token. Does Google OAuth 2.0 provide the same? Not sure what you mean here. If your refresh token has expired, you will need to request authorization from the user again. There is no other way to get a new refresh token.
Google OAuth 2.0: refresh_token expiry, how many days?
I posted the following nearly 3 years ago Google OAuth 2.0: Refresh access_token and New refresh_token I need an update: do Google refresh_tokens expire? If they do, then how many days are their expiry? If Google refresh_tokens do expire, then do they expire differently per Google service? Some implementations of OAuth 2.0 authentication allow requests for refresh_token expiry as access_token expiry is provided by new access_token. Does Google OAuth 2.0 provide the same? Thank you
[ "\nDo Google refresh_tokens expire? If they do, then how many days are their expiry?\n\nYou should check Refresh token\napp set to production.\nIf your app is set to production.\n\nThen the following will cause a refresh token to expire\n\nThe user has revoked your app's access.\nThe refresh token has not been used for six months.\nThe user changed passwords and the refresh token contains Gmail scopes.\nThe user account has exceeded a maximum number of granted (live) refresh tokens.\nThe user belongs to a Google Cloud Platform organization that has session control policies in effect.\n\nNote for nr 4:\nThere is currently a limit of 100 refresh tokens per Google Account per OAuth 2.0 client ID. If the limit is reached, creating a new refresh token automatically invalidates the oldest refresh token without warning.\napp set to test.\nIf your app is in testing phase. Then the following is true.\nA Google Cloud Platform project with an OAuth consent screen configured for an external user type and a publishing status of \"Testing\" is issued a refresh token expiring in 7 days.\n\n\n\nIf Google refresh_tokens do expire, then do they expire differently per Google service?\n\n\nNot sure what you mean by different service, this should be across the board.\n\n\n\nSome implementations of OAuth 2.0 authentication allow requests for refresh_token expiry as access_token expiry is provided by new access_token. Does Google OAuth 2.0 provide the same?\n\n\nNot sure what you mean here. If your refresh token has expire you will need to request authorizing of the user again. There is no other way to get a new refresh token.\n" ]
[ 0 ]
[]
[]
[ "google_oauth", "oauth_2.0", "refresh_token" ]
stackoverflow_0074659774_google_oauth_oauth_2.0_refresh_token.txt
Q: transform: translate function not working I'm a beginner in CSS and HTML and I've tried everything to center the stuff I've written in my project, but nothing is working, even though I compared my code to others and it was the same. Please help. I was using transform: translate(-50%, -50%); but every time I check, there is no difference in the webpage. I also tried ms-transform and webkit-transform and nothing worked. .center{ position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); width: 400px; background: white border-radius: 10px; } This is the exact code, please help :)
transform: translate function not working
I'm a beginner in CSS and HTML and I've tried everything to center the stuff I've written in my project, but nothing is working, even though I compared my code to others and it was the same. Please help. I was using transform: translate(-50%, -50%); but every time I check, there is no difference in the webpage. I also tried ms-transform and webkit-transform and nothing worked. .center{ position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); width: 400px; background: white border-radius: 10px; } This is the exact code, please help :)
[]
[]
[ "you have missed ; on background: white\n.center{\n position: absolute;\n top: 50%;\n left: 50%;\n transform: translate(-50%, -50%);\n width: 400px;\n background: white;\n border-radius: 10px;\n}\n\n" ]
[ -2 ]
[ "css", "html" ]
stackoverflow_0074659473_css_html.txt
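For completeness of the record above: once the missing semicolon pointed out in the comment is restored, the absolute-positioning approach works, but the same centering can also be done without it. A sketch; the parent selector is hypothetical.

```css
/* Alternative sketch: center .center inside a full-height parent */
.parent {
  display: grid;
  place-items: center;
  min-height: 100vh;
}
.center {
  width: 400px;
  background: white; /* the missing semicolon was the original bug */
  border-radius: 10px;
}
```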
Q: Mars MIPS assembly, addi instead of ori There is a question that I can't completely understand because of the various answers I have seen. This is the instruction sequence: lui $1,0xffffff00 ori $12,$1,0x0000ffff sra $10,$12,0x00000010 and $8,$12,$10 The question is: if you change the ori instruction to addi, what will be the value of $8? A: The problem is that there are many MIPS assemblers, and that the assemblers vary, so you have to know which one you're working with to understand what machine code you're actually getting (or if it will even accept that input). Then from the machine code, we can go strictly by the MIPS specification, and know exactly what a code sequence will do. However, the main difference on MIPS for addi vs. ori — beyond the "add" vs. "or" part — is that though these are both I-Type instructions having 16-bit immediate, addi will employ sign extension to bring the 16-bit immediate to 32 bits for the addition, while ori will apply zero extension to bring the 16-bit immediate to 32 bits for the logical or operation. Otherwise, since the low bits in $1 are zeros anyway, the actual add vs. or doesn't make a difference: add vs. or are the same if there's no carry, and when adding to zero there's no carry.
Mars MIPS assembly, addi instead of ori
There is a question that I can't completely understand because of the various answers I have seen. This is the instruction sequence: lui $1,0xffffff00 ori $12,$1,0x0000ffff sra $10,$12,0x00000010 and $8,$12,$10 The question is: if you change the ori instruction to addi, what will be the value of $8?
[ "The problem is that there are many MIPS assemblers, and that the assemblers vary, so you have to know which one you're working with to understand what machine code you're actually getting (or if it will even accept that input).\nThen from the machine code, we can go strictly by the MIPS specification, and know exactly what a code sequence will do.\nHowever, the main difference on MIPS for addi vs. ori — beyond the \"add\" vs. \"or\" part — is that though these are both I-Type instructions having 16-bit immediate, addi will employ sign extension to bring the 16-bit immediate to 32 bits for the addition, while ori will apply zero extension to bring the 16-bit immediate to 32 bits for the logical or operation.\nOtherwise, since the low bits in $1 are zeros anyway, the actual add vs. or doesn't make a difference: add vs. or are the same if there's no carry, and when adding to zero there's no carry.\n" ]
[ 1 ]
[]
[]
[ "assembly", "mips" ]
stackoverflow_0074659940_assembly_mips.txt
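A worked illustration of the extension point made in that answer. The concrete value of $1 depends on how the assembler treats the oversized lui immediate, so only the immediate handling is traced here.

```asm
# ori: the 16-bit immediate 0xffff is ZERO-extended to 0x0000ffff
#   $12 = $1 | 0x0000ffff          # sets only the low 16 bits
# addi: the 16-bit immediate 0xffff is SIGN-extended to 0xffffffff
#   $12 = $1 + 0xffffffff = $1 - 1
#
# With the low 16 bits of $1 already zero:
#   ori  result: low half = 0xffff, upper half unchanged
#   addi result: low half = 0xffff, upper half decremented by 1
# so the two $12 values differ by exactly 0x10000 before the sra/and steps.
```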
Q: Rendering Azure DevOps wiki markdown files as HTML We have a wiki in Azure DevOps and would like to port this to our web-based application to be visible within it. (Redirection is not an option: the wiki has to run inside the application.) Looking at the Azure DevOps wiki HTML, it seems the DevOps wiki markdown is rendered by multiple JS scripts: markdown-it, katex, a script for TOC generation, etc. Has anyone managed to merge all these JS scripts into one to render an Azure DevOps wiki markdown file outside of the DevOps environment? Note that this question is specific to the Azure DevOps markdown flavor, while this answer is generic and doesn't cover specific features such as LaTeX, table of contents, etc. in markdown! A: One possible method would be to use pandoc. The DevOps Markdown is a bit special, so you'd need to call pandoc with pandoc --from=markdown-space_in_atx_header-blank_before_header \ --standalone --output OUT.html INPUT.md
Rendering Azure DevOps wiki markdown files as HTML
We have a wiki in Azure DevOps and would like to port this to our web-based application to be visible within it. (Redirection is not an option: the wiki has to run inside the application.) Looking at the Azure DevOps wiki HTML, it seems the DevOps wiki markdown is rendered by multiple JS scripts: markdown-it, katex, a script for TOC generation, etc. Has anyone managed to merge all these JS scripts into one to render an Azure DevOps wiki markdown file outside of the DevOps environment? Note that this question is specific to the Azure DevOps markdown flavor, while this answer is generic and doesn't cover specific features such as LaTeX, table of contents, etc. in markdown!
[ "One possible method would be to use pandoc. The DevOps Markdown is a bit special, so you'd need to call pandoc with\npandoc --from=markdown-space_in_atx_header-blank_before_header \\\n --standalone --output OUT.html INPUT.md\n\n" ]
[ 0 ]
[]
[]
[ "azure_devops", "javascript", "markdown" ]
stackoverflow_0074659001_azure_devops_javascript_markdown.txt
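Since a wiki is many pages, the pandoc invocation from the answer is usually run over the whole cloned wiki repo. A sketch; the --mathjax flag for the LaTeX blocks is an assumption about what the pages need.

```sh
# Sketch: convert every Markdown page of a cloned Azure DevOps wiki repo.
find . -name '*.md' | while read -r f; do
  pandoc --from=markdown-space_in_atx_header-blank_before_header \
         --standalone --mathjax \
         --output "${f%.md}.html" "$f"
done
```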
Q: How to use GCC with Conan & CMake on MacOS I am trying to set up a build of a project on my Macbook which should use GCC with both Conan & CMake. I have set the following environment variables: export CC=/usr/local/bin/gcc-12 export CXX=/usr/local/bin/g++-12 When I run Conan all libraries are installed (and built if missing): conan install ../ --build=missing My .conan/profiles/default shows that Conan has used gcc-12. [settings] os=Macos os_build=Macos arch=armv8 arch_build=armv8 compiler=gcc compiler.version=12 compiler.libcxx=libstdc++11 build_type=Release [options] [build_requires] [env] When I run CMake with: cmake -DCMAKE_C_COMPILER=/usr/local/bin/gcc-12 -DCMAKE_CXX_COMPILER=/usr/local/bin/g++-12 ../ I get the following error cmake ../ -- Ubuntu: Using Conan to manage dependencies -- Conan: Adjusting output directories -- Conan: Using cmake global configuration -- Conan: Adjusting default RPATHs Conan policies -- Conan: Adjusting language standard -- Current conanbuildinfo.cmake directory: ~/some-dir/build CMake Error at build/conanbuildinfo.cmake:1812 (message): Incorrect 'apple-clang', is not the one detected by CMake: 'GNU' Call Stack (most recent call first): build/conanbuildinfo.cmake:1369 (conan_check_compiler) CMakeLists.txt:62 (conan_basic_setup) build/conanbuildinfo.cmake:1369 contains the following: message(FATAL_ERROR "Incorrect '${CONAN_COMPILER}', is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}'") This indicates that the compiler recorded by Conan is AppleClang while CMake detected GCC. I cannot see where conanbuildinfo.cmake gets the value for CONAN_COMPILER from. How can I force CMake & Conan to use GCC-12? A: CONAN_COMPILER is baked into conanbuildinfo.cmake when conan install generates that file, from the profile in effect at the time. Your error shows it was recorded as 'apple-clang', which means the conanbuildinfo.cmake in build/ is stale: it was generated before the profile was switched to GCC, while CMake itself is now (correctly) detecting 'GNU'. Keep compiler=gcc with compiler.version=12 in the profile (gcc-12 is not a valid value for Conan's compiler setting), delete the build directory (or at least conanbuildinfo.cmake and CMakeCache.txt), and regenerate: conan install ../ --build=missing cmake -DCMAKE_C_COMPILER=/usr/local/bin/gcc-12 -DCMAKE_CXX_COMPILER=/usr/local/bin/g++-12 ../ With CC and CXX still exported, Conan and CMake should then agree on GCC 12.
How to use GCC with Conan & CMake on MacOS
I am trying to set up a build of a project on my Macbook which should use GCC with both Conan & CMake. I have set the following environment variables: export CC=/usr/local/bin/gcc-12 export CXX=/usr/local/bin/g++-12 When I run Conan all libraries are installed (and built if missing): conan install ../ --build=missing My .conan/profiles/default shows that Conan has used gcc-12. [settings] os=Macos os_build=Macos arch=armv8 arch_build=armv8 compiler=gcc compiler.version=12 compiler.libcxx=libstdc++11 build_type=Release [options] [build_requires] [env] When I run CMake with: cmake -DCMAKE_C_COMPILER=/usr/local/bin/gcc-12 -DCMAKE_CXX_COMPILER=/usr/local/bin/g++-12 ../ I get the following error cmake ../ -- Ubuntu: Using Conan to manage dependencies -- Conan: Adjusting output directories -- Conan: Using cmake global configuration -- Conan: Adjusting default RPATHs Conan policies -- Conan: Adjusting language standard -- Current conanbuildinfo.cmake directory: ~/some-dir/build CMake Error at build/conanbuildinfo.cmake:1812 (message): Incorrect 'apple-clang', is not the one detected by CMake: 'GNU' Call Stack (most recent call first): build/conanbuildinfo.cmake:1369 (conan_check_compiler) CMakeLists.txt:62 (conan_basic_setup) build/conanbuildinfo.cmake:1369 contains the following: message(FATAL_ERROR "Incorrect '${CONAN_COMPILER}', is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}'") This indicates that the compiler recorded by Conan is AppleClang while CMake detected GCC. I cannot see where conanbuildinfo.cmake gets the value for CONAN_COMPILER from. How can I force CMake & Conan to use GCC-12?
[ "It looks like CMake is detecting the wrong compiler because the compiler option in your Conan profile is set to gcc, but the compiler.version option is set to 12. This could be causing CMake to interpret the compiler as AppleClang instead of GCC.\nTo fix this, you can try changing the compiler option in your Conan profile to gcc-12 like this:\n[settings]\nos=Macos\nos_build=Macos\narch=armv8\narch_build=armv8\ncompiler=gcc-12\ncompiler.version=12\ncompiler.libcxx=libstdc++11\nbuild_type=Release\n[options]\n[build_requires]\n[env]\n\nThis should let CMake and Conan correctly identify the compiler as GCC with version 12.\nAlternatively, you can try using the -DCMAKE_CXX_COMPILER and -DCMAKE_C_COMPILER flags to explicitly specify the path to the GCC-12 compiler when running CMake, like this:\ncmake -DCMAKE_C_COMPILER=/usr/local/bin/gcc-12 -DCMAKE_CXX_COMPILER=/usr/local/bin/g++-12 ../\n\nThis should override the default compiler detected by CMake and force it to use GCC-12.\n" ]
[ 0 ]
[]
[]
[ "c++", "cmake", "conan", "gcc", "macos" ]
stackoverflow_0074659157_c++_cmake_conan_gcc_macos.txt
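A clean-reconfigure sequence matching that diagnosis, sketched; the paths are the ones from the question.

```sh
# Sketch: clear cached compiler detection, then configure from scratch.
export CC=/usr/local/bin/gcc-12
export CXX=/usr/local/bin/g++-12
rm -rf build && mkdir build && cd build
conan install ../ --build=missing
cmake -DCMAKE_C_COMPILER="$CC" -DCMAKE_CXX_COMPILER="$CXX" ../
```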
Q: Is there a way to cut off part of a page with iframes? (in this case the top of it) Note: In this post, by "cut off" I am referring to hiding and showing parts of the page, not actually removing them. Sorry. In this post, I am trying to ask if you can HIDE the top as you can hide the bottom and disable scrolling. I don't think I made that very clear. So, I know you can cut off part of the bottom of a page with iframes, like this: webpage in iframe with iframe partially cut off However, can you cut off the top? I couldn't figure out how. I want to remove the blue bar on top so only the output shows. This is my code: <iframe src="https://replit.com/@RusticPPreston2/FireFoxProxy-1?lite=1&outputonly=1" scrolling="no" style="position:fixed; top:0; left:0; bottom:0; right:0; width:100%; height:400; border:none;"> Your browser doesn't support iframes </iframe> Thank you for any help. I tried to resize it and jump to bottom as well, but that did not work as it just displayed the top. Edit: I also would like scrolling disabled so they cannot scroll to the top. The goal is to show only the output of the entire thing. Thank you. (This is the "output" part if you do not understand: replit output in lite mode with output part circled) A: An iframe by itself has no built-in way to hide or show parts of the embedded page; it simply displays one HTML page within another. What you can do is crop it visually: wrap the iframe in a fixed-size container with overflow: hidden and shift the iframe upward by the height of the bar you want to hide, so only the output area stays visible. With scrolling="no" the user cannot scroll the hidden part back into view.
Is there a way to cut off part of a page with iframes? (in this case the top of it)
Note: In this post, by "cut off" I am referring to hiding and showing parts of the page, not actually removing them off. Sorry. In this post, I am trying to ask if you can HIDE the top as you can hide the bottom and disable scrolling. I don't think I made that very clear. So, I know you can cut off part of the bottom of a page with iframes, like this: webpage in iframe with iframe partially cut off However, can you cut off the top? I couldn't figure out how I want to remove the blue bar ontop so only the output shows. This is my code: <iframe src="https://replit.com/@RusticPPreston2/FireFoxProxy-1?lite=1&outputonly=1" scrolling="no" style="position:fixed; top:0; left:0; bottom:0; right:0; width:100%; height:400; border:none;"> Your browser doesn't support iframes </iframe> Thank you for any help. I tried to resize it and jump to bottom as well, but that did not work as it just displayed the top. Edit: I also would like scrolling disabled so they cannot scroll to the top. The goal is to show only the output of the entire thing. Thank you. (This is the "output" part if you do not understand: replit output in lite mode with output part circled)
[ "It is possible to hide or show part of a page using iframes, but it is not possible to \"cut off\" part of a page using iframes. Iframes are used to embed one HTML page within another, and they allow you to display the contents of one page within another. They do not have any built-in capabilities for hiding or showing parts of the page.\n" ]
[ 0 ]
[]
[]
[ "html", "iframe", "javascript" ]
stackoverflow_0074659913_html_iframe_javascript.txt
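A visual-cropping sketch for the record above: wrap the iframe in a clipping container and shift it up. The 60px offset is an assumption; set it to the height of the blue bar.

```css
/* Markup sketch:
   <div class="iframe-crop"><iframe src="..." scrolling="no"></iframe></div> */
.iframe-crop {
  position: fixed;
  top: 0; left: 0; right: 0; bottom: 0;
  overflow: hidden;            /* clips whatever is shifted out of the box */
}
.iframe-crop iframe {
  width: 100%;
  height: calc(100% + 60px);   /* compensate for the hidden strip */
  border: none;
  margin-top: -60px;           /* push the top bar out of view */
}
```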
Q: Offset and pagination working together for portfolio items in Wordpress Divi Theme I'm working on a wordpress website, which has to use the Divi theme. For some reasons they've enabled offsetting the blog posts from the modules frontend, but they don't do it for portfolio items. There's a page on my site which should contain all the case studies and it basicly looks like that: So, what I basicly did was that I've put on the first row the latest portfolio item, then I put on the second row two more portfolio items which I offset by 1 post, 3rd row is my testimonials content. My question is about the 4th row - I need to offset the portfolio items by 3 posts and I have to have pagination. The problem is that the moment I offset the items the pagination stops working. I tried gathering snippets and info all around the internet, but nothing seems to work. Here is what I've tried: // Native conditional tag only works on page load. Data update needs $conditional_tags data $is_front_page = et_fb_conditional_tag( 'is_front_page', $conditional_tags ); $is_search = et_fb_conditional_tag( 'is_search', $conditional_tags ); $offset_number = 3; // Prepare query arguments $query_args = array( 'posts_per_page' => (int) $args['posts_number'], 'post_type' => 'project', 'post_status' => 'publish', ); // Conditionally get paged data if ( defined( 'DOING_AJAX' ) && isset( $current_page[ 'paged'] ) ) { $et_paged = intval( $current_page[ 'paged' ] ); } else { $et_paged = $is_front_page ? get_query_var( 'page' ) : get_query_var( 'paged' ); $query_args['offset'] = ( ( $paged - 1 ) * intval( $posts_number ) ) + intval( $offset_number ); } The portfolio items get offsetted by 3 posts, but when I click older entries it doesn't work and I'm still shown the same items from the first page (offsetted by 3). Here is a pastebin of the whole module's code. A: Nobody wanted to help me, so I had to read wordpress documentations for 2 days, but it was worth it, I've managed to learn many new stuff! Here's the code if someone ever needs it: static function get_portfolio_item( $args = array(), $conditional_tags = array(), $current_page = array() ) { global $et_fb_processing_shortcode_object; $global_processing_original_value = $et_fb_processing_shortcode_object; $defaults = array( 'posts_number' => 10, 'include_categories' => '', 'fullwidth' => 'on', 'offset_number' => 3, ); $args = wp_parse_args( $args, $defaults ); // Native conditional tag only works on page load. Data update needs $conditional_tags data $is_front_page = et_fb_conditional_tag( 'is_front_page', $conditional_tags ); $is_search = et_fb_conditional_tag( 'is_search', $conditional_tags ); // Prepare query arguments $query_args = array( 'posts_per_page' => (int) $args['posts_number'], 'post_type' => 'project', 'post_status' => 'publish', 'offsetreduced' => true, ); // Conditionally get paged data if ( defined( 'DOING_AJAX' ) && isset( $current_page['paged'] ) ) { $et_paged = intval( $current_page[ 'paged' ] ); } else { $et_paged = $is_front_page ? get_query_var( 'page' ) : get_query_var( 'paged' ); } if ( $is_front_page ) { global $paged; $paged = $et_paged; } // support pagination in VB if ( isset( $args['__page'] ) ) { $et_paged = $args['__page']; } if ( ! is_search() ) { $query_args['paged'] = $et_paged; } if ( '' !== $args['offset_number'] && ! empty( $args['offset_number'] ) ) { /** * Offset + pagination don't play well. 
Manual offset calculation required * @see: https://codex.wordpress.org/Making_Custom_Queries_using_Offset_and_Pagination */ if ( $et_paged > 1 ) { $query_args['offset'] = ( ( $et_paged - 1 ) * intval( $args['posts_number'] ) ) + intval( $args['offset_number'] ); } else { $query_args['offset'] = intval( $args['offset_number'] ); } } // Passed categories parameter $include_categories = self::filter_invalid_term_ids( explode( ',', $args['include_categories'] ), 'project_category' ); if ( ! empty( $include_categories ) ) { $query_args['tax_query'] = array( array( 'taxonomy' => 'project_category', 'field' => 'id', 'terms' => $include_categories, 'operator' => 'IN', ) ); } // Get portfolio query $query = new WP_Query( $query_args ); // Format portfolio output, and add supplementary data $width = 'on' === $args['fullwidth'] ? 1080 : 400; $width = (int) apply_filters( 'et_pb_portfolio_image_width', $width ); $height = 'on' === $args['fullwidth'] ? 9999 : 284; $height = (int) apply_filters( 'et_pb_portfolio_image_height', $height ); $classtext = 'on' === $args['fullwidth'] ? 'et_pb_post_main_image' : ''; $titletext = get_the_title(); // Loop portfolio item data and add supplementary data if ( $query->have_posts() ) { $post_index = 0; while( $query->have_posts() ) { $query->the_post(); $categories = array(); $categories_object = get_the_terms( get_the_ID(), 'project_category' ); if ( ! empty( $categories_object ) ) { foreach ( $categories_object as $category ) { $categories[] = array( 'id' => $category->term_id, 'label' => $category->name, 'permalink' => get_term_link( $category ), ); } } // need to disable processnig to make sure get_thumbnail() doesn't generate errors $et_fb_processing_shortcode_object = false; // Get thumbnail $thumbnail = get_thumbnail( $width, $height, $classtext, $titletext, $titletext, false, 'Blogimage' ); $et_fb_processing_shortcode_object = $global_processing_original_value; // Append value to query post $query->posts[ $post_index ]->post_permalink = get_permalink(); $query->posts[ $post_index ]->post_thumbnail = print_thumbnail( $thumbnail['thumb'], $thumbnail['use_timthumb'], $titletext, $width, $height, '', false, true ); $query->posts[ $post_index ]->post_categories = $categories; $query->posts[ $post_index ]->post_class_name = get_post_class( '', get_the_ID() ); $post_index++; } $query->posts_next = array( 'label' => esc_html__( '&laquo; Older Entries', 'et_builder' ), 'url' => next_posts( $query->max_num_pages, false ), ); $query->posts_prev = array( 'label' => esc_html__( 'Next Entries &raquo;', 'et_builder' ), 'url' => ( $et_paged > 1 ) ? previous_posts( false ) : '', ); // Added wp_pagenavi support $query->wp_pagenavi = function_exists( 'wp_pagenavi' ) ? wp_pagenavi( array( 'query' => $query, 'echo' => false ) ) : false; } else if ( wp_doing_ajax() ) { // This is for the VB $query = array( 'posts' => self::get_no_results_template() ); } wp_reset_postdata(); return $query; } And in my childs function.php: add_filter('found_posts', 'adjust_offset_pagination', 1, 2 ); function adjust_offset_pagination($found_posts, $query) { if ( $query->get( 'offsetreduced' ) ) { return $found_posts - 3; } return $found_posts; } Cheers! A: You just need to use blog module to do this! Set it on "Projects". Good Luck!
Offset and pagination working together for portfolio items in Wordpress Divi Theme
I'm working on a wordpress website, which has to use the Divi theme. For some reasons they've enabled offsetting the blog posts from the modules frontend, but they don't do it for portfolio items. There's a page on my site which should contain all the case studies and it basicly looks like that: So, what I basicly did was that I've put on the first row the latest portfolio item, then I put on the second row two more portfolio items which I offset by 1 post, 3rd row is my testimonials content. My question is about the 4th row - I need to offset the portfolio items by 3 posts and I have to have pagination. The problem is that the moment I offset the items the pagination stops working. I tried gathering snippets and info all around the internet, but nothing seems to work. Here is what I've tried: // Native conditional tag only works on page load. Data update needs $conditional_tags data $is_front_page = et_fb_conditional_tag( 'is_front_page', $conditional_tags ); $is_search = et_fb_conditional_tag( 'is_search', $conditional_tags ); $offset_number = 3; // Prepare query arguments $query_args = array( 'posts_per_page' => (int) $args['posts_number'], 'post_type' => 'project', 'post_status' => 'publish', ); // Conditionally get paged data if ( defined( 'DOING_AJAX' ) && isset( $current_page[ 'paged'] ) ) { $et_paged = intval( $current_page[ 'paged' ] ); } else { $et_paged = $is_front_page ? get_query_var( 'page' ) : get_query_var( 'paged' ); $query_args['offset'] = ( ( $paged - 1 ) * intval( $posts_number ) ) + intval( $offset_number ); } The portfolio items get offsetted by 3 posts, but when I click older entries it doesn't work and I'm still shown the same items from the first page (offsetted by 3). Here is a pastebin of the whole module's code.
[ "Nobody wanted to help me, so I had to read wordpress documentations for 2 days, but it was worth it, I've managed to learn many new stuff! Here's the code if someone ever needs it:\nstatic function get_portfolio_item( $args = array(), $conditional_tags = array(), $current_page = array() ) {\n global $et_fb_processing_shortcode_object;\n\n $global_processing_original_value = $et_fb_processing_shortcode_object;\n\n $defaults = array(\n 'posts_number' => 10,\n 'include_categories' => '',\n 'fullwidth' => 'on',\n 'offset_number' => 3,\n );\n\n $args = wp_parse_args( $args, $defaults );\n\n // Native conditional tag only works on page load. Data update needs $conditional_tags data\n $is_front_page = et_fb_conditional_tag( 'is_front_page', $conditional_tags );\n $is_search = et_fb_conditional_tag( 'is_search', $conditional_tags );\n\n // Prepare query arguments\n $query_args = array(\n 'posts_per_page' => (int) $args['posts_number'],\n 'post_type' => 'project',\n 'post_status' => 'publish',\n 'offsetreduced' => true,\n );\n\n // Conditionally get paged data\n if ( defined( 'DOING_AJAX' ) && isset( $current_page['paged'] ) ) {\n $et_paged = intval( $current_page[ 'paged' ] );\n } else {\n $et_paged = $is_front_page ? get_query_var( 'page' ) : get_query_var( 'paged' );\n }\n\n if ( $is_front_page ) {\n global $paged;\n $paged = $et_paged;\n }\n\n // support pagination in VB\n if ( isset( $args['__page'] ) ) {\n $et_paged = $args['__page'];\n }\n\n if ( ! is_search() ) {\n $query_args['paged'] = $et_paged;\n }\n\n if ( '' !== $args['offset_number'] && ! empty( $args['offset_number'] ) ) {\n /**\n * Offset + pagination don't play well. Manual offset calculation required\n * @see: https://codex.wordpress.org/Making_Custom_Queries_using_Offset_and_Pagination\n */\n if ( $et_paged > 1 ) {\n $query_args['offset'] = ( ( $et_paged - 1 ) * intval( $args['posts_number'] ) ) + intval( $args['offset_number'] );\n } else {\n $query_args['offset'] = intval( $args['offset_number'] );\n }\n }\n\n // Passed categories parameter\n $include_categories = self::filter_invalid_term_ids( explode( ',', $args['include_categories'] ), 'project_category' );\n\n if ( ! empty( $include_categories ) ) {\n $query_args['tax_query'] = array(\n array(\n 'taxonomy' => 'project_category',\n 'field' => 'id',\n 'terms' => $include_categories,\n 'operator' => 'IN',\n )\n );\n }\n\n // Get portfolio query\n $query = new WP_Query( $query_args );\n\n // Format portfolio output, and add supplementary data\n $width = 'on' === $args['fullwidth'] ? 1080 : 400;\n $width = (int) apply_filters( 'et_pb_portfolio_image_width', $width );\n $height = 'on' === $args['fullwidth'] ? 9999 : 284;\n $height = (int) apply_filters( 'et_pb_portfolio_image_height', $height );\n $classtext = 'on' === $args['fullwidth'] ? 'et_pb_post_main_image' : '';\n $titletext = get_the_title();\n\n // Loop portfolio item data and add supplementary data\n if ( $query->have_posts() ) {\n $post_index = 0;\n while( $query->have_posts() ) {\n $query->the_post();\n\n $categories = array();\n\n $categories_object = get_the_terms( get_the_ID(), 'project_category' );\n\n if ( ! 
empty( $categories_object ) ) {\n foreach ( $categories_object as $category ) {\n $categories[] = array(\n 'id' => $category->term_id,\n 'label' => $category->name,\n 'permalink' => get_term_link( $category ),\n );\n }\n }\n\n // need to disable processnig to make sure get_thumbnail() doesn't generate errors\n $et_fb_processing_shortcode_object = false;\n\n // Get thumbnail\n $thumbnail = get_thumbnail( $width, $height, $classtext, $titletext, $titletext, false, 'Blogimage' );\n\n $et_fb_processing_shortcode_object = $global_processing_original_value;\n\n // Append value to query post\n $query->posts[ $post_index ]->post_permalink = get_permalink();\n $query->posts[ $post_index ]->post_thumbnail = print_thumbnail( $thumbnail['thumb'], $thumbnail['use_timthumb'], $titletext, $width, $height, '', false, true );\n $query->posts[ $post_index ]->post_categories = $categories;\n $query->posts[ $post_index ]->post_class_name = get_post_class( '', get_the_ID() );\n\n $post_index++;\n }\n\n $query->posts_next = array(\n 'label' => esc_html__( '&laquo; Older Entries', 'et_builder' ),\n 'url' => next_posts( $query->max_num_pages, false ),\n );\n\n $query->posts_prev = array(\n 'label' => esc_html__( 'Next Entries &raquo;', 'et_builder' ),\n 'url' => ( $et_paged > 1 ) ? previous_posts( false ) : '',\n );\n\n // Added wp_pagenavi support\n $query->wp_pagenavi = function_exists( 'wp_pagenavi' ) ? wp_pagenavi( array(\n 'query' => $query,\n 'echo' => false\n ) ) : false;\n } else if ( wp_doing_ajax() ) {\n // This is for the VB\n $query = array( 'posts' => self::get_no_results_template() );\n }\n\n wp_reset_postdata();\n\n return $query;\n}\n\nAnd in my childs function.php:\nadd_filter('found_posts', 'adjust_offset_pagination', 1, 2 );\nfunction adjust_offset_pagination($found_posts, $query) {\n if ( $query->get( 'offsetreduced' ) ) {\n return $found_posts - 3;\n }\n return $found_posts;\n}\n\nCheers!\n", "You just need to use blog module to do this!\nSet it on \"Projects\".\nGood Luck!\n" ]
[ 1, 0 ]
[]
[]
[ "php", "wordpress" ]
stackoverflow_0050544046_php_wordpress.txt
Q: Select dropdown option with Selenium (Python) I'm kinda new at Selenium, so I set myself a project. I'm trying to get as much information as I can from this URL https://statusinvest.com.br/acoes/proventos/ibovespa So far I have been able to do everything, EXCEPT change the default option at the "Filtro por Índice". I would like to change it from "Ibovespa" to "--GERAL--" but it has been harder than I would expect! I tried via classical XPath (find then click) and by the Select() class in Selenium, but it appears to be beyond my knowledge and I'm totally stuck... Does anyone have any tip on how to accomplish it? Thanks! A: So a very simple way to change the input option would be to do: from selenium.webdriver.common.by import By select_obj = driver.find_element(By.CLASS_NAME, 'select-wrapper') # object that contains all of the elements for first input selector select_obj.find_element(By.TAG_NAME, 'input').click() # click the input object to bring up the options select_obj.find_elements(By.TAG_NAME, 'li')[0].click() # click the first option
Select dropdown option with Selenium (Python)
I'm kinda new at Selenium, so I set myself a project. I'm trying to get as much information as I can from this URL https://statusinvest.com.br/acoes/proventos/ibovespa So far I have been able to do everything, EXCEPT change the default option at the "Filtro por Índice". I would like to change it from "Ibovespa" to "--GERAL--" but it has been harder than I would expect! I tried via classical XPath (find then click) and by the Select() class in Selenium, but it appears to be beyond my knowledge and I'm totally stuck... Does anyone have any tip on how to accomplish it? Thanks!
[ "So a very simple way to change the input option would be to do:\nfrom selenium.webdriver.common.by import By\n\nselect_obj = driver.find_element(By.CLASS_NAME, 'select-wrapper') # object that contains all of the elements for first input selector\nselect_obj.find_element(By.TAG_NAME, 'input').click() # click the input object to bring up the options\nselect_obj.find_elements(By.TAG_NAME, 'li')[0].click() # click the first option\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074659548_python_selenium.txt
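A slightly more robust variant of that answer using explicit waits, since the styled dropdown is rendered dynamically. That "--GERAL--" is the first li element is an assumption.

```python
# Sketch: same idea as the answer above, but waiting for the widget
# instead of assuming it is already rendered.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
select_obj = wait.until(
    EC.presence_of_element_located((By.CLASS_NAME, "select-wrapper"))
)
select_obj.find_element(By.TAG_NAME, "input").click()  # open the dropdown
options = wait.until(lambda d: select_obj.find_elements(By.TAG_NAME, "li"))
options[0].click()  # assumed to be the "--GERAL--" entry
```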
Q: How to add a TypeScript union type to data-testid? I want to use the data-testid attribute for e2e testing in my react app, and I'd be happy to use a custom union type for it for type-safe testing. I'm using React and MUI. How can I achieve a type-safe data-testid? I'm wondering if I have to override some type around JSX elements. A: In order to use the data-testid attribute for type-safe e2e testing in your React app, you will need to make a few changes to your code. First, you will need to create a custom type for the data-testid attribute. This type should be a union of all the possible values that the data-testid attribute can have in your app. For example: type TestId = 'button' | 'input' | 'form' | 'list-item'; Next, you will need to modify the type definition for the JSX.IntrinsicElements interface in the @types/react package. This interface defines the types for the built-in HTML elements that can be used in React, such as div, button, and form. To add the data-testid attribute to this interface, you will need to create an extension for the JSX.IntrinsicElements interface and add a new property for the data-testid attribute. This property should be of the type that you defined earlier, and it should be optional (using the ? syntax) so that it does not affect the types of existing HTML elements. declare namespace JSX { interface IntrinsicElements { [elemName: string]: any; 'data-testid'?: TestId; } } With these changes in place, you will be able to use the data-testid attribute in your React components and have type-safe access to its values. For example: const MyComponent: React.FC = () => ( <button data-testid="button">Click me</button> );
How to add a TypeScript union type to data-testid?
I want to use the data-testid attribute for e2e testing in my react app, and I'd be happy to use a custom union type for it for type-safe testing. I'm using React and MUI. How can I achieve a type-safe data-testid? I'm wondering if I have to override some type around JSX elements.
[ "In order to use the data-testid attribute for type-safe e2e testing in your React app, you will need to make a few changes to your code.\nFirst, you will need to create a custom type for the data-testid attribute. This type should be a union of all the possible values that the data-testid attribute can have in your app. For example:\ntype TestId = 'button' | 'input' | 'form' | 'list-item';\n\nNext, you will need to modify the type definition for the JSX.IntrinsicElements interface in the @types/react package. This interface defines the types for the built-in HTML elements that can be used in React, such as div, button, and form.\nTo add the data-testid attribute to this interface, you will need to create an extension for the JSX.IntrinsicElements interface and add a new property for the data-testid attribute. This property should be of the type that you defined earlier, and it should be optional (using the ? syntax) so that it does not affect the types of existing HTML elements.\ndeclare namespace JSX {\n interface IntrinsicElements {\n [elemName: string]: any;\n data-testid?: TestId;\n }\n}\n\nWith these changes in place, you will be able to use the data-testid attribute in your React components and have type-safe access to its values. For example:\nconst MyComponent: React.FC = () => (\n <button data-testid=\"button\">Click me</button>\n);\n\n" ]
[ 0 ]
[]
[]
[ "reactjs", "testing", "typescript" ]
stackoverflow_0074660034_reactjs_testing_typescript.txt
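An alternative augmentation that is often used instead of touching JSX.IntrinsicElements directly: extend React's own HTMLAttributes. A sketch reusing the TestId union from the answer; the file name is hypothetical.

```typescript
// types/testid.d.ts - sketch; the file must be a module for the
// augmentation to apply, hence the bare import.
import 'react';

type TestId = 'button' | 'input' | 'form' | 'list-item';

declare module 'react' {
  interface HTMLAttributes<T> {
    'data-testid'?: TestId; // hyphenated keys must be quoted
  }
}
```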
Q: return the value with an array of objects I thought using bracket or dot notation, like people.pets or people[pets], would return the pets, but I'm not having luck. This function will be called with an array of objects. Each object represents an owner and will have a pets property, which will be an array of pet names. The function should return an array of all the pets' names. If passed an empty array the function should return an empty array. A typical array of owners is shown below: [ { name: 'Malcolm', pets: ['Bear', 'Minu'], }, { name: 'Caroline', pets: ['Basil', 'Hamish'], }, ]; A: That would be a perfect example for the use of Array.reduce(). It would however keep duplicate names: if two owners had the same pet name, it would appear twice, so it depends on what you want. const owners = [ { name: "Malcolm", pets: ["Bear", "Minu"] }, { name: "Caroline", pets: ["Basil", "Hamish"] }, ]; const pets = owners.reduce((acc, item) => acc.concat(item.pets), []);
return the value with an array of objects
I thought using bracket or dot notation, like people.pets or people[pets], would return the pets, but I'm not having luck. This function will be called with an array of objects. Each object represents an owner and will have a pets property, which will be an array of pet names. The function should return an array of all the pets' names. If passed an empty array the function should return an empty array. A typical array of owners is shown below: [ { name: 'Malcolm', pets: ['Bear', 'Minu'], }, { name: 'Caroline', pets: ['Basil', 'Hamish'], }, ];
[ "That would be a perfect example for the use of the Array.reduce().\nIt would however keep duplicates names. Let's say if two owners had the same pet name, it would appear twice, so it depends on what you want.\nconst owners = [\n { name: \"Malcolm\", pets: [\"Bear\", \"Minu\"] },\n { name: \"Caroline\", pets: [\"Basil\", \"Hamish\"] },\n];\n\nconst pets = owners.reduce((acc, item) => acc.concat(item.pets), []);\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "javascript", "object" ]
stackoverflow_0074660021_arrays_javascript_object.txt
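An equivalent one-liner for the record above, using Array.prototype.flatMap (ES2019+); it keeps duplicates just like the reduce version.

```javascript
// Flatten every owner's pets array into one array of names.
const getPets = (owners) => owners.flatMap((owner) => owner.pets);

getPets(owners); // ['Bear', 'Minu', 'Basil', 'Hamish']
getPets([]);     // []
```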
Q: Acumatica - Get multiple selected lines from grid in code behind I am selecting multiple lines (ctrl/shift+click) from the grid on the Sales Order screen and want an action to have access to what was selected. How do I access the list of what's selected on the grid from the code behind? A: As stated, add a Selected field to the screen. First you would add a DAC extension to add the Selected field, which is not a DB field. #region Selected [PXBool] [PXUIField(DisplayName = "Selected")] public virtual bool? Selected { get; set; } public abstract class selected : PX.Data.BQL.BqlBool.Field<selected> { } #endregion From there, you can add the Selected field to your table in the UI. Also, ensure that CommitChanges is set on that field, or on an action that you can call to query the values. Finally, you can just run a foreach over the view, and check for the Selected field you added: foreach (SOLine line in Base.Transactions.Select()) { SOLineExt lineExt = line.GetExtension<SOLineExt>(); if (lineExt.Selected == true) { //execute code on the record } }
Acumatica - Get multiple selected lines from grid in code behind
I am selecting multiple lines (ctrl/shift+click) from the grid on the Sales Order screen and want an action to have access to what was selected. How do I access the list of what's selected on the grid from the code behind?
[ "As stated, add the selected screen. First you would add a DAC extension to add the selected field, which is not a DB field.\n #region Selected \n [PXBool]\n [PXUIField(DisplayName = \"Selected\")]\n public virtual bool? Selected { get; set; }\n public abstract class selected : PX.Data.BQL.BqlBool.Field<selected> { }\n #endregion\n\nFrom there, you can add the selected field to your table in the UI. Also, ensure that commit changes is set on that field, or an action that you may call to query.\nFinally, you can just run a foreach for the view, and check for the selected field you added:\n foreach (SOLine line in Base.Transactions.Select())\n {\n SOLineExt lineExt = line.GetExtension<SOLineExt>();\n if (line.Selected == true)\n {\n //execute code on the record\n }\n }\n\n" ]
[ 0 ]
[]
[]
[ "acumatica", "grid" ]
stackoverflow_0066324324_acumatica_grid.txt
Q: OpenMP does not terminate I've taken an example of simple OpenMP code.
#include <stdio.h>
#include <omp.h>

int main(int argc, char** argv){
    int i;
    int thread_id;

    omp_set_num_threads(4);

    #pragma omp parallel
    {
        thread_id = omp_get_thread_num();

        for( int i = 0; i < omp_get_num_threads(); i++){
            if(i == omp_get_thread_num()){
                printf("Hello from process: %d\n", thread_id);
            }
            #pragma omp barrier
        }
    }
    return 0;
}

Compiled it with gcc -fopenmp omp_test_hello.c -o a.exe. I did not change or set any environment variables (I've decided that internal directives in code should work similarly). When I execute the file I get the following output:
Hello from process: 0
Hello from process: 3
Hello from process: 3
Hello from process: 3

After that, the execution of the program does not stop, so it seems that something blocks it from terminating. I've tried an even simpler example of the code without the barrier and the for loop. It similarly does not terminate, though the output includes "signals" from all threads:
int main(int argc, char** argv){
    int i;
    int thread_id;

    omp_set_num_threads(4);

    #pragma omp parallel
    {
        thread_id = omp_get_thread_num();
        printf("Hello from process: %d\n", thread_id);
    }
    return 0;
}

Output:
Hello from process: 2
Hello from process: 1
Hello from process: 0
Hello from process: 3

I've also managed to test these examples on Linux and got the same results. So what could be the problem?
A: Problem solved by reinstalling the MinGW compiler.
OpenMP does not terminate
I've taken an example of simple OpenMP code. #include <stdio.h> #include <omp.h> int main(int argc, char** argv){ int i; int thread_id; omp_set_num_threads(4); #pragma omp parallel { thread_id = omp_get_thread_num(); for( int i = 0; i < omp_get_num_threads(); i++){ if(i == omp_get_thread_num()){ printf("Hello from process: %d\n", thread_id); } #pragma omp barrier } } return 0; } Compiled it with gcc -fopenmp omp_test_hello.c -o a.exe. I did not change or set any environment variables (I've decided that internal directives in code should work similarly). When I execute the file I get the following output: Hello from process: 0 Hello from process: 3 Hello from process: 3 Hello from process: 3 After that, the execution of the program does not stop, so it seems that something blocks it from terminating. I've tried an even simpler example of the code without the barrier and the for loop. It similarly does not terminate, though the output includes "signals" from all threads: int main(int argc, char** argv){ int i; int thread_id; omp_set_num_threads(4); #pragma omp parallel { thread_id = omp_get_thread_num(); printf("Hello from process: %d\n", thread_id); } return 0; } Output: Hello from process: 2 Hello from process: 1 Hello from process: 0 Hello from process: 3 I've also managed to test these examples on Linux and got the same results. So what could be the problem?
[ "Problem solved by reinstalling mingw compiler.\n" ]
[ 0 ]
[]
[]
[ "c", "openmp" ]
stackoverflow_0074614165_c_openmp.txt
Q: SwiftUI ShareLink run code before sharesheet is shown I have a ShareLink to share an image
ShareLink(item: image) {
    Image(systemName: "square.and.arrow.up")
}

Now before I can share this image, I have to generate it with some function
@State var image: UIImage

func getImage() {
    // some code that updates the @State variable
}

My ShareLink itself is in a context menu. My problem is that it is too expensive to generate this image (call the getImage() function) every time the view refreshes or the context menu is opened.
Is there any way I can run code when the user taps on this ShareLink, with the results then shown in the Share sheet?
Note: I know this is possible using UIKit as a fallback to generate the sharesheet, using a function like this:
func actionSheet() {
    guard let urlShare = URL(string: "https://developer.apple.com/xcode/swiftui/") else { return }
    let activityVC = UIActivityViewController(activityItems: [urlShare], applicationActivities: nil)
    UIApplication.shared.windows.first?.rootViewController?.present(activityVC, animated: true, completion: nil)
}

as demonstrated in this article: https://medium.com/swift-productions/sharesheet-uiactivityviewcontroller-swiftui-47abcd69aba6
I am instead wondering if there is a way to do this with the new iOS 16 ShareLink
A: I used @State var imageForShare: ImageFile? = nil for the image I wanted to share.
I don't think you need to generate the image every time the view refreshes. Just generate the image (in an async way) only when the content of the image changes; that way SwiftUI knows when it needs to update the image.
Since the async generation works in the background, it should not affect your view rendering.
SwiftUI ShareLink run code before sharesheet is shown
I have a ShareLink to share an image ShareLink(item: image) { Image(systemName: "square.and.arrow.up") } Now before I can share this image, I have to generate it with some function @State var image: UIImage func getImage() { // some code that updates the @State variable } My ShareLink itself is in a context menu. My problem is that it is too expensive to generate this image (call the getImage() function) every time the view refreshes or the context menu is opened. Is there any way I can run code when the user taps on this ShareLink, with the results then shown in the Share sheet? Note: I know this is possible using UIKit as a fallback to generate the sharesheet, using a function like this: func actionSheet() { guard let urlShare = URL(string: "https://developer.apple.com/xcode/swiftui/") else { return } let activityVC = UIActivityViewController(activityItems: [urlShare], applicationActivities: nil) UIApplication.shared.windows.first?.rootViewController?.present(activityVC, animated: true, completion: nil) } as demonstrated in this article: https://medium.com/swift-productions/sharesheet-uiactivityviewcontroller-swiftui-47abcd69aba6 I am instead wondering if there is a way to do this with the new iOS 16 ShareLink
[ "I used @State var imageForShare: ImageFile? = nil for the image I wanted to share.\nI don't think you need to generate the image every time the view refreshes. Just only generate the image (in async way) whenever the content of the image changes, and that way SwiftUI knows when it needs to update the image.\nSince the async generating working in the background, it should not affect your view rendering.\n" ]
[ 0 ]
[]
[]
[ "swift", "swiftui" ]
stackoverflow_0073918417_swift_swiftui.txt
Q: Why is int rather than unsigned int used for C and C++ for loops? This is a rather silly question but why is int commonly used instead of unsigned int when defining a for loop for an array in C or C++?
for(int i;i<arraySize;i++){}
for(unsigned int i;i<arraySize;i++){}

I recognize the benefits of using int when doing something other than array indexing and the benefits of an iterator when using C++ containers. Is it just because it does not matter when looping through an array? Or should I avoid it altogether and use a different type such as size_t?
A: Using int is more correct from a logical point of view for indexing an array.
unsigned semantic in C and C++ doesn't really mean "not negative" but it's more like "bitmask" or "modulo integer".
To understand why unsigned is not a good type for a "non-negative" number please consider these totally absurd statements:

Adding a possibly negative integer to a non-negative integer you get a non-negative integer
The difference of two non-negative integers is always a non-negative integer
Multiplying a non-negative integer by a negative integer you get a non-negative result

Obviously none of the above phrases make any sense... but it's how C and C++ unsigned semantic indeed works.
Actually using an unsigned type for the size of containers is a design mistake of C++ and unfortunately we're now doomed to use this wrong choice forever (for backward compatibility). You may like the name "unsigned" because it's similar to "non-negative" but the name is irrelevant and what counts is the semantic... and unsigned is very far from "non-negative".
For this reason when coding most loops on vectors my personally preferred form is:
for (int i=0,n=v.size(); i<n; i++) {
    ...
}

(of course assuming the size of the vector is not changing during the iteration and that I actually need the index in the body as otherwise the for (auto& x : v)... is better).
This running away from unsigned as soon as possible and using plain integers has the advantage of avoiding the traps that are a consequence of unsigned size_t design mistake. For example consider:
// draw lines connecting the dots
for (size_t i=0; i<pts.size()-1; i++) {
    drawLine(pts[i], pts[i+1]);
}

the code above will have problems if the pts vector is empty because pts.size()-1 is a huge nonsense number in that case. Dealing with expressions where a < b-1 is not the same as a+1 < b even for commonly used values is like dancing in a minefield.
Historically the justification for having size_t unsigned is for being able to use the extra bit for the values, e.g. being able to have 65535 elements in arrays instead of just 32767 on 16-bit platforms. In my opinion even at that time the extra cost of this wrong semantic choice was not worth the gain (and if 32767 elements are not enough now then 65535 won't be enough for long anyway).
Unsigned values are great and very useful, but NOT for representing container size or for indexes; for size and index regular signed integers work much better because the semantic is what you would expect.
Unsigned values are the ideal type when you need the modulo arithmetic property or when you want to work at the bit level.
A: This is a more general phenomenon, often people don't use the correct types for their integers. Modern C has semantic typedefs that are much preferable over the primitive integer types. E.g. everything that is a "size" should just be typed as size_t. If you use the semantic types systematically for your application variables, loop variables come much easier with these types, too.
And I have seen several bugs that were difficult to detect that came from using int or so. Code that all of a sudden crashed on large matrices and stuff like that. Just coding correctly with correct types avoids that.
A: It's purely laziness and ignorance. You should always use the right types for indices, and unless you have further information that restricts the range of possible indices, size_t is the right type.
Of course if the dimension was read from a single-byte field in a file, then you know it's in the range 0-255, and int would be a perfectly reasonable index type. Likewise, int would be okay if you're looping a fixed number of times, like 0 to 99. But there's still another reason not to use int: if you use i%2 in your loop body to treat even/odd indices differently, i%2 is a lot more expensive when i is signed than when i is unsigned...
A: Not much difference. One benefit of int is it being signed. Thus int i < 0 makes sense, while unsigned i < 0 doesn't much.
If indexes are calculated, that may be beneficial (for example, you might get cases where you will never enter a loop if some result is negative).
And yes, it is less to write :-)
A: Using int to index an array is legacy, but still widely adopted. int is just a generic number type and does not correspond to the addressing capabilities of the platform. In case it happens to be shorter or longer than that, you may encounter strange results when trying to index a very large array that goes beyond. On modern platforms, off_t, ptrdiff_t and size_t guarantee much more portability.
Another advantage of these types is that they give context to someone who reads the code. When you see the above types you know that the code will do array subscripting or pointer arithmetic, not just any calculation.
So, if you want to write bullet-proof, portable and context-sensible code, you can do it at the expense of a few keystrokes.
GCC even supports a typeof extension which relieves you from typing the same typename all over the place:
typeof(arraySize) i;

for (i = 0; i < arraySize; i++) {
    ...
}

Then, if you change the type of arraySize, the type of i changes automatically.
A: It really depends on the coder. Some coders prefer type perfectionism, so they'll use whatever type they're comparing against. For example, if they're iterating through a C string, you might see:
size_t sz = strlen("hello");
for (size_t i = 0; i < sz; i++) {
    ...
}

While if they're just doing something 10 times, you'll probably still see int:
for (int i = 0; i < 10; i++) {
    ...
}

A: I use int because it requires less physical typing and it doesn't matter - they take up the same amount of space, and unless your array has a few billion elements you won't overflow if you're not using a 16-bit compiler, which I'm usually not.
A: Because unless you have an array with size bigger than two gigabytes of type char, or 4 gigabytes of type short or 8 gigabytes of type int etc, it doesn't really matter if the variable is signed or not.
So, why type more when you can type less?
A: Aside from the issue that it's shorter to type, the reason is that it allows negative numbers.
Since we can't say in advance whether a value can ever be negative, most functions that take integer arguments take the signed variety. Since most functions use signed integers, it is often less work to use signed integers for things like loops. Otherwise, you have the potential of having to add a bunch of typecasts.
As we move to 64-bit platforms, the unsigned range of a signed integer should be more than enough for most purposes. In these cases, there's not much reason not to use a signed integer.
A: Consider the following simple example:
int max = some_user_input; // or some_calculation_result
for(unsigned int i = 0; i < max; ++i)
    do_something;

If max happens to be a negative value, say -1, the -1 will be regarded as UINT_MAX (when two integers with the same rank but different sign-ness are compared, the signed one will be treated as an unsigned one). On the other hand, the following code would not have this issue:
int max = some_user_input;
for(int i = 0; i < max; ++i)
    do_something;

Given a negative max input, the loop will be safely skipped.
A: Using a signed int is - in most cases - a mistake that could easily result in potential bugs as well as undefined behavior.
Using size_t matches the system's word size (64 bits on 64 bit systems and 32 bits on 32 bit systems), always allowing for the correct range for the loop and minimizing the risk of an integer overflow.
The int recommendation comes to solve an issue where reverse for loops were often written incorrectly by inexperienced programmers (of course, int might not be in the correct range for the loop):
/* a correct reverse for loop */
for (size_t i = count; i > 0;) {
    --i; /* note that this is not part of the `for` statement */
    /* code for loop where i is for zero based `index` */
}
/* an incorrect reverse for loop (bug on count == 0) */
for (size_t i = count - 1; i > 0; --i) {
    /* i might have overflowed and undefined behavior occurs */
}

In general, signed and unsigned variables shouldn't be mixed together, so at times using an int is unavoidable. However, the correct type for a for loop is as a rule size_t.
There's a nice talk about this misconception that signed variables are better than unsigned variables, you can find it on YouTube (Signed Integers Considered Harmful by Robert Seacord).
TL;DR: Signed variables are more dangerous and require more code than unsigned variables (which should be preferred almost in all cases and definitely whenever negative values aren't logically expected).
With unsigned variables the only concern is the overflow boundary which has a strictly defined behavior (wrap-around) and uses clearly defined modular mathematics.
This allows a single edge case test to catch an overflow and that test can be performed after the mathematical operation was executed.
However, with signed variables the overflow behavior is undefined (UB) and the negative range is actually larger than the positive range - things that add edge cases that must be tested for and explicitly handled before the mathematical operation can be executed.
i.e., how much is INT_MIN * -1? (the pre-processor will protect you, but without it you're in a jam).
P.S.
As for the example offered by @6502 in their answer, the whole thing is again an issue of trying to cut corners and a simple missing if statement.
When a loop assumes at least 2 elements in an array, this assumption should be tested beforehand. i.e.:
// draw lines connecting the dots - forward loop
if(pts.size() > 1) { // first make sure there's enough dots
    for (size_t i=0; i < pts.size()-1; i++) { // then loop
        drawLine(pts[i], pts[i+1]);
    }
}
// or test against i + 1 : which tests the desired pts[i+1]
for (size_t i = 0; i + 1 < pts.size(); i++) { // then loop
    drawLine(pts[i], pts[i+1]);
}
// or start i as 1 : but note that `-` is slower than `+`
for (size_t i = 1; i < pts.size(); i++) { // then loop
    drawLine(pts[i - 1], pts[i]);
}
Why is int rather than unsigned int used for C and C++ for loops?
This is a rather silly question but why is int commonly used instead of unsigned int when defining a for loop for an array in C or C++? for(int i;i<arraySize;i++){} for(unsigned int i;i<arraySize;i++){} I recognize the benefits of using int when doing something other than array indexing and the benefits of an iterator when using C++ containers. Is it just because it does not matter when looping through an array? Or should I avoid it all together and use a different type such as size_t?
[ "Using int is more correct from a logical point of view for indexing an array.\nunsigned semantic in C and C++ doesn't really mean \"not negative\" but it's more like \"bitmask\" or \"modulo integer\".\nTo understand why unsigned is not a good type for a \"non-negative\" number please consider these totally absurd statements:\n\nAdding a possibly negative integer to a non-negative integer you get a non-negative integer\nThe difference of two non-negative integers is always a non-negative integer\nMultiplying a non-negative integer by a negative integer you get a non-negative result\n\nObviously none of the above phrases make any sense... but it's how C and C++ unsigned semantic indeed works.\nActually using an unsigned type for the size of containers is a design mistake of C++ and unfortunately we're now doomed to use this wrong choice forever (for backward compatibility). You may like the name \"unsigned\" because it's similar to \"non-negative\" but the name is irrelevant and what counts is the semantic... and unsigned is very far from \"non-negative\".\nFor this reason when coding most loops on vectors my personally preferred form is:\nfor (int i=0,n=v.size(); i<n; i++) {\n ...\n}\n\n(of course assuming the size of the vector is not changing during the iteration and that I actually need the index in the body as otherwise the for (auto& x : v)... is better).\nThis running away from unsigned as soon as possible and using plain integers has the advantage of avoiding the traps that are a consequence of unsigned size_t design mistake. For example consider:\n// draw lines connecting the dots\nfor (size_t i=0; i<pts.size()-1; i++) {\n drawLine(pts[i], pts[i+1]);\n}\n\nthe code above will have problems if the pts vector is empty because pts.size()-1 is a huge nonsense number in that case. Dealing with expressions where a < b-1 is not the same as a+1 < b even for commonly used values is like dancing in a minefield.\nHistorically the justification for having size_t unsigned is for being able to use the extra bit for the values, e.g. being able to have 65535 elements in arrays instead of just 32767 on 16-bit platforms. In my opinion even at that time the extra cost of this wrong semantic choice was not worth the gain (and if 32767 elements are not enough now then 65535 won't be enough for long anyway).\nUnsigned values are great and very useful, but NOT for representing container size or for indexes; for size and index regular signed integers work much better because the semantic is what you would expect.\nUnsigned values are the ideal type when you need the modulo arithmetic property or when you want to work at the bit level.\n", "This is a more general phenomenon, often people don't use the correct types for their integers. Modern C has semantic typedefs that are much preferable over the primitive integer types. E.g everything that is a \"size\" should just be typed as size_t. If you use the semantic types systematically for your application variables, loop variables come much easier with these types, too.\nAnd I have seen several bugs that where difficult to detect that came from using int or so. Code that all of a sudden crashed on large matrixes and stuff like that. Just coding correctly with correct types avoids that.\n", "It's purely laziness and ignorance. 
You should always use the right types for indices, and unless you have further information that restricts the range of possible indices, size_t is the right type.\nOf course if the dimension was read from a single-byte field in a file, then you know it's in the range 0-255, and int would be a perfectly reasonable index type. Likewise, int would be okay if you're looping a fixed number of times, like 0 to 99. But there's still another reason not to use int: if you use i%2 in your loop body to treat even/odd indices differently, i%2 is a lot more expensive when i is signed than when i is unsigned...\n", "Not much difference. One benefit of int is it being signed. Thus int i < 0 makes sense, while unsigned i < 0 doesn't much.\nIf indexes are calculated, that may be beneficial (for example, you might get cases where you will never enter a loop if some result is negative).\nAnd yes, it is less to write :-)\n", "Using int to index an array is legacy, but still widely adopted. int is just a generic number type and does not correspond to the addressing capabilities of the platform. In case it happens to be shorter or longer than that, you may encounter strange results when trying to index a very large array that goes beyond.\nOn modern platforms, off_t, ptrdiff_t and size_t guarantee much more portability.\nAnother advantage of these types is that they give context to someone who reads the code. When you see the above types you know that the code will do array subscripting or pointer arithmetic, not just any calculation.\nSo, if you want to write bullet-proof, portable and context-sensible code, you can do it at the expense of a few keystrokes.\nGCC even supports a typeof extension which relieves you from typing the same typename all over the place:\ntypeof(arraySize) i;\n\nfor (i = 0; i < arraySize; i++) {\n ...\n}\n\nThen, if you change the type of arraySize, the type of i changes automatically.\n", "It really depends on the coder. Some coders prefer type perfectionism, so they'll use whatever type they're comparing against. For example, if they're iterating through a C string, you might see:\nsize_t sz = strlen(\"hello\");\nfor (size_t i = 0; i < sz; i++) {\n ...\n}\n\nWhile if they're just doing something 10 times, you'll probably still see int:\nfor (int i = 0; i < 10; i++) {\n ...\n}\n\n", "I use int cause it requires less physical typing and it doesn't matter - they take up the same amount of space, and unless your array has a few billion elements you won't overflow if you're not using a 16-bit compiler, which I'm usually not. \n", "Because unless you have an array with size bigger than two gigabyts of type char, or 4 gigabytes of type short or 8 gigabytes of type int etc, it doesn't really matter if the variable is signed or not.\nSo, why type more when you can type less?\n", "Aside from the issue that it's shorter to type, the reason is that it allows negative numbers.\nSince we can't say in advance whether a value can ever be negative, most functions that take integer arguments take the signed variety. Since most functions use signed integers, it is often less work to use signed integers for things like loops. Otherwise, you have the potential of having to add a bunch of typecasts.\nAs we move to 64-bit platforms, the unsigned range of a signed integer should be more than enough for most purposes. 
In these cases, there's not much reason not to use a signed integer.\n", "Consider the following simple example:\nint max = some_user_input; // or some_calculation_result\nfor(unsigned int i = 0; i < max; ++i)\n do_something;\n\nIf max happens to be a negative value, say -1, the -1 will be regarded as UINT_MAX (when two integers with the sam rank but different sign-ness are compared, the signed one will be treated as an unsigned one). On the other hand, the following code would not have this issue:\nint max = some_user_input;\nfor(int i = 0; i < max; ++i)\n do_something;\n\nGive a negative max input, the loop will be safely skipped.\n", "Using a signed int is - in most cases - a mistake that could easily result in potential bugs as well as undefined behavior.\nUsing size_t matches the system's word size (64 bits on 64 bit systems and 32 bits on 32 bit systems), always allowing for the correct range for the loop and minimizing the risk of an integer overflow.\nThe int recommendation comes to solve an issue where reverse for loops were often written incorrectly by unexperienced programmers (of course, int might not be in the correct range for the loop):\n/* a correct reverse for loop */\nfor (size_t i = count; i > 0;) {\n --i; /* note that this is not part of the `for` statement */\n /* code for loop where i is for zero based `index` */\n}\n/* an incorrect reverse for loop (bug on count == 0) */\nfor (size_t i = count - 1; i > 0; --i) {\n /* i might have overflowed and undefined behavior occurs */\n}\n\nIn general, signed and unsigned variables shouldn't be mixed together, so at times using an int in unavoidable. However, the correct type for a for loop is as a rule size_t.\nThere's a nice talk about this misconception that signed variables are better than unsigned variables, you can find it on YouTube (Signed Integers Considered Harmful by Robert Seacord).\nTL;DR;: Signed variables are more dangerous and require more code than unsigned variables (which should be preferred almost in all cases and definitely whenever negative values aren't logically expected).\nWith unsigned variables the only concern is the overflow boundary which has a strictly defined behavior (wrap-around) and uses clearly defined modular mathematics.\nThis allows a single edge case test to catch an overflow and that test can be performed after the mathematical operation was executed.\nHowever, with signed variables the overflow behavior is undefined (UB) and the negative range is actually larger than the positive range - things that add edge cases that must be tested for and explicitly handled before the mathematical operation can be executed.\ni.e., how much INT_MIN * -1? (the pre-processor will protect you, but without it you're in a jam).\nP.S.\nAs for the example offered by @6502 in their answer, the whole thing is again an issue of trying to cut corners and a simple missing if statement.\nWhen a loop assumes at least 2 elements in an array, this assumption should be tested beforehand. i.e.:\n// draw lines connecting the dots - forward loop\nif(pts.size() > 1) { // first make sure there's enough dots\n for (size_t i=0; i < pts.size()-1; i++) { // then loop\n drawLine(pts[i], pts[i+1]);\n }\n}\n// or test against i + 1 : which tests the desired pts[i+1]\nfor (size_t i = 0; i + 1 < pts.size(); i++) { // then loop\n drawLine(pts[i], pts[i+1]);\n}\n// or start i as 1 : but note that `-` is slower than `+`\nfor (size_t i = 1; i < pts.size(); i++) { // then loop\n drawLine(pts[i - 1], pts[i]);\n}\n\n" ]
[ 43, 32, 5, 4, 2, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "c", "for_loop", "int", "unsigned" ]
stackoverflow_0007488837_c_for_loop_int_unsigned.txt
Q: ResizeMode prop for video in Expo ReactNative Project is not resizing my video. I am using expo-av package to display video This is for web development btw, not for iOS or Android
So this is my view
<View style={styles.fullView}>
  <Video
    shouldPlay
    isLooping
    resizeMode="cover"
    source={require("./res/bg.mp4")}
    style={{flex: 1}}/>
</View>

And this is my Stylesheet
fullView: {
  flex: 1,
},
container: {
  flex: 1,
  alignItems: 'center',
  justifyContent: 'center',
},
heading: {
  color: 'white',
  marginBottom: 100,
  fontSize: 35,
},
input: {
  padding: 6,
  fontSize: 16,
  width: 200,
  textAlign: 'center',
  color: 'white',
},
bgcolor: {
  backgroundColor: '#33373861',
  borderRadius: 15,
  margin: 10,
},
button: {
  alignItems: 'center',
  backgroundColor: '#FBC1F0',
  margin: 30,
  borderRadius: 15,
  padding: 10,
  height: 45,
  width: 200,
},
image: {
  flex: 1,
}

What I am getting is this
What I need is the whole video on screen
I have tried to use resizeMode with cover, contain, stretch, repeat; none of them work. It's like the resizeMode is not recognized at all.
But when I change the height and width in the stylesheet the video frame resizes, but the video doesn't. Video Resolution is 3840 x 2160.
I want it to resize the video to screen size or adjust accordingly. Thanks in advance.
A: I discovered this happens for web only because the Video HTML element is absolutely positioned for some reason.
If you add the following line to your onReadyForDisplay function it will fix it:
import { Video } from 'expo-av';

<Video
  style={styles.video}
  source={{
    uri: 'http://path/to/file',
  }}
  useNativeControls={true}
  resizeMode="contain"
  onReadyForDisplay={videoData => {
    videoData.path[0].style.position = "initial"
  }}
/>
ResizeMode prop for video in Expo ReactNative Project is not resizing my video. I am using expo-av package to display video
This is for web development btw, not for iOS or Android So this is my view <View style={styles.fullView}> <Video shouldPlay isLooping resizeMode="cover" source={require("./res/bg.mp4")} style={{flex: 1}}/> </View> And this is my Stylesheet fullView: { flex: 1, }, container: { flex: 1, alignItems: 'center', justifyContent: 'center', }, heading: { color: 'white', marginBottom: 100, fontSize: 35, }, input: { padding: 6, fontSize: 16, width: 200, textAlign: 'center', color: 'white', }, bgcolor: { backgroundColor: '#33373861', borderRadius: 15, margin: 10, }, button: { alignItems: 'center', backgroundColor: '#FBC1F0', margin: 30, borderRadius: 15, padding: 10, height: 45, width: 200, }, image: { flex: 1, } What I am getting is this What I need is the whole video on screen I have tried to use resizeMode with cover, contain, stretch, repeat; none of them work. It's like the resizeMode is not recognized at all. But when I change the height and width in the stylesheet the video frame resizes, but the video doesn't. Video Resolution is 3840 x 2160. I want it to resize the video to screen size or adjust accordingly. Thanks in advance.
[ "I discovered this happens for web only because the Video html element is absolutely position for some reason.\nIf you add the following line to your onReadyForDisplay function it will fix it:\nimport { Video } from 'expo-av';\n\n<Video\n style={styles.video}\n source={{\n uri: 'http://path/to/file',\n }}\n useNativeControls={true}\n resizeMode=\"contain\"\n onReadyForDisplay={videoData => {\n videoData.path[0].style.position = \"initial\"\n }}\n/>\n\n" ]
[ 0 ]
[]
[]
[ "expo", "frontend", "react_native", "web" ]
stackoverflow_0074123932_expo_frontend_react_native_web.txt
Q: How to convert python list to JSON array? If I have a python list like
pyList=['[email protected]','[email protected]']

And I want to convert it to a json array and add {} around every object, it should be like that:
arrayJson=[{"email":"[email protected]"},{"email":"[email protected]"}]

Any idea how to do that?
A: You can achieve this by using built-in json module
import json

arrayJson = json.dumps([{"email": item} for item in pyList])

A: Try to Google this kind of stuff first. :)
import json

array = [1, 2, 3]
jsonArray = json.dumps(array)

By the way, the result you asked for cannot be achieved with the list you provided.
You need to use python dictionaries to get json objects. The conversion is like below
Python -> JSON
list -> array
dictionary -> object

And here is the link to the docs
https://docs.python.org/3/library/json.html
A: pip install jsonwhatever.
You should try it, you can put anything on it
from jsonwhatever import jsonwhatever as jw

pyList=['[email protected]','[email protected]']

jsonwe = jw.JsonWhatEver()

mytr = jsonwe.jsonwhatever('my_custom_list', pyList)

print(mytr)
How to convert python list to JSON array?
If I have a python list like pyList=['[email protected]','[email protected]'] And I want to convert it to a json array and add {} around every object, it should be like that: arrayJson=[{"email":"[email protected]"},{"email":"[email protected]"}] Any idea how to do that?
[ "You can achieve this by using built-in json module\nimport json\n\narrayJson = json.dumps([{\"email\": item} for item in pyList])\n\n", "Try to Google this kind of stuff first. :)\nimport json\n\narray = [1, 2, 3]\njsonArray = json.dumps(array)\n\nBy the way, the result you asked for can not be achieved with the list you provided.\nYou need to use python dictionaries to get json objects. The conversion is like below\nPython -> JSON\nlist -> array\ndictionary -> object\n\nAnd here is the link to the docs\nhttps://docs.python.org/3/library/json.html\n", "pip install jsonwhatever.\nYou should try it, you can put anything on it\nfrom jsonwhatever import jsonwhatever as jw\n\npyList=['[email protected]','[email protected]']\n\njsonwe = jw.JsonWhatEver()\n\nmytr = jsonwe.jsonwhatever('my_custom_list', pyList)\n\nprint(mytr)\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "arraylist", "arrays", "django", "json", "python" ]
stackoverflow_0071979765_arraylist_arrays_django_json_python.txt
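To make the json.dumps approach from the accepted answer concrete end to end, here is a small, self-contained sketch (standard library only; the e-mail addresses are illustrative placeholders, not values from the original post):

import json

py_list = ["[email protected]", "[email protected]"]

# Wrap each plain string in a dict so the serialized JSON is an array of objects.
array_json = json.dumps([{"email": email} for email in py_list])
print(array_json)  # [{"email": "[email protected]"}, {"email": "[email protected]"}]

# Round-trip back to Python to confirm the structure: a list of dicts.
assert json.loads(array_json) == [{"email": e} for e in py_list]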
Q: Create a DataFrame with data from a class I want to create a DataFrame to which I want to import data from a class. I mean, I type
t1 = Transaction("20221128", "C1", 14)

and I want a DataFrame to show data like:
Column 1: Date
Column 2: Concept
Column 3: Amount

The code where I want to implement this is:
class Transactions:

    num_of_transactions = 0
    amount = 0

    def __init__(self, date, concept, amount):
        self.date = date
        self.concept = concept
        self.amount = amount
        Transaction.add_transaction()
        Transaction.add_money(self)

    @classmethod
    def number_of_transactions(cls):
        return cls.num_of_transactions

    @classmethod
    def add_transaction(cls):
        cls.num_of_transactions += 1

    @classmethod
    def amount_of_money(cls):
        return cls.amount

    @classmethod
    def add_money(cls, self):
        cls.amount += self.amount

t1 = Transaction("20221128", "C1", 14)
t2 = Transaction("20221129", "C2", 30)
t3 = Transaction("20221130", "3", 14)

I tried:
def DataFrame(self):
    df = pd.DataFrame(self.date self.concept, self.amount)

But looking at pandas documentation, I have seen it is not a valid way. Any help on that? Thank you!
A: In order to create a new data frame, you have to provide the rows and the columns name.
You have to change the code as the following:
def DataFrame(self):
    df = pd.DataFrame(data=[[self.date, self.concept, self.amount]], columns=['Date','Concept','Amount'])

A: You can create a DataFrame from a list of Transaction objects by first creating a list of dictionaries, where each dictionary represents a row in the DataFrame and has keys that correspond to the columns. Here's one way to do it:
import pandas as pd

# Create a list of Transaction objects
transactions = [t1, t2, t3]

# Create a list of dictionaries, where each dictionary represents a row in the DataFrame
data = []
for t in transactions:
    row = {"Date": t.date, "Concept": t.concept, "Amount": t.amount}
    data.append(row)

# Create a DataFrame from the list of dictionaries, specifying the columns in the desired order
df = pd.DataFrame(data, columns=["Date", "Concept", "Amount"])

# Print the DataFrame
print(df)

This should produce a DataFrame that looks like this:
| | Date | Concept | Amount |
|---:|:---------|:----------|:---------|
| 0 | 20221128 | C1 | 14 |
| 1 | 20221129 | C2 | 30 |
| 2 | 20221130 | 3 | 14 |

The above code assumes that the Transaction class is defined as you have shown in your question, with the __init__ method and the class variables and methods that you have included. Note that I have replaced Transaction with Transactions in the class definition to match the name of the class, and I have also changed the self parameter of the add_money method to transaction, to avoid confusion with the self parameter of the instance methods. The DataFrame function is not part of the class definition, but is defined as a separate function that takes a list of Transaction objects as its argument.
You can also add a class method to the Transactions class that returns a DataFrame representing all the instances of the class. To do this, you can add a class variable transactions_list that keeps track of all the instances of the class, and a class method to_dataframe that converts transactions_list to a DataFrame.
Here's one way to implement it:
import pandas as pd

class Transactions:

    num_of_transactions = 0
    amount = 0
    transactions_list = []  # Class variable to store all instances of the class

    def __init__(self, date, concept, amount):
        self.date = date
        self.concept = concept
        self.amount = amount
        # Add the instance to the transactions_list
        self.transactions_list.append(self)
        Transactions.add_transaction()
        Transactions.add_money(self)

    @classmethod
    def number_of_transactions(cls):
        return cls.num_of_transactions

    @classmethod
    def add_transaction(cls):
        cls.num_of_transactions += 1

    @classmethod
    def amount_of_money(cls):
        return cls.amount

    @classmethod
    def add_money(cls, self):
        cls.amount += self.amount

    @classmethod
    def to_dataframe(cls):
        # Create a list of dictionaries representing each transaction
        transactions_list = [{'Date': t.date, 'Concept': t.concept, 'Amount': t.amount} for t in cls.transactions_list]

        # Create a DataFrame from the list of dictionaries
        df = pd.DataFrame(transactions_list)

        return df

# Create some transactions
t1 = Transactions("20221128", "C1", 14)
t2 = Transactions("20221129", "C2", 30)
t3 = Transactions("20221130", "3", 14)

You can then call the class method to_dataframe to get a DataFrame representing all the transactions:
df = Transactions.to_dataframe()

This should create a DataFrame df with columns 'Date', 'Concept', and 'Amount' and rows corresponding to each transaction.
A: for the example you provided we can do some modifications in the class so we could get a dataframe easily:
class Transaction:

    num_of_transactions = 0
    amount = 0
    transactions = []  # <----- class atribute added

    def __init__(self, date, concept, amount):
        self.date = date
        self.concept = concept
        self.amount = amount
        Transaction.add_transaction()
        Transaction.add_money(self)
        Transaction.transactions.append(self)  # <----- append added

now we can get a dataframe like this:
pd.DataFrame([t.__dict__ for t in Transaction.transactions])

>>>
'''
 date concept amount
0 20221128 C1 14
1 20221129 C2 30
2 20221130 3 14
Create a DataFrame with data from a class
I want to create a DataFrame to which I want to import data from a class. I mean, I type t1 = Transaction("20221128", "C1", 14) and I want a DataFrame to show data like: Column 1: Date Column 2: Concept Column 3: Amount The code where I want to implement this is: class Transactions: num_of_transactions = 0 amount = 0 def __init__(self, date, concept, amount): self.date = date self.concept = concept self.amount = amount Transaction.add_transaction() Transaction.add_money(self) @classmethod def number_of_transactions(cls): return cls.num_of_transactions @classmethod def add_transaction(cls): cls.num_of_transactions += 1 @classmethod def amount_of_money(cls): return cls.amount @classmethod def add_money(cls, self): cls.amount += self.amount t1 = Transaction("20221128", "C1", 14) t2 = Transaction("20221129", "C2", 30) t3 = Transaction("20221130", "3", 14) I tried: def DataFrame(self): df = pd.DataFrame(self.date self.concept, self.amount) But looking at pandas documentation, I have seen it is not a valid way. Any help on that? Thank you!
[ "In order to create a new data frame, you have to provide the rows and the columns name.\nYou have to change the code as the following:\ndef DataFrame(self):\n df = pd.DataFrame(data=[[self.date, self.concept, self.amount]], columns=['Date','Concept','Amount'])\n\n", "You can create a DataFrame from a list of Transaction objects by first creating a list of dictionaries, where each dictionary represents a row in the DataFrame and has keys that correspond to the columns. Here's one way to do it:\nimport pandas as pd\n\n# Create a list of Transaction objects\ntransactions = [t1, t2, t3]\n\n# Create a list of dictionaries, where each dictionary represents a row in the DataFrame\ndata = []\nfor t in transactions:\n row = {\"Date\": t.date, \"Concept\": t.concept, \"Amount\": t.amount}\n data.append(row)\n\n# Create a DataFrame from the list of dictionaries, specifying the columns in the desired order\ndf = pd.DataFrame(data, columns=[\"Date\", \"Concept\", \"Amount\"])\n\n# Print the DataFrame\nprint(df)\n\nThis should produce a DataFrame that looks like this:\n| | Date | Concept | Amount |\n|---:|:---------|:----------|:---------|\n| 0 | 20221128 | C1 | 14 |\n| 1 | 20221129 | C2 | 30 |\n| 2 | 20221130 | 3 | 14 |\n\nThe above code assumes that the Transaction class is defined as you have shown in your question, with the __init__ method and the class variables and methods that you have included. Note that I have replaced Transaction with Transactions in the class definition to match the name of the class, and I have also changed the self parameter of the add_money method to transaction, to avoid confusion with the self parameter of the instance methods. The DataFrame function is not part of the class definition, but is defined as a separate function that takes a list of Transaction objects as its argument.\nYou can also add a class method to the Transactions class that returns a DataFrame representing all the instances of the class. 
To do this, you can add a class variable transactions_list that keeps track of all the instances of the class, and a class method to_dataframe that converts transactions_list to a DataFrame.\nHere's one way to implement it:\nimport pandas as pd\n\nclass Transactions:\n\n num_of_transactions = 0\n amount = 0\n transactions_list = [] # Class variable to store all instances of the class\n\n def __init__(self, date, concept, amount):\n self.date = date\n self.concept = concept\n self.amount = amount\n # Add the instance to the transactions_list\n self.transactions_list.append(self)\n Transactions.add_transaction()\n Transactions.add_money(self)\n\n @classmethod\n def number_of_transactions(cls):\n return cls.num_of_transactions\n\n @classmethod\n def add_transaction(cls):\n cls.num_of_transactions += 1\n\n @classmethod\n def amount_of_money(cls):\n return cls.amount\n\n @classmethod\n def add_money(cls, self):\n cls.amount += self.amount\n\n @classmethod\n def to_dataframe(cls):\n # Create a list of dictionaries representing each transaction\n transactions_list = [{'Date': t.date, 'Concept': t.concept, 'Amount': t.amount} for t in cls.transactions_list]\n\n # Create a DataFrame from the list of dictionaries\n df = pd.DataFrame(transactions_list)\n\n return df\n\n# Create some transactions\nt1 = Transactions(\"20221128\", \"C1\", 14)\nt2 = Transactions(\"20221129\", \"C2\", 30)\nt3 = Transactions(\"20221130\", \"3\", 14)\n\nYou can then call the class method to_dataframe to get a DataFrame representing all the transactions:\ndf = Transactions.to_dataframe()\n\nThis should create a DataFrame df with columns 'Date', 'Concept', and 'Amount' and rows corresponding to each transaction.\n", "for the example you provided we can do some modifications in the class so we could get a dataframe easily:\nclass Transaction:\n\n num_of_transactions = 0\n amount = 0\n transactions = [] # <----- class atribute added\n\n def __init__(self, date, concept, amount):\n self.date = date\n self.concept = concept\n self.amount = amount\n Transaction.add_transaction()\n Transaction.add_money(self)\n Transaction.transactions.append(self) # <----- append added\n\nnow we can get a dataframe like this:\npd.DataFrame([t.__dict__ for t in Transaction.transactions])\n\n>>>\n'''\n date concept amount\n0 20221128 C1 14\n1 20221129 C2 30\n2 20221130 3 14\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074659144_dataframe_pandas_python_python_3.x.txt
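One detail worth spelling out from the answers above: pd.DataFrame accepts a list of dicts directly, so any class can expose its instances as rows through a small to-dict helper. A minimal sketch under that assumption (the registry and method names are illustrative, not from the original code):

import pandas as pd

class Transaction:
    transactions = []  # class-level registry of every instance created

    def __init__(self, date, concept, amount):
        self.date = date
        self.concept = concept
        self.amount = amount
        Transaction.transactions.append(self)

    def to_dict(self):
        # One row per instance, keyed by the desired column names.
        return {"Date": self.date, "Concept": self.concept, "Amount": self.amount}

    @classmethod
    def to_dataframe(cls):
        return pd.DataFrame([t.to_dict() for t in cls.transactions],
                            columns=["Date", "Concept", "Amount"])

Transaction("20221128", "C1", 14)
Transaction("20221129", "C2", 30)
df = Transaction.to_dataframe()
print(df)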
Q: Spark: get number of cluster cores programmatically I run my spark application in a yarn cluster. In my code I use the number of available cores of the queue for creating partitions on my dataset:
Dataset ds = ...
ds.coalesce(config.getNumberOfCores());

My question: how can I get the number of available cores of the queue programmatically and not by configuration?
A: There are ways to get both the number of executors and the number of cores in a cluster from Spark. Here is a bit of Scala utility code that I've used in the past. You should easily be able to adapt it to Java. There are two key ideas:

The number of workers is the number of executors minus one or sc.getExecutorStorageStatus.length - 1.
The number of cores per worker can be obtained by executing java.lang.Runtime.getRuntime.availableProcessors on a worker.

The rest of the code is boilerplate for adding convenience methods to SparkContext using Scala implicits. I wrote the code for 1.x years ago, which is why it is not using SparkSession.
One final point: it is often a good idea to coalesce to a multiple of your cores as this can improve performance in the case of skewed data. In practice, I use anywhere between 1.5x and 4x, depending on the size of data and whether the job is running on a shared cluster or not.
import org.apache.spark.SparkContext

import scala.language.implicitConversions


class RichSparkContext(val sc: SparkContext) {

  def executorCount: Int =
    sc.getExecutorStorageStatus.length - 1 // one is the driver

  def coresPerExecutor: Int =
    RichSparkContext.coresPerExecutor(sc)

  def coreCount: Int =
    executorCount * coresPerExecutor

  def coreCount(coresPerExecutor: Int): Int =
    executorCount * coresPerExecutor

}


object RichSparkContext {

  trait Enrichment {
    implicit def enrichMetadata(sc: SparkContext): RichSparkContext =
      new RichSparkContext(sc)
  }

  object implicits extends Enrichment

  private var _coresPerExecutor: Int = 0

  def coresPerExecutor(sc: SparkContext): Int =
    synchronized {
      if (_coresPerExecutor == 0)
        sc.range(0, 1).map(_ => java.lang.Runtime.getRuntime.availableProcessors).collect.head
      else _coresPerExecutor
    }

}

Update
Recently, getExecutorStorageStatus has been removed. We have switched to using SparkEnv's blockManager.master.getStorageStatus.length - 1 (the minus one is for the driver again). The normal way to get to it, via env of SparkContext is not accessible outside of the org.apache.spark package. Therefore, we use an encapsulation violation pattern:
package org.apache.spark

object EncapsulationViolator {
  def sparkEnv(sc: SparkContext): SparkEnv = sc.env
}

A: Found this while looking for the answer to pretty much the same question.
I found that:
Dataset ds = ...
ds.coalesce(sc.defaultParallelism());

does exactly what the OP was looking for.
For example, my 5 node x 8 core cluster returns 40 for the defaultParallelism.
A: According to Databricks if the driver and executors are of the same node type, this is the way to go:
java.lang.Runtime.getRuntime.availableProcessors * (sc.statusTracker.getExecutorInfos.length -1)

A: You could run jobs on every machine and ask it for the number of cores, but that's not necessarily what's available for Spark (as pointed out by @tribbloid in a comment on another answer):
import spark.implicits._
import scala.collection.JavaConverters._
import sys.process._
val procs = (1 to 1000).toDF.map(_ => "hostname".!!.trim -> java.lang.Runtime.getRuntime.availableProcessors).collectAsList().asScala.toMap
val nCpus = procs.values.sum

Running it in the shell (on a tiny test cluster with two workers) gives:
scala> :paste
// Entering paste mode (ctrl-D to finish)

 import spark.implicits._
 import scala.collection.JavaConverters._
 import sys.process._
 val procs = (1 to 1000).toDF.map(_ => "hostname".!!.trim -> java.lang.Runtime.getRuntime.availableProcessors).collectAsList().asScala.toMap
 val nCpus = procs.values.sum

// Exiting paste mode, now interpreting.

import spark.implicits._ 
import scala.collection.JavaConverters._
import sys.process._
procs: scala.collection.immutable.Map[String,Int] = Map(ip-172-31-76-201.ec2.internal -> 2, ip-172-31-74-242.ec2.internal -> 2)
nCpus: Int = 4

Add zeros to your range if you typically have lots of machines in your cluster. Even on my two-machine cluster 10000 completes in a couple seconds.
This is probably only useful if you want more information than sc.defaultParallelism() will give you (as in @SteveC 's answer)
A: For all of those that aren't using yarn clusters: If you are doing it in Python/Databricks here is a function I wrote that will help solve the opportunity. This will get you both the number of worker nodes as well as the number of CPU's and return the multiplied final CPU count of your worker distribution.
def GetDistCPUCount():
    nWorkers = int(spark.sparkContext.getConf().get('spark.databricks.clusterUsageTags.clusterTargetWorkers'))
    GetType = spark.sparkContext.getConf().get('spark.databricks.clusterUsageTags.clusterNodeType')
    GetSubString = pd.Series(test).str.split(pat = '_', expand = True)
    GetNumber = GetSubString[1].str.extract('(\d+)')
    ParseOutString = GetNumber.iloc[0,0]
    WorkerCPUs = int(ParseOutString)
    nCPUs = nWorkers * WorkerCPUs
    return nCPUs
Spark: get number of cluster cores programmatically
I run my spark application in a yarn cluster. In my code I use the number of available cores of the queue for creating partitions on my dataset: Dataset ds = ... ds.coalesce(config.getNumberOfCores()); My question: how can I get the number of available cores of the queue programmatically and not by configuration?
[ "There are ways to get both the number of executors and the number of cores in a cluster from Spark. Here is a bit of Scala utility code that I've used in the past. You should easily be able to adapt it to Java. There are two key ideas:\n\nThe number of workers is the number of executors minus one or sc.getExecutorStorageStatus.length - 1.\nThe number of cores per worker can be obtained by executing java.lang.Runtime.getRuntime.availableProcessors on a worker.\n\nThe rest of the code is boilerplate for adding convenience methods to SparkContext using Scala implicits. I wrote the code for 1.x years ago, which is why it is not using SparkSession.\nOne final point: it is often a good idea to coalesce to a multiple of your cores as this can improve performance in the case of skewed data. In practice, I use anywhere between 1.5x and 4x, depending on the size of data and whether the job is running on a shared cluster or not.\nimport org.apache.spark.SparkContext\n\nimport scala.language.implicitConversions\n\n\nclass RichSparkContext(val sc: SparkContext) {\n\n def executorCount: Int =\n sc.getExecutorStorageStatus.length - 1 // one is the driver\n\n def coresPerExecutor: Int =\n RichSparkContext.coresPerExecutor(sc)\n\n def coreCount: Int =\n executorCount * coresPerExecutor\n\n def coreCount(coresPerExecutor: Int): Int =\n executorCount * coresPerExecutor\n\n}\n\n\nobject RichSparkContext {\n\n trait Enrichment {\n implicit def enrichMetadata(sc: SparkContext): RichSparkContext =\n new RichSparkContext(sc)\n }\n\n object implicits extends Enrichment\n\n private var _coresPerExecutor: Int = 0\n\n def coresPerExecutor(sc: SparkContext): Int =\n synchronized {\n if (_coresPerExecutor == 0)\n sc.range(0, 1).map(_ => java.lang.Runtime.getRuntime.availableProcessors).collect.head\n else _coresPerExecutor\n }\n\n}\n\nUpdate\nRecently, getExecutorStorageStatus has been removed. We have switched to using SparkEnv's blockManager.master.getStorageStatus.length - 1 (the minus one is for the driver again). The normal way to get to it, via env of SparkContext is not accessible outside of the org.apache.spark package. 
Therefore, we use an encapsulation violation pattern:\npackage org.apache.spark\n\nobject EncapsulationViolator {\n def sparkEnv(sc: SparkContext): SparkEnv = sc.env\n}\n\n", "Found this while looking for the answer to pretty much the same question.\nI found that:\nDataset ds = ...\nds.coalesce(sc.defaultParallelism());\n\ndoes exactly what the OP was looking for.\nFor example, my 5 node x 8 core cluster returns 40 for the defaultParallelism.\n", "According to Databricks if the driver and executors are of the same node type, this is the way to go:\njava.lang.Runtime.getRuntime.availableProcessors * (sc.statusTracker.getExecutorInfos.length -1)\n\n", "You could run jobs on every machine and ask it for the number of cores, but that's not necessarily what's available for Spark (as pointed out by @tribbloid in a comment on another answer):\nimport spark.implicits._\nimport scala.collection.JavaConverters._\nimport sys.process._\nval procs = (1 to 1000).toDF.map(_ => \"hostname\".!!.trim -> java.lang.Runtime.getRuntime.availableProcessors).collectAsList().asScala.toMap\nval nCpus = procs.values.sum\n\nRunning it in the shell (on a tiny test cluster with two workers) gives:\nscala> :paste\n// Entering paste mode (ctrl-D to finish)\n\n import spark.implicits._\n import scala.collection.JavaConverters._\n import sys.process._\n val procs = (1 to 1000).toDF.map(_ => \"hostname\".!!.trim -> java.lang.Runtime.getRuntime.availableProcessors).collectAsList().asScala.toMap\n val nCpus = procs.values.sum\n\n// Exiting paste mode, now interpreting.\n\nimport spark.implicits._ \nimport scala.collection.JavaConverters._\nimport sys.process._\nprocs: scala.collection.immutable.Map[String,Int] = Map(ip-172-31-76-201.ec2.internal -> 2, ip-172-31-74-242.ec2.internal -> 2)\nnCpus: Int = 4\n\nAdd zeros to your range if you typically have lots of machines in your cluster. Even on my two-machine cluster 10000 completes in a couple seconds.\nThis is probably only useful if you want more information than sc.defaultParallelism() will give you (as in @SteveC 's answer)\n", "For all of those that aren't using yarn clusters: If you are doing it in Python/Databricks here is a function I wrote that will help solve the opportunity. This will get you both the number of worker nodes as well as the number of CPU's and return the multiplied final CPU count of your worker distribution.\ndef GetDistCPUCount():\n nWorkers = int(spark.sparkContext.getConf().get('spark.databricks.clusterUsageTags.clusterTargetWorkers'))\n GetType = spark.sparkContext.getConf().get('spark.databricks.clusterUsageTags.clusterNodeType')\n GetSubString = pd.Series(test).str.split(pat = '_', expand = True)\n GetNumber = GetSubString[1].str.extract('(\\d+)')\n ParseOutString = GetNumber.iloc[0,0]\n WorkerCPUs = int(ParseOutString)\n nCPUs = nWorkers * WorkerCPUs\n return nCPUs\n\n" ]
[ 19, 5, 1, 1, 0 ]
[]
[]
[ "apache_spark", "core", "dataset", "hadoop_yarn", "java" ]
stackoverflow_0047399087_apache_spark_core_dataset_hadoop_yarn_java.txt
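For readers on PySpark rather than Scala, the same information is reachable from Python. A rough sketch, assuming an existing SparkSession named spark (the configuration key below is a standard Spark setting, but verify behavior against your Spark version):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Default task parallelism -- on YARN this is typically the total number of
# cores granted to the application, as in the defaultParallelism answer above.
n_parallel = sc.defaultParallelism

# Cores per executor, if it was set explicitly; None means the key is unset.
cores_per_executor = sc.getConf().get("spark.executor.cores", None)

print(n_parallel, cores_per_executor)

# Coalescing to a small multiple of the parallelism, per the skew advice above:
# ds = ds.coalesce(2 * n_parallel)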
Q: Newly added line item is getting removed after making changes in data which is already bound to sap.m.Table We have to perform edit functionality where we have to take two scenarios into consideration:

Make changes in existing entries.
Add new entries and update the old entries.

In the 2nd scenario, when we are trying to add a new entry, it is getting added to sap.m.Table but if we make any change in the old entry then the newly added line item is disappearing.
let oContextLineItemEntry = oLineItmTab.getModel().createEntry("/EntityName", {
    properties: NewLineItem,
});
let oTmp = oLineItmTab.getBindingInfo("items").template,
    oItem = oTmp.clone();
oItem.setBindingContext(oContextLineItemEntry);
oLineItmTab.addItem(oItem);

Here NewLineItem is an object which I want to add and it is blank. It is initiated like below:
NewLineItem = oLineItmTab.getItems()[0].getBindingContext().getObject();

After this, I have removed all the values of the objects attribute. I tried with OData V2 OneWay binding, but it didn't work.
I saw framework behavior is triggering this interaction

onChange started
onChange completed

I went through these questions on SAP Community:

https://answers.sap.com/questions/699607/newly-added-table-row-disappearing-when-changing-p.html
https://answers.sap.com/questions/13305104/ui5-controls-and-onchange-event-in-a-sapuitabletab.html

A: After you bind an aggregation it will be managed by the data binding. Therefore you should not try to modify the aggregations. Instead, do the changes in the model and then the aggregation should be updated according to the data in the model. e.g.
let newRow = {key: "_your_unique_id_", value: ""};
let model = table.getModel();
let tableData = model.getProperty("/tableEntityName");
tableData.unshift(newRow);
model.setProperty("/tableEntityName", tableData);

Besides that, consider setting the growing property of the table to true.
A: We had raised an OSS note for this issue to which SAP replied, this is not an issue with the UI control but with the way it is used.
Basically we are not supposed to add new entries when OData binding is being used.
Newly added line item is getting removed after making changes in data which is already bound to sap.m.Table
We have to perform edit functionality where we have to take two scenarios into consideration: Make changes in existing entries. Add new entries and update the old entries. In the 2nd scenario, when we are trying to add a new entry, it is getting added to sap.m.Table, but if we make any change in an old entry, the newly added line item disappears. let oContextLineItemEntry = oLineItmTab.getModel().createEntry("/EntityName", { properties: NewLineItem, }); let oTmp = oLineItmTab.getBindingInfo("items").template, oItem = oTmp.clone(); oItem.setBindingContext(oContextLineItemEntry); oLineItmTab.addItem(oItem); Here NewLineItem is an object which I want to add, and it is blank. It is initiated like below: NewLineItem = oLineItmTab.getItems()[0].getBindingContext().getObject(); After this, I have removed all the values of the object's attributes. I tried with OData V2 OneWay binding, but it didn't work. I saw that the framework is triggering this interaction: onChange started, onChange completed. I went through these questions on SAP Community: https://answers.sap.com/questions/699607/newly-added-table-row-disappearing-when-changing-p.html https://answers.sap.com/questions/13305104/ui5-controls-and-onchange-event-in-a-sapuitabletab.html
[ "After you bind an aggregation it will be managed by the data binding. Therefore you should not try to modify the aggregations. Instead, do the changes in the model and then the aggregation should be updated according to data in the model. e.g.\nlet newRow = {key: \"_your_unique_id_\", value: \"\"};\nlet model = table.getModel();\nlet tableData = model.getProperty(\"/tableEntityName\");\ntableData.unshift(newRow);\nmodel.setProperty(\"/tableEntityName\", tableData);\n\nBesides that consider to set the growing property of the table to true.\n", "We had raised an OSS note for this issue to which SAP replied, this is not and issue with UI control but the way to use it.\nBasically we are not supposed to add new entries when OData binding is being used.\n" ]
[ 1, 0 ]
[]
[]
[ "odata", "sapui5" ]
stackoverflow_0073783591_odata_sapui5.txt
Q: how to connect an s3 bucket w/ airflow I have an airflow task where I try and load a file into an s3 bucket. I have airflow running on a Ec2 instance. Im running AF version 2.4.3 I have done pip install 'apache-airflow[amazon]' I start up my AF server, log in and go to the Admin section to add a connection. I open a new connection and I dont have an option for s3. My only Amazon options are: Amazon Elastic MapReduce Amazon Redshift Amazon Web services. what else am I missing? A: You need to define aws connection under "Amazon Web Services Connection" for more details see here A: You should define the connection within your DAG. You should also use a secure settings.ini file to save your secrets, and then call those variables from your DAG. See this answer for a complete guide: Airflow s3 connection using UI
how to connect an s3 bucket w/ airflow
I have an airflow task where I try to load a file into an S3 bucket. I have Airflow running on an EC2 instance. I'm running Airflow version 2.4.3 and I have done pip install 'apache-airflow[amazon]' I start up my Airflow server, log in and go to the Admin section to add a connection. I open a new connection and I don't have an option for S3. My only Amazon options are: Amazon Elastic MapReduce Amazon Redshift Amazon Web Services. What else am I missing?
[ "You need to define aws connection under \"Amazon Web Services Connection\"\nfor more details see here\n", "You should define the connection within your DAG.\nYou should also use a secure settings.ini file to save your secrets, and then call those variables from your DAG.\nSee this answer for a complete guide: Airflow s3 connection using UI\n" ]
[ 1, 0 ]
[]
[]
[ "airflow", "amazon_s3", "amazon_web_services", "python" ]
stackoverflow_0074631434_airflow_amazon_s3_amazon_web_services_python.txt
Q: Prefix and postfix elements of a bash array I want to pre- and postfix an array in bash similar to brace expansion. Say I have a bash array ARRAY=( one two three ) I want to be able to pre- and postfix it like the following brace expansion echo prefix_{one,two,three}_suffix The best I've been able to find uses bash regex to either add a prefix or a suffix echo ${ARRAY[@]/#/prefix_} echo ${ARRAY[@]/%/_suffix} but I can't find anything on how to do both at once. Potentially I could use regex captures and do something like echo ${ARRAY[@]/.*/prefix_$1_suffix} but it doesn't seem like captures are supported in bash variable regex substitution. I could also store a temporary array variable like PRE=(${ARRAY[@]/#/prefix_}) echo ${PRE[@]/%/_suffix} This is probably the best I can think of, but it still seems sub par. A final alternative is to use a for loop akin to EXPANDED="" for E in ${ARRAY[@]}; do EXPANDED="prefix_${E}_suffix $EXPANDED" done echo $EXPANDED but that is super ugly. I also don't know how I would get it to work if I wanted spaces anywhere in the prefix, suffix, or array elements. A: Bash brace expansion doesn't use regexes. The pattern used is just a shell glob, which you can find in the bash manual, 3.5.8.1 Pattern Matching. Your two-step solution is cool, but it needs some quotes for whitespace safety: ARR_PRE=("${ARRAY[@]/#/prefix_}") echo "${ARR_PRE[@]/%/_suffix}" You can also do it in some evil way: eval "something $(printf 'pre_%q_suf ' "${ARRAY[@]}")" A: Your last loop could be done in a whitespace-friendly way with: EXPANDED=() for E in "${ARRAY[@]}"; do EXPANDED+=("prefix_${E}_suffix") done echo "${EXPANDED[@]}" A: Prettier but essentially the same as the loop solution: $ ARRAY=(A B C) $ mapfile -t -d $'\0' EXPANDED < <(printf "prefix_%s_postfix\0" "${ARRAY[@]}") $ echo "${EXPANDED[@]}" prefix_A_postfix prefix_B_postfix prefix_C_postfix mapfile reads rows into elements of an array. With -d $'\0' it instead reads null-delimited strings and -t omits the delimiter from the result. See help mapfile. A: For arrays: ARRAY=( one two three ) (IFS=,; eval echo prefix_\{"${ARRAY[*]}"\}_suffix) For strings: STRING="one two three" eval echo prefix_\{${STRING// /,}\}_suffix eval causes its arguments to be evaluated twice; in both cases the first evaluation results in echo prefix_{one,two,three}_suffix and the second executes it. For the array case, a subshell is used to avoid overwriting IFS. You can also do this in zsh: echo ${${ARRAY[@]/#/prefix_}/%/_suffix} A: Perhaps this would be the most elegant solution: $ declare -a ARRAY=( one two three ) $ declare -p ARRAY declare -a ARRAY=([0]="one" [1]="two" [2]="three") $ $ IFS=$'\n' ARRAY=( $(printf 'prefix %s_suffix\n' "${ARRAY[@]}") ) $ $ declare -p ARRAY declare -a ARRAY=([0]="prefix one_suffix" [1]="prefix two_suffix" [2]="prefix three_suffix") $ $ printf '%s\n' "${ARRAY[@]}" prefix one_suffix prefix two_suffix prefix three_suffix $ By using IFS=$'\n' in front of the array reassignment (being valid only for this assignment line), it is possible to preserve spaces in both prefix & suffix as well as array element strings. Using "printf" is rather handy, because it allows applying the format string (1st argument) to each additional string argument supplied to the call of "printf".
Prefix and postfix elements of a bash array
I want to pre- and postfix an array in bash similar to brace expansion. Say I have a bash array ARRAY=( one two three ) I want to be able to pre- and postfix it like the following brace expansion echo prefix_{one,two,three}_suffix The best I've been able to find uses bash regex to either add a prefix or a suffix echo ${ARRAY[@]/#/prefix_} echo ${ARRAY[@]/%/_suffix} but I can't find anything on how to do both at once. Potentially I could use regex captures and do something like echo ${ARRAY[@]/.*/prefix_$1_suffix} but it doesn't seem like captures are supported in bash variable regex substitution. I could also store a temporary array variable like PRE=(${ARRAY[@]/#/prefix_}) echo ${PRE[@]/%/_suffix} This is probably the best I can think of, but it still seems sub par. A final alternative is to use a for loop akin to EXPANDED="" for E in ${ARRAY[@]}; do EXPANDED="prefix_${E}_suffix $EXPANDED" done echo $EXPANDED but that is super ugly. I also don't know how I would get it to work if I wanted spaces anywhere in the prefix, suffix, or array elements.
[ "Bash brace expansion don't use regexes. The pattern used is just some shell glob, which you can find in bash manual 3.5.8.1 Pattern Matching.\nYour two-step solution is cool, but it needs some quotes for whitespace safety:\nARR_PRE=(\"${ARRAY[@]/#/prefix_}\")\necho \"${ARR_PRE[@]/%/_suffix}\"\n\nYou can also do it in some evil way:\neval \"something $(printf 'pre_%q_suf ' \"${ARRAY[@]}\")\"\n\n", "Your last loop could be done in a whitespace-friendly way with:\nEXPANDED=()\nfor E in \"${ARRAY[@]}\"; do\n EXPANDED+=(\"prefix_${E}_suffix\")\ndone\necho \"${EXPANDED[@]}\"\n\n", "Prettier but essentially the same as the loop solution:\n$ ARRAY=(A B C)\n$ mapfile -t -d $'\\0' EXPANDED < <(printf \"prefix_%s_postfix\\0\" \"${ARRAY[@]}\")\n$ echo \"${EXPANDED[@]}\"\nprefix_A_postfix prefix_B_postfix prefix_C_postfix\n\nmapfile reads rows into elements of an array. With -d $'\\0' it instead reads null-delimited strings and -t omits the delimiter from the result. See help mapfile.\n", "For arrays:\nARRAY=( one two three )\n(IFS=,; eval echo prefix_\\{\"${ARRAY[*]}\"\\}_suffix)\n\nFor strings:\nSTRING=\"one two three\"\neval echo prefix_\\{${STRING// /,}\\}_suffix\n\neval causes its arguments to be evaluated twice, in both cases first evaluation results in\necho prefix_{one,two,three}_suffix\n\nand second executes it.\nFor array case subshell is used to avoid overwiting IFS\nYou can also do this in zsh:\necho ${${ARRAY[@]/#/prefix_}/%/_suffix}\n\n", "Perhaps this would be the most elegant solution:\n$ declare -a ARRAY=( one two three )\n$ declare -p ARRAY\ndeclare -a ARRAY=([0]=\"one\" [1]=\"two\" [2]=\"three\")\n$\n$ IFS=$'\\n' ARRAY=( $(printf 'prefix %s_suffix\\n' \"${ARRAY[@]}\") )\n$\n$ declare -p ARRAY\ndeclare -a ARRAY=([0]=\"prefix one_suffix\" [1]=\"prefix two_suffix\" [2]=\"prefix three_suffix\")\n$\n$ printf '%s\\n' \"${ARRAY[@]}\"\nprefix one_suffix\nprefix two_suffix\nprefix three_suffix\n$\n\nBy using IFS=$'\\n' in front of the array reassignment (being valid only for this assignment line), it is possible to preserve spaces in both prefix & suffix as well as array element strings.\nUsing \"printf\" is rather handy, because it allows to apply the format string (1st argument) to each additional string argument supplied to the call of \"printf\".\n" ]
[ 24, 15, 3, 1, 0 ]
[ "I have exactly the same question, and I come up with the following solution using sed's word boundary match mechanism:\nmyarray=( one two three )\nnewarray=( $(echo ${myarray[*]}|sed \"s/\\(\\b[^ ]\\+\\)/pre-\\1-post/g\") )\necho ${newarray[@]}\n> pre-one-post pre-two-post pre-three-post\necho ${#newarray[@]}\n> 3\n\nWaiting for more elegant solutions...\n" ]
[ -1 ]
[ "arrays", "bash", "regex" ]
stackoverflow_0020366609_arrays_bash_regex.txt
Q: XTEXT web editor - custom highlighting in Eclipse Ace I need guidance about how to use custom highlighting in the Xtext web editor. I am using Eclipse Ace for this. Actually, I need to highlight or change the color of the text where the user enters duplicate names. Regards. Define Top-Package { } Define Timing { UUID: "hello" SHORT-NAME: "xecutionTiming" CATEGORY: "" NAME: "iram" TRACEABLE-SPECIFICATION-REFS: "" Define Constraints { Define Execution-Time-Constraint { UUID: "" SHORT-NAME: "WiperCtrlBasic" CATEGORY: "" NAME: "rida" RESUME-REFS: "" PREEMPTION-REFS: "" START: "" STOP: "" } Define Execution-Time-Constraint { UUID: "" SHORT-NAME: "WiperCtrlBasic" CATEGORY: "" NAME: "misbah" RESUME-REFS: "" PREEMPTION-REFS: "" START: "" STOP: "" } Define Execution-Time-Constraint { UUID: "" SHORT-NAME: "WiperCtrlBasic" CATEGORY: "" NAME: "iram" RESUME-REFS: "" PREEMPTION-REFS: "" START: "" STOP: "" } } } } } A: The intended "vanilla Ace" way of doing validation is to either use workers (e.g. see javascript_worker.js), or do it yourself whenever you want it. Ace doesn't have semantic syntax highlighting in the same way VS Code does, but you can use editor.session.setAnnotations to add colored backgrounds behind tokens if you want.
XTEXT web editor - custom highlighting in Eclipse Ace
I need guidance about how to use custom highlighting in the Xtext web editor. I am using Eclipse Ace for this. Actually, I need to highlight or change the color of the text where the user enters duplicate names. Regards. Define Top-Package { } Define Timing { UUID: "hello" SHORT-NAME: "xecutionTiming" CATEGORY: "" NAME: "iram" TRACEABLE-SPECIFICATION-REFS: "" Define Constraints { Define Execution-Time-Constraint { UUID: "" SHORT-NAME: "WiperCtrlBasic" CATEGORY: "" NAME: "rida" RESUME-REFS: "" PREEMPTION-REFS: "" START: "" STOP: "" } Define Execution-Time-Constraint { UUID: "" SHORT-NAME: "WiperCtrlBasic" CATEGORY: "" NAME: "misbah" RESUME-REFS: "" PREEMPTION-REFS: "" START: "" STOP: "" } Define Execution-Time-Constraint { UUID: "" SHORT-NAME: "WiperCtrlBasic" CATEGORY: "" NAME: "iram" RESUME-REFS: "" PREEMPTION-REFS: "" START: "" STOP: "" } } } } }
[ "The intended \"vanilla Ace\" way of doing validation is to either use workers (e.g. see javascript_worker.js), or do it yourself whenever you want it.\nAce doesn't have semantic syntax highlighting in the same way VS Code does, but you can use editor.session.setAnnotations to add colored backgrounds behind tokens if you want.\n" ]
[ 0 ]
[]
[]
[ "ace_editor", "editor", "highlight", "web", "xtext" ]
stackoverflow_0073499664_ace_editor_editor_highlight_web_xtext.txt
Q: How can I define some initial values that are variable in formula How can I define some initial values (that are variable) in formulas? I should write code to predict and also optimize the prediction of a data series, but it has many formulas as a gray box, and in these formulas I should define some initial values (as variables). A: Default Arguments: def student(firstname, lastname ='Mark', standard ='Fifth'): print(firstname, lastname, 'studies in', standard, 'Standard') We need to keep the following points in mind while calling functions: In the case of passing the keyword arguments, the order of arguments is important. There should be only one value for one parameter. The passed keyword name should match with the actual keyword name. In the case of calling a function containing non-keyword arguments, the order is important. Example #1: Calling functions without keyword arguments def student(firstname, lastname ='Mark', standard ='Fifth'): print(firstname, lastname, 'studies in', standard, 'Standard') # 1 positional argument student('John') # 3 positional arguments student('John', 'Gates', 'Seventh') # 2 positional arguments student('John', 'Gates') student('John', 'Seventh') Output: John Mark studies in Fifth Standard John Gates studies in Seventh Standard John Gates studies in Fifth Standard John Seventh studies in Fifth Standard Example #2: Calling functions with keyword arguments def student(firstname, lastname ='Mark', standard ='Fifth'): print(firstname, lastname, 'studies in', standard, 'Standard') # 1 keyword argument student(firstname ='John') # 2 keyword arguments student(firstname ='John', standard ='Seventh') # 2 keyword arguments student(lastname ='Gates', firstname ='John') Output: John Mark studies in Fifth Standard John Mark studies in Seventh Standard John Gates studies in Fifth Standard Example #3: Some invalid function calls def student(firstname, lastname ='Mark', standard ='Fifth'): print(firstname, lastname, 'studies in', standard, 'Standard') # required argument missing student() # non keyword argument after a keyword argument student(firstname ='John', 'Seventh') # unknown keyword argument student(subject ='Maths') The above code will throw an error because: In the first call, no value is passed for the parameter firstname, which is a required parameter. In the second call, there is a non-keyword argument after a keyword argument. In the third call, the passed keyword argument does not match any actual keyword name. Example #4: Using a dictionary as a mutable default argument value. itemName is the name of the item and quantity is the number of such items. Note that the same default dictionary is shared across calls, because default argument values are evaluated only once; that is why the output below accumulates. def addItemToDictionary(itemName, quantity, itemList = {}): itemList[itemName] = quantity return itemList print(addItemToDictionary('notebook', 4)) print(addItemToDictionary('pencil', 1)) print(addItemToDictionary('eraser', 1)) Output {'notebook': 4} {'notebook': 4, 'pencil': 1} {'notebook': 4, 'pencil': 1, 'eraser': 1}
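One caveat worth adding to the last example: it relies on Python evaluating default argument values once, so one dictionary is shared across calls. If a fresh dictionary per call is wanted instead, the usual idiom is a None default; a small sketch (the function is renamed here for clarity):

def add_item(item_name, quantity, item_list=None):
    # A new dictionary is created on each call instead of sharing one default
    if item_list is None:
        item_list = {}
    item_list[item_name] = quantity
    return item_list

print(add_item('notebook', 4))  # {'notebook': 4}
print(add_item('pencil', 1))    # {'pencil': 1}, no carry-over from the first call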
How can I define some initial values that are variable in formula
How can I define some initial values (that are variable) in formulas? I should write code to predict and also optimize the prediction of a data series, but it has many formulas as a gray box, and in these formulas I should define some initial values (as variables).
[ "Default Arguments:\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\nWe need to keep the following points in mind while calling functions:\nIn the case of passing the keyword arguments, the order of arguments is important.\nThere should be only one value for one parameter.\nThe passed keyword name should match with the actual keyword name.\nIn the case of calling a function containing non-keyword arguments, the order is important.\nExample #1: Calling functions without keyword arguments\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\n# 1 positional argument\nstudent('John')\n\n# 3 positional arguments \nstudent('John', 'Gates', 'Seventh') \n\n# 2 positional arguments\nstudent('John', 'Gates') \nstudent('John', 'Seventh')\n\nOutput:\nJohn Mark studies in Fifth Standard\nJohn Gates studies in Seventh Standard\nJohn Gates studies in Fifth Standard\nJohn Seventh studies in Fifth Standard\n\nExample #2: Calling functions with keyword arguments\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\n# 1 keyword argument\nstudent(firstname ='John') \n\n# 2 keyword arguments \nstudent(firstname ='John', standard ='Seventh')\n\n# 2 keyword arguments\nstudent(lastname ='Gates', firstname ='John') \n \n\nOutput:\nJohn Mark studies in Fifth Standard\nJohn Mark studies in Seventh Standard\nJohn Gates studies in Fifth Standard\n \n\nExample #3: Some Invalid function calls\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\n# required argument missing\nstudent() \n\n# non keyword argument after a keyword argument \nstudent(firstname ='John', 'Seventh')\n\n# unknown keyword argument\nstudent(subject ='Maths') \n\nThe above code will throw an error because:\nIn the first call, value is not passed for parameter firstname which is the required parameter.\nIn the second call, there is a non-keyword argument after a keyword argument.\nIn the third call, the passing keyword argument is not matched with the actual keyword name arguments.\nExample using dictionary\nmutable default argument values example using python dictionary\nitemName is the name of item and quantity is the number of such\nitems are there\ndef addItemToDictionary(itemName, quantity, itemList = {}):\n itemList[itemName] = quantity\n return itemList\n\n\nprint(addItemToDictionary('notebook', 4))\nprint(addItemToDictionary('pencil', 1))\nprint(addItemToDictionary('eraser', 1))\n\nOutput\n{'notebook': 4}\n{'notebook': 4, 'pencil': 1}\n{'notebook': 4, 'pencil': 1, 'eraser': 1}\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074659988_python.txt
Q: How do I pass down my search results to other components as a prop Looking to solve how to pass my search results to other components so when users use the search bar, the searched results get displayed instead of that component's rendered data; in this case it would be HomeScreen. I'm using React Router v5 and I tried passing it through the router, but many attempts didn't work. Should I create a separate search router too? App.js: <Container> <Route path="/" component={HomeScreen} exact /> <Route path="/login" component={LoginScreen} exact /> <Route path="/register" component={RegisterScreen} exact /> <Route path="/product/:id" component={ProductScreen} exact /> <Route path="/cart/:id?" component={CartScreen} exact /> </Container> header.js: function Header() { const userLogin = useSelector((state) => state.userLogin); const { userInfo } = userLogin; // const [items, setItems] = useState(""); const [searchResults, setSearchResults] = useState([]); const debounce = useDebounce(searchResults, 500); const dispatch = useDispatch(); const logoutHandler = () => { dispatch(logout()); }; useEffect(() => { axios.get(`/api/search/?search=${searchResults}`).then((response) => { setSearchResults(response.data[0]); console.log(response.data[0]); }); }, [debounce]); const handleSearch = (e) => { setSearchResults(e.target.value); }; return ( <div> <Navbar bg="dark" variant="dark" className="navCustom"> <Container> <LinkContainer to="/"> <Navbar.Brand>eCommerce</Navbar.Brand> </LinkContainer> <Form className="d-flex"> <Form.Control type="search" placeholder="Search" className="me-2" aria-label="Search" onChange={handleSearch} /> <Button variant="outline-success">Search</Button> </Form> HomeScreen.js: function HomeScreen({ searchResults }) { const dispatch = useDispatch(); const productList = useSelector((state) => state.productList); const { error, loading, products } = productList; useEffect(() => { dispatch(listProducts()); }, [dispatch]); return ( <div> {searchResults.length > 0 ? ( <Row> {searchResults.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> ) : ( // Fall back to rendering regular products <Row> {products && products.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> )} </div> ); } export default HomeScreen; A: If you are getting an "Uncaught TypeError: searchResults is undefined" error in the console, it means that the searchResults variable is being accessed in your code, but it has not been defined or initialized. This can happen if you are trying to access the searchResults variable before it has been set to a value, or if you are trying to access the searchResults variable in a scope where it is not available. To fix this error, you will need to make sure that the searchResults variable is defined and initialized before you try to access it in your code. You can do this by adding a default value for the searchResults variable when you declare it, or by checking if the searchResults variable is defined before you try to access it. Here is an example of how you could fix this error: // In the HomeScreen component function HomeScreen({ location }) { // Set a default value for searchResults const searchResults = location.state ? location.state.searchResults : []; return ( <div> {searchResults.length > 0 ?
( <Row> {searchResults.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> ) : ( // Fall back to rendering regular products <Row> {products && products.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> )} </div> ); } In this example, the searchResults variable is set to an empty array if the location.state property is undefined or null. This ensures that the searchResults variable is always defined and initialized, and it will prevent the "Uncaught TypeError: searchResults is undefined" error from occurring. A: This seems like a good use case of React Context. Within the Context, you can use useState to set the results. The Context can be provided to other components within your app.
How do I pass down my search results to other components as a prop
Looking to solve how to pass my search results to other components so when users use the search bar, the searched results get displayed instead of that component's rendered data; in this case it would be HomeScreen. I'm using React Router v5 and I tried passing it through the router, but many attempts didn't work. Should I create a separate search router too? App.js: <Container> <Route path="/" component={HomeScreen} exact /> <Route path="/login" component={LoginScreen} exact /> <Route path="/register" component={RegisterScreen} exact /> <Route path="/product/:id" component={ProductScreen} exact /> <Route path="/cart/:id?" component={CartScreen} exact /> </Container> header.js: function Header() { const userLogin = useSelector((state) => state.userLogin); const { userInfo } = userLogin; // const [items, setItems] = useState(""); const [searchResults, setSearchResults] = useState([]); const debounce = useDebounce(searchResults, 500); const dispatch = useDispatch(); const logoutHandler = () => { dispatch(logout()); }; useEffect(() => { axios.get(`/api/search/?search=${searchResults}`).then((response) => { setSearchResults(response.data[0]); console.log(response.data[0]); }); }, [debounce]); const handleSearch = (e) => { setSearchResults(e.target.value); }; return ( <div> <Navbar bg="dark" variant="dark" className="navCustom"> <Container> <LinkContainer to="/"> <Navbar.Brand>eCommerce</Navbar.Brand> </LinkContainer> <Form className="d-flex"> <Form.Control type="search" placeholder="Search" className="me-2" aria-label="Search" onChange={handleSearch} /> <Button variant="outline-success">Search</Button> </Form> HomeScreen.js: function HomeScreen({ searchResults }) { const dispatch = useDispatch(); const productList = useSelector((state) => state.productList); const { error, loading, products } = productList; useEffect(() => { dispatch(listProducts()); }, [dispatch]); return ( <div> {searchResults.length > 0 ? ( <Row> {searchResults.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> ) : ( // Fall back to rendering regular products <Row> {products && products.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> )} </div> ); } export default HomeScreen;
[ "If you are getting an \"Uncaught TypeError: searchResults is undefined\" error in the console, it means that the searchResults variable is being accessed in your code, but it has not been defined or initialized. This can happen if you are trying to access the searchResults variable before it has been set to a value, or if you are trying to access the searchResults variable in a scope where it is not available.\nTo fix this error, you will need to make sure that the searchResults variable is defined and initialized before you try to access it in your code. You can do this by adding a default value for the searchResults variable when you declare it, or by checking if the searchResults variable is defined before you try to access it.\nHere is an example of how you could fix this error:\n// In the HomeScreen component\nfunction HomeScreen({ location }) {\n // Set a default value for searchResults\n const searchResults = location.state ? location.state.searchResults : [];\n\n return (\n <div>\n {searchResults.length > 0 ? (\n <Row>\n {searchResults.map((product) => (\n <Col key={product._id} sm={12} md={6} lg={4} xl={3}>\n <Product product={product} />\n </Col>\n ))}\n </Row>\n ) : (\n // Fall back to rendering regular products\n <Row>\n {products &&\n products.map((product) => (\n <Col key={product._id} sm={12} md={6} lg={4} xl={3}>\n <Product product={product} />\n </Col>\n ))}\n </Row>\n )}\n </div>\n );\n}\n\nIn this example, the searchResults variable is set to an empty array if the location.state property is undefined or null. This ensures that the searchResults variable is always defined and initialized, and it will prevent the \"Uncaught TypeError: searchResults is undefined\" error from occurring.\n", "This seems like a good use case of React Context. Within the Context, you can use useState to set the results. The Context can be provided to other components within your app.\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "reactjs", "typescript" ]
stackoverflow_0074660058_javascript_reactjs_typescript.txt
Q: Ignore first element with XPATH I need to get only text from the following structure, however, ignoring the first element, which would be the <span>SIGNIFICADO: </span> tag <p class="p1"> <span>SIGNIFICADO: </span> <strong> <a href="www.site.com">Text Link</a> </strong> Some text Some text Some text </p> Currently I do it like this: p1=driver.find_element(By.XPATH,'//p[@class="p1"]').text And if I put this xpath: //p[@class="p1"]/text() Text that is inside the <a> tag is ignored. How can I get all the text except the first one that is inside <span>? A: To omit the text of the first element and anything before it, you can use //p[@class="p1"]//text()[preceding::*] This selects all text nodes that have at least one preceding element (here: <span>). A disadvantage is that this also discards text between the <p> element and the <span> element.
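Since Selenium's find_element returns elements rather than bare text nodes, one way to evaluate the text()-returning XPath above from Python is to run it over the page source with lxml; a sketch, assuming the existing driver session from the question:

from lxml import html

tree = html.fromstring(driver.page_source)
# Returns the bare text nodes that follow at least one element inside the <p>
texts = tree.xpath('//p[@class="p1"]//text()[preceding::*]')
print(' '.join(t.strip() for t in texts if t.strip()))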
Ignore first element with XPATH
I need to get only text from the following structure, however, ignoring the first element, which would be the <span>SIGNIFICADO: </span> tag <p class="p1"> <span>SIGNIFICADO: </span> <strong> <a href="www.site.com">Text Link</a> </strong> Some text Some text Some text </p> Currently I do it like this: p1=driver.find_element(By.XPATH,'//p[@class="p1"]').text And if I put this xpath: //p[@class="p1"]/text() Text that is inside the <a> tag is ignored. How can I get all the text except the first one that is inside <span>?
[ "To omit text of and before the first element, you can use\n//p[@class=\"p1\"]//text()[preceding::*]\n\nThis selects all text nodes that have at least one preceding element(here: <span>. Disadvantage is that this also discards text between the <p> element and the <span> element.\n" ]
[ 0 ]
[]
[]
[ "python", "xpath" ]
stackoverflow_0074656703_python_xpath.txt
Q: Convert a dictionary into a list by enumerating? I have a list created from a csv file that is a dictionary within a list. I need to officially convert the list to a dictionary. I have a working solution below, but what's wrong with it is that it labels each dictionary row sequentially as sub[i], for example name[0] and name[1]. I have searched around online and it seems that enumerate requires an integer be assigned to each row, hence the naming sequence, but I would need, for this program to work, to translate the dictionary while leaving the names for each row blank. data = [] check_correct_args() try: with open(sys.argv[1]) as csvfile: #with open('Address_Book.csv', "r") as csvfile: reader = csv.DictReader(csvfile) # reader = csv.reader(csvfile) for row in reader: data.append(row) #return data except FileNotFoundError: sys.exit("Couldn't read csv file") dictionary = {f'name{i}':v for i, v in enumerate(data)} I'm looking for something along the lines of (in pseudocode): dictionary = {f'':v for i, v in enumerate(data)} A: To convert a dictionary into a list by enumerating in Python, you can use the items() method to get a list of the key-value pairs in the dictionary, and then use a for loop to enumerate over the pairs and create a new list. Here is an example: # Define a dictionary my_dict = {'apple': 1, 'banana': 2, 'cherry': 3} # Create an empty list my_list = [] # Enumerate over the key-value pairs in the dictionary for key, value in my_dict.items(): # Create a tuple with the key and value, and append it to the list my_list.append((key, value)) # Print the resulting list print(my_list) In this example, the dictionary my_dict contains three key-value pairs, and the for loop enumerates over these pairs and creates a list of tuples containing the keys and values. The output of this code would be: [('apple', 1), ('banana', 2), ('cherry', 3)] You can also use a list comprehension to convert a dictionary into a list of tuples, like this: # Define a dictionary my_dict = {'apple': 1, 'banana': 2, 'cherry': 3} # Use a list comprehension to create a list of tuples my_list = [(key, value) for key, value in my_dict.items()] # Print the resulting list print(my_list) This code does the same thing as the previous example, but it uses a list comprehension to create the list of tuples in a more concise way. The output would be the same as above.
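A side note on the original goal: csv.DictReader already yields one dict per row, so the list data is often the right structure as-is. If a single dictionary is still wanted, one option is to key each row by one of its own columns instead of an invented name{i}; a sketch, assuming the CSV has a unique 'name' column (a hypothetical header, adjust to the real one):

import csv
import sys

with open(sys.argv[1], newline='') as csvfile:
    data = list(csv.DictReader(csvfile))

# Key each row dict by its own 'name' field instead of a generated label
dictionary = {row['name']: row for row in data}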
Convert a dictionary into a list by enumerating?
I have a list created from a csv file that is a dictionary within a list. I need to officially convert the list to a dictionary. I have a working solution below, but what's wrong with it is that it labels each dictionary row sequentially as sub[i], for example name[0] and name[1]. I have searched around online and it seems that enumerate requires an integer be assigned to each row, hence the naming sequence, but I would need, for this program to work, to translate the dictionary while leaving the names for each row blank. data = [] check_correct_args() try: with open(sys.argv[1]) as csvfile: #with open('Address_Book.csv', "r") as csvfile: reader = csv.DictReader(csvfile) # reader = csv.reader(csvfile) for row in reader: data.append(row) #return data except FileNotFoundError: sys.exit("Couldn't read csv file") dictionary = {f'name{i}':v for i, v in enumerate(data)} I'm looking for something along the lines of (in pseudocode): dictionary = {f'':v for i, v in enumerate(data)}
[ "To convert a dictionary into a list by enumerating in Python, you can use the items() method to get a list of the key-value pairs in the dictionary, and then use a for loop to enumerate over the pairs and create a new list.\nHere is an example:\n# Define a dictionary\nmy_dict = {'apple': 1, 'banana': 2, 'cherry': 3}\n\n# Create an empty list\nmy_list = []\n\n# Enumerate over the key-value pairs in the dictionary\nfor key, value in my_dict.items():\n # Create a tuple with the key and value, and append it to the list\n my_list.append((key, value))\n\n# Print the resulting list\nprint(my_list)\n\n\nIn this example, the dictionary my_dict contains three key-value pairs, and the for loop enumerates over these pairs and creates a list of tuples containing the keys and values. The output of this code would be:\n[('apple', 1), ('banana', 2), ('cherry', 3)]\n\n\nYou can also use a list comprehension to convert a dictionary into a list of tuples, like this:\n# Define a dictionary\nmy_dict = {'apple': 1, 'banana': 2, 'cherry': 3}\n\n# Use a list comprehension to create a list of tuples\nmy_list = [(key, value) for key, value in my_dict.items()]\n\n# Print the resulting list\nprint(my_list)\n\n\nThis code does the same thing as the previous example, but it uses a list comprehension to create the list of tuples in a more concise way. The output would be the same as above.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074659808_python_python_3.x.txt
Q: How do I crop a python array to maximum size with only non-zero values (largest non-zero rectangle) I have a numpy array of pixel data, something like 0 0 0 0 0 0 0 0 1 3 4 6 1 0 0 2 3 5 2 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 I would like to get a new array which excludes any outer rows/columns with zeroes, so I just end up with only the non-zero values (that works for any given array) i.e. 1 3 4 6 1 2 3 5 2 1 So far all I've managed to get is 1 3 4 6 1 2 3 5 2 1 1 0 0 1 0 using np.argwhere to find the "min" and "max" non-zero values, but this still includes rows/columns with zero and non-zero values in. My actual array: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 0 0 0 0 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 0 0 0 0 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 0 0 0 0 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 0 0 0 0 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 0 0 0 0 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 0 0 0 0 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 0 0 0 0 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 0 0 0 0 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 0 0 0 0 1891 1852 1926 1803 1863 1814 1849 1857 
1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 0 0 0 0 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 0 0 0 0 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 0 0 0 0 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 0 0 0 0 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 0 0 0 0 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 0 0 0 0 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 0 0 0 0 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 0 0 0 0 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 0 0 0 0 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 0 0 0 0 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 0 0 0 0 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 0 0 0 0 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 
1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 0 0 0 0 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 0 0 0 0 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 0 0 0 0 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 0 0 0 0 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 0 0 0 0 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 0 0 0 0 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 0 0 0 0 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 0 0 0 0 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 0 0 0 0 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 0 0 0 0 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 0 0 0 0 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 0 0 0 0 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 
1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 0 0 0 0 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 0 0 0 0 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 0 0 0 0 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 0 0 0 0 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 0 0 0 0 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 0 0 0 0 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 0 0 0 0 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800 0 0 0 0 0 1782 0 0 0 0 1879 0 0 0 0 1884 0 0 0 0 0 0 0 1893 0 1932 1909 1938 0 0 0 0 0 1928 0 0 1816 0 0 1921 1887 0 0 0 0 1876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1907 0 0 0 0 1944 0 0 0 0 1954 0 0 0 0 0 0 0 1930 0 1875 1882 1912 0 0 0 0 0 1890 0 0 1875 0 0 1873 1872 0 0 0 0 1897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 A: Welcome to StackOverflow! Input: [[ 0 0 0 ... 0 0 0] [ 0 0 0 ... 0 0 0] [ 0 0 1872 ... 1765 0 0] ... [ 0 0 1850 ... 1800 0 0] [ 0 0 0 ... 0 0 0] [ 0 0 0 ... 
0 0 0]] Input array.npy: the same array as shown in the question above. Solution 1: import numpy as np np_input = np.load('array.npy') # Drop columns that are entirely zero np_input = np_input[:, (np_input != 0).any(axis=0)] # Drop rows that are entirely zero np_input = np_input[(np_input != 0).any(axis=1)] # Convert to a list of lists np_input = np.input.tolist() if False else np_input.tolist() # Drop any remaining row that still contains a zero np_input = [x for x in np_input if 0 not in x] # Convert np_input back to a numpy array final_np = np.array(np_input) print(final_np) Solution 2: import numpy as np np_input = np.load('array.npy') final_np = np.array([x for x in np_input[:, (np_input != 0).any(axis=0)][(np_input != 0).any(axis=1)].tolist() if 0 not in x]) print(final_np) Output: [[1872 1803 1731 ... 1709 1774 1765] [1937 1746 1790 ... 1685 1814 1756] [1754 1895 1806 ... 1817 1885 1792] ... [1861 1895 1819 ... 1861 1867 1844] [1822 1867 1806 ... 1786 1919 1887] [1850 1926 1855 ...
1861 1761 1800]] Output array.npy 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 
1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 
1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 
1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800
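For quick reference, here is a self-contained sketch of the masking approach used in Solution 1 and Solution 2 above, run on the small example array from the question instead of 'array.npy' (the import line is an addition; the original snippets assume numpy is already available as np):
import numpy as np

# Small stand-in for the array loaded from 'array.npy' in the solutions above.
data = np.array([[0, 0, 0, 0, 0, 0, 0],
                 [0, 1, 3, 4, 6, 1, 0],
                 [0, 2, 3, 5, 2, 1, 0],
                 [0, 1, 0, 0, 1, 0, 0],
                 [0, 0, 0, 0, 0, 0, 0]])

# Keep only the columns, then the rows, that contain at least one non-zero value.
data = data[:, (data != 0).any(axis=0)]
data = data[(data != 0).any(axis=1)]

# Drop any remaining row that still contains a zero.
cropped = np.array([row for row in data.tolist() if 0 not in row])
print(cropped)
# [[1 3 4 6 1]
#  [2 3 5 2 1]]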
How do I crop a Python array to maximum size with only non-zero values (largest non-zero rectangle)
I have a numpy array of pixel data, something like 0 0 0 0 0 0 0 0 1 3 4 6 1 0 0 2 3 5 2 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 I would like to get a new array which excludes any outer rows/columns with zeroes, so I just end up with only the non-zero values (that works for any given array) i.e. 1 3 4 6 1 2 3 5 2 1 So far all I've managed to get is 1 3 4 6 1 2 3 5 2 1 1 0 0 1 0 using np.argwhere to find the "min" and "max" non-zero values, but this still includes rows/columns with zero and non-zero values in. My actual array: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 0 0 0 0 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 0 0 0 0 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 0 0 0 0 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 0 0 0 0 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 0 0 0 0 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 0 0 0 0 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 0 0 0 0 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 0 0 0 0 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 0 0 0 0 1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 
1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 0 0 0 0 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 0 0 0 0 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 0 0 0 0 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 0 0 0 0 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 0 0 0 0 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 0 0 0 0 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 0 0 0 0 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 0 0 0 0 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 0 0 0 0 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 0 0 0 0 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 0 0 0 0 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 0 0 0 0 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 
1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 0 0 0 0 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 0 0 0 0 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 0 0 0 0 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 0 0 0 0 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 0 0 0 0 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 0 0 0 0 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 0 0 0 0 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 0 0 0 0 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 0 0 0 0 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 0 0 0 0 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 0 0 0 0 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 0 0 0 0 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 
1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 0 0 0 0 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 0 0 0 0 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 0 0 0 0 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 0 0 0 0 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 0 0 0 0 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 0 0 0 0 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 0 0 0 0 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800 0 0 0 0 0 1782 0 0 0 0 1879 0 0 0 0 1884 0 0 0 0 0 0 0 1893 0 1932 1909 1938 0 0 0 0 0 1928 0 0 1816 0 0 1921 1887 0 0 0 0 1876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1907 0 0 0 0 1944 0 0 0 0 1954 0 0 0 0 0 0 0 1930 0 1875 1882 1912 0 0 0 0 0 1890 0 0 1875 0 0 1873 1872 0 0 0 0 1897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
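For context, the np.argwhere attempt described above might look like the following sketch; the question does not show the exact code, so this is an illustrative reconstruction on the small example array, and it reproduces the partial result quoted above (the mixed zero/non-zero row is still included):
import numpy as np

data = np.array([[0, 0, 0, 0, 0, 0, 0],
                 [0, 1, 3, 4, 6, 1, 0],
                 [0, 2, 3, 5, 2, 1, 0],
                 [0, 1, 0, 0, 1, 0, 0],
                 [0, 0, 0, 0, 0, 0, 0]])

# Bounding box of all non-zero entries.
nonzero = np.argwhere(data != 0)
(r0, c0), (r1, c1) = nonzero.min(axis=0), nonzero.max(axis=0)
print(data[r0:r1 + 1, c0:c1 + 1])
# [[1 3 4 6 1]
#  [2 3 5 2 1]
#  [1 0 0 1 0]]  <- the mixed row is still included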
[ "Welcome to StackOverflow!\nInput:\n[[ 0 0 0 ... 0 0 0]\n [ 0 0 0 ... 0 0 0]\n [ 0 0 1872 ... 1765 0 0]\n ...\n [ 0 0 1850 ... 1800 0 0]\n [ 0 0 0 ... 0 0 0]\n [ 0 0 0 ... 0 0 0]]\n\nInput array.npy\n0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n0 0 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 0 0\n0 0 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 0 0\n0 0 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 0 0\n0 0 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 0 0\n0 0 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 0 0\n0 0 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 0 0\n0 0 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 0 0\n0 0 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 0 0\n0 0 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 0 0\n0 0 1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 0 0\n0 0 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 
1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 0 0\n0 0 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 0 0\n0 0 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 0 0\n0 0 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 0 0\n0 0 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 0 0\n0 0 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 0 0\n0 0 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 0 0\n0 0 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 0 0\n0 0 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 0 0\n0 0 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 0 0\n0 0 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 0 0\n0 0 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 0 0\n0 0 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 
1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 0 0\n0 0 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 0 0\n0 0 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 0 0\n0 0 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 0 0\n0 0 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 0 0\n0 0 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 0 0\n0 0 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 0 0\n0 0 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 0 0\n0 0 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 0 0\n0 0 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 0 0\n0 0 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 0 0\n0 0 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 0 0\n0 0 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 
1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 0 0\n0 0 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 0 0\n0 0 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 0 0\n0 0 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 0 0\n0 0 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 0 0\n0 0 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 0 0\n0 0 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800 0 0\n0 0 0 1782 0 0 0 0 1879 0 0 0 0 1884 0 0 0 0 0 0 0 1893 0 1932 1909 1938 0 0 0 0 0 1928 0 0 1816 0 0 1921 1887 0 0 0 0 1876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n0 0 0 1907 0 0 0 0 1944 0 0 0 0 1954 0 0 0 0 0 0 0 1930 0 1875 1882 1912 0 0 0 0 0 1890 0 0 1875 0 0 1873 1872 0 0 0 0 1897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n\n\nSolution 1:\nnp_input = np.load('array.npy')\n\n# Remove all zeros from column\nnp_input = np_input[:, (np_input != 0).any(axis=0)]\n\n# Remove all zeros from row\nnp_input = np_input[(np_input != 0).any(axis=1)]\n\n# converting to list of lists\nnp_input = np_input.tolist()\n\n# Remove sub list that contains a zero\nnp_input = [x for x in np_input if 0 not in x]\n\n# Convert pixles_input to numpy array\nfinal_np = np.array(np_input)\n\nprint(final_np)\n\nSolution 2:\nnp_input = np.load('array.npy')\nfinal_np = np.array([x for x in np_input[:, (np_input != 0).any(axis=0)][(np_input != 0).any(axis=1)].tolist() if 0 not in x])\nprint(final_np)\n\n\nOutput:\n[[1872 1803 1731 ... 1709 1774 1765]\n [1937 1746 1790 ... 1685 1814 1756]\n [1754 1895 1806 ... 1817 1885 1792]\n ...\n [1861 1895 1819 ... 1861 1867 1844]\n [1822 1867 1806 ... 1786 1919 1887]\n [1850 1926 1855 ... 
1861 1761 1800]]\n\nOutput array.npy\n1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765\n1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756\n1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792\n1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817\n1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754\n1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864\n1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824\n1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876\n1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754\n1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928\n1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798\n1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875\n1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 
1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825\n1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813\n1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857\n1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857\n1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818\n1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898\n1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822\n1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834\n1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819\n1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907\n1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861\n1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881\n1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 
1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873\n1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842\n1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888\n1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873\n1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796\n1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755\n1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816\n1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855\n1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787\n1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842\n1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738\n1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829\n1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776\n1819 1896 1911 1936 1887 1847 1874 1894 
1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721\n1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844\n1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887\n1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800\n\n" ]
[ 0 ]
[ "If we go by your assumption that there likely won't be any zeros in the middle of the array, we can figure out if a row contains any zeros using any(axis=1) (or axis=0 for columns), and if a row contains all zeros using all\ndata = np.array([[0, 0, 0, 0, 0, 0, 0],\n [0, 1, 3, 4, 6, 1, 0],\n [0, 2, 3, 5, 2, 1, 0],\n [0, 1, 0, 0, 1, 0, 0],\n [0, 0, 0, 0, 0, 0, 0]])\n\nTo start, we want to delete those rows and columns that are all zeros.\ndelete_rows = (data == 0).all(axis=1)\ndelete_cols = (data == 0).all(axis=0)\n\nFor now, let's set those rows to -999 (since your data is pixel data, -999 is an invalid value that you never expect to see) so that data == 0 for the future steps isn't confused by these \"border\" rows/cols\ndata[delete_rows, :] = -999\ndata[:, delete_cols] = -999\n\nNext, let's find any rows that contain any zeros and are next to a row that's going to be deleted (previous or next row is in delete_rows):\nzero_rows = (data == 0).any(axis=1)\n\nd_r = np.zeros(zero_rows.shape, dtype=bool)\nd_r[1:] = d_r[1:] | delete_rows[:-1]\nd_r[:-1] = d_r[:-1] | delete_rows[1:]\n\ndelete_rows = delete_rows | (zero_rows & d_r)\ndata[delete_rows, :] = -999\n\nWe can repeat this until there are no more changes to delete_rows. I.e.:\ndel_count = sum(delete_rows)\nprev_del_count = del_count + 1\n\nwhile del_count != prev_del_count:\n zero_rows = (data == 0).any(axis=1)\n\n d_r = np.zeros(zero_rows.shape, dtype=bool)\n d_r[1:] = d_r[1:] | delete_rows[:-1]\n d_r[:-1] = d_r[:-1] | delete_rows[1:]\n\n delete_rows = delete_rows | (zero_rows & d_r)\n prev_del_count, del_count = del_count, sum(delete_rows)\n data[delete_rows, :] = -999\n\nThen, we can do the same for columns:\ndel_count = sum(delete_cols)\nprev_del_count = del_count + 1\n\nwhile del_count != prev_del_count:\n zero_cols = (data == 0).any(axis=0)\n\n d_c = np.zeros(zero_cols.shape, dtype=bool)\n d_c[1:] = d_c[1:] | delete_cols[:-1]\n d_c[:-1] = d_c[:-1] | delete_cols[1:]\n\n delete_cols = delete_cols | (zero_cols & d_c)\n prev_del_count, del_count = del_count, sum(delete_cols)\n data[:, delete_cols] = -999\n\nNow, we have:\ndelete_rows = np.array([ True, False, False, True, True])\ndelete_cols = np.array([ True, False, False, False, False, False, True])\n\nAnd we can filter out the required rows and cols:\nfiltered_data = data[~delete_rows, :][:, ~delete_cols]\n\nwhich gives:\narray([[1, 3, 4, 6, 1],\n [2, 3, 5, 2, 1]])\n\n" ]
[ -1 ]
[ "numpy", "numpy_ndarray", "python", "python_3.x" ]
stackoverflow_0074655756_numpy_numpy_ndarray_python_python_3.x.txt
Q: Project in Eclipse works fine but the .jar file does not I launch my app in Eclipse and it works fine, and I get the initialization of the entityManagerFactory by default, as I wish:
2022-11-28 13:32:58.558 INFO 12176 --- [ restartedMain] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'

but when I deploy my app as a runnable jar file it does not launch. I'm sharing this question after much research; here is my pom (I'm extracting the libraries into the generated jar):
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.6</version>
    </parent>
    <groupId>ZpectrumApp</groupId>
    <artifactId>ZpectrumApp</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>ZpectrumApp</name>
    <description>End-Degree project for Spring Boot</description>
    <dependencies>
        <dependency>
            <groupId>org.jcodec</groupId>
            <artifactId>jcodec</artifactId>
            <version>0.2.3</version>
        </dependency>
        <dependency>
            <groupId>org.jcodec</groupId>
            <artifactId>jcodec-javase</artifactId>
            <version>0.2.3</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.hibernate.javax.persistence/hibernate-jpa-2.1-api -->
        <dependency>
            <groupId>org.hibernate.javax.persistence</groupId>
            <artifactId>hibernate-jpa-2.1-api</artifactId>
            <version>1.0.2.Final</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jdbc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.json</groupId>
            <artifactId>json</artifactId>
            <version>20180130</version>
            <type>jar</type>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-entitymanager</artifactId>
        </dependency>
    </dependencies>
    <properties>
        <start-class>ZpectrumApplication.Application</start-class>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

I think it is possibly an incompatibility between the spring-boot version and another dependency, but I'm not certain. I also attach the console trace.
ZpectrumApplication: Cannot create inner bean '(inner bean)#71a9b4c7' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#71a9b4c7': Cannot resolve reference to bean 'entityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'entityManagerFactory' available A: You have to build and package the jar with dependencies, add the below to your plugins in the pom.xml, change the main mainClass to yours. <plugin> <artifactId>maven-assembly-plugin</artifactId> <executions> <execution> <phase>package</phase> <goals> <goal>single</goal> </goals> </execution> </executions> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> <mainClass>com.byteworks.epms.DeviceServiceListener</mainClass> </manifest> </archive> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> </configuration> </plugin> A: I solve it by deploying the project by command prompt in the next three steps: downloading Apache-Maven https://maven.apache.org/download.cgi following this tutorial: https://www.youtube.com/watch?v=-ARoNTL90hY modifying the pom.xml with manual configuration -> https://www.baeldung.com/executable-jar-with-maven
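A note on the answers above: since this is a Spring Boot project that already declares spring-boot-maven-plugin, the usual way to get a runnable jar with all dependencies is the plugin's repackage goal rather than Eclipse's "Runnable JAR" export. A common cause of the "No bean named 'entityManagerFactory'" failure is that the Eclipse export merges dependency jars and loses the META-INF/spring.factories auto-configuration metadata that the JPA bootstrap relies on; repackage preserves it. A minimal sketch, assuming the main class is ZpectrumApplication.Application as declared in the start-class property:
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <!-- repackage rewrites the plain jar into an executable "fat" jar -->
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
Then build and run from the command line:
mvn clean package
java -jar target/ZpectrumApp-0.0.1-SNAPSHOT.jar
The spring-boot-starter-parent already binds repackage to the package phase, so the executions block is optional; it is shown here only to make the goal explicit.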
Project in Eclipse works fine but the .jar file does not
I launch my app in Eclipse and it works fine, and I get the default initialization of the entityManagerFactory as I wish: 2022-11-28 13:32:58.558 INFO 12176 --- [ restartedMain] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default' but when I deploy my app as a runnable jar file, it does not launch. I'm sharing this question after much research; here is my pom (I'm extracting the libraries into the generated jar): <?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.6</version>
    </parent>
    <groupId>ZpectrumApp</groupId>
    <artifactId>ZpectrumApp</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>ZpectrumApp</name>
    <description>End-Degree project for Spring Boot</description>
    <dependencies>
        <dependency>
            <groupId>org.jcodec</groupId>
            <artifactId>jcodec</artifactId>
            <version>0.2.3</version>
        </dependency>
        <dependency>
            <groupId>org.jcodec</groupId>
            <artifactId>jcodec-javase</artifactId>
            <version>0.2.3</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.hibernate.javax.persistence/hibernate-jpa-2.1-api -->
        <dependency>
            <groupId>org.hibernate.javax.persistence</groupId>
            <artifactId>hibernate-jpa-2.1-api</artifactId>
            <version>1.0.2.Final</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jdbc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.json</groupId>
            <artifactId>json</artifactId>
            <version>20180130</version>
            <type>jar</type>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-entitymanager</artifactId>
        </dependency>
    </dependencies>
    <properties>
        <start-class>ZpectrumApplication.Application</start-class>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
I think it is possibly an incompatibility between the Spring Boot version and another dependency, but I'm not certain. I also attach the console trace:
ZpectrumApplication: Cannot create inner bean '(inner bean)#71a9b4c7' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#71a9b4c7': Cannot resolve reference to bean 'entityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'entityManagerFactory' available
[ "You have to build and package the jar with dependencies, add the below to your plugins in the pom.xml, change the main mainClass to yours.\n<plugin>\n <artifactId>maven-assembly-plugin</artifactId>\n <executions>\n <execution>\n <phase>package</phase>\n <goals>\n <goal>single</goal>\n </goals>\n </execution>\n </executions>\n <configuration>\n <archive>\n <manifest>\n <addClasspath>true</addClasspath>\n <mainClass>com.byteworks.epms.DeviceServiceListener</mainClass>\n </manifest>\n </archive>\n <descriptorRefs>\n <descriptorRef>jar-with-dependencies</descriptorRef>\n </descriptorRefs>\n </configuration>\n</plugin>\n\n", "I solve it by deploying the project by command prompt in the next three steps:\n\ndownloading Apache-Maven https://maven.apache.org/download.cgi\nfollowing this tutorial: https://www.youtube.com/watch?v=-ARoNTL90hY\nmodifying the pom.xml with manual configuration -> https://www.baeldung.com/executable-jar-with-maven\n\n" ]
[ 0, -1 ]
[]
[]
[ "deployment", "eclipse", "java", "spring", "spring_boot" ]
stackoverflow_0074572453_deployment_eclipse_java_spring_spring_boot.txt
Q: 500 Internal Server Error While Running Reports in Acumatica The Cash Summary by Branch report is a customized report that we were able to run earlier; the issue has only appeared recently. We face a 500 Internal Server Error when we run the report. We do not get this server error when running it for a single month period (e.g. From Date 11/1/2022, To Date 11/16/2022), but we cannot run it by year (e.g. From Date 1/1/2018, To Date 11/16/2022). Why is this the case, and how can it be solved? (Screenshots omitted.) Please help us check, and let us know of any concerns. A: When building the new report, I would set default values that work, and then be able to use the Preview function. It will execute the report using the default values only during preview. You can then use that to troubleshoot - start with the base report, using the range that crashes on the custom report, and then slowly add your changes. You would then find out what change causes the 500 internal server error. If you have access to the server, you can try using failed request tracing to get additional information on the errors that are not being sent to the web client. Here is a link on how to troubleshoot failed requests: https://learn.microsoft.com/en-us/iis/troubleshoot/using-failed-request-tracing/troubleshooting-failed-requests-using-tracing-in-iis?source=recommendations
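If you do have access to the site's web.config, the failed-request tracing mentioned above can be enabled with a rule scoped to HTTP 500 responses. A minimal sketch - the path and trace areas here are assumptions, so adjust them to your site:
<system.webServer>
  <tracing>
    <traceFailedRequests>
      <add path="*">
        <traceAreas>
          <!-- verbosity and areas are illustrative; trim to what you need -->
          <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
        </traceAreas>
        <failureDefinitions statusCodes="500" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>
Tracing must also be enabled for the site in IIS Manager; the trace logs are written as XML files under %SystemDrive%\inetpub\logs\FailedReqLogFiles by default. For long date ranges such as a full year, the 500 is often a request timeout or memory exhaustion, both of which show up clearly in the trace.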
500 Internal Server Error While Running Reports in Acumatica
The Cash Summary by Branch report is a customized report that we were able to run earlier; the issue has only appeared recently. We face a 500 Internal Server Error when we run the report. We do not get this server error when running it for a single month period (e.g. From Date 11/1/2022, To Date 11/16/2022), but we cannot run it by year (e.g. From Date 1/1/2018, To Date 11/16/2022). Why is this the case, and how can it be solved? (Screenshots omitted.) Please help us check, and let us know of any concerns.
[ "When building the new report, I would set a default values that work, and then be able to use the Preview function. It will execute the report using the default values only during preview.\nYou can then use that to troubleshoot - start with the base report that works with that range that crashes on the custom report, and then slowly add your changes. You would then find out what change causes the 500 internal server error.\nIf you have access to the server, you can try using failed request tracing to get additional information on the errors that are not being sent to the web client. Here is a link on how to troubleshoot failed requests: https://learn.microsoft.com/en-us/iis/troubleshoot/using-failed-request-tracing/troubleshooting-failed-requests-using-tracing-in-iis?source=recommendations\n" ]
[ 0 ]
[]
[]
[ "acumatica", "report" ]
stackoverflow_0074622417_acumatica_report.txt
Q: RecyclerView that does not scroll and shows all items I have a RecyclerView (and some other views) in a ScrollView. Currently the RecyclerView is laid out as very small (it shows 2 items out of 5 that it contains) and it scrolls independently of the ScrollView, which is obviously not great UX. I would like to get the RecyclerView to not scroll and to extend so that all its items are visible. (I know it's stupid to use a RecyclerView in this case. I'm only doing this because somewhere else in the app I need a normal RecyclerView with scrolling etc. but the same kind of content, and I don't want to duplicate code). A: It's pretty simple: set the RecyclerView's height to wrap_content. You might also benefit from disabling nested scrolling on the recycler view, like so:
RecyclerView recycler = (RecyclerView) findViewById(R.id.recycler);
recycler.setNestedScrollingEnabled(false);
A: The solution of setNestedScrollingEnabled(false) isn't as complete as it should be: you need to use NestedScrollView instead of ScrollView, and set focusableInTouchMode="true" on the child of the NestedScrollView. If you insist on using ScrollView, you should also set minHeight on the RecyclerView, and also set overScrollMode="never". In this case, it still isn't a good solution because the minHeight might not be enough in some cases.
Other alternative solutions that you should consider:
Replace the ScrollView&RecyclerView with a single RecyclerView, which has views with an additional view type for what you had in the ScrollView
Use GridLayout or another layout instead.
A: Maybe it is not completely clear at first sight what to do with all these answers.
I just tried them and the working one is:
<android.support.v4.widget.NestedScrollView
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:id="@+id/person_properties"
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="wrap_content">
...
        <android.support.v7.widget.RecyclerView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:overScrollMode="never" />
    </LinearLayout>
</android.support.v4.widget.NestedScrollView>
No need to change anything programmatically.
A: In your activity.xml file
<androidx.core.widget.NestedScrollView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".ActivityName">

    <androidx.recyclerview.widget.RecyclerView
        android:id="@+id/RecyclerView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:nestedScrollingEnabled="false">

    </androidx.recyclerview.widget.RecyclerView>

</androidx.core.widget.NestedScrollView>
In the RecyclerView use android:nestedScrollingEnabled="false" and use NestedScrollView as the parent scroll view.
A: If you are using a RecyclerView inside a ScrollView, then replace the ScrollView with a NestedScrollView and disable nested scrolling on the RecyclerView with android:nestedScrollingEnabled="false" This solved my problem.
A: An easy way is to use this line in your custom adapter, inside the onBindViewHolder method: holder.setIsRecyclable(false);
A: I realised that I use a BottomNavigationView which blocked my recycler view from displaying the last item. I fixed it by adding paddingBottom to it:
<androidx.recyclerview.widget.RecyclerView
    android:id="@+id/recipient_list"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:paddingBottom="70dp"
    app:layout_constraintTop_toBottomOf="@id/view"
    />
A: Also try to play with:
android:overScrollMode
A: You should replace your ScrollView with androidx.core.widget.NestedScrollView with match_parent; it's simple and works fine.
A: Following is the code for disabling scroll in the RecyclerView while showing all the items in your layout. It might work:
public class NoScrollRecycler extends RecyclerView {

    public NoScrollRecycler(Context context){
        super(context);
    }

    public NoScrollRecycler(Context context, AttributeSet attrs){
        super(context, attrs);
    }

    public NoScrollRecycler(Context context, AttributeSet attrs, int style){
        super(context, attrs, style);
    }

    @Override
    public boolean dispatchTouchEvent(MotionEvent ev){

        //Ignore scroll events.
        if(ev.getAction() == MotionEvent.ACTION_MOVE)
            return true;

        //Dispatch event for non-scroll actions, namely clicks!
        return super.dispatchTouchEvent(ev);
    }
}
use it like this:
<com.example.custom.NoScrollRecycler
    android:id="@+id/recyclerView"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="@color/color_white"/>
A: If the parent of the RecyclerView is a ConstraintLayout, try changing it to a RelativeLayout.
A: I also had a recycler view inside a scrollview. Using NestedScrollView, "height=wrap content" on the recycler view, and "nestedScrollingEnabled=false" on the recycler view worked. However I had to go a step further, since my recycler view data and height changed after layout:
recylerview.measure(View.MeasureSpec.makeMeasureSpec(recylerview.width,View.MeasureSpec.EXACTLY), View.MeasureSpec.UNSPECIFIED)

val height = recylerview.measuredHeight
recylerview.layoutParams.height = height
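Putting the most-upvoted advice together, here is a minimal layout sketch that shows all items without independent scrolling; the view ids and the parent LinearLayout are placeholders for whatever surrounds the list in your layout:
<androidx.core.widget.NestedScrollView
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="wrap_content">

        <!-- other views above the list ... -->

        <androidx.recyclerview.widget.RecyclerView
            android:id="@+id/recycler"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:nestedScrollingEnabled="false"
            android:overScrollMode="never" />
    </LinearLayout>
</androidx.core.widget.NestedScrollView>
With wrap_content the layout manager measures all children up front, so this is only sensible for short lists; view recycling is effectively lost.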
RecyclerView that does not scroll and shows all items
I have a RecyclerView (and some other views) in a ScrollView. Currently the RecyclerView is laid out as very small (it shows 2 items out of 5 that it contains) and it scrolls independently of the ScrollView, which is obviously not great UX. I would like to get the RecyclerView to not scroll and to extend so that all its items are visible. (I know it's stupid to use a RecyclerView in this case. I'm only doing this because somewhere else in the app I need a normal RecyclerView with scrolling etc. but the same kind of content, and I don't want to duplicate code).
[ "It’s pretty simple, simply set the RecyclerView’s height to wrap_content.\nYou might also benefit from disabling nested scrolling on the recycler view, like so:\nRecyclerView recycler = (RecyclerView) findViewById(R.id.recycler);\nrecycler.setNestedScrollingEnabled(false);\n\n", "The solution of setNestedScrollingEnabled(false) isn't as full as it should: you need to use NestedScrollView instead of ScrollViewfocusableInTouchMode=\"true\" to the child of the NestedScrollView .\nIf you insist on using ScrollView, you should also set minHeight to the RecyclerView, and also set overScrollMode=\"never\" . In this case, it still isn't a good solution because the minHeight might not be enough in some cases\nOther alternative solutions that you should consider:\n\nReplace the ScrollView&RecyclerView with a single RecyclerView, which has views with additional view type for what you had in the ScrollView\nUse GridLayout or another layout instead.\n\n", "Maybe it is not completely clear at first sight what to do with all these answers.\nI just tried them and the working one is:\n<android.support.v4.widget.NestedScrollView\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\">\n\n <LinearLayout\n android:id=\"@+id/person_properties\"\n android:orientation=\"vertical\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\">\n...\n <android.support.v7.widget.RecyclerView\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:overScrollMode=\"never\" />\n </LinearLayout>\n</android.support.v4.widget.NestedScrollView>\n\nNo need to change anything programmatically.\n", "In your activity.xml file\n<androidx.core.widget.NestedScrollView \n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:app=\"http://schemas.android.com/apk/res-auto\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".ActivityName\">\n\n <androidx.recyclerview.widget.RecyclerView\n android:id=\"@+id/RecyclerView\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:nestedScrollingEnabled=\"false\">\n\n </androidx.recyclerview.widget.RecyclerView>\n\n</androidx.core.widget.NestedScrollView>\n\nIn RecyclerView use android:nestedSrollingEnabled=\"false\" and use NestedScrollView as a parent Scroll View. \n", "If you are using RecyclerView inside ScrollView then Replace ScrollView with NestedScrollView and enable the nested scrolling\nandroid:nestedScrollingEnabled=\"false\"\n\nThis Solved my problem\n", "An easy way is to use in your Custom Adapter, inside the onBindViewHolder method this line: holder.setIsRecyclable(false);\n", "I realised that I use BottomNavigationView which blocked my recycler view from displaying the last item. I fixed it by adding paddingBottom to it:\n <androidx.recyclerview.widget.RecyclerView\n android:id=\"@+id/recipient_list\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:paddingBottom=\"70dp\"\n app:layout_constraintTop_toBottomOf=\"@id/view\"\n />\n\n", "Also try to play with:\nandroid:overScrollMode\n\n", "You should replace your scrollView for androidx.core.widget.NestedScrollView with matchparent, its simple and work fine.\n", "Following is the code for disabling scroll in the recycler-view, and showing all the items in your layout. 
It might work:\npublic class NoScrollRecycler extends RecyclerView {\n\n public NoScrollRecycler(Context context){\n super(context);\n }\n\n public NoScrollRecycler(Context context, AttributeSet attrs){\n super(context, attrs);\n }\n\n public NoScrollRecycler(Context context, AttributeSet attrs, int style){\n super(context, attrs, style);\n }\n\n @Override\n public boolean dispatchTouchEvent(MotionEvent ev){\n\n //Ignore scroll events.\n if(ev.getAction() == MotionEvent.ACTION_MOVE)\n return true;\n\n //Dispatch event for non-scroll actions, namely clicks!\n return super.dispatchTouchEvent(ev);\n }\n}\n\nuse like this way:\n<com.example.custom.NoScrollRecycler\n android:id=\"@+id/recyclerView\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:background=\"@color/color_white\"/>\n\n", "probably parent of recyclerView is a constraintLayout changed it to RelativeLayout\n", "I also had a recycler view inside a scrollview.\nUsing NestedScrollView, \"height=wrap content\" on recycler view, and \"nestedScrollingEnabled=false\" on recycler view worked.\nHowever I had to go a step further since my recycler view data and height changed after layout:\nrecylerview.measure(View.MeasureSpec.makeMeasureSpec(recylerview.width,View.MeasureSpec.EXACTLY), View.MeasureSpec.UNSPECIFIED)\n\nval height = recylerview.measuredHeight\nrecylerview.layoutParams.height = height\n\n" ]
[ 100, 36, 29, 13, 8, 4, 3, 2, 2, 1, 0, 0 ]
[]
[]
[ "android", "android_recyclerview" ]
stackoverflow_0038192841_android_android_recyclerview.txt
Q: how to print month on calendar with list comprehension in python I basically need to print a calendar for a month using a list comprehension. I can't figure out how to make this work; if anyone can help, it'd be greatly appreciated. I'm not great with list comprehensions, so I'm not sure where to even start with this. A: You can use the calendar library to display a calendar in a variety of formats
>>> calendar.monthcalendar(2022, 12)
[[0, 0, 0, 1, 2, 3, 4],
 [5, 6, 7, 8, 9, 10, 11],
 [12, 13, 14, 15, 16, 17, 18],
 [19, 20, 21, 22, 23, 24, 25],
 [26, 27, 28, 29, 30, 31, 0]]

>>> calendar.TextCalendar().prmonth(2022, 12)
 December 2022
Mo Tu We Th Fr Sa Su
 1 2 3 4
 5 6 7 8 9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31
To produce the format in your example you can use a list comprehension to convert the nested list of int to your padded strings
>>> [['{:02d}'.format(day) if day != 0 else '' for day in week] for week in calendar.monthcalendar(2022, 12)]
[['', '', '', '01', '02', '03', '04'],
 ['05', '06', '07', '08', '09', '10', '11'],
 ['12', '13', '14', '15', '16', '17', '18'],
 ['19', '20', '21', '22', '23', '24', '25'],
 ['26', '27', '28', '29', '30', '31', '']]
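To go one step further and actually print the month as the usual grid with a list comprehension, the nested list above can be joined row by row. A short sketch - the year and month values are just examples:
import calendar

year, month = 2022, 12  # example values
weeks = [['{:02d}'.format(day) if day else '  ' for day in week]
         for week in calendar.monthcalendar(year, month)]
print('\n'.join(' '.join(week) for week in weeks))
calendar.monthcalendar uses 0 for days that belong to the neighbouring months, which is why the conditional expression swaps them for two-space blanks to keep the columns aligned.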
how to print month on calendar with list comprehension in python
I basically need to print a calendar for a month using a list comprehension. I can't figure out how to make this work; if anyone can help, it'd be greatly appreciated. I'm not great with list comprehensions, so I'm not sure where to even start with this.
[ "You can use the calendar library to display a calendar in a variety of formats\n>>> calendar.monthcalendar(2022, 12)\n[[0, 0, 0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9, 10, 11],\n [12, 13, 14, 15, 16, 17, 18],\n [19, 20, 21, 22, 23, 24, 25],\n [26, 27, 28, 29, 30, 31, 0]]\n\n>>> calendar.TextCalendar().prmonth(2022, 12)\n December 2022\nMo Tu We Th Fr Sa Su\n 1 2 3 4\n 5 6 7 8 9 10 11\n12 13 14 15 16 17 18\n19 20 21 22 23 24 25\n26 27 28 29 30 31\n\nTo produce the format in your example you can use a list comprehension to convert the nested list of int to your padded strings\n>>> [['{:02d}'.format(day) if day != 0 else '' for day in week] for week in calendar.monthcalendar(2022, 12)]\n[['', '', '', '01', '02', '03', '04'],\n ['05', '06', '07', '08', '09', '10', '11'],\n ['12', '13', '14', '15', '16', '17', '18'],\n ['19', '20', '21', '22', '23', '24', '25'],\n ['26', '27', '28', '29', '30', '31', '']]\n\n" ]
[ 1 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0074659961_list_comprehension_python.txt
Q: How to change the font of Axis label in Pyqtgraph I have a custom font. I am able to set this font on the title of the graph; I need help setting the axis label font (the left and bottom axis labels). I am able to set the font on the title of the graph like this
graphWidget = pyqtgraph.PlotWidget()
graph = graphWidget.getPlotItem()
graph.titleLabel.item.setFont(font)
I would like to know if there's any similar way to set the font for axis labels. A: To set a custom QFont on an axis label, you have to call setFont on the label of each axis.
Here is a short example, which changes the font family to Times for the title and the bottom and left axes.
import sys

import pyqtgraph
from PyQt5.QtGui import QFont
from PyQt5.QtWidgets import QApplication

app = QApplication(sys.argv)

# Define your font
my_font = QFont("Times", 10, QFont.Bold)

graphWidget = pyqtgraph.PlotWidget()
graphWidget.setTitle("My plot")

# Set label for both axes
graphWidget.setLabel('bottom', "My x axis label")
graphWidget.setLabel('left', "My y axis label")

# Set your custom font for both axes
graphWidget.getAxis("bottom").label.setFont(my_font)
graphWidget.getAxis("left").label.setFont(my_font)

graph = graphWidget.getPlotItem()
# Set font for plot title
graph.titleLabel.item.setFont(my_font)

graphWidget.show()
app.exec()
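If the tick numbers should match the labels as well, AxisItem also exposes setTickFont. A short addition on top of the answer above, reusing the same my_font object:
graphWidget.getAxis("bottom").setTickFont(my_font)
graphWidget.getAxis("left").setTickFont(my_font)
Note that reaching into axis.label touches an internal attribute of AxisItem rather than documented API, so it may need adjusting across pyqtgraph versions; setTickFont is the supported call for the tick text.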
How to change the font of Axis label in Pyqtgraph
I have a custom font. I am able to set this font on the title of the graph; I need help setting the axis label font (the left and bottom axis labels). I am able to set the font on the title of the graph like this
graphWidget = pyqtgraph.PlotWidget()
graph = graphWidget.getPlotItem()
graph.titleLabel.item.setFont(font)
I would like to know if there's any similar way to set the font for axis labels.
[ "To set custom QFont to axis label, you have to setFont for label of each axis.\nHere is a short example, which changes font family to Times for title, bottom and left axis.\nimport sys\n\nimport pyqtgraph\nfrom PyQt5.QtGui import QFont\nfrom PyQt5.QtWidgets import QApplication\n\napp = QApplication(sys.argv)\n\n# Define your font\nmy_font = QFont(\"Times\", 10, QFont.Bold)\n\ngraphWidget = pyqtgraph.PlotWidget()\ngraphWidget.setTitle(\"My plot\")\n\n# Set label for both axes\ngraphWidget.setLabel('bottom', \"My x axis label\")\ngraphWidget.setLabel('left', \"My y axis label\")\n\n# Set your custom font for both axes\ngraphWidget.getAxis(\"bottom\").label.setFont(my_font)\ngraphWidget.getAxis(\"left\").label.setFont(my_font)\n\ngraph = graphWidget.getPlotItem()\n# Set font for plot title\ngraph.titleLabel.item.setFont(my_font)\n\ngraphWidget.show()\napp.exec()\n\n" ]
[ 0 ]
[]
[]
[ "pyqt", "pyqt5", "pyqtgraph", "pyside2", "python" ]
stackoverflow_0074628737_pyqt_pyqt5_pyqtgraph_pyside2_python.txt
Q: In GEKKO, I got no solution found with error code -2 and EXIT: restoration failed. What would be the probable reason for the error? I tried solving an equation with 11 intermediates to fit a constrained parameter. Every intermediate and the solution depend on that single parameter, but I'm getting error code -2. I don't know what error code -2 means; ultimately it shows "solution not found". I used IMODE=2 for the operation. A: The error code -2 comes from the IPOPT solver for problems where the solver could not solve the equations (achieve equation feasibility), let alone optimize the objective. Here are a couple examples that produce an error code:
Unbounded Solution (Error Code -1)
m = GEKKO()
y = m.Var(value=2,lb=0)
m.Equation(y**2>=1)
m.Maximize(y)
m.solve(disp=True)

EXIT: Maximum Number of Iterations Exceeded.
 
 An error occured.
 The error code is -1

Infeasible Solution (Error Code 2)
m = GEKKO()
y = m.Var(value=2,lb=2)
m.Equation(y**2<=1)
m.Maximize(y)
m.solve(disp=True)

EXIT: Converged to a point of local infeasibility. Problem may be infeasible.
 
 An error occured.
 The error code is 2

Here is a list of error codes from IPOPT from the documentation.
✅ Solve_Succeeded:
Console Message: EXIT: Optimal Solution Found.
This message indicates that Ipopt found a (locally) optimal point within the desired tolerances.
✅❌ Solved_To_Acceptable_Level:
Console Message: EXIT: Solved To Acceptable Level.
This indicates that the algorithm did not converge to the "desired" tolerances, but that it was able to obtain a point satisfying the "acceptable" tolerance level as specified by the acceptable_tol options. This may happen if the desired tolerances are too small for the current problem.
✅❌ Feasible_Point_Found:
Console Message: EXIT: Feasible point for square problem found.
This message is printed if the problem is "square" (i.e., it has as many equality constraints as free variables) and Ipopt found a point that is feasible w.r.t. constr_viol_tol. It may, however, not be feasible w.r.t. tol.
❌ Infeasible_Problem_Detected:
Console Message: EXIT: Converged to a point of local infeasibility. Problem may be infeasible.
The restoration phase converged to a point that is a minimizer for the constraint violation (in the ℓ1-norm), but is not feasible for the original problem. This indicates that the problem may be infeasible (or at least that the algorithm is stuck at a locally infeasible point). The returned point (the minimizer of the constraint violation) might help you to find which constraint is causing the problem. If you believe that the NLP is feasible, it might help to start the optimization from a different point.
❌ Search_Direction_Becomes_Too_Small:
Console Message: EXIT: Search Direction is becoming Too Small.
This indicates that Ipopt is calculating very small step sizes and is making very little progress. This could happen if the problem has been solved to the best numerical accuracy possible given the current scaling.
❌ Diverging_Iterates:
Console Message: EXIT: Iterates diverging; problem might be unbounded.
This message is printed if the max-norm of the iterates becomes larger than the value of the option diverging_iterates_tol. This can happen if the problem is unbounded below and the iterates are diverging.
❌ User_Requested_Stop:
Console Message: EXIT: Stopping optimization at current point as requested by user.
This message is printed if the user call-back method Ipopt::TNLP::intermediate_callback returned false.
❌ Maximum_Iterations_Exceeded:
Console Message: EXIT: Maximum Number of Iterations Exceeded.
This indicates that Ipopt has exceeded the maximum number of iterations as specified by the option max_iter.
❌ Maximum_WallTime_Exceeded:
Console Message: EXIT: Maximum wallclock time exceeded.
This indicates that Ipopt has exceeded the maximum number of wallclock seconds as specified by the option max_wall_time.
❌ Maximum_CpuTime_Exceeded:
Console Message: EXIT: Maximum CPU time exceeded.
This indicates that Ipopt has exceeded the maximum number of CPU seconds as specified by the option max_cpu_time.
❌ Restoration_Failed:
Console Message: EXIT: Restoration Failed!
This indicates that the restoration phase failed to find a feasible point that was acceptable to the filter line search for the original problem. This could happen if the problem is highly degenerate, does not satisfy the constraint qualification, or if your NLP code provides incorrect derivative information.
❌ Error_In_Step_Computation:
Console Output: EXIT: Error in step computation!
This message is printed if Ipopt is unable to compute a step towards a new iterate and the current iterate is not acceptable for the specified tolerances.
A possible reason is that a search direction could not be computed despite several attempts to modify the iteration matrix. Usually, the value of the regularization parameter then becomes too large. One situation where this can happen is when values in the Hessian are invalid (NaN or Inf). You can check whether this is true by using the option check_derivatives_for_naninf.
Another reason is that the feasibility restoration phase could not be activated because the current iterate is not infeasible. Reasons for this again include that the problem is highly degenerate, badly scaled, does not satisfy the constraint qualification, or that your NLP code provides incorrect derivative information. Before Ipopt 3.14, this resulted in a Restoration_Failed status code with message "Restoration phase is called at almost feasible point..."
❌ Invalid_Option:
Console Message: (details about the particular error will be output to the console)
This indicates that there was some problem specifying the options. See the specific message for details. This return code is also used when a linear solver is chosen that was not linked in and a library that contains this linear solver could not be loaded.
❌ Not_Enough_Degrees_Of_Freedom:
Console Message: EXIT: Problem has too few degrees of freedom.
This indicates that your problem, as specified, has too few degrees of freedom. This can happen if you have too many equality constraints, or if you fix too many variables (Ipopt removes fixed variables by default, see also the option fixed_variable_treatment).
❌ Invalid_Problem_Definition:
Console Message: EXIT: Problem has inconsistent variable bounds or constraint sides.
This indicates that either there was an exception of some sort when building the IpoptProblem structure in the C or Fortran interface or bounds specified for variables or constraints were inconsistent (lower bound larger than upper bound, left-hand-side larger than right-hand-side). Likely there is an error in your model or the main routine.
❌ Unrecoverable_Exception:
Console Message: (details about the particular error will be output to the console)
This indicates that Ipopt has thrown an exception that does not have an internal return code. See the specific message for details.
❌ NonIpopt_Exception_Thrown: Console Message: Unknown Exception caught in Ipopt An unknown exception was caught in Ipopt. This exception could have originated from your model or any linked in third party code. See also Ipopt::IpoptApplication::RethrowNonIpoptException. ❌ Insufficient_Memory: Console Message: EXIT: Not enough memory. An error occurred while trying to allocate memory. The problem may be too large for your current memory and swap configuration. ❌ Console Message: EXIT: Integer type too small for required memory. A linear solver requires more working space than what can be communicated to it via the used integer type. ❌ Internal_Error: Console: EXIT: INTERNAL ERROR: Unknown SolverReturn value - Notify IPOPT Authors. An unknown internal error has occurred. Please notify the authors of Ipopt via the mailing list.
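When the solver exits this way, a practical first step in GEKKO is to catch the exception and inspect the run directory, where infeasibilities.txt often points at the offending equation. A minimal sketch; the parameter and intermediate here are stand-ins for your actual model, and the equation is deliberately infeasible so the except branch runs:
from gekko import GEKKO

m = GEKKO(remote=False)          # local solve so the run folder is on disk
p = m.FV(value=1, lb=0, ub=10)   # stand-in for the constrained parameter
p.STATUS = 1
y = m.Intermediate(p**2)         # stand-in for one of the 11 intermediates
m.Equation(y == -4)              # p**2 can never be -4, so this fails
try:
    m.solve(disp=True)
except Exception as e:
    print('Solver failed:', e)
    m.open_folder()              # inspect infeasibilities.txt in the run folder
The same pattern applies with IMODE=2. Loosening bounds, scaling the 11 intermediates to similar magnitudes, or switching solvers with m.options.SOLVER = 1 (APOPT) or 3 (IPOPT) are the usual next experiments.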
In GEKKO, I got no solution found with error code -2 and EXIT: restoration failed. What would be the probable reason for the error?
I tried solving an equation with 11 intermediates to fit a constrained parameter. Every intermediate and the solution depend on that single parameter, but I'm getting error code -2. I don't know what error code -2 means; ultimately it shows "solution not found". I used IMODE=2 for the operation.
[ "The error code -2 comes from the IPOPT solver for problems where the solver could not solve the equations (achieve equation feasibility), let alone optimize the objective. Here are a couple examples that produce an error code:\nUnbounded Solution (Error Code -1)\nm = GEKKO()\ny = m.Var(value=2,lb=0)\nm.Equation(y**2>=1)\nm.Maximize(y)\nm.solve(disp=True)\n\nEXIT: Maximum Number of Iterations Exceeded.\n \n An error occured.\n The error code is -1\n\nInfeasible Solution (Error Code 2)\nm = GEKKO()\ny = m.Var(value=2,lb=2)\nm.Equation(y**2<=1)\nm.Maximize(y)\nm.solve(disp=True)\n\nEXIT: Converged to a point of local infeasibility. Problem may be infeasible.\n \n An error occured.\n The error code is 2\n\nHere is a list of error codes from IPOPT from the documentation.\n✅ Solve_Succeeded:\nConsole Message: EXIT: Optimal Solution Found.\nThis message indicates that Ipopt found a (locally) optimal point within the desired tolerances.\n✅❌ Solved_To_Acceptable_Level:\nConsole Message: EXIT: Solved To Acceptable Level.\nThis indicates that the algorithm did not converge to the \"desired\" tolerances, but that it was able to obtain a point satisfying the \"acceptable\" tolerance level as specified by the acceptable_tol options. This may happen if the desired tolerances are too small for the current problem.\n✅❌ Feasible_Point_Found:\nConsole Message: EXIT: Feasible point for square problem found.\nThis message is printed if the problem is \"square\" (i.e., it has as many equality constraints as free variables) and Ipopt found a point that is feasible w.r.t. constr_viol_tol. It may, however, not be feasible w.r.t. tol.\n❌ Infeasible_Problem_Detected:\nConsole Message: EXIT: Converged to a point of local infeasibility. Problem may be infeasible.\nThe restoration phase converged to a point that is a minimizer for the constraint violation (in the ℓ1-norm), but is not feasible for the original problem. This indicates that the problem may be infeasible (or at least that the algorithm is stuck at a locally infeasible point). The returned point (the minimizer of the constraint violation) might help you to find which constraint is causing the problem. If you believe that the NLP is feasible, it might help to start the optimization from a different point.\n❌ Search_Direction_Becomes_Too_Small:\nConsole Message: EXIT: Search Direction is becoming Too Small.\nThis indicates that Ipopt is calculating very small step sizes and is making very little progress. This could happen if the problem has been solved to the best numerical accuracy possible given the current scaling.\n❌ Diverging_Iterates:\nConsole Message: EXIT: Iterates divering; problem might be unbounded.\nThis message is printed if the max-norm of the iterates becomes larger than the value of the option diverging_iterates_tol. 
This can happen if the problem is unbounded below and the iterates are diverging.\n❌ User_Requested_Stop:\nConsole Message: EXIT: Stopping optimization at current point as requested by user.\nThis message is printed if the user call-back method Ipopt::TNLP::intermediate_callback returned false.\n❌ Maximum_Iterations_Exceeded:\nConsole Message: EXIT: Maximum Number of Iterations Exceeded.\nThis indicates that Ipopt has exceeded the maximum number of iterations as specified by the option max_iter.\n❌ Maximum_WallTime_Exceeded:\nConsole Message: EXIT: Maximum wallclock time exceeded.\nThis indicates that Ipopt has exceeded the maximum number of wallclock seconds as specified by the option max_wall_time.\n❌ Maximum_CpuTime_Exceeded:\nConsole Message: EXIT: Maximum CPU time exceeded.\nThis indicates that Ipopt has exceeded the maximum number of CPU seconds as specified by the option max_cpu_time.\n❌ Restoration_Failed:\nConsole Message: EXIT: Restoration Failed!\nThis indicates that the restoration phase failed to find a feasible point that was acceptable to the filter line search for the original problem. This could happen if the problem is highly degenerate, does not satisfy the constraint qualification, or if your NLP code provides incorrect derivative information.\n❌ Error_In_Step_Computation:\nConsole Output: EXIT: Error in step computation!\nThis message is printed if Ipopt is unable to compute a step towards a new iterate and the current iterate is not acceptable for the specified tolerances.\nA possible reason is that a search direction could not be computed despite several attempts to modify the iteration matrix. Usually, the value of the regularization parameter then becomes too large. One situation where this can happen is when values in the Hessian are invalid (NaN or Inf). You can check whether this is true by using the option check_derivatives_for_naninf.\nAnother reason is that the feasibility restoration phase could not be activated because the current iterate is not infeasible. Reasons for this again include that the problem is highly degenerate, badly scaled, does not satisfy the constraint qualification, or that your NLP code provides incorrect derivative information. Before Ipopt 3.14, this resulted in a Restoration_Failed status code with message \"Restoration phase is called at almost feasible point...\"\n❌ Invalid_Option:\nConsole Message: (details about the particular error will be output to the console)\nThis indicates that there was some problem specifying the options. See the specific message for details. This return code is also used when a linear solver is choosen that was not linked in and a library that contains this linear solver could not be loaded.\n❌ Not_Enough_Degrees_Of_Freedom:\nConsole Message: EXIT: Problem has too few degrees of freedom.\nThis indicates that your problem, as specified, has too few degrees of freedom. This can happen if you have too many equality constraints, or if you fix too many variables (Ipopt removes fixed variables by default, see also the option fixed_variable_treatment).\n❌ Invalid_Problem_Definition:\nConsole Message: EXIT: Problem has inconsistent variable bounds or constraint sides.\nThis indicates that either there was an exception of some sort when building the IpoptProblem structure in the C or Fortran interface or bounds specified for variables or constraints were inconsistent (lower bound larger than upper bound, left-hand-side larger than right-hand-side). 
Likely there is an error in your model or the main routine.\n❌ Unrecoverable_Exception:\nConsole Message: (details about the particular error will be output to the console)\nThis indicates that Ipopt has thrown an exception that does not have an internal return code. See the specific message for details.\n❌ NonIpopt_Exception_Thrown:\nConsole Message: Unknown Exception caught in Ipopt\nAn unknown exception was caught in Ipopt. This exception could have originated from your model or any linked in third party code. See also Ipopt::IpoptApplication::RethrowNonIpoptException.\n❌ Insufficient_Memory:\nConsole Message: EXIT: Not enough memory.\nAn error occurred while trying to allocate memory. The problem may be too large for your current memory and swap configuration.\n❌ Console Message: EXIT: Integer type too small for required memory.\nA linear solver requires more working space than what can be communicated to it via the used integer type.\n❌ Internal_Error:\nConsole: EXIT: INTERNAL ERROR: Unknown SolverReturn value - Notify IPOPT Authors.\nAn unknown internal error has occurred. Please notify the authors of Ipopt via the mailing list.\n" ]
[ 2 ]
[]
[]
[ "gekko" ]
stackoverflow_0074588301_gekko.txt
Q: Unable to list files in google drive using python Not sure if this has to do with my code or something on the Google side; however, I'm able to push files to Drive, but for some reason I cannot list the file/folder metadata inside a folder. Here is the code I'm using:
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'creds.json'

credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
service = build('drive', 'v3', credentials=credentials)

topFolderId = '0AAYXadsMHp8IUk9PVA'

items = []
pageToken = ""
while pageToken is not None:
    response = service.files().list(q="'" + topFolderId + "' in parents", pageSize=1000, pageToken=pageToken, fields="nextPageToken, files(id, name)").execute()
    items.extend(response.get('files', []))
    pageToken = response.get('nextPageToken')
Any ideas here? I don't think it's permissions related, as I'm able to put files in Drive, just not list them. A: I think you forgot .execute()
try:
    service = build('drive', 'v3', credentials=creds)

    # Call the Drive v3 API
    results = service.files().list(q="'" + topFolderId + "' in parents",
        pageSize=10, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])

    if not items:
        print('No files found.')
        return
    print('Files:')
    for item in items:
        print(u'{0} ({1})'.format(item['name'], item['id']))
except HttpError as error:
    # TODO(developer) - Handle errors from drive API.
    print(f'An error occurred: {error}')
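For reference, a complete pagination sketch with the v3 API; the folder id and key file are placeholders. Two things worth checking in this situation: a service account only sees files and folders that have been shared with its client_email, even when uploads into your Drive succeed; and ids beginning with 0A are typically shared drive ids rather than folder ids, in which case files().list() also needs the shared-drive parameters shown at the end.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive']
creds = service_account.Credentials.from_service_account_file('creds.json', scopes=SCOPES)
service = build('drive', 'v3', credentials=creds)

folder_id = 'YOUR_FOLDER_ID'  # placeholder
items, page_token = [], None
while True:
    resp = service.files().list(
        q=f"'{folder_id}' in parents and trashed = false",
        pageSize=1000,
        pageToken=page_token,
        fields='nextPageToken, files(id, name)').execute()
    items.extend(resp.get('files', []))
    page_token = resp.get('nextPageToken')
    if page_token is None:
        break
print(f'{len(items)} files found')
For a shared drive, add supportsAllDrives=True and includeItemsFromAllDrives=True to the list call (and corpora='drive' with driveId set, to list the drive root).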
Unable to list files in google drive using python
Not sure if this has to do with my code or something on the Google side; however, I'm able to push files to Drive, but for some reason I cannot list the file/folder metadata inside a folder. Here is the code I'm using:
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'creds.json'

credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
service = build('drive', 'v3', credentials=credentials)

topFolderId = '0AAYXadsMHp8IUk9PVA'

items = []
pageToken = ""
while pageToken is not None:
    response = service.files().list(q="'" + topFolderId + "' in parents", pageSize=1000, pageToken=pageToken, fields="nextPageToken, files(id, name)").execute()
    items.extend(response.get('files', []))
    pageToken = response.get('nextPageToken')
Any ideas here? I don't think it's permissions related, as I'm able to put files in Drive, just not list them.
[ "I think you forgot .execute()\ntry:\n service = build('drive', 'v3', credentials=creds)\n\n # Call the Drive v3 API\n results = service.files().list(q=\"'\" + topFolderId + \"' in parents\",\n pageSize=10, fields=\"nextPageToken, files(id, name)\").execute()\n items = results.get('files', [])\n\n if not items:\n print('No files found.')\n return\n print('Files:')\n for item in items:\n print(u'{0} ({1})'.format(item['name'], item['id']))\nexcept HttpError as error:\n # TODO(developer) - Handle errors from drive API.\n print(f'An error occurred: {error}')\n\n" ]
[ 1 ]
[]
[]
[ "google_api", "google_api_python_client", "google_drive_api", "python", "python_3.x" ]
stackoverflow_0074659022_google_api_google_api_python_client_google_drive_api_python_python_3.x.txt
Q: Is there a way to change the color of the left pane that has the line numbers of Ace-Editor in React? I have already tried setting the background color using the style prop provided by AceEditor but that only works for the body. How can I change the background color of the left pane that has the line numbers to transparent too? <AceEditor mode="javascript" theme="dreamweaver" // onChange={onChange} style={{ backgroundColor: "transparent", fontSize: "1.2rem", color: "white", }} name="UNIQUE_ID_OF_DIV" editorProps={{ $blockScrolling: true }} value={`function onLoad(editor) { console.log("i've loaded"); } `} /> A: In a non-React-specific way, you could change the line number panel ("gutter") background with CSS like so .ace-tm .ace_gutter { background: #F8F8F8; }
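To scope the same idea to a React component, note first that the gutter class is theme-specific - for the dreamweaver theme used here it is .ace-dreamweaver .ace_gutter (inspect the DOM if unsure). One hedged approach, assuming a transparent-gutter class name of your own choosing:
/* in a CSS file imported by the component */
.transparent-gutter .ace_gutter {
  background: transparent;
}
Then pass className="transparent-gutter" to the AceEditor component; react-ace forwards the className to the editor's root element in recent versions, and if yours does not, wrapping the editor in a div with that class works the same way.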
Is there a way to change the color of the left pane that has the line numbers of Ace-Editor in React?
I have already tried setting the background color using the style prop provided by AceEditor but that only works for the body. How can I change the background color of the left pane that has the line numbers to transparent too? <AceEditor mode="javascript" theme="dreamweaver" // onChange={onChange} style={{ backgroundColor: "transparent", fontSize: "1.2rem", color: "white", }} name="UNIQUE_ID_OF_DIV" editorProps={{ $blockScrolling: true }} value={`function onLoad(editor) { console.log("i've loaded"); } `} />
[ "In a non-React-specific way, you could change the line number panel (\"gutter\") background with CSS like so\n.ace-tm .ace_gutter {\n background: #F8F8F8;\n}\n\n" ]
[ 0 ]
[]
[]
[ "ace_editor", "reactjs" ]
stackoverflow_0073351676_ace_editor_reactjs.txt