Dataset columns:

content — string, 86 to 88.9k characters
title — string, 0 to 150 characters
question — string, 1 to 35.8k characters
answers — list
answers_scores — list
non_answers — list
non_answers_scores — list
tags — list
name — string, 30 to 130 characters
Q: Foundation models vs transfer learning

What is the difference between the idea of transfer learning and applying foundation models? As much as I understand, both methods use 'knowledge' gained from training on a large amount of data to solve an unseen task. For example, a model can learn to understand English text and then be adjusted to write summaries.

A: Transfer learning and applying foundation models are similar in that they both involve using knowledge gained from training a model on a large dataset to solve a new, related task. However, there are some key differences between the two concepts.

Transfer learning involves taking a pre-trained model that has already been trained on a large dataset and using it as a starting point to train a new model on a different, but related, dataset. For example, a model that has been trained to recognize objects in images could be used as the starting point to train a new model to classify medical images. By starting with a pre-trained model, transfer learning can save time and resources because the new model doesn't have to be trained from scratch.

On the other hand, applying foundation models involves using a pre-existing model as a component of a larger system to solve a new task. In this case, the pre-existing model is not necessarily trained on a large dataset, and it is not necessarily the starting point for training a new model. Instead, the pre-existing model is used as a building block to construct a more complex system that can solve the new task. For example, a foundation model that has been trained to recognize speech could be used as part of a larger system to transcribe audio recordings.

In summary, transfer learning involves using a pre-trained model as the starting point to train a new model on a different dataset, while applying foundation models involves using a pre-existing model as a component of a larger system to solve a new task. Both approaches can help save time and resources by leveraging existing knowledge, but they are used in slightly different ways.

A: Applying foundation models* is just an example of transfer learning.

Transfer learning refers to machine learning methods that "transfer" knowledge from a source domain to a target domain. Here, domain can be interpreted in many ways: genre, language, task, etc. So transfer learning is very broad, as it doesn't specify e.g. the form of the source domain knowledge, or whether both the source and the target domain are accessible at training time. Also, transfer learning was studied long before the era of foundation models. Applying a foundation model is only one instance of transfer learning, where:

the source domain knowledge is represented in the form of a pretrained model;
domain is interpreted as task; and
if fine-tuning on the target domain is performed: the source domain data may not be accessible anymore, and the target domain has labeled data.

The list may be incomplete because there are many aspects by which we can categorize transfer learning. Some examples of transfer learning that don't use foundation models include multi-task learning, cross-lingual learning via e.g. cross-lingual embeddings, domain-adversarial training, and so on. I recommend reading Chapter 3 of the thesis by Sebastian Ruder for an overview of transfer learning in NLP.

*) There are controversies surrounding the term foundation model in NLP. At the moment, it is almost exclusively used by Stanford researchers; others in the NLP community don't use it that much. While most people would be familiar with the term, I suggest using pretrained model for now.
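A minimal, hedged sketch of the pretrained-model flavour of transfer learning described above: fine-tuning an ImageNet-pretrained ResNet for a hypothetical 5-class medical-image task. The class count and optimizer settings are illustrative assumptions, not part of either answer; it assumes torch and torchvision are installed.

import torch
import torch.nn as nn
from torchvision import models

# Load a model pretrained on a large source dataset (ImageNet)
model = models.resnet18(pretrained=True)

# Freeze the pretrained backbone so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new target task (5 classes, hypothetical)
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are handed to the optimizer
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

From here, a standard training loop over labeled target-domain data completes the "fine-tuning on the target domain" case listed in the second answer.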
Foundation models vs transfer learning
What is the difference between the idea of transfer learning and applying foundation models? As much as I understand, both methods use 'knowledge' gained from training on a large amount of data to solve an unseen task. For example, a model can learn to understand English text and then be adjusted to write summaries.
[ "Transfer learning and applying foundation models are similar in that they both involve using knowledge gained from training a model on a large dataset to solve a new, related task. However, there are some key differences between the two concepts.\nTransfer learning involves taking a pre-trained model that has already been trained on a large dataset and using it as a starting point to train a new model on a different, but related, dataset. For example, a model that has been trained to recognize objects in images could be used as the starting point to train a new model to classify medical images. By starting with a pre-trained model, transfer learning can save time and resources because the new model doesn't have to be trained from scratch.\nOn the other hand, applying foundation models involves using a pre-existing model as a component of a larger system to solve a new task. In this case, the pre-existing model is not necessarily trained on a large dataset, and it is not necessarily the starting point for training a new model. Instead, the pre-existing model is used as a building block to construct a more complex system that can solve the new task. For example, a foundation model that has been trained to recognize speech could be used as part of a larger system to transcribe audio recordings.\nIn summary, transfer learning involves using a pre-trained model as the starting point to train a new model on a different dataset, while applying foundation models involves using a pre-existing model as a component of a larger system to solve a new task. Both approaches can help save time and resources by leveraging existing knowledge, but they are used in slightly different ways.\n", "Applying foundation models* is just an example of transfer learning.\nTransfer learning refers to machine learning methods that \"transfer\" knowledge from a source domain to a target domain. Here, domain can be interpreted in many ways: genre, language, task, etc. So transfer learning is very broad as it doesn't specify e.g., the form of the source domain knowledge, whether both the source and the target domain are accessible at training time, etc. Also, transfer learning has been studied long before the era of foundation models. Applying a foundation model is only one instance of transfer learning where\n\nthe source domain knowledge is represented in the form of a pretrained model;\ndomain is interpreted as task, and;\nif fine-tuning on the target domain is performed: the source domain data may not be accessible anymore, and the target domain has labeled data.\n\nThe list may be incomplete because there are many aspects based on which we can categorize transfer learning. Some examples of transfer learning that doesn't use foundation models include multi-task learning, cross-lingual learning via e.g., cross-lingual embedding, domain-adversarial training, and so on. I recommend reading Chapter 3 of the thesis by Sebastian Ruder for an overview of transfer learning in NLP.\n*) There are controversies surrounding the term foundation model in NLP. At the moment, it is almost exclusively used by Stanford researchers; others in the NLP community don't use it that much. While most people would be familiar with the term, I suggest using pretrained model for now.\n" ]
[ 1, 0 ]
[]
[]
[ "artificial_intelligence", "machine_learning", "nlp", "transfer_learning" ]
stackoverflow_0074661602_artificial_intelligence_machine_learning_nlp_transfer_learning.txt
Q: if conditional doesn't fulfill its condition in my python code

thought = input("What is the answer to the great question of life, universe and everything? ").lower().strip()
if thought == "42" or "forty two" or "forty-two":
    print("Yes")
else:
    print("No")

If the input was 42, forty two, or forty-two the answer was supposed to be "Yes" and otherwise "No". But in this case the answer is always "Yes".

A: Your if statement is wrong. What's happening in your code is that first you compare the input to "42", then you check just a string, and then again just another string. Meaning, let's say you gave an input of "Hello, World!"; then the if statement will work like this:

if thought == "42" -> False
if "forty two" -> True
if "forty-two" -> True

Because you use 'or', only one of those three expressions needs to be true for the program to execute the if branch. To solve this you can compare each result with the input like this:

if thought == "42" or thought == "forty two" or thought == "forty-two":
    print("Yes")

Or you can compare it to a list of results like this:

if thought in ["42", "forty two", "forty-two"]:
    print("Yes")
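A short, runnable demonstration of the truthiness point made in the answer; any non-empty string is truthy on its own, which is why the original condition always takes the "Yes" branch:

# Non-empty strings are truthy, so the bare 'or' operands are always true
print(bool("forty two"), bool("forty-two"))  # True True

thought = "hello"

# Buggy form: parsed as (thought == "42") or "forty two" or "forty-two"
print("Yes" if thought == "42" or "forty two" or "forty-two" else "No")  # Yes

# Fixed form: membership test against the accepted spellings
print("Yes" if thought in ("42", "forty two", "forty-two") else "No")    # No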
if conditional doesn't fulfill its condition in my python code
thought = input("What is the answer to the great question of life, universe and everything? ").lower().strip() if thought == "42" or "forty two" or "forty-two": print("Yes") else: print("No") thought = input("What is the answer to the great question of life, universe and everything? ").lower().strip() if thought == "42" or "forty two" or "forty-two": print("Yes") else: print("No") ... If the input was 42, forty two, or forty-two the answer was supposed to be "Yes" and otherwise "No". But in this case the answer is always "Yes".
[ "Your if statement is wrong. What's happening in your code is that first you check the input to 42 then just check a string and then again check just another string.\nMeaning,\nlet's say you gave an input of \"Hello, World!\"\nthen the if statement will work like that:\nif thought == 42 -> False\nif \"forty two\" -> True\nif \"forty-two\" - > True.\n\nBecause you use 'or' it means that only one of those three statements should be true for the program to execute the if statement.\nto solve this you can compare each result with the input like that:\nif thought == \"42\" or thought == \"forty two\" or thought == \"forty-two\":\n print(\"Yes\")\n\nOr you can compare it to a list of results like that:\nif thought in [\"42\", \"forty two\", \"forty-two\"]:\n print(\"Yes\")\n\n" ]
[ 0 ]
[]
[]
[ "conditional_statements", "if_statement", "python_3.x" ]
stackoverflow_0074664492_conditional_statements_if_statement_python_3.x.txt
Q: Customize initial project template for Flutter

I am trying to customize the starter template for a Flutter app. For projects that I work on I follow a very specific structure with my Themes, Blocs, Listeners, etc. I would love it if I could customize the starter template for Flutter so that I have everything laid out and ready to use after doing something like flutter create.

I am aware of where the template is located. I was able to go to [flutter_dir]/packages/flutter_tools/templates/app/lib/main.dart.tmpl and customize the template. But I am not able to figure out how I can add a new file/directory there. Let's say I create a new file called config.dart.tmpl in the same directory as above; it does not get generated when I do flutter create. Any ideas how to get this to work or where I should be looking?

NOTE: I have already tried out other options like Mason. It works fine for me, but I am more interested in doing it with the default Flutter tooling. I have already seen https://stackoverflow.com/a/61974852/6891637 and IDE file templates are not what I am looking for.

A: Currently there's no easy way to create a template similar to skeleton, package, and plugin. There's an open issue ticket for this feature. However, if you really need to create one, you can explore how skeleton was added as a template on this pull request. Or, as a workaround, you can just create a Flutter project template on your repo and fork it when starting a project.

A: This is the current issue about this topic: https://github.com/flutter/flutter/issues/77104 As Felix Angelov said, Mason and VeryGoodCLI are the best alternatives right now.
Customize initial project template for Flutter
I am trying to customize the starter template for a Flutter app. For projects that I work on I follow a very specific structure with my Themes, Blocs, Listeners, etc. I would love it if I could customize the starter template for Flutter so that I have everything all laid out and ready to use on doing something like flutter create . I am aware of where the template is located. I was able to go to [flutter_dir]/packages/flutter_tools/templates/app/lib/main.dart.tmpl and customize the template. But I am not able to figure out how I can add a new file/directory there. Let's say if I create a new file called config.dart.tmpl in the same directory as above, it does not get generated when I do flutter create Any ideas how to get this to work or where I should be looking? NOTE I have already tried out other options like Mason. It works fine for me, but I am more interested in doing it with default flutter tooling. Already seen https://stackoverflow.com/a/61974852/6891637 and IDE file templates is not what I am looking for.
[ "Currently there's no easy way to create a template similar to skeleton, package, and plugin. There's an open issue ticket for this feature. However, if you really need to create one. You can explore how skeleton was added as a template on this pull request.\nOr as a workaround, you can just create a Flutter project template on your repo and fork it when starting a project.\n", "This is the current issue about this topic : https://github.com/flutter/flutter/issues/77104\nAs Felix Angelov said, Mason and VeryGoodCLI are the best alternatives right now.\n" ]
[ 1, 0 ]
[]
[]
[ "flutter" ]
stackoverflow_0068721210_flutter.txt
Q: Cannot access the ASP.NET Core app deployed on Kubernetes

I am new to Docker and Kubernetes, and trying to deploy my ASP.NET Core 6.0 web application on Kubernetes with a Docker image. I can see the service running with type: NodePort as in the last line of screenshot 1, but I cannot access this port in my browser at all. I can also see the Docker container created by the Kubernetes Pod running in the Docker Desktop Windows application as in screenshot 2, but I don't know how to access my deployed application from the browser. Any suggestion or solution would be appreciated.

A: It seems you need to expose the service so that it will allow external traffic. In order to expose the service use:

kubectl expose deployment <deployment> --type="LoadBalancer" --port=8080

This will create an external IP. Check the created external IP by using the kubectl get services command. If it is not visible, wait a few minutes for the service to be exposed, then check again; the external IP will be visible. Now access the service using http://<EXTERNAL_IP>:8080 in the browser. For more information, refer to this lab on how to deploy an ASP.NET Core app on Kubernetes.
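Since the question describes a NodePort service rather than a LoadBalancer, here is a hedged sketch of what such a Service manifest typically looks like; the names and ports are assumptions, as the actual screenshots are not available:

apiVersion: v1
kind: Service
metadata:
  name: aspnet-app          # hypothetical service name
spec:
  type: NodePort
  selector:
    app: aspnet-app         # must match the Deployment's pod labels
  ports:
    - port: 80              # cluster-internal service port
      targetPort: 80        # port the ASP.NET Core container listens on
      nodePort: 30080       # externally reachable port (must be in 30000-32767)

With Docker Desktop's built-in Kubernetes, an app exposed this way is usually reachable at http://localhost:30080, where 30080 is the nodePort value shown by kubectl get services.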
Cannot access the ASP.NET Core app deployed on Kubernetes
I am new to Docker and Kubernetes, and trying to deploy my ASP.Net Core 6.0 web application on Kubernetes with Docker image. I can see the service running with type: NodePort as in the last line of the screenshot 1, but I cannot access this port on my browser at all. I can also see the Docker container created by Kubernetes Pod running on Docker Desktop Windows application as in screenshot 2, but I don't know how to access my deployed application from the browser. Any suggestion or solution would be appreciated.
[ "Seems to be you need to expose the service , so that it will allow external traffic. In order to expose the service use : kubectl expose deployment <deployment> --type=\"Loadbalancer\"--port=8080, this will create an external IP.\nCheck the created external IP by using Kubectl get services command.\nIf not visible, wait for a few minutes to get the service exposed. So, wait for a few minutes and check again the External IP will be visible .\nNow access the service using http://<EXTERNAL_IP>:8080in the browser.\nFor more information Refer to this Lab on how to Deploy ASP.NET Core app on Kubernetes.\n" ]
[ 1 ]
[]
[]
[ "asp.net_core_6.0", "docker", "kubernetes" ]
stackoverflow_0074665081_asp.net_core_6.0_docker_kubernetes.txt
Q: How to run Puppeteer code in any web browser?

I'm trying to do some web scraping with Puppeteer and I need to retrieve the value into a website I'm building. I have tried to load the Puppeteer file in the HTML file as if it was a JavaScript file, but I keep getting an error. However, if I run it in a cmd window it works well.

Scraper.js:

getPrice();

function getPrice() {
    const puppeteer = require('puppeteer');
    void (async () => {
        try {
            const browser = await puppeteer.launch()
            const page = await browser.newPage()
            await page.goto('http://example.com')
            await page.setViewport({ width: 1920, height: 938 })
            await page.waitForSelector('.m-hotel-info > .l-container > .l-header-section > .l-m-col-2 > .m-button')
            await page.click('.m-hotel-info > .l-container > .l-header-section > .l-m-col-2 > .m-button')
            await page.waitForSelector('.modal-content')
            await page.click('.tile-hsearch-hws > .m-search-tabs > #edit-search-panel > .l-em-reset > .m-field-wrap > .l-xs-col-4 > .analytics-click')
            await page.waitForNavigation();
            await page.waitForSelector('.tile-search-filter > .l-display-none')
            const innerText = await page.evaluate(() => document.querySelector('.tile-search-filter > .l-display-none').innerText);
            console.log(innerText)
        } catch (error) {
            console.log(error)
        }
    })()
}

index.html:

<html>
<head></head>
<body>
    <script src="../js/scraper.js" type="text/javascript"></script>
</body>
</html>

The expected result should be this one in the console of Chrome: But I'm getting this error instead: What am I doing wrong?

A: EDIT: Since puppeteer removed support for puppeteer-web, I moved it out of the repo and tried to patch it a bit.

It does work in the browser. The package is called puppeteer-web, specifically made for such cases. But the main point is, there must be some instance of Chrome running on some server. Only then can you connect to it.

You can use it later on in your web page to drive another browser instance through its WS endpoint:

<script src="https://unpkg.com/puppeteer-web"></script>

<script>
    const browser = await puppeteer.connect({
        browserWSEndpoint: `ws://0.0.0.0:8080`, // <-- connect to a server running somewhere
        ignoreHTTPSErrors: true
    });

    const pagesCount = (await browser.pages()).length;
    const browserWSEndpoint = await browser.wsEndpoint();
    console.log({ browserWSEndpoint, pagesCount });
</script>

I had some fun with puppeteer and webpack:

playground-react-puppeteer
playground-electron-react-puppeteer-example

See these answers for a full understanding of creating the server and more:

Official link to puppeteer-web
Puppeteer with docker
Puppeteer with chrome extension
Puppeteer with local wsEndpoint

A: Instead, use Puppeteer in the backend and make an API to interface your frontend with it, if your main goal is to web-scrape and get the data in the frontend.
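A minimal sketch of the backend-API approach suggested in the second answer: run Puppeteer in Node.js behind an HTTP endpoint and let the web page fetch the result. The route name, port, and selector are assumptions for illustration:

// server.js (runs under Node.js, never in the browser)
const express = require('express');
const puppeteer = require('puppeteer');

const app = express();

app.get('/price', async (req, res) => {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto('http://example.com');
    // Same kind of evaluate() call as in the question's Scraper.js
    const text = await page.evaluate(() => document.body.innerText);
    res.json({ text });
  } catch (error) {
    res.status(500).json({ error: String(error) });
  } finally {
    await browser.close();
  }
});

app.listen(3000); // the front end can now call fetch('http://localhost:3000/price')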
How to run Puppeteer code in any web browser?
I'm trying to do some web scraping with Puppeteer and I need to retrieve the value into a Website I'm building. I have tried to load the Puppeteer file in the html file as if it was a JavaScript file but I keep getting an error. However, if I run it in a cmd window it works well. Scraper.js: getPrice(); function getPrice() { const puppeteer = require('puppeteer'); void (async () => { try { const browser = await puppeteer.launch() const page = await browser.newPage() await page.goto('http://example.com') await page.setViewport({ width: 1920, height: 938 }) await page.waitForSelector('.m-hotel-info > .l-container > .l-header-section > .l-m-col-2 > .m-button') await page.click('.m-hotel-info > .l-container > .l-header-section > .l-m-col-2 > .m-button') await page.waitForSelector('.modal-content') await page.click('.tile-hsearch-hws > .m-search-tabs > #edit-search-panel > .l-em-reset > .m-field-wrap > .l-xs-col-4 > .analytics-click') await page.waitForNavigation(); await page.waitForSelector('.tile-search-filter > .l-display-none') const innerText = await page.evaluate(() => document.querySelector('.tile-search-filter > .l-display-none').innerText); console.log(innerText) } catch (error) { console.log(error) } })() } index.html: <html> <head></head> <body> <script src="../js/scraper.js" type="text/javascript"></script> </body> </html> The expected result should be this one in the console of Chrome: But I'm getting this error instead: What am I doing wrong?
[ "EDIT: Since puppeteer removed support for puppeteer-web, I moved it out of the repo and tried to patch it a bit.\nIt does work with browser. The package is called puppeteer-web, specifically made for such cases.\nBut the main point is, there must be some instance of chrome running on some server. Only then you can connect to it.\nYou can use it later on in your web page to drive another browser instance through its WS Endpoint:\n<script src=\"https://unpkg.com/puppeteer-web\">\n</script>\n\n<script>\n const browser = await puppeteer.connect({\n browserWSEndpoint: `ws://0.0.0.0:8080`, // <-- connect to a server running somewhere\n ignoreHTTPSErrors: true\n });\n\n const pagesCount = (await browser.pages()).length;\n const browserWSEndpoint = await browser.wsEndpoint();\n console.log({ browserWSEndpoint, pagesCount });\n</script>\n\nI had some fun with puppeteer and webpack,\n\nplayground-react-puppeteer\nplayground-electron-react-puppeteer-example\n\nSee these answers for full understanding of creating the server and more,\n\nOfficial link to puppeteer-web\nPuppeteer with docker\nPuppeteer with chrome extension\nPuppeteer with local wsEndpoint\n\n", "Instead use puppeteer in backend and make an API to interface your frontend with it if you're main is to web-scrape and get the data in the frontend.\n" ]
[ 18, 0 ]
[]
[]
[ "javascript", "node.js", "puppeteer", "web_scraping" ]
stackoverflow_0054647694_javascript_node.js_puppeteer_web_scraping.txt
Q: How to specify library version when using import in js?

I am trying to import cloudinary into my project, and in their documentation it says that I have to use require and specify the version v2 like this: const cloudinary = require("cloudinary").v2;. However, I have specified my type in package.json as 'module', so I cannot use require; I can only use import. So, my question is how do I specify the version v2. Currently, I can upload to the cloudinary server, but I cannot get back the link to it. Here is my code:

my configuration

import multer from "multer";
import cloudinary from "cloudinary";
import { CloudinaryStorage } from "multer-storage-cloudinary";
import dotenv from "dotenv";

cloudinary.config({
    cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
    api_key: process.env.CLOUDINARY_KEY,
    api_secret: process.env.CLOUDINARY_SECRET,
});

const storage = new CloudinaryStorage({
    cloudinary: cloudinary,
    params: {
        folder: "ecommerce",
    },
});

const upload = multer({ storage: storage });

my route

productsRoutes.post("/products", upload.single("image"), async (req, res) => {
    console.log(req.file);
});

Currently, the console log prints nothing when I submit the image, but it does upload it to the server. Sorry if my question is badly formatted; any help is greatly appreciated.

A: My team and I had the same problem this week... If you didn't get to solve it, we have looked into different possibilities and we got this simple but fine solution -> (cloudinary: cloudinary.v2). So the code will stay like this in our case:

import multer from "multer"
import cloudinary from 'cloudinary';
import { CloudinaryStorage } from "multer-storage-cloudinary";

const storage = new CloudinaryStorage({
    cloudinary: cloudinary.v2,
    params: {
        folder: "avatars",
        allowedFormats: ["jpg", "png", "jpeg", "gif"],
    },
});

const upload = multer({ storage });
export default upload;

So you have to signal that you are using the v2 version of cloudinary in the CloudinaryStorage. I hope that this helps even if it's 11 months late x). We just dealt with it.

A: To specify the version of the Cloudinary library you're using in your project, you can use the import statement and specify the version number after the library name, separated by a /. For example, to import version 2 of the Cloudinary library, you would use the following code:

import cloudinary from "cloudinary/v2";
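For completeness, a hedged ESM variant that mirrors require("cloudinary").v2 without assuming any cloudinary/v2 subpath exists: import the package's default (CommonJS) export and read its v2 property, which relies only on Node's standard CommonJS/ESM interop:

import cloudinaryPkg from "cloudinary";

// Same object that require("cloudinary").v2 would return
const cloudinary = cloudinaryPkg.v2;

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_KEY,
  api_secret: process.env.CLOUDINARY_SECRET,
});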
How to specify library version when using import in js?
I am trying to import cloudinary to my project, and in their documentation it says that I have to use require and specify the version to v2 like this const cloudinary = require("cloudinary").v2;. However, I have specified that my type in package.json as 'module', so I cannot use require I only have to say import. so, my question is how do I specify the version to v2. currently, I can upload to the cloudinary server, but I cannot get back the link to it. here is my code: my configuration import multer from "multer"; import cloudinary from "cloudinary"; import { CloudinaryStorage } from "multer-storage-cloudinary"; import dotenv from "dotenv"; cloudinary.config({ cloud_name: process.env.CLOUDINARY_CLOUD_NAME, api_key: process.env.CLOUDINARY_KEY, api_secret: process.env.CLOUDINARY_SECRET, }); const storage = new CloudinaryStorage({ cloudinary: cloudinary, params: { folder: "ecommerce", }, }); const upload = multer({ storage: storage }); my route productsRoutes.post("/products", upload.single("image"), async (req, res) => { console.log(req.file); }); currently, the console log prints nothing when I submit the image, but it does upload it to the server. sorry if my question is badly formatted, and any help is greatly appreciated.
[ "My team and I had the same problem this week... If you didn't get to solve it we have looked into different possibilities and we got this simple but fine solution -> (cloudinary: cloudinary.v2). So the code will stay like this in our case:\n\n\nimport multer from \"multer\"\nimport cloudinary from 'cloudinary';\nimport {CloudinaryStorage} from \"multer-storage-cloudinary\";\n\nconst storage = new CloudinaryStorage({\n cloudinary: cloudinary.v2,\n params: {\n folder: \"avatars\",\n allowedFormats: [\"jpg\", \"png\", \"jpeg\", \"gif\"],\n },\n});\n\nconst upload = multer({ storage });\nexport default upload;\n\n\n\nSo you have to signal that you are using the v2 version on cloudinary in the CloudinaryStorage.\nI hope that this helps even if it's 11 months late x). We just dealed with it.\n", "To specify the version of the Cloudinary library you're using in your project, you can use the import statement and specify the version number after the library name, separated by a /. For example, to import version 2 of the Cloudinary library, you would use the following code:\nimport cloudinary from \"cloudinary/v2\";\n\n" ]
[ 0, 0 ]
[]
[]
[ "cloudinary", "javascript", "multer" ]
stackoverflow_0070368816_cloudinary_javascript_multer.txt
Q: Question about JSON vs CBOR serialization

I have a theoretical doubt about how serialization works, and especially about the difference between serialization schemes like JSON and binary serialization schemes like CBOR. My question is: if a JSON serializer converts an object into a JSON string, then, to store or transmit the resulting JSON string, do you have to also convert the JSON string into its bytes representation? Is this why binary schemes might be faster, since they produce a binary output already?

A: In memory, a string is represented as a sequence of bytes anyway (actually, everything is just a sequence of bytes in memory), so this should not matter.

What matters is the conversion from the in-memory representation of a JavaScript variable into the in-memory representation of its string equivalent. An extremely simple example is a numeric variable with value -1. This can be internally represented by one byte:

> Buffer.of(-1)
<Buffer ff>

but its JSON serialization "-1" takes two bytes:

> Buffer.from(JSON.stringify(-1))
<Buffer 2d 31>

This should give an idea why a binary scheme that sticks closer to the internal representation can be output (and input) faster.
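A small Node.js sketch of the size difference the answer describes, assuming the third-party cbor package is installed (npm install cbor); the sample object is arbitrary:

const cbor = require('cbor'); // assumed third-party package

const value = { id: -1, ok: true };

const jsonBytes = Buffer.from(JSON.stringify(value)); // text encoding
const cborBytes = cbor.encode(value);                 // binary encoding

// CBOR stores small integers and booleans in single bytes,
// so its output is typically shorter than the JSON text
console.log(jsonBytes.length, cborBytes.length);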
Question about JSON vs CBOR serialization
I have a theoretical doubt about how serialization works, and especially about the difference between serialization schemes like JSON and binary serialization schemes like CBOR. My question is: if a JSON serializer converts an object into a JSON string, then, to store or transmit the resulting JSON string, do you have to also convert the JSON string into its bytes representation? Is this why binary schemes might be faster, since they produce a binary output already?
[ "In memory, a string is anyway represented as a sequence of bytes (actually, everything is just a sequence of bytes in memory), so this should not matter.\nWhat matters is the conversion from the in-memory representation of a Javascript variable into its in-memory representation of its string equivalent. An extremely simply example is a numeric variable with value -1. This can be internally represented by one byte:\n> Buffer.of(-1)\n<Buffer ff>\n\nbut its JSON serialization \"-1\" takes two bytes:\n> Buffer.from(JSON.stringify(-1))\n<Buffer 2d 31>\n\nThis should give an idea why a binary scheme that sticks closer to the internal representation can be output (and input) faster.\n" ]
[ 0 ]
[]
[]
[ "cbor", "json", "serialization" ]
stackoverflow_0074627810_cbor_json_serialization.txt
Q: .NET 7 Error CS0118 'UserService' is a namespace but is used like a type

I have my services under a folder called Services in the project, and I have grouped the user-related parts in a folder called UserService. Under that folder, I have an interface called IUserService.cs and a class called UserService.cs which contain basic user authentication methods. Here is a picture of the folder structure:

To register this user service for dependency injection I have used an extension method class called RegisterServices.cs. And this is the error I get when I try to register the user service using the code below:

Error CS0118 'UserService' is a namespace but is used like a type (Project: Infinium.API, File: D:\Infinium Projects\Infinium\Infinium.API\Services\ServiceRegister.cs, Line 16, Suppression State: Active)

Below is the code of RegisterServices.cs:

using Infinium.API.Authorization;
using Infinium.API.DataManager;
using Infinium.API.Services.UserService;

namespace Infinium.API.Services
{
    public static class ServiceRegister
    {
        public static void RegisterServices(this IServiceCollection services)
        {
            // vNext DB
            services.AddSingleton<IDataAccessor, DataAccessor>();
            //services.Configure<AppSettings>(getser)
            services.AddScoped<IJwtUtils, JwtUtils>();
            services.AddScoped<IUserService, UserService>();
        }
    }
}

I have added the RegisterServices extension method class to Program.cs as follows:

using Infinium.API.Authorization;
using Infinium.API.Helpers;
using Infinium.API.Services;
using MaxRAV.API.Helpers;
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddCors(options =>
{
    options.AddPolicy(name: "AllowedCorsOrigins",
        builder =>
        {
            builder
                .SetIsOriginAllowed((_) => true)
                .AllowAnyHeader()
                .AllowAnyMethod()
                .AllowCredentials();
        });
});

builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// configure strongly typed settings object
builder.Services.Configure<AppSettings>(builder.Configuration.GetSection("AppSettings"));

builder.Services.RegisterServices();

builder.Host.UseSerilog((ctx, lc) => lc
    .WriteTo.Console()
    .WriteTo.Seq("http://localhost:5341") // comment if not configured
);

var app = builder.Build();

app.UseCors("AllowedCorsOrigins");

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

// global error handler
app.UseMiddleware<ErrorHandlerMiddleware>();

// custom jwt auth middleware
app.UseMiddleware<JwtMiddleware>();

//app.UseAuthorization();

app.MapControllers();

app.Run();

The same code used to work with .NET 5, and in .NET 6 this error was sometimes thrown. I assume that the folder name conflicts with the class name, but I am not sure why it doesn't happen all the time. Maybe the solution is to rename the folder?

A: There are multiple ways to handle this, e.g. use fully qualified type names, like

services.AddScoped<Infinium.API.Services.UserService.IUserService,
    Infinium.API.Services.UserService.UserService>();

Namespace names conflicting with class names can sometimes cause this kind of issue. Another idea would be to maintain a separation between these, so that if your namespace ends with UserService, the class has a different name. One suggestion would be to have the namespace end with a plural version of whatever you have, like

Infinium.API.Services.UserServices

and the class name in the namespace is

UserService
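A third option the answer does not mention, sketched here as a suggestion: a using alias lets you keep both the namespace and the class named UserService while resolving the CS0118 ambiguity at the registration site:

using Infinium.API.Services.UserService;
// Alias binds the identifier to the class, not the namespace
using UserServiceImpl = Infinium.API.Services.UserService.UserService;

namespace Infinium.API.Services
{
    public static class ServiceRegister
    {
        public static void RegisterServices(this IServiceCollection services)
        {
            // The alias removes the namespace/class name collision
            services.AddScoped<IUserService, UserServiceImpl>();
        }
    }
}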
.NET 7 Error CS0118 'UserService' is a namespace but is used like a type
I have my services under a folder called Services in the project and I have grouped the User related parts in a folder called UserService. And under that folder, I have an Interface called IUserService.cs and a class called UserService.cs which contain basic user authentication methods. Here is a picture of the folder structure: To register this User service to dependency injection I have used an extension method class called RegisterServices.cs. And this is the error I get when i try to register the user service using the below code: Severity Code Description Project File Line Suppression State Error CS0118 'UserService' is a namespace but is used like a type Infinium.API D:\Infinium Projects\Infinium\Infinium.API\Services\ServiceRegister.cs 16 Active Below is the code of RegisterServices.cs using Infinium.API.Authorization; using Infinium.API.DataManager; using Infinium.API.Services.UserService; namespace Infinium.API.Services { public static class ServiceRegister { public static void RegisterServices(this IServiceCollection services) { // vNext DB services.AddSingleton<IDataAccessor, DataAccessor>(); //services.Configure<AppSettings>(getser) services.AddScoped<IJwtUtils, JwtUtils>(); services.AddScoped<IUserService, UserService>(); } } } I have added the RegisterServices extension method class to Program.cs as follows; using Infinium.API.Authorization; using Infinium.API.Helpers; using Infinium.API.Services; using MaxRAV.API.Helpers; using Serilog; var builder = WebApplication.CreateBuilder(args); // Add services to the container. builder.Services.AddCors(options => { options.AddPolicy(name: "AllowedCorsOrigins", builder => { builder .SetIsOriginAllowed((_) => true) .AllowAnyHeader() .AllowAnyMethod() .AllowCredentials(); }); }); builder.Services.AddControllers(); // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); // configure strongly typed settings object builder.Services.Configure<AppSettings>(builder.Configuration.GetSection("AppSettings")); builder.Services.RegisterServices(); builder.Host.UseSerilog((ctx, lc) => lc .WriteTo.Console() .WriteTo.Seq("http://localhost:5341") // comment if not configired ); var app = builder.Build(); app.UseCors("AllowedCorsOrigins"); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseHttpsRedirection(); // global error handler app.UseMiddleware<ErrorHandlerMiddleware>(); // custom jwt auth middleware app.UseMiddleware<JwtMiddleware>(); //app.UseAuthorization(); app.MapControllers(); app.Run(); The same code used to work with .NET 5 and in .NET 6 sometimes this error was thrown. I assume that the folder name conflicts with the class name, but I am not sure why it doesn't happen all the time. Maybe the solution is to rename the folder?
[ "There are multiple ways to handle this, e.g. use fully qualified type names, like\nservices.AddScoped<Infinium.API.Services.UserService.IUserService, \n Infinium.API.Services.UserService.UserService>();\n\nNamespace names conflicting with class names sometimes can cause this kind of issues.\nAnother idea would be to maintain a separation between these, so that if your namespace ends with UserService, the class has a different name. An advice would be to have the namespace ending with a plural version of whatever you have, like\nInfinium.API.Services.UserServices\n\nand the class name in the namespace is\nUserService\n\n" ]
[ 2 ]
[]
[]
[ ".net_7.0", "c#", "c#_11.0", "dependency_injection", "visual_studio_2022" ]
stackoverflow_0074665433_.net_7.0_c#_c#_11.0_dependency_injection_visual_studio_2022.txt
Q: How to validate a move in a checkerboard C?

Hi, I'm a newbie at programming and this task is frustrating me right now because I still can't get the piece to move on the board. I am guessing this has to do with my while loop, but I still can't get it right. The first user input should detect the coordinate of the character to move and show the available spots to which it can send the piece, using validateMove(). Here's my code; I've been on this for weeks now.

#include <stdio.h>

char board[8][8] = {
    {' ', 'B', ' ', 'B', ' ', 'B', ' ', 'B'},
    {'B', ' ', 'B', ' ', 'B', ' ', 'B', ' '},
    {' ', 'B', ' ', 'B', ' ', 'B', ' ', 'B'},
    {'*', ' ', '*', ' ', '*', ' ', '*', ' '},
    {' ', '*', ' ', '*', ' ', '*', ' ', '*'},
    {'W', ' ', 'W', ' ', 'W', ' ', 'W', ' '},
    {' ', 'W', ' ', 'W', ' ', 'W', ' ', 'W'},
    {'W', ' ', 'W', ' ', 'W', ' ', 'W', ' '}};

void printBoard() {
    int i, j, k;
    printf("\n ");
    for(i=0;i<8;i++)
        printf(" %d", i+1);
    printf(" \n");
    for(k=0;k<8;k++) {
        printf(" ");
        for(i=0;i<42;i++) {
            printf("-");
        }
        printf(" \n");
        printf(" %d ", k+1);
        for(j=0;j<8;j++) {
            printf("|| %c ", board[k][j]);
        }
        printf("|| \n");
    }
    printf(" ");
    for(i=0;i<42;i++) {
        printf("-");
    }
    printf(" \n");
}

void validateMoveChecker(int x1, int y1, int af, int bf, int v) {
    int a, b;
    for(a=1, b=1; board[x1+ af * a][y1+ bf * b] == '*'; a++, b++) {
        if((x1+af*a) == -1 || (y1+bf*b)==v)
            return;
        printf("%d%d , ", x1+af*a, y1+bf*b);
    }
}

int validateMove(int x1, int y1) {
    printf("Available coordinates to send the piece are: \n");
    validateMoveChecker(x1, y1, -1, 1, 8);
    validateMoveChecker(x1, y1, 1, -1, -1);
    validateMoveChecker(x1, y1, 1, 1, 8);
    validateMoveChecker(x1, y1, -1, -1, -1);
}

int getChange(int x1, int y1, int x2, int y2) {
    char temp;
    temp = board[x1][y1];
    board[x1][y1] = board[x2][y2];
    board[x2][y2] = temp;
}

int check(int x, int y) {
    switch(board[x][y]) {
        case 'B':
            return 1;
            break;
        default:
            return 0;
    }
}

int main() {
    int x1, y1, x2, y2, pos1, pos2, x, y;
    char board[8][8];
    printBoard();
    do {
        printf("\nEnter the position of the piece [X,Y]: ");
        scanf("%d", &pos1);
        y1 = pos1%10;
        x1 = pos1/10;
        switch(board[x1][y1]) {
            case 'B':
                validateMove(x1, y1);
                break;
            default:
                printf("Invalid Position. Please enter again!");
        }
    } while(x1!=0 && y1!=0);

    printf("\nEnter the position on where to send the piece [X,Y]: ");
    scanf("%d", &pos2);
    y2 = pos2%10;
    x2 = pos2/10;
    getChange(x1, y1, x2, y2);
    if((x2-x1==y2-y1)!=0) {
        getChange(x1, y1, x2, y2);
        check(x, y);
    }
}

A: I changed printBoard() to show the index starting with 0 (1 is fine too, but then you need to adjust the indexes):

void printBoard() {
    int i, j, k;
    printf("\n ");
    for(i=0;i<8;i++)
        printf(" %d", i);
    printf(" \n");
    for(k=0;k<8;k++) {
        printf(" ");
        for(i=0;i<42;i++) {
            printf("-");
        }
        printf(" \n");
        printf(" %d ", k);
        for(j=0;j<8;j++) {
            printf("|| %c ", board[k][j]);
        }
        printf("|| \n");
    }
    printf(" ");
    for(i=0;i<42;i++) {
        printf("-");
    }
    printf(" \n");
}

and then in main() I made the following changes:

removed the local variable board; your code relies on a global variable of the same name. It would, however, be a better design to pass board to the functions that need it instead of using a global variable.
changed the two scanf() calls to just read the coordinates directly.
added a printBoard() after getChange() to verify the move worked.
(not fixed) the variables x and y are uninitialized.

int main(void) {
    int x1, y1, x2, y2, x, y;
    printBoard();
    do {
        printf("\nEnter the position of the piece [X,Y]: ");
        scanf("%d,%d", &x1, &y1);
        switch(board[x1][y1]) {
            case 'B':
                validateMove(x1, y1);
                break;
            default:
                printf("Invalid Position. Please enter again!");
        }
    } while(x1 && y1);

    printf("\nEnter the position on where to send the piece [X,Y]: ");
    scanf("%d,%d", &x2, &y2);
    getChange(x1, y1, x2, y2);
    printBoard();
    if((x2-x1==y2-y1)!=0) {
        getChange(x1, y1, x2, y2);
        check(x, y);
    }
}

and an example session:

   0  1  2  3  4  5  6  7
  ------------------------------------------
0 ||   || B ||   || B ||   || B ||   || B ||
  ------------------------------------------
1 || B ||   || B ||   || B ||   || B ||   ||
  ------------------------------------------
2 ||   || B ||   || B ||   || B ||   || B ||
  ------------------------------------------
3 || * ||   || * ||   || * ||   || * ||   ||
  ------------------------------------------
4 ||   || * ||   || * ||   || * ||   || * ||
  ------------------------------------------
5 || W ||   || W ||   || W ||   || W ||   ||
  ------------------------------------------
6 ||   || W ||   || W ||   || W ||   || W ||
  ------------------------------------------
7 || W ||   || W ||   || W ||   || W ||   ||
  ------------------------------------------

Enter the position of the piece [X,Y]: 0,1
Available coordinates to send the piece are:

Enter the position on where to send the piece [X,Y]: 0,0

   0  1  2  3  4  5  6  7
  ------------------------------------------
0 || B ||   ||   || B ||   || B ||   || B ||
  ------------------------------------------
1 || B ||   || B ||   || B ||   || B ||   ||
  ------------------------------------------
2 ||   || B ||   || B ||   || B ||   || B ||
  ------------------------------------------
3 || * ||   || * ||   || * ||   || * ||   ||
  ------------------------------------------
4 ||   || * ||   || * ||   || * ||   || * ||
  ------------------------------------------
5 || W ||   || W ||   || W ||   || W ||   ||
  ------------------------------------------
6 ||   || W ||   || W ||   || W ||   || W ||
  ------------------------------------------
7 || W ||   || W ||   || W ||   || W ||   ||
  ------------------------------------------
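One gap both versions leave open (the answer notes that x and y stay uninitialized, and neither version checks what scanf() returns) can be closed with a small helper; this is a hedged sketch, and the function name is an invention for illustration:

/* Reads "X,Y"; returns 1 only for numeric input inside the 8x8 board */
int readCoords(const char *prompt, int *x, int *y) {
    printf("%s", prompt);
    if (scanf("%d,%d", x, y) != 2) {   /* reject non-numeric input */
        int c;
        while ((c = getchar()) != '\n' && c != EOF)
            ;                          /* discard the rest of the line */
        return 0;
    }
    return *x >= 0 && *x < 8 && *y >= 0 && *y < 8;
}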
How to validate a move in a checkerboard C?
Hi im a newbie at programming and this task is frustrating me rn because I still can't get the piece to move on the board. I am guessing this has to do on my while loop function but still can't get it right. The 1st user input should detect the coordinate of the character to move and show what are the available spots to where it can send the piece using validateMove(). Here's my code, Been onto this for weeks now. #include <stdio.h> char board[8][8] = { {' ', 'B', ' ', 'B', ' ', 'B', ' ', 'B'}, {'B', ' ', 'B', ' ', 'B', ' ', 'B', ' '}, {' ', 'B', ' ', 'B', ' ', 'B', ' ', 'B'}, {'*', ' ', '*', ' ', '*', ' ', '*', ' '}, {' ', '*', ' ', '*', ' ', '*', ' ', '*'}, {'W', ' ', 'W', ' ', 'W', ' ', 'W', ' '}, {' ', 'W', ' ', 'W', ' ', 'W', ' ', 'W'}, {'W', ' ', 'W', ' ', 'W', ' ', 'W', ' '}}; void printBoard() { int i , j , k ; printf("\n "); for(i=0;i<8;i++) printf(" %d", i+1 ); printf(" \n"); for(k=0;k<8;k++) { printf(" "); for(i=0;i<42;i++) { printf("-"); } printf(" \n"); printf(" %d ", k+1); for(j=0;j<8;j++) { printf("|| %c ", board[k][j]); } printf("|| \n"); } printf(" "); for(i=0;i<42;i++) { printf("-"); } printf(" \n"); } void validateMoveChecker(int x1, int y1, int af, int bf, int v) { int a, b; for(a=1, b=1; board[x1+ af * a][y1+ bf * b] == '*'; a++, b++) { if((x1+af*a) == -1 || (y1+bf*b)==v) return; printf("%d%d , ", x1+af*a , y1+bf*b); } } int validateMove(int x1 , int y1) { printf( "Available coordinates to send the piece are: \n" ); validateMoveChecker(x1, y1, -1, 1, 8); validateMoveChecker(x1, y1, 1, -1, -1); validateMoveChecker(x1, y1, 1, 1, 8); validateMoveChecker(x1, y1, -1, -1, -1); } int getChange(int x1 ,int y1 ,int x2 ,int y2 ) { char temp; temp = board[x1][y1]; board[x1][y1] = board[x2][y2]; board[x2][y2] = temp; } int check(int x, int y) { switch(board[x][y]) { case 'B': return 1; break; default: return 0; } } int main(){ int x1, y1, x2, y2, pos1, pos2, x, y; char board[8][8]; printBoard(); do { printf("\nEnter the position of the piece [X,Y]: "); scanf("%d", &pos1); y1 = pos1%10; x1 = pos1/10; switch(board[x1][y1]) { case 'B': validateMove(x1, y1); break; default: printf("Invalid Position. Please enter again!"); } }while(x1!=0 && y1!=0); printf("\nEnter the position on where to send the piece [X,Y]: "); scanf("%d", &pos2); y2 = pos2%10; x2 = pos2/10; getChange(x1 ,y1 ,x2 ,y2); if((x2-x1==y2-y1)!=0) { getChange(x1 ,y1 ,x2 ,y2); check(x,y); } }
[ "I changed printBoard() to show index starting with 0 (1 is fine too, but then you need to adjust indexes):\nvoid printBoard() {\n int i , j , k ;\n printf(\"\\n \");\n for(i=0;i<8;i++)\n printf(\" %d\", i );\n printf(\" \\n\");\n for(k=0;k<8;k++) {\n printf(\" \");\n for(i=0;i<42;i++) {\n printf(\"-\");\n }\n printf(\" \\n\");\n printf(\" %d \", k);\n for(j=0;j<8;j++) {\n printf(\"|| %c \", board[k][j]);\n }\n printf(\"|| \\n\");\n }\n printf(\" \");\n for(i=0;i<42;i++) {\n printf(\"-\");\n }\n printf(\" \\n\");\n}\n\nand then in main() I made the following changes:\n\nremoved the local variable board; your code relies on a global variable of the same name. It would, however, be a better design to pass board to the functions that need it instead of using a global variable.\nchanged the two scanf() to just read the coordinates directly.\nadded a printBoard() after getChange() to verify the move worked.\n(not fixed) the variables x and y are uninitialized.\n\nint main(void) {\n int x1, y1, x2, y2, x, y;\n printBoard();\n do {\n printf(\"\\nEnter the position of the piece [X,Y]: \");\n scanf(\"%d,%d\", &x1, &y1);\n switch(board[x1][y1]) {\n case 'B':\n validateMove(x1, y1);\n break;\n default:\n printf(\"Invalid Position. Please enter again!\");\n }\n } while(x1 && y1);\n\n printf(\"\\nEnter the position on where to send the piece [X,Y]: \");\n scanf(\"%d,%d\", &x2, &y2);\n getChange(x1, y1 ,x2 ,y2);\n printBoard();\n if((x2-x1==y2-y1)!=0) {\n getChange(x1 ,y1 ,x2 ,y2);\n check(x, y);\n }\n}\n\nand example session:\n 0 1 2 3 4 5 6 7 \n ------------------------------------------ \n 0 || || B || || B || || B || || B || \n ------------------------------------------ \n 1 || B || || B || || B || || B || || \n ------------------------------------------ \n 2 || || B || || B || || B || || B || \n ------------------------------------------ \n 3 || * || || * || || * || || * || || \n ------------------------------------------ \n 4 || || * || || * || || * || || * || \n ------------------------------------------ \n 5 || W || || W || || W || || W || || \n ------------------------------------------ \n 6 || || W || || W || || W || || W || \n ------------------------------------------ \n 7 || W || || W || || W || || W || || \n ------------------------------------------ \n\nEnter the position of the piece [X,Y]: 0,1\nAvailable coordinates to send the piece are: \n\nEnter the position on where to send the piece [X,Y]: 0,0\n\n 0 1 2 3 4 5 6 7 \n ------------------------------------------ \n 0 || B || || || B || || B || || B || \n ------------------------------------------ \n 1 || B || || B || || B || || B || || \n ------------------------------------------ \n 2 || || B || || B || || B || || B || \n ------------------------------------------ \n 3 || * || || * || || * || || * || || \n ------------------------------------------ \n 4 || || * || || * || || * || || * || \n ------------------------------------------ \n 5 || W || || W || || W || || W || || \n ------------------------------------------ \n 6 || || W || || W || || W || || W || \n ------------------------------------------ \n 7 || W || || W || || W || || W || || \n ------------------------------------------ \n\n" ]
[ 0 ]
[]
[]
[ "c" ]
stackoverflow_0074665323_c.txt
Q: How to solve the error: 'tuple' object has no attribute 'decode' with django-channels

When I tried to execute the channels tutorial in order to establish a Django website with websockets, this error message emerged:

AttributeError: 'tuple' object has no attribute 'decode'

I just executed the following code:

$ python3 manage.py shell
>>> import channels.layers
>>> channel_layer = channels.layers.get_channel_layer()
>>> from asgiref.sync import async_to_sync
>>> async_to_sync(channel_layer.send)('test_channel', {'type': 'hello'})

Tutorial link: https://channels.readthedocs.io/en/stable/tutorial/part_2.html

Environment:

Ubuntu 20.04 LTS with Python 3.8.10
django 4.1.3
channels 4.0.0
channels-redis 4.0.0
daphne 4.0.0
asgiref 3.5.2

I think the problem may be caused by asgiref, but there is no documentation for reference.

A: It looks like you are encountering an AttributeError when trying to send a message to a channel using the channels and asgiref libraries in Python. This error is raised when you try to call a method on an object that does not have that method.

In this case, it appears that the decode method is being called on a tuple object, but tuple objects do not have a decode method. This is likely caused by a mistake in your code, where you are trying to treat a tuple object as if it were a str object.

To fix this error, you will need to find the line of code where the decode method is being called on a tuple object and correct it. This may involve changing the type of the object, or simply using a different method that is appropriate for the object's type.

Without more information about your code and the context in which the error is occurring, it is difficult to provide a specific solution. However, the general steps to fix this error are:

Identify the line of code where the AttributeError is being raised.
Determine why the decode method is being called on a tuple object.
Correct the code to use a different method or object type that is appropriate for the situation.

Once you have fixed the code, you should be able to send messages to channels without encountering this error.
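The advice above stays generic, so one concrete, hedged thing to check (an assumption, since settings.py is not shown in the question): the tuple in the traceback may simply be a host entry in CHANNEL_LAYERS. The tutorial's config uses a tuple host; if that exact config misbehaves with channels-redis 4.x, swapping in the URL-string form is a cheap experiment:

# settings.py (channel-layer config from the channels tutorial)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],        # tuple form used by the tutorial
            # "hosts": ["redis://127.0.0.1:6379"], # URL-string alternative to try
        },
    },
}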
How to solve the error: 'tuple' object has no attribute 'decode' with django-channels
When I tried to execute the channels' tutorial in order to establishing the django website with websocket, the error message emerged: AttributeError: 'tuple' object has no attribute 'decode' I just executed following code: $ python3 manage.py shell >>> import channels.layers >>> channel_layer = channels.layers.get_channel_layer() >>> from asgiref.sync import async_to_sync >>> async_to_sync(channel_layer.send)('test_channel', {'type': 'hello'}) Tutorial link: https://channels.readthedocs.io/en/stable/tutorial/part_2.html Environment: Ubuntu 20.04LTS with Python 3.8.10 django 4.1.3 channels 4.0.0 channels-redis 4.0.0 daphne 4.0.0 asgiref 3.5.2 I think the problem may caused by asgiref, but there is no documentation for reference.
[ "It looks like you are encountering an AttributeError when trying to send a message to a channel using the channels and asgiref libraries in Python. This error is raised when you try to call a method on an object that does not have that method.\nIn this case, it appears that the decode method is being called on a tuple object, but tuple objects do not have a decode method. This is likely caused by a mistake in your code, where you are trying to treat a tuple object as if it were a str object.\nTo fix this error, you will need to find the line of code where the decode method is being called on a tuple object and correct it. This may involve changing the type of the object, or simply using a different method that is appropriate for the object's type.\nWithout more information about your code and the context in which the error is occurring, it is difficult to provide a specific solution. However, the general steps to fix this error are:\n\nIdentify the line of code where the AttributeError is being raised.\nDetermine why the decode method is being called on a tuple object.\nCorrect the code to use a different method or object type that is appropriate for the situation.\n\nOnce you have fixed the code, you should be able to send messages to channels without encountering this error.\n" ]
[ 0 ]
[]
[]
[ "channels", "django_channels", "python" ]
stackoverflow_0074665490_channels_django_channels_python.txt
Q: Vue pageable container scrollbar refreshes after fetching every new page

I have a pageable div. With the jQuery scroll function, I fetch new pages when the scroll reaches near the bottom of the div. I fetch the new page in a parent component, say App.vue, and the scrollbar is in PropertyBox.vue. Whenever I fetch a new page I update the data in App.vue and send it as a property to PropertyBox. In order to be able to see the changes in PropertyBox I update its key in App.vue (without that, changes don't appear even if new data is consumed). The issue is that because the key is updated and the component is hard-reloaded, the scrollbar goes to the top each time. How can I overcome this? Should I use another method for hard-reloading the child component (PropertyBox)?

A: To prevent the scrollbar from resetting to the top of the div each time you update the data, you can try using a state management library such as Vuex to manage the shared data between your components. This will allow you to update the data in a centralized store and prevent the child component from hard reloading and resetting the scroll position.

Here's an example of how you can use Vuex to manage the data in your app. First, you'll need to install the Vuex library and create a new store:

// main.js
import Vuex from 'vuex';
Vue.use(Vuex);

const store = new Vuex.Store({
  state: {
    // initial state here
  },
  mutations: {
    // mutations to update the state here
  }
});

new Vue({
  store,
  render: h => h(App),
}).$mount('#app');

Next, you can move the data you want to share between components into the store. For example, you can move the properties array into the store's state:

const store = new Vuex.Store({
  state: {
    properties: [],
  },
  mutations: {
    // mutations to update the state here
  }
});

Then, in your parent component (App.vue), you can use the mapState helper from Vuex to map the properties array from the store to a computed property in the component:

// App.vue
import { mapState } from 'vuex';

export default {
  computed: {
    ...mapState(['properties']),
  },
  methods: {
    async fetchProperties() {
      // fetch new properties and update the store using a mutation
    },
  },
};

Finally, in your child component (PropertyBox.vue), you can also use the mapState helper to map the properties array from the store to a computed property in the component:

// PropertyBox.vue
import { mapState } from 'vuex';

export default {
  computed: {
    ...mapState(['properties']),
  },
  mounted() {
    this.$nextTick(() => {
      // initialize the scroll event listener here
    });
  },
};

With this setup, you can update the properties array in the store using a mutation, and the changes will be automatically reflected in both the parent and child components without resetting the scroll position.
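The answer's fetchProperties() leaves the mutation as a comment; here is a minimal hedged sketch of what it could look like (the mutation name is an assumption). Appending rather than replacing keeps the existing DOM, so no :key bump, and therefore no scroll reset, is needed:

// store.js (hypothetical mutation appending one fetched page)
const store = new Vuex.Store({
  state: {
    properties: [],
  },
  mutations: {
    appendProperties(state, newPage) {
      // Push items instead of replacing the array
      state.properties.push(...newPage);
    },
  },
});

// In App.vue's fetchProperties():
// this.$store.commit('appendProperties', fetchedItems);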
Vue pageable container scrollbar refreshes after fetching every new page
I have a pageable div. With jquery scroll function, I fetch new pages when the scroll reaches near the bottom of the div. I fetch the new page in a parent component - say App.vue. And the scrollbar is in PropertyBox.vue. Whenever I fetch the new page I update the data in App.vue and I send it as a property to PropertyBox. In order to be able to see the changes in PropertyBox I update it's key in App.vue (without that changes doesn't appear even if new data is consumed). The issue is that because the key is updated and the component is hard reloaded the scrollbar goes to top each time. How can I overcome this? Should I use another method for hard reloading the child component(PropertyBox)?
[ "To prevent the scrollbar from resetting to the top of the div each time you update the data, you can try using a state management library such as Vuex to manage the shared data between your components. This will allow you to update the data in a centralized store and prevent the child component from hard reloading and resetting the scroll position.\nHere's an example of how you can use Vuex to manage the data in your app:\nFirst, you'll need to install the Vuex library and create a new store:\n// main.js\nimport Vuex from 'vuex';\nVue.use(Vuex);\n\nconst store = new Vuex.Store({\n state: {\n // initial state here\n },\n mutations: {\n // mutations to update the state here\n }\n});\n\nnew Vue({\n store,\n render: h => h(App),\n}).$mount('#app');\n\nNext, you can move the data you want to share between components into the store. For example, you can move the properties array into the store's state:\nconst store = new Vuex.Store({\n state: {\n properties: [],\n },\n mutations: {\n // mutations to update the state here\n }\n});\n\nThen, in your parent component (App.vue), you can use the mapState helper from Vuex to map the properties array from the store to a computed property in the component:\n// App.vue\nimport { mapState } from 'vuex';\n\nexport default {\n computed: {\n ...mapState(['properties']),\n },\n methods: {\n async fetchProperties() {\n // fetch new properties and update the store using a mutation\n },\n },\n};\n\nFinally, in your child component (PropertyBox.vue), you can also use the mapState helper to map the properties array from the store to a computed property in the component:\n// PropertyBox.vue\nimport { mapState } from 'vuex';\n\nexport default {\n computed: {\n ...mapState(['properties']),\n },\n mounted() {\n this.$nextTick(() => {\n // initialize the scroll event listener here\n });\n },\n};\n\nWith this setup, you can update the properties array in the store using a mutation, and the changes will be automatically reflected in both the parent and child components without resetting the scroll position.\n" ]
[ 1 ]
[]
[]
[ "javascript", "vue.js" ]
stackoverflow_0074665512_javascript_vue.js.txt
Q: Add a custom javascript to the FastAPI Swagger UI docs webpage in Python I want to load my custom javascript file or code into the FastAPI Swagger UI webpage, to add some dynamic interaction when I create a FastAPI object. For example, in Swagger UI on the docs webpage I would like to <script src="custom_script.js"></script> or <script> alert('worked!') </script> I tried: api = FastAPI(docs_url=None) api.mount("/static", StaticFiles(directory="static"), name="static") @api.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=api.openapi_url, title=api.title + " - Swagger UI", oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url, swagger_js_url="/static/sample.js", swagger_css_url="/static/sample.css", ) but it is not working. Is there a way just to insert my custom javascript code on the docs webpage of FastAPI Swagger UI with Python? A: Finally I made it work. This is what I did: from fastapi.openapi.docs import ( get_redoc_html, get_swagger_ui_html, get_swagger_ui_oauth2_redirect_html, ) from fastapi.staticfiles import StaticFiles api = FastAPI(docs_url=None) path_to_static = os.path.join(os.path.dirname(__file__), 'static') logger.info(f"path_to_static: {path_to_static}") api.mount("/static", StaticFiles(directory=path_to_static), name="static") @api.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=api.openapi_url, title="My API", oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url, swagger_js_url="/static/custom_script.js", # swagger_css_url="/static/swagger-ui.css", # swagger_favicon_url="/static/favicon-32x32.png", ) Important notes: Make sure the static path is correct and all your files are in the static folder; by default the static folder should be in the same folder as the script that created the FastAPI object. For example: -parent_folder Build_FastAPI.py -static_folder custom_script.js custom_css.css Find the swagger-ui-bundle.js on the internet and copy-paste all its content into custom_script.js, then add your custom javascript code at the beginning or at the end of custom_script.js. For example: setTimeout(function(){alert('My custom script is working!')}, 5000); ... ..... /*! For license information please see swagger-ui-bundle.js.LICENSE.txt */ !function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t():"function"==typeof define&&define.amd?define([],t):"object"==typeof exports?exports.SwaggerUIBundle=t():e.SwaggerUIBundle=t()} ... ..... Save and refresh your browser, you are all set! IF SOMEBODY KNOWS A BETTER ANSWER YOU ARE WELCOME, THE BEST ONE WILL BE ACCEPTED!
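Since the answer above explicitly invites a better approach, here is one hedged alternative sketch: keep the stock Swagger UI bundle and simply append a <script> tag to the generated docs page, instead of maintaining a patched copy of swagger-ui-bundle.js. The /static mount and file name are assumptions carried over from the question:

# Hypothetical sketch: post-process the HTML that get_swagger_ui_html
# produces and inject a <script> tag before </body>.
from fastapi import FastAPI
from fastapi.openapi.docs import get_swagger_ui_html
from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles

api = FastAPI(docs_url=None)
api.mount("/static", StaticFiles(directory="static"), name="static")

@api.get("/docs", include_in_schema=False)
async def custom_docs() -> HTMLResponse:
    html = get_swagger_ui_html(openapi_url=api.openapi_url, title="My API")
    # html.body is the rendered page as bytes; splice in the custom script
    patched = html.body.decode("utf-8").replace(
        "</body>", '<script src="/static/custom_script.js"></script></body>'
    )
    return HTMLResponse(patched)

This keeps the default Swagger assets untouched, so custom_script.js only has to contain the custom code.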
Add a custom javascript to the FastAPI Swagger UI docs webpage in Python
I want to load my custom javascript file or code into the FastAPI Swagger UI webpage, to add some dynamic interaction when I create a FastAPI object. For example, in Swagger UI on the docs webpage I would like to <script src="custom_script.js"></script> or <script> alert('worked!') </script> I tried: api = FastAPI(docs_url=None) api.mount("/static", StaticFiles(directory="static"), name="static") @api.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=api.openapi_url, title=api.title + " - Swagger UI", oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url, swagger_js_url="/static/sample.js", swagger_css_url="/static/sample.css", ) but it is not working. Is there a way just to insert my custom javascript code on the docs webpage of FastAPI Swagger UI with Python?
[ "Finally I made it working. This is what I did:\nfrom fastapi.openapi.docs import (\n get_redoc_html,\n get_swagger_ui_html,\n get_swagger_ui_oauth2_redirect_html,\n)\nfrom fastapi.staticfiles import StaticFiles\n\napi = FastAPI(docs_url=None) \n\npath_to_static = os.path.join(os.path.dirname(__file__), 'static')\nlogger.info(f\"path_to_static: {path_to_static}\")\napi.mount(\"/static\", StaticFiles(directory=path_to_static), name=\"static\")\n\[email protected](\"/docs\", include_in_schema=False)\n async def custom_swagger_ui_html():\n return get_swagger_ui_html(\n openapi_url=api.openapi_url,\n title=\"My API\",\n oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url,\n swagger_js_url=\"/static/custom_script.js\",\n # swagger_css_url=\"/static/swagger-ui.css\",\n # swagger_favicon_url=\"/static/favicon-32x32.png\",\n )\n\nImportant notes:\n\nMake sure the static path is correct and all your files are in the static folder, by default the static folder should be in the same folder with the script that created the FastAPI object.\n\nFor example:\n\n -parent_folder\n Build_FastAPI.py\n -static_folder\n custom_script.js\n custom_css.css\n\n\n\nFind the swagger-ui-bundle.js on internet and copy-paste all its content to custom_script.js, then add your custom javascript code at the beginning or at the end of custom_script.js.\n\nFor example:\nsetTimeout(function(){alert('My custom script is working!')}, 5000);\n...\n.....\n/*! For license information please see swagger-ui-bundle.js.LICENSE.txt */\n !function(e,t){\"object\"==typeof exports&&\"object\"==typeof module?module.exports=t():\"function\"==typeof define&&define.amd?define([],t):\"object\"==typeof exports?exports.SwaggerUIBundle=t():e.SwaggerUIBundle=t()}\n...\n.....\n\n\nSave and refresh your browser, you are all way up!\n\nIF SOMEBODY KNOWS A BETTER ANSWER YOUR ARE WELCOME, THE BEST ONE WILL BE ACCEPTED!\n" ]
[ 1 ]
[]
[]
[ "fastapi", "python", "swagger_ui" ]
stackoverflow_0074661044_fastapi_python_swagger_ui.txt
Q: Android Studio doesn't find com.android.support:support-v4:19.1.0 I have imported one project into Android Studio but I got the error: Could not find com.android.support:support-v4:19.1.0. Where could I find this file? I have imported the project using Gradle. I have Android Studio version 0.5.7, the latest Android SDK and Java 1.7u55. A: Just add this code to your build.gradle file dependencies { compile 'com.android.support:support-v4:19.+' } and press Tools -> Android -> Sync Project with Gradle Files Gradle will download the necessary files by itself A: It does not work for me either. It works with 19.0.1 But if (I use gradle) I do this in my build.gradle: repositories { def androidHome = System.getenv("ANDROID_HOME") mavenCentral() maven { url "$androidHome/extras/android/m2repository/" } } It finds the artifact. A: From the SDK Manager, delete and re-install the Android Support Library 19.1 package. A: I had this same problem this morning. I found the Jar file that I needed in /<MySdkFolder>/extras/android/support/ - in there are some sub folders with the different support libraries in them, so the last part of the path depends on which one that you want to use. I just copied this into the lib folder of the project. I'm sure there is a more technical solution but it worked for me. A: Right clicking on the library and selecting the import as library option from the context menu works for me. A: The following theory works for me: Android Studio has problems importing the support-v4:19.1.+ library when it comes through a transitive dependency. Solution: add support-v4 as its own dependency and exclude this lib where it comes in transitively. Then I could no longer see this import issue. A: Try to go Project Structure -> Dependencies -> Add : then select -> File dependencies then select the proper library A: This artifact is available in the Google Maven repository. So you need to add the following in build.gradle: allprojects { repositories { mavenLocal() google() } } A: I had a similar problem. Adding this line to build.gradle works --> implementation 'com.android.support:support-v4:28.0.0'
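A short sketch combining the two fixes above that have aged best: the Google Maven repository plus a pinned, correctly hyphenated artifact name. Version 28.0.0 (the final support-v4 release) is an assumption about what a current project would want:

// build.gradle - hedged sketch for newer Gradle / Android Studio setups
allprojects {
    repositories {
        google()        // hosts the com.android.support artifacts
        mavenCentral()
    }
}

dependencies {
    // note the hyphen: support-v4, not "support v4"
    implementation 'com.android.support:support-v4:28.0.0'
}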
Android Studio doesn't find com.android.support:support-v4:19.1.0
I have imported one project into Android Studio but I got the error: Could not find com.android.support:support-v4:19.1.0. Where could I find this file? I have imported the project using Gradle. I have Android Studio version 0.5.7, the latest Android SDK and Java 1.7u55.
[ "Just add this code to you build.gradle file\ndependencies {\n compile 'com.android.support:support-v4:19.+'\n}\n\nand press Tools -> Android -> Sync Project with Gradle Files\nGradle will download necessary files by himself\n", "It does not work for me either. It works with 19.0.1\nBut if (I use gradle) I do this in my build.gradle:\nrepositories {\n def androidHome = System.getenv(\"ANDROID_HOME\")\n mavenCentral()\n maven {\n url \"$androidHome/extras/android/m2repository/\"\n }\n}\n\nIt finds the artifact.\n", "From the SDK Manager, delete and re-install the Android Support Library 19.1 package. \n", "I had this same problem with morning. I found the Jar file that I needed in /<MySdkFolder>/extras/android/support/ - in there are some sub folders with the different support libraries in them, so the last part of the path depends on which one that you want to use.\nI just copied this into the lib folder of the project. I'm sure there is a more technical solution but it worked for me.\n", "Right clicking on the library and select the import as library option from the context menu works for me.\n", "Following theory works at me: \nAndroid Studio has problems importing support-v4:19.1.+ library when it comes through a transitive dependency. \nSolution Adding support-v4 as own dependency and exclude this lib where it comes transitive. then i could not more see this import issue\n", "Try to go \n\nProject Structure -> Dependencies -> Add : then select -> File\n dependecies\n\nthen select the proper library \n", "This artifact is available on google maven repository. So need to add following in the build.gradle:\nallprojects {\n repositories {\n mavenLocal()\n google()\n }\n}\n\n", "I had a similar problem. This line to build.gradle works -->\nimplementation 'com.android.support:support v4:28.0.0'\n" ]
[ 8, 2, 2, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "android", "android_studio", "java", "sdk" ]
stackoverflow_0023386545_android_android_studio_java_sdk.txt
Q: Add one element at the end of every sub list of a list So another prolog question here. As the title indicates, I tried to add one element at the end of each sublist of a list. But things really don't go well. Here's my code so far: add_char(Char, Motif, NewM):- append(Motif, [Char], NewM). add_all(Char, [], _, RE). add_all(Char, [H|T], Pred, RE):- add_char(Char, H, NewH), append(Pred, [NewH], RE), add_all(Char, T, RE, NewRE). My code just returns the head instead of the whole list as the result, like: ?- add_all(h, [(v=[1,2,3]), (i = [5,6,7]), (e = [r,e,w])], [],X). X = [v=[1, 2, 3, h]] What I expect is X = [v=[1, 2, 3, h],i = [5,6,7,h],e = [r,e,w,h]]. Can anyone help? A: Here it is, a simple recursive solution with append/3. add_all(_,[],[]). add_all(El,[(V=L)|T],[(V=L1)|T1]):- append(L,[El],L1), add_all(El,T,T1). ?- add_all(h, [(v=[1,2,3]), (i = [5,6,7]), (e = [r,e,w])], X). X = [v=[1, 2, 3, h], i=[5, 6, 7, h], e=[r, e, w, h]] false If you want to remove the false result, place a cut (!) in the body of the first rule. A: Things you are conveniently ignoring: Singleton variables: [Char,RE] Singleton variables: [NewRE] Attend to those warnings, they are there to point out problems. Anyway; those things in your list are not exactly sublists; you have a list of terms properly written as =(v,[1,2,3]), so you cannot use append on them until you take them apart ('destructure' them) and get the lists out. add_char(Char, (X=Motif), (X=NewM)) :- % destructures v=[1,2,3] append(Motif, [Char], NewM). % appends to [1,2,3] add_all(_, [], []). % stops when [H|T] is empty [] add_all(Char, [H|T], [NewH|Rs]):- % relates [H|T] and [NewH|NewHs] add_char(Char, H, NewH), % adds this one add_all(Char, T, Rs). % does the rest It is weird to use = like (v=[1,2,3]); I think v-[1,2,3] is more standard. And SWI Prolog is optimised if you put the [H|T] as the first argument, because stepping over lists is so common, so swap them around if you can.
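The recursive pattern in the second answer is exactly what maplist/3 abstracts over, so the same program can be written without explicit recursion. A hedged sketch, reusing the destructuring add_char/3 from that answer:

% Sketch: maplist/3 applies add_char(Char) pairwise over both lists.
add_char(Char, (X=Motif), (X=NewM)) :-
    append(Motif, [Char], NewM).

add_all(Char, Pairs, NewPairs) :-
    maplist(add_char(Char), Pairs, NewPairs).

% ?- add_all(h, [(v=[1,2,3]), (i=[5,6,7]), (e=[r,e,w])], X).
% X = [v=[1,2,3,h], i=[5,6,7,h], e=[r,e,w,h]].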
Add one element at the end of every sub list of a list
So another prolog question here. As the title indicates, I tried to add one element at the end of each sublist of a list. But things really don't go well. Here's my code so far: add_char(Char, Motif, NewM):- append(Motif, [Char], NewM). add_all(Char, [], _, RE). add_all(Char, [H|T], Pred, RE):- add_char(Char, H, NewH), append(Pred, [NewH], RE), add_all(Char, T, RE, NewRE). My code just returns the head instead of the whole list as the result, like: ?- add_all(h, [(v=[1,2,3]), (i = [5,6,7]), (e = [r,e,w])], [],X). X = [v=[1, 2, 3, h]] What I expect is X = [v=[1, 2, 3, h],i = [5,6,7,h],e = [r,e,w,h]]. Can anyone help?
[ "Here it is, a simple recursive solution with append/3.\nadd_all(_,[],[]).\nadd_all(El,[(V=L)|T],[(V=L1)|T1]):-\n append(L,[El],L1),\n add_all(El,T,T1).\n\n? add_all(h, [(v=[1,2,3]), (i = [5,6,7]), (e = [r,e,w])], X).\nX = [v=[1, 2, 3, h], i=[5, 6, 7, h], e=[r, e, w, h]]\nfalse\n\nIf you want to remove the false result, place a cut (!) in the body of the first rule.\n", "Things you are conveniently ignoring:\n\nSingleton variables: [Char,RE]\nSingleton variables: [NewRE]\n\nAttend to those warnings, they are there to point out problems.\nAnyway; those things in your list are not exactly sublists; you have a list of terms properly written as =(v,[1,2,3]), so you cannot use append on them until you take them apart ('destructure' them) and get the lists out.\nadd_char(Char, (X=Motif), (X=NewM)) :- % destructures v=[1,2,3]\n append(Motif, [Char], NewM). % appends to [1,2,3]\n\n\nadd_all(_, [], []). % stops when [H|T] is empty []\n\nadd_all(Char, [H|T], [NewH|Rs]):- % relates [H|T] and [NewH|NewHs]\n add_char(Char, H, NewH), % adds this one\n add_all(Char, T, Rs). % does the rest\n\nIt is weird to use = like (v=[1,2,3]) I think v-[1,2,3] is more standard. And SWI Prolog is optimised if you put the [H|T] as the first argument, because stepping over lists is so common, so swap them around if you can.\n" ]
[ 1, 0 ]
[]
[]
[ "list", "prolog" ]
stackoverflow_0074664451_list_prolog.txt
Q: Error: Cannot find module 'node-fetch'\nRequire stack:\n- /var/task IN AWS Lambda I am not getting any error locally with node-fetch; the package.json also has node-fetch as a dependency, but when I deployed to AWS Lambda the dependency was not found. import fetch from 'node-fetch'; { "errorType": "Runtime.ImportModuleError", "errorMessage": "Error: Cannot find module 'node-fetch'\nRequire stack:\n- /var/task/query1.js\n- /var/task/query1Main.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js", "trace": [ "Runtime.ImportModuleError: Error: Cannot find module 'node-fetch'", "Require stack:", "- /var/task/query1.js", "- /var/task/query1Main.js", "- /var/runtime/UserFunction.js", "- /var/runtime/index.js", " at _loadUserApp (/var/runtime/UserFunction.js:100:13)", " at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)", " at Object.<anonymous> (/var/runtime/index.js:43:30)", " at Module._compile (internal/modules/cjs/loader.js:999:30)", " at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)", " at Module.load (internal/modules/cjs/loader.js:863:32)", " at Function.Module._load (internal/modules/cjs/loader.js:708:14)", " at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)", " at internal/main/run_main_module.js:17:47" ] } A: You have to declare external packages as layers for a lambda https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html A: I had the same issue using Node version 18.x and fixed it after downgrading to 16.x This can be done in the lambda page -> scroll down to Runtime settings I hope you already resolved the issue but I'm posting this in case someone else finds it useful.
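A hedged footnote to the second answer: on the nodejs18.x runtime the fetch API is available globally, so instead of downgrading, the handler can often drop the node-fetch dependency entirely. A minimal sketch (the URL is a placeholder):

// index.mjs - sketch for the nodejs18.x Lambda runtime, where the
// global fetch API is built in and node-fetch is unnecessary.
export const handler = async () => {
  const res = await fetch('https://example.com/api'); // placeholder URL
  const data = await res.json();
  return { statusCode: 200, body: JSON.stringify(data) };
};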
Error: Cannot find module 'node-fetch'\nRequire stack:\n- /var/task IN AWS Lambda
I am not getting any error locally with node-fetch; the package.json also has node-fetch as a dependency, but when I deployed to AWS Lambda the dependency was not found. import fetch from 'node-fetch'; { "errorType": "Runtime.ImportModuleError", "errorMessage": "Error: Cannot find module 'node-fetch'\nRequire stack:\n- /var/task/query1.js\n- /var/task/query1Main.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js", "trace": [ "Runtime.ImportModuleError: Error: Cannot find module 'node-fetch'", "Require stack:", "- /var/task/query1.js", "- /var/task/query1Main.js", "- /var/runtime/UserFunction.js", "- /var/runtime/index.js", " at _loadUserApp (/var/runtime/UserFunction.js:100:13)", " at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)", " at Object.<anonymous> (/var/runtime/index.js:43:30)", " at Module._compile (internal/modules/cjs/loader.js:999:30)", " at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)", " at Module.load (internal/modules/cjs/loader.js:863:32)", " at Function.Module._load (internal/modules/cjs/loader.js:708:14)", " at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)", " at internal/main/run_main_module.js:17:47" ] }
[ "You have to declare external packages as layers for a lambda\nhttps://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html\n", "I had the same issue using Node version 18.x and fixed after downgrading to 16.x\nThis can be done in the lambda page -> scroll down to Runtime settings\nI hope you already resolved the issue but I'm posting this in case someone else finds it useful.\n" ]
[ 1, 0 ]
[]
[]
[ "aws_appsync", "aws_lambda", "node.js" ]
stackoverflow_0069522292_aws_appsync_aws_lambda_node.js.txt
Q: Output array I don't know what it is? I have this code and when I run it I get this: [I@2c7b84de. Can someone explain to me why I get that? public class EJer22 { public static void main(String[] args){ int[] numeros = { 50, 21, 6, 97, 18 }; System.out.println(numeros); } public static int[] pruebas (int[] P){ int[] S = new int[P.length]; if(P.length < 10){ System.out.println("Hay mas de 10 numeros"); } else { for(int i = 0; i < P.length; i++){ //S = P[i]; if(10 >= P[i]){ S[i] = -1; }else{ S[i] = P[i]; } } } return S; } } A: You get this because of this line: System.out.println(numeros); It's the default string representation of the int array object (its type and hash code), not its contents. A: numeros is an int array, which does not automatically serialise to a String. You are seeing the object reference value, rather than the contents. If you convert it to a List, you will see the individual elements as the toString() method of List will do this for you public static void main(String[] args){ List<Integer> numeros = Arrays.asList(50, 21, 6, 97, 18 ); System.out.println(numeros); } Or, use Arrays.toString(numeros) to avoid using a List. The pruebas method is never called but I guess that is not relevant to the question you've asked?
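A runnable sketch of the Arrays.toString fix mentioned at the end of the second answer, applied to the original main method:

import java.util.Arrays;

public class EJer22 {
    public static void main(String[] args) {
        int[] numeros = { 50, 21, 6, 97, 18 };
        // prints [50, 21, 6, 97, 18] instead of something like [I@2c7b84de
        System.out.println(Arrays.toString(numeros));
    }
}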
Output array I don't know what it is?
I have this code and when I run it I get this: [I@2c7b84de. Can someone explain to me why I get that? public class EJer22 { public static void main(String[] args){ int[] numeros = { 50, 21, 6, 97, 18 }; System.out.println(numeros); } public static int[] pruebas (int[] P){ int[] S = new int[P.length]; if(P.length < 10){ System.out.println("Hay mas de 10 numeros"); } else { for(int i = 0; i < P.length; i++){ //S = P[i]; if(10 >= P[i]){ S[i] = -1; }else{ S[i] = P[i]; } } } return S; } }
[ "You get this because of that:\nSystem.out.println(numeros);\n\nIt‘s the String representation of an integer array.\n", "numeros is an int array, which does not automatically serialise to a String. You are seeing the object reference value, rather than the contents.\nIf you convert it to a List, you will see the individual elements as the toString() method of List will do this for you\npublic static void main(String[] args){\n List<Integer> numeros = Arrays.asList(50, 21, 6, 97, 18 );\n System.out.println(numeros);\n}\n\nOr, use Arrays.toString(numeros) to avoid using a List.\nThe pruebas method is never called but I guess that is not relevant to the question you've asked?\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "java" ]
stackoverflow_0074665469_arrays_java.txt
Q: How do I get a better stack trace on Angular errors Currently, when I get an error in my production Angular (v7) app, I get a stack trace like this. But this is virtually impossible to get any meaningful information from. How can I get a better stack trace, so that I could narrow down where this elusive error is coming from? Right now, I'm living with this error in prod because it never happens on my development machine, and I have no idea how to isolate it because the stacktrace is virtually useless to me. I am used to languages like C#, where you get a very concise stacktrace that gives you a map down to the function in error. This stack trace has no meaning. TypeError: Cannot read property 'appendChild' of null at create_dom_structure (eval at (:15:1), :1:16231) at load_map (eval at (:15:1), :1:101028) at Object.load (eval at (:15:1), :1:107834) at eval (eval at (:15:1), :1:108251) at t.invokeTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:8844) at Object.onInvokeTask (https://mywebsite.com/main.25a9fda6ea42f4308b79.js:1:467756) at t.invokeTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:8765) at e.runTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:4026) at e.invokeTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:9927) at invoke (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:9818) My error handler: export class AppErrorHandler extends ErrorHandler { constructor( private _http: Http, private injector: Injector, ) { super(); } public handleError(error: any): void { if (error.status === '401') { alert('You are not logged in, please log in and come back!'); } else { const router = this.injector.get(Router); const reportOject = { status: error.status, name: error.name, message: error.message, httpErrorCode: error.httpErrorCode, stack: error.stack, url: location.href, route: router.url, }; this._http.post(`${Endpoint.APIRoot}Errors/AngularError`, reportOject) .toPromise() .catch((respError: any) => { Utils.formatError(respError, `AppErrorHandler()`); }); } super.handleError(error); } } A: Generally Javascript stack traces are more useful. Unfortunately, in a production application you generally turn on minification, which is why you get such an unreadable mess. If you can afford a larger Javascript bundle, it may be useful to turn off the minification just to get a better stack trace in production. How to do this will vary depending on which version of the Angular CLI you're using. For the v7 CLI: In the file angular.json, set the following properties { ... "projects": { "my-project": { ... "architect": { "build": { ... "configurations": { "production": { "optimization": false, "buildOptimizer": false, ... } ... } Alternative solutions in this question A: The page below will take a stack trace then download the minified js and select 100 characters before and after the error location. It will do that for all levels then display. As is, it must be accessed from the same domain with the same deploy from which the stack trace was collected. I'm sure the same concept could be easily ported to Node so as not to have any cross-domain or path limitations. It also has a button that can replace the base path with a backup path or secondary domain name.
<!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title></title> <script> var updateUrl = false; const scriptFiles = new Map() var newDivGroup = null; function clickDecode(updateUrlNew) { updateUrl = updateUrlNew; decodeNow(); } async function decodeNow() { var textOutput = document.getElementById('text-output'); textOutput.innerHTML = ''; var myString = document.getElementById('text-input').value; // console.log(myString); var myRegexp = /(https:[a-zA-Z0-9\/\.]+.js):([0-9]+):([0-9]+)/g; var matches = myString.matchAll(myRegexp); for (const match of matches) { newDivGroup = document.createElement("div"); newDivGroup.style.marginBottom = '6px'; newDivGroup.style.padding = '15px'; newDivGroup.style.backgroundColor = '#eee'; newDivGroup.style.border = '1px solid #555' textOutput.appendChild(newDivGroup); // logToOutput('-'); logToOutput(match[0]); await showErrorLocation(match[1], match[2], match[3]) } } function logToOutput(message) { const newDiv = document.createElement("div"); const newContent = document.createTextNode(message); newDiv.appendChild(newContent); newDivGroup.appendChild(newDiv); } async function showErrorLocation(scriptUrl, row, position) { if(updateUrl) { scriptUrl = scriptUrl.toLowerCase(); scriptUrl = scriptUrl.replaceAll('/dist/', '/dist-old/'); } position = parseInt(position); if(!scriptFiles.has(scriptUrl)) { const fileContents = await downloadScriptFile(scriptUrl); scriptFiles.set(scriptUrl, fileContents); } const fileContents = scriptFiles.get(scriptUrl); const fileLines = fileContents.split('\n'); const fileLine = fileLines[row - 1]; let before = fileLine.substring(position - 100, position - 1); let after = fileLine.substring(position - 1, position + 100); logToOutput(before); logToOutput(after); } function downloadScriptFile(filePath) { return new Promise((resolve, reject) => { let xhr = new XMLHttpRequest(); xhr.open('GET', filePath); xhr.responseType = 'text'; xhr.send(); xhr.onload = function() { if (xhr.status != 200) { console.error(`Error ${xhr.status}: ${xhr.statusText}`); reject(); } else { // console.log(xhr.response); return resolve(xhr.response); } }; }); } </script> <body style="padding: 30px;"> <textarea id="text-input" rows="12" cols="150"> </textarea> <br/> <button onclick="clickDecode(false)">Run (current deploy)</button> <button onclick="clickDecode(true)">Replace URL and Run (prior deploy)</button> <br/><br/> <div id="text-output"> </div> </body> </html> A: UPDATED INFO FOR NEW ANGULAR VERSIONS We have available some debugging aids for Angular: https://developer.chrome.com/blog/devtools-better-angular-debugging/ But in any case, one of the new features of Angular 15 is Better Stack Traces: With the launch of the latest version of Angular, debugging Angular applications has been simplified and is now more straightforward with stack traces. Angular’s development team strives to achieve a standard for tracing development code irrespective of displaying libraries during the entire Angular development lifecycle. The primary aim of developing such stack traces is to improve the display of error messages as they come by. For example, developers get a one-liner error message during the code discovery phase if we talk about the previous versions of Angular. And a lot more to deal with the lengthy procedure to resolve that bug. 
First, let’s see the snippet for previous error indications: ERROR Error: Uncaught (in promise): Error Error at app.component.ts:18:11 at Generator.next (<anonymous>) at asyncGeneratorStep (asyncToGenerator.js:3:1) at _next (asyncToGenerator.js:25:1) at _ZoneDelegate.invoke (zone.js:372:26) at Object.onInvoke (core.mjs:26378:33) at _ZoneDelegate.invoke (zone.js:371:52) at Zone.run (zone.js:134:43) at zone.js:1275:36 at _ZoneDelegate.invokeTask (zone.js:406:31) at resolvePromise (zone.js:1211:31) at zone.js:1118:17 at zone.js:1134:33 Developers were not able to understand the ERROR snippets due to the following reasons: Third-party dependencies were solely responsible for such error message inputs. You were not getting any information related to where such user interaction encountered this bug. Through an active and long collaboration between the Angular and Chrome DevTools teams, the Angular community was able to integrate with third-party dependencies (node_modules, zone.js, etc.) and thus achieve linked stack traces. Now, with Angular 15, you can see the improvement in the stack traces as mentioned below: ERROR Error: Uncaught (in promise): Error Error at app.component.ts:18:11 at fetch (async) at (anonymous) (app.component.ts:4) at request (app.component.ts:4) at (anonymous) (app.component.ts:17) at submit (app.component.ts:15) at AppComponent_click_3_listener (app.component.html:4) The above trace shows the error message and where it was encountered, so developers can go directly to that part of the code and fix it immediately. SOURCE: https://devtechnosys.com/insights/latest-angular-v15-features/ https://www.albiorixtech.com/blog/angular-15-best-features-new-updates/
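Bridging the first two answers, a middle-ground sketch is to keep minification but emit hidden production source maps, so a server-side tool (or a page like the one above) can symbolicate traces without shipping readable bundles. This assumes a CLI recent enough to accept the object form of sourceMap (roughly 7.2+), and the project name is a placeholder:

{
  "projects": {
    "my-project": {
      "architect": {
        "build": {
          "configurations": {
            "production": {
              "optimization": true,
              "sourceMap": { "scripts": true, "styles": false, "hidden": true }
            }
          }
        }
      }
    }
  }
}

The "hidden" flag emits .map files without referencing them from the bundles, so end users never download them.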
How do I get a better stack trace on Angular errors
Currently, when I get an error in my production Angular (v7) app, I get a stack trace like this. But this is virtually impossible to get any meaningful information from. How can I get a better stack trace, so that I could narrow down where this elusive error is coming from? Right now, I'm living with this error in prod because it never happens on my development machine, and I have no idea how to isolate it because the stacktrace is virtually useless to me. I am used to languages like C#, where you get a very concise stacktrace that gives you a map down to the function in error. This stack trace has no meaning. TypeError: Cannot read property 'appendChild' of null at create_dom_structure (eval at (:15:1), :1:16231) at load_map (eval at (:15:1), :1:101028) at Object.load (eval at (:15:1), :1:107834) at eval (eval at (:15:1), :1:108251) at t.invokeTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:8844) at Object.onInvokeTask (https://mywebsite.com/main.25a9fda6ea42f4308b79.js:1:467756) at t.invokeTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:8765) at e.runTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:4026) at e.invokeTask (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:9927) at invoke (https://mywebsite.com/polyfills.d8680adf69e7ebd1de57.js:1:9818) My error handler: export class AppErrorHandler extends ErrorHandler { constructor( private _http: Http, private injector: Injector, ) { super(); } public handleError(error: any): void { if (error.status === '401') { alert('You are not logged in, please log in and come back!'); } else { const router = this.injector.get(Router); const reportOject = { status: error.status, name: error.name, message: error.message, httpErrorCode: error.httpErrorCode, stack: error.stack, url: location.href, route: router.url, }; this._http.post(`${Endpoint.APIRoot}Errors/AngularError`, reportOject) .toPromise() .catch((respError: any) => { Utils.formatError(respError, `AppErrorHandler()`); }); } super.handleError(error); } }
[ "Generally Javascript stack traces are more useful. Unfortunately, in a production application you generally turn on minification, which is why you get such an unreadable mess. \nIf you can afford a larger Javascript bundle, it may be useful to turn off the minification just to get a better stack trace in production.\nHow to do this will vary depending on which version of the Angular CLI you're using. For the v7 CLI:\nIn the file angular.json, set the following properties\n{\n ...\n \"projects\": {\n \"my-project\": {\n ...\n \"architect\": {\n \"build\": {\n ...\n \"configurations\": {\n \"production\": {\n \"optimization\": false,\n \"buildOptimizer\": false,\n ...\n }\n ...\n}\n\nAlternative solutions in this question \n", "The page below will take a stack trace then download the minified js and select 100 characters before and after the error location. It will do that for all levels then display.\nAs is it must be accessed from the same domain with same deploy which stack trace was collected. I'm sure the same concept could be easily ported to node as to not have any cross domain or path limitations.\nIt also has a button that can replace base path with a backup path or secondary domain name.\n<!doctype html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\">\n <title></title>\n\n <script>\n var updateUrl = false;\n\n const scriptFiles = new Map()\n var newDivGroup = null;\n\n function clickDecode(updateUrlNew) { \n updateUrl = updateUrlNew;\n decodeNow();\n }\n\n async function decodeNow() {\n var textOutput = document.getElementById('text-output');\n textOutput.innerHTML = '';\n\n var myString = document.getElementById('text-input').value;\n // console.log(myString);\n var myRegexp = /(https:[a-zA-Z0-9\\/\\.]+.js):([0-9]+):([0-9]+)/g;\n var matches = myString.matchAll(myRegexp);\n for (const match of matches) {\n newDivGroup = document.createElement(\"div\");\n newDivGroup.style.marginBottom = '6px';\n newDivGroup.style.padding = '15px';\n newDivGroup.style.backgroundColor = '#eee';\n newDivGroup.style.border = '1px solid #555'\n textOutput.appendChild(newDivGroup);\n\n // logToOutput('-');\n logToOutput(match[0]);\n await showErrorLocation(match[1], match[2], match[3])\n }\n }\n\n function logToOutput(message) {\n const newDiv = document.createElement(\"div\");\n const newContent = document.createTextNode(message);\n newDiv.appendChild(newContent);\n \n newDivGroup.appendChild(newDiv);\n }\n\n async function showErrorLocation(scriptUrl, row, position) {\n if(updateUrl) {\n scriptUrl = scriptUrl.toLowerCase(); \n scriptUrl = scriptUrl.replaceAll('/dist/', '/dist-old/'); \n }\n position = parseInt(position);\n if(!scriptFiles.has(scriptUrl)) {\n const fileContents = await downloadScriptFile(scriptUrl);\n scriptFiles.set(scriptUrl, fileContents);\n }\n const fileContents = scriptFiles.get(scriptUrl);\n const fileLines = fileContents.split('\\n');\n const fileLine = fileLines[row - 1];\n\n let before = fileLine.substring(position - 100, position - 1);\n let after = fileLine.substring(position - 1, position + 100);\n logToOutput(before);\n logToOutput(after);\n }\n\n function downloadScriptFile(filePath) {\n return new Promise((resolve, reject) => {\n let xhr = new XMLHttpRequest();\n xhr.open('GET', filePath);\n xhr.responseType = 'text';\n xhr.send();\n xhr.onload = function() {\n if (xhr.status != 200) {\n console.error(`Error ${xhr.status}: ${xhr.statusText}`);\n reject();\n } else {\n // console.log(xhr.response);\n return resolve(xhr.response);\n }\n };\n });\n }\n 
</script>\n<body style=\"padding: 30px;\">\n\n <textarea id=\"text-input\" rows=\"12\" cols=\"150\">\n\n </textarea>\n <br/>\n <button onclick=\"clickDecode(false)\">Run (current deploy)</button>\n <button onclick=\"clickDecode(true)\">Replace URL and Run (prior deploy)</button>\n <br/><br/>\n <div id=\"text-output\">\n\n </div>\n</body>\n</html>\n\n", "UPDATED INFO FOR NEW ANGULAR VERSIONS\nWe have available some debugging aids for Angular:\nhttps://developer.chrome.com/blog/devtools-better-angular-debugging/\nBut in any case, one of the new features of Angular 15 is\nBetter Stack Traces:\nWith the launch of the latest version of Angular, debugging Angular applications has been simplified and is now more straightforward with stack traces.\nAngular’s development team strives to achieve a standard for tracing development code irrespective of displaying libraries during the entire Angular development lifecycle.\nThe primary aim of developing such stack traces is to improve the display of error messages as they come by. For example, developers get a one-liner error message during the code discovery phase if we talk about the previous versions of Angular. And a lot more to deal with the lengthy procedure to resolve that bug.\nFirst, let’s see the snippet for previous error indications:\nERROR Error: Uncaught (in promise): Error\nError\n at app.component.ts:18:11\n at Generator.next (<anonymous>)\n at asyncGeneratorStep (asyncToGenerator.js:3:1)\n at _next (asyncToGenerator.js:25:1)\n at _ZoneDelegate.invoke (zone.js:372:26)\n at Object.onInvoke (core.mjs:26378:33)\n at _ZoneDelegate.invoke (zone.js:371:52)\n at Zone.run (zone.js:134:43)\n at zone.js:1275:36\n at _ZoneDelegate.invokeTask (zone.js:406:31)\n at resolvePromise (zone.js:1211:31)\n at zone.js:1118:17\n at zone.js:1134:33\n\nDevelopers were not able to understand the ERROR snippets due the following reasons:\nThird-party dependencies were solely responsible for such error message inputs.\nYou were not getting any information related to where such user interaction encountered this bug.\nWith an active and long collaboration with the Angular and Chrome DevTool team, it was pretty useful for the Angular community to perform integration with third-party dependencies (with the help of node_modules, zone.js, etc.); and thus, could achieve linked stack traces.\n\nNow, with Angular 15, you can see the improvement in the stack traces as mentioned below:\nERROR Error: Uncaught (in promise): Error\nError\n at app.component.ts:18:11\n at fetch (async) \n at (anonymous) (app.component.ts:4)\n at request (app.component.ts:4)\n at (anonymous) (app.component.ts:17)\n at submit (app.component.ts:15)\n at AppComponent_click_3_listener (app.component.html:4)\n\nThe above code shows that error message information from where it got encountered,, so developers can directly go to that code part and fix it immediately.\nSOURCE:\nhttps://devtechnosys.com/insights/latest-angular-v15-features/\nhttps://www.albiorixtech.com/blog/angular-15-best-features-new-updates/\n" ]
[ 4, 0, 0 ]
[]
[]
[ "angular", "error_handling", "stack_trace" ]
stackoverflow_0055171139_angular_error_handling_stack_trace.txt
Q: Disable taphold default event, cross device I'm struggling to disable the default taphold browser event. Nothing that I have found on Google provided any help. I have only an Android 4.4.4 mobile and Chrome dev tools for testing. I tried CSS fixes, such as webkit-touch-callout and others, but apparently they don't work for Android, also they don't work in Chrome dev tools. I also tried detecting right click, (e.button==2), it doesn't work. I came up with a solution, but it solves one problem and creates another. I just want to have a custom action for the 'long press' event for selected anchors and I don't want the default pop up to appear (open in a new tab, copy link address, etc.) This is what I did: var timer; var tap; $("body").on("touchstart", my_selector, function(e) { e.preventDefault(); timer = setTimeout(function() { alert('taphold!'); tap=false; },500); }); $("body").on("touchend", my_selector, function() { if(tap) alert('tap'); else tap=true; clearTimeout(timer); }); It successfully disables the default taphold event and the context menu doesn't appear. However it also disables useful events, such as swipe. The links are in a vertical menu and the menu is higher than the screen, so a user has to scroll it. If he tries to scroll, starting on an anchor, it won't scroll, it will alert 'tap!' Any ideas how I could disable the taphold default, or how I could fix this code so it disables only tap events and leaves default swipe events enabled? Edit: Now I thought about setting a timeout: if the pointer is in the same place for, let's say, 100 ms, then prevent the default action. However e.preventDefault(); doesn't work inside a setTimeout callback. So now I'm just asking about the simplest example. Can I prevent default actions after a certain amount of time has passed (while the touch is still there)? And this is my whole problem in a fiddle. http://jsfiddle.net/56Szw/593/ This is not my code, I got this from http://www.gianlucaguarini.com/blog/detecting-the-tap-event-on-a-mobile-touch-device-using-javascript/ Notice that while swiping the box up and down, scrolling doesn't work. A: I got the solution. It was so simple! I had no idea there's an oncontextmenu event. This solves everything: $("body").on("contextmenu", my_selector, function() { return false; }); A: For an <img> I had to use event.preventDefault() instead of return false. document.querySelector('img').addEventListener('contextmenu', (event) => { event.preventDefault(); });
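A small framework-free sketch of the same contextmenu fix, scoped to the anchors only (the selector is an assumption), so scrolling and swiping elsewhere remain native:

// Suppress only the long-press context menu on the menu links;
// touch scrolling and swiping are left untouched.
document.querySelectorAll('a.menu-link').forEach(function (el) {
  el.addEventListener('contextmenu', function (event) {
    event.preventDefault(); // blocks the "open in new tab / copy link" popup
    // a custom long-press action could go here
  });
});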
Disable taphold default event, cross device
I'm struggling to disable the default taphold browser event. Nothing that I have found on Google provided any help. I have only an Android 4.4.4 mobile and Chrome dev tools for testing. I tried CSS fixes, such as webkit-touch-callout and others, but apparently they don't work for Android, also they don't work in Chrome dev tools. I also tried detecting right click, (e.button==2), it doesn't work. I came up with a solution, but it solves one problem and creates another. I just want to have a custom action for the 'long press' event for selected anchors and I don't want the default pop up to appear (open in a new tab, copy link address, etc.) This is what I did: var timer; var tap; $("body").on("touchstart", my_selector, function(e) { e.preventDefault(); timer = setTimeout(function() { alert('taphold!'); tap=false; },500); }); $("body").on("touchend", my_selector, function() { if(tap) alert('tap'); else tap=true; clearTimeout(timer); }); It successfully disables the default taphold event and the context menu doesn't appear. However it also disables useful events, such as swipe. The links are in a vertical menu and the menu is higher than the screen, so a user has to scroll it. If he tries to scroll, starting on an anchor, it won't scroll, it will alert 'tap!' Any ideas how I could disable the taphold default, or how I could fix this code so it disables only tap events and leaves default swipe events enabled? Edit: Now I thought about setting a timeout: if the pointer is in the same place for, let's say, 100 ms, then prevent the default action. However e.preventDefault(); doesn't work inside a setTimeout callback. So now I'm just asking about the simplest example. Can I prevent default actions after a certain amount of time has passed (while the touch is still there)? And this is my whole problem in a fiddle. http://jsfiddle.net/56Szw/593/ This is not my code, I got this from http://www.gianlucaguarini.com/blog/detecting-the-tap-event-on-a-mobile-touch-device-using-javascript/ Notice that while swiping the box up and down, scrolling doesn't work.
[ "I got the solution. It was so simple! I had no idea there's an oncontextmenu event. This solves everything:\n$(\"body\").on(\"contextmenu\", my_selector, function() { return false; });\n\n", "For an <img> I had to use event.preventDefault() instead of return false.\ndocument.querySelector('img').addEventListener('contextmenu', (event) => {\n event.preventDefault();\n}\n\n" ]
[ 6, 0 ]
[]
[]
[ "android", "javascript", "jquery", "mobile", "touch" ]
stackoverflow_0030775237_android_javascript_jquery_mobile_touch.txt
Q: How to append 'explode'd columns to a dataframe keeping all existing columns? I'm trying to add exploded columns to a dataframe: from pyspark.sql.functions import * from pyspark.sql.types import * # Convenience function for turning JSON strings into DataFrames. def jsonToDataFrame(json, schema=None): # SparkSessions are available with Spark 2.0+ reader = spark.read if schema: reader.schema(schema) return reader.json(sc.parallelize([json])) schema = StructType().add("a", MapType(StringType(), IntegerType())) events = jsonToDataFrame(""" { "a": { "b": 1, "c": 2 } } """, schema) display( events.withColumn("a", explode("a").alias("x", "y")) ) However, I'm hitting the following error: AnalysisException: The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF expected 2 aliases but got a Any ideas? A: This error message is telling you that the explode function is expecting two aliases in the AS clause, but it only found one. The AS clause is used to specify the names of the columns that are produced by the explode function. To fix this error, you need to provide two aliases in the AS clause, like this: events.withColumn("a", explode("a").alias("x", "y")) Alternatively, you can use the col function to specify the names of the columns that are produced by the explode function: events.withColumn("a", explode("a").alias(col("x"), col("y"))) Both of these approaches will fix the error that you're seeing. A: To add the exploded columns to a DataFrame while keeping all the existing columns, you can use the select method to specify which columns to include in the output DataFrame. Here's an example of how you can do this: from pyspark.sql.functions import * from pyspark.sql.types import * # Convenience function for turning JSON strings into DataFrames. def jsonToDataFrame(json, schema=None): # SparkSessions are available with Spark 2.0+ reader = spark.read if schema: reader.schema(schema) return reader.json(sc.parallelize([json])) schema = StructType().add("a", MapType(StringType(), IntegerType())) events = jsonToDataFrame(""" { "a": { "b": 1, "c": 2 } } """, schema) display( events .withColumn("a", explode("a").alias("x", "y")) .select("*", "x", "y") ) This code will explode the a column and add the resulting columns (x and y) to the DataFrame, while keeping all the existing columns. A: In the end, I used the following: display( events.select(explode("a").alias("x", "y"), *[c for c in events.columns]) ) This approach uses select to specify the columns to return. The first argument explodes the data: explode("a").alias("x", "y") The second argument specifies all existing columns should be included in the select: *[c for c in events.columns] Note that I'm prefixing the list with * - this sends each column name as a separate parameter. Simpler Method The API docs specify: Parameters: cols - str, Column, or list: column names (string) or expressions (Column). If one of the column names is ‘*’, that column is expanded to include all columns in the current DataFrame. We can simplify the first approach by passing in "*" to select all the columns: display( events.select("*", explode("a").alias("x", "y")) )
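A self-contained sketch of the accepted select("*", explode(...)) pattern, building the map column directly rather than through the JSON helper, so the behavior is easy to verify in isolation:

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode

spark = SparkSession.builder.getOrCreate()

# one row whose "a" column is a map, mirroring the question's JSON
events = spark.createDataFrame([({"b": 1, "c": 2},)], ["a"])

# keep every existing column and append the exploded key/value pair
result = events.select("*", explode("a").alias("x", "y"))
result.show()
# yields two rows: (a, "b", 1) and (a, "c", 2)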
How to append 'explode'd columns to a dataframe keeping all existing columns?
I'm trying to add exploded columns to a dataframe: from pyspark.sql.functions import * from pyspark.sql.types import * # Convenience function for turning JSON strings into DataFrames. def jsonToDataFrame(json, schema=None): # SparkSessions are available with Spark 2.0+ reader = spark.read if schema: reader.schema(schema) return reader.json(sc.parallelize([json])) schema = StructType().add("a", MapType(StringType(), IntegerType())) events = jsonToDataFrame(""" { "a": { "b": 1, "c": 2 } } """, schema) display( events.withColumn("a", explode("a").alias("x", "y")) ) However, I'm hitting the following error: AnalysisException: The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF expected 2 aliases but got a Any ideas?
[ "This error message is telling you that the explode function is expecting two aliases in the AS clause, but it only found one. The AS clause is used to specify the names of the columns that are produced by the explode function.\nTo fix this error, you need to provide two aliases in the AS clause, like this:\nevents.withColumn(\"a\", explode(\"a\").alias(\"x\", \"y\"))\n\nAlternatively, you can use the col function to specify the names of the columns that are produced by the explode function:\nevents.withColumn(\"a\", explode(\"a\").alias(col(\"x\"), col(\"y\")))\n\nBoth of these approaches will fix the error that you're seeing.\n", "To add the exploded columns to a DataFrame while keeping all the existing columns, you can use the select method to specify which columns to include in the output DataFrame.\nHere's an example of how you can do this:\nfrom pyspark.sql.functions import *\nfrom pyspark.sql.types import *\n\n# Convenience function for turning JSON strings into DataFrames.\ndef jsonToDataFrame(json, schema=None):\n # SparkSessions are available with Spark 2.0+\n reader = spark.read\n if schema:\n reader.schema(schema)\n return reader.json(sc.parallelize([json]))\n\nschema = StructType().add(\"a\", MapType(StringType(), IntegerType()))\n\nevents = jsonToDataFrame(\"\"\"\n{\n \"a\": {\n \"b\": 1,\n \"c\": 2\n }\n}\n\"\"\", schema)\n\ndisplay(\n events\n .withColumn(\"a\", explode(\"a\").alias(\"x\", \"y\"))\n .select(\"*\", \"x\", \"y\")\n)\n\nThis code will explode the a column and add the resulting columns (x and y) to the DataFrame, while keeping all the existing columns.\n", "In the end, I used the following:\ndisplay(\n events.select(explode(\"a\").alias(\"x\", \"y\"), *[c for c in events.columns])\n)\n\nThis approach uses select to specify the columns to return.\nThe first argument explodes the data:\nexplode(\"a\").alias(\"x\", \"y\")\n\nThe second argument specifies all existing columns should be included in the select:\\\n*[c for c in events.columns]\n\nNote that I'm prefixing the list with * - this sends each column name as a separate parameter.\n\nSimpler Method\nThe API docs specify:\nParameters\ncolsstr, Column, or list\ncolumn names (string) or expressions (Column). If one of the column names is ‘*’, that column is expanded to include all columns in the current DataFrame.\n\nWe can simplify the first approach by passing in \"*\" to select all the columns:\ndisplay(\n events.select(\"*\", explode(\"a\").alias(\"x\", \"y\"))\n)\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "pyspark" ]
stackoverflow_0074659811_pyspark.txt
Q: how to implement relations on firebase firestore So I have to save a history of actions that happen on my website. What I'm currently doing is saving the userData to the action, since I don't think Firestore has easy relations. The problem comes when I change the userData on the user; then the history is displaying the old userData instead of the new one. So the solution that I'm thinking of is looping through all his actions and changing all the userData there. Or is there something else I can do? Is it possible to create a Firebase Cloud Function that would search for the 10 most recent activities from any user and append the related user data to them? Or would that take too long? How do I handle relations in Firestore overall? If I do it all on the frontend I would have to make 10 user data requests to display them. Firebase relations generate too many fetches if used on the frontend; how do I solve that? A: It sounds like you are storing the user's data along with each action in your Firestore database. This is a common approach, as Firestore does not support relationships between documents in the same way that a relational database does. However, it can lead to problems when the user's data changes, as you have observed. One solution to this problem is to update the user's data in all of their actions whenever the data changes. This can be done using a Firestore query to find all of the actions for a given user, and then updating each action with the new user data. This will ensure that the user's data is up to date in all of their actions. Another approach you can try is to use a Cloud Function to automatically update the user's data in their actions whenever the data changes. This function could be triggered by a change to the user's data, and could use a similar query to the one mentioned above to find and update the user's actions. This approach would be more efficient than updating the data on the frontend, as it would only require a single request to the Cloud Function rather than multiple requests to update each action individually. Overall, handling relationships in Firestore can be challenging, as it does not support relationships in the same way that a relational database does. However, by using techniques like the ones mentioned above, you can work around these limitations and ensure that your data is consistent and up to date. Hope this helps!
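The Cloud Function approach from the answer can be sketched as a Firestore-triggered function. Everything here is an assumption about the schema (users and actions collections, a userId field, an embedded userData map), since the question does not specify one:

// functions/index.js - hypothetical sketch (firebase-functions v1 API):
// when a user document changes, fan the new data out to that user's actions.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.syncUserData = functions.firestore
  .document('users/{userId}')
  .onUpdate(async (change, context) => {
    const newData = change.after.data();
    const snapshot = await admin.firestore()
      .collection('actions')
      .where('userId', '==', context.params.userId)
      .get();
    const batch = admin.firestore().batch();
    snapshot.forEach((doc) => batch.update(doc.ref, { userData: newData }));
    return batch.commit(); // one batched write instead of N client requests
  });

Note that a single batch is limited to 500 writes, so a very active user's history would need to be updated in chunks.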
how to implement relations on firebase firestore
So I have to save a history of actions that happen on my website. What I'm currently doing is saving the userData to the action, since I don't think Firestore has easy relations. The problem comes when I change the userData on the user; then the history is displaying the old userData instead of the new one. So the solution that I'm thinking of is looping through all his actions and changing all the userData there. Or is there something else I can do? Is it possible to create a Firebase Cloud Function that would search for the 10 most recent activities from any user and append the related user data to them? Or would that take too long? How do I handle relations in Firestore overall? If I do it all on the frontend I would have to make 10 user data requests to display them. Firebase relations generate too many fetches if used on the frontend; how do I solve that?
[ "It sounds like you are storing the user's data along with each action in your Firestore database. This is a common approach, as Firestore does not support relationships between documents in the same way that a relational database does. However, it can lead to problems when the user's data changes, as you have observed.\nOne solution to this problem is to update the user's data in all of their actions whenever the data changes. This can be done using a Firestore query to find all of the actions for a given user, and then updating each action with the new user data. This will ensure that the user's data is up to date in all of their actions.\nAnother approach you can try is to use a Cloud Function to automatically update the user's data in their actions whenever the data changes. This function could be triggered by a change to the user's data, and could use a similar query to the one mentioned above to find and update the user's actions. This approach would be more efficient than updating the data on the frontend, as it would only require a single request to the Cloud Function rather than multiple requests to update each action individually.\nOverall, handling relationships in Firestore can be challenging, as it does not support relationships in the same way that a relational database does. However, by using techniques like the ones mentioned above, you can work around these limitations and ensure that your data is consistent and up to date.\nHope this helps!\n" ]
[ 0 ]
[]
[]
[ "firebase", "google_cloud_firestore" ]
stackoverflow_0074665513_firebase_google_cloud_firestore.txt
Q: Alert on only the 1st candle meeting the condition I have added an alert so that when the 3 MAs turn green or red I receive an alert. But how can I only receive the alert once (when the 3 MAs are either all green or all red) and not on every candle meeting the condition? And receive an alert when one of the 3 MAs changes color. Thank you for your help. //@version=5 indicator(title='Moving Average Colored 3SMA/WMA', shorttitle='NGU-Shujie-Colored3SMA-WMA', overlay=true) smaplot1 = input(true, title='Show SMA7 on chart') len1 = input.int(7, minval=1, title='SMA Length') src1 = close out1 = ta.sma(src1, len1) up1 = out1 > out1[1] down1 = out1 < out1[1] mycolor1 = up1 ? color.green : down1 ? color.red : color.blue plot(out1 and smaplot1 ? out1 : na, title='SMA7', color=mycolor1, linewidth=2) smaplot2 = input(false, title='Show SMA100 on chart') len2 = input.int(100, minval=1, title='SMA Length') src2 = close out2 = ta.sma(src2, len2) up2 = out2 > out2[1] down2 = out2 < out2[1] mycolor2 = up2 ? color.rgb(251, 242, 110) : down2 ? color.rgb(251, 242, 110) : color.rgb(251, 242, 110) plot(out2 and smaplot2 ? out2 : na, title='SMA100', color=mycolor2, linewidth=4) wmaplot1 = input(false, title='Show WMA34 on chart') len3 = input.int(34, minval=1, title='WMA Length') src3 = close out3 = ta.wma(src3, len3) up3 = out3 > out3[1] down3 = out3 < out3[1] mycolor3 = up3 ? color.green : down3 ? color.red : color.blue plot(out3 and wmaplot1 ? out3 : na, title='WMA34', color=mycolor3, linewidth=3) wmaplot2 = input(false, title='Show WMA on chart') len4 = input.int(20, minval=1, title='WMA Length') src4 = close out4 = ta.wma(src4, len4) up4 = out4 > out4[1] down4 = out4 < out4[1] mycolor4 = up4 ? color.green : down4 ? color.red : color.blue plot(out4 and wmaplot2 ? out4 : na, title='WMA', color=mycolor4, linewidth=2) alertcondition(up1 and up3 and up4, title="Buy", message="green buy") alertcondition(down1 and down3 and down4, title="Sell", message="red sell") With my script I receive an alert on every candle. I want only one alert until an MA changes color.
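A sketch extending the edge-detection idea in the answer to also cover the second request (an alert when one of the 3 MAs changes color, i.e. when the agreement breaks); variable names are carried over from the script above, and pairing these with the 'Once Per Bar Close' alert frequency in TradingView avoids intrabar re-firing:

//@version=5 fragment - assumes up1/up3/up4 and down1/down3/down4 as defined above
buyCond = up1 and up3 and up4
sellCond = down1 and down3 and down4
// fire only on the first bar where the agreement starts
alertcondition(buyCond and not buyCond[1], title="Buy", message="green buy")
alertcondition(sellCond and not sellCond[1], title="Sell", message="red sell")
// fire once when an MA breaks the agreement (a color change)
alertcondition(not buyCond and buyCond[1], title="Buy ended", message="green ended")
alertcondition(not sellCond and sellCond[1], title="Sell ended", message="red ended")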
Alert on only the 1st candle meeting the condition
I have added an alert so that when the 3 MAs turn green or red I receive an alert. But how can I only receive the alert once (when the 3 MAs are either all green or all red) and not on every candle meeting the condition? And receive an alert when one of the 3 MAs changes color. Thank you for your help. //@version=5 indicator(title='Moving Average Colored 3SMA/WMA', shorttitle='NGU-Shujie-Colored3SMA-WMA', overlay=true) smaplot1 = input(true, title='Show SMA7 on chart') len1 = input.int(7, minval=1, title='SMA Length') src1 = close out1 = ta.sma(src1, len1) up1 = out1 > out1[1] down1 = out1 < out1[1] mycolor1 = up1 ? color.green : down1 ? color.red : color.blue plot(out1 and smaplot1 ? out1 : na, title='SMA7', color=mycolor1, linewidth=2) smaplot2 = input(false, title='Show SMA100 on chart') len2 = input.int(100, minval=1, title='SMA Length') src2 = close out2 = ta.sma(src2, len2) up2 = out2 > out2[1] down2 = out2 < out2[1] mycolor2 = up2 ? color.rgb(251, 242, 110) : down2 ? color.rgb(251, 242, 110) : color.rgb(251, 242, 110) plot(out2 and smaplot2 ? out2 : na, title='SMA100', color=mycolor2, linewidth=4) wmaplot1 = input(false, title='Show WMA34 on chart') len3 = input.int(34, minval=1, title='WMA Length') src3 = close out3 = ta.wma(src3, len3) up3 = out3 > out3[1] down3 = out3 < out3[1] mycolor3 = up3 ? color.green : down3 ? color.red : color.blue plot(out3 and wmaplot1 ? out3 : na, title='WMA34', color=mycolor3, linewidth=3) wmaplot2 = input(false, title='Show WMA on chart') len4 = input.int(20, minval=1, title='WMA Length') src4 = close out4 = ta.wma(src4, len4) up4 = out4 > out4[1] down4 = out4 < out4[1] mycolor4 = up4 ? color.green : down4 ? color.red : color.blue plot(out4 and wmaplot2 ? out4 : na, title='WMA', color=mycolor4, linewidth=2) alertcondition(up1 and up3 and up4, title="Buy", message="green buy") alertcondition(down1 and down3 and down4, title="Sell", message="red sell") With my script I receive an alert on every candle. I want only one alert until an MA changes color.
[ "There are various ways to achieve this, but in my opinion the easiest one is checking if you are currently \"all green\" and not \"all green\" on previous bar. That way you won't get an alert if you are \"all green\" on current bar and not \"all green\" on previous bar:\nbuyCond = up1 and up3 and up4\nsellCond = down1 and down3 and down4\nalertcondition(buyCond and not buyCond[1], title=\"Buy\", message=\"green buy\")\nalertcondition(sellCond and not sellCond[1], title=\"Sell\", message=\"red sell\")\n\n" ]
[ 0 ]
[]
[]
[ "pine_script" ]
stackoverflow_0074664983_pine_script.txt
Q: Django QuerySet: additional field for counting value's occurrence I have a QuerySet object with 100 items, for each of them I need to know how many times a particular contract_number occurs in the contract_number field. Example of expected output: [{'contract_number': 123, 'contract_count': 2}, {'contract_number': 456, 'contract_count': 1} ...] This means that value 123 occurs 2 times for the whole contract_number field. Important thing: I cannot reduce the amount of items, so grouping won't work here. The SQL equivalent for this would be an additional field contract_count as below: SELECT *, (SELECT count(contract_number) FROM table where t.contract_number = contract_number) as contract_count FROM table as t The question is how to do it with a Python object. After some research, I have found out that for more complex queries the Queryset extra method should be used. Below is one of my tries, but the result is not what I have expected queryset = Tracker.objects.extra( select={ 'contract_count': ''' SELECT COUNT(*) FROM table WHERE contract_number = %s ''' },select_params=(F('contract_number'),),) My models.py: class Tracker(models.Model): contract_number = models.IntegerField() EDIT: The solution to my problem was Subquery() A: You can use annotation like this: from django.db.models import Count Tracker.objects.values('contract_number').annotate(contract_count=Count('contract_number')).order_by() A: Solution: from django.db.models import Count, OuterRef, Subquery counttracker=Tracker.objects.values('contract_number').annotate(Count('contract_number')) subquery=counttracker.filter(contract_number=OuterRef('contract_number')).values('contract_number__count')[:1] tracker=Tracker.objects.annotate(count=Subquery(subquery))
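On Django 2.0 or newer, a window function is another way to get a per-row count without collapsing the queryset (a sketch of mine, not taken from the answers; it is equivalent in spirit to the Subquery approach the asker settled on):
from django.db.models import Count, F, Window

# Every Tracker row is kept; contract_count is computed per contract_number partition
queryset = Tracker.objects.annotate(
    contract_count=Window(
        expression=Count('contract_number'),
        partition_by=[F('contract_number')],
    )
)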
Django QuerySet: additional field for counting value's occurrence
I have a QuerySet object with 100 items, for each of them I need to know how many times a particular contract_number occurs in the contract_number field. Example of expected output: [{'contract_number': 123, 'contract_count': 2}, {'contract_number': 456, 'contract_count': 1} ...] This means that value 123 occurs 2 times for the whole contract_number field. Important thing: I cannot reduce the amount of items, so grouping won't work here. The SQL equivalent for this would be an additional field contract_count as below: SELECT *, (SELECT count(contract_number) FROM table where t.contract_number = contract_number) as contract_count FROM table as t The question is how to do it with a Python object. After some research, I have found out that for more complex queries the Queryset extra method should be used. Below is one of my tries, but the result is not what I have expected queryset = Tracker.objects.extra( select={ 'contract_count': ''' SELECT COUNT(*) FROM table WHERE contract_number = %s ''' },select_params=(F('contract_number'),),) My models.py: class Tracker(models.Model): contract_number = models.IntegerField() EDIT: The solution to my problem was Subquery()
[ "You can use annotation like this:\nfrom django.db.models import Count\nTracker.objects.values('contract_number').annotate(contract_count=Count('contract_number')).order_by()\n\n", "Solutions:\ncounttraker=Traker.objects.values('contract_number').annotate(Count('contract_number'))\nsubquery=counttraker.filter(contract_number=OuterRef('contract_number').values('contract_number__count')[:1]\ntraker=Traker.objects.annotate(count=Subquery(subquery))\n\n" ]
[ 5, 0 ]
[]
[]
[ "django", "django_queryset", "extra", "python", "sql" ]
stackoverflow_0051150898_django_django_queryset_extra_python_sql.txt
Q: Accept a number and print its alternate digits, int Rearrange(int a) { long int b,j,i=0,num=0,count=0,arr[100]; while(a>0) { b=a%10;a=a/10; arr[i]=b; i++; count ++; } j=count; for(i=0;i<=count/2;i++) { t=arr[i]; arr[i]=arr[count-i-1]; arr[count-i-1]=t; count--; } for(i=0;i<j;i+=2) { num=num*10 + arr[i]%10; } return num; } I want to write a function in c rearrange which prints the alternate digits of a number it is given. for example: input:- 12345 output:- 135 Thank you A: Why complicating a simple problem? If you don't mind an alternative approach, please check the below code. #include <stdio.h> #include <stdlib.h> #include <string.h> int main() { int input = 0; int len = 0; int i = 0; char sinput[64] = {0, }; printf("Enter the number :"); scanf("%d", &input); sprintf(sinput, "%d", input); len = strlen(sinput); printf("Output : "); for (i = 0; i < len; i+=2) { printf("%c\t", sinput[i]); } printf("\n"); return 0; } Sample i/o: [sourav@braodsword temp]$ ./a.out Enter the number :123456 Output : 1 3 5 [sourav@braodsword temp]$ A: Your for loops are faulty. Change those to (and initialise j as count-1, not count, before the first loop): for(i=0;i<count/2;i++) { int t=arr[i]; arr[i]=arr[j]; /* Use j */ arr[j]=t; /* Use j */ /* count--; Don't decrement */ j--; } for(i=0;i<count;i+=2) /* Should be count */ { num=num*10 + arr[i]%10; } Demo There can be many alternate ways to solve, but I just want to show you how the approach in thought process can be implemented correctly. A: In your code problem is with the first for loop. Please check the below code. int Rearrange(int a) { long int b = 0, j = 0, i = 0, num = 0, count = 0, arr[100]; while (a > 0) { b = a % 10; a = a/10; arr[i] = b; i++; count++; } j = count; for (i = 0; i < count/2; i++) // Condition is problematic { long int t = arr[i]; arr[i] = arr[count-i-1]; arr[count - i - 1] = t; // count--; // this is problematic. } for (i = 0; i < j; i += 2) { num = num * 10 + arr[i] % 10; } return num; } A: #include<stdio.h> int main() { int n,arr[40]; scanf("%d",&n); printf("%d",n); int s=0,i=0; while(n!=0) { arr[i]=n%10; printf("%d",arr[i]); n=n/10; i++; } for(int j=i-1;j>=0;j-=2) { s =s*10+arr[j]; } printf("\n%d",s); return 0; } A: int alternatedigits(int n) { int a[10],i=0,count=0,sum=0; while(n!=0) { a[i]=n%10; i++; count++; n=n/10; } for(int i=count-1;i>=0;i--) { if((count-1-i)%2==0) { sum=sum*10+a[i]; } } return sum; }
Accept a number and print its alternate digits,
int Rearrange(int a) { long int b,j,i=0,num=0,count=0,arr[100]; while(a>0) { b=a%10;a=a/10; arr[i]=b; i++; count ++; } j=count; for(i=0;i<=count/2;i++) { t=arr[i]; arr[i]=arr[count-i-1]; arr[count-i-1]=t; count--; } for(i=0;i<j;i+=2) { num=num*10 + arr[i]%10; } return num; } I want to write a function in c rearrange which prints the alternate digits of a number it is given. for example: input:- 12345 output:- 135 Thank you
[ "Why complicating a simple problem?\nIf you don't mind an alternative approach, please check the below code.\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nint main()\n{\n int input = 0;\n int len = 0;\n int i = 0;\n char sinput[64] = {0, };\n\n printf(\"Enter the number :\");\n scanf(\"%d\", &input);\n sprintf(sinput, \"%d\", input);\n len = strlen(sinput);\n\n printf(\"Output : \");\n for (i = 0; i < len; i+=2)\n {\n printf(\"%c\\t\", sinput[i]);\n }\n printf(\"\\n\");\n return 0;\n}\n\nSample i/o:\n[sourav@braodsword temp]$ ./a.out \nEnter the number :123456\nOutput : 1 3 5 \n[sourav@braodsword temp]$\n\n", "Your for loops are faulty. Change those to:\nfor(i=0;i<=count/2;i++)\n{\n int t=arr[i]; \n arr[i]=arr[j]; /* Use j */\n arr[j]=t; /* Use j */\n /* count--; Dont decrement */\n j--;\n}\nfor(i=0;i<count;i+=2) /* Should be count */\n{\n num=num*10 + arr[i]%10;\n}\n\nDemo\nThere can be many alternate ways to solve, but I just want to show you how the approach in thought process can be implemented correctly.\n", "In your code problem is with the first for loop.\nPlease check the below code.\nint Rearrange(int a)\n{\n long int b = 0, j = 0, i = 0, num = 0, count = 0, arr[100];\n\n while (a > 0)\n {\n b = a % 10; a = a/10;\n arr[i] = b;\n i++;\n count++;\n }\n j = count;\n for (i = 0; i < count/2; i++) // Condition is problematic\n {\n long int t = arr[i]; \n arr[i] = arr[count-i-1];\n arr[count - i - 1] = t;\n // count--; // this is problamatic.\n }\n for (i = 0; i < j; i += 2)\n {\n num = num * 10 + arr[i] % 10;\n }\n return num;\n}\n\n", " #include<stdio.h>\n int main()\n {\n int n,arr[40];\n scanf(\"%d\",&n);\n printf(\"%d\",n);\n int s=0,i=0;\n while(n!=0)\n {\n arr[i]=n%10;\n printf(\"%d\",arr[i]);\n n=n/10;\n i++;\n }\n for(int j=i-1;j>=0;j-=2)\n {\n s =s*10+arr[j];\n }\n printf(\"\\n%d\",s);\n return 0;\n }\n\n", "int alternatedigits(int n)\n{\n int a[10],i=0,count=0,sum=0;\n while(n!=0)\n {\n a[i]=n%10;\n i++;\n count++;\n n=n/10;\n }\n for(int i=count;i>=0;i++)\n {\n if(i%2==0)\n {\n sum=sum*10+a[i];\n }\n }\n return sum;\n}\n\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "c" ]
stackoverflow_0027039546_c.txt
Q: How to keep the spaces after using the sort function on the string? I have used the function sort to sort the string in c++, but after the function works and I cout the test case, the spaces are gone; how can I keep them? Thank you. For example, I have a string "1 34 2 3" and I want to sort it, but after sorting, I want to keep the spaces #include <iostream> using namespace std; int main () { int numberOfTestCase; cin >> numberOfTestCase; int i = 0; int thirdSmallestInterger; string testCase; while(i < numberOfTestCase) { i++; getline(cin >> ws, testCase); cout << "And the test case is: " << testCase; // note: remember the function below sort(testCase.begin() , testCase.end()); cout << "And the sorted test case is: " << testCase; } return 0; } A: #include <algorithm> #include <iostream> #include <string> using namespace std; int main () { int numberOfTestCase; cin >> numberOfTestCase; int i = 0; int thirdSmallestInterger; string testCase; while(i < numberOfTestCase) { i++; getline(cin >> ws, testCase); cout << "And the test case is: " << testCase; sort(testCase.begin() , testCase.end()); int a = testCase.length(); for(int i=0;i<a;i++){ cout<<testCase[i]<<" "; } cout<<endl; } return 0; }
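If the real goal is to sort "1 34 2 3" as numbers while keeping the spaces between them, sorting the characters will not do it; a minimal sketch of my own (not from the answers): parse the tokens, sort numerically, and re-join with spaces:
#include <algorithm>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::string testCase = "1 34 2 3";      // example input
    std::istringstream in(testCase);
    std::vector<int> nums;
    int n;
    while (in >> n) nums.push_back(n);      // split on whitespace
    std::sort(nums.begin(), nums.end());    // numeric sort, so 2 < 34
    for (std::size_t i = 0; i < nums.size(); i++)
        std::cout << (i ? " " : "") << nums[i]; // re-join with single spaces
    std::cout << "\n";                      // prints: 1 2 3 34
    return 0;
}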
How to keep the spaces after using the sort function on the string?
I have used the function sort to sort the string in c++, but after the function works and I cout the test case, the spaces are gone; how can I keep them? Thank you. For example, I have a string "1 34 2 3" and I want to sort it, but after sorting, I want to keep the spaces #include <iostream> using namespace std; int main () { int numberOfTestCase; cin >> numberOfTestCase; int i = 0; int thirdSmallestInterger; string testCase; while(i < numberOfTestCase) { i++; getline(cin >> ws, testCase); cout << "And the test case is: " << testCase; // note: remember the function below sort(testCase.begin() , testCase.end()); cout << "And the sorted test case is: " << testCase; } return 0; }
[ "#include <iostream>\n\nusing namespace std;\n\nint\nmain ()\n{\n int numberOfTestCase;\n cin >> numberOfTestCase;\n int i = 0;\n int thirdSmallestInterger;\n string testCase;\n while(i < numberOfTestCase)\n {\n i++;\n getline(cin >> ws, testCase);\n cout << \"And the test case is: \" << testCase;\n sort(testCase.begin() , testCase.end());\n\n\n int a = testCase.length();\n for(int i=0;i<a;i++){\n cout<<testCase[i]<<\" \"; \n }\n cout<<endl;\n\n return 0;\n\n}\n\n" ]
[ 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074665322_c++.txt
Q: Could not resolve com.android.tools.build:aapt2:7.3.1-8691043 When I create a new project, the project runs successfully, but after I exit Android Studio and reopen it, the project comes up with this error Execution failed for task ':app:processDebugResources'. > Could not resolve all files for configuration ':app:debugRuntimeClasspath'. > Failed to transform material-1.7.0.aar (com.google.android.material:material:1.7.0) to match attributes {artifactType=android-compiled-dependencies-resources, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.libraryelements=aar, org.gradle.status=release, org.gradle.usage=java-runtime}. > Could not isolate parameters com.android.build.gradle.internal.dependency.AarResourcesCompilerTransform$Parameters_Decorated@8d32791 of artifact transform AarResourcesCompilerTransform > Could not isolate value com.android.build.gradle.internal.dependency.AarResourcesCompilerTransform$Parameters_Decorated@8d32791 of type AarResourcesCompilerTransform.Parameters > Could not resolve all files for configuration ':app:detachedConfiguration2'. > Could not find com.android.tools.build:aapt2:7.3.1-8691043. Searched in the following locations: - https://dl.google.com/dl/android/maven2/com/android/tools/build/aapt2/7.3.1-8691043/aapt2-7.3.1-8691043.pom - https://repo.maven.apache.org/maven2/com/android/tools/build/aapt2/7.3.1-8691043/aapt2-7.3.1-8691043.pom Required by: project :app Android Studio Dolphin, Android Gradle plugin version 7.3.1, Gradle version 7.4 A: What I experienced is a bit different, but it may help your case. Here is my environment: Dolphin patch 1, Gradle 7.3.1, latest dependencies, Windows 11, Java 1_8. My project launched okay. Then I tweaked some code, like changing the position of a button, and when I ran it again, I got the aapt2 error. I did Build->Clean Project, ran again and still got the aapt2 error. So I closed the project, quit Studio, restarted Windows, then started Studio, loaded the project, and ran again; it passed debug, launched, and everything was okay. I still do not know, though, why restarting Windows fixes the aapt2 error. I'll leave it to the development team to find the answer; it may have to do with a cache or temp file. A: I had exactly this problem, and nothing I did could solve it, so I uninstalled Android Studio and deleted all its files manually; old version and current version files, Gradle files etc. were deleted, and only my projects and SDK files remained. Then I reinstalled Android Studio, downloaded Gradle and the updates, and it works.
Could not resolve com.android.tools.build:aapt2:7.3.1-8691043
When I create a new project, the project runs successfully, but after I exit Android Studio and reopen it, the project comes up with this error Execution failed for task ':app:processDebugResources'. > Could not resolve all files for configuration ':app:debugRuntimeClasspath'. > Failed to transform material-1.7.0.aar (com.google.android.material:material:1.7.0) to match attributes {artifactType=android-compiled-dependencies-resources, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.libraryelements=aar, org.gradle.status=release, org.gradle.usage=java-runtime}. > Could not isolate parameters com.android.build.gradle.internal.dependency.AarResourcesCompilerTransform$Parameters_Decorated@8d32791 of artifact transform AarResourcesCompilerTransform > Could not isolate value com.android.build.gradle.internal.dependency.AarResourcesCompilerTransform$Parameters_Decorated@8d32791 of type AarResourcesCompilerTransform.Parameters > Could not resolve all files for configuration ':app:detachedConfiguration2'. > Could not find com.android.tools.build:aapt2:7.3.1-8691043. Searched in the following locations: - https://dl.google.com/dl/android/maven2/com/android/tools/build/aapt2/7.3.1-8691043/aapt2-7.3.1-8691043.pom - https://repo.maven.apache.org/maven2/com/android/tools/build/aapt2/7.3.1-8691043/aapt2-7.3.1-8691043.pom Required by: project :app Android Studio Dolphin, Android Gradle plugin version 7.3.1, Gradle version 7.4
[ "What I experienced is a bit different, but it may help your case. Here is my environment: Dolphin patch 1, Gradle 7.3.1, latest dependencies, Windows 11, Java 1_8. My project launched and okay. Then I tweaked some code, like changing position of a button, when ran again, I got aapt2 error. I did Build->Clean Project, ran again and still got this aapt2 error. So I closed the project, quit Studio, restarted Windows, then started Studio,load the project, and ran again, it passed debug and launched and everything was okay. I still do not know though, why restarting Windows fix the aapt2 error. I'll leave to the development team to find the answer, it may have to do with cache or temp file or what....\n", "I had exactly this problem, everything I did couldn't solve it, so I uninstalled Android Studio, I deleted all its files manually, old versions and current version files, Gradle files etc. was deleted, only my projects and sdk files was remained. Then I reinstalled Android Studio and downloaded Gradle and updates, and it works.\n" ]
[ 0, 0 ]
[]
[]
[ "android_gradle_plugin", "android_studio", "gradle" ]
stackoverflow_0074199811_android_gradle_plugin_android_studio_gradle.txt
Q: How to extract first min value and second min value within a space of time I am trying to select customers who have spent an amount less than 15 on two consecutive occasions: the first purchase must be less than 15, and the second purchase must be less than 15 and must be 90 days or more after the first purchase. I managed to get those but I am not able to isolate two consecutive dates with amounts less than 15. Ideally, only cust1 should appear as my result because even though the event on 2010-01-01 violated the rule, another event on 2011-12-18 is preceded by an event that meets the requirement. On the other hand cust2 never had events with amount less than 15 on two consecutive occasions with the second date being >= 90 days later, so cust2 should not make my list. #MyList table customer purchasedate amount cust1 2008-11-01 10 cust1 2010-01-01 25 cust1 2010-12-03 30 cust1 2010-12-25 22 cust1 2011-12-18 7 cust1 2011-12-24 11 cust1 2014-10-06 9 cust2 2010-01-01 11 cust2 2010-02-05 25 cust2 2013-10-17 8 cust2 2014-10-28 27 cust3 2010-01-01 6 cust3 2011-04-05 25 cust3 2013-01-01 8 cust3 2013-02-28 5 cust3 2013-04-05 12 My script below WITH firstevent AS( SELECT * FROM( SELECT *, row_number() OVER(PARTITION BY customer ORDER BY purchasedate ASC) rn FROM #MyList WHERE amount < 15) A WHERE rn = 1 ), secondevent AS ( SELECT * FROM( SELECT *, ROW_NUMBER() OVER(PARTITION BY customer ORDER BY purchasedate DESC) rn FROM #MyList WHERE amount < 15) B WHERE rn = 1 ) SELECT f.customer, f.amount AS FirstAmount, s.amount AS LastAmount, f.purchasedate AS Date1, s.purchasedate AS Date2 FROM firstevent f INNER JOIN secondevent s ON (f.customer = s.customer) WHERE DATEDIFF(D, f.purchasedate, s.purchasedate) >= 90 A: use LAG() to obtain the previous date and amount and compare select * from ( select *, prev_date = lag(purchasedate) over (partition by customer order by purchasedate), prev_amount = lag(amount) over (partition by customer order by purchasedate) from #MyList ) l where l.amount < 15 and l.prev_amount < 15 and datediff(day, l.prev_date, l.purchasedate) >= 90 or using the row_number() method, perform a self join and compare with list as ( select *, rn = row_number() over (partition by customer order by purchasedate) from #MyList ) select * from list l1 inner join list l2 on l1.customer = l2.customer and l1.rn = l2.rn - 1 where l1.amount < 15 and l2.amount < 15 and datediff(day, l1.purchasedate, l2.purchasedate) >= 90
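As a worked check of the LAG version against the sample data (traced by hand, not by running the query): only one row passes all three predicates, namely cust1's 2014-10-06 purchase (amount 9) with prev_date 2011-12-24 and prev_amount 11, a gap of roughly 1,017 days. cust1's 2011-12-24 row fails because it is only 6 days after 2011-12-18, and cust2/cust3 never have two consecutive sub-15 purchases 90 or more days apart, so only cust1 is returned, as the asker expects.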
How to extract first min value and second min value within a space of time
I am trying to select customers who have spent an amount less than 15 on two consecutive occasions: the first purchase must be less than 15, and the second purchase must be less than 15 and must be 90 days or more after the first purchase. I managed to get those but I am not able to isolate two consecutive dates with amounts less than 15. Ideally, only cust1 should appear as my result because even though the event on 2010-01-01 violated the rule, another event on 2011-12-18 is preceded by an event that meets the requirement. On the other hand cust2 never had events with amount less than 15 on two consecutive occasions with the second date being >= 90 days later, so cust2 should not make my list. #MyList table customer purchasedate amount cust1 2008-11-01 10 cust1 2010-01-01 25 cust1 2010-12-03 30 cust1 2010-12-25 22 cust1 2011-12-18 7 cust1 2011-12-24 11 cust1 2014-10-06 9 cust2 2010-01-01 11 cust2 2010-02-05 25 cust2 2013-10-17 8 cust2 2014-10-28 27 cust3 2010-01-01 6 cust3 2011-04-05 25 cust3 2013-01-01 8 cust3 2013-02-28 5 cust3 2013-04-05 12 My script below WITH firstevent AS( SELECT * FROM( SELECT *, row_number() OVER(PARTITION BY customer ORDER BY purchasedate ASC) rn FROM #MyList WHERE amount < 15) A WHERE rn = 1 ), secondevent AS ( SELECT * FROM( SELECT *, ROW_NUMBER() OVER(PARTITION BY customer ORDER BY purchasedate DESC) rn FROM #MyList WHERE amount < 15) B WHERE rn = 1 ) SELECT f.customer, f.amount AS FirstAmount, s.amount AS LastAmount, f.purchasedate AS Date1, s.purchasedate AS Date2 FROM firstevent f INNER JOIN secondevent s ON (f.customer = s.customer) WHERE DATEDIFF(D, f.purchasedate, s.purchasedate) >= 90
[ "use LAG() to obtain the previous date and amount and compare\nselect *\nfrom\n(\n select *,\n prev_date = lag(purchasedate) over (partition by customer \n order by purchasedate),\n prev_amount = lag(amount) over (partition by customer \n order by purchasedate)\n from #MyList \n) l\nwhere l.amount < 15 \nand l.prev_amount < 15\nand datediff(day, l.prev_date, l.purchasedate) >= 90\n\nor using the row_number() method, perform a self join and compare\nwith list as\n(\n select *, rn = row_number() over (partition by customer order by purchasedate)\n from #MyList\n) \nselect *\nfrom list l1\n inner join list l2 on l1.customer = l2.customer\n and l1.rn = l2.rn - 1\nwhere l1.amount < 15\nand l2.amount < 15\nand datediff(day, l1.purchasedate, l2.purchasedate) >= 90\n\n" ]
[ 2 ]
[]
[]
[ "min", "ranking", "sql_server", "tsql" ]
stackoverflow_0074665318_min_ranking_sql_server_tsql.txt
Q: PUT vs POST when adding documents in elastic search I am new to Elasticsearch and trying to add documents in elastic index. I have got confused between PUT and POST here as both are producing same results in below scenario: curl -H "Content-Type: application/json" -XPUT "localhost:9200/products/mobiles/1?pretty" -d" { "name": "iPhone 7", "camera": "12MP", "storage": "256GB", "display": "4.7inch", "battery": "1,960mAh", "reviews": ["Incredibly happy after having used it for one week", "Best iPhone so far", "Very expensive, stick to Android"] } " vs curl -H "Content-Type: application/json" -XPOST "localhost:9200/products/mobiles/1?pretty" -d" { "name": "iPhone 7", "camera": "12MP", "storage": "256GB", "display": "4.7inch", "battery": "1,960mAh", "reviews": ["Incredibly happy after having used it for one week", "Best iPhone so far", "Very expensive, stick to Android"] } " A: POST :used to achieve auto-generation of ids. PUT :used when you want to specify an id. see this A: Both are standard HTTP methods. Usually we use POST to create a resource and PUT to modify it. Besides, if you're free to set up the server side, you can use either of them, because they have similar properties: they both have a body, the data is not shown in the URL, and so on. Though it is better to follow the standard convention mentioned before: usually we use POST to create a resource and PUT to modify it. This way your code is more readable and changeable. For going deeper you can consider these tips according to put-versus-post: Deciding between POST and PUT is easy: use PUT if and only if the endpoint will follow these 2 rules: The endpoint must be idempotent: so safe to redo the request over and over again; The URI must be the address to the resource being updated. When we use PUT, we’re saying that we want the resource that we’re sending in our request to be stored at the given URI. We’re literally “putting” the resource at this address. A: The only difference between POST and PUT is that you cannot use PUT to create documents with auto ID generation. The following query will create a document and auto generate an ID: POST /products/_doc { "name": "Shoes", "price": 100, "in_stock": 64 } Trying the same with PUT results in an "Incorrect HTTP method". PUT /products/_doc { "name": "Shoes", "price": 100, "in_stock": 64 } Unless I didn't experiment hard enough, this is the only difference between POST and PUT when creating documents. Other than this, POST and PUT will get you to achieve the same things.
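To see the ID behaviour described in the answers using the question's own curl style (index and type names as in the question; on recent Elasticsearch versions the type segment would be _doc): with POST and no ID in the URL, Elasticsearch generates one, while PUT always requires the ID:
curl -H "Content-Type: application/json" -XPOST "localhost:9200/products/mobiles?pretty" -d'{"name": "iPhone 7"}'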
PUT vs POST when adding documents in elastic search
I am new to Elasticsearch and trying to add documents in elastic index. I have got confused between PUT and POST here as both are producing same results in below scenario: curl -H "Content-Type: application/json" -XPUT "localhost:9200/products/mobiles/1?pretty" -d" { "name": "iPhone 7", "camera": "12MP", "storage": "256GB", "display": "4.7inch", "battery": "1,960mAh", "reviews": ["Incredibly happy after having used it for one week", "Best iPhone so far", "Very expensive, stick to Android"] } " vs curl -H "Content-Type: application/json" -XPOST "localhost:9200/products/mobiles/1?pretty" -d" { "name": "iPhone 7", "camera": "12MP", "storage": "256GB", "display": "4.7inch", "battery": "1,960mAh", "reviews": ["Incredibly happy after having used it for one week", "Best iPhone so far", "Very expensive, stick to Android"] } "
[ "\nPOST :used to achieve auto-generation of ids.\nPUT :used when you want to specify an id.\n\nsee this\n", "They both are among safe methods of HTTP.\nusually we use POST to create a resource and PUT to modify that. besides if you're free to set-up the server side, you can use both of them because they both have similar properties like: they both have body, they are safe, data is not shown in URL, and ....\nthough it is better to consider standard rules that I said one of them before:\nusually we use POST to create a resource and PUT to modify that. this way your code is more readable, changeable ...\nfor going deeper you can consider these tips according to put-versus-post:\nDeciding between POST and PUT is easy: use PUT if and only if the endpoint will follow these 2 rules:\n\nThe endpoint must be idempotent: so safe to redo the request over and over again;\nThe URI must be the address to the resource being updated.\n\nWhen we use PUT, we’re saying that we want the resource that we’re sending in our request to be stored at the given URI. We’re literally “putting” the resource at this address.\n", "The only different between POST and PUT is that you cannot use PUT to create documents with auto ID generation.\nThe following query will create a document and auto generate an ID:\nPOST /products/_doc\n{\n \"name\": \"Shoes\",\n \"price\": 100,\n \"in_stock\": 64\n}\n\nTrying the same with PUT results to an \"Incorrect HTTP method\".\nPUT /products/_doc\n{\n \"name\": \"Shoes\",\n \"price\": 100,\n \"in_stock\": 64\n}\n\nUnless I didn't experiment hard enough, this is the only difference between POST and PUT when creating documents.\nOther than this, POST and PUT will get you to achieve the same things.\n" ]
[ 9, 3, 0 ]
[]
[]
[ "elasticsearch" ]
stackoverflow_0056766688_elasticsearch.txt
Q: Arduino extract data from serial monitor I wrote a simple controller for my robot in Python and now I want to send the data over the serial monitor to the Arduino. I managed to send the values but now I want to know how I can extract the data from the monitor with the Arduino. My Python code: import PySimpleGUI as sg import serial import time import math ArmLänge = 205 TextX = 10 TextY = 10 TextZ = 10 font = ("Courier New", 11) sg.theme("DarkBlue3") sg.set_options(font=font) ser = serial.Serial("COM6") ser.flushInput() layout = [ [sg.Text("Forward Kinematics:", font=("Helvetica", 12)), sg.Text(" Inverse Kinematics:", font=("Helvetica", 12))], [sg.Text("X"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_X'), sg.Text("X"),sg.InputText(size=(10, 10), key="InputX")], [sg.Text("Y"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Y'), sg.Text("Y"),sg.InputText(size=(10, 10), key="InputY")], [sg.Text("Z"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Z'), sg.Text("Z"),sg.InputText(size=(10, 10), key="InputZ")], [sg.Push(), sg.Button('Exit'), sg.Button("Move")], ] window = sg.Window("Controller", layout, finalize=True) window['SLIDER_X'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Y'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Z'].bind('<ButtonRelease-1>', ' Release') while True: event, values = window.read() if event in (sg.WINDOW_CLOSED, 'Exit'): break elif event == 'SLIDER_X Release': print("X Value:", values["SLIDER_X"]) elif event == 'SLIDER_Y Release': print("Y Value:", values["SLIDER_Y"]) elif event == 'SLIDER_Z Release': print("Z Value:", values["SLIDER_Z"]) #elif event == "Move": #print("IK X:", values['InputX']) #print("IK Y:", values['InputY']) #print("IK Z:", values['InputZ']) valX = int(values["SLIDER_X"]/2) valY = int(values["SLIDER_Y"]/2) valZ = int(values["SLIDER_Z"]/2) Data = [1,valX,valY,valZ] print(Data) ser.write(Data) if values['InputX'] >= str(1): x = float(values['InputX']) y = float(values['InputY']) z = float(values['InputZ']) h = round(math.sqrt(x ** 2 + y ** 2)) joint2 = round(math.degrees(math.atan(y / x))) joint3 = round(math.degrees(math.acos((h / 2) / (ArmLänge / 2)))) print("----Ergebnis:----") print("Höhe:", h) print("Joint2:", joint2,"°") print("Joint3:", joint3,"°") IKData = [2, h, joint2, joint3] print(IKData) ser.write(IKData) window.close() ser.close() It may not be the best code but it works. I need to extract every number for example [1, 20, 45, 30]. How can I do that? A: I am assuming that the data goes to Serial in this format as a String: [1,valX,valY,valZ] After reading the data from Serial and converting the data line to a String with the String() function, you can assign the values to the desired variables using the sscanf() function. The function works like this - sscanf(const char *str, const char *format, ...) So, here it would work like this - int data_val1, data_val2, data_val3; sscanf(yourSerialDataString.c_str(), "[1,%d,%d,%d]", &data_val1, &data_val2, &data_val3); Note that sscanf needs a C string (hence c_str()), a quoted format string, and the addresses of the variables it fills.
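On the Arduino side, a minimal receiving sketch (an illustration of mine, not the asker's code; it assumes the Python side is changed to send a text line such as "1,90,45,30\n", e.g. ser.write(f"1,{valX},{valY},{valZ}\n".encode()), instead of a raw byte list):
void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');  // read one full command line
    int mode, x, y, z;
    if (sscanf(line.c_str(), "%d,%d,%d,%d", &mode, &x, &y, &z) == 4) {
      // use mode, x, y and z here, e.g. to drive the robot's joints
    }
  }
}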
Arduino extract data from serial monitor
I wrote a simple controller for my robot in Python and now I want to send the data over the serial monitor to the Arduino. I managed to send the values but now I want to know how I can extract the data from the monitor with the Arduino. My Python code: import PySimpleGUI as sg import serial import time import math ArmLänge = 205 TextX = 10 TextY = 10 TextZ = 10 font = ("Courier New", 11) sg.theme("DarkBlue3") sg.set_options(font=font) ser = serial.Serial("COM6") ser.flushInput() layout = [ [sg.Text("Forward Kinematics:", font=("Helvetica", 12)), sg.Text(" Inverse Kinematics:", font=("Helvetica", 12))], [sg.Text("X"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_X'), sg.Text("X"),sg.InputText(size=(10, 10), key="InputX")], [sg.Text("Y"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Y'), sg.Text("Y"),sg.InputText(size=(10, 10), key="InputY")], [sg.Text("Z"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Z'), sg.Text("Z"),sg.InputText(size=(10, 10), key="InputZ")], [sg.Push(), sg.Button('Exit'), sg.Button("Move")], ] window = sg.Window("Controller", layout, finalize=True) window['SLIDER_X'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Y'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Z'].bind('<ButtonRelease-1>', ' Release') while True: event, values = window.read() if event in (sg.WINDOW_CLOSED, 'Exit'): break elif event == 'SLIDER_X Release': print("X Value:", values["SLIDER_X"]) elif event == 'SLIDER_Y Release': print("Y Value:", values["SLIDER_Y"]) elif event == 'SLIDER_Z Release': print("Z Value:", values["SLIDER_Z"]) #elif event == "Move": #print("IK X:", values['InputX']) #print("IK Y:", values['InputY']) #print("IK Z:", values['InputZ']) valX = int(values["SLIDER_X"]/2) valY = int(values["SLIDER_Y"]/2) valZ = int(values["SLIDER_Z"]/2) Data = [1,valX,valY,valZ] print(Data) ser.write(Data) if values['InputX'] >= str(1): x = float(values['InputX']) y = float(values['InputY']) z = float(values['InputZ']) h = round(math.sqrt(x ** 2 + y ** 2)) joint2 = round(math.degrees(math.atan(y / x))) joint3 = round(math.degrees(math.acos((h / 2) / (ArmLänge / 2)))) print("----Ergebnis:----") print("Höhe:", h) print("Joint2:", joint2,"°") print("Joint3:", joint3,"°") IKData = [2, h, joint2, joint3] print(IKData) ser.write(IKData) window.close() ser.close() It may not be the best code but it works. I need to extract every number for example [1, 20, 45, 30]. How can I do that?
[ "I am assuming that the data goes to Serial in this Format as String:\n\n[1,valX,valY,valZ]\n\nAfter reading the data from Serial and converting the data line to String with String() function, you can assign the values to desired variables using sscanf() function.\nThe function works like this -\n\nsscanf(const char *str, const char *format, ...)\n\nSo, here it would work like this -\nint data_val1, data_val2, data_val3;\n\nsscanf(Your_SerialData_String, [1,%d,%d,%d], data_val1, data_val2, data_val3);\n\n" ]
[ 0 ]
[]
[]
[ "arduino", "python", "python_3.x", "serial_port" ]
stackoverflow_0074656927_arduino_python_python_3.x_serial_port.txt
Q: Storing and displaying multivalued data in Javascript So, I am trying to make a form, where user enter there name and email, and after that select the skill and level, a user can enter as many skills and level, as he/she wants. But, I am facing difficulty in how to store that repetitive skills and level in Javascript array and display it. Here is my HTML- <section class="contact-form"> <div class="container"> <input type="text" id="name" class="input" placeholder="Enter your name..."/> <input type="text" id="email" class="input" placeholder="Enter your email..."/> <br> <br> <label for="inputEmail4" class="form-label">Add Skills</label> <br> <select name="dog-names" id="skills-name" class="dropdown"> <option value="Java">Java</option> <option value="HTML">HTML</option> <option value="Bootstrap">Bootstrap</option> <option value="JavaScript">JavaScript</option> <option value="Python">Python</option> </select> <select name="dog-names" id="levels" class="dropdown"> <option value="level1">level1</option> <option value="level2">level2</option> <option value="level3">level3</option> <option value="level4">level4</option> <option value="level5">level5</option> </select> <input type="button" value="Add" id="btnClick" class="Addbtn" onclick="addMore()"> <span id="addError"></span> <br><br> <label for="inputEmail4" class="form-label">Selected Skills</label> <ul id="written-box"></ul> <input type="button" value="Submit" class = "submitBtn" onclick="submit()"> <br> <span id="submitError"></span> </div> And now this is my JS code - // If any field is blank, then this error will pop-up function submit(){ document.getElementById('submitError').innerHTML = ""; let name = document.getElementById('name').value; let email = document.getElementById('email').value; if(name == '' || email == ''){ document.getElementById('submitError').innerHTML = "One or more field is blank! Fill all details"; } } // Function for add skill section function addMore(){ document.getElementById('addError').innerHTML = ""; let skill = document.querySelector('#skills-name').value; let level = document.querySelector('#levels').value; if(skill == '' || level == ''){ document.getElementById('addError').innerHTML = "Please Select both Skill & level"; } else{ let box = document.getElementById('written-box'); let li = document.createElement('li'); li.textContent = skill + " - " + level; // Creating 'x' to remove the mistakenly selected skill let a = document.createElement('a'); a.textContent = "delete"; a.href = "javascript:void(0)"; a.className = "remove"; li.appendChild(a); let pos = box.firstElementChild; if(pos == null) box.appendChild(li); else box.insertBefore(li, pos); box.appendChild(li); } document.getElementById('skill').value = ""; document.getElementById('level').value = ""; } let btn = document.querySelector('ul'); btn.addEventListener('click', function(e){ let box = document.getElementById('written-box'); let li = e.target.parentNode; box.removeChild(li); }); For more clarity, I have done this- And want to display this :) A: To store the skills and levels in a JavaScript array, you can create an empty array and then use the push method to add each skill and level to the array as they are selected. Here is an example of how you can do this: let skillsArray = []; function addMore() { let skill = document.querySelector('#skills-name').value; let level = document.querySelector('#levels').value; if (skill == '' || level == '') { document.getElementById('addError').innerHTML = "Please Select both Skill & level"; } else { // Add the skill and level to the array skillsArray.push({ skill: skill, level: level }); // Display the selected skills and levels in the "Selected Skills" section let box = document.getElementById('written-box'); let li = document.createElement('li'); li.textContent = skill + " - " + level; // Creating 'x' to remove the mistakenly selected skill let a = document.createElement('a'); a.textContent = "delete"; a.href = "javascript:void(0)"; a.className = "remove"; li.appendChild(a); let pos = box.firstElementChild; if (pos == null) { box.appendChild(li); } else { box.insertBefore(li, pos); } } document.getElementById('skills-name').value = ""; document.getElementById('levels').value = ""; } You can then use the skillsArray to access the selected skills and levels in your submit function. For example, you could iterate over the array and display each skill and level, or you could use the array to create a JSON object to send to the server when the form is submitted. function submit() { document.getElementById('submitError').innerHTML = ""; let name = document.getElementById('name').value; let email = document.getElementById('email').value; if (name == '' || email == '') { document.getElementById('submitError').innerHTML = "One or more field is blank! Fill all details"; } // Use the skillsArray to access the selected skills and levels for (let i = 0; i < skillsArray.length; i++) { let skill = skillsArray[i].skill; let level = skillsArray[i].level; // Do something with the skill and level, such as displaying them or creating a JSON object } } I hope this helps!
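To actually send the collected array to a server, a minimal sketch (the endpoint and payload shape are assumptions for illustration, not part of the question):
// Build a JSON payload from the form fields and the skills array
const payload = { name: name, email: email, skills: skillsArray };
fetch('/api/profile', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload)
});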
Storing and displaying multivalued data in Javascript
So, I am trying to make a form, where user enter there name and email, and after that select the skill and level, a user can enter as many skills and level, as he/she wants. But, I am facing difficulty in how to store that repetitive skills and level in Javascript array and display it. Here is my HTML- <section class="contact-form"> <div class="container"> <input type="text" id="name" class="input" placeholder="Enter your name..."/> <input type="text" id="email" class="input" placeholder="Enter your email..."/> <br> <br> <label for="inputEmail4" class="form-label">Add Skills</label> <br> <select name="dog-names" id="skills-name" class="dropdown"> <option value="Java">Java</option> <option value="HTML">HTML</option> <option value="Bootstrap">Bootstrap</option> <option value="JavaScript">JavaScript</option> <option value="Python">Python</option> </select> <select name="dog-names" id="levels" class="dropdown"> <option value="level1">level1</option> <option value="level2">level2</option> <option value="level3">level3</option> <option value="level4">level4</option> <option value="level5">level5</option> </select> <input type="button" value="Add" id="btnClick" class="Addbtn" onclick="addMore()"> <span id="addError"></span> <br><br> <label for="inputEmail4" class="form-label">Selected Skills</label> <ul id="written-box"></ul> <input type="button" value="Submit" class = "submitBtn" onclick="submit()"> <br> <span id="submitError"></span> </div> And now this is my JS code - // If any field is blank, then this error will pop-up function submit(){ document.getElementById('submitError').innerHTML = ""; let name = document.getElementById('name').value; let email = document.getElementById('email').value; if(name == '' || email == ''){ document.getElementById('submitError').innerHTML = "One or more field is blank! Fill all details"; } } // Function for add skill section function addMore(){ document.getElementById('addError').innerHTML = ""; let skill = document.querySelector('#skills-name').value; let level = document.querySelector('#levels').value; if(skill == '' || level == ''){ document.getElementById('addError').innerHTML = "Please Select both Skill & level"; } else{ let box = document.getElementById('written-box'); let li = document.createElement('li'); li.textContent = skill + " - " + level; // Creating 'x' to remove the mistakenly selected skill let a = document.createElement('a'); a.textContent = "delete"; a.href = "javascript:void(0)"; a.className = "remove"; li.appendChild(a); let pos = box.firstElementChild; if(pos == null) box.appendChild(li); else box.insertBefore(li, pos); box.appendChild(li); } document.getElementById('skill').value = ""; document.getElementById('level').value = ""; } let btn = document.querySelector('ul'); btn.addEventListener('click', function(e){ let box = document.getElementById('written-box'); let li = e.target.parentNode; box.removeChild(li); }); For more clarity, I have done this- And want to display this :)
[ "To store the skills and levels in a JavaScript array, you can create an empty array and then use the push method to add each skill and level to the array as they are selected. Here is an example of how you can do this:\nlet skillsArray = [];\n\nfunction addMore() {\n let skill = document.querySelector('#skills-name').value;\n let level = document.querySelector('#levels').value;\n\n if (skill == '' || level == '') {\n document.getElementById('addError').innerHTML = \"Please Select both Skill & level\";\n } else {\n // Add the skill and level to the array\n skillsArray.push({\n skill: skill,\n level: level\n });\n\n // Display the selected skills and levels in the \"Selected Skills\" section\n let box = document.getElementById('written-box');\n\n let li = document.createElement('li');\n li.textContent = skill + \" - \" + level;\n\n // Creating 'x' to remove the mistakenly selected skill\n let a = document.createElement('a');\n a.textContent = \"delete\";\n a.href = \"javascript:void(0)\";\n a.className = \"remove\";\n li.appendChild(a);\n\n let pos = box.firstElementChild;\n\n if (pos == null) {\n box.appendChild(li);\n } else {\n box.insertBefore(li, pos);\n }\n\n box.appendChild(li);\n }\n\n document.getElementById('skill').value = \"\";\n document.getElementById('level').value = \"\";\n}\n\nYou can then use the skillsArray to access the selected skills and levels in your submit function. For example, you could iterate over the array and display each skill and level, or you could use the array to create a JSON object to send to the server when the form is submitted.\nfunction submit() {\n document.getElementById('submitError').innerHTML = \"\";\n\n let name = document.getElementById('name').value;\n let email = document.getElementById('email').value;\n\n if (name == '' || email == '') {\n document.getElementById('submitError').innerHTML = \"One or more field is blank! Fill all details\";\n }\n\n // Use the skillsArray to access the selected skills and levels\n for (let i = 0; i < skillsArray.length; i++) {\n let skill = skillsArray[i].skill;\n let level = skillsArray[i].level;\n\n // Do something with the skill and level, such as displaying them or creating a JSON object\n }\n}\n\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "forms", "html", "javascript" ]
stackoverflow_0074665538_forms_html_javascript.txt
Q: Blazor Server - simple hyperlink to external site causes "attempting to reconnect.." Any simple link to an external site from my Blazor Server site works....but while the site loads the user sees the Blazor "Attempting to reconnect to the server: 1 of 8" message first. How do I get rid of that and provide a more seamless experience? I have tried coding my own javascript exit routine by listening to the click event and manually doing window.location.href = url; with the same results. I have also tried calling Blazor.disconnect() before jumping to the external site, but again no luck. I tried embedding calls to a NavigationManager in my code, but that's really ugly, and I didn't have much luck with that either. ...surely a very simple scenario! - my code is simply: <a href="https://www.google.com">Go</a> A: I haven't had the opportunity to test this myself, but the answer on this post here might be useful in setting the CSS to "display: none" when the link is clicked. How to disable "Attempting to reconnect to the server" message on ASP.NET Core producton server A: with thanks to @Silky, and @Begging - based on those answers I came up with the following which worked for me. approach: on click of any link - see if it looks like an external link, and if so add a body class that causes the built-in "reconnect" overlay to be hidden, and change the cursor. Note that adding a class to the body was the way to go for me as trying to affect the overlay (#components-reconnect-modal) has issues as it gets generated/inserted in an unknown (to me) way. Javascript (JQuery): function handleExternalLink(e) { var ref = $(this).attr('href'); if (ref.indexOf('http') != -1) { $('body').removeClass('hide-reconnect').addClass('hide-reconnect'); window.location.href = ref; } } $(document).on('click', 'a', handleExternalLink); Css: html body.hide-reconnect #components-reconnect-modal { visibility: hidden !important; } html body.hide-reconnect, html body.hide-reconnect a:hover { cursor: wait; }
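If opening the external site in a new tab is acceptable, there is also a zero-JavaScript alternative (a suggestion of mine, not from the answers): with target="_blank" the original tab keeps its Blazor circuit alive, so the reconnect overlay never shows:
<a href="https://www.google.com" target="_blank" rel="noopener">Go</a>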
Blazor Server - simple hyperlink to external site causes "attempting to reconnect.."
Any simple link to an external site from my Blazor Server site works....but while the site loads the user sees the Blazor "Attempting to reconnect to the server: 1 of 8" message first. How do I get rid of that and provide a more seamless experience? I have tried coding my own javascript exit routine by listening to the click event and manually doing window.location.href = url; with the same results. I have also tried calling Blazor.disconnect() before jumping to the external site, but again no luck. I tried embedding calls to a NavigationManager in my code, but that's really ugly, and I didn't have much luck with that either. ...surely a very simple scenario! - my code is simply: <a href="https://www.google.com">Go</a>
[ "I haven't had the opportunity to test this myself, but the answer on this post here might be useful in setting the CSS to \"display: none\" when the link is clicked.\nHow to disable \"Attempting to reconnect to the server\" message on ASP.NET Core producton server\n", "with thanks to @Silky, and @Begging - based on those answers I came up with the following which worked for me.\napproach: on click of any link - see if it looks like an external link, and if so add a body class that causes the built-in \"reconnect\" overlay to be hidden, and change the cursor. Note that adding a class to the body was the way to go for me as trying to affect the overlay (#components-reconnect-modal) has issues as it gets generated/inserted in an unknown (to me) way.\nJavascript (JQuery):\n\n\n$(document).on('click', 'a', handleExternalLink);\nhandleExternalLink: function (e) {\n var ref = $(this).attr('href');\n if (ref.indexOf('http') != -1) {\n $('body').removeClass('hide-reconnect').addClass('hide-reconnect');\n window.location.href = ref;\n }\n}\n\n\n\nCss:\n\n\nhtml body.hide-reconnect #components-reconnect-modal {\n visibility: hidden !important;\n}\n\nhtml body.hide-reconnect, html body.hide-reconnect a:hover {\n cursor: wait;\n}\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "blazor", "blazor_server_side" ]
stackoverflow_0074661466_blazor_blazor_server_side.txt
Q: How can I increment a field? PHP How can I increment my qty every time I process? I want it to increment the value, not just overwrite it. $prodid = $this->input->post('product')[$x]; if($prodid){ $product_quanti = array( 'qty' => $this->input->post('qty')[$x], ); $this->db->where('id', $prodid); $update = $this->db->update('products', $product_quanti); } I tried this $prodid = $this->input->post('product')[$x]; if($prodid){ $product_quanti = array( 'qty' => $this->input->post('qty')[$x] + $this->input->post('qty')[$x], ); $this->db->where('id', $prodid); $update = $this->db->update('products', $product_quanti); } but every time I process it, it just increments to a seemingly random number: the qty was 200 and when I input 500 it returned 1000. How can I increment it in the correct way? How can I make the available stock + incoming stock formula? A: You are setting the value to twice the input, instead of the current value + the input. So 500 will result in 1000. You can do something like this instead: $inputQuantity = (int)$this->input->post('qty')[$x]; $this->db->where('id', $prodid); $this->db->set('qty', "qty+$inputQuantity", false); $this->db->update('products'); Passing false as the third argument to set() stops CodeIgniter from escaping the value, so qty+$inputQuantity is sent to the database as an SQL expression rather than a quoted string.
How can I increment a field? PHP
How can I increment my qty every time I process? I want it to increment the value, not just overwrite it. $prodid = $this->input->post('product')[$x]; if($prodid){ $product_quanti = array( 'qty' => $this->input->post('qty')[$x], ); $this->db->where('id', $prodid); $update = $this->db->update('products', $product_quanti); } I tried this $prodid = $this->input->post('product')[$x]; if($prodid){ $product_quanti = array( 'qty' => $this->input->post('qty')[$x] + $this->input->post('qty')[$x], ); $this->db->where('id', $prodid); $update = $this->db->update('products', $product_quanti); } but every time I process it, it just increments to a seemingly random number: the qty was 200 and when I input 500 it returned 1000. How can I increment it in the correct way? How can I make the available stock + incoming stock formula?
[ "You are setting the value to twice the input, instead of the current value + the input. So 500 will result in 1000.\nYou can do something like this instead:\n$inputQuantity = (int)$this->input->post('qty')[$x];\n$this->db->where('id', $prodid);\n$this->db->set('qty', \"qty+$inputQuantity\", false);\n$this->db->update('products');\n\n" ]
[ 1 ]
[]
[]
[ "codeigniter_3", "php", "post" ]
stackoverflow_0074665488_codeigniter_3_php_post.txt
Q: Plupload upload blob data I have plupload and I have microphone, I want to add and send audio I have recorded from plupload to server. How can I do it with close to zero needless actions ? I tried to use addFile, but it do not work var uploader = new plupload.Uploader({ runtimes : 'html5,flash,silverlight,html4', browse_button : 'pickfiles', // you can pass in id... container: document.getElementById('container'), // ... or DOM Element itself url : "upload.php", chunk_size : '1mb', init: { PostInit: function() { document.getElementById('filelist').innerHTML = ''; document.getElementById('uploadfiles').onclick = function() { { uploader.start(); $("#uploadfiles").hide(); return false; } }; }, FilesAdded: function(up, files) { plupload.each(files, function(file) { document.getElementById('filelist').innerHTML += '<div id="' + file.id + '">' + file.name + ' (' + plupload.formatSize(file.size) + ') <b></b></div>'; $("#uploadfiles").show(); }); }, Error: function(up, err) { document.getElementById('console').innerHTML += "\nError #" + err.code + ": " + err.message; } } }); below is audio example I want to save from microphone (437) [Blob, Blob, Blob, …] A: To upload the audio file you recorded from the microphone, you can use the addFile method provided by the Plupload library to add the file to the uploader's queue. Then, you can call the start method on the uploader to begin uploading the file to the server. Here is an example of how you can do this: // Add the audio file to the uploader's queue uploader.addFile(audioFile); // Start uploading the file uploader.start(); You can also add a callback function to the FileUploaded event to be notified when the file has been successfully uploaded to the server. // Add a callback function to the FileUploaded event uploader.bind('FileUploaded', function(up, file, response) { // Handle the response from the server }); Note that you may need to modify the server-side code (the upload.php file in your example) to handle the uploaded audio file and save it to the server. A: To add an audio file recorded from a microphone to a plupload instance and then upload it to a server, you can use the addFile method of the plupload instance. This method allows you to add a file to the list of files that will be uploaded by the plupload instance. Here is an example of how you can use the addFile method to add an audio file recorded from a microphone to a plupload instance: // Create a plupload instance var uploader = new plupload.Uploader({ // Your plupload configuration options }); // Initialize the plupload instance uploader.init(); // Record the audio from the microphone navigator.mediaDevices.getUserMedia({ audio: true }) .then(function(stream) { var recorder = new MediaRecorder(stream); var chunks = []; recorder.ondataavailable = function(e) { chunks.push(e.data); }; recorder.onstop = function() { // Combine the recorded chunks into a single audio Blob var audioBlob = new Blob(chunks, { type: 'audio/webm' }); // Add the audio Blob to the plupload instance using the addFile method uploader.addFile(audioBlob, 'recording.webm'); }; recorder.start(); // Call recorder.stop() when the recording should end }); In this code, the getUserMedia method is used to capture audio from the microphone and a MediaRecorder collects the recorded data (a Blob cannot be built directly from a MediaStream). When recording stops, the chunks are combined into a Blob object and added to the plupload instance using the addFile method. Once the audio file has been added to the plupload instance, you can upload it to the server by calling the start method of the plupload instance. This will cause the plupload instance to begin uploading the audio file to the server, using the URL specified in the plupload configuration. You can find more information about the addFile method and other plupload methods in the plupload documentation: https://www.plupload.com/docs/API/Plupload. I hope this helps! Let me know if you have any other questions.
Plupload upload blob data
I have plupload and I have microphone, I want to add and send audio I have recorded from plupload to server. How can I do it with close to zero needless actions ? I tried to use addFile, but it do not work var uploader = new plupload.Uploader({ runtimes : 'html5,flash,silverlight,html4', browse_button : 'pickfiles', // you can pass in id... container: document.getElementById('container'), // ... or DOM Element itself url : "upload.php", chunk_size : '1mb', init: { PostInit: function() { document.getElementById('filelist').innerHTML = ''; document.getElementById('uploadfiles').onclick = function() { { uploader.start(); $("#uploadfiles").hide(); return false; } }; }, FilesAdded: function(up, files) { plupload.each(files, function(file) { document.getElementById('filelist').innerHTML += '<div id="' + file.id + '">' + file.name + ' (' + plupload.formatSize(file.size) + ') <b></b></div>'; $("#uploadfiles").show(); }); }, Error: function(up, err) { document.getElementById('console').innerHTML += "\nError #" + err.code + ": " + err.message; } } }); below is audio example I want to save from microphone (437) [Blob, Blob, Blob, …]
[ "To upload the audio file you recorded from the microphone, you can use the addFile method provided by the Plupload library to add the file to the uploader's queue. Then, you can call the start method on the uploader to begin uploading the file to the server.\nHere is an example of how you can do this:\n// Add the audio file to the uploader's queue\nuploader.addFile(audioFile);\n\n// Start uploading the file\nuploader.start();\n\nYou can also add a callback function to the FileUploaded event to be notified when the file has been successfully uploaded to the server.\n// Add a callback function to the FileUploaded event\nuploader.bind('FileUploaded', function(up, file, response) {\n // Handle the response from the server\n});\n\nNote that you may need to modify the server-side code (the upload.php file in your example) to handle the uploaded audio file and save it to the server.\n", "To add an audio file recorded from a microphone to a plupload instance and then upload it to a server, you can use the addFile method of the plupload instance. This method allows you to add a file to the list of files that will be uploaded by the plupload instance.\nHere is an example of how you can use the addFile method to add an audio file recorded from a microphone to a plupload instance:\n// Create a plupload instance\nvar uploader = new plupload.Uploader({\n // Your plupload configuration options\n});\n\n// Initialize the plupload instance\nuploader.init();\n\n// Record the audio from the microphone\nnavigator.mediaDevices.getUserMedia({ audio: true })\n .then(function(stream) {\n // Create a new audio Blob from the stream\n var audioBlob = new Blob(stream);\n\n // Add the audio Blob to the plupload instance using the addFile method\n uploader.addFile(audioBlob);\n });\n\nIn this code, the getUserMedia method is used to record audio from the microphone. When the audio recording is complete, the audio data is converted into a Blob object and then added to the plupload instance using the addFile method.\nOnce the audio file has been added to the plupload instance, you can upload it to the server by calling the start method of the plupload instance. This will cause the plupload instance to begin uploading the audio file to the server, using the URL specified in the plupload configuration.\nYou can find more information about the addFile method and other plupload methods in the plupload documentation: https://www.plupload.com/docs/API/Plupload.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "plupload" ]
stackoverflow_0074665531_javascript_plupload.txt
Q: switch statement what is wrong here guys can you help me? I'm kinda new to this, but can anyone correct my first switch statement? I don't know what is wrong here and I'm not even sure if the starting declarations are even correct /*full name: Patrick Peregrino Course and Section: BSIT11M3 Subject: Computer Programming 1 Professor: Mr. Andro Philip G. Banag, MIT* Date: 12/01/2022*/ import java.util.Scanner; public class sapul { public static void main(String[]args) { Scanner kng = new Scanner(System.in); int T; System.out.println("Please enter a temperature to check the remarks if it is Ice, Water or Steam"); System.out.println("Enter Number :"); T = kng.nextInt(); if(T<0) { System.out.println("Temperature =" + T); System.out.println("Remarks = ICE"); } else if(T>0&&T<100) { System.out.println("Temperature =" + T); System.out.println("Remarks = WATER"); } else if(T>100) { System.out.println("Temperature =" + T); System.out.println("Remarks = STEAM"); } else { System.out.println("INVALID, Only numbers are valid"); } } } A: As already commented, it's more an if-else structure rather than a real switch statement. You also haven't told us what is going wrong with your code, but looking into it, you're definitely missing actions for T=0 and T=100, as you always check greater or lower than but never equal. Could this be your issue?
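To make the boundary point concrete, here is a minimal sketch of the comparison chain with 0 and 100 covered, assuming ice at or below 0 and steam at or above 100 (the cutoff choice is an assumption, not from the original post):
// Boundary-inclusive checks so 0 and 100 no longer fall through to INVALID
if (T <= 0) {
    System.out.println("Remarks = ICE");
} else if (T < 100) {
    System.out.println("Remarks = WATER");
} else {
    System.out.println("Remarks = STEAM");
}
With these three branches every int is covered, so the old INVALID branch becomes unreachable; non-numeric input is already rejected by Scanner.nextInt() itself.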
switch statement what is wrong here guys can you help me?
I'm kinda new to this, but can anyone correct my first switch statement? I don't know what is wrong here and I'm not even sure if the starting declarations are even correct /*full name: Patrick Peregrino Course and Section: BSIT11M3 Subject: Computer Programming 1 Professor: Mr. Andro Philip G. Banag, MIT* Date: 12/01/2022*/ import java.util.Scanner; public class sapul { public static void main(String[]args) { Scanner kng = new Scanner(System.in); int T; System.out.println("Please enter a temperature to check the remarks if it is Ice, Water or Steam"); System.out.println("Enter Number :"); T = kng.nextInt(); if(T<0) { System.out.println("Temperature =" + T); System.out.println("Remarks = ICE"); } else if(T>0&&T<100) { System.out.println("Temperature =" + T); System.out.println("Remarks = WATER"); } else if(T>100) { System.out.println("Temperature =" + T); System.out.println("Remarks = STEAM"); } else { System.out.println("INVALID, Only numbers are valid"); } } }
[ "As already commented, it's more an if else structure rather than a real switch statement.\nYou've also missed to tell us what is going wrong with your code but looking into it, you're definitely missing actions for T=0 and T=100 as you always check greater or lower than but without equal.\nCould this be your issue?\n" ]
[ 0 ]
[]
[]
[ "java" ]
stackoverflow_0074665548_java.txt
Q: NextJs : Your `getStaticProps` function did not return an object I am making a web application with NextJs. In the page I need to fetch an api to get data and display it. But it compiles I have got an error. The error is : Error: Your `getStaticProps` function did not return an object. Did you forget to add a `return`? And there is my function : export async function getStaticProps(context) { try { const res = await fetch(ApiLinks.players.all) .then((response) => response.json()) .then((response) => response.data.teamMembers) const responsePlayers = res.players; const responseStaff = res.staff; return { props: { responsePlayers, responseStaff, } } } catch (err) { console.error(err); } } A: It's because getStaticProps must return props it's better to add try-catch outside of it, so props can have data or not : sth like this : export async function getStaticProps(context) { let props = {}; try { const res = await fetch(ApiLinks.players.all) .then(response => response.json()) .then(response => response.data.teamMembers); const responsePlayers = res.players; const responseStaff = res.staff; props = { responsePlayers, responseStaff, }; } catch (err) { console.error(err); } return { props, }; } A: It looks like the issue is that you are not returning an object from the getStaticProps function if the fetch call fails and an error is thrown. When an error is thrown in the try block, the catch block is executed, but the catch block does not return anything. As a result, the getStaticProps function does not return an object, which is causing the error. To fix this, you can add a return statement at the end of the getStaticProps function to return an object with default values if an error occurs. Here is an example of how you can do this: export async function getStaticProps(context) { try { const res = await fetch(ApiLinks.players.all) .then((response) => response.json()) .then((response) => response.data.teamMembers) const responsePlayers = res.players; const responseStaff = res.staff; return { props: { responsePlayers, responseStaff, } } } catch (err) { console.error(err); // Return an object with default values if an error occurs return { props: { responsePlayers: [], responseStaff: [], } }; } } Alternatively, you can use the try and catch keywords to handle the error and return an object with default values inside the catch block. Here is an example of how you can do this: export async function getStaticProps(context) { try { const res = await fetch(ApiLinks.players.all) .then((response) => response.json()) .then((response) => response.data.teamMembers) const responsePlayers = res.players; const responseStaff = res.staff; return { props: { responsePlayers, responseStaff, } } } catch (err) { console.error(err); // Handle the error and return an object with default values return { props: { responsePlayers: [], responseStaff: [], } }; } } A: Your function seems to be fine. Just check by console log whether both of the responsePlayers and responseStaff are object or not and if not try returning then like this : return{ props:{ responsePlayers:responsePlayers, responseStaff:responseStaff } } also add this in the catch return{ props:null } and check if props is null or not in the above component.
NextJs : Your `getStaticProps` function did not return an object
I am making a web application with NextJs. On the page I need to fetch an API to get data and display it. But when it compiles, I get an error. The error is: Error: Your `getStaticProps` function did not return an object. Did you forget to add a `return`? And here is my function: export async function getStaticProps(context) { try { const res = await fetch(ApiLinks.players.all) .then((response) => response.json()) .then((response) => response.data.teamMembers) const responsePlayers = res.players; const responseStaff = res.staff; return { props: { responsePlayers, responseStaff, } } } catch (err) { console.error(err); } }
[ "It's because getStaticProps must return props it's better to add try-catch outside of it, so props can have data or not :\nsth like this :\nexport async function getStaticProps(context) {\n let props = {};\n try {\n const res = await fetch(ApiLinks.players.all)\n .then(response => response.json())\n .then(response => response.data.teamMembers);\n\n const responsePlayers = res.players;\n const responseStaff = res.staff;\n props = {\n responsePlayers,\n responseStaff,\n };\n } catch (err) {\n console.error(err);\n }\n\n return {\n props,\n };\n}\n\n", "It looks like the issue is that you are not returning an object from the getStaticProps function if the fetch call fails and an error is thrown. When an error is thrown in the try block, the catch block is executed, but the catch block does not return anything. As a result, the getStaticProps function does not return an object, which is causing the error.\nTo fix this, you can add a return statement at the end of the getStaticProps function to return an object with default values if an error occurs. Here is an example of how you can do this:\nexport async function getStaticProps(context) {\n try {\n const res = await fetch(ApiLinks.players.all)\n .then((response) => response.json())\n .then((response) => response.data.teamMembers)\n\n const responsePlayers = res.players;\n const responseStaff = res.staff;\n\n return {\n props: {\n responsePlayers,\n responseStaff,\n }\n }\n } catch (err) {\n console.error(err);\n\n // Return an object with default values if an error occurs\n return {\n props: {\n responsePlayers: [],\n responseStaff: [],\n }\n };\n }\n}\n\nAlternatively, you can use the try and catch keywords to handle the error and return an object with default values inside the catch block. Here is an example of how you can do this:\nexport async function getStaticProps(context) {\n try {\n const res = await fetch(ApiLinks.players.all)\n .then((response) => response.json())\n .then((response) => response.data.teamMembers)\n\n const responsePlayers = res.players;\n const responseStaff = res.staff;\n\n return {\n props: {\n responsePlayers,\n responseStaff,\n }\n }\n } catch (err) {\n console.error(err);\n\n // Handle the error and return an object with default values\n return {\n props: {\n responsePlayers: [],\n responseStaff: [],\n }\n };\n }\n}\n\n", "Your function seems to be fine. Just check by console log whether both of the responsePlayers and responseStaff are object or not and if not try returning then like this :\nreturn{\n props:{\n responsePlayers:responsePlayers,\n responseStaff:responseStaff\n }\n}\n\nalso add this in the catch\nreturn{\n props:null\n}\n\nand check if props is null or not in the above component.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "getstaticprops", "javascript", "next.js" ]
stackoverflow_0074665525_getstaticprops_javascript_next.js.txt
Q: How do I simply mutate the array inside a signal This is a working example, with a problem. I want to splice an existing array inside a signal, and return it to see it updated. But it doesn't work. How do I simply mutate the array inside a signal? I don't want to create new arrays just a simple splice. There is no example in the docs about mutating an array. import { render } from 'solid-js/web'; import { createSignal, createEffect } from 'solid-js' function HelloWorld() { let [a, setA] = createSignal([]) setTimeout(() => setA(a => { a.splice(0, 0, 'hello') // this logs as requested if I uncomment this //return ['hello'] return a })) createEffect(() => { console.log(a()) }) return <div>Hello World!</div>; } render(() => <HelloWorld />, document.getElementById('app')) A: The Solid tutorial strongly recommends immutability: Solid strongly recommends the use of shallow immutable patterns for updating state. By separating reads and writes we maintain better control over the reactivity of our system without the risk of losing track of changes to our proxy when passed through layers of components. An immutable way to accomplish what you're going for might look something like this: setA(a => ['hello', ...a]) If, however, you determine you must mutate the signal, you can specify a how Solid determines if a signal has been updated within createSignal's options object. By default, signal changes are compared by referential equality using the === operator. You can tell Solid to always re-run dependents after the setter is called by setting equals to false: let [a, setA] = createSignal([], { equals: false }); Or you can pass a function that takes the previous and next values of the signal and returns a boolean: let [a, setA] = createSignal([], { equals: (prev, next) => { if (/* dependents should run */) { return false; } // dependents shouldn't run return true; } }); A: Warning: I'm new to Solid.js and Javascript/Typescript in general. What I've done below might be a really bad idea. I came across the same issue and figured out the following. It seems to work. Basically keep the value (e.g. a very large array) in a lightweight wrapped type. Recreate the wrapped type instead of recreating the array. The function createSignalMutable below returns a getter function that returns the unwrapped value, and an updater function that updates the unwrapped value. import { render } from "solid-js/web"; import { createSignal, createEffect } from "solid-js"; type ThinWrapper<T> = { internal: T; }; function createSignalMutable<T>(value: T) { let [getter, setter] = createSignal<ThinWrapper<T>>({ internal: value }); let getUnwrapped = () => getter().internal; let updateUnwrapped = (mutator: (value: T) => void) => setter((oldWrapped: ThinWrapper<T>) => { mutator(oldWrapped.internal); let newWrapped: ThinWrapper<T> = { internal: value }; return newWrapped; }); return [getUnwrapped, updateUnwrapped]; } let initialValue: number[] = []; let [numbers, updateNumbers] = createSignalMutable(initialValue); createEffect(() => { console.log("Array length is now ", [...numbers()].length); }); updateNumbers((val) => val.push(1)); updateNumbers((val) => val.push(2)); This will print Array length is now 0 Array length is now 1 Array length is now 2 Plyground link here.
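Putting the first answer's two pieces together with the asker's in-place splice, a minimal sketch (the equals: false option comes straight from the answer above; the rest mirrors the question's code):
const [a, setA] = createSignal([], { equals: false });
// The in-place splice now re-runs dependents, because equality checking is disabled
setA(list => {
  list.splice(0, 0, 'hello');
  return list;
});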
How do I simply mutate the array inside a signal
This is a working example, with a problem. I want to splice an existing array inside a signal, and return it to see it updated. But it doesn't work. How do I simply mutate the array inside a signal? I don't want to create new arrays, just a simple splice. There is no example in the docs about mutating an array.
import { render } from 'solid-js/web';
import { createSignal, createEffect } from 'solid-js'

function HelloWorld() {
  let [a, setA] = createSignal([])
  setTimeout(() => setA(a => {
    a.splice(0, 0, 'hello')
    // this logs as requested if I uncomment this
    //return ['hello']
    return a
  }))
  createEffect(() => {
    console.log(a())
  })
  return <div>Hello World!</div>;
}

render(() => <HelloWorld />, document.getElementById('app'))
[ "The Solid tutorial strongly recommends immutability:\n\nSolid strongly recommends the use of shallow immutable patterns for updating state. By separating reads and writes we maintain better control over the reactivity of our system without the risk of losing track of changes to our proxy when passed through layers of components.\n\nAn immutable way to accomplish what you're going for might look something like this:\nsetA(a => ['hello', ...a])\n\nIf, however, you determine you must mutate the signal, you can specify a how Solid determines if a signal has been updated within createSignal's options object. By default, signal changes are compared by referential equality using the === operator. You can tell Solid to always re-run dependents after the setter is called by setting equals to false:\nlet [a, setA] = createSignal([], { equals: false });\n\nOr you can pass a function that takes the previous and next values of the signal and returns a boolean:\nlet [a, setA] = createSignal([], { equals: (prev, next) => {\n if (/* dependents should run */) {\n return false;\n }\n // dependents shouldn't run\n return true;\n} });\n\n", "Warning: I'm new to Solid.js and Javascript/Typescript in general. What I've done below might be a really bad idea.\nI came across the same issue and figured out the following. It seems to work. Basically keep the value (e.g. a very large array) in a lightweight wrapped type. Recreate the wrapped type instead of recreating the array. The function createSignalMutable below returns a getter function that returns the unwrapped value, and an updater function that updates the unwrapped value.\nimport { render } from \"solid-js/web\";\nimport { createSignal, createEffect } from \"solid-js\";\n\ntype ThinWrapper<T> = {\n internal: T;\n};\n\nfunction createSignalMutable<T>(value: T) {\n let [getter, setter] = createSignal<ThinWrapper<T>>({ internal: value });\n\n let getUnwrapped = () => getter().internal;\n\n let updateUnwrapped = (mutator: (value: T) => void) =>\n setter((oldWrapped: ThinWrapper<T>) => {\n mutator(oldWrapped.internal);\n let newWrapped: ThinWrapper<T> = { internal: value };\n return newWrapped;\n });\n\n return [getUnwrapped, updateUnwrapped];\n}\n\nlet initialValue: number[] = [];\nlet [numbers, updateNumbers] = createSignalMutable(initialValue);\n\ncreateEffect(() => {\n console.log(\"Array length is now \", [...numbers()].length);\n});\n\nupdateNumbers((val) => val.push(1));\nupdateNumbers((val) => val.push(2));\n\nThis will print\nArray length is now 0\nArray length is now 1\nArray length is now 2\n\nPlyground link here.\n" ]
[ 2, 0 ]
[]
[]
[ "solid_js", "typescript" ]
stackoverflow_0071962713_solid_js_typescript.txt
Q: Errors are reported in the required project, but no error is displayed in the Problems panel below As shown in the image below, whenever I run my model, it always tells me that an error exists in my model. (screenshot) But when I turn to the "Problems" window at the bottom of my workspace, it doesn't show any exceptions, and the model also works normally. (screenshot) I want to know how I can deal with the problem: is there an error in my model, and what may cause such a problem? A: These can happen rarely; typically it is not a problem. Try any of these options: Close the model, then close AnyLogic, then restart and reopen. Delete your Workspace folder (C:/Users/username/.AnyLogicProfessional/Workspace8.8), then restart AnyLogic. Reinstall AnyLogic. Likely the 2nd will help, but try them all.
Errors are reported in the required project, but no error is displayed in the Problems panel below
As shown in the image below, whenever I run my model, it always tells me that an error exists in my model. (screenshot) But when I turn to the "Problems" window at the bottom of my workspace, it doesn't show any exceptions, and the model also works normally. (screenshot) I want to know how I can deal with the problem: is there an error in my model, and what may cause such a problem?
[ "These can happen rarely, typically it is not a problem. Try any of these options:\n\nClose the model, then close AnyLogic, then restart and reopen\nDelete your Workspace folder (C:/Users/username/.AnyLogicProfessional/Workspace8.8), then restart AnyLogic\nReinstall AnyLogic\n\nLikely the 2nd will help, but try them all\n" ]
[ 0 ]
[]
[]
[ "anylogic" ]
stackoverflow_0074664484_anylogic.txt
Q: hide certain row when button clicked what function should I do if I want to hide certain row when I click the button in that row <tr class="record"> <td><?php echo $row['invoice']; ?></td> <td><?php echo $row['name']; ?></td> <td><?php echo $row['qty']; ?></td> <td> <button type="button" name="button" onclick="doneOrder('Is this order done?')" class="btn btn-success"> <i class="fa-solid fa-check"></i> </button> </td> </tr> A: To hide a row when a button in that row is clicked, you can use the style.display property of the tr element to hide the row. Here is an example of how you can do this using JavaScript: function doneOrder(message) { // Show the message to the user alert(message); // Get the parent tr element of the button var trElement = this.parentNode.parentNode; // Set the display property of the tr element to 'none' to hide it trElement.style.display = 'none'; } In this code, the doneOrder function is called when the button is clicked. The function displays a message to the user using the alert function, and then retrieves the parent tr element of the button using the parentNode property. Finally, the style.display property of the tr element is set to 'none' to hide the row. I hope this helps! Let me know if you have any other questions.
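One caveat on the answer's code: when doneOrder is called from an inline onclick as in the question's markup, this inside the function is the window, not the button, so this.parentNode.parentNode will not reach the row. A small corrected sketch, passing the clicked button in explicitly and using the standard Element.closest():
<button type="button" onclick="doneOrder(this, 'Is this order done?')" class="btn btn-success">...</button>

function doneOrder(btn, message) {
  // Ask the user before hiding the row
  if (confirm(message)) {
    // Walk up to the nearest table row and hide it
    btn.closest('tr').style.display = 'none';
  }
}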
hide certain row when button clicked
What function should I write if I want to hide a certain row when I click the button in that row? <tr class="record"> <td><?php echo $row['invoice']; ?></td> <td><?php echo $row['name']; ?></td> <td><?php echo $row['qty']; ?></td> <td> <button type="button" name="button" onclick="doneOrder('Is this order done?')" class="btn btn-success"> <i class="fa-solid fa-check"></i> </button> </td> </tr>
[ "To hide a row when a button in that row is clicked, you can use the style.display property of the tr element to hide the row. Here is an example of how you can do this using JavaScript:\nfunction doneOrder(message) {\n // Show the message to the user\n alert(message);\n\n // Get the parent tr element of the button\n var trElement = this.parentNode.parentNode;\n\n // Set the display property of the tr element to 'none' to hide it\n trElement.style.display = 'none';\n}\n\nIn this code, the doneOrder function is called when the button is clicked. The function displays a message to the user using the alert function, and then retrieves the parent tr element of the button using the parentNode property.\nFinally, the style.display property of the tr element is set to 'none' to hide the row.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "buttonclick", "columnsorting", "function", "javascript", "jquery" ]
stackoverflow_0074665528_buttonclick_columnsorting_function_javascript_jquery.txt
Q: JavaScript JSON reviver in Python I'm having problem translating my JavaScript snippet to Python. The JavaScript code looks like this: const reviver = (_key, value) => { try { return JSON.parse(value, reviver); } catch { if(typeof value === 'string') { const semiValues = value.split(';'); if(semiValues.length > 1) { return stringToObject(JSON.stringify(semiValues)); } const commaValues = value.split(','); if(commaValues.length > 1) { return stringToObject(JSON.stringify(commaValues)); } } const int = Number(value); if(value.length && !isNaN(int)) { return int; } return value; } }; const stringToObject = (str) => { const formatted = str.replace(/"{/g, '{').replace(/}"/g, '}').replace(/"\[/g, '[').replace(/\]"/g, ']').replace(/\\"/g, '"'); return JSON.parse(formatted, reviver); }; The goal of the function is that: String values that are numbers are parsed String values that are json are parsed using these rules String values like "499,504;554,634" should be parsed to [(499, 504), (554, 634)] I have tried using the JSONDecoder. import json def object_hook(value): try: return json.loads(value) except: if(isinstance(value, str)): semiValues = value.split(';') if(len(semiValues) > 1): return parse_response(json.dumps(semiValues)) commaValues = value.split(',') if(commaValues.length > 1): return parse_response(json.dumps(commaValues)) try: return float(value) except ValueError: return value def parse_response(data: str): formatted = data.replace("\"{", "{").replace("}\"", '}').replace("\"[", '[').replace("]\"", ']').replace("\\\"", "\"") return json.load(formatted, object_hook=object_hook) A: I solved my issue by iterating through the values and parse them accordingly import json def parse_value(value): if(isinstance(value, str)): try: return parse_value(json.loads(value)) except: pass semi_values = value.split(';') if(len(semi_values) > 1): return list(map(parse_value, semi_values)) comma_values = value.split(',') if(len(comma_values) > 1): return list(map(parse_value, comma_values)) if(value.replace('.','',1).isdigit()): return int(value) if(isinstance(value, dict)): return {k: parse_value(v) for k, v in value.items()} if(isinstance(value, list)): return list(map(parse_value, value)) return value A: Your Python code looks like it's on the right track, but there are a few issues with it. First, you're using commaValues.length instead of len(commaValues) in the if statement that checks if the length of commaValues is greater than 1. Second, json.load() expects a file-like object as its first argument, not a string. You can use json.loads() instead to parse a JSON string. Here's how I would write the code in Python: import json def reviver(key, value): try: return json.loads(value, reviver=reviver) except: if isinstance(value, str): semiValues = value.split(';') if len(semiValues) > 1: return stringToObject(json.dumps(semiValues)) commaValues = value.split(',') if len(commaValues) > 1: return stringToObject(json.dumps(commaValues)) try: return int(value) except ValueError: return value def stringToObject(str): formatted = str.replace('"{', '{').replace('}"', '}').replace('"[', '[').replace(']"', ']').replace('\\"', '"') return json.loads(formatted, reviver=reviver) Note that I also changed the try statement that tries to convert the string to a number to use int() instead of float() to parse the value as an integer instead of a floating-point number. I also changed the function and variable names to follow the Python convention of using lowercase words separated by underscores (e.g. 
string_to_object instead of stringToObject).
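A caution on the second answer above: Python's json.loads has no reviver keyword (passing reviver=reviver raises a TypeError), and object_hook is only invoked for JSON objects (dicts), never for scalar values, so it cannot reproduce JavaScript's per-value reviver directly; the recursive parse_value from the accepted answer is the workable route. A small usage sketch, assuming the indentation shown there (note the inner pairs come back as lists, not the tuples mentioned in the question):
print(parse_value('{"a": "499,504;554,634", "n": "7"}'))
# expected: {'a': [[499, 504], [554, 634]], 'n': 7}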
JavaScript JSON reviver in Python
I'm having problem translating my JavaScript snippet to Python. The JavaScript code looks like this: const reviver = (_key, value) => { try { return JSON.parse(value, reviver); } catch { if(typeof value === 'string') { const semiValues = value.split(';'); if(semiValues.length > 1) { return stringToObject(JSON.stringify(semiValues)); } const commaValues = value.split(','); if(commaValues.length > 1) { return stringToObject(JSON.stringify(commaValues)); } } const int = Number(value); if(value.length && !isNaN(int)) { return int; } return value; } }; const stringToObject = (str) => { const formatted = str.replace(/"{/g, '{').replace(/}"/g, '}').replace(/"\[/g, '[').replace(/\]"/g, ']').replace(/\\"/g, '"'); return JSON.parse(formatted, reviver); }; The goal of the function is that: String values that are numbers are parsed String values that are json are parsed using these rules String values like "499,504;554,634" should be parsed to [(499, 504), (554, 634)] I have tried using the JSONDecoder. import json def object_hook(value): try: return json.loads(value) except: if(isinstance(value, str)): semiValues = value.split(';') if(len(semiValues) > 1): return parse_response(json.dumps(semiValues)) commaValues = value.split(',') if(commaValues.length > 1): return parse_response(json.dumps(commaValues)) try: return float(value) except ValueError: return value def parse_response(data: str): formatted = data.replace("\"{", "{").replace("}\"", '}').replace("\"[", '[').replace("]\"", ']').replace("\\\"", "\"") return json.load(formatted, object_hook=object_hook)
[ "I solved my issue by iterating through the values and parse them accordingly\nimport json\n\ndef parse_value(value):\n if(isinstance(value, str)):\n try:\n return parse_value(json.loads(value))\n except:\n pass\n semi_values = value.split(';')\n if(len(semi_values) > 1):\n return list(map(parse_value, semi_values))\n comma_values = value.split(',')\n if(len(comma_values) > 1):\n return list(map(parse_value, comma_values))\n if(value.replace('.','',1).isdigit()):\n return int(value)\n if(isinstance(value, dict)):\n return {k: parse_value(v) for k, v in value.items()}\n if(isinstance(value, list)):\n return list(map(parse_value, value))\n return value\n\n", "Your Python code looks like it's on the right track, but there are a few issues with it. First, you're using commaValues.length instead of len(commaValues) in the if statement that checks if the length of commaValues is greater than 1. Second, json.load() expects a file-like object as its first argument, not a string. You can use json.loads() instead to parse a JSON string.\nHere's how I would write the code in Python:\nimport json\n\ndef reviver(key, value):\n try:\n return json.loads(value, reviver=reviver)\n except:\n if isinstance(value, str):\n semiValues = value.split(';')\n if len(semiValues) > 1:\n return stringToObject(json.dumps(semiValues))\n commaValues = value.split(',')\n if len(commaValues) > 1:\n return stringToObject(json.dumps(commaValues))\n\n try:\n return int(value)\n except ValueError:\n return value\n\ndef stringToObject(str):\n formatted = str.replace('\"{', '{').replace('}\"', '}').replace('\"[', '[').replace(']\"', ']').replace('\\\\\"', '\"')\n return json.loads(formatted, reviver=reviver)\n\nNote that I also changed the try statement that tries to convert the string to a number to use int() instead of float() to parse the value as an integer instead of a floating-point number. I also changed the function and variable names to follow the Python convention of using lowercase words separated by underscores (e.g. string_to_object instead of stringToObject).\n" ]
[ 0, 0 ]
[ "Does this code works for you?\ndef reviver(_key, value):\n try:\n return json.loads(value, object_hook=reviver)\n except:\n if type(value) == str:\n semi_values = value.split(';')\n if len(semi_values) > 1:\n return string_to_object(json.dumps(semi_values))\n comma_values = value.split(',')\n if len(comma_values) > 1:\n return string_to_object(json.dumps(comma_values))\n int_val = int(value)\n if len(value) and not isinstance(int_val, int):\n return int_val\n return value\n\ndef string_to_object(str):\n formatted = str.replace('\"{', '{').replace('}\"', '}').replace('\"[', '[').replace(']\"', ']').replace('\\\\\"', '\"')\n return json.loads(formatted, object_hook=reviver)\n\n" ]
[ -1 ]
[ "javascript", "json", "python", "reviver_function" ]
stackoverflow_0074654080_javascript_json_python_reviver_function.txt
Q: Given a subject and an object, what methods can i use to inference a possible verb? Given a subject A and an object B, for example, A is "Peter", B is "iPhone", Peter can be 'playing' or 'using' iPhone, the verb varies depending on the context, in this case, what kinds of method can I use to inference a possible verb? I assume a model, which can be BERT or other models, learns the correlation between subjects, verbs, and objects through a given corpus, but I don't really know about NLP. I am expecting some off-the-shell models, or models that can be used through simple fine-tuning. A: Pretrained language models such as BERT can be used for this task. For your example, you can give BERT an input such as Peter [MASK] an iPhone and let BERT completes the masked tokens. Language models like BERT were pretrained to predict such masked tokens on massive amounts of text, so tasks like this are a perfect fit for them without any fine-tuning. Several drawbacks I can think of: You have to manually specify the number of masked tokens between the subject and the object. For instance, the example above cannot result in Peter is buying an iPhone because there are only one masked token whereas the result has 2 tokens between Peter and an iPhone. Related to the previous one, pretrained language models usually tokenize their input into subwords. For example, the word buying may be tokenized into __buy and ing where __ marks the start of a word. So, you can never get buying as the prediction if your template only has one masked token. There's no way to guarantee that the predicted tokens will always correspond to a verb. You can construct the template such that the masked tokens are much more likely to correspond to a verb e.g., Peter is [MASK] an iPhone but there is always a risk of wrong predictions due to the probabilistic nature of pretrained language models.
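For a concrete starting point with the masked-token approach described above, a minimal sketch using the Hugging Face transformers library (the model choice is illustrative, and the predictions are whatever the model ranks highest, verbs or not):
from transformers import pipeline

# Fill-mask pipeline with a standard pretrained BERT checkpoint
unmasker = pipeline('fill-mask', model='bert-base-uncased')
for pred in unmasker("Peter is [MASK] an iPhone."):
    # Each prediction carries the completed token and its probability
    print(pred['token_str'], round(pred['score'], 3))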
Given a subject and an object, what methods can I use to infer a possible verb?
Given a subject A and an object B (for example, A is "Peter" and B is "iPhone"), Peter can be 'playing' or 'using' the iPhone; the verb varies depending on the context. In this case, what kinds of methods can I use to infer a possible verb? I assume a model, which can be BERT or other models, learns the correlation between subjects, verbs, and objects through a given corpus, but I don't really know much about NLP. I am expecting some off-the-shelf models, or models that can be used through simple fine-tuning.
[ "Pretrained language models such as BERT can be used for this task. For your example, you can give BERT an input such as Peter [MASK] an iPhone and let BERT completes the masked tokens. Language models like BERT were pretrained to predict such masked tokens on massive amounts of text, so tasks like this are a perfect fit for them without any fine-tuning. Several drawbacks I can think of:\n\nYou have to manually specify the number of masked tokens between the subject and the object. For instance, the example above cannot result in Peter is buying an iPhone because there are only one masked token whereas the result has 2 tokens between Peter and an iPhone.\nRelated to the previous one, pretrained language models usually tokenize their input into subwords. For example, the word buying may be tokenized into __buy and ing where __ marks the start of a word. So, you can never get buying as the prediction if your template only has one masked token.\nThere's no way to guarantee that the predicted tokens will always correspond to a verb. You can construct the template such that the masked tokens are much more likely to correspond to a verb e.g., Peter is [MASK] an iPhone but there is always a risk of wrong predictions due to the probabilistic nature of pretrained language models.\n\n" ]
[ 0 ]
[]
[]
[ "bert_language_model", "nlp", "nlp_question_answering" ]
stackoverflow_0074642611_bert_language_model_nlp_nlp_question_answering.txt
Q: Blazor - Bind query strings to a dictionary I have a page with a table and dynamic filters in it. When a filter changes, I need to redirect to the current page with a new or updated query parameter. For example, if I'm in the /users route and the name "John" is being searched, I need to redirect to /users?name=john. If I want to see all Johns with application role, I need to redirect to /users?name=john&hasRole=true and so on. The reason for this is that a search result must be reproducible by sharing the URL. I need all query parameters as a Dictionary<string, string>. How can I bind all query parameters to a Dictionary<string, string> in Blazor WASM? A: Chris Sainty has a nice post on working with query strings in blazor but essentially you'll need the NavManager and Microsoft.AspNetCore.WebUtilities and then you can use the ParseQuery method to get a dictionary of the query string params: protected override void OnInitialized() { var uri = NavManager.ToAbsoluteUri(NavManager.Uri); if (QueryHelpers.ParseQuery(uri.Query).TryGetValue("initialCount", out var _initialCount)) { currentCount = Convert.ToInt32(_initialCount); } } A: If you know what filters you are applying, you define the query parameters as component parameters (using Brandon's counter example in the other answer). @page "/counter" @inject NavigationManager NavManager <PageTitle>Counter</PageTitle> <h1>Counter</h1> <p role="status">Current count: @CurrentCount</p> <button class="btn btn-primary" @onclick="() => IncrementCount(1)">Increment</button> <button class="btn btn-primary" @onclick="() => IncrementCount(50)">Count plus 50</button> @code { [Parameter, SupplyParameterFromQuery] public int CurrentCount { get; set; } = 0; private void IncrementCount(int value) => NavManager.NavigateTo($"/counter?CurrentCount={CurrentCount + value}"); }
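Building on the first answer, a short sketch of collecting every parameter into the Dictionary<string, string> the question asks for; QueryHelpers.ParseQuery returns Dictionary<string, StringValues>, so each value is flattened with ToString() (repeated keys are joined with commas), and System.Linq is required for ToDictionary:
var uri = NavManager.ToAbsoluteUri(NavManager.Uri);
// Flatten StringValues into plain strings, one entry per query key
Dictionary<string, string> query = QueryHelpers.ParseQuery(uri.Query)
    .ToDictionary(kv => kv.Key, kv => kv.Value.ToString());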
Blazor - Bind query strings to a dictionary
I have a page with a table and dynamic filters in it. When a filter changes, I need to redirect to the current page with a new or updated query parameter. For example, if I'm in the /users route and the name "John" is being searched, I need to redirect to /users?name=john. If I want to see all Johns with application role, I need to redirect to /users?name=john&hasRole=true and so on. The reason for this is that a search result must be reproducible by sharing the URL. I need all query parameters as a Dictionary<string, string>. How can I bind all query parameters to a Dictionary<string, string> in Blazor WASM?
[ "Chris Sainty has a nice post on working with query strings in blazor but essentially you'll need the NavManager and Microsoft.AspNetCore.WebUtilities and then you can use the ParseQuery method to get a dictionary of the query string params:\n protected override void OnInitialized()\n {\n var uri = NavManager.ToAbsoluteUri(NavManager.Uri);\n if (QueryHelpers.ParseQuery(uri.Query).TryGetValue(\"initialCount\", out var _initialCount))\n {\n currentCount = Convert.ToInt32(_initialCount);\n }\n }\n\n", "If you know what filters you are applying, you define the query parameters as component parameters (using Brandon's counter example in the other answer).\n@page \"/counter\"\n@inject NavigationManager NavManager\n\n<PageTitle>Counter</PageTitle>\n\n<h1>Counter</h1>\n\n<p role=\"status\">Current count: @CurrentCount</p>\n\n<button class=\"btn btn-primary\" @onclick=\"() => IncrementCount(1)\">Increment</button>\n\n<button class=\"btn btn-primary\" @onclick=\"() => IncrementCount(50)\">Count plus 50</button>\n\n@code {\n [Parameter, SupplyParameterFromQuery] public int CurrentCount { get; set; } = 0;\n\n private void IncrementCount(int value)\n => NavManager.NavigateTo($\"/counter?CurrentCount={CurrentCount + value}\");\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "blazor", "blazor_webassembly" ]
stackoverflow_0074660809_blazor_blazor_webassembly.txt
Q: Check if mac executable has debug info I want to make sure my executable has debug info, trying the linux equivalent doesn't help: $ file ./my_lovely_program ./my_lovely_program: Mach-O 64-bit executable arm64 # with debug info? without? EDIT (from the answer of @haggbart) It seems that my executable has no debug info (?) $ dwarfdump --debug-info ./compi ./compi: file format Mach-O arm64 .debug_info contents: # <--- empty, right? And with the other option, I'm not sure: $ otool -hv ./compi ./compi: Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC_64 ARM64 ALL 0x00 EXECUTE 19 1816 NOUNDEFS DYLDLINK TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK PIE This is very weird because I can perfectly debug it with lldb (lldb) b main Breakpoint 1: where = compi`main + 24 at main.cpp:50:9, address = 0x0000000100018650 (lldb) run Process 6067 launched: '/Users/oren/Downloads/compi' (arm64) Process 6067 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1 frame #0: 0x0000000100018650 compi`main(argc=3, argv=0x000000016fdff7b8) at main.cpp:50:9 47 /*****************/ 48 int main(int argc, char **argv) 49 { -> 50 if (argc == 3) 51 { 52 const char *input = argv[1]; 53 const char *output = argv[2]; Target 0: (compi) stopped. A: To check whether a Mac executable has debug information, you can use the otool command with the -h option. For example: otool -hv /path/to/executable If the output of the command includes a LC_DYSYMTAB section, then the executable has debug information. Here's an example of what that output might look like: dwarfdump command is another tool that you can use to view debug information: dwarfdump --debug-info ./my_lovely_program The output of the dwarfdump command will include a list of the debug information in the executable. If the output includes a .debug_info section, then the executable has debug information. Use the nm command to view the symbols in an executable. The nm command lists the symbols in an executable along with their type and value. If you see any symbols that start with _dyld, then the executable has debug information. Here's an example of how to use the nm command: nm /path/to/executable You can use the file command to view the type of a file. The file command can identify the type of a file based on its contents. If the file command identifies the executable as a Mach-O dynamically linked shared library or Mach-O dynamically linked shared library stub, then it has debug information. Here's an example of how to use the file command: file /path/to/executable If you have access to the source code for the executable, you can check the build settings to see if debug information was included. In most cases, debug information is included by default when you build an executable in a debug configuration, but you can also enable or disable debug information in the build settings. For example, in Xcode, you can check the "Debug Information Format" build setting to see if debug information is included in the executable. A: If you run : dsymutil -s ./my_lovely_propgram | grep N_OSO and it shows output, it means there is debug info.
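As a small usage sketch of the N_OSO check from the second answer, wrapped so it can gate a script (grep -q prints nothing and only sets the exit status):
if dsymutil -s ./my_lovely_program | grep -q N_OSO; then
    echo "debug info present"
else
    echo "no debug info"
fi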
Check if mac executable has debug info
I want to make sure my executable has debug info, trying the linux equivalent doesn't help: $ file ./my_lovely_program ./my_lovely_program: Mach-O 64-bit executable arm64 # with debug info? without? EDIT (from the answer of @haggbart) It seems that my executable has no debug info (?) $ dwarfdump --debug-info ./compi ./compi: file format Mach-O arm64 .debug_info contents: # <--- empty, right? And with the other option, I'm not sure: $ otool -hv ./compi ./compi: Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC_64 ARM64 ALL 0x00 EXECUTE 19 1816 NOUNDEFS DYLDLINK TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK PIE This is very weird because I can perfectly debug it with lldb (lldb) b main Breakpoint 1: where = compi`main + 24 at main.cpp:50:9, address = 0x0000000100018650 (lldb) run Process 6067 launched: '/Users/oren/Downloads/compi' (arm64) Process 6067 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1 frame #0: 0x0000000100018650 compi`main(argc=3, argv=0x000000016fdff7b8) at main.cpp:50:9 47 /*****************/ 48 int main(int argc, char **argv) 49 { -> 50 if (argc == 3) 51 { 52 const char *input = argv[1]; 53 const char *output = argv[2]; Target 0: (compi) stopped.
[ "To check whether a Mac executable has debug information, you can use the otool command with the -h option. For example:\notool -hv /path/to/executable\n\nIf the output of the command includes a LC_DYSYMTAB section, then the executable has debug information. Here's an example of what that output might look like:\ndwarfdump command is another tool that you can use to view debug information:\ndwarfdump --debug-info ./my_lovely_program\n\nThe output of the dwarfdump command will include a list of the debug information in the executable. If the output includes a .debug_info section, then the executable has debug information.\nUse the nm command to view the symbols in an executable. The nm command lists the symbols in an executable along with their type and value. If you see any symbols that start with _dyld, then the executable has debug information. Here's an example of how to use the nm command:\nnm /path/to/executable\n\nYou can use the file command to view the type of a file. The file command can identify the type of a file based on its contents. If the file command identifies the executable as a Mach-O dynamically linked shared library or Mach-O dynamically linked shared library stub, then it has debug information. Here's an example of how to use the file command:\nfile /path/to/executable\n\nIf you have access to the source code for the executable, you can check the build settings to see if debug information was included. In most cases, debug information is included by default when you build an executable in a debug configuration, but you can also enable or disable debug information in the build settings. For example, in Xcode, you can check the \"Debug Information Format\" build setting to see if debug information is included in the executable.\n", "If you run :\ndsymutil -s ./my_lovely_propgram | grep N_OSO\n\nand it shows output, it means there is debug info.\n" ]
[ 1, 0 ]
[]
[]
[ "debug_information", "macos" ]
stackoverflow_0074622676_debug_information_macos.txt
Q: Daily I’m receiving a new table in the BigQuery, I want concatenate this new table data to the main table, dataset schema are same Daily I’m receiving a new table (example :tablename_20220811) in the BigQuery, I want concatenate this new table data to the main_table, dataset schema are same. I tried using wild cards,I don’t know how to pull the daily loaded table. A: You can use BigQuery scheduled queries with an interval (cron) in the schedule parameters : Example with gcloud cli : bq query \ --use_legacy_sql=false \ --destination_table=mydataset.desttable \ --display_name='My Scheduled Query' \ --schedule='every 24 hours' \ --append_table=true \ 'SELECT 1 FROM mydataset.tablename_* where _TABLE_SUFFIX = FORMAT_DATE('%Y%m%d', CURRENT_DATE())' In order to target on the expected table, I used a wildcard and a filter based on the table suffix. The table suffix should be equals to the current date as STRING with the following format yyyymmdd. The cron plan to run the query every day. You can also configure it directly with the Google Cloud console. A: It sounds like you have the right naming format for BigQuery to treat your tables as a single 'date-sharded table'. You need to ensure that the daily tables have the same schema are in the same dataset have the same name apart from the _yyyymmdd suffix You will know if this worked because only one table will appear (with an icon showing multiple tables, rather than the usual icon). With this in hand, you can write queries like SELECT fieldA, fieldB, FROM `some_dataset.tablename_*` WHERE _table_suffix BETWEEN '20220101' AND '20221201' This gives you some idea of what's possible: select from the full date-sharded table using backticks (essential!) and the wildcard syntax filter using the special _table_suffix meta-field
Daily I'm receiving a new table in BigQuery; I want to concatenate the new table's data to the main table, and the dataset schemas are the same
Daily I'm receiving a new table (example: tablename_20220811) in BigQuery, and I want to concatenate this new table's data to the main_table; the dataset schemas are the same. I tried using wildcards, but I don't know how to pull the daily loaded table.
[ "You can use BigQuery scheduled queries with an interval (cron) in the schedule parameters :\nExample with gcloud cli :\nbq query \\\n --use_legacy_sql=false \\\n --destination_table=mydataset.desttable \\\n --display_name='My Scheduled Query' \\\n --schedule='every 24 hours' \\\n --append_table=true \\\n 'SELECT\n 1\n FROM\n mydataset.tablename_*\n where _TABLE_SUFFIX = FORMAT_DATE('%Y%m%d', CURRENT_DATE())'\n\nIn order to target on the expected table, I used a wildcard and a filter based on the table suffix. The table suffix should be equals to the current date as STRING with the following format yyyymmdd.\nThe cron plan to run the query every day.\nYou can also configure it directly with the Google Cloud console.\n", "It sounds like you have the right naming format for BigQuery to treat your tables as a single 'date-sharded table'.\nYou need to ensure that the daily tables\n\nhave the same schema\nare in the same dataset\nhave the same name apart from the _yyyymmdd suffix\n\nYou will know if this worked because only one table will appear (with an icon showing multiple tables, rather than the usual icon).\nWith this in hand, you can write queries like\n SELECT\n fieldA,\n fieldB,\n FROM\n `some_dataset.tablename_*`\n WHERE\n _table_suffix BETWEEN '20220101' AND '20221201'\n\nThis gives you some idea of what's possible:\n\nselect from the full date-sharded table using backticks (essential!) and the wildcard syntax\nfilter using the special _table_suffix meta-field\n\n" ]
[ 0, 0 ]
[]
[]
[ "google_bigquery" ]
stackoverflow_0074662978_google_bigquery.txt
Q: How do I implement a function to traverse a nested list in Prolog? Let's say we have a nested list that looks like this: X = [ [ 1, 2, 3], [4, 5, 6], [7, 8, 0] ] I want to traverse the list X and do the following: Make sure that there are only 3 nested list. Each nested list has 3 integers in them Only one of the nested lists contain a zero (0). For example, if the first nested list has a 0 in it then none of the nested list should have a 0. Make sure none of the nested lists contain an integer greater than 8. So far I did this. But it is not working. nested_lists([[Head|State]]) :- Head >= 0, Head =< 8, nested_lists(State). A: The first thing in X is [1,2,3] so Head >= 0 is [1,2,3] >= 0 which doesn't work. You need another predicate; an outer one to unpack the lists, another to unpack the integers in each list. This pattern: inner([]). inner([N|Ns]) :- write(N), write(' '), inner(Ns). outer([]). outer([L|Ls]) :- inner(L), outer(Ls). e.g. ?- X = [[1, 2, 3], [4, 5, 6], [7, 8, 0]], outer(X). 1 2 3 4 5 6 7 8 0 I'll leave filling in the details as an exercise for the reader, because it's an exercise. If you just want checks achieved, there are shorter ways: validate(X) :- X = [[A,B,C],[D,E,F],[G,H,I]], All = [A,B,C,D,E,F,G,H,I], forall(member(N, All), (integer(N),between(0,8,N))), sort(All, [_,Second|_]), \+ Second = 0. X must unify with that structure of three lists of three items. The variables which are bound must all be integers between 0 and 8. If you sort them and there are more than one 0, the sort will begin [0,0,...] so check the second item and if that's a 0 there are too many zeros, so it should not \+ be a zero.
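One caveat on the validate/1 predicate in the answer above: in SWI-Prolog, sort/2 removes duplicates, so a grid with two zeros sorts to a list that starts [0, ...] rather than [0, 0, ...] and the check wrongly succeeds; msort/2 keeps duplicates. A sketch of the duplicate-zero test with that swap:
% msort/2 keeps duplicates, so two zeros surface as a [0,0|_] prefix
msort(All, Sorted),
\+ Sorted = [0, 0|_]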
How do I implement a function to traverse a nested list in Prolog?
Let's say we have a nested list that looks like this: X = [ [ 1, 2, 3], [4, 5, 6], [7, 8, 0] ] I want to traverse the list X and do the following: Make sure that there are only 3 nested lists. Each nested list has 3 integers in it. Only one of the nested lists contains a zero (0). For example, if the first nested list has a 0 in it, then none of the other nested lists should have a 0. Make sure none of the nested lists contains an integer greater than 8. So far I did this, but it is not working. nested_lists([[Head|State]]) :- Head >= 0, Head =< 8, nested_lists(State).
[ "The first thing in X is [1,2,3] so Head >= 0 is [1,2,3] >= 0 which doesn't work.\nYou need another predicate; an outer one to unpack the lists, another to unpack the integers in each list. This pattern:\ninner([]).\ninner([N|Ns]) :-\n write(N), write(' '),\n inner(Ns).\n\nouter([]).\nouter([L|Ls]) :-\n inner(L),\n outer(Ls).\n\ne.g.\n?- X = [[1, 2, 3], [4, 5, 6], [7, 8, 0]],\n outer(X).\n1 2 3 4 5 6 7 8 0\n\nI'll leave filling in the details as an exercise for the reader, because it's an exercise.\n\nIf you just want checks achieved, there are shorter ways:\nvalidate(X) :-\n X = [[A,B,C],[D,E,F],[G,H,I]],\n All = [A,B,C,D,E,F,G,H,I],\n forall(member(N, All), (integer(N),between(0,8,N))),\n sort(All, [_,Second|_]),\n \\+ Second = 0.\n\nX must unify with that structure of three lists of three items. The variables which are bound must all be integers between 0 and 8. If you sort them and there are more than one 0, the sort will begin [0,0,...] so check the second item and if that's a 0 there are too many zeros, so it should not \\+ be a zero.\n" ]
[ 0 ]
[]
[]
[ "nested_lists", "prolog", "prolog_dif", "swi_prolog" ]
stackoverflow_0074664262_nested_lists_prolog_prolog_dif_swi_prolog.txt
Q: Skips the first page. Scraping python The program does not want to collect data from the first page. Starts collecting from the second page. If I try to collect data from the first page separately, everything works. And with the help of a cycle through the pages, then the first page is skipped import requests from bs4 import BeautifulSoup headers = { "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36" } def collect_products(url="https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"): response = requests.get(url = url, headers = headers) data_list = [] soup = BeautifulSoup(response.text, 'lxml') page_cout = int(soup.find('section', class_ = 'css-j8u5qq').find_all('a', class_ = 'css-1mi714g')[-1].text.strip()) print(f'[INFO] Total pages: { page_cout }') for page in range(1, page_cout + 1): data = {} print(f'[INFO] Processing {page} page') url = f"https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"+f"&page={ page }" response = requests.get(url = url, headers = headers) soup = BeautifulSoup(response.text, 'lxml') items = soup.find_all("div", {"data-cy" : "l-card"}) for item in items: olx = 'https://www.olx.ua' try: link = olx + item.find('a', class_ = 'css-rc5s2u').get('href').strip() except: link = 'err' try: title = item.find('h6', class_ = 'css-1pvd0aj-Text eu5v0x0').text.strip() except: title = 'err' try: fettle = item.find('div', class_ = 'css-puf171').text.strip() except: fettle = 'err' try: price = item.find('p', class_ = 'css-1q7gvpp-Text eu5v0x0').text.strip() except: price = 'err' try: url = f"{link}" response = requests.get(url = url, headers = headers) soup = BeautifulSoup(response.text, 'lxml') description = soup.find('div' , class_ = 'css-g5mtbi-Text').text.strip() except: description = 'err' print(title) print(fettle) print(price) print(link) print(description) return data_list if __name__ == '__main__': collect_products() what are other options to solve the problem? 
A: import httpx import trio from bs4 import BeautifulSoup import pandas as pd from urllib.parse import urljoin headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0' } class Spider: def __init__(self, client) -> None: self.client = client self.limiter = trio.CapacityLimiter(10) async def get(self, page): params = { 'currency': 'UAH', 'page': page } while True: try: r = await self.client.get('noutbuki', params=params) if r.is_success: break except httpx.RequestError: continue return await self.get_soup(r.text) async def get_soup(self, content): return BeautifulSoup(content, 'lxml') async def crawl(data, page, sender): async with data.limiter, sender: soup = await data.get(page) goal = [urljoin(str(data.client.base_url), x['href']) for x in soup.select('a.css-rc5s2u, a.marginright5')] await sender.send(goal) async def main(): async with httpx.AsyncClient(timeout=5, headers=headers, follow_redirects=True, base_url='https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/') as client, trio.open_nursery() as nurse: sender, receiver = trio.open_memory_channel(0) nurse.start_soon(rec, receiver) data = Spider(client) async with sender: for page in range(1, 3): nurse.start_soon(crawl, data, page, sender.clone()) async def rec(receiver): async with receiver: allin = [] async for val in receiver: allin.extend(val) df = pd.DataFrame(allin, columns=['URL']) print(df) if __name__ == "__main__": trio.run(main) Output: URL 0 https://www.olx.ua/d/uk/obyavlenie/lenovo-thin... 1 https://www.olx.ua/d/uk/obyavlenie/kak-novyy-i... 2 https://www.olx.ua/d/uk/obyavlenie/dell-xps-13... 3 https://www.olx.ua/d/uk/obyavlenie/apple-macbo... 4 https://www.olx.ua/d/uk/obyavlenie/u-menya-est... .. ... 91 https://www.olx.ua/d/uk/obyavlenie/noutbuk-ace... 92 https://www.olx.ua/d/uk/obyavlenie/noutbuk-na-... 93 https://www.olx.ua/d/uk/obyavlenie/noutbuk-fuj... 94 https://www.olx.ua/d/uk/obyavlenie/noutbuk-15-... 95 https://www.olx.ua/d/uk/obyavlenie/ultrabuk-hp... [96 rows x 1 columns]
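Worth noting about the rewrite above: its link selector matches two different anchor classes ('a.css-rc5s2u, a.marginright5'), which suggests the first page serves different card markup than later pages. If that is the cause, the original requests-based loop might be salvageable by widening its selector the same way; a hedged sketch (the class names are taken from the answer, not re-verified against the live site):
# Match both anchor variants instead of only 'css-rc5s2u'
links = soup.select('a.css-rc5s2u, a.marginright5')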
Skips the first page. Scraping python
The program does not want to collect data from the first page. Starts collecting from the second page. If I try to collect data from the first page separately, everything works. And with the help of a cycle through the pages, then the first page is skipped import requests from bs4 import BeautifulSoup headers = { "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36" } def collect_products(url="https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"): response = requests.get(url = url, headers = headers) data_list = [] soup = BeautifulSoup(response.text, 'lxml') page_cout = int(soup.find('section', class_ = 'css-j8u5qq').find_all('a', class_ = 'css-1mi714g')[-1].text.strip()) print(f'[INFO] Total pages: { page_cout }') for page in range(1, page_cout + 1): data = {} print(f'[INFO] Processing {page} page') url = f"https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"+f"&page={ page }" response = requests.get(url = url, headers = headers) soup = BeautifulSoup(response.text, 'lxml') items = soup.find_all("div", {"data-cy" : "l-card"}) for item in items: olx = 'https://www.olx.ua' try: link = olx + item.find('a', class_ = 'css-rc5s2u').get('href').strip() except: link = 'err' try: title = item.find('h6', class_ = 'css-1pvd0aj-Text eu5v0x0').text.strip() except: title = 'err' try: fettle = item.find('div', class_ = 'css-puf171').text.strip() except: fettle = 'err' try: price = item.find('p', class_ = 'css-1q7gvpp-Text eu5v0x0').text.strip() except: price = 'err' try: url = f"{link}" response = requests.get(url = url, headers = headers) soup = BeautifulSoup(response.text, 'lxml') description = soup.find('div' , class_ = 'css-g5mtbi-Text').text.strip() except: description = 'err' print(title) print(fettle) print(price) print(link) print(description) return data_list if __name__ == '__main__': collect_products() what are other options to solve the problem?
[ "import httpx\nimport trio\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom urllib.parse import urljoin\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0'\n}\n\n\nclass Spider:\n def __init__(self, client) -> None:\n self.client = client\n self.limiter = trio.CapacityLimiter(10)\n\n async def get(self, page):\n params = {\n 'currency': 'UAH',\n 'page': page\n }\n while True:\n try:\n r = await self.client.get('noutbuki', params=params)\n if r.is_success:\n break\n except httpx.RequestError:\n continue\n return await self.get_soup(r.text)\n\n async def get_soup(self, content):\n return BeautifulSoup(content, 'lxml')\n\n\nasync def crawl(data, page, sender):\n async with data.limiter, sender:\n soup = await data.get(page)\n goal = [urljoin(str(data.client.base_url), x['href'])\n for x in soup.select('a.css-rc5s2u, a.marginright5')]\n await sender.send(goal)\n\n\nasync def main():\n async with httpx.AsyncClient(timeout=5, headers=headers, follow_redirects=True, base_url='https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/') as client, trio.open_nursery() as nurse:\n sender, receiver = trio.open_memory_channel(0)\n nurse.start_soon(rec, receiver)\n data = Spider(client)\n async with sender:\n for page in range(1, 3):\n nurse.start_soon(crawl, data, page, sender.clone())\n\n\nasync def rec(receiver):\n async with receiver:\n allin = []\n async for val in receiver:\n allin.extend(val)\n df = pd.DataFrame(allin, columns=['URL'])\n print(df)\n\n\nif __name__ == \"__main__\":\n trio.run(main)\n\nOutput:\n URL\n0 https://www.olx.ua/d/uk/obyavlenie/lenovo-thin...\n1 https://www.olx.ua/d/uk/obyavlenie/kak-novyy-i...\n2 https://www.olx.ua/d/uk/obyavlenie/dell-xps-13...\n3 https://www.olx.ua/d/uk/obyavlenie/apple-macbo...\n4 https://www.olx.ua/d/uk/obyavlenie/u-menya-est...\n.. ...\n91 https://www.olx.ua/d/uk/obyavlenie/noutbuk-ace...\n92 https://www.olx.ua/d/uk/obyavlenie/noutbuk-na-...\n93 https://www.olx.ua/d/uk/obyavlenie/noutbuk-fuj...\n94 https://www.olx.ua/d/uk/obyavlenie/noutbuk-15-...\n95 https://www.olx.ua/d/uk/obyavlenie/ultrabuk-hp...\n\n[96 rows x 1 columns]\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074665378_beautifulsoup_python_web_scraping.txt
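A minimal sketch of one more way around the skipped first page, on the assumption that OLX serves page 1 correctly when the page parameter is simply omitted; the CSS class names are taken from the question and may change at any time:

import requests
from bs4 import BeautifulSoup

BASE = "https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"
headers = {"User-Agent": "Mozilla/5.0"}

def collect_page(page):
    # Omit the page parameter on the first page in case the site
    # redirects or reshuffles results when page=1 is passed explicitly.
    url = BASE if page == 1 else f"{BASE}&page={page}"
    soup = BeautifulSoup(requests.get(url, headers=headers).text, "lxml")
    rows = []
    for item in soup.find_all("div", {"data-cy": "l-card"}):
        link = item.find("a", class_="css-rc5s2u")
        title = item.find("h6")
        rows.append({
            "link": "https://www.olx.ua" + link.get("href") if link else "err",
            "title": title.text.strip() if title else "err",
        })
    return rows  # append these to data_list so the caller gets real data back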
Q: API calls get canceled on page reload React JS I have a situation where I want to log the user's time whenever they reload or close a tab. I have written the below code for the same. useEffect(() => { const cleanup = () => { deleteAgentLoginDetails(`agentId=${userId.toUpperCase()}`); agentLogout(`agentId=${userId.toUpperCase()}`); }; window.addEventListener('beforeunload', cleanup); return () => { window.removeEventListener('beforeunload', cleanup); }; }, []); My API call gets triggered but also gets canceled as soon as I reload the page. Is there any alternative way to achieve this? Below is a screenshot of the network tab. (I have preserve logs enabled; that's why I can see the logs after refresh). network call screenshot A: It looks like the issue is that the beforeunload event is being cancelled by the browser when the page is refreshed. This is because the beforeunload event is meant to allow the page to prompt the user before they navigate away from the page, but the browser automatically cancels this event when the page is being refreshed. One possible solution to this issue would be to use the unload event instead of the beforeunload event. The unload event is not cancelable, so it should be triggered even when the page is refreshed. Here is an example of how you could use the unload event in your code: useEffect(() => { const cleanup = () => { deleteAgentLoginDetails(`agentId=${userId.toUpperCase()}`); agentLogout(`agentId=${userId.toUpperCase()}`); }; window.addEventListener('unload', cleanup); return () => { window.removeEventListener('unload', cleanup); }; }, []); Another possible solution would be to use the beforeunload event, but to call your API using an asynchronous function (e.g. using setTimeout) so that the browser has time to complete the unload process before the API call is made. This would allow the beforeunload event to complete and trigger your API call, even when the page is being refreshed. Here is an example of how you could do this: useEffect(() => { const cleanup = () => { setTimeout(() => { deleteAgentLoginDetails(`agentId=${userId.toUpperCase()}`); agentLogout(`agentId=${userId.toUpperCase()}`); }, 0); }; window.addEventListener('beforeunload', cleanup); return () => { window.removeEventListener('beforeunload', cleanup); }; }, []);
API calls get canceled on page reload React JS
I have a situation where I want to log the user's time whenever they reload or close a tab. I have written the below code for the same. useEffect(() => { const cleanup = () => { deleteAgentLoginDetails(`agentId=${userId.toUpperCase()}`); agentLogout(`agentId=${userId.toUpperCase()}`); }; window.addEventListener('beforeunload', cleanup); return () => { window.removeEventListener('beforeunload', cleanup); }; }, []); My API call gets triggered but also gets canceled as soon as I reload the page. Is there any alternative way to achieve this? Below is a screenshot of the network tab. (I have preserve logs enabled; that's why I can see the logs after refresh). network call screenshot
[ "It looks like the issue is that the beforeunload event is being cancelled by the browser when the page is refreshed. This is because the beforeunload event is meant to allow the page to prompt the user before they navigate away from the page, but the browser automatically cancels this event when the page is being refreshed.\nOne possible solution to this issue would be to use the unload event instead of the beforeunload event. The unload event is not cancelable, so it should be triggered even when the page is refreshed.\nHere is an example of how you could use the unload event in your code:\nuseEffect(() => {\n const cleanup = () => {\n deleteAgentLoginDetails(`agentId=${userId.toUpperCase()}`);\n agentLogout(`agentId=${userId.toUpperCase()}`);\n };\n\n window.addEventListener('unload', cleanup);\n\n return () => {\n window.removeEventListener('unload', cleanup);\n };\n}, []);\n\nAnother possible solution would be to use the beforeunload event, but to call your API using an asynchronous function (e.g. using setTimeout) so that the browser has time to complete the unload process before the API call is made. This would allow the beforeunload event to complete and trigger your API call, even when the page is being refreshed.\nHere is an example of how you could do this:\nuseEffect(() => {\n const cleanup = () => {\n setTimeout(() => {\n deleteAgentLoginDetails(`agentId=${userId.toUpperCase()}`);\n agentLogout(`agentId=${userId.toUpperCase()}`);\n }, 0);\n };\n\n window.addEventListener('beforeunload', cleanup);\n\n return () => {\n window.removeEventListener('beforeunload', cleanup);\n };\n}, []);\n\n" ]
[ 0 ]
[]
[]
[ "frontend", "javascript", "reactjs", "typescript" ]
stackoverflow_0074657352_frontend_javascript_reactjs_typescript.txt
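A sketch of a third option that sidesteps the cancellation problem, using navigator.sendBeacon, which is designed to deliver a small payload while a page is being torn down; the /api/agent-logout URL is a placeholder standing in for whatever endpoint the two calls above actually hit:

useEffect(() => {
  const cleanup = () => {
    // sendBeacon hands the request to the browser, so it survives the
    // navigation that cancels in-flight XHR/fetch calls on reload.
    const payload = new Blob(
      [JSON.stringify({ agentId: userId.toUpperCase() })],
      { type: 'application/json' }
    );
    navigator.sendBeacon('/api/agent-logout', payload);
  };
  window.addEventListener('beforeunload', cleanup);
  return () => window.removeEventListener('beforeunload', cleanup);
}, []);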
Q: Shopify Storage Redis Issue with Node React App I have added session storage in serve.js as follows: import SessionHandler from "./SessionHandler"; const sessionStorage = new SessionHandler(); Shopify.Context.initialize({ API_KEY: process.env.SHOPIFY_API_KEY, API_SECRET_KEY: process.env.SHOPIFY_API_SECRET, SCOPES: process.env.SCOPES.split(","), HOST_NAME: process.env.HOST.replace(/https:\/\//, ""), API_VERSION: ApiVersion.October21, IS_EMBEDDED_APP: false, // This should be replaced with your preferred storage strategy //SESSION_STORAGE: new Shopify.Session.MemorySessionStorage(), SESSION_STORAGE: new Shopify.Session.CustomSessionStorage( sessionStorage.storeCallback, sessionStorage.loadCallback, sessionStorage.deleteCallback ), }); My router get function is router.get("(.*)", async (ctx) => { const shop = ctx.query.shop; let documentQuery = { shop: shop }; let data = await SessionStorage.findOne(documentQuery); //this finds the store in the session table if (ACTIVE_SHOPIFY_SHOPS[shop] === undefined) { if (data == null) { ctx.redirect(`/auth?shop=${shop}`); } else { await handleRequest(ctx); } } else { await handleRequest(ctx); } }); and then in the SessionHandler file I added the code as attached in the file, but when I install the app, it calls the storeCallback, loadCallback and deleteCallback functions multiple times StoreCallback Function Code Load and delete callback function code A: Sorry, I have edited my answer as I think it's incorrect. All I can say for now is to look at this example: https://github.com/Shopify/shopify-api-node/blob/main/docs/usage/customsessions.md if you haven't already.
Shopify Storage Redis Issue with Node React App
I have added session storage in serve.js as follows: import SessionHandler from "./SessionHandler"; const sessionStorage = new SessionHandler(); Shopify.Context.initialize({ API_KEY: process.env.SHOPIFY_API_KEY, API_SECRET_KEY: process.env.SHOPIFY_API_SECRET, SCOPES: process.env.SCOPES.split(","), HOST_NAME: process.env.HOST.replace(/https:\/\//, ""), API_VERSION: ApiVersion.October21, IS_EMBEDDED_APP: false, // This should be replaced with your preferred storage strategy //SESSION_STORAGE: new Shopify.Session.MemorySessionStorage(), SESSION_STORAGE: new Shopify.Session.CustomSessionStorage( sessionStorage.storeCallback, sessionStorage.loadCallback, sessionStorage.deleteCallback ), }); My router get function is router.get("(.*)", async (ctx) => { const shop = ctx.query.shop; let documentQuery = { shop: shop }; let data = await SessionStorage.findOne(documentQuery); //this finds the store in the session table if (ACTIVE_SHOPIFY_SHOPS[shop] === undefined) { if (data == null) { ctx.redirect(`/auth?shop=${shop}`); } else { await handleRequest(ctx); } } else { await handleRequest(ctx); } }); and then in the SessionHandler file I added the code as attached in the file, but when I install the app, it calls the storeCallback, loadCallback and deleteCallback functions multiple times StoreCallback Function Code Load and delete callback function code
[ "sorry I have edited my answer as I think its incorrect . all I can say for now is to look at this example:https://github.com/Shopify/shopify-api-node/blob/main/docs/usage/customsessions.md\nif you havent already..\n" ]
[ 0 ]
[]
[]
[ "redis", "shopify", "shopify_api_node", "shopify_app" ]
stackoverflow_0071231704_redis_shopify_shopify_api_node_shopify_app.txt
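For reference, a minimal sketch of what the three callbacks can look like backed by Redis, loosely following the linked customsessions example; the ioredis client and the key prefix are assumptions for illustration, not part of the original app:

import Redis from 'ioredis';

const client = new Redis(process.env.REDIS_URL);

export default class SessionHandler {
  // Called whenever the library creates or updates a session.
  storeCallback = async (session) => {
    await client.set(`shopify_session_${session.id}`, JSON.stringify(session));
    return true;
  };

  // Called whenever the library needs to look a session up by id; being
  // invoked several times per request is normal for these callbacks.
  loadCallback = async (id) => {
    const raw = await client.get(`shopify_session_${id}`);
    return raw ? JSON.parse(raw) : undefined;
  };

  // Called when a session should be removed.
  deleteCallback = async (id) => {
    await client.del(`shopify_session_${id}`);
    return true;
  };
}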
Q: Using NodeJs and Javascript to Download Zip Files? I have tried multiple methods suggested on stackoverflow, but each time I end up with a corrupted 1 or 2 kb zip file. If anyone can correct my error or suggest an alternate path to downloading ZIPs, I'd be grateful. I want to extract them too, but that's a battle for later. Here's my latest attempt: const http = require('https'); // or 'https' for https:// URLs const download = (url, dest, cb) => { const file = fs.createWriteStream(dest); const request = http.get(url, (response) => { // check if response is success response.pipe(file); }); // close() is async, call cb after close completes file.on('finish', () => file.close(cb)); // check for request error too request.on('error', (err) => { fs.unlink(dest, () => cb(err.message)); // delete the (partial) file and then return the error }); file.on('error', (err) => { // Handle errors fs.unlink(dest, () => cb(err.message)); // delete the (partial) file and then return the error }); }; download("https://file-examples.com/wp-content/uploads/2017/02/zip_5MB.zip", __dirname + "/Download/" + "Zipfile.zip"); Thank you for your help and time, it is greatly appreciated. A: There are a few issues with your code that could be causing the zip file to be corrupted. One issue is that you are not checking the HTTP status code of the response to ensure that it is a success (status code 200). If the response is not successful, the zip file may not be downloaded completely, resulting in a corrupted file. Another issue is that you are not handling the error event for the response object. This means that if an error occurs while downloading the file, your code will not be able to handle it and the file may be corrupted. Additionally, you are not calling the callback function (cb) after the file has been downloaded, which means that your code may not be able to properly handle the completion of the download. Here is an updated version of your code that addresses these issues: const https = require('https'); const fs = require('fs'); const download = (url, dest, cb) => { // create the write stream for the destination file const file = fs.createWriteStream(dest); // make the HTTP GET request const request = https.get(url, (response) => { // check if the response is successful (status code 200) if (response.statusCode === 200) { // pipe the response data to the destination file response.pipe(file); } else { // if the response is not successful, return an error cb(new Error(`Request failed with status code: ${response.statusCode}`)); } }); // handle errors for the request object request.on('error', (err) => { // delete the (partial) file and then return the error fs.unlink(dest, () => cb(err)); }); // handle errors for the write stream file.on('error', (err) => { // delete the (partial) file and then return the error fs.unlink(dest, () => cb(err)); }); // close the write stream and call the callback function when it is done file.on('finish', () => file.close(cb)); }; download("https://file-examples.com/wp-content/uploads/2017/02/zip_5MB.zip", __dirname + "/Download/" + "Zipfile.zip", (err) => { if (err) console.error(err); else console.log('Download finished'); });
Using NodeJs and Javascript to Download Zip Files?
I have tried multiple methods suggested on stackoverflow, but each time I end up with a corrupted 1 or 2 kb zip file. If anyone can correct my error or suggest an alternate path to downloading ZIPs, I'd be grateful. I want to extract them too, but that's a battle for later. Here's my latest attempt: const http = require('https'); // or 'https' for https:// URLs const download = (url, dest, cb) => { const file = fs.createWriteStream(dest); const request = http.get(url, (response) => { // check if response is success response.pipe(file); }); // close() is async, call cb after close completes file.on('finish', () => file.close(cb)); // check for request error too request.on('error', (err) => { fs.unlink(dest, () => cb(err.message)); // delete the (partial) file and then return the error }); file.on('error', (err) => { // Handle errors fs.unlink(dest, () => cb(err.message)); // delete the (partial) file and then return the error }); }; download("https://file-examples.com/wp-content/uploads/2017/02/zip_5MB.zip", __dirname + "/Download/" + "Zipfile.zip"); Thank you for your help and time, it is greatly appreciated.
[ "There are a few issues with your code that could be causing the zip file to be corrupted. One issue is that you are not checking the HTTP status code of the response to ensure that it is a success (status code 200). If the response is not successful, the zip file may not be downloaded completely, resulting in a corrupted file.\nAnother issue is that you are not handling the error event for the response object. This means that if an error occurs while downloading the file, your code will not be able to handle it and the file may be corrupted.\nAdditionally, you are not calling the callback function (cb) after the file has been downloaded, which means that your code may not be able to properly handle the completion of the download.\nHere is an updated version of your code that addresses these issues:\nconst http = require('https'); // or 'https' for https:// URLs\n\nconst download = (url, dest, cb) => {\n // create the write stream for the destination file\n const file = fs.createWriteStream(dest);\n\n // make the HTTP GET request\n const request = http.get(url, (response) => {\n // check if the response is successful (status code 200)\n if (response.statusCode === 200) {\n // pipe the response data to the destination file\n response.pipe(file);\n } else {\n // if the response is not successful, return an error\n cb(new Error(`Request failed with status code: ${response.statusCode}`));\n }\n });\n\n // handle errors for the request object\n request.on('error', (err) => {\n // delete the (partial) file and then return the error\n fs.unlink(dest, () => cb(err));\n });\n\n // handle errors for the write stream\n file.on('error', (err) => {\n // delete the (partial) file and then return the error\n fs.unlink(dest, () => cb(err));\n });\n\n // close the write stream and call the callback function when it is done\n file.on('finish', () => file.close(cb));\n};\n\ndownload(\"https://file-examples.com/wp-content/uploads/2017/02/zip_5MB.zip\", __dirname + \"/Download/\" + \"Zipfile.zip\", (err) => {\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "node.js", "zip" ]
stackoverflow_0074665435_javascript_node.js_zip.txt
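For comparison, a sketch of the same download using stream.pipeline, which wires up error propagation and cleanup for both streams in one call; the status-code check is still needed first:

const https = require('https');
const fs = require('fs');
const { pipeline } = require('stream');

const download = (url, dest, cb) => {
  https.get(url, (response) => {
    if (response.statusCode !== 200) {
      cb(new Error(`Request failed with status code: ${response.statusCode}`));
      return;
    }
    // pipeline forwards errors from either stream and closes both.
    pipeline(response, fs.createWriteStream(dest), cb);
  }).on('error', cb);
};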
Q: Swift Codable Type encodes successfully but fails to decode I have an enum for decoding points, polygons, and multipolygons from GeoJSON files using several nested Codable types. I can decode GeoJSON files without issue, but when I encode this data to store as binary data in Core Data, it can't be decoded again. I'm getting typeMismatch when trying to decode. Here's my enum: enum GeoJSONFeatureGeometryCoordinates: Codable { case point([Double]) case polygon([[[Double]]]) case multipolygon([[[[Double]]]]) init(from decoder: Decoder) throws { let container = try decoder.singleValueContainer() do { let polygonVal = try container.decode([[[Double]]].self) self = .polygon(polygonVal) } catch DecodingError.typeMismatch { do { let multipolygonVal = try container.decode([[[[Double]]]].self) self = .multipolygon(multipolygonVal) } catch DecodingError.typeMismatch { let pointVal = try container.decode([Double].self) self = .point(pointVal) } } } } And here's where I successfully encode it but throw an error when testing the decoding do { let encoder = JSONEncoder() let encodedCoordinates = try encoder.encode(centroid.geometry.coordinates) let decoder = JSONDecoder() let decodedCoordinates = try decoder.decode(GeoJSONFeatureGeometryCoordinates.self, from: encodedCoordinates) } catch { fatalError("\n\n Error archiving feature.geometry.coordinates as Data") } A: The decoding fails because you are encoding it in one format and decoding it in another format. Since you did not provide an encode(to:) implementation, an implementation is automatically generated by the Swift compiler. This implementation encodes your enum to a format like this: { "point": { "_0": [] // the Double array goes here } } Notice how the name of the enum case, as well as the names of the associated values are also recorded. This is different from the format that your init(from:) implementation expects. Therefore decoding fails. You can add an explicit implementation of encode(to:), so that it produces the format that init(from:) expects. func encode(to encoder: Encoder) throws { var container = encoder.singleValueContainer() switch self { case .point(let arr): try container.encode(arr) case .polygon(let arr): try container.encode(arr) case .multipolygon(let arr): try container.encode(arr) } }
Swift Codable Type encodes successfully but fails to decode
I have an enum for decoding points, polygons, and multipolygons from GeoJSON files using several nested Codable types. I can decode GeoJSON files without issue, but when I encode this data to store as binary data in Core Data, it can't be decoded again. I'm getting typeMismatch when trying to decode. Here's my enum: enum GeoJSONFeatureGeometryCoordinates: Codable { case point([Double]) case polygon([[[Double]]]) case multipolygon([[[[Double]]]]) init(from decoder: Decoder) throws { let container = try decoder.singleValueContainer() do { let polygonVal = try container.decode([[[Double]]].self) self = .polygon(polygonVal) } catch DecodingError.typeMismatch { do { let multipolygonVal = try container.decode([[[[Double]]]].self) self = .multipolygon(multipolygonVal) } catch DecodingError.typeMismatch { let pointVal = try container.decode([Double].self) self = .point(pointVal) } } } } And here's where I successfully encode it but throw an error when testing the decoding do { let encoder = JSONEncoder() let encodedCoordinates = try encoder.encode(centroid.geometry.coordinates) let decoder = JSONDecoder() let decodedCoordinates = try decoder.decode(GeoJSONFeatureGeometryCoordinates.self, from: encodedCoordinates) } catch { fatalError("\n\n Error archiving feature.geometry.coordinates as Data") }
[ "The decoding fails because you are encoding it in one format and decoding it in another format.\nSince you did not provide an encode(to:) implementation, an implementation is automatically generated by the Swift compiler. This implementation encodes your enum to a format like this:\n{\n \"point\": { \n \"_0\": [] // the Double array goes here\n }\n}\n\nNotice how the name of the enum case, as well as the names of the associated values are also recorded. This is different from the format that your init(from:) implementation expects. Therefore decoding fails.\nYou can add an explicit implementation of encode(to:), so that it produces the format that init(from:) expects.\nfunc encode(to encoder: Encoder) throws {\n var container = encoder.singleValueContainer()\n switch self {\n case .point(let arr):\n try container.encode(arr)\n case .polygon(let arr):\n try container.encode(arr)\n case .multipolygon(let arr):\n try container.encode(arr)\n }\n}\n\n" ]
[ 2 ]
[]
[]
[ "codable", "geojson", "swift" ]
stackoverflow_0074665467_codable_geojson_swift.txt
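A quick round-trip check of the fixed type; the sample coordinate values are arbitrary:

let original = GeoJSONFeatureGeometryCoordinates.point([30.5, 50.4])
let encoded = try JSONEncoder().encode(original)
// With the custom encode(to:), the payload is just "[30.5,50.4]",
// which matches what init(from:) expects to decode.
let decoded = try JSONDecoder().decode(
    GeoJSONFeatureGeometryCoordinates.self,
    from: encoded
)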
Q: Set environment variable with cron. Not working How can I set it, for example, with a cronjob 1/* * * * * bash -c 'export TZ=Europe/London' 1/* * * * * bash -c '. /path/to/script.sh' This script has multiple exports; it gets the timezone offset, and I would like the cronjob to update the system var TZ with that value. If I run it from a shell it works with ". /path/to/script.sh" as source, but not in a cronjob. At the least, how do I set environment variables with a cronjob? I have also tried having the script write TZ=Europe/London into /etc/environment and a cronjob running set -a;. /etc/environment; set +a; None are working, but they do from a shell. Here is my script. What are my options from here when the script is run by a cronjob or anything else... I also tried to make it run with supervisord... same result. Everything works when run locally from a shell, but I can't figure out how to modify the system "$TZ" environment with a script or from a cronjob. The script writes into /etc/environment, but when I set it from cron it still won't update, though it will from a shell. I know it all works from the current shell, but /etc/environment should be available to all. I have edited: the way to go is to have the script write into /etc/profile.d/custom.sh. This also works inside a Docker container to periodically update an environment variable if needed. #!/bin/bash ####################### ## Preload variables ## ####################### . /etc/profile . ~/.bash_profile . ~/.bashrc ############################# ## Get Boardtime from API ## ############################# boardtime=$(curl -s http://server/api/get-time \ -u user:pass \ -d locale='en_US' | \ grep -oP '(?<=boardTime":")[^","boardTimeLocalized"]*' | \ sed -r -e 's/^.{8}/& /' -e 's/[^ ]{2}/&:/5g' | \ sed 's/:$//') formatted_boardtime=$(date -d "$boardtime" "+%a, %e %b %Y %H:%M:%S %z") echo -ne "\nLocaltime is $formatted_boardtime!\n" ################################### ## Get timezone offset from ship ## ################################### minutes=$(curl -s http://server/timezone/currentOffset.txt | grep -oP "[-+][0-9]{0,9}/|[0-9]{0,9}") echo -ne "\nTimezone offset is $minutes\n" ((h=$minutes / 60)) ############################# ## Set Offset Symbol Value ## ############################# symbol=$(echo $minutes | grep -oP "[-]") digit=$(echo $h | grep -oP "[0-9]{2,4}") if [[ $symbol == "-" ]] && [[ $digit ]]; then offset="-"$h"00" elif [[ $symbol != "-" ]] && [[ $digit ]]; then offset="+"$h"00" elif [[ $symbol == "-" ]] && [[ !
$digit ]]; then offset="-0$(echo "$h" | grep -oP '[0-9]{0,4}')00" else offset="+0$(echo "$h" | grep -oP '[0-9]{0,4}')00" fi ########################## ## Set Current timezone ## ########################## localtime=$(echo $formatted_boardtime | sed -E "s/([-+][0-9]{4})$/"$offset"/") if [[ $symbol == "-" && $offset == -1100 ]]; then export TZ="US/Samoa" elif [[ $symbol == "-" && $offset == -1000 ]]; then export TZ="Pacific/Honolulu" elif [[ $symbol == "-" && $offset == -0900 ]]; then export TZ="America/Nome" elif [[ $symbol == "-" && $offset == -0800 ]]; then export TZ="America/Los_Angeles" elif [[ $symbol == "-" && $offset == -0700 ]]; then export TZ="America/Denver" elif [[ $symbol == "-" && $offset == -0600 ]]; then export TZ="America/Cancun" elif [[ $symbol == "-" && $offset == -0500 ]]; then export TZ="America/Lima" elif [[ $symbol == "-" && $offset == -0400 ]]; then export TZ="America/Santiago" elif [[ $symbol == "-" && $offset == -0300 ]]; then export TZ="America/Bahia" elif [[ $symbol == "-" && $offset == -0200 ]]; then export TZ="America/Noronha" elif [[ $symbol == "-" && $offset == -0100 ]]; then export TZ="Atlantic/Azores" elif [[ $symbol != "-" && $offset == +0000 ]]; then export TZ="Europe/London" elif [[ $symbol != "-" && $offset == +0100 ]]; then export TZ="Europe/Berlin" elif [[ $symbol != "-" && $offset == +0200 ]]; then export TZ="Europe/Athens" elif [[ $symbol != "-" && $offset == +0300 ]]; then export TZ="Asia/Qatar" elif [[ $symbol != "-" && $offset == +0400 ]]; then export TZ="Asia/Dubai" elif [[ $symbol != "-" && $offset == +0500 ]]; then export TZ="Indian/Maldives" elif [[ $symbol != "-" && $offset == +0600 ]]; then export TZ="Asia/Thimbu" elif [[ $symbol != "-" && $offset == +0700 ]]; then export TZ="Asia/Bangkok" elif [[ $symbol != "-" && $offset == +0800 ]]; then export TZ="Asia/Hong_Kong" elif [[ $symbol != "-" && $offset == +0900 ]]; then export TZ="Asia/Tokyo" elif [[ $symbol != "-" && $offset == +1000 ]]; then export TZ="Australia/Melbourne" elif [[ $symbol != "-" && $offset == +1100 ]]; then export TZ="Pacific/Ponape" elif [[ $symbol != "-" && $offset == +1200 ]]; then export TZ="Pacific/Fiji" elif [[ $symbol != "-" && $offset == +1300 ]]; then export TZ="Pacific/Apia" elif [[ $symbol != "-" && $offset == +1400 ]]; then export TZ="Pacific/Kiritimati" fi echo -ne "\nLocaltime is $localtime!\n" echo -ne "\nTimezone matching offset is set $TZ!\n" echo "$TZ" > /etc/timezone echo "TZ=$TZ" > /etc/profile.d/custom.sh ########################################################### ## Restart postfix to apply new timezone also to postfix ## ########################################################### /usr/sbin/postfix stop && /usr/sbin/postfix start A: Can mark as a working solution for this using as follows then: The solution is create a /etc/profile.d/custom.sh So I am writting in there with the script the that is ran by a cronjob and it will work also as a non-login also. Inside there simply add all the environments you need like export TZ=Europe/London export MYENV=value export etc.. ..
Set environment variable with cron. Not working
How can I set it, for example, with a cronjob 1/* * * * * bash -c 'export TZ=Europe/London' 1/* * * * * bash -c '. /path/to/script.sh' This script has multiple exports; it gets the timezone offset, and I would like the cronjob to update the system var TZ with that value. If I run it from a shell it works with ". /path/to/script.sh" as source, but not in a cronjob. At the least, how do I set environment variables with a cronjob? I have also tried having the script write TZ=Europe/London into /etc/environment and a cronjob running set -a;. /etc/environment; set +a; None are working, but they do from a shell. Here is my script. What are my options from here when the script is run by a cronjob or anything else... I also tried to make it run with supervisord... same result. Everything works when run locally from a shell, but I can't figure out how to modify the system "$TZ" environment with a script or from a cronjob. The script writes into /etc/environment, but when I set it from cron it still won't update, though it will from a shell. I know it all works from the current shell, but /etc/environment should be available to all. I have edited: the way to go is to have the script write into /etc/profile.d/custom.sh. This also works inside a Docker container to periodically update an environment variable if needed. #!/bin/bash ####################### ## Preload variables ## ####################### . /etc/profile . ~/.bash_profile . ~/.bashrc ############################# ## Get Boardtime from API ## ############################# boardtime=$(curl -s http://server/api/get-time \ -u user:pass \ -d locale='en_US' | \ grep -oP '(?<=boardTime":")[^","boardTimeLocalized"]*' | \ sed -r -e 's/^.{8}/& /' -e 's/[^ ]{2}/&:/5g' | \ sed 's/:$//') formatted_boardtime=$(date -d "$boardtime" "+%a, %e %b %Y %H:%M:%S %z") echo -ne "\nLocaltime is $formatted_boardtime!\n" ################################### ## Get timezone offset from ship ## ################################### minutes=$(curl -s http://server/timezone/currentOffset.txt | grep -oP "[-+][0-9]{0,9}/|[0-9]{0,9}") echo -ne "\nTimezone offset is $minutes\n" ((h=$minutes / 60)) ############################# ## Set Offset Symbol Value ## ############################# symbol=$(echo $minutes | grep -oP "[-]") digit=$(echo $h | grep -oP "[0-9]{2,4}") if [[ $symbol == "-" ]] && [[ $digit ]]; then offset="-"$h"00" elif [[ $symbol != "-" ]] && [[ $digit ]]; then offset="+"$h"00" elif [[ $symbol == "-" ]] && [[ !
$digit ]]; then offset="-0$(echo "$h" | grep -oP '[0-9]{0,4}')00" else offset="+0$(echo "$h" | grep -oP '[0-9]{0,4}')00" fi ########################## ## Set Current timezone ## ########################## localtime=$(echo $formatted_boardtime | sed -E "s/([-+][0-9]{4})$/"$offset"/") if [[ $symbol == "-" && $offset == -1100 ]]; then export TZ="US/Samoa" elif [[ $symbol == "-" && $offset == -1000 ]]; then export TZ="Pacific/Honolulu" elif [[ $symbol == "-" && $offset == -0900 ]]; then export TZ="America/Nome" elif [[ $symbol == "-" && $offset == -0800 ]]; then export TZ="America/Los_Angeles" elif [[ $symbol == "-" && $offset == -0700 ]]; then export TZ="America/Denver" elif [[ $symbol == "-" && $offset == -0600 ]]; then export TZ="America/Cancun" elif [[ $symbol == "-" && $offset == -0500 ]]; then export TZ="America/Lima" elif [[ $symbol == "-" && $offset == -0400 ]]; then export TZ="America/Santiago" elif [[ $symbol == "-" && $offset == -0300 ]]; then export TZ="America/Bahia" elif [[ $symbol == "-" && $offset == -0200 ]]; then export TZ="America/Noronha" elif [[ $symbol == "-" && $offset == -0100 ]]; then export TZ="Atlantic/Azores" elif [[ $symbol != "-" && $offset == +0000 ]]; then export TZ="Europe/London" elif [[ $symbol != "-" && $offset == +0100 ]]; then export TZ="Europe/Berlin" elif [[ $symbol != "-" && $offset == +0200 ]]; then export TZ="Europe/Athens" elif [[ $symbol != "-" && $offset == +0300 ]]; then export TZ="Asia/Qatar" elif [[ $symbol != "-" && $offset == +0400 ]]; then export TZ="Asia/Dubai" elif [[ $symbol != "-" && $offset == +0500 ]]; then export TZ="Indian/Maldives" elif [[ $symbol != "-" && $offset == +0600 ]]; then export TZ="Asia/Thimbu" elif [[ $symbol != "-" && $offset == +0700 ]]; then export TZ="Asia/Bangkok" elif [[ $symbol != "-" && $offset == +0800 ]]; then export TZ="Asia/Hong_Kong" elif [[ $symbol != "-" && $offset == +0900 ]]; then export TZ="Asia/Tokyo" elif [[ $symbol != "-" && $offset == +1000 ]]; then export TZ="Australia/Melbourne" elif [[ $symbol != "-" && $offset == +1100 ]]; then export TZ="Pacific/Ponape" elif [[ $symbol != "-" && $offset == +1200 ]]; then export TZ="Pacific/Fiji" elif [[ $symbol != "-" && $offset == +1300 ]]; then export TZ="Pacific/Apia" elif [[ $symbol != "-" && $offset == +1400 ]]; then export TZ="Pacific/Kiritimati" fi echo -ne "\nLocaltime is $localtime!\n" echo -ne "\nTimezone matching offset is set $TZ!\n" echo "$TZ" > /etc/timezone echo "TZ=$TZ" > /etc/profile.d/custom.sh ########################################################### ## Restart postfix to apply new timezone also to postfix ## ########################################################### /usr/sbin/postfix stop && /usr/sbin/postfix start
[ "Can mark as a working solution for this using as follows then:\nThe solution is create a /etc/profile.d/custom.sh\nSo I am writting in there with the script the that is ran by a cronjob and it will work also as a non-login also.\nInside there simply add all the environments you need like\nexport TZ=Europe/London\nexport MYENV=value\nexport etc..\n..\n\n" ]
[ 0 ]
[]
[]
[ "cron", "environment_variables", "variables" ]
stackoverflow_0074631176_cron_environment_variables_variables.txt
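As a side note, most cron implementations (see crontab(5)) also accept plain variable assignments at the top of the crontab, which is the simpler route when the value is static; the dynamic case in the question still needs the /etc/profile.d approach, since a job cannot rewrite its own crontab environment in a way other processes will see:

TZ=Europe/London
# Jobs below run with TZ already set; no bash -c or sourcing needed.
* * * * * /path/to/script.sh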
Q: As soon as I installed nextjs there was a globals.css error without ever interfering with the code The link to that repository is here. After creating the next.js environment using "npx create-next-app@latest ./" and running "npm run dev". The very basic commands to run, This error pops up:- ` ../../../#React Projects/My projects/causs/styles/global.css Global CSS cannot be imported from files other than your Custom <App>. Due to the Global nature of stylesheets, and to avoid conflicts, Please move all first-party global CSS imports to pages/_app.js. Or convert the import to Component-Level CSS (CSS Modules). Read more: https://nextjs.org/docs/messages/css-global Location: ..\..\..\#React Projects\My projects\causs\pages\_app.js The code in _app.js is default that comes while creating next.js import '../styles/global.css' export default function MyApp({ Component, pageProps }) { return <Component {...pageProps} /> } the code in index.js is same, which comes as default: import Head from 'next/head' import Image from 'next/image' import styles from '../styles/Home.module.css' export default function Home() { return ( <div className={styles.container}> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <h1 className={styles.title}> Welcome to <a href="https://nextjs.org">Next.js!</a> </h1> <p className={styles.description}> Get started by editing{' '} <code className={styles.code}>pages/index.js</code> </p> <div className={styles.grid}> <a href="https://nextjs.org/docs" className={styles.card}> <h2>Documentation &rarr;</h2> <p>Find in-depth information about Next.js features and API.</p> </a> <a href="https://nextjs.org/learn" className={styles.card}> <h2>Learn &rarr;</h2> <p>Learn about Next.js in an interactive course with quizzes!</p> </a> <a href="https://github.com/vercel/next.js/tree/canary/examples" className={styles.card} > <h2>Examples &rarr;</h2> <p>Discover and deploy boilerplate example Next.js projects.</p> </a> <a href="https://vercel.com/new?utm_source=create-next-app&utm_medium=default-template&utm_campaign=create-next-app" target="_blank" rel="noopener noreferrer" className={styles.card} > <h2>Deploy &rarr;</h2> <p> Instantly deploy your Next.js site to a public URL with Vercel. </p> </a> </div> </main> <footer className={styles.footer}> <a href="https://vercel.com?utm_source=create-next-app&utm_medium=default-template&utm_campaign=create-next-app" target="_blank" rel="noopener noreferrer" > Powered by{' '} <span className={styles.logo}> <Image src="/vercel.svg" alt="Vercel Logo" width={72} height={16} /> </span> </a> </footer> </div> ) } ` Anything else you can see by going through the repository. The next.config.js file is also default one. ` /** @type {import('next').NextConfig} */ const nextConfig = { reactStrictMode: true, } module.exports = nextConfig I have installed latest version of node - v18.12.1 and npm - 8.19.2 Please help, I've no idea what to do. except adding and removing import '../styles/global.css' ` in pages/_app.js file. In conclusion, a newbie with next.js who has only created two projects, tried "npx create-next-app@latest ./" and then when I ran the environment it resulted with ` ../../../#React Projects/My projects/causs/styles/global.css Global CSS cannot be imported from files other than your Custom <App>. 
Due to the Global nature of stylesheets, and to avoid conflicts, Please move all first-party global CSS imports to pages/_app.js. Or convert the import to Component-Level CSS (CSS Modules). Read more: https://nextjs.org/docs/messages/css-global Location: ..\..\..\#React Projects\My projects\causs\pages\_app.js I haven't made any changes at all; the code and files in the project were the defaults. Note: I tried running my previous project too on my PC, which resulted in the same error, while it seems to work just fine where I've published it. A: As it turns out, the problem was in the parent directory name; Global CSS cannot be imported from files other than your Custom <App>. Due to the Global nature of stylesheets, and to avoid conflicts, Please move all first-party global CSS imports to pages/_app.js. Or convert the import to Component-Level CSS (CSS Modules). The cause of this error is peculiar, and I can't fathom it. The parent directory, as seen here - /#React Projects/My projects/causs/styles/global.css - is "#React Projects", which has a special character (#) in it that caused the error. Once I removed it, the problem was over and my new and old projects are working just fine now.
As soon as I installed nextjs there was a globals.css error without ever interfering with the code
The link to that repository is here. After creating the next.js environment using "npx create-next-app@latest ./" and running "npm run dev". The very basic commands to run, This error pops up:- ` ../../../#React Projects/My projects/causs/styles/global.css Global CSS cannot be imported from files other than your Custom <App>. Due to the Global nature of stylesheets, and to avoid conflicts, Please move all first-party global CSS imports to pages/_app.js. Or convert the import to Component-Level CSS (CSS Modules). Read more: https://nextjs.org/docs/messages/css-global Location: ..\..\..\#React Projects\My projects\causs\pages\_app.js The code in _app.js is default that comes while creating next.js import '../styles/global.css' export default function MyApp({ Component, pageProps }) { return <Component {...pageProps} /> } the code in index.js is same, which comes as default: import Head from 'next/head' import Image from 'next/image' import styles from '../styles/Home.module.css' export default function Home() { return ( <div className={styles.container}> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <h1 className={styles.title}> Welcome to <a href="https://nextjs.org">Next.js!</a> </h1> <p className={styles.description}> Get started by editing{' '} <code className={styles.code}>pages/index.js</code> </p> <div className={styles.grid}> <a href="https://nextjs.org/docs" className={styles.card}> <h2>Documentation &rarr;</h2> <p>Find in-depth information about Next.js features and API.</p> </a> <a href="https://nextjs.org/learn" className={styles.card}> <h2>Learn &rarr;</h2> <p>Learn about Next.js in an interactive course with quizzes!</p> </a> <a href="https://github.com/vercel/next.js/tree/canary/examples" className={styles.card} > <h2>Examples &rarr;</h2> <p>Discover and deploy boilerplate example Next.js projects.</p> </a> <a href="https://vercel.com/new?utm_source=create-next-app&utm_medium=default-template&utm_campaign=create-next-app" target="_blank" rel="noopener noreferrer" className={styles.card} > <h2>Deploy &rarr;</h2> <p> Instantly deploy your Next.js site to a public URL with Vercel. </p> </a> </div> </main> <footer className={styles.footer}> <a href="https://vercel.com?utm_source=create-next-app&utm_medium=default-template&utm_campaign=create-next-app" target="_blank" rel="noopener noreferrer" > Powered by{' '} <span className={styles.logo}> <Image src="/vercel.svg" alt="Vercel Logo" width={72} height={16} /> </span> </a> </footer> </div> ) } ` Anything else you can see by going through the repository. The next.config.js file is also default one. ` /** @type {import('next').NextConfig} */ const nextConfig = { reactStrictMode: true, } module.exports = nextConfig I have installed latest version of node - v18.12.1 and npm - 8.19.2 Please help, I've no idea what to do. except adding and removing import '../styles/global.css' ` in pages/_app.js file. In conclusion, a newbie with next.js who has only created two projects, tried "npx create-next-app@latest ./" and then when I ran the environment it resulted with ` ../../../#React Projects/My projects/causs/styles/global.css Global CSS cannot be imported from files other than your Custom <App>. Due to the Global nature of stylesheets, and to avoid conflicts, Please move all first-party global CSS imports to pages/_app.js. Or convert the import to Component-Level CSS (CSS Modules). 
Read more: https://nextjs.org/docs/messages/css-global Location: ..\..\..\#React Projects\My projects\causs\pages\_app.js I haven't made any changes at all; the code and files in the project were the defaults. Note: I tried running my previous project too on my PC, which resulted in the same error, while it seems to work just fine where I've published it.
[ "As it turns out the problem was in parent directory name;\nGlobal CSS cannot be imported from files other than your Custom <App>. Due to the Global nature of stylesheets, and to avoid conflicts, Please move all first-party global CSS imports to pages/_app.js. Or convert the import to Component-Level CSS (CSS Modules).\n\nThe cause of this error is peculiar, which I can't fathom. The parent directory as over here -\n\n/#React Projects/My projects/causs/styles/global.css;\n\n\"React Projects\" has a special character in it which caused the error to pop up once I removed it, the problem was over and my new and old projects are working just fine now.\n" ]
[ 0 ]
[]
[]
[ "javascript", "next.js", "reactjs" ]
stackoverflow_0074625945_javascript_next.js_reactjs.txt
Q: Filling Algorithm for Equal Distribution I need your help resolving an algorithm task - There are 3 baskets: basket 1 has 10 balls and a possible max capacity of 100, basket 2 has 50 balls and a possible max capacity of 200, and basket 3 has 100 balls and a possible max capacity of 300. Please help me to write an algorithm or code that splits another 100 balls between the 3 baskets for the best possible equal distribution between the baskets. It is not possible to move balls between the baskets. Your suggested algorithm should of course work on any number of baskets with different max capacities and any onHand value, for example, 1 ball that I want to add, or the maximum capacity value that should fill all baskets to 100% fill. A: As already mentioned in one of my comments, if you want to have an equal distribution of %fill, then you could add the balls individually to the current lowest-filled basket: import numpy as np def fill_baskets(baskets, ballsToDistribute): for i in range(ballsToDistribute, 0, -1): # find the basket with the lowest percentage of balls in it currFillLevels = [currFill / maxFill for currFill, maxFill in baskets] minIndex = np.argmin(currFillLevels) # give the ball to this basket baskets[minIndex][0] += 1 return baskets baskets = [[10, 100], [50, 200], [100, 300]] new_baskets = fill_baskets(baskets, 100) # print the result: for i, basket in enumerate(new_baskets): print(f"Basket {i+1}: {basket[0]/basket[1]:.3f}% ({basket[0]}/ {basket[1]})") The output I get for this case is the following: Basket 1: 0.440% (44/ 100) Basket 2: 0.435% (87/ 200) Basket 3: 0.430% (129/ 300) The only problem that can arise from the code is when we have too many balls to give away. Then all the baskets will be overfilled.
Filling Algorithm for Equal Distribution
I need your help resolving an algorithm task - There are 3 baskets: basket 1 has 10 balls and a possible max capacity of 100, basket 2 has 50 balls and a possible max capacity of 200, and basket 3 has 100 balls and a possible max capacity of 300. Please help me to write an algorithm or code that splits another 100 balls between the 3 baskets for the best possible equal distribution between the baskets. It is not possible to move balls between the baskets. Your suggested algorithm should of course work on any number of baskets with different max capacities and any onHand value, for example, 1 ball that I want to add, or the maximum capacity value that should fill all baskets to 100% fill.
[ "As already mentioned in one of my comments. If you want to have an equal distribution of %fill, then you could add the balls individually to the current lowest filled basket:\nimport numpy as np\n\ndef fill_baskets(baskets, ballsToDistribute):\n for i in range(ballsToDistribute, 0, -1):\n # find the basket with the lowest percentage of balls in it\n currFillLevels = [currFill / maxFill for currFill, maxFill in baskets]\n minIndex = np.argmin(currFillLevels)\n\n # give the ball to this basket\n baskets[minIndex][0] += 1\n\n return baskets\n\n\nbaskets = [[10, 100], [50, 200], [100, 300]]\n\nnew_baskets = fill_baskets(baskets, 100)\n\n# print the result:\nfor i, basket in enumerate(new_baskets):\n print(f\"Basket {i+1}: {basket[0]/basket[1]:.3f}% ({basket[0]}/ {basket[1]})\")\n\nThe output I get for this case if the following:\nBasket 1: 0.440% (44/ 100)\nBasket 2: 0.435% (87/ 200)\nBasket 3: 0.430% (129/ 300)\n\nThe only problem that can arise from the code is when we have too many balls to give away. Then all the baskets will be overfilled.\n" ]
[ 1 ]
[]
[]
[ "algorithm", "dart", "python" ]
stackoverflow_0074665160_algorithm_dart_python.txt
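To address the overfill caveat, a variant of the same idea that skips baskets already at capacity and stops early once everything is full; the logic is otherwise unchanged:

import numpy as np

def fill_baskets_capped(baskets, balls):
    for _ in range(balls):
        # Baskets at capacity get an infinite level so argmin ignores them.
        levels = [curr / cap if curr < cap else float("inf")
                  for curr, cap in baskets]
        idx = int(np.argmin(levels))
        if levels[idx] == float("inf"):
            break  # every basket is full; leftover balls stay undistributed
        baskets[idx][0] += 1
    return baskets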
Q: Why java "delete" function is not removing text (.txt) file? I have a small java program that create & delete employee data. I store employee data as .txt (text file). Everything is fine, but it fails to delete the text file. SIMPLE CLASS TO DELETE text file public void removeFile(String ID) { File file = new File("file"+ID+".txt"); if(file.exists()) { if(file.delete()); { System.out.println("\nEmployee has been removed Successfully"); } } else { System.out.println("\nEmployee does not exists :( "); } } I am getting message "Employee has been removed Successfully" but when I check the root folder, I can still see the file is still there (not deleted). I tried and checked out different codes online /google, all shows no problem in delete () function. A: I'm not sure why it didn't delete the file (my guess would be you have a file of the same name in the project root, but you are wanting to delete one stored elsewhere) but there is a syntax error on the if statement: if (file.delete()); You need to remove the semi-colon at the end of that line because that is terminating the if clause. The following System.out line is not conditional on file.delete() being true or false. It will be printed regardless. See what outcome you get when removing the semi-colon.
Why java "delete" function is not removing text (.txt) file?
I have a small java program that creates & deletes employee data. I store employee data as .txt (text file). Everything is fine, but it fails to delete the text file. SIMPLE CLASS TO DELETE text file public void removeFile(String ID) { File file = new File("file"+ID+".txt"); if(file.exists()) { if(file.delete()); { System.out.println("\nEmployee has been removed Successfully"); } } else { System.out.println("\nEmployee does not exists :( "); } } I am getting the message "Employee has been removed Successfully", but when I check the root folder, I can see the file is still there (not deleted). I tried and checked different code samples online/on Google; all show no problem with the delete() function.
[ "I'm not sure why it didn't delete the file (my guess would be you have a file of the same name in the project root, but you are wanting to delete one stored elsewhere) but there is a syntax error on the if statement:\nif (file.delete());\n\nYou need to remove the semi-colon at the end of that line because that is terminating the if clause. The following System.out line is not conditional on file.delete() being true or false. It will be printed regardless.\nSee what outcome you get when removing the semi-colon.\n" ]
[ 2 ]
[]
[]
[ "delete_file", "java" ]
stackoverflow_0074665272_delete_file_java.txt
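For reference, the corrected block, plus the java.nio alternative, which can report why a delete fails instead of just returning false; the exception handling is sketched minimally:

File file = new File("file" + ID + ".txt");
if (file.exists()) {
    if (file.delete()) {   // no semicolon after the condition
        System.out.println("\nEmployee has been removed Successfully");
    } else {
        System.out.println("\nCould not delete " + file.getAbsolutePath());
    }
} else {
    System.out.println("\nEmployee does not exist :( ");
}

// Alternative: throws an IOException describing the failure (e.g. the
// file is open elsewhere), rather than failing silently.
try {
    java.nio.file.Files.deleteIfExists(file.toPath());
} catch (java.io.IOException e) {
    e.printStackTrace();
}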
Q: Mocking observable getters and setters in jasmine Let's say I have some state service in Angular export class StateService { private _someProperty: BehaviorSubject<boolean> = new BehaviorSubject<boolean>(false); set someProperty(value: boolean) { this._someProperty.next(value); } get someProperty() { return this._someProperty.value; } get someProperty$() { return this._someProperty.asObservable(); } } How can this be mocked correctly in Jasmine? Probably something like this, but I don't know how to mock these BehaviorSubject getters and setters. const stateServiceMock = jasmine.createSpyObj('StateService', { ... }); A: of() from RxJS will return an observable where the given input value is "observed" immediately. So you can spyOnProperty(stateService, 'someProperty$').and.returnValue(of(false)); don't forget to import "of" from rxjs mocking with jasmine: https://stackoverflow.com/a/33169197/9627206 https://scriptverse.academy/tutorials/jasmine-spyon.html rxjs https://www.learnrxjs.io/learn-rxjs/operators/creation/of
Mocking observable getters and setters in jasmine
Let's say I have some state service in Angular export class StateService { private _someProperty: BehaviorSubject<boolean> = new BehaviorSubject<boolean>(false); set someProperty(value: boolean) { this._someProperty.next(value); } get someProperty() { return this._someProperty.value; } get someProperty$() { return this._someProperty.asObservable(); } } How can this be mocked correctly in Jasmine? Probably something like this, but I don't know how to mock these BehaviorSubject getters and setters. const stateServiceMock = jasmine.createSpyObj('StateService', { ... });
[ "of() from Rxjs will return an observable where the given input value is \"observed\" immediatly.\nso you can\nspyOnProperty(stateService, 'someProperty$').andReturn(of(false));\n\ndont forget to import \"of\" from rxjs\nmocking with jasmine:\n\nhttps://stackoverflow.com/a/33169197/9627206\nhttps://scriptverse.academy/tutorials/jasmine-spyon.html\n\nrxjs\n\nhttps://www.learnrxjs.io/learn-rxjs/operators/creation/of\n\n" ]
[ 1 ]
[]
[]
[ "angular", "jasmine", "testing", "typescript", "unit_testing" ]
stackoverflow_0074664993_angular_jasmine_testing_typescript_unit_testing.txt
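createSpyObj itself can also stub properties directly (supported in recent Jasmine versions via a third argument), which matches the mock shape the question started from; a sketch, with of(false) as a placeholder initial value:

import { of } from 'rxjs';

const stateServiceMock = jasmine.createSpyObj(
  'StateService',
  ['noop'], // method names; a dummy entry in case your version requires one
  {
    someProperty: false,        // stubs the plain getter
    someProperty$: of(false),   // stubs the observable getter
  }
);

For per-test overrides of a real instance, spyOnProperty(stateService, 'someProperty$', 'get').and.returnValue(of(true)) remains the most direct route.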
Q: Using command eas build -p android --profile preview to create build but the build doesn't have the latest changes that i made in app I am using Expo for building a React Native app. I am using the command: eas build -p android --profile preview from the Expo documentation to make an apk file for Android. Later I use the link provided by Expo to download the apk file and install it on my Android device. But when I install it, it doesn't reflect the new changes I made. I also tried updating my versionCode and version and then typing eas build:configure. My eas.json structure: { "cli": { "version": ">= 2.5.1" }, "build": { "development": { "developmentClient": true, "distribution": "internal" }, "local": { "distribution": "internal", "android": { "buildType": "apk" } }, "production": {} }, "submit": { "production": { "ios": {} } } } I want the latest changes to be reflected in the latest apk build A: This may be a problem in the EAS configuration setup. You can try building with Expo: $ expo build:android Also, I attached my eas file: { "cli": { "version": ">= 0.52.0" }, "build": { "preview": { "android": { "buildType": "apk" } }, "preview2": { "android": { "gradleCommand": ":app:assembleRelease" } }, "preview3": { "developmentClient": true }, "production": {} }, "submit": { "production": {} } }
Using command eas build -p android --profile preview to create build but the build doesn't have the latest changes that i made in app
I am using Expo for building a React Native app. I am using the command: eas build -p android --profile preview from the Expo documentation to make an apk file for Android. Later I use the link provided by Expo to download the apk file and install it on my Android device. But when I install it, it doesn't reflect the new changes I made. I also tried updating my versionCode and version and then typing eas build:configure. My eas.json structure: { "cli": { "version": ">= 2.5.1" }, "build": { "development": { "developmentClient": true, "distribution": "internal" }, "local": { "distribution": "internal", "android": { "buildType": "apk" } }, "production": {} }, "submit": { "production": { "ios": {} } } } I want the latest changes to be reflected in the latest apk build
[ "This may be a problem in the eas configuration setup\nYou can try build with expo\n$ expo build:android\n\nalso i attached my eas file\n{\n \"cli\": {\n \"version\": \">= 0.52.0\"\n},\n\"build\": {\n \"preview\": {\n \"android\": {\n \"buildType\": \"apk\"\n }\n },\n\"preview2\": {\n \"android\": {\n \"gradleCommand\": \":app:assembleRelease\"\n }\n },\n \"preview3\": {\n \"developmentClient\": true\n },\n \"production\": {}\n },\n \"submit\": {\n \"production\": {}\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "expo", "react_native" ]
stackoverflow_0074665584_expo_react_native.txt
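Two things worth checking when an EAS build comes out stale, assuming a reasonably current EAS CLI: whether the changes are actually committed (depending on the CLI version and the cli.requireCommit setting, uncommitted or .gitignore'd files may not be uploaded with the project), and whether a cached build is being reused; a sketch:

git status                 # confirm the changed files are tracked and committed
git add -A && git commit -m "include latest changes"
eas build -p android --profile preview --clear-cache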
Q: How do I store historical data related to an API in PHP Laravel? I am trying to store historical data from the front-end (Flutter) through the API. Now when data is sent from the front-end (Flutter), it is received, displayed in the Laravel web-app, and the data is stored in the database. The thing is that when data is sent again from the front-end, it will update the data in the web-app and thus I lose the previous data. How can I store the previous data every time data is sent from the front-end (Flutter)? What I've tried looking into is Laravel auditing. It records every change that happens. However, the problem is that the package only works with 1 entry. So if I have 10 users, it doesn't work in that case. Another solution that I'm currently looking into, and I think is more straightforward for my situation, is to have my code: Check if staff_id, current_date and time_checkIn is NOT NULL. If it is NOT NULL, and it fulfills every condition above, then print a message "You already Check In", If it is NULL, the data will be displayed and stored in the database, Else, data is updated, and both the previous data and the newly updated data will be stored in the database. How can I make this work? Please help. Below is the current controller in my Laravel web-app that works with the front-end Laravel Controller public function userClockIn(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->first(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->date_checkIn = $date; $users->time_checkIn = $time; $users->location_checkIn = $r->location_checkIn; $users->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); } A: To store the previous data every time data is sent from the front-end, you could create a separate table in your database to store the historical data. This way, you can store all the previous data without overwriting it. In your controller, you can retrieve the current data from the main table and store it in a new row in the historical data table, then update the current data in the main table with the new data from the front-end. This way, you can keep track of all the changes made to the data without losing any of the previous data. Here is an example of how your controller could be updated to implement this approach: public function userClockIn(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->first(); // retrieve current data $currentData = $users->toArray(); // store current data in historical data table $historicalData = new HistoricalData(); $historicalData->fill($currentData); $historicalData->save(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->date_checkIn = $date; $users->time_checkIn = $time; $users->location_checkIn = $r->location_checkIn; $users->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); }
How do I store historical data related to an API in PHP Laravel?
I am trying to store historical data from the front-end (Flutter) through the API. Now when data is sent from the front-end (Flutter), it is received, displayed in the Laravel web-app, and the data is stored in the database. The thing is that when data is sent again from the front-end, it will update the data in the web-app and thus I lose the previous data. How can I store the previous data every time data is sent from the front-end (Flutter)? What I've tried looking into is Laravel auditing. It records every change that happens. However, the problem is that the package only works with 1 entry. So if I have 10 users, it doesn't work in that case. Another solution that I'm currently looking into, and I think is more straightforward for my situation, is to have my code: Check if staff_id, current_date and time_checkIn is NOT NULL. If it is NOT NULL, and it fulfills every condition above, then print a message "You already Check In", If it is NULL, the data will be displayed and stored in the database, Else, data is updated, and both the previous data and the newly updated data will be stored in the database. How can I make this work? Please help. Below is the current controller in my Laravel web-app that works with the front-end Laravel Controller public function userClockIn(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->first(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->date_checkIn = $date; $users->time_checkIn = $time; $users->location_checkIn = $r->location_checkIn; $users->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); }
[ "To store the previous data every time data is sent from the front-end, you could create a separate table in your database to store the historical data. This way, you can store all the previous data without overwriting it.\nIn your controller, you can retrieve the current data from the main table and store it in a new row in the historical data table, then update the current data in the main table with the new data from the front-end. This way, you can keep track of all the changes made to the data without losing any of the previous data.\nHere is an example of how your controller could be updated to implement this approach:\npublic function userClockIn(Request $r)\n{\n $result = [];\n $result['status'] = false;\n $result['message'] = \"something error\";\n\n $users = User::where('staff_id', $r->staff_id)->first();\n\n // retrieve current data\n $currentData = $users->toArray();\n\n // store current data in historical data table\n $historicalData = new HistoricalData();\n $historicalData->fill($currentData);\n $historicalData->save();\n\n $mytime = Carbon::now();\n $time = $mytime->format('H:i:s');\n $date = $mytime->format('Y-m-d');\n\n $users->date_checkIn = $date;\n $users->time_checkIn = $time;\n $users->location_checkIn = $r->location_checkIn;\n\n $users->save();\n\n $result['data'] = $users;\n $result['status'] = true;\n $result['message'] = \"suksess add data\";\n\n return response()->json($result);\n\n" ]
[ 1 ]
[]
[]
[ "api", "laravel", "php" ]
stackoverflow_0074665342_api_laravel_php.txt
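A minimal Python sketch of the history-table pattern from the answer above, using the standard sqlite3 module. The users/users_history table names and columns are assumptions chosen to mirror the asker's controller, not the actual Laravel schema; the point is only the ordering: snapshot the current row into the history table, then overwrite it.

import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (staff_id TEXT PRIMARY KEY, date_check_in TEXT,
                    time_check_in TEXT, location_check_in TEXT);
CREATE TABLE users_history (staff_id TEXT, date_check_in TEXT, time_check_in TEXT,
                            location_check_in TEXT, archived_at TEXT);
INSERT INTO users VALUES ('S1', '2022-12-01', '08:00:00', 'HQ');
""")

def clock_in(staff_id, location):
    now = datetime.now()
    # 1. Copy the current row into the history table before touching it.
    conn.execute("""INSERT INTO users_history
                    SELECT staff_id, date_check_in, time_check_in,
                           location_check_in, ? FROM users WHERE staff_id = ?""",
                 (now.isoformat(), staff_id))
    # 2. Overwrite the "current" row, as the original controller does.
    conn.execute("""UPDATE users SET date_check_in = ?, time_check_in = ?,
                    location_check_in = ? WHERE staff_id = ?""",
                 (now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S"), location, staff_id))
    conn.commit()

clock_in("S1", "Branch A")
print(conn.execute("SELECT * FROM users_history").fetchall())  # the pre-update snapshot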
Q: How can I combine two SQL queries to show result in one table when there is inheritance I have the following structure: User is a parent to Assignee and Submitter. A Submitter has a one-to-many relationship with Request. A Request has a many-to-many relationship with Assignee. I have the following 2 queries that I'd like to combine into one table: Select r.request_number, u.first_name as Assignee from requests r, users u join request_assignee ra on r.id = ra.request_id join assignee a on u.id = ra.assignee_id; Select requests.request_number as Request_Number, users.first_name as Submitter from requests Join submitter on requests.submitter_id = submitter.id Join request_assignee on requests.id = request_assignee.request_id join users on submitter.id = users.id; There can be more than one assignee to the request. How can I do 1 query to display results in one table? Here is a picture that might help with the tables: A: Assumption: Output Request Number, Submitter, Assignee. The query could be simplified as below. The reason is ASSIGNEE and SUBMITTER are not required. USERS with alias for two different roles should be good enough. select r.request_number, s.first_name as submitter, a.first_name as assignee from requests r join request_assignee ra on r.id = ra.request_id join users s on r.submitter_id = s.id join users a on ra.assignee_id = a.id; Current data model: SUBMITTER and ASSIGNEE do not provide additional information. ID column is redundant. Suggested data model by removing SUBMITTER and ASSIGNEE table. A: Maybe change the column names to keep them uniform, and try this: Select r.request_number as Request_Number, u.first_name as Submitter from requests r, users u join request_assignee ra on r.id = ra.request_id join assignee a on u.id = ra.assignee_id UNION ALL Select requests.request_number as Request_Number, users.first_name as Submitter from requests Join submitter on requests.submitter_id = submitter.id Join request_assignee on requests.id = request_assignee.request_id join users on submitter.id = users.id
How can I combine two SQL queries to show result in one table when there is inheritance
I have the following structure: User is a parent to Assignee and Submitter. A Submitter has a one-to-many relationship with Request. A Request has a many-to-many relationship with Assignee. I have the following 2 queries that I'd like to combine into one table: Select r.request_number, u.first_name as Assignee from requests r, users u join request_assignee ra on r.id = ra.request_id join assignee a on u.id = ra.assignee_id; Select requests.request_number as Request_Number, users.first_name as Submitter from requests Join submitter on requests.submitter_id = submitter.id Join request_assignee on requests.id = request_assignee.request_id join users on submitter.id = users.id; There can be more than one assignee to the request. How can I do 1 query to display results in one table? Here is a picture that might help with the tables:
[ "Assumption: Output Request Number, Submitter, Assignee.\nThe query could be simplified as below. The reason is ASSIGNEE and SUBMITTER are not required. USERS with alias for two different roles should be good enough.\nselect r.request_number,\n s.first_name as submitter,\n a.first_name as assignee\n from requests r\n join request_assignee ra\n on r.id = ra.reqeust_id\n join users s\n on r.submitter_id = s.id\n join users a\n on ra.assignee_id = a.id;\n\nCurrent data model: SUBMITTER and ASSIGNEE do not provide additional information. ID column is redundant.\n\nSuggested data model by removing SUBMITTER and ASSIGNEE table.\n\n", "maybe,change the column name and keep them uniform, and try this\nSelect r.request_number as Request_Number, u.first_name as Submitter\nfrom requests r, users u\njoin request_assignee ra on r.id = ra.request_id\njoin assignee a on u.id = ra.assignee_id\nunionall \nSelect requests.request_number as Request_Number, users.first_name as Submitter\nfrom requests\nJoin submitter on requests.submitter_id = submitter.id\nJoin request_assignee on requests.id = request_assignee.request_id\njoin users on submitter.id = users.id\n\n" ]
[ 0, 0 ]
[]
[]
[ "spring_data_jpa", "sql" ]
stackoverflow_0074665377_spring_data_jpa_sql.txt
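The accepted idea above (drop the ASSIGNEE/SUBMITTER tables and join USERS twice under two aliases) can be tried end to end with Python's built-in sqlite3 module. The schema below is a guess at the asker's model, reduced to the columns the query needs.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT);
CREATE TABLE requests (id INTEGER PRIMARY KEY, request_number TEXT, submitter_id INTEGER);
CREATE TABLE request_assignee (request_id INTEGER, assignee_id INTEGER);
INSERT INTO users VALUES (1, 'Sam'), (2, 'Alice'), (3, 'Bob');
INSERT INTO requests VALUES (10, 'REQ-001', 1);
INSERT INTO request_assignee VALUES (10, 2), (10, 3);
""")

# users joined twice: alias s plays the submitter role, alias a the assignee role.
rows = conn.execute("""
    SELECT r.request_number, s.first_name AS submitter, a.first_name AS assignee
    FROM requests r
    JOIN request_assignee ra ON r.id = ra.request_id
    JOIN users s ON r.submitter_id = s.id
    JOIN users a ON ra.assignee_id = a.id
""").fetchall()
print(rows)  # [('REQ-001', 'Sam', 'Alice'), ('REQ-001', 'Sam', 'Bob')]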
Q: Data Store - dash_table conditional formatting failing @dashapp.callback( Output(component_id='data-storage', component_property='data'), Input(component_id='input', component_property='n_submit') . . . return json_data @dashapp.callback( Output('table', component_property='columns'), Output('table', component_property='data'), Output('table', component_property='style_cell_conditional'), Input(component_id='data-storage', component_property='data'), . . . column_name = 'Target Column' value = 'This value is a string' table_columns = [{"name": i, "id": i} for i in df.columns] table_data = df.to_dict("records") conditional_formatting = [{ 'if': { 'filter_query': f'{{{column_name}}} = {value}' }, 'backgroundColor': 'white', 'color' : 'black', } ] return table_columns, table_data, conditional_formatting When the code above is used WITH the conditional_formatting part - it works for some 'value's, and does not work for other 'value's When the code above is used WITHOUT the conditional_formatting part - it works as expected for all 'value's Note that when the conditional_formatting part is used, all callbacks are triggered twice. After this happens, the Data Store acts as if it has been infected by the "sick" value and does not allow new data. Example: Step 1. Use working input -> All callbacks triggered once -> Data Store is populated -> Data is displayed as expected Step 2. Use working input -> All callbacks triggered once -> Data Store is populated -> Data is displayed as expected Step 3. Use not working input -> All callbacks triggered once -> All callbacks are triggered again -> Data related to Input from b) is displayed Step 4. Use working input -> All callbacks triggered once -> All callbacks are triggered again -> Data related to Input from b) is displayed Any ideas why this happens? Any feedback is appreciated! A: conditional_formatting = [{ 'if': { 'filter_query': f'{{{column_name}}} = "{value}"' }, 'backgroundColor': 'white', 'color' : 'black', } ] The issue was that the failing values contained a space (e.g. San Francisco). Adding quotes around the value solved the issue.
Data Store - dash_table conditional formatting failing
@dashapp.callback( Output(component_id='data-storage', component_property='data'), Input(component_id='input', component_property='n_submit') . . . return json_data @dashapp.callback( Output('table', component_property='columns'), Output('table', component_property='data'), Output('table', component_property='style_cell_conditional'), Input(component_id='data-storage', component_property='data'), . . . column_name = 'Target Column' value = 'This value is a string' table_columns = [{"name": i, "id": i} for i in df.columns] table_data = df.to_dict("records") conditional_formatting = [{ 'if': { 'filter_query': f'{{{column_name}}} = {value}' }, 'backgroundColor': 'white', 'color' : 'black', } ] return table_columns, table_data, conditional_formatting When the code above is used WITH the conditional_formatting part - it works for some 'value's, and does not work for other 'value's When the code above is used WITHOUT the conditional_formatting part - it works as expected for all 'value's Note that when the conditional_formatting part is used, all callbacks are triggered twice. After this happens, the Data Store acts as if it has been infected by the "sick" value and does not allow new data. Example: Step 1. Use working input -> All callbacks triggered once -> Data Store is populated -> Data is displayed as expected Step 2. Use working input -> All callbacks triggered once -> Data Store is populated -> Data is displayed as expected Step 3. Use not working input -> All callbacks triggered once -> All callbacks are triggered again -> Data related to Input from b) is displayed Step 4. Use working input -> All callbacks triggered once -> All callbacks are triggered again -> Data related to Input from b) is displayed Any ideas why this happens? Any feedback is appreciated!
[ "conditional_formatting = [{\n 'if': {\n 'filter_query': f'{{{column_name}}} = \"{value}\"'\n },\n 'backgroundColor': 'white',\n 'color' : 'black',\n }\n ]\n\nIssue was because the failing values had empty space (e.g. San Francisco). Adding quotes around solved the issue.\n" ]
[ 0 ]
[]
[]
[ "conditional_formatting", "datastore", "datatable", "plotly_dash" ]
stackoverflow_0074659293_conditional_formatting_datastore_datatable_plotly_dash.txt
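A runnable Python sketch of the fix from the self-answer: wrap the value in double quotes inside filter_query so strings containing spaces (e.g. "San Francisco") parse as one token. The DataFrame and column names are invented for the demo, and note that filter_query-based rules are documented under style_data_conditional (the question routed them through style_cell_conditional).

from dash import Dash, dash_table
import pandas as pd

df = pd.DataFrame({"City": ["San Francisco", "Boston"], "Sales": [10, 20]})

def highlight(column_name, value):
    # The inner double quotes are the fix: {City} = "San Francisco".
    return [{
        "if": {"filter_query": f'{{{column_name}}} = "{value}"'},
        "backgroundColor": "white",
        "color": "black",
    }]

app = Dash(__name__)
app.layout = dash_table.DataTable(
    columns=[{"name": c, "id": c} for c in df.columns],
    data=df.to_dict("records"),
    style_data_conditional=highlight("City", "San Francisco"),
)

if __name__ == "__main__":
    app.run_server(debug=True)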
Q: The framework 'Microsoft.NETCore.App', version '5' was not found while Microsoft.NETCore.App 5.0.0 is found when I use the dotnet command I have an issue > dotnet Mydll.dll It was not possible to find any compatible framework version The framework 'Microsoft.NETCore.App', version '5' was not found. - The following frameworks were found: 2.1.23 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 2.2.7 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 2.2.8 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.0.3 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.1.4 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.1.9 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.1.10 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 5.0.0 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 5.0.7 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] You can resolve the problem by installing the specified framework and/or SDK. The specified framework can be found at: - https://aka.ms/dotnet-core-applaunch?framework=Microsoft.NETCore.App&framework_version=5&arch=x64&rid=win10-x64 I am a little suprised cause it seems I have the the runtime Microsoft.NETCore.App 5.0.0 installed and also the 5.0.7 >dotnet --list-runtimes Microsoft.AspNetCore.All 2.1.23 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.23 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.10 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 5.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 5.0.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.NETCore.App 2.1.23 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.10 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 5.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 5.0.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.WindowsDesktop.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.9 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.10 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 5.0.0 [C:\Program 
Files\dotnet\shared\Microsoft.WindowsDesktop.App] I have installed the Hosting Bundle of ASP.NET Core Runtime 5.0.7 to be sure, but I still have the issue. Would you have an idea about what the issue is and how to fix it? A: Uninstall the deprecated package globally: dotnet tool uninstall dotnet-ef -g Then reinstall the up-to-date package version: dotnet tool install --global dotnet-ef --version 5.0.1 If that does not work, try the second approach: since you have already installed .NET 5, download dotnet-sdk-5.0.301-win-x64.zip and copy the Microsoft.AspNetCore.App\5.0.301 folder manually from the zip file to C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App\5.0.301; the ASP.NET Core application should then start working. A: I had the same problem. I added the following line to the .csproj file manually: <ItemGroup> <FrameworkReference Include="Microsoft.AspNetCore.App" /> </ItemGroup> A: I had this kind of issue when trying to start my application from Visual Studio. Updating Visual Studio using the Visual Studio Installer solved this for me. The direct-download installer places the files in this folder: C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App And the Visual Studio installer places the files in this folder: C:\Program Files\dotnet\shared\Microsoft.NETCore.App
The framework 'Microsoft.NETCore.App', version '5' was not found while Microsoft.NETCore.App 5.0.0 is found
when I use the dotnet command I have an issue > dotnet Mydll.dll It was not possible to find any compatible framework version The framework 'Microsoft.NETCore.App', version '5' was not found. - The following frameworks were found: 2.1.23 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 2.2.7 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 2.2.8 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.0.3 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.1.4 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.1.9 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 3.1.10 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 5.0.0 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] 5.0.7 at [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] You can resolve the problem by installing the specified framework and/or SDK. The specified framework can be found at: - https://aka.ms/dotnet-core-applaunch?framework=Microsoft.NETCore.App&framework_version=5&arch=x64&rid=win10-x64 I am a little suprised cause it seems I have the the runtime Microsoft.NETCore.App 5.0.0 installed and also the 5.0.7 >dotnet --list-runtimes Microsoft.AspNetCore.All 2.1.23 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.23 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.10 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 5.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 5.0.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.NETCore.App 2.1.23 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.10 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 5.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 5.0.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.WindowsDesktop.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.9 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.10 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 5.0.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] I have installed the Hosting Bundle of ASP.NET Core Runtime 
5.0.7 to be sure, but I still have the issue. Would you have an idea about what the issue is and how to fix it?
[ "Uninstall deprecated package globally:-\ndotnet tool uninstall dotnet-ef -g\n\nAnd then try to Reinstall the up-to-date package version:-\ndotnet tool install --global dotnet-ef --version 5.0.1 \n\nIf not work, try to second process:-\nSo you have already installed .Net 5.\ndownload dotnet-sdk-5.0.301-win-x64.zip and copied Microsoft.AspNetCore.App\\5.0.301 folder manually from the zip file to C:\\ProgramFiles\\dotnet\\shared\\Microsoft.AspNetCore.App\\5.0.301\nthen hope the asp.net core application will be started working.\n", "I had the same problem.\nI added the following line to .csproj file manually:\n<ItemGroup>\n <FrameworkReference Include=\"Microsoft.AspNetCore.App\" />\n</ItemGroup>\n\n", "I had this kind of issue when trying to start my application from Visual Studio.\nUpdate you Visual Studio using Visual Studio Installer solved this for me.\nDirect download installer places the files in this folder\nC:\\Program Files\\dotnet\\shared\\Microsoft.AspNetCore.App\nAnd Visual Studio installer places the files in this folder\nC:\\Program Files\\dotnet\\shared\\Microsoft.NETCore.App\n" ]
[ 2, 0, 0 ]
[]
[]
[ ".net", ".net_5", ".net_core", "asp.net5", "asp.net_core" ]
stackoverflow_0068112732_.net_.net_5_.net_core_asp.net5_asp.net_core.txt
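When the host reports a missing framework even though dotnet --list-runtimes shows it, it can help to check programmatically which versions the dotnet executable on PATH actually resolves. A small Python sketch, assuming dotnet is on PATH:

import subprocess

out = subprocess.run(["dotnet", "--list-runtimes"],
                     capture_output=True, text=True, check=True).stdout

# Each line looks like: "Microsoft.NETCore.App 5.0.7 [C:\Program Files\dotnet\...]"
netcore = [line.split()[1] for line in out.splitlines()
           if line.startswith("Microsoft.NETCore.App")]
print("Microsoft.NETCore.App versions:", netcore)
print("any 5.x installed:", any(v.startswith("5.") for v in netcore))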
Q: CSS - How to select an id inside a class I have several tables, so I use a datatable.css. To deal with the header I use .order-table-header{} (so I use a class for the header). I now want to differentiate my tables, so I logically use an id, something like this: #Other{} But it doesn't do what's inside the #Other{}. Here is my code: .order-table-header{ text-align:center; background:none repeat scroll 0 0 #dddddd; border-bottom:1px solid #BBBBBB; padding:16px; width: 8%; background-color: #0099cc; color: #ffffff; line-height: 3px; } #Other .order-table-header{ background-color: #00ff00; } I tried the other way around: .order-table-header #Other{} I tried with just #Other{} Here is my xhtml: <h:dataTable value="#{workspace.otherfiles}" var="item" styleClass="datatable.css" headerClass="order-table-header" rowClasses="order-table-odd-row, order-table-even-row" id="Other"> I also tried: .order-table-header(@background : #0099cc){ text-align:center; background:none repeat scroll 0 0 #dddddd; border-bottom:1px solid #BBBBBB; padding:16px; width: 8%; background-color: @background; color: #ffffff; line-height: 3px; } #Other { .order-table-header(#00ff00); } but NetBeans tells me "unexpected symbol found" for the first line and for the line inside #Other. A: .order-table-header #Other (note the space between header and #Other) means: "node with id 'Other' that is a descendant of node with class 'order-table-header'". So a node having id "Other" and class "order-table-header" won't match a CSS rule having pattern .order-table-header #Other. However a node can match more than one rule. Try changing the corresponding part of your first example to: #Other { background-color: #00ff00; } A: try this #Other .order-table-header{ css } since the #Other will be the id of the table and .order-table-header will be the class of the table header
CSS - How to select an id inside a class
I have several tables, so I use a datatable.css. To deal with the header I use .order-table-header{} (so I use a class for the header). I now want to differentiate my tables, so I logically use an id, something like this: #Other{} But it doesn't do what's inside the #Other{}. Here is my code: .order-table-header{ text-align:center; background:none repeat scroll 0 0 #dddddd; border-bottom:1px solid #BBBBBB; padding:16px; width: 8%; background-color: #0099cc; color: #ffffff; line-height: 3px; } #Other .order-table-header{ background-color: #00ff00; } I tried the other way around: .order-table-header #Other{} I tried with just #Other{} Here is my xhtml: <h:dataTable value="#{workspace.otherfiles}" var="item" styleClass="datatable.css" headerClass="order-table-header" rowClasses="order-table-odd-row, order-table-even-row" id="Other"> I also tried: .order-table-header(@background : #0099cc){ text-align:center; background:none repeat scroll 0 0 #dddddd; border-bottom:1px solid #BBBBBB; padding:16px; width: 8%; background-color: @background; color: #ffffff; line-height: 3px; } #Other { .order-table-header(#00ff00); } but NetBeans tells me "unexpected symbol found" for the first line and for the line inside #Other.
[ ".order-table-header #Other (note the space between header and #Other) means: \"node with id 'Other' that is a descendant of node with class 'order-table-header'\". So a node having id \"Other\" and class \"order-table-header\" won't match a CSS rule having pattern .order-table-header #Other. However a node can match more than one rule. Try changing the corresponding part of your first example to:\n#Other {\n background-color: #00ff00;\n}\n\n", "try this #Other .order-table-header{ css } since the #Other will be the id of the table and .order-table-header will be the class of the table header\n" ]
[ 8, 1 ]
[ "Try:\n.order-table-header::not(#Other) {}\n\nThen:\n#Other {}\n\n" ]
[ -1 ]
[ "css" ]
stackoverflow_0009656102_css.txt
Q: Get count of objects in a specific S3 folder using Boto3 Trying to get count of objects in S3 folder Current code bucket='some-bucket' File='someLocation/File/' objs = boto3.client('s3').list_objects_v2(Bucket=bucket,Prefix=File) fileCount = objs['KeyCount'] This gives me the count as 1+actual number of objects in S3. Maybe it is counting "File" as a key too? A: Assuming you want to count the keys in a bucket and don't want to hit the limit of 1000 using list_objects_v2. The below code worked for me but I'm wondering if there is a better faster way to do it! Tried looking if there's a packaged function in boto3 s3 connector but there isn't! # connect to s3 - assuming your creds are all set up and you have boto3 installed s3 = boto3.resource('s3') # identify the bucket - you can use prefix if you know what your bucket name starts with for bucket in s3.buckets.all(): print(bucket.name) # get the bucket bucket = s3.Bucket('my-s3-bucket') # use loop and count increment count_obj = 0 for i in bucket.objects.all(): count_obj = count_obj + 1 print(count_obj) A: If there are more than 1000 entries, you need to use paginators, like this: count = 0 client = boto3.client('s3') paginator = client.get_paginator('list_objects') for result in paginator.paginate(Bucket='your-bucket', Prefix='your-folder/', Delimiter='/'): count += len(result.get('CommonPrefixes')) A: "Folders" do not actually exist in Amazon S3. Instead, all objects have their full path as their filename ('Key'). I think you already know this. However, it is possible to 'create' a folder by creating a zero-length object that has the same name as the folder. This causes the folder to appear in listings and is what happens if folders are created via the management console. Thus, you could exclude zero-length objects from your count. For an example, see: Determine if folder or file key - Boto A: If you have credentials to access that bucket, then you can use this simple code. Below code will give you a list. List comprehension is used for more readability. Filter is used to filter objects because in bucket to identify the files ,folder names are used. As explained by John Rotenstein concisely. import boto3 bucket = "Sample_Bucket" folder = "Sample_Folder" s3 = boto3.resource("s3") s3_bucket = s3.Bucket(bucket) files_in_s3 = [f.key.split(folder + "/")[1] for f in s3_bucket.objects.filter(Prefix=folder).all()] A: The following code worked perfectly def getNumberOfObjectsInBucket(bucketName,prefix): count = 0 response = boto3.client('s3').list_objects_v2(Bucket=bucketName,Prefix=prefix) for object in response['Contents']: if object['Size'] != 0: #print(object['Key']) count+=1 return count object['Size'] == 0 will take you to folder names, if want to check them, object['Size'] != 0 will lead you to all non-folder keys. Sample function below: getNumberOfObjectsInBucket('foo-test-bucket','foo-output/')
Get count of objects in a specific S3 folder using Boto3
Trying to get count of objects in S3 folder Current code bucket='some-bucket' File='someLocation/File/' objs = boto3.client('s3').list_objects_v2(Bucket=bucket,Prefix=File) fileCount = objs['KeyCount'] This gives me the count as 1+actual number of objects in S3. Maybe it is counting "File" as a key too?
[ "Assuming you want to count the keys in a bucket and don't want to hit the limit of 1000 using list_objects_v2. The below code worked for me but I'm wondering if there is a better faster way to do it! Tried looking if there's a packaged function in boto3 s3 connector but there isn't!\n# connect to s3 - assuming your creds are all set up and you have boto3 installed\ns3 = boto3.resource('s3')\n\n# identify the bucket - you can use prefix if you know what your bucket name starts with\nfor bucket in s3.buckets.all():\n print(bucket.name)\n\n# get the bucket\nbucket = s3.Bucket('my-s3-bucket')\n\n# use loop and count increment\ncount_obj = 0\nfor i in bucket.objects.all():\n count_obj = count_obj + 1\nprint(count_obj)\n\n", "If there are more than 1000 entries, you need to use paginators, like this:\ncount = 0\nclient = boto3.client('s3')\npaginator = client.get_paginator('list_objects')\nfor result in paginator.paginate(Bucket='your-bucket', Prefix='your-folder/', Delimiter='/'):\n count += len(result.get('CommonPrefixes'))\n\n", "\"Folders\" do not actually exist in Amazon S3. Instead, all objects have their full path as their filename ('Key'). I think you already know this.\nHowever, it is possible to 'create' a folder by creating a zero-length object that has the same name as the folder. This causes the folder to appear in listings and is what happens if folders are created via the management console.\nThus, you could exclude zero-length objects from your count.\nFor an example, see: Determine if folder or file key - Boto\n", "If you have credentials to access that bucket, then you can use this simple code. Below code will give you a list. List comprehension is used for more readability.\nFilter is used to filter objects because in bucket to identify the files ,folder names are used. As explained by John Rotenstein concisely.\nimport boto3\n\nbucket = \"Sample_Bucket\"\nfolder = \"Sample_Folder\"\ns3 = boto3.resource(\"s3\") \ns3_bucket = s3.Bucket(bucket)\nfiles_in_s3 = [f.key.split(folder + \"/\")[1] for f in s3_bucket.objects.filter(Prefix=folder).all()]\n\n", "The following code worked perfectly\ndef getNumberOfObjectsInBucket(bucketName,prefix):\n count = 0\n response = boto3.client('s3').list_objects_v2(Bucket=bucketName,Prefix=prefix)\n for object in response['Contents']:\n if object['Size'] != 0:\n #print(object['Key'])\n count+=1\n return count\n\nobject['Size'] == 0 will take you to folder names, if want to check them, object['Size'] != 0 will lead you to all non-folder keys.\nSample function below:\ngetNumberOfObjectsInBucket('foo-test-bucket','foo-output/')\n\n" ]
[ 15, 3, 1, 0, 0 ]
[]
[]
[ "amazon_s3", "boto3", "python" ]
stackoverflow_0054656455_amazon_s3_boto3_python.txt
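Combining the two ideas above (pagination plus list_objects_v2) gives a count that keeps working past the 1,000-key page limit; KeyCount is the per-page key count returned by list_objects_v2. The bucket and prefix are placeholders.

import boto3

def count_objects(bucket, prefix):
    paginator = boto3.client("s3").get_paginator("list_objects_v2")
    total = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        total += page.get("KeyCount", 0)  # keys in this page; 0 for an empty prefix
    return total

# Note: this counts zero-length "folder marker" objects too; filter on Size
# from page.get("Contents", []) if those should be excluded.
print(count_objects("some-bucket", "someLocation/File/"))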
Q: Change specific value in Dynamic nested array with parent and children structure PHP I have a dynamic nested array structure with unique id and name, need to change value of name for a specific id given. I have a dynamic nested array with structure ` $array = [ [ "id" => "31350880", "name" => "HOD", "children" => [ [ "id" => "57f94cd7", "parent_id" => "31350880", "name" => "New HOD", "children" => [ [ "id" => "e7f1c88b", "parent_id" => "57f94cd7", "name" => "Gold", "children" => [ ] ] ] ] ] ], [ "id" => "45881fa8", "name" => "Pictures", "children" => [ [ "id" => "770e6e20", "parent_id" => "45881fa8", "name" => "New Picture", "children" => [ [ "id" => "a403a8fa", "parent_id" => "770e6e20", "name" => "Silver", "children" => [ ] ] ] ] ] ] ]; ` For a given ID(unique) , need to find it from array and change the name from that node to a specific name. for example : $id = ''770e6e20' which is a child of parent node with name "Pictures", need to find child node with specific id and change its name and retrieve the full array in its initial structure ? A: This is a recursive function that gets a reference of your array, the id of the item and the new name to be assigned, and performs in place modification: function modify_element_name(&$arr, $id, $name){ foreach($arr as $index => $item){ if($item['id'] === $id){ $arr[$index]['name'] = $name; } else { if(count($item['children']) > 0){ modify_element_name($arr[$index]['children'], $id, $name); } } } } Calling it as below: modify_element_name($array, '770e6e20', 'New Picture Modified'); Will modify the array: Array ( [0] => Array ( [id] => 31350880 [name] => HOD [children] => Array ( [0] => Array ( [id] => 57f94cd7 [parent_id] => 31350880 [name] => New HOD [children] => Array ( [0] => Array ( [id] => e7f1c88b [parent_id] => 57f94cd7 [name] => Gold [children] => Array ( ) ) ) ) ) ) [1] => Array ( [id] => 45881fa8 [name] => Pictures [children] => Array ( [0] => Array ( [id] => 770e6e20 [parent_id] => 45881fa8 [name] => New Picture Modified [children] => Array ( [0] => Array ( [id] => a403a8fa [parent_id] => 770e6e20 [name] => Silver [children] => Array ( ) ) ) ) ) ) )
Change specific value in Dynamic nested array with parent and children structure PHP
I have a dynamic nested array structure with unique id and name fields; I need to change the value of name for a specific given id. I have a dynamic nested array with this structure: $array = [ [ "id" => "31350880", "name" => "HOD", "children" => [ [ "id" => "57f94cd7", "parent_id" => "31350880", "name" => "New HOD", "children" => [ [ "id" => "e7f1c88b", "parent_id" => "57f94cd7", "name" => "Gold", "children" => [ ] ] ] ] ] ], [ "id" => "45881fa8", "name" => "Pictures", "children" => [ [ "id" => "770e6e20", "parent_id" => "45881fa8", "name" => "New Picture", "children" => [ [ "id" => "a403a8fa", "parent_id" => "770e6e20", "name" => "Silver", "children" => [ ] ] ] ] ] ] ]; For a given (unique) ID, I need to find it in the array and change the name of that node to a specific name. For example: $id = '770e6e20', which is a child of the parent node with name "Pictures"; I need to find the child node with that specific id, change its name, and retrieve the full array in its initial structure.
[ "This is a recursive function that gets a reference of your array, the id of the item and the new name to be assigned, and performs in place modification:\nfunction modify_element_name(&$arr, $id, $name){\n foreach($arr as $index => $item){\n if($item['id'] === $id){\n $arr[$index]['name'] = $name;\n \n }\n else {\n if(count($item['children']) > 0){\n modify_element_name($arr[$index]['children'], $id, $name);\n }\n }\n }\n}\n\nCalling it as below:\nmodify_element_name($array, '770e6e20', 'New Picture Modified');\n\nWill modify the array:\nArray\n(\n [0] => Array\n (\n [id] => 31350880\n [name] => HOD\n [children] => Array\n (\n [0] => Array\n (\n [id] => 57f94cd7\n [parent_id] => 31350880\n [name] => New HOD\n [children] => Array\n (\n [0] => Array\n (\n [id] => e7f1c88b\n [parent_id] => 57f94cd7\n [name] => Gold\n [children] => Array\n (\n )\n\n )\n\n )\n\n )\n\n )\n\n )\n\n [1] => Array\n (\n [id] => 45881fa8\n [name] => Pictures\n [children] => Array\n (\n [0] => Array\n (\n [id] => 770e6e20\n [parent_id] => 45881fa8\n [name] => New Picture Modified\n [children] => Array\n (\n [0] => Array\n (\n [id] => a403a8fa\n [parent_id] => 770e6e20\n [name] => Silver\n [children] => Array\n (\n )\n\n )\n\n )\n\n )\n\n )\n\n )\n\n)\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "json", "laravel", "nested", "php" ]
stackoverflow_0074660662_arrays_json_laravel_nested_php.txt
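The same recursion translates almost line for line to other languages; here is a Python sketch over the equivalent list-of-dicts structure, shown with a trimmed copy of the asker's data.

def modify_element_name(nodes, target_id, new_name):
    """Depth-first walk; rename the node whose id matches, in place."""
    for node in nodes:
        if node["id"] == target_id:
            node["name"] = new_name
        elif node.get("children"):
            modify_element_name(node["children"], target_id, new_name)

tree = [{"id": "45881fa8", "name": "Pictures", "children": [
            {"id": "770e6e20", "parent_id": "45881fa8",
             "name": "New Picture", "children": []}]}]

modify_element_name(tree, "770e6e20", "New Picture Modified")
print(tree[0]["children"][0]["name"])  # New Picture Modified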
Q: Running SQL script on Azure Synapse via Powershell or CLI I am trying to give a service principal SELECT access on my Azure Synapse SQL data. CREATE USER [MY_SERVICE_PRINCIPAL] FROM EXTERNAL PROVIDER WITH DEFAFULT_SCHEMA=[dbo] GO GRANT SELECT ON DATABASE :: MyDB TO [MY_SERVICE_PRINCIPAL]; This works fine, but it requires me logging into the workspace to do this for every single new service principal. Is it possible to automate this? I automate the creation of the service principal via Azure CLI. Is it possible to run this script from a A: CREATE AUTOMATION ACCOUNT Open Azure Portal and search for the Automation Account Select Automation Account, and in another screen. Click on Create and fill in the required attributes. After creating the Automation Account, the option of Runbook is in the left menu. by opening this, it has default/tutorial Runbooks of different types. To Automate the Process of Synapse Analytics, install some required modules in the Automation Account. In the left menu, find and click on the Modules, search for Az.Accounts and import this module to the Automation Account. After this import, follow the same process to import Az.Synapse, another required module for this automation task. After importing the required modules, create a Runbook. 4. After clicking on Create a Runbook, an editor will be opened, paste the following code, save the Runbook, and publish it. [CmdletBinding()] param (    [Parameter(Mandatory=$true)]    [string]$ResourceGroupName ="rg_ResourceGroup",    [Parameter(Mandatory=$true)]    [string]$WorkspaceName = "wp_WorkSpaceName",    [Parameter(Mandatory=$true)]    [string]$Operation = "op_Pause" ) Begin    { Write-Output "Connecting on $(Get-Date)" #Connect to Azure using the Run As Account Try{ $servicePrincipalConnection=Get-AutomationConnection -Name "AzureRunAsConnection" Connect-AzAccount  -ServicePrincipal -TenantId $servicePrincipalConnection.TenantId -ApplicationId $servicePrincipalConnection.ApplicationId -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint } Catch { if (!$servicePrincipalConnection){ $ErrorMessage = "Connection $connectionName not found." 
throw $ErrorMessage } else{ Write-Output -Message $_.Exception throw $_.Exception } } # Validation parameters $ArrayOperations = "Pause","Start","Restart" If ($Operation -notin $ArrayOperations) { Throw "Only Pause, Start, Restart Operations are valid" } # Start Write-Output "Starting process on $(Get-Date)" Try{ $Status = Get-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Select-Object Status | Format-Table -HideTableHeaders | Out-String $Status = $Status -replace "`t|`n|`r","" Write-Output "The current status is "$Status.trim()" on $(Get-Date)" } Catch { Write-Output $_.Exception throw $_.Exception } # Start block # Start Write-Output "Starting $Operation on $(Get-Date)" if(($Operation -eq "Start") -and ($Status.trim() -ne "Online")){ Write-Output "Starting $Operation Operation" try { Write-Output "Starting on $(Get-Date)" Get-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Resume-AzSynapseSqlPool } catch { Write-Output "Error while executing "$Operation } } # Pause block if(($Operation -eq "Pause") -and ($Status.trim() -ne "Paused")){ write-Output "Starting $Operation Operation" try { Write-Output "Pausing on $(Get-Date)" Get-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Suspend-AzSynapseSqlPool } catch { Write-Output "Error while executing "$Operation } } # Restart block if(($Operation -eq "Restart") -and ($Status.trim() -eq "Online")){ Write-Output "Starting $Operation Operation" try { Write-Output "Pausing on $(Get-Date)" Get-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Suspend-AzSynapseSqlPool Write-Output "Starting on $(Get-Date)" Get-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Resume-AzSynapseSqlPool } catch { Write-Output "Error while executing "$Operation }         } } End { # Exit Write-Output "Finished process on $(Get-Date)" } After publishing, click on the start button and enter Synapse Analytics values, and a job will be created. there are various options on the job page like its Status, Errors, Exceptions, etc. After completing the job, the Synapse will Resume/Pause through this Runbook. Now add a schedule for Runbook to completely Automate the process on schedule. On the Runbook page, click on the link to Schedule button in the ribbon and add Schedule and Configure the required parameters. A: You can use the same tools to query dedicated or serverless pools, for example, invoke-sqlcmd with PowerShell, even sqlcmd. See also Connection strings for Synapse SQL. As for azure cli, see az synapse and for powershell, see Az.Synapse. A: The best solution I found was to add a whole Azure AD group as a user on the database manually, then for each new user I'm creating, I automate their addition to the group with some basic Azure CLI commands on a DevOps pipeline rather than try with a SQL Script that adds them individually.
Running SQL script on Azure Synapse via Powershell or CLI
I am trying to give a service principal SELECT access on my Azure Synapse SQL data. CREATE USER [MY_SERVICE_PRINCIPAL] FROM EXTERNAL PROVIDER WITH DEFAULT_SCHEMA=[dbo] GO GRANT SELECT ON DATABASE :: MyDB TO [MY_SERVICE_PRINCIPAL]; This works fine, but it requires me logging into the workspace to do this for every single new service principal. Is it possible to automate this? I automate the creation of the service principal via Azure CLI. Is it possible to run this script from a
[ "CREATE AUTOMATION ACCOUNT\n\nOpen Azure Portal and search for the Automation Account\n\n\nSelect Automation Account, and in another screen. Click on Create and fill in the required attributes.\n\n\nAfter creating the Automation Account, the option of Runbook is in the left menu. by opening this, it has default/tutorial Runbooks of different types. To Automate the Process of Synapse Analytics, install some required modules in the Automation Account. In the left menu, find and click on the Modules, search for Az.Accounts and import this module to the Automation Account.\n\n\nAfter this import, follow the same process to import Az.Synapse, another required module for this automation task.\n\nAfter importing the required modules, create a Runbook.\n\n\n4. After clicking on Create a Runbook, an editor will be opened, paste the following code, save the Runbook, and publish it.\n[CmdletBinding()]\n\nparam (\n\n   [Parameter(Mandatory=$true)]\n\n   [string]$ResourceGroupName =\"rg_ResourceGroup\",\n\n   [Parameter(Mandatory=$true)]\n\n   [string]$WorkspaceName = \"wp_WorkSpaceName\",\n\n   [Parameter(Mandatory=$true)]\n\n   [string]$Operation = \"op_Pause\"\n\n)\n\nBegin    {\n\nWrite-Output \"Connecting on $(Get-Date)\"\n\n#Connect to Azure using the Run As Account\n\nTry{\n\n$servicePrincipalConnection=Get-AutomationConnection -Name \"AzureRunAsConnection\"\n\nConnect-AzAccount  -ServicePrincipal -TenantId $servicePrincipalConnection.TenantId -ApplicationId $servicePrincipalConnection.ApplicationId -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint\n\n}\n\nCatch {\n\nif (!$servicePrincipalConnection){\n\n$ErrorMessage = \"Connection $connectionName not found.\"\n\nthrow $ErrorMessage\n\n} else{\n\nWrite-Output -Message $_.Exception\n\nthrow $_.Exception\n\n}\n\n}\n\n# Validation parameters\n\n$ArrayOperations = \"Pause\",\"Start\",\"Restart\"\n\nIf ($Operation -notin $ArrayOperations)\n\n{\n\nThrow \"Only Pause, Start, Restart Operations are valid\"\n\n}\n\n# Start\n\nWrite-Output \"Starting process on $(Get-Date)\"\n\nTry{\n\n$Status = Get-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Select-Object Status | Format-Table -HideTableHeaders | Out-String\n\n$Status = $Status -replace \"`t|`n|`r\",\"\"\n\nWrite-Output \"The current status is \"$Status.trim()\" on $(Get-Date)\"\n\n}\n\nCatch {\n\nWrite-Output $_.Exception\n\nthrow $_.Exception\n\n}\n\n# Start block\n\n# Start\n\nWrite-Output \"Starting $Operation on $(Get-Date)\"\n\nif(($Operation -eq \"Start\") -and ($Status.trim() -ne \"Online\")){\n\nWrite-Output \"Starting $Operation Operation\"\n\ntry\n\n{\n\nWrite-Output \"Starting on $(Get-Date)\"\n\nGet-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Resume-AzSynapseSqlPool\n\n}\n\ncatch\n\n{\n\nWrite-Output \"Error while executing \"$Operation\n\n}\n\n}\n\n# Pause block\n\nif(($Operation -eq \"Pause\") -and ($Status.trim() -ne \"Paused\")){\n\nwrite-Output \"Starting $Operation Operation\"\n\ntry\n\n{\n\nWrite-Output \"Pausing on $(Get-Date)\"\n\nGet-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Suspend-AzSynapseSqlPool\n\n}\n\ncatch\n\n{\n\nWrite-Output \"Error while executing \"$Operation\n\n}\n\n}\n\n# Restart block\n\nif(($Operation -eq \"Restart\") -and ($Status.trim() -eq \"Online\")){\n\nWrite-Output \"Starting $Operation Operation\"\n\ntry\n\n{\n\nWrite-Output \"Pausing on $(Get-Date)\"\n\nGet-AzSynapseSqlPool –ResourceGroupName 
$ResourceGroupName -WorkspaceName $WorkspaceName | Suspend-AzSynapseSqlPool\n\nWrite-Output \"Starting on $(Get-Date)\"\n\nGet-AzSynapseSqlPool –ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName | Resume-AzSynapseSqlPool\n\n}\n\ncatch\n\n{\n\nWrite-Output \"Error while executing \"$Operation\n\n}\n\n        }\n\n}\n\nEnd\n\n{\n\n# Exit\n\nWrite-Output \"Finished process on $(Get-Date)\"\n\n}\n\n\n\nAfter publishing, click on the start button and enter Synapse Analytics values, and a job will be created.\n\nthere are various options on the job page like its Status, Errors, Exceptions, etc. After completing the job, the Synapse will Resume/Pause through this Runbook.\n\nNow add a schedule for Runbook to completely Automate the process on schedule. On the Runbook page, click on the link to Schedule button in the ribbon and add Schedule and Configure the required parameters.\n\n\n", "\nYou can use the same tools to query dedicated or serverless pools, for example, invoke-sqlcmd with PowerShell, even sqlcmd. See also Connection strings for Synapse SQL.\n\nAs for azure cli, see az synapse and for powershell, see Az.Synapse.\n", "The best solution I found was to add a whole Azure AD group as a user on the database manually, then for each new user I'm creating, I automate their addition to the group with some basic Azure CLI commands on a DevOps pipeline rather than try with a SQL Script that adds them individually.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "azure", "azure_devops", "azure_synapse", "sql", "sql_server" ]
stackoverflow_0074533977_azure_azure_devops_azure_synapse_sql_sql_server.txt
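One way to run the CREATE USER / GRANT step itself from a pipeline, rather than the portal, is a short Python script over pyodbc. This is a sketch under several assumptions: the ODBC Driver 17 (or newer) for SQL Server is installed, its ActiveDirectoryServicePrincipal authentication option is available, and the script runs as an Azure AD admin of the database; GO is an SSMS batch separator rather than T-SQL, so the statements are sent individually.

import pyodbc

# Placeholder connection details -- adjust server, database and credentials.
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myworkspace.sql.azuresynapse.net;"
    "Database=MyDB;"
    "UID=<admin-app-client-id>;PWD=<admin-app-secret>;"
    "Authentication=ActiveDirectoryServicePrincipal;"
)

statements = [
    "CREATE USER [MY_SERVICE_PRINCIPAL] FROM EXTERNAL PROVIDER WITH DEFAULT_SCHEMA=[dbo]",
    "GRANT SELECT ON DATABASE :: MyDB TO [MY_SERVICE_PRINCIPAL]",
]

conn = pyodbc.connect(conn_str, autocommit=True)
for sql in statements:
    conn.execute(sql)  # pyodbc's Connection.execute opens a cursor internally
conn.close()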
Q: Contacts from People API limited to 100 contacts I see 168 Contacts in my Contacts tab at https://contacts.google.com/. However, when I use the People API to retrieve them, using this link: https://people.googleapis.com/v1/people/me/connections?personFields=names,emailAddresses,phoneNumbers,photos, I am only getting back 100 contacts. A: The reason why it was not working is that, by default, there is a limit to the number of contacts you get back. You need to include a pageSize parameter in the URL to get them all. Add &pageSize=2000 to the one above.
Contacts from People API limited to 100 contacts
I see 168 Contacts in my Contacts tab at https://contacts.google.com/. However, when I use the People API to retrieve them, using this link: https://people.googleapis.com/v1/people/me/connections?personFields=names,emailAddresses,phoneNumbers,photos, I am only getting back 100 contacts.
[ "The reason why it was not working is that by default there is a limit to the number of contacts you are getting back. You need to include a pageSize parameter in the URL to get all.\nadd &pageSize=2000 to the one above.\n" ]
[ 0 ]
[]
[]
[ "google_people_api" ]
stackoverflow_0074657098_google_people_api.txt
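A hedged Python sketch of the full fix: the connections.list default page size is 100, the documented maximum is 1000, and anything beyond one page is fetched by passing nextPageToken back as pageToken. The bearer token is a placeholder.

import requests

URL = "https://people.googleapis.com/v1/people/me/connections"
params = {
    "personFields": "names,emailAddresses,phoneNumbers,photos",
    "pageSize": 1000,  # documented maximum; the default is 100
}
headers = {"Authorization": "Bearer <access-token>"}  # placeholder OAuth token

contacts = []
while True:
    resp = requests.get(URL, params=params, headers=headers).json()
    contacts.extend(resp.get("connections", []))
    token = resp.get("nextPageToken")
    if not token:
        break
    params["pageToken"] = token

print(len(contacts))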
Q: (Windows) fatal error: sqlite3.h: No such file or directory I am trying to build a c++ application that uses sql. For that I need sqlite3 header. I have already installed sql in my system and sqlite3 in terminal gives: SQLite version 3.36.0 2021-06-18 18:36:39 Enter ".help" for usage hints. Connected to a transient in-memory database. Use ".open FILENAME" to reopen on a persistent database. sqlite> I have tried searching for this over web and found many relevant solutions including this. Since I am working on Windows, $ sudo apt-get install libsqlite3-dev did not work. I also tried to change #include <sqlite3.h> to #include "sqlite3.h" with sqlite.h file in the same directory as my cpp code file(as I found people using it in videos). But this time I ended up with ''' C:\Users\username\AppData\Local\Temp\ccwQfHZB.o:temp.cpp:(.text+0x1e): undefined reference to `sqlite3_open' collect2.exe: error: ld returned 1 exit status ''' I am quite new to it, so any help is appreciated. Thanks. A: So you clearly need a tutorial in how to use third party libraries. That's too much to describe here. But this might get you started. You have to tell your compiler where the header files you are including are located. Having them installed is not enough. Use the -I compiler option for that. You have to tell your compiler where the library files that you are linking with are located. Having them installed is not enough. Use the -L compiler option for that. You have to tell your compiler the names of the libraries that you are linking with. Having them installed is not enough. Use the -l compiler option for that. All these options may need to be given multiple times depending on your precise setup and the libraries you are using. Sorry but I don't know what the sqlite libraries are called, and obviously I don't know where you installed stuff. Moving header files around is never the right solution, and any video you saw that does that doesn't know what they are talking about. Install stuff where you think is best and then tell your compiler where that is. Any sqlite tutorial that uses the g++ compiler should go through this process in more detail, but there are a lot of bad C++ tutorials out there. A: I found the solution to this problem: included two files(*): sqlite3.0, sqlite3.h in same folder as my main code. added #include "sqlite3.h" compiled using the command: g++ sqlite3.o main.cpp -o main This finally resolved my error. *(these were downloaded from sqlite.org)
(Windows) fatal error: sqlite3.h: No such file or directory
I am trying to build a c++ application that uses sql. For that I need sqlite3 header. I have already installed sql in my system and sqlite3 in terminal gives: SQLite version 3.36.0 2021-06-18 18:36:39 Enter ".help" for usage hints. Connected to a transient in-memory database. Use ".open FILENAME" to reopen on a persistent database. sqlite> I have tried searching for this over web and found many relevant solutions including this. Since I am working on Windows, $ sudo apt-get install libsqlite3-dev did not work. I also tried to change #include <sqlite3.h> to #include "sqlite3.h" with sqlite.h file in the same directory as my cpp code file(as I found people using it in videos). But this time I ended up with ''' C:\Users\username\AppData\Local\Temp\ccwQfHZB.o:temp.cpp:(.text+0x1e): undefined reference to `sqlite3_open' collect2.exe: error: ld returned 1 exit status ''' I am quite new to it, so any help is appreciated. Thanks.
[ "So you clearly need a tutorial in how to use third party libraries. That's too much to describe here. But this might get you started.\n\nYou have to tell your compiler where the header files you are including are located. Having them installed is not enough. Use the -I compiler option for that.\n\nYou have to tell your compiler where the library files that you are linking with are located. Having them installed is not enough. Use the -L compiler option for that.\n\nYou have to tell your compiler the names of the libraries that you are linking with. Having them installed is not enough. Use the -l compiler option for that.\n\n\nAll these options may need to be given multiple times depending on your precise setup and the libraries you are using.\nSorry but I don't know what the sqlite libraries are called, and obviously I don't know where you installed stuff.\nMoving header files around is never the right solution, and any video you saw that does that doesn't know what they are talking about. Install stuff where you think is best and then tell your compiler where that is. Any sqlite tutorial that uses the g++ compiler should go through this process in more detail, but there are a lot of bad C++ tutorials out there.\n", "I found the solution to this problem:\n\nincluded two files(*): sqlite3.0, sqlite3.h in same folder as my main code.\nadded #include \"sqlite3.h\"\ncompiled using the command: g++ sqlite3.o main.cpp -o main\n\nThis finally resolved my error.\n*(these were downloaded from sqlite.org)\n" ]
[ 0, 0 ]
[]
[]
[ "c++", "sqlite" ]
stackoverflow_0074610092_c++_sqlite.txt
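The three flags from the first answer combine into one compile-and-link command; here it is driven from Python so each piece is labeled. The install locations are pure placeholders; point them at wherever sqlite3.h and the library actually live on your machine.

import subprocess

SQLITE_INC = r"C:\sqlite\include"  # hypothetical: directory containing sqlite3.h
SQLITE_LIB = r"C:\sqlite\lib"      # hypothetical: directory containing the library

subprocess.run(
    ["g++", "main.cpp",
     "-I" + SQLITE_INC,  # -I: where the compiler finds sqlite3.h
     "-L" + SQLITE_LIB,  # -L: where the linker looks for libraries
     "-lsqlite3",        # -l: the library name to link against
     "-o", "main"],
    check=True,
)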
Q: How to access command history in Python shell on Windows Terminal Bash? I sometimes want to experiment with Python code in the Python shell. In other languages (Haskell, F#) I'm used to be able to experiment in a REPL that supports command history. I start the Python shell from (Git) Bash running in Windows Terminal: $ py Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 1+2 3 >>> How do I repeat the last command, or scroll through the command history? I'm aware of this question, so I've already tried Alt + p, the arrow keys, and various combinations of those and Ctrl, Shift. Nothing works. Either nothing happens, or Ctrl + n just prints this: >>> ^N The arrow keys do work when using the Command Prompt (cmd) in Windows Terminal, but not when using Bash. A: In the Python shell, you can use the up and down arrow keys to scroll through the command history. This should work both in the Command Prompt and in Bash in Windows Terminal. If this does not work for you, you can try enabling command history in the Python shell by running the following commands: import readline readline.parse_and_bind('tab: complete') readline.parse_and_bind('set editing-mode vi') This will enable tab completion and set the editing mode to vi, which will allow you to use vi-style key bindings (such as k and j) to navigate the command history. Alternatively, you can use the %hist magic command to view the command history in the Python shell. This command takes an optional integer argument that specifies the number of commands to display (by default, it displays the last five commands): # Display the last five commands %hist # Display the last ten commands %hist 10 You can then copy and paste the commands you want to repeat from the output of the %hist command. Another option is to use a different shell that supports command history, such as the IPython shell. You can start the IPython shell by running the ipython command instead of the python command. The IPython shell supports command history and tab completion, and it also has additional features such as inline plotting and automatic indentation.
How to access command history in Python shell on Windows Terminal Bash?
I sometimes want to experiment with Python code in the Python shell. In other languages (Haskell, F#) I'm used to be able to experiment in a REPL that supports command history. I start the Python shell from (Git) Bash running in Windows Terminal: $ py Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 1+2 3 >>> How do I repeat the last command, or scroll through the command history? I'm aware of this question, so I've already tried Alt + p, the arrow keys, and various combinations of those and Ctrl, Shift. Nothing works. Either nothing happens, or Ctrl + n just prints this: >>> ^N The arrow keys do work when using the Command Prompt (cmd) in Windows Terminal, but not when using Bash.
[ "In the Python shell, you can use the up and down arrow keys to scroll through the command history. This should work both in the Command Prompt and in Bash in Windows Terminal.\nIf this does not work for you, you can try enabling command history in the Python shell by running the following commands:\nimport readline\nreadline.parse_and_bind('tab: complete')\nreadline.parse_and_bind('set editing-mode vi')\n\nThis will enable tab completion and set the editing mode to vi, which will allow you to use vi-style key bindings (such as k and j) to navigate the command history.\nAlternatively, you can use the %hist magic command to view the command history in the Python shell. This command takes an optional integer argument that specifies the number of commands to display (by default, it displays the last five commands):\n# Display the last five commands\n%hist\n\n# Display the last ten commands\n%hist 10\n\nYou can then copy and paste the commands you want to repeat from the output of the %hist command.\nAnother option is to use a different shell that supports command history, such as the IPython shell. You can start the IPython shell by running the ipython command instead of the python command. The IPython shell supports command history and tab completion, and it also has additional features such as inline plotting and automatic indentation.\n" ]
[ 1 ]
[]
[]
[ "bash", "python", "windows_terminal" ]
stackoverflow_0074665663_bash_python_windows_terminal.txt
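One caveat about the readline suggestion above: stock CPython on Windows ships without the readline module, so import readline raises ImportError there. A hedged workaround is installing pyreadline3 (which, to my knowledge, provides a readline substitute on Windows) or using IPython, whose prompt has history built in. A quick probe:

import sys

try:
    import readline  # absent on stock Windows CPython
    print("readline is available; arrow-key history should work")
except ImportError:
    print("no readline module on", sys.platform)
    print("try:  pip install pyreadline3   # readline substitute for Windows")
    print("or:   pip install ipython       # REPL with built-in history")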
Q: Golang byte vs string I'm trying to find a common element between two strings of equal length in Golang. The element is found, but the string representation seems to include the byte value too. How can I get rid of it? func main() { println(fmt.Printf("common element = %s", findCommonElement("abcdefghi", "ijklmnopq"))) } func findCommonElement(firstElements, secondElements string) string { elementsInFirstGroup := make(map[string]bool) for _, charValue := range firstElements { elementsInFirstGroup[string(charValue)] = true } for index := range firstElements { if _, ok := elementsInFirstGroup[string(secondElements[index])]; ok { matchingElem := secondElements[index] println(string(matchingElem)) return string(matchingElem) } } panicMessage := fmt.Sprintf("Could not find a common item between %s and %s", firstElements, secondElements) panic(panicMessage) } The output I get is i common element = i18 (0x0,0x0) Code available here A: You should use fmt.Sprintf instead of fmt.Printf. And avoid using the builtin println, instead use fmt.Println. https://pkg.go.dev/[email protected]#Printf func Printf(format string, a ...any) (n int, err error) Printf formats according to a format specifier and writes to standard output. It returns the number of bytes written and any write error encountered. Hence the 18 (0x0,0x0)... 18 is the number of characters in the string "common element = i". (0x0,0x0) is the nil error value as printed by println. More importantly however, your algorithm is flawed because it is mixing up bytes with runes. When you range over a string, the iteration variable charValue will be assigned a rune, which may be multi-byte. However when you index a string (e.g., secondElements[index]) the result of that is always a single byte. So panics or gibberish (invalid bytes) will inevitably be the result of your function. See example. You may get better results doing something like the following: func findCommonElement(firstElements, secondElements string) string { second := map[rune]bool{} for _, r := range secondElements { second[r] = true } for _, r := range firstElements { if second[r] { fmt.Println(string(r)) return string(r) } } panic("...") } https://go.dev/play/p/kNotThOrehj
Golang byte vs string
I'm trying to find a common element between two strings of equal length in Golang. The element is found, but the string representation seems to include the byte value too. How can I get rid of it? func main() { println(fmt.Printf("common element = %s", findCommonElement("abcdefghi", "ijklmnopq"))) } func findCommonElement(firstElements, secondElements string) string { elementsInFirstGroup := make(map[string]bool) for _, charValue := range firstElements { elementsInFirstGroup[string(charValue)] = true } for index := range firstElements { if _, ok := elementsInFirstGroup[string(secondElements[index])]; ok { matchingElem := secondElements[index] println(string(matchingElem)) return string(matchingElem) } } panicMessage := fmt.Sprintf("Could not find a common item between %s and %s", firstElements, secondElements) panic(panicMessage) } The output I get is i common element = i18 (0x0,0x0) Code available here
[ "You should use fmt.Sprintf instead of fmt.Printf.\nAnd avoid using the builtin println, instead use fmt.Println.\n\nhttps://pkg.go.dev/[email protected]#Printf\nfunc Printf(format string, a ...any) (n int, err error)\n\n\nPrintf formats according to a format specifier and writes to standard output. It returns the number of bytes written and any write error encountered.\n\nHence the 18 (0x0,0x0)...\n\n18 is the number of characters in the string \"common element = i\".\n(0x0,0x0) is the nil error value as printed by println.\n\n\nMore importantly however, your algorithm is flawed because it is mixing up bytes with runes. When you range over a string, the iteration variable charValue will be assigned a rune, which may be multi-byte. However when you index a string (e.g., secondElements[index]) the result of that is always a single byte. So panics or gibberish (invalid bytes) will inevitably be the result of your function. See example.\nYou may get better results doing something like the following:\nfunc findCommonElement(firstElements, secondElements string) string {\n second := map[rune]bool{}\n for _, r := range secondElements {\n second[r] = true\n }\n\n for _, r := range firstElements {\n if second[r] {\n fmt.Println(string(r))\n return string(r)\n }\n }\n\n panic(\"...\")\n}\n\nhttps://go.dev/play/p/kNotThOrehj\n" ]
[ 2 ]
[]
[]
[ "go" ]
stackoverflow_0074665455_go.txt
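To make the byte-versus-rune distinction from the accepted answer concrete, here is a small self-contained sketch (the string literal is only an example):

package main

import "fmt"

func main() {
	s := "héllo"
	// Indexing a string yields single bytes: "é" is two bytes in UTF-8,
	// so s[1] is only the first half of that code point.
	fmt.Println(s[1]) // 195, the first byte of "é"
	// Ranging over a string yields whole runes (Unicode code points).
	for i, r := range s {
		fmt.Printf("byte index %d: %q\n", i, r)
	}
}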
Q: Integrate Odoo 15 with Django I'm trying to build a Django REST API project that retrieves data from Odoo; for that I first need to connect to the Odoo database. Any idea of how to do that? A: You can use the Odoo External API via XML-RPC; you can find more details in the Odoo documentation links below: Odoo External API Web Services
Integrate Odoo 15 with Django
I'm trying to build a Django REST API project that retrieves data from Odoo; for that I first need to connect to the Odoo database. Any idea of how to do that?
[ "You can use Odoo External API using XML-RPC and you can find more details in below Odoo documentation links:\nOdoo External API\nWeb Services\n" ]
[ 0 ]
[]
[]
[ "django", "odoo" ]
stackoverflow_0074664698_django_odoo.txt
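A minimal sketch of the XML-RPC flow described in the answer, using Python's standard library; the URL, database name, and credentials are placeholders you would replace with your own:

import xmlrpc.client

URL = "https://my-odoo-host"  # hypothetical Odoo instance
DB, USER, PASSWORD = "mydb", "admin", "secret"  # hypothetical credentials

# Authenticate against the common endpoint to get a user id.
common = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/common")
uid = common.authenticate(DB, USER, PASSWORD, {})

# Call model methods through the object endpoint.
models = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/object")
partners = models.execute_kw(
    DB, uid, PASSWORD,
    "res.partner", "search_read",
    [[["is_company", "=", True]]],
    {"fields": ["name"], "limit": 5},
)
print(partners)

A Django view could run this query and serialize the result through Django REST Framework.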
Q: correct glsl affine texture mapping i'm trying to code correct 2D affine texture mapping in GLSL. Explanation: ...NONE of this images is correct for my purposes. Right (labeled Correct) has perspective correction which i do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for. I'd like to simply "stretch" texture inside quadrilateral, something like this: but composed from two triangles. Any advice (GLSL) please? A: This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package. GLSL: varying vec2 shiftedPosition, width_height; #ifdef VERTEX void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed. width_height = gl_MultiTexCoord1.xy; } #endif #ifdef FRAGMENT uniform sampler2D _MainTex; void main() { gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height); } #endif C#: // Zero out the left and bottom edges, // leaving a right trapezoid with two sides on the axes and a vertex at the origin. var shiftedPositions = new Vector2[] { Vector2.zero, new Vector2(0, vertices[1].y - vertices[0].y), new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y), new Vector2(vertices[3].x - vertices[0].x, 0) }; mesh.uv = shiftedPositions; var widths_heights = new Vector2[4]; widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x; widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x; widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y; widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y; mesh.uv2 = widths_heights; A: I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL maybe of help. There's a working demo in java (that runs on Android), but is compact and readable and should be easily portable to unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0 A: In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3), an arbitrary pixel at (x,y) and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, Vertex and Pixel shaders; I've commented accordingly in the code. float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C; //These are all uniforms for a given quad. Calculate on CPU. Ax = (x3 - x0) - (x2 - x1); Bx = (x0 - x1); Cx = (x2 - x1); Dx = x1; Ay = (y3 - y0) - (y2 - y1); By = (y0 - y1); Cy = (y2 - y1); Dy = y1; float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy); float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy); A = (Ay*Cx)-(Ax*Cy); //These must be calculated per-vertex, and passed through as interpolated values to the pixel-shader B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x); C = (Bx * y) + ByDx_minus_BxDy - (By * x); //These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms. u = ((-B) - Mathf.Sqrt((B*B-(4.0f*A*C))))/(A*2.0f); v = (x - (u * Cx) - Dx)/((u*Ax)+Bx); A: Tessellation solves this problem. Subdividing quad vertex adds hints to interpolate pixels. Check out this link. 
https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be A: I had similar question ( https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871 ) , and at gamedev they suggested using imaginary Z coord, which I calculate using the following C code, which appears to be working in general case (not just trapezoids): //usual euclidean distance float distance(int ax, int ay, int bx, int by) { int x = ax-bx; int y = ay-by; return sqrtf((float)(x*x + y*y)); } void gfx_quad(gfx_t *dst //destination texture, we are rendering into ,gfx_t *src //source texture ,int *quad // quadrilateral vertices ) { int *v = quad; //quad vertices float z = 20.0; float top = distance(v[0],v[1],v[2],v[3]); //top float bot = distance(v[4],v[5],v[6],v[7]); //bottom float lft = distance(v[0],v[1],v[4],v[5]); //left float rgt = distance(v[2],v[3],v[6],v[7]); //right // By default all vertices lie on the screen plane float az = 1.0; float bz = 1.0; float cz = 1.0; float dz = 1.0; // Move Z from screen, if based on distance ratios. if (top<bot) { az *= top/bot; bz *= top/bot; } else { cz *= bot/top; dz *= bot/top; } if (lft<rgt) { az *= lft/rgt; cz *= lft/rgt; } else { bz *= rgt/lft; dz *= rgt/lft; } // draw our quad as two textured triangles gfx_textured(dst, src , v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz , 0.0,0.0, 1.0,0.0, 0.0,1.0); gfx_textured(dst, src , v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz , 1.0,0.0, 0.0,1.0, 1.0,1.0); } I'm doing it in software to scale and rotate 2d sprites, and for OpenGL 3d app you will need to do it in pixel/fragment shader, unless you will be able to map these imaginary az,bz,cz,dz into your actual 3d space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers A: I came up with this issue as I was trying to implement a homography warping in OpenGL. Some of the solutions that I found relied on a notion of depth, but this was not feasible in my case since I am working on 2D coordinates. I based my solution on this article, and it seems to work for all cases that I could try. I am leaving it here in case it is useful for someone else as I could not find something similar. The solution makes the following assumptions: The vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order. The coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner). std::vector<cv::Point2f> points; // Convert points to homogeneous coordinates to simplify the problem. Eigen::Vector3f p0(points[0].x, points[0].y, 1); Eigen::Vector3f p1(points[1].x, points[1].y, 1); Eigen::Vector3f p2(points[2].x, points[2].y, 1); Eigen::Vector3f p3(points[3].x, points[3].y, 1); // Compute the intersection point between the lines described by opposite vertices using cross products. Normalization is only required at the end. // See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach. auto line1 = p2.cross(p0); auto line2 = p3.cross(p1); auto intersection = line1.cross(line2); intersection = intersection / intersection(2); // Compute distance to each point. for (const auto &pt : points) { auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) + std::pow(pt.y - intersection(1), 2)); distances.push_back(distance); } // Assumes same order as above. 
std::vector<cv::Point2f> texture_coords_unnormalized = { {1.0f, 1.0f}, {1.0f, 0.0f}, {0.0f, 0.0f}, {0.0f, 1.0f} }; std::vector<float> texture_coords; for (int i = 0; i < texture_coords_unnormalized.size(); ++i) { float u_i = texture_coords_unnormalized[i].x; float v_i = texture_coords_unnormalized[i].y; float d_i = distances.at(i); float d_i_2 = distances.at((i + 2) % 4); float scale = (d_i + d_i_2) / d_i_2; texture_coords.push_back(u_i*scale); texture_coords.push_back(v_i*scale); texture_coords.push_back(scale); } Pass the texture coordinates to your shader (use vec3). Then: gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0); A: Thanks for the answers, but after experimenting I found a solution. The two triangles on the left have UV (STRQ) coordinates set according to this, and the two triangles on the right are a modified version of this perspective correction. Numbers and shader: tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)] tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)] d1 = length of top edge = 2 d2 = length of bottom edge = 1 tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)] tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)] Only the right triangles are rendered using this GLSL shader (on the left is the fixed pipeline): void main() { gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y)); } So only U is perspective-corrected and V is linear.
correct glsl affine texture mapping
i'm trying to code correct 2D affine texture mapping in GLSL. Explanation: ...NONE of this images is correct for my purposes. Right (labeled Correct) has perspective correction which i do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for. I'd like to simply "stretch" texture inside quadrilateral, something like this: but composed from two triangles. Any advice (GLSL) please?
[ "This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package.\nGLSL:\nvarying vec2 shiftedPosition, width_height;\n\n#ifdef VERTEX\nvoid main() {\n gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.\n width_height = gl_MultiTexCoord1.xy;\n}\n#endif\n\n#ifdef FRAGMENT\nuniform sampler2D _MainTex;\nvoid main() {\n gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);\n}\n#endif\n\nC#:\n// Zero out the left and bottom edges, \n// leaving a right trapezoid with two sides on the axes and a vertex at the origin.\nvar shiftedPositions = new Vector2[] {\n Vector2.zero,\n new Vector2(0, vertices[1].y - vertices[0].y),\n new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),\n new Vector2(vertices[3].x - vertices[0].x, 0)\n};\nmesh.uv = shiftedPositions;\n\nvar widths_heights = new Vector2[4];\nwidths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;\nwidths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;\nwidths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;\nwidths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;\nmesh.uv2 = widths_heights;\n\n", "I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL maybe of help. There's a working demo in java (that runs on Android), but is compact and readable and should be easily portable to unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0\n", "In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3), an arbitrary pixel at (x,y) and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, Vertex and Pixel shaders; I've commented accordingly in the code.\n float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C;\n\n //These are all uniforms for a given quad. Calculate on CPU.\n Ax = (x3 - x0) - (x2 - x1);\n Bx = (x0 - x1);\n Cx = (x2 - x1);\n Dx = x1;\n\n Ay = (y3 - y0) - (y2 - y1);\n By = (y0 - y1);\n Cy = (y2 - y1);\n Dy = y1;\n\n float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);\n float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);\n\n A = (Ay*Cx)-(Ax*Cy);\n\n //These must be calculated per-vertex, and passed through as interpolated values to the pixel-shader \n B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);\n C = (Bx * y) + ByDx_minus_BxDy - (By * x);\n\n //These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.\n u = ((-B) - Mathf.Sqrt((B*B-(4.0f*A*C))))/(A*2.0f);\n v = (x - (u * Cx) - Dx)/((u*Ax)+Bx);\n\n\n", "Tessellation solves this problem. Subdividing quad vertex adds hints to interpolate pixels. 
\nCheck out this link.\nhttps://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be\n", "I had similar question ( https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871 ) , and at gamedev they suggested using imaginary Z coord, which I calculate using the following C code, which appears to be working in general case (not just trapezoids):\n//usual euclidean distance\nfloat distance(int ax, int ay, int bx, int by) {\n int x = ax-bx;\n int y = ay-by;\n return sqrtf((float)(x*x + y*y));\n}\n\nvoid gfx_quad(gfx_t *dst //destination texture, we are rendering into\n ,gfx_t *src //source texture\n ,int *quad // quadrilateral vertices\n )\n{\n int *v = quad; //quad vertices\n float z = 20.0;\n float top = distance(v[0],v[1],v[2],v[3]); //top\n float bot = distance(v[4],v[5],v[6],v[7]); //bottom\n float lft = distance(v[0],v[1],v[4],v[5]); //left\n float rgt = distance(v[2],v[3],v[6],v[7]); //right\n\n // By default all vertices lie on the screen plane\n float az = 1.0;\n float bz = 1.0;\n float cz = 1.0;\n float dz = 1.0;\n\n // Move Z from screen, if based on distance ratios.\n if (top<bot) {\n az *= top/bot;\n bz *= top/bot;\n } else {\n cz *= bot/top;\n dz *= bot/top;\n }\n\n if (lft<rgt) {\n az *= lft/rgt;\n cz *= lft/rgt;\n } else {\n bz *= rgt/lft;\n dz *= rgt/lft;\n }\n\n // draw our quad as two textured triangles\n gfx_textured(dst, src\n , v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz\n , 0.0,0.0, 1.0,0.0, 0.0,1.0);\n gfx_textured(dst, src\n , v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz\n , 1.0,0.0, 0.0,1.0, 1.0,1.0);\n}\n\nI'm doing it in software to scale and rotate 2d sprites, and for OpenGL 3d app you will need to do it in pixel/fragment shader, unless you will be able to map these imaginary az,bz,cz,dz into your actual 3d space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers\n", "I came up with this issue as I was trying to implement a homography warping in OpenGL. Some of the solutions that I found relied on a notion of depth, but this was not feasible in my case since I am working on 2D coordinates.\nI based my solution on this article, and it seems to work for all cases that I could try. I am leaving it here in case it is useful for someone else as I could not find something similar. The solution makes the following assumptions:\n\nThe vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order.\nThe coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner).\n\nstd::vector<cv::Point2f> points;\n// Convert points to homogeneous coordinates to simplify the problem.\nEigen::Vector3f p0(points[0].x, points[0].y, 1);\nEigen::Vector3f p1(points[1].x, points[1].y, 1);\nEigen::Vector3f p2(points[2].x, points[2].y, 1);\nEigen::Vector3f p3(points[3].x, points[3].y, 1);\n\n// Compute the intersection point between the lines described by opposite vertices using cross products. 
Normalization is only required at the end.\n// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach.\nauto line1 = p2.cross(p0);\nauto line2 = p3.cross(p1);\nauto intersection = line1.cross(line2);\nintersection = intersection / intersection(2);\n\n// Compute distance to each point.\nfor (const auto &pt : points) {\n auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +\n std::pow(pt.y - intersection(1), 2));\n distances.push_back(distance);\n}\n\n// Assumes same order as above.\nstd::vector<cv::Point2f> texture_coords_unnormalized = {\n {1.0f, 1.0f},\n {1.0f, 0.0f},\n {0.0f, 0.0f},\n {0.0f, 1.0f}\n};\n\nstd::vector<float> texture_coords;\nfor (int i = 0; i < texture_coords_unnormalized.size(); ++i) {\n float u_i = texture_coords_unnormalized[i].x;\n float v_i = texture_coords_unnormalized[i].y;\n float d_i = distances.at(i);\n float d_i_2 = distances.at((i + 2) % 4);\n float scale = (d_i + d_i_2) / d_i_2;\n\n texture_coords.push_back(u_i*scale);\n texture_coords.push_back(v_i*scale);\n texture_coords.push_back(scale);\n}\n\nPass the texture coordinates to your shader (use vec3). Then:\ngl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);\n\n", "Thanks for the answers, but after experimenting I found a solution.\n\nThe two triangles on the left have UV (STRQ) coordinates set according to this, and the two triangles on the right are a modified version of this perspective correction.\nNumbers and shader:\ntri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]\ntri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]\n\nd1 = length of top edge = 2\nd2 = length of bottom edge = 1\n\ntri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]\ntri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]\n\nOnly the right triangles are rendered using this GLSL shader (on the left is the fixed pipeline):\nvoid main()\n{\n gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));\n}\n\nSo only U is perspective-corrected and V is linear.\n" ]
[ 7, 2, 2, 0, 0, 0, -2 ]
[]
[]
[ "glsl", "math", "opengl", "texture_mapping" ]
stackoverflow_0012414708_glsl_math_opengl_texture_mapping.txt
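The last two answers both amount to reintroducing a per-vertex homogeneous weight q. As a hypothetical minimal fragment shader, assuming a vec3 uvq = (u*q, v*q, q) is computed per vertex on the CPU (for example from the scale factors in the homography answer) and passed through a varying:

varying vec3 uvq;           // (u * q, v * q, q), interpolated linearly in screen space
uniform sampler2D colormap;

void main()
{
    // Dividing by the interpolated q recovers the projectively correct coordinates.
    gl_FragColor = texture2D(colormap, uvq.xy / uvq.z);
}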
Q: Are angular material and flex layout compatible with each other? I use the latest Angular and Material (15), but I also want to use Flex Layout. A: I have the same problem. I tried to port an Angular 14 app to 15 today and I faced several issues. A typical Angular/Flex Layout/Angular Material app, one that also uses theming to be more precise. I had to use the --force flag as already pointed out There was an issue with a sass warning, which was apparently fixed with version 15.0.2 of Angular The app compiles but there is an issue related to theming. I decided to stay on version 14 until it's possible to upgrade without issues. However, if you want to be able to upgrade to later versions in the future you should move away from Flex Layout. A: According to this blog post from the Angular team, @angular/flex-layout will no longer receive any updates. The same blog post gives guidance on how to replace it.
Are angular material and flex layout compatible with each other?
I use the latest Angular and Material (15), but I also want to use Flex Layout.
[ "I have the same problem. I tried to port an Angular 14 app to 15 today and I faced several issues. A typical Angular/Flex Layout/Angular Material app, that also uses theming to be more precise.\n\nI had to use the --force flat as already pointed out\nThere was an issue with a sass warning, which was apparently fixed with version 15.0.2 of Angular\nThe app compiles but there is an issue related to theming.\n\nI decided to go stay on version 14 until it's possible to upgrade without issues. However, if you want to be able to upgrade to later versions in the future you should move away from Flex Layout.\n", "According to this blog post from the Angular team, @angular/flex-layout will no longer receive any updates. The same blog post gives guidance how to replace it.\n" ]
[ 1, 1 ]
[ "Yes they are compatible, just try it out\n" ]
[ -2 ]
[ "angular", "angular15", "angular_flex_layout", "angular_material" ]
stackoverflow_0074627184_angular_angular15_angular_flex_layout_angular_material.txt
Q: showing results of tcp-variants-comparison.cc under ns3 3.28 I am looking for a way to show the results of the file "tcp-variants-comparison.cc" under ns3 (3.28) used with Ubuntu 18.04. I found here an old topic from 2013, but it seems not to work correctly in my current environment. P.S: I am a newbie in ns3, so i will appreciate any help. regards cedkhader A: Running ./waf --run "tcp-variants-comparison --tracing=1" yields the following files: -rw-rw-r-- 1 112271415 Aug 5 15:52 TcpVariantsComparison-ascii -rw-rw-r-- 1 401623 Aug 5 15:52 TcpVariantsComparison-cwnd.data -rw-rw-r-- 1 1216177 Aug 5 15:52 TcpVariantsComparison-inflight.data -rw-rw-r-- 1 947619 Aug 5 15:52 TcpVariantsComparison-next-rx.data -rw-rw-r-- 1 955550 Aug 5 15:52 TcpVariantsComparison-next-tx.data -rw-rw-r-- 1 38 Aug 5 15:51 TcpVariantsComparison-rto.data -rw-rw-r-- 1 482134 Aug 5 15:52 TcpVariantsComparison-rtt.data -rw-rw-r-- 1 346427 Aug 5 15:52 TcpVariantsComparison-ssth.data You can use other command line arguments to generate the desired output, see list below. Program Arguments: --transport_prot: Transport protocol to use: TcpNewReno, TcpHybla, TcpHighSpeed, TcpHtcp, TcpVegas, TcpScalable, TcpVeno, TcpBic, TcpYeah, TcpIllinois, TcpWestwood, TcpWestwoodPlus, TcpLedbat [TcpWestwood] --error_p: Packet error rate [0] --bandwidth: Bottleneck bandwidth [2Mbps] --delay: Bottleneck delay [0.01ms] --access_bandwidth: Access link bandwidth [10Mbps] --access_delay: Access link delay [45ms] --tracing: Flag to enable/disable tracing [true] --prefix_name: Prefix of output trace file [TcpVariantsComparison] --data: Number of Megabytes of data to transmit [0] --mtu: Size of IP packets to send in bytes [400] --num_flows: Number of flows [1] --duration: Time to allow flows to run in seconds [100] --run: Run index (for setting repeatable seeds) [0] --flow_monitor: Enable flow monitor [false] --pcap_tracing: Enable or disable PCAP tracing [false] --queue_disc_type: Queue disc type for gateway (e.g. ns3::CoDelQueueDisc) [ns3::PfifoFastQueueDisc] --sack: Enable or disable SACK option [true] A: in ns3.36.1 I used this command ./ns3 run examples/tcp/tcp-variants-comparison.cc -- --tracing=1 and output look like this TcpVariantsComparison-ascii TcpVariantsComparison-cwnd.data TcpVariantsComparison-inflight.data TcpVariantsComparison-next-rx.data TcpVariantsComparison-next-tx.data TcpVariantsComparison-rto.data TcpVariantsComparison-rtt.data TcpVariantsComparison-ssth.data
showing results of tcp-variants-comparison.cc under ns3 3.28
I am looking for a way to show the results of the file "tcp-variants-comparison.cc" under ns3 (3.28) used with Ubuntu 18.04. I found here an old topic from 2013, but it seems not to work correctly in my current environment. P.S: I am a newbie in ns3, so i will appreciate any help. regards cedkhader
[ "Running ./waf --run \"tcp-variants-comparison --tracing=1\" yields the following files:\n-rw-rw-r-- 1 112271415 Aug 5 15:52 TcpVariantsComparison-ascii\n-rw-rw-r-- 1 401623 Aug 5 15:52 TcpVariantsComparison-cwnd.data\n-rw-rw-r-- 1 1216177 Aug 5 15:52 TcpVariantsComparison-inflight.data\n-rw-rw-r-- 1 947619 Aug 5 15:52 TcpVariantsComparison-next-rx.data\n-rw-rw-r-- 1 955550 Aug 5 15:52 TcpVariantsComparison-next-tx.data\n-rw-rw-r-- 1 38 Aug 5 15:51 TcpVariantsComparison-rto.data\n-rw-rw-r-- 1 482134 Aug 5 15:52 TcpVariantsComparison-rtt.data\n-rw-rw-r-- 1 346427 Aug 5 15:52 TcpVariantsComparison-ssth.data\n\nYou can use other command line arguments to generate the desired output, see list below.\nProgram Arguments:\n --transport_prot: Transport protocol to use: TcpNewReno, TcpHybla, TcpHighSpeed, TcpHtcp, TcpVegas, TcpScalable, TcpVeno, TcpBic, TcpYeah, TcpIllinois, TcpWestwood, TcpWestwoodPlus, TcpLedbat [TcpWestwood]\n --error_p: Packet error rate [0]\n --bandwidth: Bottleneck bandwidth [2Mbps]\n --delay: Bottleneck delay [0.01ms]\n --access_bandwidth: Access link bandwidth [10Mbps]\n --access_delay: Access link delay [45ms]\n --tracing: Flag to enable/disable tracing [true]\n --prefix_name: Prefix of output trace file [TcpVariantsComparison]\n --data: Number of Megabytes of data to transmit [0]\n --mtu: Size of IP packets to send in bytes [400]\n --num_flows: Number of flows [1]\n --duration: Time to allow flows to run in seconds [100]\n --run: Run index (for setting repeatable seeds) [0]\n --flow_monitor: Enable flow monitor [false]\n --pcap_tracing: Enable or disable PCAP tracing [false]\n --queue_disc_type: Queue disc type for gateway (e.g. ns3::CoDelQueueDisc) [ns3::PfifoFastQueueDisc]\n --sack: Enable or disable SACK option [true]\n\n", "in ns3.36.1 I used this command\n./ns3 run examples/tcp/tcp-variants-comparison.cc -- --tracing=1\n\nand output look like this\nTcpVariantsComparison-ascii \nTcpVariantsComparison-cwnd.data \nTcpVariantsComparison-inflight.data \nTcpVariantsComparison-next-rx.data \nTcpVariantsComparison-next-tx.data \nTcpVariantsComparison-rto.data \nTcpVariantsComparison-rtt.data \nTcpVariantsComparison-ssth.data\n\n" ]
[ 0, 0 ]
[]
[]
[ "ns_3" ]
stackoverflow_0051529406_ns_3.txt
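To actually plot the traces, a short matplotlib sketch works; it assumes each .data file holds two whitespace-separated columns (time and value), so check the first lines of your file to confirm the layout:

import matplotlib.pyplot as plt

# Read the congestion-window trace produced by the example.
times, cwnd = [], []
with open("TcpVariantsComparison-cwnd.data") as f:
    for line in f:
        t, v = line.split()[:2]
        times.append(float(t))
        cwnd.append(float(v))

plt.plot(times, cwnd)
plt.xlabel("time (s)")
plt.ylabel("cwnd")
plt.title("TCP congestion window over time")
plt.show()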
Q: Anchor tag not working for one navbar element All navbar links work the way I like them to, except 'contact'. I am wondering whether it is to do with the parallax effect? I would need to click 'contact' three times in the navbar to get it to the contact section. I've tried adding the div tag at the top of the contact title like I have done with the about and service section. I've also tried to anchor different things surrounding the contact section, but nothing. I've pasted my entire page so someone can visually see what's going on and to see what could be the problem. Happy Hunting! <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <link rel="stylesheet" href="style.css"> <title id="title">Fantasy Book Covers</title> </head> <body> <div class="black_background" id="black_background"> <div id="quote"> <div id="fadein_1"> <p id="beginning_quote">"Each touch...</p> </div> <div id="fadein_2"> <p id="ending_quote">brings the magic to life"</p> </div> </div> </div> <div id="intro_images"> <img id="introimg_1" src="https://www.dropbox.com/s/yh6apdncisdir7j/Pasted%20Graphic%2018.new.png?dl=1"/> <img id="introimg_2" src="https://www.dropbox.com/s/4tk8mnv24s8ulzc/middle%20picture.new.png?dl=1"/> <img id="introimg_3" src="https://www.dropbox.com/s/q9kgm0rjfcbfawy/Pasted%20Graphic%2017.new.png?dl=1"/> </div> <div id="nav_bar"> <a href="#about_top" id="about"><p>About</p></a> <a href="#traditional_top" id="service"><p>Service</p></a> <a href="#black_background" id="home_button"><div></div></a> <a href="FAQs_page/index.html" id="faqs"><p >FAQs</p></a> <a href="#contact_title" id="contact"><p>Contact</p></a> </div> <div id="about_top"></div> <div class="about_section" id="about_section"> <p>We believe as fantasy-book enthusiasts that to truly experience the magic of a book, one must “feel” its essence. Our job is to transport you to magical worlds with our</p> <p id="italic"> <i>handmade, leather book covers.</i></p> <p id="about_2">Choose any fantasy-based book, from any author, and we will bring their story to life. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p> </div> <div id="traditional_top"></div> <div class="traditional_section" id="traditional_section"> <a href="Traditional_product_page/index.html" target="_blank" ><img id="traditional_image" src="https://www.dropbox.com/s/a7kmy309tc0sb0a/hex.small_main.png?dl=1"/></a> <div id="traditional_p"> <p id="traditional_title">Traditional</p> <p id="lorem_1">Lorem ipsum dolor sit amet,consectetur <br> adipiscingelit, sed do eiusmod tempor<br>incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.Duis aute irure dolor in reprehenderit in voluptate velit esse cillum</p> </div> </div> <div class="ornamented_section"> <a href="Ornamented_product_page/index.html" target="_blank"><img id="ornamented_image" src="https://www.dropbox.com/s/7pvg6h0q66zj26d/lapis.small_main.png?dl=1"/></a> <div id="ornamented_p"> <p id="ornamented_title">Ornamented</p> <p id="lorem_2">Lorem ipsum dolor sitamet,consectetur<br> adipiscing elit, sed do eiusmod tempor<br>incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p> </div> </div> <div id="custom_button"> <p id="custom_p">CUSTOM</p> </div> <div id="review_section"> <img id="review_1" src="https://www.dropbox.com/s/kh56ws33vptqv5b/Pasted%20Graphic.1.png?dl=1"/> <div id="review1_parallax"> <p id="review_1_p">“Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor inliqua”</p> </div> <div id="quote1_parallax"> <img id="quote_mark_1" src="https://www.dropbox.com/s/qrsrhsz3q84dh4y/top_quote_mark.png?dl=1"/> </div> <img id="review_2" src="https://www.dropbox.com/s/ulkcvgxp4v0ugg4/Pasted%20Graphic.2.png?dl=1"/> <div id="review2_parallax"> <p id="review_2_p">“Lorem ipsum dolor sit amet, consectetur adipiscing elit.”</p> </div> <img id="review_3" src="https://www.dropbox.com/s/am9g54a8i36wnd0/Pasted%20Graphic.3.png?dl=1"/> <div id="review3_parallax"> <p id="review_3_p">“Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor inliqua”</p> </div> <div id="quote2_parallax"> <img id="quote_mark_2" src="https://www.dropbox.com/s/m253oh24ta5qpc9/bottom_quote_mark.png?dl=1"/> </div> <img id="review_4" src="https://www.dropbox.com/s/avjhb64toauq1mq/Pasted%20Graphic.4.png?dl=1"/> <div id="review4_parallax"> <p id="review_4_p">“Lorem ipsum dolor sit ame!!!!.”</p> </div> </div> <form id="contact_section"> <div id="unhide"> <p id="thank_you">Thank you for contacting us</p> <p id="response">We will get back to you as soon as possible</p> </div> <div id="hide"> <p id="contact_title">CONTACT</p> <input id="first_name_input" type="text" placeholder="First Name" required></input> <input id="last_name_input" type="text" placeholder="Last Name" required></input> <input id="email_input" type="email" placeholder="Email" required></input> <textarea required></textarea> <button type="submit" id="submit_button"> <div id="SEND"> <p id="S">S</p> <p id="E">E</p> <p id="N">N</p> <p id="D">D</p> </div> </button> </div> </form> </body> <script src='main.js'></script> </html> css: body{ margin:0; padding:0; overflow-x:hidden; } .black_background{ position:relative; margin-left:-2%; margin-top:-1%; width: 1461px; height: 365px; background: #000000; } #quote{ position:absolute; margin-top: 1%; width:100%; } #beginning_quote{ margin-left: 20%; margin-top: 8%; font-family: Baskerville; font-size: 48px; color: #995DBE; animation: animate; } #beginning_quote::first-letter{ font-size: 70px; } #ending_quote{ margin-left: 50%; margin-top: -3%; font-family: Baskerville; font-size: 48px; color: #DDA5FE; } #fadein_1{ animation: fadeIn 4s; } @keyframes fadeIn{ 0%{opacity:0;} 100%{opacity:1;} } #fadein_2{ animation: fadeIn 4s forwards; animation-delay: 1.5s; opacity: 0%; } @keyframes fadeIn{ 0%{opacity:0;} 100%{opacity:1;} } #intro_images{ position:relative; display:flex; justify-content: space-between; margin-left: 4%; margin-right: 4%; margin-top: 4%; } 
#introimg_1, #introimg_2, #introimg_3{ animation: fadeIn 4s forwards; opacity: 0%; } #nav_bar{ position: relative; margin-top: 10%; margin-left:-0.5rem; width:1445px; display: flex; justify-content: space-evenly; background-color:white; height:13%; position:sticky; top:0; z-index:3; } #about, #service, #faqs, #contact{ text-decoration: none; font-size: 20px; font-family: Baskerville; color:black; position: relative; margin-top:0%; } #home_button{ background-color:black; width:25px; height:10px; border-radius:50%; margin-top:26px; } .about_section{ position:relative; text-align: center; color:black; margin-top: 10%; font-family:Baskerville; font-size: 30px; line-height: 3rem; } #italic{ font-size: 50px; margin-top: 5%; } #about_2{ margin-top: 5%; } .traditional_section{ background-color:rgba(36, 2, 56, 0.45); width: 848px; height: 524px; margin-left: 21%; margin-top:25%; } .ornamented_section{ background-color: rgba(32, 6, 48, 0.73); width: 848px; height: 524px; margin-left: 21%; margin-top: 20%; } #traditional_image{ position: absolute; margin-left:-3%; margin-top:-5%; } #ornamented_image{ position:absolute; margin-left: -25%; margin-top: -5%; } #traditional_image:hover, #ornamented_image:hover{ width:425px; } #traditional_title{ position:absolute; font-size: 100px; margin-left: 20%; margin-top: -17%; padding:10%; } #traditional_p{ font-family:Baskerville; margin-top: 8%; margin-right:2%; font-size: 25px; float: right; text-align: right; line-height: 50px; } #ornamented_title{ position:absolute; font-size: 100px; margin-left: -8%; margin-top: -20%; color:black; padding: 3%; } #ornamented_p{ font-family:Baskerville; color:white; margin-left: 2%; margin-top:33%; font-size: 25px; float: left; text-align: left; line-height: 50px; } #lorem_1{ margin-top: 12%; margin-right: 3%; } #lorem_2{ color:white; margin-top: -6%; margin-left: 3%; } #custom_button{ position:absolute; background-color: rgb(85, 10, 34); width: 468.11px; height: 133px; margin-top: 10%; margin-left: 33%; border-radius: 15px; } #custom_p{ font-family:Baskerville; color:white; font-size: 40px; margin-left: 31%; margin-top:10%; } #review_section{ position: relative; } #quote1_parallax{ position:absolute; z-index:-1; margin-left:56.6%; margin-top:-60%; } #review_1{ margin-top: 40%; } #review_2{ margin-top:-5%; margin-left: 62%; z-index:3; } #review_3{ margin-top:0%; margin-left: 3%; } #quote2_parallax{ position:absolute; z-index:-1; margin-left:23%; margin-top:-40%; } #quote_mark_2{ width:450px; } #review_4{ margin-top: 15%; margin-left: 52%; } #review_1_p,#review_2_p,#review_3_p,#review_4_p{ font-family: Baskerville; font-size: 48px; color:black; } #review_1_p{ width:50%; text-align: center; margin-left:25%; margin-top:0%; z-index:1; } #review1_parallax{ position:absolute; margin-left:0%; margin-top:-60%; } #review_2_p{ margin-left:55%; margin-top:0%; } #review2_parallax{ margin-top:-40%; } #review_3_p{ width:50%; text-align: center; margin-left:35%; margin-top:0%; } #review3_parallax{ margin-top:-70%; } #review_4_p{ margin-left: 43%; margin-top: 0%; } #review4_parallax{ margin-top: -60%; } #contact_section{ position:relative; background-color: black; width: 1445px; height: 672px; margin-top: 15%; margin-bottom:-2%; } #contact_title{ position:absolute; font-family: Baskerville; color:white; font-size:40px; position:absolute; margin-top: 5%; margin-left:40%; } #first_name_input{ position:absolute; background-color: black; border: black; border-bottom: 2px solid white; color:white; font-size:2rem; margin-left:6%; margin-top:10%; 
width: 35%; padding:20px; } #first_name_input::placeholder, #last_name_input::placeholder, #email_input::placeholder { transform: translateY(15px); } #last_name_input{ position:absolute; background-color: black; border: black; border-bottom: 2px solid white; color:white; font-size:2rem; margin-left:50%; margin-top:10%; width:41%; padding:20px; } #email_input{ position:absolute; background-color: black; border: black; border-bottom: 2px solid white; color:white; font-size:2rem; margin-left:6%; margin-top:16%; width: 85%; padding:20px; } textarea{ position:absolute; border:1px solid white; background-color:black; color:white; font-size: 1.5rem; margin-top:25%; margin-left:6%; height:20%; padding:5%; width:70%; } #first_name_input:focus, #last_name_input:focus, #email_input:focus, textarea:focus { outline: none; } #submit_button{ position:relative; transform: rotate(90deg); background-color:black; border-color: black; color:white; width: 19.5%; height:15%; margin-left:81%; margin-top:31.5%; } #submit_button:hover{ background-color: white; color:black; } #SEND{ font-family: Baskerville; transform:rotate(270deg); font-size: 30px; margin-top:-89.5%; } #S,#E,#N,#D{ width: 30px; } #S,#E,#N{ font-size:34px; } #about_top, #traditional_top{ width: 0%; height:10%; background-color:blue; } #hide{ display: ''; } #unhide{ position:absolute; visibility: hidden; color:white; margin-top:15%; margin-left:30%; } #thank_you{ font-size: 50px; } #response{ font-family: Baskerville; font-size: 40px; margin-left: -8%; width:100%; } js //fadein affect after image loads const introimg1 = document.getElementById('introimg_1'); const introimg2 = document.getElementById('introimg_2'); const introimg3 = document.getElementById('introimg_3'); introimg1.onload= eventHandler; function eventHandler(){ return (introimg1.style.animation, introimg2.style.animation, introimg2.style.animation); } //changing the quote when home button clicked const beginningQuote = document.getElementById("beginning_quote"); const endingQuote = document.getElementById("ending_quote"); const fadeIn1= document.getElementById("fadein_1"); const fadeIn2= document.getElementById("fadein_2"); let eventTarget = document.getElementById("home_button"); addEventListener('click', function() { beginningQuote.innerHTML= '"We judge our books...'; endingQuote.innerHTML='by their covers"'; endingQuote.style.marginLeft='61%'; }); //parallax effect const review1= document.getElementById('review_1_p'); const review2= document.getElementById('review_2_p'); const review3= document.getElementById('review_3_p'); const review4= document.getElementById('review_4_p'); const topQuote= document.getElementById('quote_mark_1'); const bottomQuote= document.getElementById('quote_mark_2'); window.addEventListener('scroll', function() { let value = (window.scrollY); review1.style.marginTop = value*0.15 + 'px'; review2.style.marginTop = value*0.15 + 'px'; review3.style.marginTop = value*0.15 + 'px'; review4.style.marginTop = value*0.15 + 'px'; topQuote.style.marginTop = value*0.1 + 'px'; bottomQuote.style.marginTop = value*0.1 + 'px'; }) //"Thank you" after sending message in contact section const form= document.getElementById('contact_section'); const button= document.getElementById('submit'); const hide = document.getElementById('hide'); const unhide = document.getElementById('unhide'); addEventListener('submit', (event) => { hide.style.display = 'none'; unhide.style.visibility ='visible'; } ); A: It looks like the issue with the 'contact' link not working properly might be due 
to the fact that the href attribute of the <a> tag is pointing to an element with the id of contact_title, but there is no element with that id on the page. In order to fix this, you will need to either add an element with the id of contact_title to the page, or update the href attribute of the 'contact' link to point to a different element on the page. For example, you could add a <div> element with the id of contact_title to the page, just before the element with the class of contact_section, like this: <div id="contact_title"></div> <div class="contact_section" id="contact_section"> ... </div> Then, you could update the href attribute of the 'contact' link to point to this new element, like this: <a href="#contact_title" id="contact"><p>Contact</p></a> This should fix the issue and allow the 'contact' link to scroll to the correct position on the page.
Anchor tag not working for one navbar element
All navbar links work the way I like them to, except 'contact'. I am wondering whether it is to do with the parallax effect? I would need to click 'contact' three times in the navbar to get it to the contact section. I've tried adding the div tag at the top of the contact title like I have done with the about and service section. I've also tried to anchor different things surrounding the contact section, but nothing. I've pasted my entire page so someone can visually see what's going on and to see what could be the problem. Happy Hunting! <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <link rel="stylesheet" href="style.css"> <title id="title">Fantasy Book Covers</title> </head> <body> <div class="black_background" id="black_background"> <div id="quote"> <div id="fadein_1"> <p id="beginning_quote">"Each touch...</p> </div> <div id="fadein_2"> <p id="ending_quote">brings the magic to life"</p> </div> </div> </div> <div id="intro_images"> <img id="introimg_1" src="https://www.dropbox.com/s/yh6apdncisdir7j/Pasted%20Graphic%2018.new.png?dl=1"/> <img id="introimg_2" src="https://www.dropbox.com/s/4tk8mnv24s8ulzc/middle%20picture.new.png?dl=1"/> <img id="introimg_3" src="https://www.dropbox.com/s/q9kgm0rjfcbfawy/Pasted%20Graphic%2017.new.png?dl=1"/> </div> <div id="nav_bar"> <a href="#about_top" id="about"><p>About</p></a> <a href="#traditional_top" id="service"><p>Service</p></a> <a href="#black_background" id="home_button"><div></div></a> <a href="FAQs_page/index.html" id="faqs"><p >FAQs</p></a> <a href="#contact_title" id="contact"><p>Contact</p></a> </div> <div id="about_top"></div> <div class="about_section" id="about_section"> <p>We believe as fantasy-book enthusiasts that to truly experience the magic of a book, one must “feel” its essence. Our job is to transport you to magical worlds with our</p> <p id="italic"> <i>handmade, leather book covers.</i></p> <p id="about_2">Choose any fantasy-based book, from any author, and we will bring their story to life. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p> </div> <div id="traditional_top"></div> <div class="traditional_section" id="traditional_section"> <a href="Traditional_product_page/index.html" target="_blank" ><img id="traditional_image" src="https://www.dropbox.com/s/a7kmy309tc0sb0a/hex.small_main.png?dl=1"/></a> <div id="traditional_p"> <p id="traditional_title">Traditional</p> <p id="lorem_1">Lorem ipsum dolor sit amet,consectetur <br> adipiscingelit, sed do eiusmod tempor<br>incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.Duis aute irure dolor in reprehenderit in voluptate velit esse cillum</p> </div> </div> <div class="ornamented_section"> <a href="Ornamented_product_page/index.html" target="_blank"><img id="ornamented_image" src="https://www.dropbox.com/s/7pvg6h0q66zj26d/lapis.small_main.png?dl=1"/></a> <div id="ornamented_p"> <p id="ornamented_title">Ornamented</p> <p id="lorem_2">Lorem ipsum dolor sitamet,consectetur<br> adipiscing elit, sed do eiusmod tempor<br>incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p> </div> </div> <div id="custom_button"> <p id="custom_p">CUSTOM</p> </div> <div id="review_section"> <img id="review_1" src="https://www.dropbox.com/s/kh56ws33vptqv5b/Pasted%20Graphic.1.png?dl=1"/> <div id="review1_parallax"> <p id="review_1_p">“Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor inliqua”</p> </div> <div id="quote1_parallax"> <img id="quote_mark_1" src="https://www.dropbox.com/s/qrsrhsz3q84dh4y/top_quote_mark.png?dl=1"/> </div> <img id="review_2" src="https://www.dropbox.com/s/ulkcvgxp4v0ugg4/Pasted%20Graphic.2.png?dl=1"/> <div id="review2_parallax"> <p id="review_2_p">“Lorem ipsum dolor sit amet, consectetur adipiscing elit.”</p> </div> <img id="review_3" src="https://www.dropbox.com/s/am9g54a8i36wnd0/Pasted%20Graphic.3.png?dl=1"/> <div id="review3_parallax"> <p id="review_3_p">“Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor inliqua”</p> </div> <div id="quote2_parallax"> <img id="quote_mark_2" src="https://www.dropbox.com/s/m253oh24ta5qpc9/bottom_quote_mark.png?dl=1"/> </div> <img id="review_4" src="https://www.dropbox.com/s/avjhb64toauq1mq/Pasted%20Graphic.4.png?dl=1"/> <div id="review4_parallax"> <p id="review_4_p">“Lorem ipsum dolor sit ame!!!!.”</p> </div> </div> <form id="contact_section"> <div id="unhide"> <p id="thank_you">Thank you for contacting us</p> <p id="response">We will get back to you as soon as possible</p> </div> <div id="hide"> <p id="contact_title">CONTACT</p> <input id="first_name_input" type="text" placeholder="First Name" required></input> <input id="last_name_input" type="text" placeholder="Last Name" required></input> <input id="email_input" type="email" placeholder="Email" required></input> <textarea required></textarea> <button type="submit" id="submit_button"> <div id="SEND"> <p id="S">S</p> <p id="E">E</p> <p id="N">N</p> <p id="D">D</p> </div> </button> </div> </form> </body> <script src='main.js'></script> </html> css: body{ margin:0; padding:0; overflow-x:hidden; } .black_background{ position:relative; margin-left:-2%; margin-top:-1%; width: 1461px; height: 365px; background: #000000; } #quote{ position:absolute; margin-top: 1%; width:100%; } #beginning_quote{ margin-left: 20%; margin-top: 8%; font-family: Baskerville; font-size: 48px; color: #995DBE; animation: animate; } #beginning_quote::first-letter{ font-size: 70px; } #ending_quote{ margin-left: 50%; margin-top: -3%; font-family: Baskerville; font-size: 48px; color: #DDA5FE; } #fadein_1{ animation: fadeIn 4s; } @keyframes fadeIn{ 0%{opacity:0;} 100%{opacity:1;} } #fadein_2{ animation: fadeIn 4s forwards; animation-delay: 1.5s; opacity: 0%; } @keyframes fadeIn{ 0%{opacity:0;} 100%{opacity:1;} } #intro_images{ position:relative; display:flex; justify-content: space-between; margin-left: 4%; margin-right: 4%; margin-top: 4%; } 
#introimg_1, #introimg_2, #introimg_3{ animation: fadeIn 4s forwards; opacity: 0%; } #nav_bar{ position: relative; margin-top: 10%; margin-left:-0.5rem; width:1445px; display: flex; justify-content: space-evenly; background-color:white; height:13%; position:sticky; top:0; z-index:3; } #about, #service, #faqs, #contact{ text-decoration: none; font-size: 20px; font-family: Baskerville; color:black; position: relative; margin-top:0%; } #home_button{ background-color:black; width:25px; height:10px; border-radius:50%; margin-top:26px; } .about_section{ position:relative; text-align: center; color:black; margin-top: 10%; font-family:Baskerville; font-size: 30px; line-height: 3rem; } #italic{ font-size: 50px; margin-top: 5%; } #about_2{ margin-top: 5%; } .traditional_section{ background-color:rgba(36, 2, 56, 0.45); width: 848px; height: 524px; margin-left: 21%; margin-top:25%; } .ornamented_section{ background-color: rgba(32, 6, 48, 0.73); width: 848px; height: 524px; margin-left: 21%; margin-top: 20%; } #traditional_image{ position: absolute; margin-left:-3%; margin-top:-5%; } #ornamented_image{ position:absolute; margin-left: -25%; margin-top: -5%; } #traditional_image:hover, #ornamented_image:hover{ width:425px; } #traditional_title{ position:absolute; font-size: 100px; margin-left: 20%; margin-top: -17%; padding:10%; } #traditional_p{ font-family:Baskerville; margin-top: 8%; margin-right:2%; font-size: 25px; float: right; text-align: right; line-height: 50px; } #ornamented_title{ position:absolute; font-size: 100px; margin-left: -8%; margin-top: -20%; color:black; padding: 3%; } #ornamented_p{ font-family:Baskerville; color:white; margin-left: 2%; margin-top:33%; font-size: 25px; float: left; text-align: left; line-height: 50px; } #lorem_1{ margin-top: 12%; margin-right: 3%; } #lorem_2{ color:white; margin-top: -6%; margin-left: 3%; } #custom_button{ position:absolute; background-color: rgb(85, 10, 34); width: 468.11px; height: 133px; margin-top: 10%; margin-left: 33%; border-radius: 15px; } #custom_p{ font-family:Baskerville; color:white; font-size: 40px; margin-left: 31%; margin-top:10%; } #review_section{ position: relative; } #quote1_parallax{ position:absolute; z-index:-1; margin-left:56.6%; margin-top:-60%; } #review_1{ margin-top: 40%; } #review_2{ margin-top:-5%; margin-left: 62%; z-index:3; } #review_3{ margin-top:0%; margin-left: 3%; } #quote2_parallax{ position:absolute; z-index:-1; margin-left:23%; margin-top:-40%; } #quote_mark_2{ width:450px; } #review_4{ margin-top: 15%; margin-left: 52%; } #review_1_p,#review_2_p,#review_3_p,#review_4_p{ font-family: Baskerville; font-size: 48px; color:black; } #review_1_p{ width:50%; text-align: center; margin-left:25%; margin-top:0%; z-index:1; } #review1_parallax{ position:absolute; margin-left:0%; margin-top:-60%; } #review_2_p{ margin-left:55%; margin-top:0%; } #review2_parallax{ margin-top:-40%; } #review_3_p{ width:50%; text-align: center; margin-left:35%; margin-top:0%; } #review3_parallax{ margin-top:-70%; } #review_4_p{ margin-left: 43%; margin-top: 0%; } #review4_parallax{ margin-top: -60%; } #contact_section{ position:relative; background-color: black; width: 1445px; height: 672px; margin-top: 15%; margin-bottom:-2%; } #contact_title{ position:absolute; font-family: Baskerville; color:white; font-size:40px; position:absolute; margin-top: 5%; margin-left:40%; } #first_name_input{ position:absolute; background-color: black; border: black; border-bottom: 2px solid white; color:white; font-size:2rem; margin-left:6%; margin-top:10%; 
width: 35%; padding:20px; } #first_name_input::placeholder, #last_name_input::placeholder, #email_input::placeholder { transform: translateY(15px); } #last_name_input{ position:absolute; background-color: black; border: black; border-bottom: 2px solid white; color:white; font-size:2rem; margin-left:50%; margin-top:10%; width:41%; padding:20px; } #email_input{ position:absolute; background-color: black; border: black; border-bottom: 2px solid white; color:white; font-size:2rem; margin-left:6%; margin-top:16%; width: 85%; padding:20px; } textarea{ position:absolute; border:1px solid white; background-color:black; color:white; font-size: 1.5rem; margin-top:25%; margin-left:6%; height:20%; padding:5%; width:70%; } #first_name_input:focus, #last_name_input:focus, #email_input:focus, textarea:focus { outline: none; } #submit_button{ position:relative; transform: rotate(90deg); background-color:black; border-color: black; color:white; width: 19.5%; height:15%; margin-left:81%; margin-top:31.5%; } #submit_button:hover{ background-color: white; color:black; } #SEND{ font-family: Baskerville; transform:rotate(270deg); font-size: 30px; margin-top:-89.5%; } #S,#E,#N,#D{ width: 30px; } #S,#E,#N{ font-size:34px; } #about_top, #traditional_top{ width: 0%; height:10%; background-color:blue; } #hide{ display: ''; } #unhide{ position:absolute; visibility: hidden; color:white; margin-top:15%; margin-left:30%; } #thank_you{ font-size: 50px; } #response{ font-family: Baskerville; font-size: 40px; margin-left: -8%; width:100%; } js //fadein affect after image loads const introimg1 = document.getElementById('introimg_1'); const introimg2 = document.getElementById('introimg_2'); const introimg3 = document.getElementById('introimg_3'); introimg1.onload= eventHandler; function eventHandler(){ return (introimg1.style.animation, introimg2.style.animation, introimg2.style.animation); } //changing the quote when home button clicked const beginningQuote = document.getElementById("beginning_quote"); const endingQuote = document.getElementById("ending_quote"); const fadeIn1= document.getElementById("fadein_1"); const fadeIn2= document.getElementById("fadein_2"); let eventTarget = document.getElementById("home_button"); addEventListener('click', function() { beginningQuote.innerHTML= '"We judge our books...'; endingQuote.innerHTML='by their covers"'; endingQuote.style.marginLeft='61%'; }); //parallax effect const review1= document.getElementById('review_1_p'); const review2= document.getElementById('review_2_p'); const review3= document.getElementById('review_3_p'); const review4= document.getElementById('review_4_p'); const topQuote= document.getElementById('quote_mark_1'); const bottomQuote= document.getElementById('quote_mark_2'); window.addEventListener('scroll', function() { let value = (window.scrollY); review1.style.marginTop = value*0.15 + 'px'; review2.style.marginTop = value*0.15 + 'px'; review3.style.marginTop = value*0.15 + 'px'; review4.style.marginTop = value*0.15 + 'px'; topQuote.style.marginTop = value*0.1 + 'px'; bottomQuote.style.marginTop = value*0.1 + 'px'; }) //"Thank you" after sending message in contact section const form= document.getElementById('contact_section'); const button= document.getElementById('submit'); const hide = document.getElementById('hide'); const unhide = document.getElementById('unhide'); addEventListener('submit', (event) => { hide.style.display = 'none'; unhide.style.visibility ='visible'; } );
[ "It looks like the issue with the 'contact' link not working properly might be due to the fact that the href attribute of the tag is pointing to an element with the id of contact_title, but there is no element with that id on the page.\nIn order to fix this, you will need to either add an element with the id of contact_title to the page, or update the href attribute of the 'contact' link to point to a different element on the page.\nFor example, you could add a element with the id of contact_title to the page, just before the element with the class of contact_section, like this:\n<div id=\"contact_title\"></div>\n<div class=\"contact_section\" id=\"contact_section\">\n...\n</div>\n\nThen, you could update the href attribute of the 'contact' link to point to this new element, like this:\n<a href=\"#contact_title\" id=\"contact\"><p>Contact</p></a>\n\nThis should fix the issue and allow the 'contact' link to scroll to the correct position on the page.\n" ]
[ 0 ]
[]
[]
[ "anchor", "css", "html", "javascript", "navbar" ]
stackoverflow_0074665660_anchor_css_html_javascript_navbar.txt
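If the hash jump keeps landing in the wrong place because the parallax script moves elements after the page scrolls, one hypothetical workaround is to scroll programmatically instead of relying on the anchor:

// Sketch: intercept the click and scroll to the target explicitly.
document.getElementById('contact').addEventListener('click', (event) => {
  event.preventDefault();
  document.getElementById('contact_title').scrollIntoView({ behavior: 'smooth' });
});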
Q: What is the difference between Close(); and Application.Exit();? This isn't really a big problem but I have been using the Close(); statement to well make the app close or have a button that closes the program with this function. However recently I saw someone use Application.Exit(); to do the same I guess. I just wanna know how these two are different. Or are they the same? I haven't tried the Application.Exit(); statement yet. I just wanna know the difference because I am basically just a beginner. A: It has been 10 years since I last used Windows Forms, but as far as I remember: Close() closes the current form. Only if the current form is the application's main form will this cause the application to exit. Application.Exit() can be called anywhere, in any form, and it causes application termination.
What is the difference between Close(); and Application.Exit();?
This isn't really a big problem, but I have been using the Close(); statement to, well, make the app close, or to have a button that closes the program with this function. However, recently I saw someone use Application.Exit(); to do the same, I guess. I just wanna know how these two are different. Or are they the same? I haven't tried the Application.Exit(); statement yet. I just wanna know the difference because I am basically just a beginner.
[ "It has been 10 years that I have not used windows forms but as far as I remember: Close, closes the current form. Just if the current form is the application main form, this will cause the application to exit. Application.Exit() could be called anywhere in any form and it causes application termination.\n" ]
[ 1 ]
[]
[]
[ "c#", "exit", "winforms" ]
stackoverflow_0074665654_c#_exit_winforms.txt
Q: I can't upload images in WordPress When I upload an image to the WordPress media library this message appears: "post-processing of the image failed likely because the server is busy or does not have enough resources. uploading a smaller image may help. suggested maximum size is 2500 pixels." I have tried some methods to solve it, like increasing the maximum upload file size and changing the PHP version. I'm using HostGator as a web host. A: Use the WP filter: add_filter( 'big_image_size_threshold', '__return_false' ); A: I faced the same issue and in my case it was related to an Nginx 413 error response: Request Entity Too Large And yes, the image was more than the 2M Nginx default. client_max_body_size 2M; -> client_max_body_size 64M; should be set in the server or location section of the .conf file. WordPress was running in Docker Compose and that was the root cause. A: If this error occurs on your Local by Flywheel desktop app, I was able to fix it by changing the web server from Nginx to Apache. A: What fixed this error for me was updating an image-related plugin called Images to WebP from version 1.9 to the latest, which is 4.1 at the time of writing. And everything worked just fine afterwards. I found the solution here: https://support.nova.bi/d/3-wordpress-error-while-uploading-images. I had tried everything including increasing PHP allocated memory, restarting my server, checking WordPress permissions, modifying the current theme's functions.php file... but nothing worked. Only after I updated the plugin did the upload work again. Hope this helps someone.
I can't upload images in WordPress
When I upload an image to the WordPress media library this message appears: "post-processing of the image failed likely because the server is busy or does not have enough resources. uploading a smaller image may help. suggested maximum size is 2500 pixels." I have tried some methods to solve it, like increasing the maximum upload file size and changing the PHP version. I'm using HostGator as a web host.
[ "use wp filter\nadd_filter( 'big_image_size_threshold', '__return_false' );\n", "I faced the same issue and in my case it was related to nginx\n413 Error response \nRequest Entity too large \n\nand yes image was more than 2M by default in NGinx configuration\nclient_max_body_size 2M; -> client_max_body_size 64M;\nshould be set in server or location section of .conf file\nWordpress was running in docker compose and that was the root cause\n", "If this error occurs on your Local by Flywheel desktop app I was able to fix it by changing the web server from nginx to Apache.\n\n", "What fixed this error for me was updating an image related plugin called Images to WebP from version 1.9 to the latest which is 4.1 at the time of writing. And everything worked just fine afterwards. I found the solution here https://support.nova.bi/d/3-wordpress-error-while-uploading-images. I had tried everything including increasing PHP allocated memory, restarting my server, checking Wordpress permissions, modifying current theme functions.php file... but nothing worked. Only after I updated the plugin the upload worked again.\nHope this helps someone.\n" ]
[ 2, 2, 0, 0 ]
[ "There are a few solutions to this issue. First, are you running PHP 7.3 and WordPress 5.3? You may try downgrading to PHP 7.2.\nNext, you may try increasing your site’s memory limit.\nIf neither of these works you may attempt to work around it.\n\nGo to your media library and select any picture. Preferably one that you managed to upload successfully. Click Edit and look at where the picture is located. keep this open or remember it.\n\nConnect to your server via an FTP client and navigate to this folder where your images are stored. Upload your large photographs to this folder.\n\nThey won’t show up in your media library yet. You need to use a plugin called 'Add From Server'. Download that and install/activate it (by Dion Hulse).\n\nBackup your WordPress installation (just in case).\n\nFinally, hover over Media and then select the new option = Add From Server\n\nNavigate to the folder where you uploaded your photos. Make sure you select just the photos you want to import into your Media Library (although you could delete the duplicates if you make a mistake).\n\nAfter you click go, it’ll take some time, but don't cancel it or refresh the page. Just wait, and you’ll get a notification on the same screen when it's done. Along with a new list of imported files (including size variations if applicable).\n\nNow your photos will be in your media library with ‘scaled’ at the end. You can now use these in your posts and they will work just fine.\n\n\nNOTES\nThe files are imported to wherever you select them from. So it’s important that you put them in the same folder as the rest of your active pictures BEFORE you import them. Otherwise, they’ll show up in the media library, but won’t actually work on your website (took a while to figure this out).\nI’d recommend importing 10-15 pictures at a time if they are large. Any more and you risk being signed out of your cPanel due to inactivity and it may break your installation (maybe, hence the backup).\nAlso, you may look through this wordpress.org thread as many solutions are discussed in the thread. https://wordpress.org/support/topic/unable-to-upload-images-67/page/5/\nLet me know if this helps!\n" ]
[ -1 ]
[ "image", "wordpress" ]
stackoverflow_0067670924_image_wordpress.txt
Q: How to draw animation in pyopengltk framework I am using pyopengl, tkinter, pyopengltk to draw a Rubik's cube and am going to implement a Rubik's cube recovery animation; I have implemented displaying a Rubik's cube in tkinter with the help of this question: How to rotate slices of a Rubik's Cube in python PyOpenGL? But I can't make the cube animation run step by step now; how can I do it, please? Now it can only keep repeating the same action import tkinter as tk from OpenGL.GL import * from OpenGL.GLU import * from OpenGL.GLUT import * from pyopengltk import OpenGLFrame vertices = ( (1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1), (1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1) ) edges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7)) surfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6)) colors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1)) rot_cube_map = {'K_UP': (-1, 0), 'K_DOWN': (1, 0), 'K_LEFT': (0, -1), 'K_RIGHT': (0, 1)} rot_slice_map = { 'K_1': (0, 0, 1), 'K_2': (0, 1, 1), 'K_3': (0, 2, 1), 'K_4': (1, 0, 1), 'K_5': (1, 1, 1), 'K_6': (1, 2, 1), 'K_7': (2, 0, 1), 'K_8': (2, 1, 1), 'K_9': (2, 2, 1), 'K_F1': (0, 0, -1), 'K_F2': (0, 1, -1), 'K_F3': (0, 2, -1), 'K_F4': (1, 0, -1), 'K_F5': (1, 1, -1), 'K_F6': (1, 2, -1), 'K_F7': (2, 0, -1), 'K_F8': (2, 1, -1), 'K_F9': (2, 2, -1), } class Cube(): def __init__(self, id, N, scale): self.N = 3 self.scale = scale self.init_i = [*id] self.current_i = [*id] # padding: one variable value stands in for several self.rot = [[1 if i == j else 0 for i in range(3)] for j in range(3)] def isAffected(self, axis, slice, dir): return self.current_i[axis] == slice def update(self, axis, slice, dir): if not self.isAffected(axis, slice, dir): return i, j = (axis + 1) % 3, (axis + 2) % 3 for k in range(3): self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir self.current_i[i], self.current_i[j] = ( self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j], self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i]) def transformMat(self): scaleA = [[s * self.scale for s in a] for a in self.rot] scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i] return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1] def draw(self, col, surf, vert, animate, angle, axis, slice, dir): glPushMatrix() if animate and self.isAffected(axis, slice, dir): glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)]) # rotate around this coordinate axis glMultMatrixf(self.transformMat()) glBegin(GL_QUADS) for i in range(len(surf)): glColor3fv(colors[i]) for j in surf[i]: glVertex3fv(vertices[j]) glEnd() glPopMatrix() class mycube(): def __init__(self, N, scale): self.N = N cr = range(self.N) self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr] # create the 27 cubes def maindd(self): for cube in self.cubes: cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0) class GLFrame(OpenGLFrame): def initgl(self): self.rota = 0 self.count = 0 self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0) self.animate1, self.animate_ang, self.animate_speed = False, 0, 0.5 self.action = (0, 0, 0) glClearColor(0.0, 0.0, 0.0, 0.0) # black background # glViewport(400, 400, 200, 200) # specifies the lower-left corner of the viewport glEnable(GL_DEPTH_TEST) # enable depth testing for correct occlusion glDepthFunc(GL_LEQUAL) # set the depth test function (GL_LEQUAL is just one option) glMatrixMode(GL_PROJECTION) glLoadIdentity() # reset to the original coordinates gluPerspective(30, self.width / self.height, 0.1, 50.0) def redraw(self): self.N = 3 cr = range(self.N) self.cubes = [Cube((x, y, z), self.N, 
1.5) for x in cr for y in cr for z in cr] self.animate, self.action = True, rot_slice_map['K_1'] self.ang_x += self.rot_cube[0] * 2 self.ang_y += self.rot_cube[1] * 2 glMatrixMode(GL_MODELVIEW) glLoadIdentity() glTranslatef(0, 0, -40) glRotatef(self.ang_y, 0, 1, 0) glRotatef(self.ang_x, 1, 0, 0) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) if self.animate1: if self.animate_ang >= 90: for cube in self.cubes: cube.update(*self.action) self.animate1, self.animate_ang = False, 0 for cube in self.cubes: cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action) if self.animate: self.animate_ang += self.animate_speed class App(tk.Tk): def __init__(self): super().__init__() self.title('Pineapple') self.glframe = GLFrame(self, width=800, height=600) self.glframe.pack(expand=True, fill=tk.BOTH) # self.glframe.focus_displayof() # self.glframe.animate = True App().mainloop() I do this by calling this statement twice: self.animate, self.action = True, rot_slice_map['K_1'] I expect it to be executed step by step, but it only executes the last statement. There is very little information about pyopengltk on the internet, and I am still a newbie, so I would like to get help. A: You must implement the keyboard events similarly to the Pygame implementation described in the answer to How to rotate slices of a Rubik's Cube in python PyOpenGL?. Remove: self.animate, self.action = True, rot_slice_map['K_1'] Change the key mapping to a mapping to be used with tkinter rot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)} rot_slice_map = { '1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1), '6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1), 'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1), 'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1), } Create the 27 cubes in GLFrame.initgl and set self.animate = True. self.animate is the flag that controls the animation loop of the OpenGLFrame. The animation of the Rubik's Cube is controlled with animate1Cube: class GLFrame(OpenGLFrame): def initgl(self): self.animate = True # [...] self.N = 3 cr = range(self.N) self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr] Add the callback methods for the keyboard events: class GLFrame(OpenGLFrame): # [...] def keydown(self, event): if event.keysym in rot_slice_map: self.animate1Cube, self.action = True, rot_slice_map[event.keysym] if event.keysym in rot_cube_map: self.rot_cube = rot_cube_map[event.keysym] def keyup(self, event): if event.keysym in rot_cube_map: self.rot_cube = (0, 0) Set the keyboard callbacks: class App(tk.Tk): def __init__(self): super().__init__() self.title('rubiks cube') self.glframe = GLFrame(self, width=800, height=600) self.bind("<KeyPress>", self.glframe.keydown) self.bind("<KeyRelease>", self.glframe.keyup) self.glframe.pack(expand=True, fill=tk.BOTH) The animation of the Rubik's Cube depends on animate1Cube, but not on animate: class GLFrame(OpenGLFrame): # [...] def redraw(self): # [...] 
if self.animate1Cube: if self.animate_ang >= 90: for cube in self.cubes: cube.update(*self.action) self.animate1Cube, self.animate_ang = False, 0 for cube in self.cubes: cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action) if self.animate1Cube: self.animate_ang += self.animate_speed Complete and working example: import tkinter as tk from OpenGL.GL import * from OpenGL.GLU import * from OpenGL.GLUT import * from pyopengltk import OpenGLFrame vertices = ( (1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1), (1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1) ) edges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7)) surfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6)) colors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1)) rot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)} rot_slice_map = { '1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1), '6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1), 'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1), 'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1), } class Cube(): def __init__(self, id, N, scale): self.N = 3 self.scale = scale self.init_i = [*id] self.current_i = [*id] # padding: one variable value stands in for several self.rot = [[1 if i == j else 0 for i in range(3)] for j in range(3)] def isAffected(self, axis, slice, dir): return self.current_i[axis] == slice def update(self, axis, slice, dir): if not self.isAffected(axis, slice, dir): return i, j = (axis + 1) % 3, (axis + 2) % 3 for k in range(3): self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir self.current_i[i], self.current_i[j] = ( self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j], self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i]) def transformMat(self): scaleA = [[s * self.scale for s in a] for a in self.rot] scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i] return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1] def draw(self, col, surf, vert, animate, angle, axis, slice, dir): glPushMatrix() if animate and self.isAffected(axis, slice, dir): glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)]) # rotate around this coordinate axis glMultMatrixf(self.transformMat()) glBegin(GL_QUADS) for i in range(len(surf)): glColor3fv(colors[i]) for j in surf[i]: glVertex3fv(vertices[j]) glEnd() glPopMatrix() class mycube(): def __init__(self, N, scale): self.N = N cr = range(self.N) self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr] # create the 27 cubes def maindd(self): for cube in self.cubes: cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0) class GLFrame(OpenGLFrame): def initgl(self): self.animate = True self.rota = 0 self.count = 0 self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0) self.animate1Cube, self.animate_ang, self.animate_speed = False, 0, 2 self.action = (0, 0, 0) glClearColor(0.0, 0.0, 0.0, 0.0) # black background # glViewport(400, 400, 200, 200) # specifies the lower-left corner of the viewport glEnable(GL_DEPTH_TEST) # enable depth testing for correct occlusion glDepthFunc(GL_LEQUAL) # set the depth test function (GL_LEQUAL is just one option) glMatrixMode(GL_PROJECTION) glLoadIdentity() # reset to the original coordinates gluPerspective(30, self.width / self.height, 0.1, 50.0) self.N = 3 cr = range(self.N) self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr] def keydown(self, event): if event.keysym in rot_slice_map: self.animate1Cube, self.action = 
True, rot_slice_map[event.keysym] if event.keysym in rot_cube_map: self.rot_cube = rot_cube_map[event.keysym] def keyup(self, event): if event.keysym in rot_cube_map: self.rot_cube = (0, 0) def redraw(self): self.ang_x += self.rot_cube[0] * 2 self.ang_y += self.rot_cube[1] * 2 glMatrixMode(GL_MODELVIEW) glLoadIdentity() glTranslatef(0, 0, -40) glRotatef(self.ang_y, 0, 1, 0) glRotatef(self.ang_x, 1, 0, 0) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) if self.animate1Cube: if self.animate_ang >= 90: for cube in self.cubes: cube.update(*self.action) self.animate1Cube, self.animate_ang = False, 0 for cube in self.cubes: cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action) if self.animate1Cube: self.animate_ang += self.animate_speed class App(tk.Tk): def __init__(self): super().__init__() self.title('rubiks cube') self.glframe = GLFrame(self, width=800, height=600) #self.bind("<Key>", self.glframe.key) self.bind("<KeyPress>", self.glframe.keydown) self.bind("<KeyRelease>", self.glframe.keyup) self.glframe.pack(expand=True, fill=tk.BOTH) # self.glframe.focus_displayof() # self.animate = True App().mainloop() If you want to animate the cube automatically, you need to drive the animation yourself instead of relying on keyboard events. Define a list of animations, e.g.: animation_list = ['1', '3', '5', 'F2'] Set self.action from the list, e.g.: class GLFrame(OpenGLFrame): # [...] def redraw(self): if not self.animate1Cube and animation_list: self.animate1Cube, self.action = True, rot_slice_map[animation_list[0]] del animation_list[0] # [...]
How to draw animation in pyopengltk framework
I am using pyopengl, tkinter, pyopengltk to draw a Rubik's cube and am going to implement a Rubik's cube recovery animation; I have implemented displaying a Rubik's cube in tkinter with the help of this question: How to rotate slices of a Rubik's Cube in python PyOpenGL? But I can't make the cube animation run step by step now; how can I do it, please? Now it can only keep repeating the same action import tkinter as tk from OpenGL.GL import * from OpenGL.GLU import * from OpenGL.GLUT import * from pyopengltk import OpenGLFrame vertices = ( (1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1), (1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1) ) edges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7)) surfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6)) colors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1)) rot_cube_map = {'K_UP': (-1, 0), 'K_DOWN': (1, 0), 'K_LEFT': (0, -1), 'K_RIGHT': (0, 1)} rot_slice_map = { 'K_1': (0, 0, 1), 'K_2': (0, 1, 1), 'K_3': (0, 2, 1), 'K_4': (1, 0, 1), 'K_5': (1, 1, 1), 'K_6': (1, 2, 1), 'K_7': (2, 0, 1), 'K_8': (2, 1, 1), 'K_9': (2, 2, 1), 'K_F1': (0, 0, -1), 'K_F2': (0, 1, -1), 'K_F3': (0, 2, -1), 'K_F4': (1, 0, -1), 'K_F5': (1, 1, -1), 'K_F6': (1, 2, -1), 'K_F7': (2, 0, -1), 'K_F8': (2, 1, -1), 'K_F9': (2, 2, -1), } class Cube(): def __init__(self, id, N, scale): self.N = 3 self.scale = scale self.init_i = [*id] self.current_i = [*id] # padding: one variable value stands in for several self.rot = [[1 if i == j else 0 for i in range(3)] for j in range(3)] def isAffected(self, axis, slice, dir): return self.current_i[axis] == slice def update(self, axis, slice, dir): if not self.isAffected(axis, slice, dir): return i, j = (axis + 1) % 3, (axis + 2) % 3 for k in range(3): self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir self.current_i[i], self.current_i[j] = ( self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j], self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i]) def transformMat(self): scaleA = [[s * self.scale for s in a] for a in self.rot] scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i] return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1] def draw(self, col, surf, vert, animate, angle, axis, slice, dir): glPushMatrix() if animate and self.isAffected(axis, slice, dir): glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)]) # rotate around this coordinate axis glMultMatrixf(self.transformMat()) glBegin(GL_QUADS) for i in range(len(surf)): glColor3fv(colors[i]) for j in surf[i]: glVertex3fv(vertices[j]) glEnd() glPopMatrix() class mycube(): def __init__(self, N, scale): self.N = N cr = range(self.N) self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr] # create the 27 cubes def maindd(self): for cube in self.cubes: cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0) class GLFrame(OpenGLFrame): def initgl(self): self.rota = 0 self.count = 0 self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0) self.animate1, self.animate_ang, self.animate_speed = False, 0, 0.5 self.action = (0, 0, 0) glClearColor(0.0, 0.0, 0.0, 0.0) # black background # glViewport(400, 400, 200, 200) # specifies the lower-left corner of the viewport glEnable(GL_DEPTH_TEST) # enable depth testing for correct occlusion glDepthFunc(GL_LEQUAL) # set the depth test function (GL_LEQUAL is just one option) glMatrixMode(GL_PROJECTION) glLoadIdentity() # reset to the original coordinates gluPerspective(30, self.width / self.height, 0.1, 50.0) def redraw(self): self.N = 3 cr = range(self.N) self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr] 
self.animate, self.action = True, rot_slice_map['K_1'] self.ang_x += self.rot_cube[0] * 2 self.ang_y += self.rot_cube[1] * 2 glMatrixMode(GL_MODELVIEW) glLoadIdentity() glTranslatef(0, 0, -40) glRotatef(self.ang_y, 0, 1, 0) glRotatef(self.ang_x, 1, 0, 0) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) if self.animate1: if self.animate_ang >= 90: for cube in self.cubes: cube.update(*self.action) self.animate1, self.animate_ang = False, 0 for cube in self.cubes: cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action) if self.animate: self.animate_ang += self.animate_speed class App(tk.Tk): def __init__(self): super().__init__() self.title('Pineapple') self.glframe = GLFrame(self, width=800, height=600) self.glframe.pack(expand=True, fill=tk.BOTH) # self.glframe.focus_displayof() # self.glframe.animate = True App().mainloop() I do this by calling this statement twice: self.animate, self.action = True, rot_slice_map['K_1'] I expect it to be executed step by step, but it only executes the last statement. There is very little information about pyopengltk on the internet, and I am still a newbie, so I would like to get help.
[ "You must implement the keyebord events similar as in the Pygame implementation decribed in the answer to How to rotate slices of a Rubik's Cube in python PyOpenGL?.\nRemove:\nself.animate, self.action = True, rot_slice_map['K_1']\nChange the key mapping to a mapping to be used with tkinter\nrot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}\nrot_slice_map = {\n '1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1),\n '6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1),\n 'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1),\n 'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1),\n}\n\nCreate the 27 cubes in GLFrame.initgl and set self.animate = True. self.animate is the fagae that controls the animation loop of the the OpenGLFrame. Tha naimation of the Rubik's Cube is controled with animate1Cube:\nclass GLFrame(OpenGLFrame):\n def initgl(self):\n self.animate = True\n \n # [...]\n\n self.N = 3\n cr = range(self.N)\n self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]\n\nAdd the callback methods for the keyboard event:\nclass GLFrame(OpenGLFrame):\n # [...]\n\n def keydown(self, event):\n if event.keysym in rot_slice_map:\n self.animate1Cube, self.action = True, rot_slice_map[event.keysym]\n if event.keysym in rot_cube_map:\n self.rot_cube = rot_cube_map[event.keysym]\n\n def keyup(self, event):\n if event.keysym in rot_cube_map:\n self.rot_cube = (0, 0)\n\nSet the keyboard callbacks:\nclass App(tk.Tk):\n def __init__(self):\n super().__init__()\n self.title('rubiks cube')\n self.glframe = GLFrame(self, width=800, height=600)\n self.bind(\"<KeyPress>\", self.glframe.keydown)\n self.bind(\"<KeyRelease>\", self.glframe.keyup)\n self.glframe.pack(expand=True, fill=tk.BOTH)\n\nThe animation of the Rubikc's Cube depends on animateCube, but not on animate:\nclass GLFrame(OpenGLFrame):\n # [...]\n\n def redraw(self):\n # [...]\n\n if self.animate1Cube:\n if self.animate_ang >= 90:\n for cube in self.cubes:\n cube.update(*self.action)\n self.animate1Cube, self.animate_ang = False, 0\n\n for cube in self.cubes:\n cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)\n if self.animate1Cube:\n self.animate_ang += self.animate_speed\n\n\nComplete and working example:\n\nimport tkinter as tk\nfrom OpenGL.GL import *\nfrom OpenGL.GLU import *\nfrom OpenGL.GLUT import *\nfrom pyopengltk import OpenGLFrame\n\nvertices = (\n (1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1),\n (1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1)\n)\nedges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7))\nsurfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6))\ncolors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1))\n\nrot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}\nrot_slice_map = {\n '1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1),\n '6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1),\n 'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1),\n 'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1),\n}\n\nclass Cube():\n def __init__(self, id, N, scale):\n self.N = 3\n self.scale = scale\n self.init_i = [*id]\n self.current_i = [*id] # 表示填充,一个变量值代替多个\n self.rot = [[1 if i == j else 0 for i in range(3)] for j in 
range(3)]\n\n def isAffected(self, axis, slice, dir):\n return self.current_i[axis] == slice\n\n def update(self, axis, slice, dir):\n\n if not self.isAffected(axis, slice, dir):\n return\n\n i, j = (axis + 1) % 3, (axis + 2) % 3\n for k in range(3):\n self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir\n\n self.current_i[i], self.current_i[j] = (\n self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j],\n self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i])\n\n def transformMat(self):\n scaleA = [[s * self.scale for s in a] for a in self.rot]\n scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i]\n return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1]\n\n def draw(self, col, surf, vert, animate, angle, axis, slice, dir):\n\n glPushMatrix()\n if animate and self.isAffected(axis, slice, dir):\n glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)]) # 围着这个坐标点旋转\n glMultMatrixf(self.transformMat())\n\n glBegin(GL_QUADS)\n for i in range(len(surf)):\n glColor3fv(colors[i])\n for j in surf[i]:\n glVertex3fv(vertices[j])\n glEnd()\n\n glPopMatrix()\n\n\nclass mycube():\n def __init__(self, N, scale):\n self.N = N\n cr = range(self.N)\n self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr] # 创建27\n\n def maindd(self):\n for cube in self.cubes:\n cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0)\n\nclass GLFrame(OpenGLFrame):\n def initgl(self):\n self.animate = True\n self.rota = 0\n self.count = 0\n\n self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0)\n self.animate1Cube, self.animate_ang, self.animate_speed = False, 0, 2\n self.action = (0, 0, 0)\n glClearColor(0.0, 0.0, 0.0, 0.0) # 背景黑色\n # glViewport(400, 400, 200, 200) # 指定了视口的左下角位置\n\n glEnable(GL_DEPTH_TEST) # 开启深度测试,实现遮挡关系\n glDepthFunc(GL_LEQUAL) # 设置深度测试函数(GL_LEQUAL只是选项之一)\n\n glMatrixMode(GL_PROJECTION) \n glLoadIdentity() # 恢复原始坐标\n gluPerspective(30, self.width / self.height, 0.1, 50.0)\n\n self.N = 3\n cr = range(self.N)\n self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]\n\n def keydown(self, event):\n if event.keysym in rot_slice_map:\n self.animate1Cube, self.action = True, rot_slice_map[event.keysym]\n if event.keysym in rot_cube_map:\n self.rot_cube = rot_cube_map[event.keysym]\n\n def keyup(self, event):\n if event.keysym in rot_cube_map:\n self.rot_cube = (0, 0)\n\n def redraw(self):\n self.ang_x += self.rot_cube[0] * 2\n self.ang_y += self.rot_cube[1] * 2\n\n glMatrixMode(GL_MODELVIEW)\n glLoadIdentity()\n glTranslatef(0, 0, -40)\n glRotatef(self.ang_y, 0, 1, 0)\n glRotatef(self.ang_x, 1, 0, 0)\n\n glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)\n\n if self.animate1Cube:\n if self.animate_ang >= 90:\n for cube in self.cubes:\n cube.update(*self.action)\n self.animate1Cube, self.animate_ang = False, 0\n\n for cube in self.cubes:\n cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)\n if self.animate1Cube:\n self.animate_ang += self.animate_speed\n\n\nclass App(tk.Tk):\n def __init__(self):\n super().__init__()\n self.title('rubiks cube')\n self.glframe = GLFrame(self, width=800, height=600)\n #self.bind(\"<Key>\", self.glframe.key)\n self.bind(\"<KeyPress>\", self.glframe.keydown)\n self.bind(\"<KeyRelease>\", self.glframe.keyup)\n self.glframe.pack(expand=True, fill=tk.BOTH)\n # self.glframe.focus_displayof()\n # self.animate = True\n\nApp().mainloop()\n\n\nIf you want to animate the cube automatically, you need to animate instead 
of keyboard events. Define a list of animationg. e.g.:\nanimation_list = ['1', '3', '5', 'F2']\n\nSet self.action from the list. e.g.:\nclass GLFrame(OpenGLFrame):\n # [...]\n\n def redraw(self):\n if not self.animate1Cube and animation_list:\n self.animate1Cube, self.action = True, rot_slice_map[animation_list[0]]\n del animation_list[0]\n\n # [...]\n\n" ]
[ 0 ]
[]
[]
[ "opengl", "pyopengl", "python", "tkinter" ]
stackoverflow_0074664263_opengl_pyopengl_python_tkinter.txt
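For readers who want to experiment with the scripted-animation idea from the answer above without any OpenGL or tkinter installed, here is a minimal, self-contained Python sketch of the same queue-and-angle state machine; the MoveAnimator name and its API are invented for illustration and are not part of pyopengltk.

class MoveAnimator:
    """Steps through queued slice moves, 90 degrees per move."""
    def __init__(self, moves, speed=2.0):
        self.moves = list(moves)   # e.g. ['1', '3', '5', 'F2']
        self.current = None        # move currently animating
        self.angle = 0.0
        self.speed = speed         # degrees advanced per frame

    def step(self):
        """Advance one frame; returns (move, angle) or None when idle."""
        if self.current is None:
            if not self.moves:
                return None
            self.current = self.moves.pop(0)
            self.angle = 0.0
        self.angle += self.speed
        if self.angle >= 90.0:
            done, self.current = self.current, None
            return (done, 90.0)    # caller should commit cube.update() here
        return (self.current, self.angle)

# usage: call anim.step() once per redraw and rotate the affected slice
anim = MoveAnimator(['1', '3', '5', 'F2'])
while (state := anim.step()) is not None:
    pass  # in redraw(): draw the cubes with the returned move and angle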
Q: 3D Delaunay triangulation: bad output (extra simplices appearing) I am using python3.11 to create the Delaunay triangulation of a point cloud with scipy.spatial.Delaunay and it is misbehaving by creating some extra faces. In the image below you can see a 3D scatter plot of the points. The image was created using the following few lines of code: import plotly.graph_objects as go fig = go.Figure() fig.add_scatter3d(x = puntos[:,0], y = puntos[:,1], z = puntos[:,2],mode='markers', marker=dict( size=1, color='rgb(0,0,0)', opacity=0.8 )) fig.update_layout(scene = dict(aspectmode = 'data')) fig.show() The data puntos can be downloaded as a csv file in this link. Now, as I said, I am interested in obtaining the Delaunay triangulation of that point cloud, for which the following piece of code is used. import numpy as np import pandas as pd from scipy.spatial import Delaunay import plotly.figure_factory as ff puntos = pd.read_csv('puntos.csv') puntos = puntos[['0', '1', '2']].to_numpy() tri = Delaunay(np.array([puntos[:,0], puntos[:,1]]).T) simplices = tri.simplices fig = ff.create_trisurf(x=puntos[:,0], y=puntos[:,1], z=puntos[:,2], simplices=simplices, aspectratio=dict(x=1, y=1, z=0.3)) fig.show() This produces the following image (the point cloud image and the triangulation image do not have exactly the same aspect ratio, but I find it sufficient this way): As you might see, the triangulation is creating some extra faces on the boundary of the surface, and that is repeated along the four sides of the boundary. Does anyone know why this happens and how I can solve it? Thank you in advance! A: Firstly, I congratulate you on such an exquisite example. I recommend you explore examples of the function Delaunay. The documentation exhibits several properties whose output may interest you. A: Referring to my comments for your original question... The extra simplices occur because some of your vertices are positioned inside, but near, the convex hull of the Delaunay triangulation. The code below adjusts the position of the problematic vertices by finding the nearest point on the convex hull. For this example, I use the Tinfour Software Library, which is written in Java. But you should be able to adapt the ideas to Python if you wish to do so.
import java.io.File; import java.io.IOException; import java.util.ArrayList; import java.util.Comparator; import java.util.List; import org.tinfour.common.IIncrementalTin; import org.tinfour.common.IQuadEdge; import org.tinfour.common.Vertex; import org.tinfour.semivirtual.SemiVirtualIncrementalTin; import org.tinfour.utils.loaders.VertexReaderText; public class AdjustEdgePoints { public static void main(String[] args) throws IOException { File input = new File("puntos.csv"); List<Vertex> vertices = null; try ( VertexReaderText vrt = new VertexReaderText(input)) { vertices = vrt.read(null); } double pointSpacing = 0.2; double edgeTooLong = 0.4; // based on point spacing double pointTooClose = 0.021; // vertices are numbered 0 to n-1 boolean[] doNotTest = new boolean[vertices.size()]; boolean[] modified = new boolean[vertices.size()]; List<Vertex> replacements = new ArrayList<>(); IIncrementalTin tin = new SemiVirtualIncrementalTin(pointSpacing); tin.add(vertices, null); List<IQuadEdge> perimeter = tin.getPerimeter(); // the convex hull // mark all vertices on the perimeter as do-not-test for (IQuadEdge edge : perimeter) { Vertex A = edge.getA(); // the vertices for the edge are A and B doNotTest[A.getIndex()] = true; } // For all excessively long edges, find the vertices that are too close // and move them to the edge. for (IQuadEdge edge : perimeter) { double eLength = edge.getLength(); if (eLength < edgeTooLong) { continue; // no processing required } Vertex A = edge.getA(); Vertex B = edge.getB(); double eX = B.getX() - A.getX(); // vector in direction of edge double eY = B.getY() - A.getY(); double pX = -eY / eLength; // unit vector perpendicular to edge double pY = eX / eLength; for (Vertex v : vertices) { if (doNotTest[v.getIndex()]) { continue; } double vX = v.getX() - A.getX(); double vY = v.getY() - A.getY(); // compute t, the parameter for a point on the line of the edge // closest to the vertex. We are only interested in this point // if it falls between the two endpoints of the edge. // in that case, t will be in the range 0 < t < 1 double t = (vX * eX + vY * eY) / (eLength * eLength); if (0 < t && t < 1) { double s = pX * vX + pY * vY; // distance of V from edge if (s < pointTooClose) { double x = A.getX() + t * eX; // point on edge double y = A.getY() + t * eY; Vertex X = new Vertex(x, y, v.getZ(), v.getIndex()); modified[v.getIndex()] = true; doNotTest[v.getIndex()] = true; replacements.add(X); } } } // end of vertices loop } // end of perimeter loop System.out.println("i,x,y,z"); for (Vertex v : vertices) { if (!modified[v.getIndex()]) { System.out.format("%d,%19.16f,%19.16f,%19.16f%n", v.getIndex(), v.getX(), v.getY(), v.getZ()); } } replacements.sort(new Comparator<Vertex>() { @Override public int compare(Vertex arg0, Vertex arg1) { return Integer.compare(arg0.getIndex(), arg1.getIndex()); } }); System.out.println(""); for (Vertex v : replacements) { System.out.format("%d,%19.16f,%19.16f,%19.16f%n", v.getIndex(), v.getX(), v.getY(), v.getZ()); } } } And here are the modified vertices. Some of the z values may be a little different because Tinfour only supports single-precision floating point values for its z elements.
1,-1.7362462292391640,-1.9574243190958676, 0.2769193053245544 2,-1.5328386393032090,-1.9691151455342581, 0.2039903998374939 3,-1.3356454135288550,-1.9804488021499242, 0.1507489830255508 7,-0.5545998965948742,-2.0099766379255533, 0.1641141176223755 8,-0.3452278908343442,-2.0128547506964680, 0.2061954736709595 9,-0.1280981557739663,-2.0158395043704496, 0.2454121708869934 10, 0.0960477228578226,-2.0189207048083720, 0.2699134647846222 11, 0.3249315805387950,-2.0220670354338774, 0.2698880732059479 12, 0.5546177976887393,-2.0252243956190710, 0.2401449084281921 13, 0.7809245195097682,-2.0283352998865780, 0.1818846762180328 14, 1.0012918968621154,-2.0313645595086890, 0.1022855937480927 15, 1.2151144438651920,-2.0343038512301570, 0.0123253259807825 20,-1.9574243190958676,-1.7362462292391640, 0.2769193053245544 40,-1.9691151455342581,-1.5328386393032090, 0.2039903998374939 60,-1.9804488021499242,-1.3356454135288550, 0.1507489830255508 99, 2.0343038512301570,-1.2151144438651922, 0.0123253259807825 119, 2.0313645595086890,-1.0012918968621158, 0.1022855937480927 139, 2.0283352998865780,-0.7809245195097688, 0.1818846762180328 140,-2.0099766379255533,-0.5545998965948735, 0.1641141176223755 159, 2.0252243956190710,-0.5546177976887400, 0.2401449084281921 160,-2.0128547506964680,-0.3452278908343438, 0.2061954736709595 179, 2.0220670354338774,-0.3249315805387953, 0.2698880732059479 180,-2.0158395043704496,-0.1280981557739660, 0.2454121708869934 199, 2.0189207048083720,-0.0960477228578232, 0.2699134647846222 200,-2.0189207048083720, 0.0960477228578229, 0.2699134647846222 219, 2.0158395043704496, 0.1280981557739658, 0.2454121708869934 220,-2.0220670354338774, 0.3249315805387953, 0.2698880732059479 239, 2.0128547506964680, 0.3452278908343438, 0.2061954736709595 240,-2.0252243956190710, 0.5546177976887398, 0.2401449084281921 259, 2.0099766379255533, 0.5545998965948733, 0.1641141176223755 260,-2.0283352998865780, 0.7809245195097683, 0.1818846762180328 280,-2.0313645595086890, 1.0012918968621158, 0.1022855937480927 300,-2.0343038512301570, 1.2151144438651922, 0.0123253259807825 339, 1.9804488021499242, 1.3356454135288547, 0.1507489830255508 359, 1.9691151455342581, 1.5328386393032085, 0.2039903998374939 379, 1.9574243190958676, 1.7362462292391640, 0.2769193053245544 384,-1.2151144438651920, 2.0343038512301570, 0.0123253259807825 385,-1.0012918968621156, 2.0313645595086890, 0.1022855937480927 386,-0.7809245195097685, 2.0283352998865780, 0.1818846762180328 387,-0.5546177976887396, 2.0252243956190710, 0.2401449084281921 388,-0.3249315805387951, 2.0220670354338774, 0.2698880732059479 389,-0.0960477228578228, 2.0189207048083720, 0.2699134647846222 390, 0.1280981557739660, 2.0158395043704496, 0.2454121708869934 391, 0.3452278908343442, 2.0128547506964680, 0.2061954736709595 392, 0.5545998965948740, 2.0099766379255533, 0.1641141176223755 396, 1.3356454135288547, 1.9804488021499242, 0.1507489830255508 397, 1.5328386393032085, 1.9691151455342581, 0.2039903998374939 398, 1.7362462292391640, 1.9574243190958676, 0.2769193053245544
3D Delaunay triangulation: bad output (extra simplices appearing)
I am using python3.11 to create the Delaunay triangulation of a point cloud with scipy.spatial.Delaunay and it is misbehaving by creating some extra faces. In the image below you can see a 3D scatter plot of the points. The image was created using the following few lines of code: import plotly.graph_objects as go fig = go.Figure() fig.add_scatter3d(x = puntos[:,0], y = puntos[:,1], z = puntos[:,2],mode='markers', marker=dict( size=1, color='rgb(0,0,0)', opacity=0.8 )) fig.update_layout(scene = dict(aspectmode = 'data')) fig.show() The data puntos can be downloaded as a csv file in this link. Now, as I said, I am interested in obtaining the Delaunay triangulation of that point cloud, for which the following piece of code is used. import numpy as np import pandas as pd from scipy.spatial import Delaunay import plotly.figure_factory as ff puntos = pd.read_csv('puntos.csv') puntos = puntos[['0', '1', '2']].to_numpy() tri = Delaunay(np.array([puntos[:,0], puntos[:,1]]).T) simplices = tri.simplices fig = ff.create_trisurf(x=puntos[:,0], y=puntos[:,1], z=puntos[:,2], simplices=simplices, aspectratio=dict(x=1, y=1, z=0.3)) fig.show() This produces the following image (the point cloud image and the triangulation image do not have exactly the same aspect ratio, but I find it sufficient this way): As you might see, the triangulation is creating some extra faces on the boundary of the surface, and that is repeated along the four sides of the boundary. Does anyone know why this happens and how I can solve it? Thank you in advance!
[ "Firstly, I congratulate you on such an exquisite example. I recommend you explore examples of the function Delaunay. The documentation exhibits several properties whose output may interest you.\n", "Referring to my comments for your original question... The extra simplices occur because some of your vertices are positioned inside, but near, the convex hull of the Delaunay triangulation. The code below adjusts the position of the problematic vertices by finding the nearest point on the convex hull. For this example, I use the Tinfour Software Library which is written in Java. But you should be able to adapt the ideas to Python if you wish to do so.\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.Comparator;\nimport java.util.List;\nimport org.tinfour.common.IIncrementalTin;\nimport org.tinfour.common.IQuadEdge;\nimport org.tinfour.common.Vertex;\nimport org.tinfour.semivirtual.SemiVirtualIncrementalTin;\nimport org.tinfour.utils.loaders.VertexReaderText;\n\npublic class AdjustEdgePoints {\n public static void main(String[] args) throws IOException {\n File input = new File(\"puntos.csv\");\n List<Vertex> vertices = null;\n try ( VertexReaderText vrt = new VertexReaderText(input)) {\n vertices = vrt.read(null);\n }\n\n double pointSpacing = 0.2;\n double edgeTooLong = 0.4; // based on point spacng\n double pointTooClose = 0.021;\n\n // vertices are numbered 0 to n-1\n boolean[] doNotTest = new boolean[vertices.size()];\n boolean[] modified = new boolean[vertices.size()];\n List<Vertex> replacements = new ArrayList<>();\n\n IIncrementalTin tin = new SemiVirtualIncrementalTin(pointSpacing);\n tin.add(vertices, null);\n List<IQuadEdge> perimeter = tin.getPerimeter(); // the convex hull\n // mark all vertices on the perimeter as do-not-test\n for (IQuadEdge edge : perimeter) {\n Vertex A = edge.getA(); // vertices are for edge are A and B\n doNotTest[A.getIndex()] = true;\n }\n\n // For all excessively long edges, find the vertices that are too close\n // and move them to the edge.\n for (IQuadEdge edge : perimeter) {\n double eLength = edge.getLength();\n if (eLength < edgeTooLong) {\n continue; // no processing required\n }\n Vertex A = edge.getA();\n Vertex B = edge.getB();\n double eX = B.getX() - A.getX(); // vector in direction of edge\n double eY = B.getY() - A.getY();\n double pX = -eY / eLength; // unit vector perpendicular to edge\n double pY = eX / eLength;\n\n for (Vertex v : vertices) {\n if (doNotTest[v.getIndex()]) {\n continue;\n }\n double vX = v.getX() - A.getX();\n double vY = v.getY() - A.getY();\n // compute t, the parameter for a point on the line of the edge\n // closest to the vertex. 
We are only interested in this point\n // if it falls between the two endpoints of the edge.\n // in that case, t will be in the range 0 < t < 1\n double t = (vX * eX + vY * eY) / (eLength * eLength);\n if (0 < t && t < 1) {\n double s = pX * vX + pY * vY; // distance of V from edge\n if (s < pointTooClose) {\n double x = A.getX() + t * eX; // point on edge\n double y = A.getY() + t * eY;\n Vertex X = new Vertex(x, y, v.getZ(), v.getIndex());\n modified[v.getIndex()] = true;\n doNotTest[v.getIndex()] = true;\n replacements.add(X);\n }\n }\n } // end of vertices loop\n } // end of perimeter loop\n\n System.out.println(\"i,x,y,z\");\n for (Vertex v : vertices) {\n if (!modified[v.getIndex()]) {\n System.out.format(\"%d,%19.16f,%19.16f,%19.16f%n\",\n v.getIndex(), v.getX(), v.getY(), v.getZ());\n }\n }\n\n replacements.sort(new Comparator<Vertex>() {\n @Override\n public int compare(Vertex arg0, Vertex arg1) {\n return Integer.compare(arg0.getIndex(), arg1.getIndex());\n }\n });\n System.out.println(\"\");\n for (Vertex v : replacements) {\n System.out.format(\"%d,%19.16f,%19.16f,%19.16f%n\",\n v.getIndex(), v.getX(), v.getY(), v.getZ());\n }\n }\n\n}\n\nAnd here are the modified vertices. Some of the z values may be a little different because Tinfour only supports single-precision floating point values for its z elements.\n1,-1.7362462292391640,-1.9574243190958676, 0.2769193053245544\n2,-1.5328386393032090,-1.9691151455342581, 0.2039903998374939\n3,-1.3356454135288550,-1.9804488021499242, 0.1507489830255508\n7,-0.5545998965948742,-2.0099766379255533, 0.1641141176223755\n8,-0.3452278908343442,-2.0128547506964680, 0.2061954736709595\n9,-0.1280981557739663,-2.0158395043704496, 0.2454121708869934\n10, 0.0960477228578226,-2.0189207048083720, 0.2699134647846222\n11, 0.3249315805387950,-2.0220670354338774, 0.2698880732059479\n12, 0.5546177976887393,-2.0252243956190710, 0.2401449084281921\n13, 0.7809245195097682,-2.0283352998865780, 0.1818846762180328\n14, 1.0012918968621154,-2.0313645595086890, 0.1022855937480927\n15, 1.2151144438651920,-2.0343038512301570, 0.0123253259807825\n20,-1.9574243190958676,-1.7362462292391640, 0.2769193053245544\n40,-1.9691151455342581,-1.5328386393032090, 0.2039903998374939\n60,-1.9804488021499242,-1.3356454135288550, 0.1507489830255508\n99, 2.0343038512301570,-1.2151144438651922, 0.0123253259807825\n119, 2.0313645595086890,-1.0012918968621158, 0.1022855937480927\n139, 2.0283352998865780,-0.7809245195097688, 0.1818846762180328\n140,-2.0099766379255533,-0.5545998965948735, 0.1641141176223755\n159, 2.0252243956190710,-0.5546177976887400, 0.2401449084281921\n160,-2.0128547506964680,-0.3452278908343438, 0.2061954736709595\n179, 2.0220670354338774,-0.3249315805387953, 0.2698880732059479\n180,-2.0158395043704496,-0.1280981557739660, 0.2454121708869934\n199, 2.0189207048083720,-0.0960477228578232, 0.2699134647846222\n200,-2.0189207048083720, 0.0960477228578229, 0.2699134647846222\n219, 2.0158395043704496, 0.1280981557739658, 0.2454121708869934\n220,-2.0220670354338774, 0.3249315805387953, 0.2698880732059479\n239, 2.0128547506964680, 0.3452278908343438, 0.2061954736709595\n240,-2.0252243956190710, 0.5546177976887398, 0.2401449084281921\n259, 2.0099766379255533, 0.5545998965948733, 0.1641141176223755\n260,-2.0283352998865780, 0.7809245195097683, 0.1818846762180328\n280,-2.0313645595086890, 1.0012918968621158, 0.1022855937480927\n300,-2.0343038512301570, 1.2151144438651922, 0.0123253259807825\n339, 1.9804488021499242, 1.3356454135288547, 0.1507489830255508\n359, 
1.9691151455342581, 1.5328386393032085, 0.2039903998374939\n379, 1.9574243190958676, 1.7362462292391640, 0.2769193053245544\n384,-1.2151144438651920, 2.0343038512301570, 0.0123253259807825\n385,-1.0012918968621156, 2.0313645595086890, 0.1022855937480927\n386,-0.7809245195097685, 2.0283352998865780, 0.1818846762180328\n387,-0.5546177976887396, 2.0252243956190710, 0.2401449084281921\n388,-0.3249315805387951, 2.0220670354338774, 0.2698880732059479\n389,-0.0960477228578228, 2.0189207048083720, 0.2699134647846222\n390, 0.1280981557739660, 2.0158395043704496, 0.2454121708869934\n391, 0.3452278908343442, 2.0128547506964680, 0.2061954736709595\n392, 0.5545998965948740, 2.0099766379255533, 0.1641141176223755\n396, 1.3356454135288547, 1.9804488021499242, 0.1507489830255508\n397, 1.5328386393032085, 1.9691151455342581, 0.2039903998374939\n398, 1.7362462292391640, 1.9574243190958676, 0.2769193053245544\n\n" ]
[ 0, 0 ]
[]
[]
[ "3d", "plotly_python", "python", "scipy", "triangulation" ]
stackoverflow_0074642689_3d_plotly_python_python_scipy_triangulation.txt
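The Java program above can also be approximated directly in Python without leaving scipy: since the unwanted faces are long, thin triangles hugging the convex hull, one common workaround is simply to filter the triangulation by edge length before plotting. A minimal sketch follows, assuming the same puntos.csv layout as in the question; the 0.4 cutoff (roughly twice the point spacing used in the answer) is an assumption to tune, not a value from the original posts.

import numpy as np
import pandas as pd
from scipy.spatial import Delaunay

puntos = pd.read_csv('puntos.csv')[['0', '1', '2']].to_numpy()
tri = Delaunay(puntos[:, :2])  # 2D triangulation of the x, y columns

max_edge = 0.4  # assumed cutoff; boundary slivers have edges much longer than this
keep = []
for s in tri.simplices:
    p = puntos[s, :2]
    edge_lengths = [np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)]
    if max(edge_lengths) < max_edge:
        keep.append(s)
simplices = np.array(keep)  # pass this to ff.create_trisurf instead of tri.simplices

Unlike the vertex-snapping approach, this discards the sliver triangles rather than repairing them, so the mesh boundary will follow the inner points instead of the convex hull; which behavior is preferable depends on the application.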
Q: Part of which architecture layer is React Redux? I have recently read "Clean Architecture" by Bob Martin. Even though the principles he explains there apply to all languages, it is harder for me to grasp those concepts around JavaScript (functional languages in general). I have a React application where I have applied React Redux, but now that I have read the book I wonder whether I am too dependent on Redux and how I can make myself more independent, so that I can easily substitute Redux with any other approach (React Hooks for instance) any time I want. Bob Martin emphasizes that we need to be careful about architecture boundaries, but I am really not sure where to put Redux in that case. Do I do business logic in Redux? If yes, doesn't this break the Clean Architecture recommendation to keep business logic independent? If I put my logic in Redux, don't I become too dependent on it? I have my pure view components only to display data on them, then some viewModel components that handle view logic, but from there I am not sure what is happening next. A: Redux should be part of the middleware. A: In general Redux is a state container. Redux can be used to implement the Flux architecture that Facebook created for client-side applications usually implemented in React, hence the name Redux. Redux is usually located in the UI layer. From an architectural perspective it is then used similarly to a controller and its models in MVC. E.g. a view dispatches an action that describes what should be done. This action is dispatched to a reducer that executes logic (like a controller) and updates the store (a kind of model). After the store has changed, the views rerender. Since Redux is just a state container you could also use it in other layers. Hopefully not directly, because frameworks belong to the outer circle of the clean architecture. Thus you should create a nice abstraction and use Redux behind that abstraction (only in the implementation). A: In Clean Architecture, the idea is to separate your application into different layers, where the innermost layer contains your business logic and the outermost layer contains your user interface. In between, you can have additional layers that handle things like database access, network communication, and so on. The point of this separation is to make your application more modular and easier to maintain. For example, if you want to switch from Redux to React Hooks, you should be able to do so without having to change your business logic. In general, it's not a good idea to put your business logic in Redux. Instead, you should keep your business logic in the innermost layer of your application, and use Redux (or any other state management tool) to manage the state of your user interface. This way, you can switch to a different state management tool without having to change your business logic. It's also a good idea to keep your view components as pure as possible, meaning that they should only be responsible for rendering data and not for handling any business logic. This will make your application easier to test and maintain. Overall, the key is to think about your application's architecture in terms of layers, and to keep the different parts of your application separate and modular. This will make it easier to change or replace individual parts of your application without having to change the whole thing.
Part of which architecture layer is React Redux?
I have recently read "Clean Architecture" by Bob Martin. Even though the principles he explains there apply to all languages it is harder for me to grasp those concepts around JavaScript (functional languages in general). I have a React application where I have applied React Redux but now when I have read the book I wonder if I am not too dependent on Redux and how can I make myself more independent so that I can easily substitute Redux with any other approach (React Hooks for instance) any time I want. Bob Martin is emphasizing on the fact that we need to be careful about architecture boundaries but I am really not sure where I can put Redux in that case? Do I do business logic in Redux? If yes, does not this break the Clean Architecture recommendation to keep business logic independent? If I put my logic in Redux I become too dependent on it? I have my pure view components only to display data on them them some viewModel components that handle view logic but from there I am not sure what is happening next.
[ "Redux should be part of the middleware.\n", "In general Redux is a state container. Redux can be used to implement the Flux architecture that facebook created for client-side applications usually implemented in React , hence the name Redux.\nRedux is ususally located in the UI layer. From an architectural perspective it is then used similar to a controller and it's models in MVC. E.g. a view dispatches an action that describes what schould be done. This action is dispatched to a reducer that executes logic (like a controller) and updates the store (a kind of model). After the store changed the views rerender.\nSince Redux is just a state container you could also use it in other layers. Hopefully not directly, because frameworks belong to the outer circle of the clean architecture. Thus you should create a nice abstraction and use Redux behind that abstraction (only in the implementation).\n", "In Clean Architecture, the idea is to separate your application into different layers, where the innermost layer contains your business logic and the outermost layer contains your user interface. In between, you can have additional layers that handle things like database access, network communication, and so on.\nThe point of this separation is to make your application more modular and easier to maintain. For example, if you want to switch from Redux to React Hooks, you should be able to do so without having to change your business logic.\nIn general, it's not a good idea to put your business logic in Redux. Instead, you should keep your business logic in the innermost layer of your application, and use Redux (or any other state management tool) to manage the state of your user interface. This way, you can switch to a different state management tool without having to change your business logic.\nIt's also a good idea to keep your view components as pure as possible, meaning that they should only be responsible for rendering data and not for handling any business logic. This will make your application easier to test and maintain.\nOverall, the key is to think about your application's architecture in terms of layers, and to keep the different parts of your application separate and modular. This will make it easier to change or replace individual parts of your application without having to change the whole thing.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "business_logic", "clean_architecture", "react_redux", "reactjs", "state" ]
stackoverflow_0074642219_business_logic_clean_architecture_react_redux_reactjs_state.txt
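The "create a nice abstraction and use Redux behind it" advice above is language-agnostic. A rough sketch of that boundary, written here in Python with invented names purely for illustration (this is not any real Redux API): the business rule is a pure function over plain data, and the store-facing reducer is a thin adapter that can be swapped without touching the rule.

from dataclasses import dataclass, replace

# Inner layer: pure business logic, knows nothing about Redux or hooks.
@dataclass(frozen=True)
class Cart:
    items: tuple = ()
    total: float = 0.0

def add_item(cart: Cart, name: str, price: float) -> Cart:
    # Business rule lives here and is trivially unit-testable.
    return replace(cart, items=cart.items + (name,), total=cart.total + price)

# Outer layer: a thin adapter in the style of a reducer; replacing the
# state container only means rewriting this function, not add_item().
def cart_reducer(state: Cart, action: dict) -> Cart:
    if action.get('type') == 'ADD_ITEM':
        return add_item(state, action['name'], action['price'])
    return state

state = cart_reducer(Cart(), {'type': 'ADD_ITEM', 'name': 'book', 'price': 9.5})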
Q: How can I integrate these plots by putting them near each other? I have these two plots (a heatmap and a stacked barplot). How can I put these plots near each other? Indeed I want to integrate these two plots. Thanks for any help. Here is my script: df.bar <- data.frame(Gene.name=c('Gene1','Gene1','Gene1','Gene2','Gene2','Gene2','Gene3','Gene3','Gene3', 'Gene4','Gene4','Gene4'), Alteration.frequency=c(0.0909, 2.1838, 0.6369, 0.1819, 1.0919, 0.3639, 0.4549,0.7279, 0.7279, 0.000, 0.3639, 0.4549), Alteration.Type=c("Deep deletion", "Amplification", "Point mutation", "Deep deletion", "Amplification", "Point mutation", "Deep deletion", "Amplification" , "Point mutation", "Deep deletion", "Amplification" , "Point mutation")) library(ggplot2) ggplot(df.bar, aes(fill=factor(Alteration.Type, levels = c('Point mutation','Amplification','Deep deletion')), y=Alteration.frequency, x=Gene.name)) + geom_bar(position="stack", stat="identity")+theme_bw()+ theme(axis.text.x = element_text(size = 10, angle = 45, hjust = 1, colour = 'black'))+ scale_fill_manual(values=c("seagreen2", "maroon2",'deepskyblue3'))+ labs(fill = 'Alteration type') df.heatmap <- data.frame(Gene_name=c('Gene1','Gene2','Gene3','Gene4'), log2FC=c(0.56,-1.5,-0.8,2)) library(gplots) heatmap.2(cbind(df.heatmap$log2FC, df.heatmap$log2FC), trace="n", Colv = NA, dendrogram = "none", labCol = "", labRow = df.heatmap$Gene_name, cexRow = 0.75, col=my_palette) I need something like this plot. A: There are various ways to mix base R graphics like heatmap.2 and grid-based graphics like ggplot, but this case is a little more complex due to the fact that heatmap.2 seems to use multiple plotting regions. However, the following solution should do the trick: library(gplots) library(ggplot2) library(patchwork) library(gridGraphics) p1 <- ggplot(df.bar, aes(fill = factor(Alteration.Type, levels = c('Point mutation', 'Amplification', 'Deep deletion')), y = Alteration.frequency, x = Gene.name)) + geom_col(position = "stack") + theme_bw() + theme(axis.text.x = element_text(size = 10, angle = 45, hjust = 1, colour = 'black')) + scale_fill_manual(values = c("seagreen2", "maroon2", 'deepskyblue3')) + labs(fill = 'Alteration type') heatmap.2(cbind(df.heatmap$log2FC, df.heatmap$log2FC), trace = "n", Colv = NA, dendrogram = "none", labCol = "", labRow = df.heatmap$Gene_name, cexRow = 0.75, col = colorRampPalette(c("red", "white", "blue"))) grid.echo() p2 <- wrap_elements(grid.grab()) p1 + p2 Edit From the comments, it seems we want one plot on top of the other, and that the OP was using heatmap.2 because it didn't seem possible to get a single variable heatmap in ggplot. This edit solves both these issues: library(ggplot2) library(patchwork) p1 <- ggplot(df.bar, aes(fill = factor(Alteration.Type, levels = c('Point mutation', 'Amplification', 'Deep deletion')), y = Alteration.frequency, x = Gene.name)) + geom_col(position = "stack") + theme_bw() + theme(axis.text.x = element_blank()) + scale_fill_manual(values = c("seagreen2", "maroon2", 'deepskyblue3')) + labs(fill = 'Alteration type', x = NULL) p2 <- ggplot(df.heatmap, aes(y = "A", x = Gene_name, fill = log2FC)) + geom_tile() + scale_fill_viridis_c() + scale_y_discrete(NULL, position = "right") + scale_x_discrete(NULL, expand = c(0.17, 0.1)) + theme_minimal() + theme(axis.text.y = element_blank(), plot.title = element_blank()) p1 / p2 + plot_layout(height = c(3, 1)) A: Another option could be using plot_grid from the package cowplot to combine ggplot and basegraphics or other framework. 
For the heatmap you could wrap the call in ~ (a one-sided formula, which plot_grid can evaluate as a base graphics plot), and I slightly modified the size of the legend to fit it better. Here is a reproducible example (note that my_palette is never defined in the question, so it is defined here first): library(ggplot2) library(cowplot) library(gridGraphics) library(gplots) # palette used by heatmap.2 (undefined in the question) my_palette <- colorRampPalette(c("red", "white", "blue"))(25) # first plot p1 <- ggplot(df.bar, aes(fill=factor(Alteration.Type, levels = c('Point mutation','Amplification','Deep deletion')), y=Alteration.frequency, x=Gene.name)) + geom_bar(position="stack", stat="identity")+theme_bw()+ theme(axis.text.x = element_text(size = 10, angle = 45, hjust = 1, colour = 'black'))+ scale_fill_manual(values=c("seagreen2", "maroon2",'deepskyblue3'))+ labs(fill = 'Alteration type') # Second plot p2<- ~heatmap.2(cbind(df.heatmap$log2FC, df.heatmap$log2FC), trace="n", Colv = NA, dendrogram = "none", labCol = "", labRow = df.heatmap$Gene_name, cexRow = 0.75, col=my_palette, key.par=list(mar=c(3,0,3,0), cex.main = 0.75), keysize = 1.5) # combine all plots together plot_grid(p1, p2) Created on 2022-12-03 with reprex v2.0.2
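If the stacked layout from the patchwork edit above is preferred, cowplot can do the same; a minimal sketch, assuming p1 and p2 as defined here (the 3:1 ratio is just an illustration):

plot_grid(p1, p2, ncol = 1, rel_heights = c(3, 1))  # bar chart on top, heatmap below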
How can I integrate these plots by putting them near each other?
I have these two plots (a heatmap and a stacked barplot). How can I put these plots near each other? Indeed I want to integrate these two plots. Thanks for any help. Here is my script: df.bar <- data.frame(Gene.name=c('Gene1','Gene1','Gene1','Gene2','Gene2','Gene2','Gene3','Gene3','Gene3', 'Gene4','Gene4','Gene4'), Alteration.frequency=c(0.0909, 2.1838, 0.6369, 0.1819, 1.0919, 0.3639, 0.4549,0.7279, 0.7279, 0.000, 0.3639, 0.4549), Alteration.Type=c("Deep deletion", "Amplification", "Point mutation", "Deep deletion", "Amplification", "Point mutation", "Deep deletion", "Amplification" , "Point mutation", "Deep deletion", "Amplification" , "Point mutation")) library(ggplot2) ggplot(df.bar, aes(fill=factor(Alteration.Type, levels = c('Point mutation','Amplification','Deep deletion')), y=Alteration.frequency, x=Gene.name)) + geom_bar(position="stack", stat="identity")+theme_bw()+ theme(axis.text.x = element_text(size = 10, angle = 45, hjust = 1, colour = 'black'))+ scale_fill_manual(values=c("seagreen2", "maroon2",'deepskyblue3'))+ labs(fill = 'Alteration type') df.heatmap <- data.frame(Gene_name=c('Gene1','Gene2','Gene3','Gene4'), log2FC=c(0.56,-1.5,-0.8,2)) library(gplots) heatmap.2(cbind(df.heatmap$log2FC, df.heatmap$log2FC), trace="n", Colv = NA, dendrogram = "none", labCol = "", labRow = df$Gene_name, cexRow = 0.75, col=my_palette) I need such this plot.
[ "There are various ways to mix base R graphics like heatmap.2 and grid-based graphics like ggplot, but this case is a little more complex due to the fact that heatmap.2 seems to use multiple plotting regions. However, the following solution should do the trick:\nlibrary(gplots)\nlibrary(ggplot2)\nlibrary(patchwork)\nlibrary(gridGraphics)\n\np1 <- ggplot(df.bar,\n aes(fill = factor(Alteration.Type, levels = c('Point mutation',\n 'Amplification', \n 'Deep deletion')),\n y = Alteration.frequency, x = Gene.name)) + \n geom_col(position = \"stack\") +\n theme_bw() +\n theme(axis.text.x = element_text(size = 10, angle = 45, hjust = 1, \n colour = 'black')) +\n scale_fill_manual(values = c(\"seagreen2\", \"maroon2\", 'deepskyblue3')) +\n labs(fill = 'Alteration type')\n\nheatmap.2(cbind(df.heatmap$log2FC, df.heatmap$log2FC), trace = \"n\", \n Colv = NA, dendrogram = \"none\", labCol = \"\", \n labRow = df.heatmap$Gene_name, cexRow = 0.75, \n col = colorRampPalette(c(\"red\", \"white\", \"blue\")))\n\ngrid.echo()\np2 <- wrap_elements(grid.grab())\n\np1 + p2\n\n\n\nEdit\nFrom the comments, it seems we want one plot on top of the other, and that the OP was using heatmap.2 because it didn't seem possible to get a single variable heatmap in ggplot.\nThis edit solves both these issues:\nlibrary(ggplot2)\nlibrary(patchwork)\n\np1 <- ggplot(df.bar,\n aes(fill = factor(Alteration.Type, levels = c('Point mutation',\n 'Amplification', \n 'Deep deletion')),\n y = Alteration.frequency, x = Gene.name)) + \n geom_col(position = \"stack\") +\n theme_bw() +\n theme(axis.text.x = element_blank()) +\n scale_fill_manual(values = c(\"seagreen2\", \"maroon2\", 'deepskyblue3')) +\n labs(fill = 'Alteration type', x = NULL)\n\np2 <- ggplot(df.heatmap, aes(y = \"A\", x = Gene_name,\n fill = log2FC)) +\n geom_tile() +\n scale_fill_viridis_c() +\n scale_y_discrete(NULL, position = \"right\") +\n scale_x_discrete(NULL, expand = c(0.17, 0.1)) +\n theme_minimal() +\n theme(axis.text.y = element_blank(),\n plot.title = element_blank())\n\np1 / p2 + plot_layout(height = c(3, 1))\n\n\n", "Another option could be using plot_grid from the package cowplot to combine ggplot and basegraphics or other framework. For the heatmap you could use ~ and I slightly modified the size of the legend to fit it better. Here is a reproducible example:\nlibrary(ggplot2)\nlibrary(cowplot)\nlibrary(gridGraphics)\nlibrary(gplots)\n\n# first plot\np1 <- ggplot(df.bar,\n aes(fill=factor(Alteration.Type, levels = c('Point mutation','Amplification','Deep deletion')),\n y=Alteration.frequency, x=Gene.name)) + \n geom_bar(position=\"stack\", stat=\"identity\")+theme_bw()+\n theme(axis.text.x = element_text(size = 10, angle = 45, hjust = 1, colour = 'black'))+\n scale_fill_manual(values=c(\"seagreen2\", \"maroon2\",'deepskyblue3'))+\n labs(fill = 'Alteration type')\n\n# Second plot\np2<- ~heatmap.2(cbind(df.heatmap$log2FC, df.heatmap$log2FC), trace=\"n\", Colv = NA, \n dendrogram = \"none\", labCol = \"\", labRow = df.heatmap$Gene_name, cexRow = 0.75,\n col=my_palette, key.par=list(mar=c(3,0,3,0), cex.main = 0.75), keysize = 1.5)\n\n# combine all plots together\nplot_grid(p1, p2)\n\n\nCreated on 2022-12-03 with reprex v2.0.2\n" ]
[ 2, 1 ]
[]
[]
[ "ggplot2", "gplots", "heatmap", "r" ]
stackoverflow_0074664597_ggplot2_gplots_heatmap_r.txt
Q: client.application.commands.set not work discord.js v14 I want to deploy slash commands with my Discord bot. But I get this error: client.application.commands.set(arrayOfSlashCommands) ^ TypeError: Cannot read properties of null (reading 'commands') My handler: ["loadEvents", "loadSlashsCommand"].forEach((handler) => { require(`./handler/${handler}`)(client); }); //Config.json valid ? requirements const { checkValid } = require('./Functions/Validation/checkValid') //Check if valid checkValid() //Login console.log((`${new Date().getHours()}:${new Date().getMinutes()}:${new Date().getSeconds()} -`), chalk.bgBlue('Connexion à l\'API Discord en cours...')) client.login(BotToken) My loadSlashsCommands: const chalk = require("chalk"); const { glob } = require("glob"); const { promisify } = require("util"); const globPromise = promisify(glob); const config = require('../Configuration/config.json') module.exports = async (client) => { const fs = require("fs"); const slashCommands = await globPromise( `${process.cwd()}/Commands/*/*.js` ); const arrayOfSlashCommands = []; slashCommands.map((value) => { const file = require(value); if (!file?.name) return; client.slashCommands.set(file.name, file); arrayOfSlashCommands.push(file); }); await client.application.commands.set(arrayOfSlashCommands) }; I don't know why the error occurs. I'm using discord.js v14 A: The await client.application.commands.set(arrayOfSlashCommands) call must happen in the ready event, because client.application is null until the client has logged in. So what you can do is add a ready event listener (note the callback must be async so that await is legal inside it). See below: const chalk = require("chalk"); const { glob } = require("glob"); const { promisify } = require("util"); const globPromise = promisify(glob); const config = require('../Configuration/config.json') module.exports = async (client) => { const arrayOfSlashCommands = []; const fs = require("fs"); const slashCommands = await globPromise(`${process.cwd()}/Commands/*/*.js`); slashCommands.map((value) => { const file = require(value); if (!file?.name) return; client.slashCommands.set(file.name, file); arrayOfSlashCommands.push(file); }); client.on('ready', async () => { await client.application.commands.set(arrayOfSlashCommands) }); };
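As an aside, the discord.js v14 guide registers slash commands through a standalone REST call, which avoids depending on a ready client at all. A minimal sketch, assuming clientId and BotToken come from your config and commands is an array of command data (e.g. built like arrayOfSlashCommands above); all three names are placeholders here:

const { REST, Routes } = require('discord.js');

async function deploySlashCommands(commands) {
  const rest = new REST({ version: '10' }).setToken(BotToken);
  // Registers global application commands without a logged-in client
  await rest.put(Routes.applicationCommands(clientId), { body: commands });
}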
client.application.commands.set not work discord.js v14
I want to deploy slash commandes with my Discord bot. But I get this error : client.application.commands.set(arrayOfSlashCommands) ^ TypeError: Cannot read properties of null (reading 'commands') My handler : ["loadEvents", "loadSlashsCommand"].forEach((handler) => { require(`./handler/${handler}`)(client); }); //Config.json valid ? requirements const { checkValid } = require('./Functions/Validation/checkValid') //Check if valid checkValid() //Login console.log((`${new Date().getHours()}:${new Date().getMinutes()}:${new Date().getSeconds()} -`), chalk.bgBlue('Connexion à l\'API Discord en cours...')) client.login(BotToken) My loadSlashsCommands : const chalk = require("chalk"); const { glob } = require("glob"); const { promisify } = require("util"); const globPromise = promisify(glob); const config = require('../Configuration/config.json') module.exports = async (client) => { const fs = require("fs"); const slashCommands = await globPromise( `${process.cwd()}/Commands/*/*.js` ); const arrayOfSlashCommands = []; slashCommands.map((value) => { const file = require(value); if (!file?.name) return; client.slashCommands.set(file.name, file); arrayOfSlashCommands.push(file); }); await client.application.commands.set(arrayOfSlashCommands) }; I don't know why the error occurs. I'm using discordjs v14
[ "The await client.application.commands.set(arrayOfSlashCommands) must be in ready event.\nSo what you can do is add a ready event listener\nSee below\nconst chalk = require(\"chalk\");\nconst { glob } = require(\"glob\");\nconst { promisify } = require(\"util\");\nconst globPromise = promisify(glob);\nconst config = require('../Configuration/config.json')\n\nmodule.exports = async (client) => {\n const arrayOfSlashCommands = [];\n const fs = require(\"fs\");\n const slashCommands = await globPromise(`${process.cwd()}/Commands/*/*.js`);\n\n slashCommands.map((value) => {\n const file = require(value);\n if (!file?.name) return;\n\n client.slashCommands.set(file.name, file);\n arrayOfSlashCommands.push(file);\n });\n\n client.on('ready', () => {\n await client.application.commands.set(arrayOfSlashCommands)\n });\n};\n\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.js", "javascript", "node.js" ]
stackoverflow_0074603619_discord_discord.js_javascript_node.js.txt
Q: gmail API watch webhook: how to get the user-id We need to use push notifications for many users, using the Gmail API watch call. The watch response we get on the webhook does not contain any information about the user for which the webhook has been called. The problem is that, according to Users.history: list, I cannot get messages without a user-id. How am I supposed to get the user-id from the watch webhook? OR Is it possible to call something similar to "Users.history: list" without needing a user-id? Thanks for your support, Adrien A: Edit, based on the link in the comments: If you check the Push Notifications documentation you will see that the response from the server contains a data field. // This is the actual notification data, as base64url-encoded JSON. data: "eyJlbWFpbEFkZHJlc3MiOiAidXNlckBleGFtcGxlLmNvbSIsICJoaXN0b3J5SWQiOiAiMTIzNDU2Nzg5MCJ9", The HTTP POST body is JSON and the actual Gmail notification payload is in the message.data field. That message.data field is a base64url-encoded string that decodes to a JSON object containing the email address and the new mailbox history ID for the user: {"emailAddress": "[email protected]", "historyId": "9876543210"} Data contains the email address of the user. Note: I am not able to test this; I am just reading the documentation, and I don't have a server set up to retrieve these notifications. A: The user-id is the email address of the account that was used to subscribe to push notifications.
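To make the first answer concrete, here is a minimal Python sketch of a webhook handler; it assumes body is the parsed HTTP POST payload from Pub/Sub and service is an authorized Gmail API client for that user (both are assumptions, not part of the original answers):

import base64, json

def handle_notification(body, service):
    # message.data is base64url-encoded JSON
    payload = json.loads(base64.urlsafe_b64decode(body["message"]["data"]))
    email = payload["emailAddress"]   # this is the user-id you were missing
    history_id = payload["historyId"]
    # use the email address as userId for Users.history: list
    return service.users().history().list(
        userId=email, startHistoryId=history_id).execute()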
gmail API watch webhook: how to get the user-id
We need to use push notifications for many users, using watch Gmail API call. The watch response we should get on the webhook does not contain any information about the user for which the webhook has been called. The problem is that according to Users.history: list I cannot get messages without user-id. How am I supposed to get user-id from the watch webhook? OR Is it possible to call something similar to "Users.history: list" without the need to have user-id? Thanks for your support, Adrien
[ "Edit from info in link in comment\nIf you check the Push Notifications you will see that the response from the server contains a data field.\n// This is the actual notification data, as base64url-encoded JSON. \n data: \"eyJlbWFpbEFkZHJlc3MiOiAidXNlckBleGFtcGxlLmNvbSIsICJoaXN0b3J5SWQiOiAiMTIzNDU2Nzg5MCJ9\",\n\n\nThe HTTP POST body is JSON and the actual Gmail notification payload is in the message.data field. That message.data field is a base64url-encoded string that decodes to a JSON object containing the email address and the new mailbox history ID for the user:\n\n{\"emailAddress\": \"[email protected]\", \"historyId\": \"9876543210\"}\n\nData contains the email address of the user.\nNote I don't have the power to test this I am just reading the documentation. I don't have a server set up to retrieve these notifications.\n", "user-id is your email, which is used to subscribe push notification\n" ]
[ 1, 0 ]
[]
[]
[ "gmail_api", "google_api", "push_notification" ]
stackoverflow_0041565822_gmail_api_google_api_push_notification.txt
Q: How to pass column value as parameter for rolling or shift Input: import pandas as pd import numpy as np df=pd.DataFrame(np.random.rand(10,1),columns = ['A']) df['pos']=[0,0,1,2,0,0,1,2,3,4] I tried df.A.rolling(df['pos']).max() and df.A.shift(df['pos']). It doesn't work; how can I achieve this? A: Both rolling and shift expect a single scalar parameter (a fixed window size or period) that is applied to the whole series at once; neither accepts a per-row Series. If you want to produce a series of sequences based on the integer parameters in pos, you need to repeat the operation once per value (note that duplicate values in pos collapse into a single column here): pd.DataFrame({f'seq {i}': df.A.shift(i) for i in df.pos})
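If the intent is instead a single column in which each row i takes the value of A shifted by its own pos[i], a vectorized sketch (assuming offsets that run off the start of the series should become NaN):

import numpy as np

idx = np.arange(len(df)) - df['pos'].to_numpy()  # per-row source index
a = df['A'].to_numpy()
# clip keeps the indexing valid; np.where masks the out-of-range rows
df['A_shifted'] = np.where(idx >= 0, a[np.clip(idx, 0, None)], np.nan)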
How to pass column value as parameter for rolling or shift
Input: import pandas as pd import numpy as np df=pd.DataFrame(np.random.rand(10,1),columns = ['A']) df['pos']=[0,0,1,2,0,0,1,2,3,4] I try to df.A.rolling(df['pos']).max() or df.A.shift(df['pos']) It doesn't work, how to achieve it?
[ "According to the documentation, both the rolling and shift operations are sequence-to-sequence transformations. If you want to produce a series of sequences based on the integer parameters in pos, you need to repeat this operation:\npd.DataFrame({f'seq {i}': df.A.shift(i) for i in df.pos})\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas" ]
stackoverflow_0074665579_dataframe_pandas.txt
Q: Select the correct html element with rvest On a previous occasion a Stack Overflow user helped me write this script. I edited it to add more attributes, but I have problems when trying to add the authors. The author label is next to the target and href attributes; I have a problem with this part. library(tidyverse) library(rvest) startTime <- Sys.time() get_cg <- function(pages) { cat("Scraping page", pages, "\n") page <- str_c("https://cgspace.cgiar.org/discover?scope=10568%2F106146&query=cassava&submit=&rpp=10&page=", pages) %>% read_html() tibble( title = page %>% html_elements(".ds-artifact-item") %>% html_element(".description-info") %>% html_text2(), # runs well fecha = page %>% html_elements(".ds-artifact-item") %>% html_element(".date") %>% html_text2(), # runs well Type = page %>% html_elements(".ds-artifact-item") %>% html_element(".artifact-type") %>% html_text2(), # runs well Autor= page %>% html_elements(".ds-artifact-item") %>% html_element(".description-info") %>% html_attr("href"), # does not download the authors link = page %>% html_elements(".ds-artifact-item") %>% html_element(".description-info") %>% html_attr("href") %>% # runs well str_c("https://cgspace.cgiar.org", .) ) } df <- map_dfr(1, get_cg) endTime <- Sys.time() print(endTime - startTime) I tried other selectors but get NA. A: This should get you a collapsed list of authors for each book, separated by ';', basically the same as presented on the page: library(tidyverse, warn.conflicts = F) library(rvest, warn.conflicts = F) startTime <- Sys.time() get_cg <- function(pages) { cat("Scraping page", pages, "\n") page <- str_c("https://cgspace.cgiar.org/discover?scope=10568%2F106146&query=cassava&submit=&rpp=10&page=", pages) %>% read_html() html_elements(page, "div.artifact-description > div.artifact-description") %>% map_df(~ list( title = html_element(.x, ".description-info") %>% html_text2(), fecha = html_element(.x, ".date") %>% html_text2(), Type = html_element(.x, ".artifact-type") %>% html_text2(), # Autor_links = html_elements(.x,".description-info > span > a") %>% html_attr("href") %>% paste(collapse = ";"), Autor = html_element(.x, "span.description-info") %>% html_text2(), link = html_element(.x, "a.description-info") %>% html_attr("href") %>% str_c("https://cgspace.cgiar.org", .)
)) } df <- map_dfr(1, get_cg) #> Scraping page 1 endTime <- Sys.time() print(endTime - startTime) #> Time difference of 0.989037 secs Result: df #> # A tibble: 10 × 5 #> title fecha Type Autor link #> <chr> <chr> <chr> <chr> <chr> #> 1 Global Climate Regions for Cassava 2020… Type… Hyma… http… #> 2 Performance of the CSM–MANIHOT–Cassava model for sim… 2021… Type… Phon… http… #> 3 Adoption of cassava improved modern varieties in the… 2020 Type… Laba… http… #> 4 First report of Sri Lankan cassava mosaic virus and … 2021… Type… Chit… http… #> 5 Surveillance and diagnostics dataset on Sri Lankan c… 2020 Type… Siri… http… #> 6 Socieconomic and soil conservation practices for cas… 2022… Type… Ibar… http… #> 7 The transformation and outcome of traditional cassav… 2020 Type… Dou,… http… #> 8 Cassava Annual Report 2019 2020 Type… Inte… http… #> 9 Cassava Annual Report 2020 2021… Type… Bece… http… #> 10 Adoption of cassava improved modern varieties in the… 2020 Type… Flor… http… glimpse(df) #> Rows: 10 #> Columns: 5 #> $ title <chr> "Global Climate Regions for Cassava", "Performance of the CSM–MA… #> $ fecha <chr> "2020-08-03", "2021-05-01", "2020", "2021-04-23", "2020", "2022-… #> $ Type <chr> "Type:Dataset", "Type:Journal Article", "Type:Dataset", "Type:Jo… #> $ Autor <chr> "Hyman, Glenn G.", "Phoncharoen, Phanupong; Banterng, Poramate; … #> $ link <chr> "https://cgspace.cgiar.org/handle/10568/109500", "https://cgspac… Created on 2022-12-03 with reprex v2.0.2 A: In the code you posted, you're using html_element to extract the Autor and link fields, and html_element only selects the first matching element per item. If you want every matching element, you can use html_nodes (an older alias of html_elements) instead, which returns all matching elements. Here's how you can use html_nodes to extract the Autor and link fields: Autor = page %>% html_elements(".ds-artifact-item") %>% html_nodes(".description-info") %>% html_attr("href"), link = page %>% html_elements(".ds-artifact-item") %>% html_nodes(".description-info") %>% html_attr("href") %>% str_c("https://cgspace.cgiar.org", .) Note that html_nodes returns a set of nodes, so to pull out one attribute per node with purrr you need to map html_attr over the set rather than pass the attribute name as a string. For example: Autor = page %>% html_elements(".ds-artifact-item") %>% html_nodes(".description-info") %>% map_chr(~ html_attr(.x, "href")), link = page %>% html_elements(".ds-artifact-item") %>% html_nodes(".description-info") %>% map_chr(~ html_attr(.x, "href")) %>% str_c("https://cgspace.cgiar.org", .) This extracts the href attribute of each matching element and returns them as a character vector, which you can then use to create the Autor and link columns in your data frame.
Select the correct html element with rvest
Im some ocassion a Stack user help me for make this script. Im edit it for add more attributes but I have problems when try to add Authors The Author label is next to target and href. I have problem in this part. library(tidyverse) library(rvest) startTime <- Sys.time() get_cg <- function(pages) { cat("Scraping page", pages, "\n") page <- str_c("https://cgspace.cgiar.org/discover? scope=10568%2F106146&query=cassava&submit=&rpp=10&page=", pages) %>% read_html() tibble( title = page %>% html_elements(".ds-artifact-item") %>% html_element(".description-info") %>% html_text2(), # run well fecha = page %>% html_elements(".ds-artifact-item") %>% html_element(".date") %>% html_text2(), # run well Type = page %>% html_elements(".ds-artifact-item") %>% html_element(".artifact-type") %>% html_text2(), # run well Autor= page %>% html_elements(".ds-artifact-item") %>% html_element(".description-info") %>% html_attr("href"), # not download the Authors link = page %>% html_elements(".ds-artifact-item") %>% html_element(".description-info") %>% html_attr("href") %>% # run well str_c("https://cgspace.cgiar.org", .) ) } df <- map_dfr(1, get_cg) endTime <- Sys.time() print(endTime - startTim) Im try with other selector but get NA
[ "This should get you a collapsed list of authors for each book, separated by ; , basically the same as presented on the page:\nlibrary(tidyverse, warn.conflicts = F)\nlibrary(rvest, warn.conflicts = F)\n\nstartTime <- Sys.time()\nget_cg <- function(pages) {\n \n cat(\"Scraping page\", pages, \"\\n\")\n \n page <-\n str_c(\"https://cgspace.cgiar.org/discover?scope=10568%2F106146&query=cassava&submit=&rpp=10&page=\", pages) %>%\n read_html()\n \n html_elements(page, \"div.artifact-description > div.artifact-description\") %>% \n map_df(~ list(\n title = html_element(.x, \".description-info\") %>% html_text2(),\n fecha = html_element(.x, \".date\") %>% html_text2(),\n Type = html_element(.x, \".artifact-type\") %>% html_text2(),\n # Autor_links = html_elements(.x,\".description-info > span > a\") %>% html_attr(\"href\") %>% paste(collapse = \";\"),\n Autor = html_element(.x, \"span.description-info\") %>% html_text2(),\n link = html_element(.x, \"a.description-info\") %>% html_attr(\"href\") %>% str_c(\"https://cgspace.cgiar.org\", .)\n )) \n}\n\ndf <- map_dfr(1, get_cg)\n#> Scraping page 1\n\nendTime <- Sys.time()\nprint(endTime - startTime)\n#> Time difference of 0.989037 secs\n\nResult:\ndf\n#> # A tibble: 10 × 5\n#> title fecha Type Autor link \n#> <chr> <chr> <chr> <chr> <chr>\n#> 1 Global Climate Regions for Cassava 2020… Type… Hyma… http…\n#> 2 Performance of the CSM–MANIHOT–Cassava model for sim… 2021… Type… Phon… http…\n#> 3 Adoption of cassava improved modern varieties in the… 2020 Type… Laba… http…\n#> 4 First report of Sri Lankan cassava mosaic virus and … 2021… Type… Chit… http…\n#> 5 Surveillance and diagnostics dataset on Sri Lankan c… 2020 Type… Siri… http…\n#> 6 Socieconomic and soil conservation practices for cas… 2022… Type… Ibar… http…\n#> 7 The transformation and outcome of traditional cassav… 2020 Type… Dou,… http…\n#> 8 Cassava Annual Report 2019 2020 Type… Inte… http…\n#> 9 Cassava Annual Report 2020 2021… Type… Bece… http…\n#> 10 Adoption of cassava improved modern varieties in the… 2020 Type… Flor… http…\n\nglimpse(df)\n#> Rows: 10\n#> Columns: 5\n#> $ title <chr> \"Global Climate Regions for Cassava\", \"Performance of the CSM–MA…\n#> $ fecha <chr> \"2020-08-03\", \"2021-05-01\", \"2020\", \"2021-04-23\", \"2020\", \"2022-…\n#> $ Type <chr> \"Type:Dataset\", \"Type:Journal Article\", \"Type:Dataset\", \"Type:Jo…\n#> $ Autor <chr> \"Hyman, Glenn G.\", \"Phoncharoen, Phanupong; Banterng, Poramate; …\n#> $ link <chr> \"https://cgspace.cgiar.org/handle/10568/109500\", \"https://cgspac…\n\nCreated on 2022-12-03 with reprex v2.0.2\n", "In the code you posted, you're using html_element to extract the Autor and link fields, but html_element only selects the first matching element. 
You should use html_nodes instead, which will return all matching elements.\nHere's how you can use html_nodes to extract the Autor and link fields:\nAutor = page %>%\n html_elements(\".ds-artifact-item\") %>%\n html_nodes(\".description-info\") %>%\n html_attr(\"href\"),\n\nlink = page %>%\n html_elements(\".ds-artifact-item\") %>%\n html_nodes(\".description-info\") %>%\n html_attr(\"href\") %>%\n str_c(\"https://cgspace.cgiar.org\", .)\n\nNote that html_nodes returns a list of elements, so you'll need to use map_chr or another function to extract the href attributes from the list.\nFor example, you can use map_chr like this:\nAutor = page %>%\n html_elements(\".ds-artifact-item\") %>%\n html_nodes(\".description-info\") %>%\n map_chr(\"href\"),\n\nlink = page %>%\n html_elements(\".ds-artifact-item\") %>%\n html_nodes(\".description-info\") %>%\n map_chr(\"href\") %>%\n str_c(\"https://cgspace.cgiar.org\", .)\n\nThis should extract the href attributes for all matching elements and return them as a character vector. You can then use this vector to create the Autor and link columns in your data frame.\n" ]
[ 2, 0 ]
[]
[]
[ "rvest", "tidyverse", "web_scraping" ]
stackoverflow_0074660619_rvest_tidyverse_web_scraping.txt
Q: How to expand a build flag into an environment dict? I am trying to make my build configurable with a custom flag. It should be possible to specify the flag value when I build, e.g. bazel build //test:bundle --//:foo_flag=bar Here is my build target definition (using rules_js): load("@npm//:defs.bzl", "npm_link_all_packages") load("@npm//test:webpack-cli/package_json.bzl", "bin") npm_link_all_packages( name = "node_modules", ) bin.webpack_cli( name = "bundle", outs = [ "out/index.html", "out/main.js", ], args = [ "--config", "webpack.config.js", "-o", "out", ], srcs = [ "webpack.config.js", ":node_modules", ] + glob([ "src/**/*.js", ]), chdir = package_name(), env = { # ? }, visibility = [ "//visibility:public", ], ) What is unclear from the various Bazel examples is how to actually pass the flag to the rule. I am looking for something like this: # Invalid env = { "FOO": "$(value //:foo_flag)", }, How do I pass a flag in Bazel? A: To use the command-line flag, you want to use a config_setting() and then select() based on that. config_setting( name = "foo_flag_set", flag_values = { "//:foo_flag": "bar" }, ) ... bin.webpack_cli( name = "bundle", ... env = select({ ":foo_flag_set": {...}, "//conditions:default": None, }, ... ) This might be the way to go, if you have a distinct set of values for your flag. Otherwise you might want to think about a --action_env flag.
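For completeness, a minimal end-to-end sketch of defining the flag itself with bazel_skylib's string_flag (names mirror the question; the default value is illustrative):

load("@bazel_skylib//rules:common_settings.bzl", "string_flag")

string_flag(
    name = "foo_flag",
    build_setting_default = "",
)

config_setting(
    name = "foo_flag_set",
    flag_values = {":foo_flag": "bar"},
)

# then, on the webpack target:
# env = select({
#     ":foo_flag_set": {"FOO": "bar"},
#     "//conditions:default": {},
# }),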
How to expand a build flag into an environment dict?
I am trying to make my build configurable with a custom flag. It should be possible to specify the flag value when I build, e.g. bazel build //test:bundle --//:foo_flag=bar Here is my build target definition (using rules_js): load("@npm//:defs.bzl", "npm_link_all_packages") load("@npm//test:webpack-cli/package_json.bzl", "bin") npm_link_all_packages( name = "node_modules", ) bin.webpack_cli( name = "bundle", outs = [ "out/index.html", "out/main.js", ], args = [ "--config", "webpack.config.js", "-o", "out", ], srcs = [ "webpack.config.js", ":node_modules", ] + glob([ "src/**/*.js", ]), chdir = package_name(), env = { # ? }, visibility = [ "//visibility:public", ], ) What is unclear from the various Bazel examples is how to actually pass the flag to the rule. I am looking for something like this: # Invalid env = { "FOO": "$(value //:foo_flag)", }, How do I pass a flag in Bazel?
[ "To use the command-line flag, you want to use a config_setting() and then select() based on that.\nconfig_setting(\n name = \"foo_flag_set\",\n flag_values = { \"//:foo_flag\": \"bar\" },\n)\n\n...\n\nbin.webpack_cli(\n name = \"bundle\",\n ...\n env = select({\n \":foo_flag_set\": {...},\n \"//conditions:default\": None,\n },\n ...\n)\n\nThis might be the way to go, if you have a distinct set of values for your flag. Otherwise you might want to think about a --action_env flag.\n" ]
[ 0 ]
[]
[]
[ "bazel", "bazel_rules_js" ]
stackoverflow_0074659574_bazel_bazel_rules_js.txt
Q: escaping Java string to utf-8 I'm looking for java tool for converting regular String to utf-8 string. e.g. input: special-数据应用-text output: special-%u6570%u636E%u5E94%u7528-text (note the preceding "%u") A: Two things: The string you want as result is not UTF-8, at least the string you put as example is sort of UTF-16 encoded (java uses UTF-16 internally) An example of code that gives you the string that you want: String str = "special-数据应用-text"; StringBuilder builder = new StringBuilder(); for(char ch: str.toCharArray()) { if(ch >= 0x20 && ch <= 0x7E) { builder.append(ch); } else { builder.append(String.format("%%u%04X", (int)ch)); } } String result = builder.toString(); A: Can you try the following StringBuilder b = new StringBuilder(); for( char c : s.toCharArray() ){ if( ( 1024 <= c && c <= 1279 ) || ( 1280 <= c && c <= 1327) || ( 11744 <= c && c <= 11775) || ( 42560 <= c && c <= 42655) ){ b.append( "\\u" ).append( Integer.toHexString(c) ); }else{ b.append( c ); } } return b.toString(); Found here A: Try this String s= URLEncoder.encode(str, "UTF-8").replaceAll("%(..)%(..)", "%u$1$2"); A: Let me recommend you Unbescape [ http://www.unbescape.org ] Among other escaping operations (HTML, XML, etc.), it will allow you to escape your Java literal with: final String escaped = JavaEscape.escapeJava(text); Disclaimer, per StackOverflow rules: I'm Unbescape's author. A: For those who don't need java tool, but needs the online tool here is the tool https://itpro.cz/juniconv/
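The loop in the first answer can also be written with streams (Java 8+); a functionally equivalent sketch, iterating UTF-16 code units just like toCharArray:

import java.util.stream.Collectors;

String result = str.chars()
        .mapToObj(ch -> (ch >= 0x20 && ch <= 0x7E)
                ? String.valueOf((char) ch)       // printable ASCII kept as-is
                : String.format("%%u%04X", ch))   // everything else escaped as %uXXXX
        .collect(Collectors.joining());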
escaping Java string to utf-8
I'm looking for java tool for converting regular String to utf-8 string. e.g. input: special-数据应用-text output: special-%u6570%u636E%u5E94%u7528-text (note the preceding "%u")
[ "Two things:\n\nThe string you want as result is not UTF-8, at least the string you put as example is sort of UTF-16 encoded (java uses UTF-16 internally)\nAn example of code that gives you the string that you want:\nString str = \"special-数据应用-text\";\n\nStringBuilder builder = new StringBuilder();\nfor(char ch: str.toCharArray()) {\n if(ch >= 0x20 && ch <= 0x7E) {\n builder.append(ch);\n } else {\n builder.append(String.format(\"%%u%04X\", (int)ch));\n }\n}\n\nString result = builder.toString();\n\n\n", "Can you try the following\nStringBuilder b = new StringBuilder();\n\nfor( char c : s.toCharArray() ){\n if( ( 1024 <= c && c <= 1279 ) || ( 1280 <= c && c <= 1327) || ( 11744 <= c && c <= 11775) || ( 42560 <= c && c <= 42655) ){\n b.append( \"\\\\u\" ).append( Integer.toHexString(c) );\n }else{\n b.append( c );\n }\n}\n\nreturn b.toString();\n\nFound here\n", "Try this\nString s= URLEncoder.encode(str, \"UTF-8\").replaceAll(\"%(..)%(..)\", \"%u$1$2\");\n\n", "Let me recommend you Unbescape [ http://www.unbescape.org ]\nAmong other escaping operations (HTML, XML, etc.), it will allow you to escape your Java literal with:\nfinal String escaped = JavaEscape.escapeJava(text);\n\nDisclaimer, per StackOverflow rules: I'm Unbescape's author.\n", "For those who don't need java tool, but needs the online tool here is the tool https://itpro.cz/juniconv/\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "escaping", "java", "utf_8" ]
stackoverflow_0023471224_escaping_java_utf_8.txt
Q: Success on emulator, Force stop on real device I deployed an Android application which runs successfully on the emulator. But when I try to run it on a real device (my device is an Acer A200 tablet), my application always force-stops. The operating system requirement is not the problem. The error in logcat when I try to run on the real device is: 07-16 15:09:20.870: I/SqliteDatabaseCpp(780): sqlite returned: error code = 1, msg = no such table: kategori, db=/data/data/com.mroring.belajarperancis/databases/MY_DATABASE I think the application didn't install the database correctly. What should I do? Thanks in advance :) A: The table "kategori" does not exist in your database. Check your code to confirm whether the table is actually created. If it is, increase the database version number; that triggers the onUpgrade method, and the database will be created again.
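A minimal sketch of what the answer describes, assuming the app uses a SQLiteOpenHelper (the column list is illustrative, not from the original post):

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class MyDbHelper extends SQLiteOpenHelper {
    // Bumping DB_VERSION (e.g. 1 -> 2) forces onUpgrade on devices holding the old DB
    private static final int DB_VERSION = 2;

    public MyDbHelper(Context context) {
        super(context, "MY_DATABASE", null, DB_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE kategori (_id INTEGER PRIMARY KEY, name TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS kategori");
        onCreate(db);
    }
}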
Success on emulator, Force stop on real device
I deployed an application for Android which is successfully running on emulator. But, when I try to run on real device (My device is Acer A200, tablet), my application always forced stop. The requirement of the operating system is no problem. The error in logcat when I tried to run on real device is : 07-16 15:09:20.870: I/SqliteDatabaseCpp(780): sqlite returned: error code = 1, msg = no such table: kategori, db=/data/data/com.mroring.belajarperancis/databases/MY_DATABASE I think the application didn't install the database correctly. What should I do ? Thanks in advance :)
[ "table name \"kategori\" does not exist on your database, you should check the code if you created the table or not. if created, change the version number of the database, It will call the onUpgrade method and the database will create again.\n" ]
[ 0 ]
[]
[]
[ "android", "database", "emulation", "sqlite" ]
stackoverflow_0024775644_android_database_emulation_sqlite.txt
Q: PowerShell's prompt changes to just "PS" when I run "conda activate xx". What happened? When I activate my conda environment in PowerShell, the prompt changes to just "PS". Normally the prompt is "(base) PS C:\Users\xxx", but it's just "PS" now. What happened? I want to get it back. My conda version is "conda 22.11.0". I want it to be "(xx) PS C:\Users\xxx", not just "PS". A: I found a solution: updating PowerShell to 7 solves the problem. But that's strange. Why?
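If updating PowerShell is not an option, a common first step (a guess at the usual fix, not confirmed by the poster) is to re-initialize conda's hook so the prompt function is rewritten into the PowerShell profile:

conda init powershell
# restart PowerShell, then:
conda activate xx
# the prompt should again read: (xx) PS C:\Users\xxx>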
Powershell's Prompt change to just "PS" when I run "conda activate xx" in, What happend?
When I activate my conda environment in powershell, The Prompt change to just "PS". In normal, the Prompt is "(base) PS C:\Users\xxx", but It's just "PS" now. What happend? I want to get it back. My conda's version is "conda 22.11.0". I want it to be "(xx) PS C:\Users\xxx", not just "PS".
[ "I found a solution.\nI can update powershell to 7 to solve the problem. But that's so weird. Why?\n" ]
[ 0 ]
[]
[]
[ "conda", "powershell", "python" ]
stackoverflow_0074665678_conda_powershell_python.txt
Q: How to retrieve the current color from a Cairo Context in Vala? I need to retrieve the current color before applying a new color so that later on I can put it back. var old_color = ctx.something ()? ctx.set_source_rgb (new_color.r, new_color.g, new_color.b);// Apply new color // Do some drawing ctx.set_source_rgb (old_color.r, old_color.g, old_color.b);// Restore old color // Do some more drawing I cannot see anything closely related to something like get_source_rgb () in the valadoc manual. A: I cannot see anything closely related to something like get_source_rgb () in the valadoc manual. The function you are looking for is get_source(). set_source_rgb() is a shorthand for cairo_set_source(cr, cairo_pattern_create_rgba(r, g, b, 1.0)). That's why there is no get_source_rgb. (This example uses the C syntax since I do not know how this is mapped to Vala; the API docs should be able to answer this question. Also, I am not entirely sure about the memory management in my example above, but that shouldn't be a concern for Vala.) If you really want to get the current source RGB color, you need something like (again C, sorry): double r, g, b, a; // This assert fails if the pattern is not an RGBA pattern assert(cairo_pattern_get_rgba(cairo_get_source(cr), &r, &g, &b, &a) == CAIRO_STATUS_SUCCESS); Edit: Beware, the above is bad code. Never do something with side effects in assert, since it would fail if assert is disabled. I'll keep the example like this for simplicity. "so that later on I can replace it back." So, what you are actually looking for is cairo_save(cr) and cairo_restore(cr). Those are the C functions, so I guess you want ctx.save() and ctx.restore(). What they do: all of the drawing state (except the current path) is saved; restoring then undoes any changes that were done.
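Translating the save/restore suggestion back into Vala, a minimal sketch (new_color is assumed to be a struct with r/g/b fields, as in the question):

ctx.save ();                                 // snapshot the drawing state, including the source
ctx.set_source_rgb (new_color.r, new_color.g, new_color.b);
// ... do some drawing ...
ctx.restore ();                              // the previous source color is back
// ... do some more drawing with the old color ...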
How to retrieve current color from a Cario Context in Vala?
I need to retrieve the current color before applying a new color so that later on I can replace it back. var old_color = ctx.something ()? ctx.set_source_rgb (new_color.r, new_color.g, new_color.b);// Apply new color // Do some drawing ctx.set_source_rgb (old_color.r, old_color,g, old_color.b);// Restore old color // Do some more drawing I cannot see anything closely related to something like get_source_rgb () in the valadoc manual.
[ "\nI cannot see anything closely related to something like get_source_rgb () in the valadoc manual.\n\nThe function you are looking for is get_source(). set_source_rgb() is a shorthand for cairo_set_source(cr, cairo_pattern_create_rgba(r, g, b, 0)). That's why there is no get_source_rgb.\n(This example uses the C syntax since I do not know how this is mapped to Vala; the API docs should be able to answer this question. Also, I am not entirely sure about the memory management in my example above, but that shouldn't be a concern for Vala)\nIf you really want to get the current source RGB color, you need something like (again C, sorry):\ndouble r, g, b, a;\n// This assert fails if the pattern is not an RGBA pattern\nassert(cairo_pattern_get_rgba(cairo_get_source(cr), &r, &g, &b, &a) == CAIRO_STATUS_SUCCESS);\n\nEdit: Beware, the above is bad code. Never do something with side effects in assert since it would fail if assert is disabled. I'll keep the example like this for simplicity.\n\nso that later on I can replace it back.\n\nSo, what you are actually looking for is cairo_save(cr) and cairo_restore(cr). Those are the C functions, so I guess you want ctx.save() and ctx.restore().\nWhat they do: All of the drawing state (except the current path) is saved. Restoring then undoes any changes that were done.\n" ]
[ 0 ]
[]
[]
[ "cairo", "vala" ]
stackoverflow_0074651858_cairo_vala.txt
Q: React page doesn't display properly after updating or creating I have simple note app based on Django+Rest+React. It contains list of notes page (NotesListPage.js) and note pages (NotePage.js). List of notes page contains short previews, titles and links to note pages. The NotePage, in addition to the entire content, contains delete and update functionality. It works, but sometimes (~50%) to see updates on NotesListPage it needs hard refresh or step back to NotePage and come back to list of notes again. When I look at the sequence of execution of functions in the console, everything goes in the correct order. First, updating the note, then reloading the data. How can this be fixed? NotesListPage.js import ListItem from '../components/ListItem' import AddButton from '../components/AddButton' const NotesListPage = () => { let [notes, setNotes] = useState([]) let getNotes = async () => { let response = await fetch('/api/notes/') let data = await response.json() console.log(data) setNotes(data) } useEffect(() => { getNotes().then(() => {console.log('NotesList useEffect getNote')}) }, []) return ( <div className="notes"> <div className="notes-list"> {notes.map((note, index) => ( <ListItem key={index} note={note} /> ))} </div> <AddButton /> </div> ) } export default NotesListPage NotePage.js import { ReactComponent as ArrowLeft } from '../assets/arrow-left.svg' const NotePage = ({ match, history }) => { let noteId = match.params.id let [note, setNote] = useState(null) let getNote = async () => { if (noteId === 'new') return let response = await fetch(`/api/notes/${noteId}/`) let data = await response.json() setNote(data) } useEffect(() => { getNote().then(() => {console.log('NotePage useEffect getNote')}) }, [noteId]) let createNote = async () => { await fetch(`/api/notes/`, { method: "POST", headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(note) }) } let updateNote = async () => { await fetch(`/api/notes/${noteId}/`, { method: "PUT", headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(note) }) } let deleteNote = async () => { await fetch(`/api/notes/${noteId}/`, { method: 'DELETE', 'headers': { 'Content-Type': 'application/json' } }) history.push('/') } let handleSubmit = () => { console.log('NOTE:', note) if (noteId !== 'new' && note.body === '') { deleteNote().then(() => {console.log('deleteNote')}) } else if (noteId !== 'new') { updateNote().then(() => {console.log('updateNote')}) } else if (noteId === 'new' && note.body !== null) { createNote().then(() => {console.log('createNote')}) } history.push('/') } let handleChange = (value) => { setNote(note => ({ ...note, 'body': value })) console.log('Handle Change:', note) } return ( <div className="note" > <div className="note-header"> <h3> <ArrowLeft onClick={handleSubmit} /> </h3> {noteId !== 'new' ? 
( <button onClick={deleteNote}>Delete</button> ) : ( <button onClick={handleSubmit}>Done</button> )} </div> <textarea onChange={(e) => { handleChange(e.target.value) }} value={note?.body}></textarea> </div> ) } export default NotePage ListItem.js import React from 'react' import { Link } from 'react-router-dom' let getTime = (note) => { return new Date(note.updated).toLocaleDateString() } let getTitle = (note) => { let title = note.body.split('\n')[0] if (title.length > 45) { return title.slice(0, 45) } return title } let getContent = (note) => { let title = getTitle(note) let content = note.body.replaceAll('\n', ' ') content = content.replaceAll(title, '') if (content.length > 45) { return content.slice(0, 45) + '...' } else { return content } } const ListItem = ({ note }) => { return ( <Link to={`/note/${note.id}`}> <div className="notes-list-item" > <h3>{getTitle(note)}</h3> <p><span>{getTime(note)}</span>{getContent(note)}</p> </div> </Link> ) } export default ListItem App.js import { BrowserRouter as Router, Route } from "react-router-dom"; import './App.css'; import Header from './components/Header' import NotesListPage from './pages/NotesListPage' import NotePage from './pages/NotePage' function App() { return ( <Router> <div className="container dark"> <div className="app"> <Header title="Note List" /> <Route path="/" exact component={NotesListPage} /> <Route path="/note/:id" component={NotePage} /> </div> </div> </Router> ); } export default App; A: My first thought was that the router is trying to be smart and it does not refetch the notes on the navigation back. That could be solved by moving the let [notes, setNotes] = useState([]) to the app, and you could then avoid fetching the individual notes as well (on the premise they don't fetch some additional info). A: I added async and await to handleSubmit in NotePage.js and it looks like it fixed the problem. let handleSubmit = async () => { console.log('NOTE:', note) if (noteId !== 'new' && note.body === '') { deleteNote().then(() => {console.log('deleteNote')}) } else if (noteId !== 'new') { await updateNote().then(() => {console.log('updateNote')}) } else if (noteId === 'new' && note.body !== null) { await createNote().then(() => {console.log('createNote')}) } history.push('/') }
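Building on the second answer: the original bug was that history.push('/') ran before the PUT/POST had finished, so the list page sometimes fetched stale data. A slightly cleaner sketch of handleSubmit that awaits every request and only navigates afterwards (the try/catch is an added illustration, not part of the original code):

let handleSubmit = async () => {
  try {
    if (noteId !== 'new' && note.body === '') {
      await deleteNote()
    } else if (noteId !== 'new') {
      await updateNote()
    } else if (noteId === 'new' && note.body !== null) {
      await createNote()
    }
  } catch (error) {
    window.alert(error)
    return // stay on the page so the user can retry
  }
  history.push('/') // navigate only after the request has completed
}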
React page doesn't display properly after updating or creating
I have simple note app based on Django+Rest+React. It contains list of notes page (NotesListPage.js) and note pages (NotePage.js). List of notes page contains short previews, titles and links to note pages. The NotePage, in addition to the entire content, contains delete and update functionality. It works, but sometimes (~50%) to see updates on NotesListPage it needs hard refresh or step back to NotePage and come back to list of notes again. When I look at the sequence of execution of functions in the console, everything goes in the correct order. First, updating the note, then reloading the data. How can this be fixed? NotesListPage.js import ListItem from '../components/ListItem' import AddButton from '../components/AddButton' const NotesListPage = () => { let [notes, setNotes] = useState([]) let getNotes = async () => { let response = await fetch('/api/notes/') let data = await response.json() console.log(data) setNotes(data) } useEffect(() => { getNotes().then(() => {console.log('NotesList useEffect getNote')}) }, []) return ( <div className="notes"> <div className="notes-list"> {notes.map((note, index) => ( <ListItem key={index} note={note} /> ))} </div> <AddButton /> </div> ) } export default NotesListPage NotePage.js import { ReactComponent as ArrowLeft } from '../assets/arrow-left.svg' const NotePage = ({ match, history }) => { let noteId = match.params.id let [note, setNote] = useState(null) let getNote = async () => { if (noteId === 'new') return let response = await fetch(`/api/notes/${noteId}/`) let data = await response.json() setNote(data) } useEffect(() => { getNote().then(() => {console.log('NotePage useEffect getNote')}) }, [noteId]) let createNote = async () => { await fetch(`/api/notes/`, { method: "POST", headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(note) }) } let updateNote = async () => { await fetch(`/api/notes/${noteId}/`, { method: "PUT", headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(note) }) } let deleteNote = async () => { await fetch(`/api/notes/${noteId}/`, { method: 'DELETE', 'headers': { 'Content-Type': 'application/json' } }) history.push('/') } let handleSubmit = () => { console.log('NOTE:', note) if (noteId !== 'new' && note.body === '') { deleteNote().then(() => {console.log('deleteNote')}) } else if (noteId !== 'new') { updateNote().then(() => {console.log('updateNote')}) } else if (noteId === 'new' && note.body !== null) { createNote().then(() => {console.log('createNote')}) } history.push('/') } let handleChange = (value) => { setNote(note => ({ ...note, 'body': value })) console.log('Handle Change:', note) } return ( <div className="note" > <div className="note-header"> <h3> <ArrowLeft onClick={handleSubmit} /> </h3> {noteId !== 'new' ? ( <button onClick={deleteNote}>Delete</button> ) : ( <button onClick={handleSubmit}>Done</button> )} </div> <textarea onChange={(e) => { handleChange(e.target.value) }} value={note?.body}></textarea> </div> ) } export default NotePage ListItem.js import React from 'react' import { Link } from 'react-router-dom' let getTime = (note) => { return new Date(note.updated).toLocaleDateString() } let getTitle = (note) => { let title = note.body.split('\n')[0] if (title.length > 45) { return title.slice(0, 45) } return title } let getContent = (note) => { let title = getTitle(note) let content = note.body.replaceAll('\n', ' ') content = content.replaceAll(title, '') if (content.length > 45) { return content.slice(0, 45) + '...' 
} else { return content } } const ListItem = ({ note }) => { return ( <Link to={`/note/${note.id}`}> <div className="notes-list-item" > <h3>{getTitle(note)}</h3> <p><span>{getTime(note)}</span>{getContent(note)}</p> </div> </Link> ) } export default ListItem App.js import { BrowserRouter as Router, Route } from "react-router-dom"; import './App.css'; import Header from './components/Header' import NotesListPage from './pages/NotesListPage' import NotePage from './pages/NotePage' function App() { return ( <Router> <div className="container dark"> <div className="app"> <Header title="Note List" /> <Route path="/" exact component={NotesListPage} /> <Route path="/note/:id" component={NotePage} /> </div> </div> </Router> ); } export default App;
[ "My first thought was that the router is trying to be smart and it does not refetch the notes on the navigation back.\nThat could be solved by moving the let [notes, setNotes] = useState([]) to the app, and you could then avoid fetching the individual notes as well (on the premise they don't fetch some addition info)\n", "I added async and await to handleSubmit in NotePage.js and and it looks like it fixed the problem.\nlet handleSubmit = async () => {\n console.log('NOTE:', note)\n if (noteId !== 'new' && note.body === '') {\n deleteNote().then(() => {console.log('deleteNote')})\n } else if (noteId !== 'new') {\n await updateNote().then(() => {console.log('updateNote')})\n } else if (noteId === 'new' && note.body !== null) {\n await createNote().then(() => {console.log('createNote')})\n }\n history.push('/')\n }\n\n" ]
[ 0, 0 ]
[]
[]
[ "django_rest_framework", "javascript", "react_hooks", "reactjs" ]
stackoverflow_0074653998_django_rest_framework_javascript_react_hooks_reactjs.txt
Q: 'mongo' is not recognized as an internal or external command, operable program or batch file. Is that a version issue? I'm trying to start mongo in cmd: C:\Users\Vishal Bramhankar>mongo 'mongo' is not recognized as an internal or external command, operable program or batch file. What did I miss here? A: From MongoDB version 6.0 onwards, the MongoDB Shell must be installed separately. Add the path below to your environment variables: C:\Program Files\mongosh-1.6.1-win32-x64\bin Close your cmd, open a new one, and run the command below: C:\Users\Vishal Bramhankar>mongosh Run the following to list databases: test> show dbs MongoDB Shell installation: https://www.mongodb.com/try/download/shell
'mongo' is not recognized as an internal or external command, operable program or batch file. is that version issue?
I'm trying start mongo in cmd: C:\Users\Vishal Bramhankar>mongo 'mongo' is not recognized as an internal or external command, operable program or batch file. What I missed here?
[ "After MongoDB version 6.0 and onwards we need to install MongoDB Shell separately.\n\nConfigure below path in envarinment variables\nC:\\Program Files\\mongosh-1.6.1-win32-x64\\bin\n\nClose your cmd and open the new cmd again and run below command:\nC:\\Users\\Vishal Bramhankar>mongosh\n\nRun below one for knowing DBs:\ntest> shows dbs\n\n\n\nMongoDB shell installation:\n\nhttps://www.mongodb.com/try/download/shell\n\n" ]
[ 0 ]
[]
[]
[ "mongodb" ]
stackoverflow_0074665228_mongodb.txt
Q: AttributeError: module 'ipyparallel' has no attribute 'Cluster' I am going through the tutorial to learn ipyparallel and while doing so, I got the error: AttributeError: module 'ipyparallel' has no attribute 'Cluster' I uninstalled and reinstalled the package, but the error persisted; does anyone have any tips for solving this issue? My Code/ Issue: Thanks A: Make sure your ipyparallel version is greater than or equal to 7.0; the Cluster API was only added in 7.0. In [1]: import ipyparallel as ipp In [2]: ipp.__version__ Out[2]: '6.3.0' In [3]: hasattr(ipp, "Cluster") Out[3]: False Sometimes conda install ipyparallel may not install the newest version. Try using pip install ipyparallel. After version 7.0: In [1]: import ipyparallel as ipp In [2]: ipp.__version__ Out[2]: '8.4.1' In [3]: hasattr(ipp, "Cluster") Out[3]: True In [4]: cluster = ipp.Cluster(n=4)
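With a recent version installed, a minimal usage sketch (method names follow the ipyparallel 7+ docs; the 4-engine size is arbitrary):

import ipyparallel as ipp

cluster = ipp.Cluster(n=4)                 # define a 4-engine cluster
rc = cluster.start_and_connect_sync()      # start engines and return a Client
print(rc[:].apply_sync(lambda: "hello"))   # run a task on every engine
cluster.stop_cluster_sync()                # shut the engines down again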
AttributeError: module 'ipyparallel' has no attribute 'Cluster'
I am going through the tutorial to learn ipyparallel and while doing so, I got the error: AttributeError: module 'ipyparallel' has no attribute 'Cluster' I uninstalled and reinstalled the package but the error persisted, does anyone have any tips for solving this issue? My Code/ Issue: Thanks
[ "Make sure your ipyparallel version is greater or equal to 7.0.\nIn [1]: import ipyparallel as ipp\n\nIn [2]: ipp.__version__\nOut[2]: '6.3.0'\n\nIn [3]: hasattr(ipp, \"Cluster\")\nOut[3]: False\n\nSometimes conda install ipyparallel may not install the newest version. Try using pip install ipyparallel. After version 7.0:\nIn [1]: import ipyparallel as ipp\n\nIn [2]: ipp.__version__\nOut[2]: '8.4.1'\n\nIn [3]: hasattr(ipp, \"Cluster\")\nOut[3]: True\n\nIn [4]: cluster = ipp.Cluster(n=4)\n\n" ]
[ 0 ]
[]
[]
[ "ipython", "ipython_parallel", "python" ]
stackoverflow_0072331252_ipython_ipython_parallel_python.txt
Q: No Module Named PyQt5 and PySide2 Error While Trying to Open rqt I've been trying to use ROS (Robot Operating System) using this page: Tutorial page The link walks you through turtlesim, and that seemed to work fine for me without any errors. In the 4th step (install rqt), I get this error: `INPUT: RQT OUTPUT: ImportError for 'pyqt': No module named 'PyQt5' ModuleNotFoundError: No module named 'PyQt5' ModuleNotFoundError: No module named 'PySide2'` There were more lines of error, but they seemed irrelevant to me because they only list file names. I'm using Python version 3.8.3 and Qt version 5.12.12. I downloaded Qt manually from their website; I didn't use: pip install PyQt5 And I don't exactly remember now, but someone said something about local Python files, and I had none of them: screenshot of C:\Users\Boran\AppData\Local\Programs There are supposed to be Python files in here. Also, there is no problem with Python itself; it works fine, and I used talker and listener (Python-coded ROS applications) and they worked fine. A: I solved the problem by installing graphviz. It appears that to run rqt I had to install graphviz.
No Module Named PyQt5 and Pyside2 Error While Trying to open rqt
I've been trying to use the ROS(Robot Operating System) using this page: Tutorial page In the link it makes you use turtle sim it seemed to work fine with me without any errors. In the 4th step(install rqt),i get this error: `INPUT: RQT OUTPUT: ImportError for 'pyqt': No module named 'PyQt5' ModuleNotFoundError: No module named 'PyQt5' ModuleNotFoundError: No module named 'PySide2'` There was more lines of error but it seemed irrelevant to me because it only says file names. I'm using python version 3.8.3 and qt version 5.12.12. I downloaded the qt manually in their website.Didn't use: pip install PyQt5 And i don't exactly remember now but someone said something about local files of pyhton but i had none of them: screenshot of C:\Users\Boran\AppData\Local\Programs It supposed to be Pyhton files in here. Also there is no problem about python it works fine and i used talker and listener(pyhton coded applications of ROS.) and they worked fine.
[ "I solved the problem by installing graphviz.It appears that to run rqt i had to install graphviz.\n" ]
[ 0 ]
[]
[]
[ "ros", "rqt" ]
stackoverflow_0074665115_ros_rqt.txt
Q: Can you explain the updateForm() function from MongoDB official MERN stack tutorial? So, I am following the MERN stack tutorial from the MongoDB official website. In the create.js file we create the Create component. Follows the code sample: import React, { useState } from 'react'; import { useNavigate } from 'react-router'; export default function Create() { const [form, setForm] = useState({ name: '', position: '', level: '', }); const navigate = useNavigate(); // These methods will update the state properties. function updateForm(value) { return setForm((prev) => { return { ...prev, ...value }; }); } // This function will handle the submission. async function onSubmit(e) { e.preventDefault(); // When a post request is sent to the create url, we'll add a new record to the database. const newPerson = { ...form }; await fetch('http://localhost:5005/record/add', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify(form), }).catch((error) => { window.alert(error); return; }); setForm({ name: '', position: '', level: '' }); navigate('/'); } // This following section will display the form that takes the input from the user. return ( <div> <h3>Create New Record</h3> <form onSubmit={onSubmit}> <div className="form-group"> <label htmlFor="name">Name</label> <input type="text" className="form-control" id="name" value={form.name} onChange={(e) => updateForm({ name: e.target.value })} /> </div> <div className="form-group"> <label htmlFor="position">Position</label> <input type="text" className="form-control" id="position" value={form.position} onChange={(e) => updateForm({ position: e.target.value })} /> </div> <div className="form-group"> <div className="form-check form-check-inline"> <input className="form-check-input" type="radio" name="positionOptions" id="positionIntern" value="Intern" checked={form.level === 'Intern'} onChange={(e) => updateForm({ level: e.target.value })} /> <label htmlFor="positionIntern" className="form-check-label"> Intern </label> </div> <div className="form-check form-check-inline"> <input className="form-check-input" type="radio" name="positionOptions" id="positionJunior" value="Junior" checked={form.level === 'Junior'} onChange={(e) => updateForm({ level: e.target.value })} /> <label htmlFor="positionJunior" className="form-check-label"> Junior </label> </div> <div className="form-check form-check-inline"> <input className="form-check-input" type="radio" name="positionOptions" id="positionSenior" value="Senior" checked={form.level === 'Senior'} onChange={(e) => updateForm({ level: e.target.value })} /> <label htmlFor="positionSenior" className="form-check-label"> Senior </label> </div> </div> <div className="form-group"> <input type="submit" value="Create person" className="btn btn-primary" /> </div> </form> </div> ); } I have difficulty to decode what updateForm() does exactly function updateForm(value) { return setForm((prev) => { return { ...prev, ...value }; }); } My questions are: What is the value of prev parameter? I don't understand how this works as we don't place any value in this parameter. How setForm() manipulates { ...prev, ...value }. Why couldn't we use setForm({ value }) instead? A: The updateForm function is used to update the form state variable in the Create component. This form state variable is an object that contains the values of the input fields in the form (i.e. the name, position, and level of the person being added). 
The updateForm function takes in a value, which is an object containing the updated values for one or more of the properties in the form state variable. The updateForm function then creates a new object that is a copy of the existing form state variable, and updates the copied object with the values passed in to the function. Here is an example of how the updateForm function might be used: // Update the form state variable with a new name value updateForm({ name: 'John Doe' }); // The form state variable now looks like this: { name: 'John Doe', position: '', level: '', } The updateForm function is used in the onChange event handlers for each of the input fields in the form. When the user updates the value of an input field, the updateForm function is called with the updated value for that field, which updates the form state variable with the new value. For example, when the user updates the name field in the form, the updateForm function is called with the updated name value, and the form state variable is updated with the new name value: // User updates the name field in the form to 'John Doe' onChange={(e) => updateForm({ name: e.target.value })} // The updateForm function is called with the updated name value updateForm({ name: 'John Doe' }); // The form state variable now looks like this: { name: 'John Doe', position: '', level: '', } The updateForm function is used to keep the form state variable up-to-date with the latest values entered by the user in the form. When the user submits the form, the values in the form state variable are used to create a new record in the database. A: As for your first question - you can use setState in 2 ways: pass value of the new state. pass a callback with argument of the current state, that returns the new state value. Your second question has nothing to do with react but with JavaScript. it's just a way to merge 2 object, but the duplicate keys will be overridden by the latest value. The ... is the spread operator . let obj1 = { a: 2, b: 4, c: 6 }; let obj2 = { c: 8, d: 0 }; console.log({ ...obj1, ...obj2 });
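To make the asker's second question concrete, here is a minimal standalone sketch (plain JavaScript, values illustrative) of why setForm({ value }) would not work while the spread-and-merge form does:
const prev = { name: '', position: '', level: '' };
const value = { name: 'John Doe' };

// { value } is shorthand for { value: value }: it nests the update under a
// key literally named "value" and drops the other form fields entirely.
console.log({ value });             // { value: { name: 'John Doe' } }

// Spreading merges the old state with the update; for duplicate keys the
// rightmost object wins, so only "name" changes.
console.log({ ...prev, ...value }); // { name: 'John Doe', position: '', level: '' }
Passing a callback (prev) => ... to setForm additionally guarantees that prev is the latest state even when React batches several updates together.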
Can you explain the updateForm() function from MongoDB official MERN stack tutorial?
So, I am following the MERN stack tutorial from the MongoDB official website. In the create.js file we create the Create component. Follows the code sample: import React, { useState } from 'react'; import { useNavigate } from 'react-router'; export default function Create() { const [form, setForm] = useState({ name: '', position: '', level: '', }); const navigate = useNavigate(); // These methods will update the state properties. function updateForm(value) { return setForm((prev) => { return { ...prev, ...value }; }); } // This function will handle the submission. async function onSubmit(e) { e.preventDefault(); // When a post request is sent to the create url, we'll add a new record to the database. const newPerson = { ...form }; await fetch('http://localhost:5005/record/add', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify(form), }).catch((error) => { window.alert(error); return; }); setForm({ name: '', position: '', level: '' }); navigate('/'); } // This following section will display the form that takes the input from the user. return ( <div> <h3>Create New Record</h3> <form onSubmit={onSubmit}> <div className="form-group"> <label htmlFor="name">Name</label> <input type="text" className="form-control" id="name" value={form.name} onChange={(e) => updateForm({ name: e.target.value })} /> </div> <div className="form-group"> <label htmlFor="position">Position</label> <input type="text" className="form-control" id="position" value={form.position} onChange={(e) => updateForm({ position: e.target.value })} /> </div> <div className="form-group"> <div className="form-check form-check-inline"> <input className="form-check-input" type="radio" name="positionOptions" id="positionIntern" value="Intern" checked={form.level === 'Intern'} onChange={(e) => updateForm({ level: e.target.value })} /> <label htmlFor="positionIntern" className="form-check-label"> Intern </label> </div> <div className="form-check form-check-inline"> <input className="form-check-input" type="radio" name="positionOptions" id="positionJunior" value="Junior" checked={form.level === 'Junior'} onChange={(e) => updateForm({ level: e.target.value })} /> <label htmlFor="positionJunior" className="form-check-label"> Junior </label> </div> <div className="form-check form-check-inline"> <input className="form-check-input" type="radio" name="positionOptions" id="positionSenior" value="Senior" checked={form.level === 'Senior'} onChange={(e) => updateForm({ level: e.target.value })} /> <label htmlFor="positionSenior" className="form-check-label"> Senior </label> </div> </div> <div className="form-group"> <input type="submit" value="Create person" className="btn btn-primary" /> </div> </form> </div> ); } I have difficulty to decode what updateForm() does exactly function updateForm(value) { return setForm((prev) => { return { ...prev, ...value }; }); } My questions are: What is the value of prev parameter? I don't understand how this works as we don't place any value in this parameter. How setForm() manipulates { ...prev, ...value }. Why couldn't we use setForm({ value }) instead?
[ "The updateForm function is used to update the form state variable in the Create component. This form state variable is an object that contains the values of the input fields in the form (i.e. the name, position, and level of the person being added).\nThe updateForm function takes in a value, which is an object containing the updated values for one or more of the properties in the form state variable. The updateForm function then creates a new object that is a copy of the existing form state variable, and updates the copied object with the values passed in to the function.\nHere is an example of how the updateForm function might be used:\n// Update the form state variable with a new name value\nupdateForm({ name: 'John Doe' });\n\n// The form state variable now looks like this:\n{\n name: 'John Doe',\n position: '',\n level: '',\n}\n\nThe updateForm function is used in the onChange event handlers for each of the input fields in the form. When the user updates the value of an input field, the updateForm function is called with the updated value for that field, which updates the form state variable with the new value.\nFor example, when the user updates the name field in the form, the updateForm function is called with the updated name value, and the form state variable is updated with the new name value:\n// User updates the name field in the form to 'John Doe'\nonChange={(e) => updateForm({ name: e.target.value })}\n\n// The updateForm function is called with the updated name value\nupdateForm({ name: 'John Doe' });\n\n// The form state variable now looks like this:\n{\n name: 'John Doe',\n position: '',\n level: '',\n}\n\nThe updateForm function is used to keep the form state variable up-to-date with the latest values entered by the user in the form. When the user submits the form, the values in the form state variable are used to create a new record in the database.\n", "As for your first question - you can use setState in 2 ways:\n\npass value of the new state.\npass a callback with argument of the current state, that returns the new state value.\n\nYour second question has nothing to do with react but with JavaScript.\nit's just a way to merge 2 object, but the duplicate keys will be overridden by the latest value.\nThe ... is the spread operator .\n\n\nlet obj1 = {\n a: 2,\n b: 4,\n c: 6\n};\nlet obj2 = {\n c: 8,\n d: 0\n};\n\nconsole.log({ ...obj1,\n ...obj2\n});\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "reactjs" ]
stackoverflow_0074665502_javascript_reactjs.txt
Q: Shell script : Want to show a few details in stdout & All details in log file Say, this is my shell script echo "Show this on stdout and logfile" wget -O ....... # "Only in logfile" echo "Show this on stdout and logfile" cp file1.txt # "Only in logfile" So, I want to store the whole script output in a log file (say "complete-output.log") And on my stdout --- I want to show only some cherry-picked items (Ex. some echo messages) I used named pipes, # Set up a named pipe for logging npipe=logpipe mknod $npipe p # Log all output to a log for error checking sudo tee <$npipe /var/log/complete-output.log & exec 1>$npipe 2>&1 # Deleting named pipe on script EXIT trap 'rm -f $npipe' EXIT So, I am getting complete output on both (In file, as well as stdout) But, I do not want stdout to be so verbose.. only want to show a few things there ! What is the correct way to do so ? Thanks in advance ! A: Can this achieve what you wanted ? #!/usr/bin/env bash echo2(){ echo "$@" echo "$@" > /dev/tty } exec > /var/log/complete-output.log 2>&1 echo2 "Show this on stdout and logfile" echo wget -O ....... # "Only in logfile" echo2 "Show this on stdout and logfile" echo cp file1.txt # "Only in logfile" Run it with $ bash test.sh Show this on stdout and logfile Show this on stdout and logfile $ cat /var/log/complete-output.log Show this on stdout and logfile wget -O ....... Show this on stdout and logfile cp file1.txt
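A related sketch, if named pipes feel heavyweight: save the original stdout on a spare file descriptor, redirect everything else to the log, and route cherry-picked lines through a small helper. The log path is illustrative; adjust it and any permissions for your system.
#!/usr/bin/env bash
exec 3>&1                          # save the terminal's stdout on fd 3
exec > complete-output.log 2>&1    # from here on, everything goes to the log

say() {                            # print to both the log and the terminal
    echo "$@"                      # goes to the log via the redirection above
    echo "$@" >&3                  # goes to the saved stdout
}

say "Show this on stdout and logfile"
echo "wget output would go here"   # "Only in logfile"
say "Show this on stdout and logfile"
Unlike writing to /dev/tty, this still behaves sensibly when the script's own output is piped somewhere, and it needs no named pipe or cleanup trap.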
Shell script : Want to show a few details in stdout & All details in log file
Say, this is my shell script echo "Show this on stdout and logfile" wget -O ....... # "Only in logfile" echo "Show this on stdout and logfile" cp file1.txt # "Only in logfile" So, I want to store the whole script output in a log file (say "complete-output.log") And on my stdout --- I want to show only some cherry-picked items (Ex. some echo messages) I used named pipes, # Set up a named pipe for logging npipe=logpipe mknod $npipe p # Log all output to a log for error checking sudo tee <$npipe /var/log/complete-output.log & exec 1>$npipe 2>&1 # Deleting named pipe on script EXIT trap 'rm -f $npipe' EXIT So, I am getting complete output on both (In file, as well as stdout) But, I do not want stdout to be so verbose.. only want to show a few things there ! What is the correct way to do so ? Thanks in advance !
[ "Can this achieve what you wanted ?\n#!/usr/bin/env bash\n\necho2(){\n echo \"$@\"\n echo \"$@\" > /dev/tty\n}\n\nexec > /var/log/complete-output.log 2>&1\n\necho2 \"Show this on stdout and logfile\"\necho wget -O ....... # \"Only in logfile\"\necho2 \"Show this on stdout and logfile\"\necho cp file1.txt # \"Only in logfile\"\n\nRun it with\n$ bash test.sh\nShow this on stdout and logfile\nShow this on stdout and logfile\n$ cat /var/log/complete-output.log\nShow this on stdout and logfile\nwget -O .......\nShow this on stdout and logfile\ncp file1.txt\n\n" ]
[ 2 ]
[]
[]
[ "bash", "logging", "named_pipes", "sh", "shell" ]
stackoverflow_0074664681_bash_logging_named_pipes_sh_shell.txt
Q: Cython --embed flag in setup.py I am starting to compile my Python 3 project with Cython, and I would like to know if it's possible to reduce my current compile time workflow to a single instruction. This is my setup.py as of now: from distutils.core import setup from distutils.extension import Extension from Cython.Build import cythonize extensions = [ Extension("v", ["version.py"]), Extension("*", ["lib/*.py"]) ] setup( name = "MyFirst App", ext_modules = cythonize(extensions), ) And this is what I run from shell to obtain my executables: python3 setup.py build_ext --inplace cython3 --embed -o main.c main.py gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl This whole process works just fine, I'd like to know if there is a way to also embed the last two instruction in the setup.py script. Thank you A: Start off with checking out the docs for the utility you're using. If there are complicated arguments, there is probably a config file. This should tidy up your first command: # setup.cfg [build_ext] inplace=1 I don't see anything in the docs about a post-build step, and I wouldn't really expect this process to execute shell commands afterwards. build_ext is for building python. make is very available and usual for building C binaries. Add a Makefile to your project. If you have gcc installed, you likely have make already: # Makefile (lines need to start with tab) compile: python3 setup.py build_ext --inplace cython3 --embed -o main.c main.py gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl Now you can just type make or make compile to get the desired effect. A: Yes, it is possible to reduce your compile time workflow to a single instruction. The setup function in the distutils module provides a script_args argument that allows you to specify arguments to be passed to the build script. You can use this argument to specify the --inplace and --embed flags for Cython, and the -o option for gcc, like this: from distutils.core import setup from distutils.extension import Extension from Cython.Build import cythonize extensions = [ Extension("v", ["version.py"]), Extension("*", ["lib/*.py"]) ] setup( name = "MyFirst App", ext_modules = cythonize(extensions), script_args = ["build_ext", "--inplace", "--embed", "-o", "main.c"] ) You can then compile your project by running python3 setup.py from the shell. This will run the build script with the specified arguments, and you will get your executables. Note that you will still need to run gcc separately to compile the C code generated by Cython into an executable. You can do this by running gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl from the shell after running python3 setup.py
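As an alternative sketch to the Makefile (untested; the cython3 binary name, include path, and linker flags are copied verbatim from the question and may need adjusting for your system), the three steps can also be chained from a small Python driver so a single command runs the whole pipeline:
#!/usr/bin/env python3
# build.py -- run the whole pipeline with: python3 build.py
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))      # echo each step before running it
    subprocess.check_call(cmd)     # stop the pipeline on the first failure

run([sys.executable, "setup.py", "build_ext", "--inplace"])
run(["cython3", "--embed", "-o", "main.c", "main.py"])
run(["gcc", "-Os", "-I", "/usr/include/python3.5m", "-o", "main", "main.c",
     "-lpython3.5m", "-lpthread", "-lm", "-lutil", "-ldl"])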
Cython --embed flag in setup.py
I am starting to compile my Python 3 project with Cython, and I would like to know if it's possible to reduce my current compile time workflow to a single instruction. This is my setup.py as of now: from distutils.core import setup from distutils.extension import Extension from Cython.Build import cythonize extensions = [ Extension("v", ["version.py"]), Extension("*", ["lib/*.py"]) ] setup( name = "MyFirst App", ext_modules = cythonize(extensions), ) And this is what I run from shell to obtain my executables: python3 setup.py build_ext --inplace cython3 --embed -o main.c main.py gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl This whole process works just fine, I'd like to know if there is a way to also embed the last two instruction in the setup.py script. Thank you
[ "Start off with checking out the docs for the utility you're using. If there are complicated arguments, there is probably a config file.\nThis should tidy up your first command:\n# setup.cfg\n[build_ext]\ninplace=1\n\nI don't see anything in the docs about a post-build step, and I wouldn't really expect this process to execute shell commands afterwards. build_ext is for building python. make is very available and usual for building C binaries.\nAdd a Makefile to your project. If you have gccinstalled, you likely have make already:\n# Makefile (lines need to start with tab)\n\ncompile:\n python3 setup.py build_ext --inplace\n cython3 --embed -o main.c main.py\n gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl\n\n\nNow you can just type make or make compile to get the desired affect.\n", "Yes, it is possible to reduce your compile time workflow to a single instruction. The setup function in the distutils module provides a script_args argument that allows you to specify arguments to be passed to the build script.\nYou can use this argument to specify the --inplace and --embed flags for Cython, and the -o option for gcc, like this:\nfrom distutils.core import setup\nfrom distutils.extension import Extension\nfrom Cython.Build import cythonize\n\nextensions = [\nExtension(\"v\", [\"version.py\"]),\nExtension(\"\", [\"lib/.py\"])\n]\n\nsetup(\nname = \"MyFirst App\",\next_modules = cythonize(extensions),\nscript_args = [\"build_ext\", \"--inplace\", \"--embed\", \"-o\", \"main.c\"]\n)\n\nYou can then compile your project by running python3 setup.py from the shell. This will run the build script with the specified arguments, and you will get your executables.\nNote that you will still need to run gcc separately to compile the C code generated by Cython into an executable. You can do this by running gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl from the shell after running python3 setup.py\n" ]
[ 0, 0 ]
[]
[]
[ "cython", "python", "python_3.5" ]
stackoverflow_0046824143_cython_python_python_3.5.txt
Q: when installing pyaudio, pip cannot find portaudio.h in /usr/local/include I'm using mac osx 10.10 As the PyAudio Homepage said, I install the PyAudio using brew install portaudio pip install pyaudio the installation of portaudio seems successful, I can find headers and libs in /usr/local/include and /usr/local/lib but when I try to install pyaudio, it gives me an error that src/_portaudiomodule.c:29:10: fatal error: 'portaudio.h' file not found #include "portaudio.h" ^ 1 error generated. error: command 'cc' failed with exit status 1 actually it is in /usr/local/include why can't it find the file? some answers to similar questions are not working for me(like using virtualenv, or compile it manually), and I want to find a simple way to solve this. A: Since pyAudio has portAudio as a dependency, you first have to install portaudio. brew install portaudio Then try: pip install pyAudio. If the problem persists after installing portAudio, you can specify the directory path where the compiler will be able to find the source programs (e.g: portaudio.h). Since the headers should be in the /usr/local/include directory: pip install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio A: On Ubuntu builds: sudo apt-get install python-pyaudio For Python3: sudo apt-get install python3-pyaudio A: You have to install portaudio first then link that file. Only then you can find that header file (i.e, portaudio.h). To install portaudio in mac by using HomeBrew program use following commands. brew install portaudio brew link portaudio pip install pyaudio sudo is not needed if you're admin. We should refrain using sudo as it messes up lots of permissions. A: First, you can use Homebrew to install portaudio. brew install portaudio Then try to find the portaudio path: sudo find / -name "portaudio.h" In my case it is at /usr/local/Cellar/portaudio/19.6.0/include . Run the command below to install pyaudio pip install --global-option='build_ext' --global-option='-I/usr/local/Cellar/portaudio/19.6.0/include' --global-option='-L/usr/local/Cellar/portaudio/19.6.0/lib' pyaudio A: On Raspbian: sudo apt-get install python-pyaudio A: on Centos: yum install -y portaudio portaudio-devel && pip install pyaudio A: Just for the record for folks using MacPorts and not Homebrew: $ [sudo] port install portaudio $ pip install pyaudio --global-option="build_ext" --global-option="-I/opt/local/include" --global-option="-L/opt/local/lib" A: I needed to do the following to install PortAudio on Debian sudo apt install portaudio19-dev I also apt install'd python3-portaudio before that, although it didn't work. I'm not sure if that contributed as well. A: Adding a bit of robustness (in case of a non-default homebrew dir) to the snippet from @fukudama, brew install portaudio pip install --global-option='build_ext' --global-option="-I$(brew --prefix)/include" --global-option="-L$(brew --prefix)/lib" pyaudio A: For me on 10.10.5 the paths were under /opt/local. I had to add /opt/local/bin to my /etc/paths file. 
And the command line that worked was sudo pip install --global-option='build_ext' --global-option='-I/opt/local/include' --global-option='-L/opt/local/lib' pyaudio A: If you are using anaconda/miniconda to manage your python environments then conda install pyaudio installs portaudio at the same time as pyaudio The following NEW packages will be INSTALLED: portaudio pkgs/main/osx-64::portaudio-19.6.0-h647c56a_4 pyaudio pkgs/main/osx-64::pyaudio-0.2.11-py37h1de35cc_2 A: On Termux (this is what worked for me): pkg install python bash -c "$(curl -fsSL https://its-pointless.github.io/setup-pointless-repo.sh)" pkg install portaudio pip install pyaudio Source: pyaudio installing #6235 A: This is the tested answer for the MacBook Pro M2 chip: first find the location of the portaudio.h file by sudo find / -name "portaudio.h" then, once you find the location, copy it and use it in this command. LDFLAGS="-L/{opt/homebrew/Cellar/portaudio/19.7.0/}lib" CFLAGS="-I/{opt/homebrew/Cellar/portaudio/19.7.0}/include" pip3 install pyaudio Here, replace the location in { } with your file location; hopefully this works. I have tried the above solutions and this one worked for me. A: For an M1 Mac, this worked for me: LDFLAGS="-L/opt/homebrew/Cellar/portaudio/19.7.0/lib" CFLAGS="-I/opt/homebrew/Cellar/portaudio/19.7.0/include" pip3 install pyaudio Res: Created wheel for pyaudio: filename=PyAudio-0.2.12-cp310-cp310-macosx_11_0_arm64.whl size=24170 sha256=c74eb581e6bca2400f681f68d33654002722969f1a455ffce87e4e5da05471d8 Stored in directory: /private/var/folders/m_/kzyr4q_11cl35ngrj77k28f00000gn/T/pip-ephem-wheel-cache-ql1x8ums/wheels/93/08/0b/b915ab1895927641737175e5bc7b6111e8ed0c26daabeecba0 Successfully built pyaudio Installing collected packages: pyaudio Successfully installed pyaudio-0.2.12 Note: avoid using find /, as it is very slow; use brew info portaudio instead.
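Whichever install route you take, a quick sanity check is to import the module and list the devices PortAudio can see (a small sketch, assuming the install above succeeded):
import pyaudio                      # an ImportError here means the build never succeeded

pa = pyaudio.PyAudio()              # initializes PortAudio itself
try:
    print(pyaudio.get_portaudio_version_text())
    for i in range(pa.get_device_count()):
        print(i, pa.get_device_info_by_index(i)["name"])
finally:
    pa.terminate()                  # release PortAudio resources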
when installing pyaudio, pip cannot find portaudio.h in /usr/local/include
I'm using mac osx 10.10 As the PyAudio Homepage said, I install the PyAudio using brew install portaudio pip install pyaudio the installation of portaudio seems successful, I can find headers and libs in /usr/local/include and /usr/local/lib but when I try to install pyaudio, it gives me an error that src/_portaudiomodule.c:29:10: fatal error: 'portaudio.h' file not found #include "portaudio.h" ^ 1 error generated. error: command 'cc' failed with exit status 1 actually it is in /usr/local/include why can't it find the file? some answers to similar questions are not working for me(like using virtualenv, or compile it manually), and I want to find a simple way to solve this.
[ "Since pyAudio has portAudio as a dependency, you first have to install portaudio.\nbrew install portaudio\n\nThen try: pip install pyAudio. If the problem persists after installing portAudio, you can specify the directory path where the compiler will be able to find the source programs (e.g: portaudio.h). Since the headers should be in the /usr/local/include directory:\npip install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio\n\n", "On Ubuntu builds:\nsudo apt-get install python-pyaudio\n\nFor Python3:\nsudo apt-get install python3-pyaudio\n\n", "You have to install portaudio first then link that file. Only then you can find that header file (i.e, portaudio.h). To install portaudio in mac by using HomeBrew program use following commands.\nbrew install portaudio\nbrew link portaudio\npip install pyaudio\n\nsudo is not needed if you're admin. We should refrain using sudo as it messes up lots of permissions.\n", "First, you can use Homebrew to install portaudio.\nbrew install portaudio\n\nThen try to find the portaudio path:\nsudo find / -name \"portaudio.h\"\n\nIn my case it is at /usr/local/Cellar/portaudio/19.6.0/include .\nRun the command below to install pyaudio\npip install --global-option='build_ext' --global-option='-I/usr/local/Cellar/portaudio/19.6.0/include' --global-option='-L/usr/local/Cellar/portaudio/19.6.0/lib' pyaudio\n\n", "On Raspbian:\nsudo apt-get install python-pyaudio\n\n", "on Centos:\nyum install -y portaudio portaudio-devel && pip install pyaudio\n\n", "Just for the record for folks using MacPorts and not Homebrew:\n$ [sudo] port install portaudio\n$ pip install pyaudio --global-option=\"build_ext\" --global-option=\"-I/opt/local/include\" --global-option=\"-L/opt/local/lib\"\n\n", "I needed to do the following to install PortAudio on Debian\nsudo apt install portaudio19-dev\n\nI also apt install'd python3-portaudio before that, although it didn't work. I'm not sure if that contributed as well.\n", "Adding a bit of robustness (in case of a non-default homebrew dir) to the snippet from @fukudama,\nbrew install portaudio\npip install --global-option='build_ext' --global-option=\"-I$(brew --prefix)/include\" --global-option=\"-L$(brew --prefix)/lib\" pyaudio\n\n", "For me on 10.10.5 the paths were under /opt/local. I had to add /opt/local/bin to my /etc/paths file. And the command line that worked was\nsudo pip install --global-option='build_ext' --global-option='-I/opt/local/include' --global-option='-L/opt/local/lib' pyaudio\n\n", "If you are using anaconda/miniconda to manage your python environments then\nconda install pyaudio\ninstalls portaudio at the same time as pyaudio\nThe following NEW packages will be INSTALLED:\n\n portaudio pkgs/main/osx-64::portaudio-19.6.0-h647c56a_4\n pyaudio pkgs/main/osx-64::pyaudio-0.2.11-py37h1de35cc_2\n\n", "On Termux (this is what worked for me):\n\npkg install python\nbash -c \"$(curl -fsSL https://its-pointless.github.io/setup-pointless-repo.sh)\"\npkg install portaudio\npip install pyaudio\n\nSource: pyaudio installing #6235\n", "this is the tested answer for MacBook Pro m2 chip:\nfirst find the location of the portaudio.h file by\nsudo find / -name \"portaudio.h\"\n\nthen, once you find the location, copy it and use it in this command.\nLDFLAGS=\"-L/{opt/homebrew/Cellar/portaudio/19.7.0/}lib\" CFLAGS=\"-I/{opt/homebrew/Cellar/portaudio/19.7.0}/include\" pip3 install pyaudio\n\nHere replace the location from { } into you file location hopefully this works. 
I have tried above solutions and this one worked for me.\n", "For M1 mac, this is worked for me:\nLDFLAGS=\"-L/opt/homebrew/Cellar/portaudio/19.7.0/lib\" CFLAGS=\"-I/opt/homebrew/Cellar/portaudio/19.7.0/include\" pip3 install pyaudio\n\nRes:\n Created wheel for pyaudio: filename=PyAudio-0.2.12-cp310-cp310-macosx_11_0_arm64.whl size=24170 sha256=c74eb581e6bca2400f681f68d33654002722969f1a455ffce87e4e5da05471d8\n Stored in directory: /private/var/folders/m_/kzyr4q_11cl35ngrj77k28f00000gn/T/pip-ephem-wheel-cache-ql1x8ums/wheels/93/08/0b/b915ab1895927641737175e5bc7b6111e8ed0c26daabeecba0\nSuccessfully built pyaudio\nInstalling collected packages: pyaudio\nSuccessfully installed pyaudio-0.2.12\n\nBe noted, do not using find / its very slow and stupid, using brew info portaudio\n" ]
[ 182, 27, 16, 13, 9, 8, 8, 6, 5, 4, 1, 1, 1, 0 ]
[]
[]
[ "macos", "pyaudio", "python" ]
stackoverflow_0033513522_macos_pyaudio_python.txt
Q: How to integrate Xterm.js with reactjs properly So far I am not able to properly integrate xterm.js with reactjs due to which my code breaks in production but works while development. i need a proper way of importing xterm.js in component. HELP !!! import React, {useEffect} from 'react'; import {Terminal} from 'xterm'; import {FitAddon} from 'xterm-addon-fit'; const UITerminal = () => { const term = new Terminal(); const fitAddon = new FitAddon(); term.loadAddon(fitAddon); useEffect(() => { let termDocument = document.getElementById('terminal') if (termDocument) { term.open(termDocument) fitaddon.fit(); } window.addEventListener('resize', () => { fitaddon.fit(); }) }, []) return (<div id="terminal"></div>) } Below is the error response from production code. clearly it fails to import xterm react_devtools_backend.js:4012 ReferenceError: Cannot access 'r' before initialization at new m (96209.72626fc1cc862aea477a.bundle.js:1:165467) at new b (96209.72626fc1cc862aea477a.bundle.js:1:159758) at new M (96209.72626fc1cc862aea477a.bundle.js:1:57572) at new r.exports.i.Terminal (96209.72626fc1cc862aea477a.bundle.js:1:294972) at w (96209.72626fc1cc862aea477a.bundle.js:1:15994) at zo (main.71e827eabc798023c129.bundle.js:1:1260000) at Ws (main.71e827eabc798023c129.bundle.js:1:1333492) at Wi (main.71e827eabc798023c129.bundle.js:1:1294411) at Ui (main.71e827eabc798023c129.bundle.js:1:1294336) at Pi (main.71e827eabc798023c129.bundle.js:1:1291367) A: The error shows that you might use Xterm before its initialization, You might find it useful to use react-aptor or the idea behind it to connect pure js packages like Xterm.js into react world. import useAptor from 'react-aptor'; const instantiate = (node, params) => { // user params for further configuration const terminal = new Terminal(); terminal.open(node); return terminal; } const getAPI = (terminal, params) => { return () => ({ terminal }) } const ReactXterm = (props, ref) => { const aptorRef = useAptor(ref, { getAPI, instantiate, /* params: anything */ }); return <div ref={aptorRef} />; }; function App() { const ref = useRef(); const writeToTerminal = () => { ref.current.terminal?.write('Hello from \x1B[1;3;31mxterm.js\x1B[0m $ '); }; return ( <div> <ReactXterm ref={ref} /> <button onClick={writeToTerminal}>write to terminal</button> </div> ); } Disclosure: I am the maintainer of react-aptor A: It looks like the error you're encountering is related to the initialization of the Terminal class from xterm.js. One potential solution to this issue could be to move the instantiation of the Terminal and FitAddon classes into the useEffect hook. This ensures that the Terminal and FitAddon objects are only created after the component has been mounted, which should prevent any issues with accessing uninitialized variables. Here is an example of how your code could be updated to do this: import React, { useEffect } from 'react'; import { Terminal } from 'xterm'; import { FitAddon } from 'xterm-addon-fit'; const UITerminal = () => { useEffect(() => { const term = new Terminal(); const fitAddon = new FitAddon(); term.loadAddon(fitAddon); let termDocument = document.getElementById('terminal'); if (termDocument) { term.open(termDocument); fitAddon.fit(); } window.addEventListener('resize', () => { fitAddon.fit(); }); }, []); return <div id="terminal"></div>; };
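Building on the second answer, a possible refinement (a sketch, not the tutorial's code) is to return a cleanup function from useEffect so the terminal and the resize listener are released when the component unmounts:
useEffect(() => {
  const term = new Terminal();
  const fitAddon = new FitAddon();
  term.loadAddon(fitAddon);

  const node = document.getElementById('terminal');
  if (node) {
    term.open(node);
    fitAddon.fit();
  }

  const onResize = () => fitAddon.fit();       // named so it can be removed
  window.addEventListener('resize', onResize);

  return () => {                               // cleanup on unmount
    window.removeEventListener('resize', onResize);
    term.dispose();                            // xterm.js frees the terminal
  };
}, []);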
How to integrate Xterm.js with reactjs properly
So far I am not able to properly integrate xterm.js with reactjs due to which my code breaks in production but works while development. i need a proper way of importing xterm.js in component. HELP !!! import React, {useEffect} from 'react'; import {Terminal} from 'xterm'; import {FitAddon} from 'xterm-addon-fit'; const UITerminal = () => { const term = new Terminal(); const fitAddon = new FitAddon(); term.loadAddon(fitAddon); useEffect(() => { let termDocument = document.getElementById('terminal') if (termDocument) { term.open(termDocument) fitaddon.fit(); } window.addEventListener('resize', () => { fitaddon.fit(); }) }, []) return (<div id="terminal"></div>) } Below is the error response from production code. clearly it fails to import xterm react_devtools_backend.js:4012 ReferenceError: Cannot access 'r' before initialization at new m (96209.72626fc1cc862aea477a.bundle.js:1:165467) at new b (96209.72626fc1cc862aea477a.bundle.js:1:159758) at new M (96209.72626fc1cc862aea477a.bundle.js:1:57572) at new r.exports.i.Terminal (96209.72626fc1cc862aea477a.bundle.js:1:294972) at w (96209.72626fc1cc862aea477a.bundle.js:1:15994) at zo (main.71e827eabc798023c129.bundle.js:1:1260000) at Ws (main.71e827eabc798023c129.bundle.js:1:1333492) at Wi (main.71e827eabc798023c129.bundle.js:1:1294411) at Ui (main.71e827eabc798023c129.bundle.js:1:1294336) at Pi (main.71e827eabc798023c129.bundle.js:1:1291367)
[ "The error shows that you might use Xterm before its initialization,\nYou might find it useful to use react-aptor or the idea behind it to connect pure js packages like Xterm.js into react world.\nimport useAptor from 'react-aptor';\n\nconst initializer = (node, params) => {\n // user params for further configuration\n const terminal = new Terminal();\n term.open(node);\n return terminal;\n}\n\n\nconst getAPI = (terminal, params) => {\n return () => ({ terminal })\n}\n\n\nconst ReactXterm = (props, ref) => {\n const aptorRef = useAptor(ref, {\n getAPI,\n instantiate,\n /* params: anything */\n });\n\n return <div ref={aptorRef} />;\n};\n\nfunction App() {\n const ref = useRef();\n\n const writeToTerminal = () => {\n ref.current.terminal?.write('Hello from \\x1B[1;3;31mxterm.js\\x1B[0m $ '); \n };\n\n return (\n <div>\n <ReactXterm ref={ref} />\n <button onClick={writeToTerminal}>write to terminal</button>\n </div>\n );\n}\n\nDisclosure: I am the maintainer of react-aptor\n", "It looks like the error you're encountering is related to the initialization of the Terminal class from xterm.js.\nOne potential solution to this issue could be to move the instantiation of the Terminal and FitAddon classes into the useEffect hook. This ensures that the Terminal and vFitAddon` objects are only created after the component has been mounted, which should prevent any issues with accessing uninitialized variables.\nHere is an example of how your code could be updated to do this:\nimport React, { useEffect } from 'react';\nimport { Terminal } from 'xterm';\nimport { FitAddon } from 'xterm-addon-fit';\n\nconst UITerminal = () => {\n useEffect(() => {\n const term = new Terminal();\n const fitAddon = new FitAddon();\n term.loadAddon(fitAddon);\n\n let termDocument = document.getElementById('terminal');\n if (termDocument) {\n term.open(termDocument);\n fitAddon.fit();\n }\n\n window.addEventListener('resize', () => {\n fitAddon.fit();\n });\n }, []);\n\n return <div id=\"terminal\"></div>;\n};\n\n" ]
[ 0, 0 ]
[]
[]
[ "reactjs", "xterm", "xtermjs" ]
stackoverflow_0074664129_reactjs_xterm_xtermjs.txt
Q: replace multiple words from a string at the same time I have this dict in python. reflections = { 'I am': 'you are', 'I was': 'you were', 'I': 'you', "I'm": 'you are', "I'd": 'you would', "I've": 'you have', "I'll": 'you will', 'my': 'your', 'you are': 'I am', 'you were': 'I was', "you've": 'I have', "you'll": 'I will', 'your': 'my', 'yours': 'mine', 'you': 'me', 'me': 'you' } I have written this piece of code to replace the words. see = "I am going to kill you" for i in reflections: if i in see: print(f'matched key {i}') see = see.replace(i, reflections[i]) print(see) This is the output of the above code. matched key I am you are going to kill you matched key you are I am going to kill you matched key you I am going to kill me matched key me I am going to kill you Now I want to replace all occurrences of words from reflections dict and replace them. As you can see in code output, "I am" is replaced with "you are" and in the next iteration, "you are" is again replaced with "I am", which shouldn't happen. It should not replace the replacement. So the output should be: You are going to kill me A: Solution 1 - str.index You can do it as follows: create a new string variable new_see, which is initially empty, but will ultimately contain the result of the replacements make each iteration only process the part of the input string up until the point where a matching key is encountered, and append the iteration's replacement result to the result string after each iteration, truncate the input string from its start to the index after the key encountered in current iteration, so that the next iteration will only work with the yet unprocessed part see = "I am going to kill you!" new_see = "" print(see) for key, reflection in reflections.items(): if key in see: idx = see.index(key) print(f"matched key [{key}] @ index {idx}, reflection=[{reflection}]") # take the part of `see` up until the index where the `key` ends, # replace the `key` with `replacement` and append the result to # the new string new_see += see[:idx+len(key)].replace(key, reflection) # truncate the original string from start up until the index where # `key` was encountered, so the next iteration will only work on # the part of it that hasn't been processed yet see = see[idx+len(key):] # take a look at the intermediate results print(f"see=[{see}], new_see=[{new_see}]") # append any leftover part that wasn't in the dict (in this case, "!") if see: new_see += see new_see = new_see.capitalize() print(new_see) Output: I am going to kill you! matched key [I am] @ index 0, reflection=[you are] see=[ going to kill you!], new_see=[you are] matched key [you] @ index 15, reflection=[me] see=[!], new_see=[you are going to kill me] You are going to kill me! Solution 2 - str.split Slightly more pythonic solution using str.split instead of operating on indices: see = "I am going to kill you!" new_see = "" print(see) for key, reflection in reflections.items(): if key in see: print(f"matched key [{key}], reflection=[{reflection}]") left, right = see.split(key) new_see += left + reflection see = right print(f"see=[{see}], new_see=[{new_see}]") if see: new_see += see new_see = new_see.capitalize() print(new_see) Output: I am going to kill you! matched key [I am], reflection=[you are] see=[ going to kill you!], new_see=[you are] matched key [you], reflection=[me] see=[!], new_see=[you are going to kill me] You are going to kill me! As pointed out in a comment, this code is going to replace "I'm" with "You'm". 
In order to fix this, you should reorder the entries in your dict such that "I'm", "I'd", etc., are processed before "I". Even then though, it will still not work properly in some cases, e.g. for words in all caps - "DICT" is going to be replaced with "DyouCT". In order to deal with this, you'd need to take a look at regular expressions and use re.sub instead of str.replace - that will allow you e.g. to only replace a key if it's a standalone word (i.e. surrounded by non-letters). A: print(see1) has the wrong level of indentation. Try with this: see = "you are going to kill me" for i in reflections: if i in see: see = see.replace(i, reflections[i]) see1 = see.replace(i, reflections[i]) print(see1) That way you print the word once you have broken out of the for-loop.
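The regular-expression route mentioned above can be sketched like this: build one alternation from the dictionary keys, longest first so "I'm" is tried before "I", and let re.sub make a single left-to-right pass, which by construction can never replace a replacement:
import re

reflections = {  # trimmed for the example; use the full dict from the question
    'I am': 'you are', "I'm": 'you are', 'I': 'you',
    'you are': 'I am', 'you': 'me', 'me': 'you',
}

# \b keeps matches to whole words, so e.g. "DICT" is left alone.
keys = sorted(reflections, key=len, reverse=True)
pattern = re.compile(r"\b(" + "|".join(re.escape(k) for k in keys) + r")\b")

see = "I am going to kill you"
result = pattern.sub(lambda m: reflections[m.group(1)], see)
print(result.capitalize())  # You are going to kill me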
replace multiple words from a string at the same time
I have this dict in python. reflections = { 'I am': 'you are', 'I was': 'you were', 'I': 'you', "I'm": 'you are', "I'd": 'you would', "I've": 'you have', "I'll": 'you will', 'my': 'your', 'you are': 'I am', 'you were': 'I was', "you've": 'I have', "you'll": 'I will', 'your': 'my', 'yours': 'mine', 'you': 'me', 'me': 'you' } I have written this piece of code to replace the words. see = "I am going to kill you" for i in reflections: if i in see: print(f'matched key {i}') see = see.replace(i, reflections[i]) print(see) This is the output of the above code. matched key I am you are going to kill you matched key you are I am going to kill you matched key you I am going to kill me matched key me I am going to kill you Now I want to replace all occurrences of words from reflections dict and replace them. As you can see in code output, "I am" is replaced with "you are" and in the next iteration, "you are" is again replaced with "I am", which shouldn't happen. It should not replace the replacement. So the output should be: You are going to kill me
[ "Solution 1 - str.index\nYou can do it as follows:\n\ncreate a new string variable new_see, which is initially empty, but will ultimately contain the result of the replacements\nmake each iteration only process the part of the input string up until the point where a matching key is encountered, and append the iteration's replacement result to the result string\nafter each iteration, truncate the input string from its start to the index after the key encountered in current iteration, so that the next iteration will only work with the yet unprocessed part\n\nsee = \"I am going to kill you!\"\nnew_see = \"\"\nprint(see)\n\nfor key, reflection in reflections.items():\n if key in see:\n idx = see.index(key)\n print(f\"matched key [{key}] @ index {idx}, reflection=[{reflection}]\")\n\n # take the part of `see` up until the index where the `key` ends,\n # replace the `key` with `replacement` and append the result to\n # the new string\n new_see += see[:idx+len(key)].replace(key, reflection)\n\n # truncate the original string from start up until the index where\n # `key` was encountered, so the next iteration will only work on\n # the part of it that hasn't been processed yet\n see = see[idx+len(key):]\n\n # take a look at the intermediate results\n print(f\"see=[{see}], new_see=[{new_see}]\")\n\n# append any leftover part that wasn't in the dict (in this case, \"!\")\nif see:\n new_see += see\n\nnew_see = new_see.capitalize()\nprint(new_see)\n\nOutput:\nI am going to kill you!\nmatched key [I am] @ index 0, reflection=[you are]\nsee=[ going to kill you!], new_see=[you are]\nmatched key [you] @ index 15, reflection=[me]\nsee=[!], new_see=[you are going to kill me]\nYou are going to kill me!\n\nSolution 2 - str.split\nSlightly more pythonic solution using str.split instead of operating on indices:\nsee = \"I am going to kill you!\"\nnew_see = \"\"\nprint(see)\n\nfor key, reflection in reflections.items():\n if key in see:\n print(f\"matched key [{key}], reflection=[{reflection}]\")\n left, right = see.split(key)\n new_see += left + reflection\n see = right\n print(f\"see=[{see}], new_see=[{new_see}]\")\nif see:\n new_see += see\n\nnew_see = new_see.capitalize()\nprint(new_see)\n\nOutput:\nI am going to kill you!\nmatched key [I am], reflection=[you are]\nsee=[ going to kill you!], new_see=[you are]\nmatched key [you], reflection=[me]\nsee=[!], new_see=[you are going to kill me]\nYou are going to kill me!\n\n\nAs pointed out in a comment, this code is going to replace \"I'm\" with \"You'm\". In order to fix this, you should reorder the entries in your dict such that \"I'm\", \"I'd\", etc., are processed before \"I\". Even then though, it will still not work properly in some cases, e.g. for words in all caps - \"DICT\" is going to be replaced with \"DyouCT\". In order to deal with this, you'd need to take a look at regular expressions and use re.sub instead of str.replace - that will allow you e.g. to only replace a key if it's a standalone word (i.e. surrounded by non-letters).\n", "print(see1) has the wrong level of indentation.\nTry with this:\nsee = \"you are going to kill me\"\nfor i in reflections:\n if i in see:\n see = see.replace(i, reflections[i])\n see1 = see.replace(i, reflections[i])\nprint(see1)\n\nThat way you print the word once you have broken out of the for-loop.\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0058393229_python.txt
Q: Window Function inside of Common Table Expression DB2 SQL I am trying to write a window function inside of a common table expression on Db2 and receiving some unexpected errors This query works on Db2 11.5.7 WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select a.rowValue AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 1; However this query: WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue) AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; fails with an error SQL0206N "A.ROWVALUE" is not valid in the context where it is used. SQLSTATE=42703 Interestingly if perform some scalar operation on a.rowValue though I can retrieve it. For example the following query DOES work. WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue*b.phase) AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; This query also works. WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue*b.phase) AS rowValue from dummy as b group by a.phase ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; Given that the above query works and produces the expected result I would also expect this query, which is the one i really want, to work. WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue*b.phase) over (partition by a.phase) AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; but it fails with this error. SQL0206N "ROWVALUE" is not valid in the context where it is used. SQLSTATE=42703 Which is interesting because I would expect the error to be this: SQL0206N "A.ROWVALUE" is not valid in the context where it is used. SQLSTATE=42703 I really would like to use a window function here but not sure how to achieve it given these errors. I'm looking for either a. An explanation as to why these queries don't work or b. an alternate syntax that would achieve the same result. A: You should open a case with IBM support on this. Possible workaround might be to inject some expression using some inner table column, which doesn't influence the result. WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue + coalesce(b.rowValue, 0)*0) AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0 ROWVALUE PHASE 2 0 1 0 fiddle
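If the lateral column reference inside the OVER clause is what trips the parser, the same zero-contribution trick can be tried there too. This is an untested sketch; whether it actually sidesteps the SQL0206N on 11.5 is something to verify with IBM support:
WITH dummy AS(
  SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1
  UNION ALL
  SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1
),
solution(rowValue, phase) AS (
  SELECT b.rowValue, phase FROM dummy a,
  TABLE(
    select
      max(a.rowValue*b.phase + coalesce(b.rowValue, 0)*0)
        over (partition by a.phase + coalesce(b.rowValue, 0)*0) AS rowValue
    from dummy as b
  ) b
  WHERE phase = 0
)
SELECT * FROM solution WHERE phase = 0;
The coalesce(b.rowValue, 0)*0 terms add nothing numerically; they only force an inner-table column into each expression, mirroring the workaround above.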
Window Function inside of Common Table Expression DB2 SQL
I am trying to write a window function inside of a common table expression on Db2 and receiving some unexpected errors This query works on Db2 11.5.7 WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select a.rowValue AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 1; However this query: WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue) AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; fails with an error SQL0206N "A.ROWVALUE" is not valid in the context where it is used. SQLSTATE=42703 Interestingly if perform some scalar operation on a.rowValue though I can retrieve it. For example the following query DOES work. WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue*b.phase) AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; This query also works. WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue*b.phase) AS rowValue from dummy as b group by a.phase ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; Given that the above query works and produces the expected result I would also expect this query, which is the one i really want, to work. WITH dummy AS( SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1 UNION ALL SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1 ), solution(rowValue, phase) AS ( SELECT b.rowValue, phase FROM dummy a, TABLE( select max(a.rowValue*b.phase) over (partition by a.phase) AS rowValue from dummy as b ) b WHERE phase = 0 ) SELECT * FROM solution WHERE phase = 0; but it fails with this error. SQL0206N "ROWVALUE" is not valid in the context where it is used. SQLSTATE=42703 Which is interesting because I would expect the error to be this: SQL0206N "A.ROWVALUE" is not valid in the context where it is used. SQLSTATE=42703 I really would like to use a window function here but not sure how to achieve it given these errors. I'm looking for either a. An explanation as to why these queries don't work or b. an alternate syntax that would achieve the same result.
[ "You should open a case with IBM support on this.\nPossible workaround might be to inject some expression using some inner table column, which doesn't influence the result.\nWITH dummy AS(\n SELECT 1 AS rowValue, 0 AS phase from sysibm.sysdummy1\n UNION ALL\n SELECT 2 AS rowValue, 0 AS phase from sysibm.sysdummy1\n\n),\nsolution(rowValue, phase) AS (\n SELECT b.rowValue, phase FROM dummy a,\n TABLE(\n select \n max(a.rowValue + coalesce(b.rowValue, 0)*0) AS rowValue\n from dummy as b\n ) b\n WHERE phase = 0\n) \nSELECT * \nFROM solution \nWHERE phase = 0\n\n\n\n\n\nROWVALUE\nPHASE\n\n\n\n\n2\n0\n\n\n1\n0\n\n\n\n\nfiddle\n" ]
[ 1 ]
[]
[]
[ "common_table_expression", "db2", "sql" ]
stackoverflow_0074645494_common_table_expression_db2_sql.txt
Q: Flutter Google Firebase duplicate Web-App on subdomain with User roles I want to build a Flutter web app which can be used by different types of users (admins and customers). But also, when someone wants to register, the app needs to create a new app automatically under another subdomain, with new user data / project in Google Firebase... I hope it is understandable. It's not easy for me to describe it. Is it possible to realise it? A: It is possible to create a Flutter web app that has different user types and automatically creates a new app under a subdomain with new user data in Google Firebase. To do this, you can use Firebase authentication to handle the different user types and the Firebase Realtime Database to store the user data. When a new user registers, you can use the Firebase Admin SDK to programmatically create a new app under a subdomain and store the user data in the Realtime Database for that app. However, note that creating a new app under a subdomain for each user may not be the most efficient way to handle this scenario. It would be better to store all user data in a single Firebase project and use Firebase security rules to control access to the data based on user type. You can also use Firebase Hosting to host the Flutter web app and use subdomains to differentiate between different user types or groups of users. Overall, it is possible to build the type of app you are describing, but it may be better to structure it in a different way to avoid creating a large number of individual Firebase projects.
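To make the single-project approach concrete, here is a hedged sketch of Realtime Database security rules for role-based access. The /users and /roles paths and the 'admin' role name are assumptions for illustration, not anything Firebase prescribes:
{
  "rules": {
    "users": {
      "$uid": {
        // a customer may read and write only their own record
        ".read":  "auth != null && auth.uid == $uid",
        ".write": "auth != null && auth.uid == $uid"
      }
    },
    "adminData": {
      // only accounts marked as admin under /roles may touch this subtree
      ".read":  "auth != null && root.child('roles').child(auth.uid).val() == 'admin'",
      ".write": "auth != null && root.child('roles').child(auth.uid).val() == 'admin'"
    }
  }
}
Every subdomain served by Firebase Hosting can then point at this one project, with the rules deciding what each signed-in user may see.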
Flutter Google Firebase duplicate Web-App on subdomain with User roles
I want to build a Flutter web app which can be used by different types of users (admins and customers). But also, when someone wants to register, the app needs to create a new app automatically under another subdomain, with new user data / project in Google Firebase... I hope it is understandable. It's not easy for me to describe it. Is it possible to realise it?
[ "It is possible to create a Flutter web app that has different user types and automatically creates a new app under a subdomain with new user data in Google Firebase.\nTo do this, you can use Firebase authentication to handle the different user types and the Firebase Realtime Database to store the user data. When a new user registers, you can use the Firebase Admin SDK to programmatically create a new app under a subdomain and store the user data in the Realtime Database for that app.\nHowever, note that creating a new app under a subdomain for each user may not be the most efficient way to handle this scenario. It would be better to store all user data in a single Firebase project and use Firebase security rules to control access to the data based on user type. You can also use Firebase Hosting to host the Flutter web app and use subdomains to differentiate between different user types or groups of users.\nOverall, it is possible to build the type of app you are describing, but it may be better to structure it in a different way to avoid creating a large number of individual Firebase projects.\n" ]
[ 0 ]
[]
[]
[ "flutter", "flutter_web" ]
stackoverflow_0074665741_flutter_flutter_web.txt
Q: Why doesn't `git credential-manager-core` accept arguments? In the question Why am I suddenly not having push permission? I was advised to use this command to store my credential: printf "host=github.com\nprotocol=https\nusername=ooker777\npassword=ghp_yourToken" | git credential-manager-core store It was explained to me that the command doesn't accept arguments, so it must be piped like that. Why is it so? Why not something like this? git credential-manager-core store password myToken
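One way to watch this key=value pipe protocol in action is through Git's generic porcelain, git credential, which drives whatever helper is configured (the credentials shown are illustrative):
$ printf "protocol=https\nhost=github.com\n" | git credential fill
protocol=https
host=github.com
username=ooker777
password=ghp_yourToken
fill, approve, and reject all read their input the same way, which is why a helper subcommand such as store takes its data from the pipe rather than from command-line arguments.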
Why doesn't `git credential-manager-core` accept arguments?
In the question Why am I suddenly not having push permission? I was advised to use this command to store my credential: printf "host=github.com\nprotocol=https\nusername=ooker777\npassword=ghp_yourToken" | git credential-manager-core store It was explained to me that the command doesn't accept arguments, so it must be piped like that. Why is it so? Why not something like this? git credential-manager-core store password myToken
[ "The internal interface for storing and retrieving credentials from system-specific helpers described by \"git credential\" refers to credential.h.\nThe design presented in that file confirms the use of pipes to get arguments:\n+-----------------------+\n| Git code (C) |--- to server requiring --->\n| | authentication\n|.......................|\n| C credential API |--- prompt ---> User\n+-----------------------+\n^ |\n| pipe |\n| v\n+-----------------------+\n| Git credential helper |\n+-----------------------+\n\n\nThe Git code (typically a remote-helper) will call the C API to obtain credential data like a login/password pair (credential_fill).\nThe API will itself call a remote helper (e.g. \"git credential-cache\" or \"git credential-store\") that may retrieve credential data from a\nstore.\nIf the credential helper cannot find the information, the C API will prompt the user. Then, the caller of the API takes care of\ncontacting the server, and does the actual authentication.\n\n" ]
[ 1 ]
[]
[]
[ "git", "git_credential_manager" ]
stackoverflow_0074665591_git_git_credential_manager.txt
Q: How to recover a deleted (Jupyter Notebook) code in Visual Studio Code? CTRL + Z doesn't work I accidentally deleted a piece of code in Visual Studio Code. I haven't closed Visual Studio yet. CTRL + Z doesn't work. Edit + Undo doesn't work either. The variables are still stored. It was accidental. I just happened to close a Jupyter Notebook cell. Any idea how I can recover it? Thank you. A: There is a way to recover the deleted code if CTRL + Z doesn't work. You can right-click on the .ipynb file, open the Timeline, then go to your last change and click on it. You will then see the deleted part. A: Good question; this happens to most of us when working long and not paying attention. VS Code, starting from a specific version (the March 2020 release), keeps an internal history of your code. This feature is called the Timeline. What to do: Right-click on your .ipynb file > Open Timeline > Restore contents on a specific version of your file. Also, before restoring, I would suggest that you check the version you're going to restore by comparing it with the current one. Note: This does not require a git repository to be active at the time of the incident, as this is what VS Code is technically doing by itself. A: You cannot get it back if you have not backed up.
How to recover a deleted (Jupyter Notebook) code in Visual Studio Code? CTRL + Z doesn't work
I accidentally deleted a piece of code in Visual Studio Code. I haven't closed Visual Studio yet. CTRL + Z doesn't work. Edit + Undo doesn't work either. The variables are still stored. It was accidental. I just happened to close a Jupyter Notebook cell. Any idea how I can recover it? Thank you.
[ "There is a way to recover the deleted code if CTRL +Z doesn't work.\nYou can click right on the .ipynb script, open chronology then go to your last changes and click on it. You will then see the deleted part.\n", "Good question, happens to most of us when working longer and not paying attention.\nVscode starting from a specific version does internal management of your code (March 2020 release). This feature is called timeline.\nWhat to do: \nRight click on your .ipynb file > Open Timeline > Restore contents on a specific version of your file.\nAlso, before restoring, would suggest that you check out the version you're going to restore by comparing it with the current one.\nNote: This does not require the git repository to be active at the time of the incident. As this is what Vscode is technically doing by itself.\n", "Cannot get it back if you have not backed up.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "visual_studio_code" ]
stackoverflow_0074665592_visual_studio_code.txt