Dataset schema (column: type, observed length range):
content: string, 86 to 88.9k characters
title: string, 0 to 150 characters
question: string, 1 to 35.8k characters
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string, 30 to 130 characters
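The rows below follow this schema. For orientation, here is a minimal sketch of reading one record with the Hugging Face datasets library; the dataset id "user/so-dump" is a placeholder for illustration, not the real dataset name.

# Sketch: read one row of this schema ("user/so-dump" is a hypothetical id).
from datasets import load_dataset

ds = load_dataset("user/so-dump", split="train")
row = ds[0]
print(row["title"])           # string, 0 to 150 chars
print(row["tags"])            # list of tag strings
print(row["answers_scores"])  # list of ints, parallel to row["answers"]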
Q: Search null terminating characters in Mongo Database v3.0.11? We have a number of strings in a MongoDB instance that include null-terminating characters, and we need to find which ones those are. Knowing that Mongo uses PCRE Regex, we found (Can PCRE regex match a null character?) the correct syntax for matching a null-terminating character and searched for it like so: db.updates_v2.find({'longDescription': /.*\x00.*/ }).count() However, this returns 0. We know for a fact that there are null terminating characters in there because during a migration to DocumentDB, it refuses to accept them. Also, we've run the following query that confirms that longDescription is the culprit: db.updates_v2.find().forEach(function(doc){ ... for (var key in doc) { ... if ( /.*\x00.*/.test(doc[key]) ) ... print(key) ... } ... }); longDescription longDescription longDescription ... I've also tested the regex in Node (albeit a different regex engine): > test = "wot wot in the \0" 'wot wot in the \u0000' > test2 = "wot wot in the wat" 'wot wot in the wat' > regex = /.*\x00.*/ > test2.match(regex) null > test.match(regex) [ 'wot wot in the \u0000', index: 0, input: 'wot wot in the \u0000', groups: undefined ] This is an issue when migrating from mongodb to aws-documentdb as the latter will not accept \0 characters in strings. We really need to be able to reliably pull these out, in order to create a script that can find the offending entries, strip the null chars and update the entries. Any ideas? A: The following works in a MongoDB server 4.4.16 shell: db.updates_v2.find({"longDescription": {"$regex": "\\0"}})
Search null terminating characters in Mongo Database v3.0.11?
We have a number of strings in a MongoDB instance that include null-terminating characters, and we need to find which ones those are. Knowing that Mongo uses PCRE Regex, we found (Can PCRE regex match a null character?) the correct syntax for matching a null-terminating character and searched for it like so: db.updates_v2.find({'longDescription': /.*\x00.*/ }).count() However, this returns 0. We know for a fact that there are null terminating characters in there because during a migration to DocumentDB, it refuses to accept them. Also, we've run the following query that confirms that longDescription is the culprit: db.updates_v2.find().forEach(function(doc){ ... for (var key in doc) { ... if ( /.*\x00.*/.test(doc[key]) ) ... print(key) ... } ... }); longDescription longDescription longDescription ... I've also tested the regex in Node (albeit a different regex engine): > test = "wot wot in the \0" 'wot wot in the \u0000' > test2 = "wot wot in the wat" 'wot wot in the wat' > regex = /.*\x00.*/ > test2.match(regex) null > test.match(regex) [ 'wot wot in the \u0000', index: 0, input: 'wot wot in the \u0000', groups: undefined ] This is an issue when migrating from mongodb to aws-documentdb as the latter will not accept \0 characters in strings. We really need to be able to reliably pull these out, in order to create a script that can find the offending entries, strip the null chars and update the entries. Any ideas?
[ "The following works in a MongoDB server 4.4.16 shell:\ndb.updates_v2.find({\"longDescription\": {\"$regex\": \"\\\\0\"}})\n" ]
[ 0 ]
[]
[]
[ "aws_documentdb", "mongodb", "pcre", "regex" ]
stackoverflow_0057207475_aws_documentdb_mongodb_pcre_regex.txt
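Building on that answer toward the question's stated goal (find the offending entries, strip the null chars, and update them), here is a minimal cleanup sketch; the collection and field names come from the question, while the write loop itself is an assumption, not part of the original answer.

// Sketch: find documents whose longDescription contains NUL and strip it.
// Uses the answer's "\\0" regex form; behavior may vary by server version.
db.updates_v2.find({ longDescription: { $regex: "\\0" } }).forEach(function (doc) {
  db.updates_v2.update(
    { _id: doc._id },
    { $set: { longDescription: doc.longDescription.replace(/\x00/g, "") } }
  );
});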
Q: How to check if ARKit camera movement is too fast in iOS I am working on apple's ARKit with RealityKit and I want to guide the user not to move the device too fast. I am trying to detect movement using this delegate method. func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) { switch camera.trackingState { case .limited(let reason): switch reason { case .excessiveMotion: print("too fast") // then update UI default: break } default: break } } But the thing is, this method is not that accurate when I try to move my device quickly. Is there any other way to detect fast movement? A: ARKit's subcase .excessiveMotion of trackingState instance property is very subjective because tracking results may vary depending on lighting conditions, environment, rich/poor textures of real-world objects, number of reflective/refractive surfaces, presence/absence of a LiDAR scanner, presence/absence of an ARWorldMap, etc. This case is used for a general (i.e. inaccurate) analysis of the tracking situation. In my opinion, you can accurately detect fast camera movement only under ideal tracking conditions (for a novice AR user, this is an impossible task). Today, a robust solution (available since iOS 13) is a quick preliminary tracking session with onboarding instructions to direct users toward a specific goal. It's called ARCoachingOverlayView. A: Yes, there is another way to detect fast movement in ARKit with RealityKit. You can use the accelerometer data to measure the acceleration of the device as it moves. You can access the accelerometer data by using the CoreMotion framework. // First, import the CoreMotion framework. import CoreMotion // Then, create an instance of the CMMotionManager class. let motionManager = CMMotionManager() // Set the sampling rate of the motion data to a high frequency. motionManager.accelerometerUpdateInterval = 0.1 // Start the accelerometer updates, delivering each sample to a handler. motionManager.startAccelerometerUpdates(to: OperationQueue.main) { (accelerometerData, error) in guard let data = accelerometerData, error == nil else { return } // Use the acceleration data to measure the magnitude of the device's movement. let accelerationX = data.acceleration.x let accelerationY = data.acceleration.y let accelerationZ = data.acceleration.z let acceleration = sqrt(accelerationX * accelerationX + accelerationY * accelerationY + accelerationZ * accelerationZ) // The raw reading includes gravity (about 1 g), so compare the deviation from 1 g rather than the magnitude itself. if abs(acceleration - 1.0) > 0.1 { print("too fast") // then update UI } }
How to check if ARKit camera movement is too fast in iOS
I am working on apple's ARKit with RealityKit and I want to guide the user not to move the device too fast. I am trying to detect movement using this delegate method. func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) { switch camera.trackingState { case .limited(let reason): switch reason { case .excessiveMotion: print("too fast") // then update UI default: break } default: break } } But the thing is, this method is not that accurate when I try to move my device quickly. Is there any other way to detect fast movement?
[ "ARKit's subcase .excessiveMotion of trackingState instance property is very subjective because tracking results may vary depending on lighting conditions, environment, rich/poor textures of real-world objects, number of reflective/refractive surfaces, presence/absence of a LiDAR scanner, presence/absence of a ARWorldMap, etc. This case is used for a general (i.e. inaccurate) analysis of the tracking situation.\nIn my opinion, you can accurately detect fast camera movement only under ideal tracking conditions (for a novice AR user, this is an impossible task). Today robust solution (started from iOS 13) is a quick preliminary tracking session with onboarding instructions to direct users toward a specific goal. It's called ARCoachingOverlayView.\n", "Yes, there is another way to detect fast movement in ARKit with RealityKit. You can use the accelerometer data to measure the acceleration of the device as it moves. You can access the accelerometer data by using the CoreMotion framework.\n// First, import the CoreMotion framework. \nimport CoreMotion\n// Then, create an instance of the CMMotionManager class. \nlet motionManager = CMMotionManager()\n// Set the sampling rate of the motion data to a high frequency. \nmotionManager.accelerometerUpdateInterval = 0.1\n// Start the motion updates. \nmotionManager.startAccelerometerUpdates()\n// Create a CMAccelerometerHandler to get the acceleration data. \nmotionManager.accelerometerUpdateHandler = { (accelerometerData, error) in\n if error == nil {\n\n\n // Use the acceleration data to measure the magnitude of the device's movement. \n let accelerationX = accelerometerData!.acceleration.x\n let accelerationY = accelerometerData!.acceleration.y\n let accelerationZ = accelerometerData!.acceleration.z\n let acceleration = sqrt(accelerationX * accelerationX + accelerationY * accelerationY + accelerationZ * acceleration)\n \n // If the magnitude is above a certain threshold, it indicates that the device has moved quickly. \n if acceleration > 0.1 {\n print(\"too fast\")\n // then update UI\n }\n }\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "arkit", "augmented_reality", "swift" ]
stackoverflow_0071390182_arkit_augmented_reality_swift.txt
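A possible middle ground between the two answers above: estimate camera speed directly from ARKit's per-frame pose instead of raw accelerometer data. The sketch below is illustrative only; the MotionMonitor name and the 1.0 m/s threshold are assumptions, not from either answer.

import ARKit

// Sketch: estimate camera speed from per-frame transform deltas.
final class MotionMonitor: NSObject, ARSessionDelegate {
    private var lastPosition: simd_float3?
    private var lastTimestamp: TimeInterval?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let column = frame.camera.transform.columns.3
        let position = simd_float3(column.x, column.y, column.z)
        defer {
            lastPosition = position
            lastTimestamp = frame.timestamp
        }
        guard let p0 = lastPosition, let t0 = lastTimestamp, frame.timestamp > t0 else { return }
        // Speed in meters per second since the previous frame.
        let speed = simd_distance(position, p0) / Float(frame.timestamp - t0)
        if speed > 1.0 { // threshold is a guess; tune empirically
            print("too fast") // then update UI
        }
    }
}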
Q: Javascript onclick function working like an onload function somehow and returning 'undefined' I want to change the text of a button using the onclick property, but the value of the parameter 'text' appears when I load the page. Also, when I click the button its text changes to 'undefined'. Here's the code (I started with JavaScript like a week ago so it's probably really bad): function changeText(text) { let btn = document.querySelector('button#btn') btn.innerHTML = text } changeText('new text') <button id="btn" onclick="changeText()"></button> A: The reason the text changed on page load is that you are calling the changeText function inside your JavaScript. And the reason why it shows "undefined" when you click the button is that you don't send any text from within the onclick event handler. So, all you need to do is move changeText("new text") inside the onclick handler. (Also, avoid using innerHTML, use textContent instead.) (This is to help avoid XSS) function changeText(text) { let btn = document.querySelector('button#btn') btn.textContent = text } // changeText('new text') //don't need this <button id="btn" onclick="changeText('new text')">my button</button> One more note, to get an element by ID use document.getElementById() A: It's appearing because you're immediately calling the function in the JS (changeText('new text')). If you want to change the text when the button is clicked you should add that text to that inline JS instead. You should probably also use textContent or innerText instead of innerHTML if this is plain text. The button requires some initial text so you can see to click on it. function changeText(text) { let btn = document.querySelector('button#btn'); btn.textContent = text; } <button id="btn" onclick="changeText('New text')">Old text</button> Having written that: modern JS has since moved away from inline JS (separation of concerns). Now it's more usual to see addEventListener being used. Here's an example that uses the value from an input box to update the button text. const input = document.querySelector('input'); const button = document.querySelector('button'); button.addEventListener('click', changeText); function changeText() { button.textContent = input.value; } <input type="text"> <button type="button">Old text</button> A: function changetext(text) { var btn = document.getElementById('btn'); btn.innerText = text; } <button id="btn" onclick="changetext('Clicked')">Click</button> you don't need to call changetext() in the JavaScript because as soon as the JavaScript loads it's going to change the text on the button; remember, you're calling the function when you click the button. I'm guessing you thought you had to initialize the function by calling it for it to work; you don't. It's great that you're learning, though. A: changeText('new text') This calls the function immediately when the JS file is loaded so it makes sense that the button would have that text. <button id="btn" onclick="changeText()"></button> Here you're calling the function again by clicking the button, but you're not passing any text in your parentheses so that's why you're getting undefined. This is how you should call your function. Using textContent like others suggested. function changeText(text) { let btn = document.querySelector('button#btn') btn.textContent = text } <button id="btn" onclick="changeText('new text')"></button>
Javascript onclick function working like an onload function somehow and returning 'undefined'
I want to change the text of a button using the onclick property, but the value of the parameter 'text' appears when I load the page. Also, when I click the button its text changes to 'undefined'. Here's the code (I started with JavaScript like a week ago so it's probably really bad): function changeText(text) { let btn = document.querySelector('button#btn') btn.innerHTML = text } changeText('new text') <button id="btn" onclick="changeText()"></button>
[ "The reason why text changed on page load is because you are calling changeText function inside your javascript. And the reason why it shows \"undefined\" when you click the button, is because you don't send any text from within onclick event handler.\nSo, all you need to do is move changeText(\"new text\") inside onclick handler.\n(Also, avoid using innerHTML, use textContent instead.)\n(This is to help avoid XSS)\n\n\nfunction changeText(text) {\n let btn = document.querySelector('button#btn')\n btn.textContent = text\n}\n// changeText('new text') //don't need this\n<button id=\"btn\" onclick=\"changeText('new text')\">my button</button>\n\n\n\nOne more note, to get element with ID use document.getElementById()\n", "\nIt's appearing because you're immediately calling the function in the JS (changeText('new text')). If you want to change the text when the button is clicked you should add that text to that inline JS instead.\n\nYou should probably also use textContent or innerText instead of innerHTML if this is plain text.\n\nThe button requires some initial text so you can see to click on it.\n\n\n\n\nfunction changeText(text) {\n let btn = document.querySelector('button#btn');\n btn.textContent = text;\n}\n<button id=\"btn\" onclick=\"changeText('New text')\">Old text</button>\n\n\n\nHaving written that: modern JS has since moved away from inline JS (separation of concerns). Now it's more usual to see addEventListener being used.\nHere's an example that uses the value from an input box to update the button text.\n\n\nconst input = document.querySelector('input');\nconst button = document.querySelector('button');\n\nbutton.addEventListener('click', changeText);\n\nfunction changeText() {\n button.textContent = input.value;\n}\n<input type=\"text\">\n<button type=\"button\">Old text</button>\n\n\n\n", "\n\nfunction changetext(text) {\nvar btn = document.getElementById('btn');\nbtn.innerText = text;\n}\n<button id=\"btn\" onclick=\"changetext('Clicked')\">Click</button>\n\n\n\nyou don't need to call changetext() in the JavaScript because as soon as the JavaScript loads its going to change the text on the button, remember your calling the function when you click the button.\nI'm guessing you thought you had to initialize the function by calling it for it to work, you don't need to. Great your learning though.\n", "changeText('new text')\n\nThis calls the function immediately when the JS file is loaded so it makes sense that the button would have that text.\n<button id=\"btn\" onclick=\"changeText()\"></button>\n\nHere you're calling the function again by clicking the button, but you're not passing any text in your parentheses so that's why you're getting undefined.\nThis is how you should call your function. Using textContent like others suggested.\nfunction changeText(text) {\n let btn = document.querySelector('button#btn')\n btn.textContent = text\n}\n<button id=\"btn\" onclick=\"changeText('new text')\"></button>\n\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "button", "javascript", "onclick" ]
stackoverflow_0074669880_button_javascript_onclick.txt
Q: How to setup external "Middleware" for Nginx-Ingress with Go API I have written an API in Go for Authentication. You can POST /register which will create a new user and save it in the Postgres DB Let's say I have an Ingress setup somewhat like this. Afterward you can POST /login with your credentials. This will create a session token and attach it to the cookie and creates an entry for the session in a Redis You can also GET /logout which will revoke your token by deleting it from Redis and expiring your cookie The API itself is also running in the kubernetes cluster in its own deployment with its own service and is also exposed via its own ingress. Now I have a different Ingress for another service setup somewhat like this. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: foo spec: tls: - hosts: - foo.domain.com rules: - host: foo.domain.com http: paths: - path: / pathType: Prefix backend: service: name: foo port: number: 5050 I want to setup Ingress with something similar to a Middleware in a way so every time someone wants to access the address foo.domain.com it directs the request to the Go API to check if the cookie is valid and then either allow or deny the access. Is this possible or do I have to do it completely differently? I'm quite new to kubernetes so I need some help. Thanks in advance :) I have found this post where the answer was this: annotations: nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth" nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri" But I have only found this in relation to an OAuth2 setup which I don't have. A: It is possible to use an ingress to control access to your service, but you will need to use a different approach than the one you mentioned in your post. To do this, you will need to use an ingress controller, such as NGINX, that supports authentication. You can then configure the ingress to use your Go API for authentication. Here is an example of how you might set this up: First, you will need to create a new ingress resource for your service that includes the nginx.ingress.kubernetes.io/auth-url and nginx.ingress.kubernetes.io/auth-signin annotations. The auth-url annotation specifies the URL of your authentication service, and the auth-signin annotation specifies the URL that users should be redirected to if they are not already authenticated. Here is an example of how these annotations might be used: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: foo annotations: nginx.ingress.kubernetes.io/auth-url: "https://foo.domain.com/auth" nginx.ingress.kubernetes.io/auth-signin: "https://foo.domain.com/login?rd=$escaped_request_uri" spec: tls: - hosts: - foo.domain.com rules: - host: foo.domain.com http: paths: - path: / pathType: Prefix backend: service: name: foo port: number: 5050 Next, you will need to configure your authentication service to handle authentication requests from the ingress controller. When a user tries to access your service, the ingress controller will make a request to the URL specified in the auth-url annotation. Your authentication service should return a 200 response if the user is authenticated, or a 401 response if the user is not authenticated. 
Here is an example of how this might work in your Go API: // HandleAuth is called by the ingress controller when it needs to authenticate a user func HandleAuth(w http.ResponseWriter, r *http.Request) { // Check if the user has a valid session token if _, err := r.Cookie("session_token"); err != nil { // User is not authenticated, return a 401 response w.WriteHeader(http.StatusUnauthorized) return } // User is authenticated, return a 200 response w.WriteHeader(http.StatusOK) } Once you have configured your ingress and authentication service, requests to your service will be automatically authenticated using your Go API. If a user is not authenticated, they will be redirected to the URL specified in the auth-signin annotation to sign in.
How to setup external "Middleware" for Nginx-Ingress with Go API
I have written an API in Go for Authentication. You can POST /register which will create a new user and save it in the Postgres DB Let's say I have an Ingress setup somewhat like this. Afterward you can POST /login with your credentials. This will create a session token and attach it to the cookie and creates an entry for the session in a Redis You can also GET /logout which will revoke your token by deleting it from Redis and expiring your cookie The API itself is also running in the kubernetes cluster in its own deployment with its own service and is also exposed via its own ingress. Now I have a different Ingress for another service setup somewhat like this. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: foo spec: tls: - hosts: - foo.domain.com rules: - host: foo.domain.com http: paths: - path: / pathType: Prefix backend: service: name: foo port: number: 5050 I want to setup Ingress with something similar to a Middleware in a way so every time someone wants to access the address foo.domain.com it directs the request to the Go API to check if the cookie is valid and then either allow or deny the access. Is this possible or do I have to do it completely differently? I'm quite new to kubernetes so I need some help. Thanks in advance :) I have found this post where the answer was this: annotations: nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth" nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri" But I have only found this in relation to an OAuth2 setup which I don't have.
[ "It is possible to use an ingress to control access to your service, but you will need to use a different approach than the one you mentioned in your post.\nTo do this, you will need to use an ingress controller, such as NGINX, that supports authentication. You can then configure the ingress to use your Go API for authentication.\nHere is an example of how you might set this up:\nFirst, you will need to create a new ingress resource for your service that includes the nginx.ingress.kubernetes.io/auth-url and nginx.ingress.kubernetes.io/auth-signin annotations. The auth-url annotation specifies the URL of your authentication service, and the auth-signin annotation specifies the URL that users should be redirected to if they are not already authenticated. Here is an example of how these annotations might be used:\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: foo\n annotations:\n nginx.ingress.kubernetes.io/auth-url: \"https://foo.domain.com/auth\"\n nginx.ingress.kubernetes.io/auth-signin: \"https://foo.domain.com/login?rd=$escaped_request_uri\"\nspec:\n tls:\n - hosts:\n - foo.domain.com\n rules:\n - host: foo.domain.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: foo\n port:\n number: 5050\n\nNext, you will need to configure your authentication service to handle authentication requests from the ingress controller. When a user tries to access your service, the ingress controller will make a request to the URL specified in the auth-url annotation. Your authentication service should return a 200 response if the user is authenticated, or a 401 response if the user is not authenticated. Here is an example of how this might work in your Go API:\n// HandleAuth is called by the ingress controller when it needs to authenticate a user\nfunc HandleAuth(w http.ResponseWriter, r *http.Request) {\n // Check if the user has a valid session token\n sessionToken := r.Cookie(\"session_token\")\n if sessionToken == nil {\n // User is not authenticated, return a 401 response\n w.WriteHeader(http.StatusUnauthorized)\n return\n }\n\n // User is authenticated, return a 200 response\n w.WriteHeader(http.StatusOK)\n}\n\nOnce you have configured your ingress and authentication service, requests to your service will be automatically authenticated using your Go API. If a user is not authenticated, they will be redirected to the URL specified in the auth-signin annotation to sign in.\n" ]
[ 0 ]
[]
[]
[ "go", "kubernetes", "kubernetes_ingress" ]
stackoverflow_0074643485_go_kubernetes_kubernetes_ingress.txt
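Worth noting about the HandleAuth sketch above: checking only that the cookie exists does not validate the session the question stores in Redis. A minimal sketch of a Redis-backed check follows; the go-redis client, the Redis address, and the "session:<token>" key scheme are assumptions, not part of the original answer.

package main

import (
	"context"
	"net/http"

	"github.com/redis/go-redis/v9"
)

var rdb = redis.NewClient(&redis.Options{Addr: "redis:6379"}) // address is an assumption

// HandleAuth answers the ingress controller's auth subrequest.
func HandleAuth(w http.ResponseWriter, r *http.Request) {
	cookie, err := r.Cookie("session_token")
	if err != nil {
		w.WriteHeader(http.StatusUnauthorized) // no cookie at all
		return
	}
	// Only a token that still exists in Redis counts as authenticated.
	if _, err := rdb.Get(context.Background(), "session:"+cookie.Value).Result(); err != nil {
		w.WriteHeader(http.StatusUnauthorized) // missing or expired session
		return
	}
	w.WriteHeader(http.StatusOK)
}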
Q: Galera 10 cluster node dropping queries We are encountering an error on Node 1 of a 5-node cluster. Queries to Node 1 seem to succeed from a client perspective but are failing to insert. We are seeing a lot of autoinc errors even though autoinc shouldn't be involved in the update queries. Also, this seems to cause performance issues until a higher priority transaction occurs knocking the node offline to perform transaction replay. Below are some of the entries in error.log with debugging on and a walkthrough of setup. We are at a loss of how to troubleshoot further. The only way to cause transactions to continue is for all the clients to drop and rebuild connection pool. Some details of the setup: 5 Nodes all acting as a master to their local server All connected via WAN Node 1 also has outside SQL connections for website access Each node is running inside of docker on the physical machine Here are some of the errors: 150703 5:56:27 [Note] WSREP: DUPKEY error for autoinc THD 5041, value 133622, off 2 inc 5 150703 5:56:27 [Note] WSREP: retrying insert: INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: innobase_commit, abort INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: cleanup transaction for LOCAL_STATE: INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: wsrep retrying AC query: INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = 
staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: DUPKEY error for autoinc THD 5041, value 133627, off 2 inc 5 150703 5:56:27 [Note] WSREP: releasing retry_query: conf 0 sent 0 kill 0 errno 0 SQL INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 Our config [MYSQLD] datadir=/data log-error=/data/error.log query_cache_size=0 binlog_format=ROW query_cache_type=0 bind-address=0.0.0.0 port=3304 innodb_buffer_pool_size=2048M innodb_flush_log_at_trx_commit=0 innodb_read_io_threads=4 innodb_write_io_threads=4 innodb_io_capacity=200 innodb_doublewrite=1 innodb_log_file_size=512M innodb_log_buffer_size=64M innodb_buffer_pool_instances=4 innodb_log_files_in_group=2 innodb_thread_concurrency=64 innodb_flush_method = O_DIRECT innodb_autoinc_lock_mode=2 innodb_stats_on_metadata=0 default_storage_engine=innodb binlog_format=ROW key_buffer_size = 24M tmp_table_size = 64M max_heap_table_size = 64M max_allowed_packet = 512M skip_name_resolve memlock=0 sysdate_is_now=1 max_connections=512 thread_cache_size=512 query_cache_type = 0 query_cache_size = 0 table_open_cache=1024 lower_case_table_names=0 wait_timeout = 28800 explicit_defaults_for_timestamp=1 wsrep_provider=/usr/lib/galera/libgalera_smm.so wsrep_provider_options="gcache.size=2048M; evs.keepalive_period=PT3S; evs.inactive_check_period=PT10S; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M; evs.install_timeout=PT1M; evs.send_window=1024; evs.user_send_window=512;" wsrep_cluster_name="<removed>" wsrep_cluster_address="<removed>" wsrep_slave_threads=4 wsrep_certify_nonPK=1 wsrep_max_ws_rows=131072 wsrep_max_ws_size=1073741824 wsrep_debug=1 wsrep_convert_LOCK_to_trx=0 wsrep_retry_autocommit=10 wsrep_auto_increment_control=1 wsrep_replicate_myisam=1 wsrep_drupal_282555_workaround=1 wsrep_causal_reads=0 wsrep_sst_method=rsync wsrep_log_conflicts=1 UPDATE: Per request for comments: mysql> SHOW CREATE TABLE server_live; +-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Table | Create Table | 
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | server_live | CREATE TABLE `server_live` ( `id` int(11) NOT NULL AUTO_INCREMENT, `server_id` int(11) NOT NULL, `performance_30` int(11) NOT NULL, `performance_120` int(11) NOT NULL, `performance_300` int(11) NOT NULL, `performance_600` int(11) NOT NULL, `players_online` int(11) NOT NULL, `staff_online` varchar(255) NOT NULL DEFAULT '{}', `staff_last_seen` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `uptime` int(11) NOT NULL, `worlds_loaded` int(11) NOT NULL, `chunks_loaded` int(11) NOT NULL, `entities_loaded` int(11) NOT NULL, `tileEntities_loaded` int(11) NOT NULL, `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `server_id_2` (`server_id`), CONSTRAINT `server_live_ibfk_1` FOREIGN KEY (`server_id`) REFERENCES `server` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION ) ENGINE=InnoDB AUTO_INCREMENT=720312 DEFAULT CHARSET=utf8 | +-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set mysql> SHOW VARIABLES LIKE 'auto%'; +--------------------------+-------+ | Variable_name | Value | +--------------------------+-------+ | auto_increment_increment | 5 | | auto_increment_offset | 3 | | autocommit | ON | | automatic_sp_privileges | ON | +--------------------------+-------+ A: It looks like the issue is related to Galera's auto increment handling. The error messages indicate that there are duplicate keys being generated for auto-increment columns, which is causing the queries to fail. One possible solution is to disable the use of auto-increment columns in the database schema. This can be done by setting the wsrep_auto_increment_control system variable to "off" in the my.cnf configuration file. 
Note that wsrep_auto_increment_control is a boolean (ON/OFF), not a multi-valued option: when it is ON, Galera automatically adjusts auto_increment_increment and auto_increment_offset as cluster membership changes so that each node generates a disjoint range of auto-increment values; interleaved lock handling itself comes from innodb_autoinc_lock_mode=2, which this configuration already sets. It's also worth checking the schema for the tables that are causing the errors to make sure that the auto-increment columns are properly defined and indexed. This can help prevent future issues with duplicate keys. Finally, it may be necessary to perform a rolling restart of the cluster nodes to apply the changes to the configuration and schema. This can be done by stopping and restarting each node in turn, allowing the cluster to resync and apply the changes before moving on to the next node.
Galera 10 cluster node dropping queries
We are encountering an error on Node 1 of a 5-node cluster. Queries to Node 1 seem to succeed from a client perspective but are failing to insert. We are seeing a lot of autoinc errors even though autoinc shouldn't be involved in the update queries. Also, this seems to cause performance issues until a higher priority transaction occurs knocking the node offline to perform transaction replay. Below are some of the entries in error.log with debugging on and a walkthrough of setup. We are at a loss of how to troubleshoot further. The only way to cause transactions to continue is for all the clients to drop and rebuild connection pool. Some details of the setup: 5 Nodes all acting as a master to their local server All connected via WAN Node 1 also has outside SQL connections for website access Each node is running inside of docker on the physical machine Here are some of the errors: 150703 5:56:27 [Note] WSREP: DUPKEY error for autoinc THD 5041, value 133622, off 2 inc 5 150703 5:56:27 [Note] WSREP: retrying insert: INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: innobase_commit, abort INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: cleanup transaction for LOCAL_STATE: INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: wsrep retrying AC query: INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 
13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 150703 5:56:27 [Note] WSREP: DUPKEY error for autoinc THD 5041, value 133627, off 2 inc 5 150703 5:56:27 [Note] WSREP: releasing retry_query: conf 0 sent 0 kill 0 errno 0 SQL INSERT INTO `server_live` (server_id, performance_30, performance_120, performance_300, performance_600, players_online, staff_online, staff_last_seen, uptime, worlds_loaded, chunks_loaded, entities_loaded, tileEntities_loaded) VALUES (79, 100, 100, 99, 99, 2, '{}', staff_last_seen, 15568, 13, 789, 384, 1101) ON DUPLICATE KEY UPDATE performance_30 = 100, performance_120 = 100, performance_300 = 99, performance_600 = 99, players_online = 2, staff_online = '{}', staff_last_seen = staff_last_seen, uptime = 15568, worlds_loaded = 13, chunks_loaded = 789, entities_loaded = 384, tileEntities_loaded = 1101 Our config [MYSQLD] datadir=/data log-error=/data/error.log query_cache_size=0 binlog_format=ROW query_cache_type=0 bind-address=0.0.0.0 port=3304 innodb_buffer_pool_size=2048M innodb_flush_log_at_trx_commit=0 innodb_read_io_threads=4 innodb_write_io_threads=4 innodb_io_capacity=200 innodb_doublewrite=1 innodb_log_file_size=512M innodb_log_buffer_size=64M innodb_buffer_pool_instances=4 innodb_log_files_in_group=2 innodb_thread_concurrency=64 innodb_flush_method = O_DIRECT innodb_autoinc_lock_mode=2 innodb_stats_on_metadata=0 default_storage_engine=innodb binlog_format=ROW key_buffer_size = 24M tmp_table_size = 64M max_heap_table_size = 64M max_allowed_packet = 512M skip_name_resolve memlock=0 sysdate_is_now=1 max_connections=512 thread_cache_size=512 query_cache_type = 0 query_cache_size = 0 table_open_cache=1024 lower_case_table_names=0 wait_timeout = 28800 explicit_defaults_for_timestamp=1 wsrep_provider=/usr/lib/galera/libgalera_smm.so wsrep_provider_options="gcache.size=2048M; evs.keepalive_period=PT3S; evs.inactive_check_period=PT10S; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M; evs.install_timeout=PT1M; evs.send_window=1024; evs.user_send_window=512;" wsrep_cluster_name="<removed>" wsrep_cluster_address="<removed>" wsrep_slave_threads=4 wsrep_certify_nonPK=1 wsrep_max_ws_rows=131072 wsrep_max_ws_size=1073741824 wsrep_debug=1 wsrep_convert_LOCK_to_trx=0 wsrep_retry_autocommit=10 wsrep_auto_increment_control=1 wsrep_replicate_myisam=1 wsrep_drupal_282555_workaround=1 wsrep_causal_reads=0 wsrep_sst_method=rsync wsrep_log_conflicts=1 UPDATE: Per request for comments: mysql> SHOW CREATE TABLE server_live; +-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Table | Create Table | 
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | server_live | CREATE TABLE `server_live` ( `id` int(11) NOT NULL AUTO_INCREMENT, `server_id` int(11) NOT NULL, `performance_30` int(11) NOT NULL, `performance_120` int(11) NOT NULL, `performance_300` int(11) NOT NULL, `performance_600` int(11) NOT NULL, `players_online` int(11) NOT NULL, `staff_online` varchar(255) NOT NULL DEFAULT '{}', `staff_last_seen` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `uptime` int(11) NOT NULL, `worlds_loaded` int(11) NOT NULL, `chunks_loaded` int(11) NOT NULL, `entities_loaded` int(11) NOT NULL, `tileEntities_loaded` int(11) NOT NULL, `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `server_id_2` (`server_id`), CONSTRAINT `server_live_ibfk_1` FOREIGN KEY (`server_id`) REFERENCES `server` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION ) ENGINE=InnoDB AUTO_INCREMENT=720312 DEFAULT CHARSET=utf8 | +-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set mysql> SHOW VARIABLES LIKE 'auto%'; +--------------------------+-------+ | Variable_name | Value | +--------------------------+-------+ | auto_increment_increment | 5 | | auto_increment_offset | 3 | | autocommit | ON | | automatic_sp_privileges | ON | +--------------------------+-------+
[ "It looks like the issue is related to Galera's auto increment handling. The error messages indicate that there are duplicate keys being generated for auto-increment columns, which is causing the queries to fail.\nOne possible solution is to disable the use of auto-increment columns in the database schema. This can be done by setting the wsrep_auto_increment_control system variable to \"off\" in the my.cnf configuration file.\nAnother possible solution is to adjust the wsrep_auto_increment_control system variable to use a different method for handling auto-increment columns. For example, setting it to \"none\" will cause Galera to use the same auto-increment values as the primary node, while setting it to \"interleaved\" will cause Galera to use a different range of values for each node.\nIt's also worth checking the schema for the tables that are causing the errors to make sure that the auto-increment columns are properly defined and indexed. This can help prevent future issues with duplicate keys.\nFinally, it may be necessary to perform a rolling restart of the cluster nodes to apply the changes to the configuration and schema. This can be done by stopping and restarting each node in turn, allowing the cluster to resync and apply the changes before moving on to the next node.\n" ]
[ 0 ]
[]
[]
[ "galera", "mariadb" ]
stackoverflow_0031225290_galera_mariadb.txt
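One detail worth connecting: INSERT ... ON DUPLICATE KEY UPDATE still reserves an auto-increment value even when it ends up updating, which is why the retry log above reports autoinc DUPKEY errors for what is logically an update. A workaround sometimes used, offered here as a sketch and an assumption rather than a confirmed fix for this cluster, is to avoid the insert path for rows that already exist:

-- Sketch: update first; server_id is UNIQUE, so at most one row matches.
UPDATE server_live
   SET performance_30 = 100, performance_120 = 100, performance_300 = 99,
       performance_600 = 99, players_online = 2, uptime = 15568
 WHERE server_id = 79;
-- Application side: only when no row for server_id = 79 exists yet,
-- fall back to a plain INSERT instead of ON DUPLICATE KEY UPDATE.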
Q: Linkedin Video api upload throwing error 500 I am trying to create Video ad Campaigns using LinkedIn API. I am using Laravel, Guzzle and I have followed steps given in documentation to upload video Initialize Upload for Video Upload the Video Finalize Video Upload On Initialization I receive success response with multiple upload Urls depending on the file size. Using these uploadUrl I am making request to upload the file chunk using Guzzle but in response it is throwing INternal server error 500. I understand it can be server error but not sure if that raised to any header, param or token. Please help If anyone has faced similar issue and got resolved. I am sharing the response of the initialize upload and request being made using Guzzle. Doc Link Response of Initialize APi {#1344 // app/Services/LinkedIn/LinkedInCampaignService.php:328 +"value": {#1324 +"uploadUrlsExpireAt": 1669664693370 +"video": "urn:li:video:C4D10AQGwksU16dn3Zw" +"uploadInstructions": array:2 [ 0 => {#1325 +"uploadUrl": "https://www.linkedin.com/dms-uploads/C4D10AQGwksU16dn3Zw/uploadedVideo?sau=aHR0cHM6Ly93d3cubGlua2VkaW4uY29tL2FtYnJ5L2FtYnJ5LXZpZGVvLz94LWxpLWFtYnJ5LWVwPUFRTE9OS2RwQmlCOS1BQUFBWVM2bndCNTd1MTR1Yjh5bVVKQ1BERDhFcVhIN1hxcXl1OHBiM3BuUVVCLV82dng2cjZscGVkWmNJajZFZXR2c2trZ1pKM1Z2MVJwWDRxQnQ4T1Z1SWxHNGlUbk85eF9tX082dE11MHhySnhod0RmbFNzUlBvWV90b1Fjdmd0TlZUTlNOQ2RlQkZKR2Zodk8tSktkcWlGMUFpa3pDZjVveDFMcnBQbkY3TXBaYkVkdlpKQXJnMGQ4R3gxQmFZWGR2SFA4aXdtRWRGdGlrSGNLRXVTa283eDhvWnNOZXRVX3I2WVlQa2dXaC1rZlVGbkh0MnNqVW03akItLVFtaGpzX3lwYTdiaEtMd0oxRFZyaEhvUE9KeGl2eFZKSEFELWVFM2txd2tHOWlvblByVm9IMU9tM2N4NXdTMU9TLUgtbjZyMmo4aHZIMFg5ckdlNWNSQkNjdUt0RmVCRkpGMzVoYnN1ZXdCZ3k1UkdxMjdpT1ZFVWRVVUdOelpxRWRKQXlGQXlFTTgteUtwbmVQalpORnFQcFVnVHl6cG80c3hqMGo2VDl6Nlp2cWNlcE1SaDBoZDRhY2Vhc1luUHNfUTc1cWNjbFBXQ2hKclpWU2NhaktRWk9WNlAyc3ZUU190cWFNQkZ1VGtWQ2Q3a0RIY2o5VmVaam1YY3hFREdpWVQzVmM4Vy1ieDdqZFRXMHBpRk9ZcURPTndTcjJZajNBZU4tcXVmRThtQy1qMzA4eEdic3NVQ0wxTTVSZTJjVmVxOS1pbDVQWmQ2MDRXU2lBSXhhejNDM09aenZmaXYtRkRwWlBIaEdscGVCYTdadFAycGRJMXR4eUpUdzVtcFFMTExiN1o2WGNscWoybWFlWkJwUkxZU0VIZXZ0Q29qMXNSUDRrcHQ0ZFluTUw1U0J6RV9qU2ZacW1pS21SS29RcnNrYWZrcUtUY0tMV1o5ZmZKTWZvaGNHTVE2ZTlRdlBJaGJHZ3ZSSFlWdFBjd1BOOG5uSG5rXzFIcE1SWWtWeGdoTVlhQm5KVGJrWDIxaExzYVBNVHlyM2FTQnlTME54c0c2UENMX202eDN2NlBmUC1nQXo1UDZOUGkxUHRubDE2Nm92Tm5ZNHNTdWxrNDdlaGVfZ1FZcnJDalZIbGVYZW4wU0g4TGUwaDNLczFLckdsekpyY0pqMjhQa29NQUdXN1ZCdktZN0ctdFZ0T0Q1cEswTTNsMUthUUQteGxsWnpVV3JLN2V2ZmtfZnEtZWNENTR5aHpKb1FVTzVmbWhDeDVIWERjT1ZYckhlWXlqVFlpbVQ5R1ZmdU9fbkdSZWtTczZENHZ6eVljejN1S0QxVmEzY3dnTU9heWQ4S3RBaVRjMGU3ZFVPdmJaMklNUnhnUlNKQnBaTmtDaHRybGRKQlJJUjRlakdMUkxJS2FpRElRUDlzMGkwUHZhNHVPOG8%253D&pn=1&m=91877349&app=4647153&sync=0&v=beta&ut=1cmOzfpLG0Caw1" +"lastByte": 4194303 +"firstByte": 0 } 1 => {#1345 +"uploadUrl": 
"https://www.linkedin.com/dms-uploads/C4D10AQGwksU16dn3Zw/uploadedVideo?sau=aHR0cHM6Ly93d3cubGlua2VkaW4uY29tL2FtYnJ5L2FtYnJ5LXZpZGVvLz94LWxpLWFtYnJ5LWVwPUFRTE9OS2RwQmlCOS1BQUFBWVM2bndCNTd1MTR1Yjh5bVVKQ1BERDhFcVhIN1hxcXl1OHBiM3BuUVVCLV82dng2cjZscGVkWmNJajZFZXR2c2trZ1pKM1Z2MVJwWDRxQnQ4T1Z1SWxHNGlUbk85eF9tX082dE11MHhySnhod0RmbFNzUlBvWV90b1Fjdmd0TlZUTlNOQ2RlQkZKR2Zodk8tSktkcWlGMUFpa3pDZjVveDFMcnBQbkY3TXBaYkVkdlpKQXJnMGQ4R3gxQmFZWGR2SFA4aXdtRWRGdGlrSGNLRXVTa283eDhvWnNOZXRVX3I2WVlQa2dXaC1rZlVGbkh0MnNqVW03akItLVFtaGpzX3lwYTdiaEtMd0oxRFZyaEhvUE9KeGl2eFZKSEFELWVFM2txd2tHOWlvblByVm9IMU9tM2N4NXdTMU9TLUgtbjZyMmo4aHZIMFg5ckdlNWNSQkNjdUt0RmVCRkpGMzVoYnN1ZXdCZ3k1UkdxMjdpT1ZFVWRVVUdOelpxRWRKQXlGQXlFTTgteUtwbmVQalpORnFQcFVnVHl6cG80c3hqMGo2VDl6Nlp2cWNlcE1SaDBoZDRhY2Vhc1luUHNfUTc1cWNjbFBXQ2hKclpWU2NhaktRWk9WNlAyc3ZUU190cWFNQkZ1VGtWQ2Q3a0RIY2o5VmVaam1YY3hFREdpWVQzVmM4Vy1ieDdqZFRXMHBpRk9ZcURPTndTcjJZajNBZU4tcXVmRThtQy1qMzA4eEdic3NVQ0wxTTVSZTJjVmVxOS1pbDVQWmQ2MDRXU2lBSXhhejNDM09aenZmaXYtRkRwWlBIaEdscGVCYTdadFAycGRJMXR4eUpUdzVtcFFMTExiN1o2WGNscWoybWFlWkJwUkxZU0VIZXZ0Q29qMXNSUDRrcHQ0ZFluTUw1U0J6RV9qU2ZacW1pS21SS29RcnNrYWZrcUtUY0tMV1o5ZmZKTWZvaGNHTVE2ZTlRdlBJaGJHZ3ZSSFlWdFBjd1BOOG5uSG5rXzFIcE1SWWtWeGdoTVlhQm5KVGJrWDIxaExzYVBNVHlyM2FTQnlTME54c0c2UENMX202eDN2NlBmUC1nQXo1UDZOUGkxUHRubDE2Nm92Tm5ZNHNTdWxrNDdlaGVfZ1FZcnJDalZIbGVYZW4wU0g4TGUwaDNLczFLckdsekpyY0pqMjhQa29NQUdXN1ZCdktZN0ctdFZ0T0Q1cEswTTNsMUthUUQteGxsWnpVV3JLN2V2ZmtfZnEtZWNENTR5aHpKb1FVTzVmbWhDeDVIWERjT1ZYckhlWXlqVFlpbVQ5R1ZmdU9fbkdSZWtTczZENHZ6eVljejN1S0QxVmEzY3dnTU9heWQ4S3RBaVRjMGU3ZFVPdmJaMklNUnhnUlNKQnBaTmtDaHRybGRKQlJJUjRlakdMUkxJS2FpRElRUDlzMGkwUHZhNHVPOG8%253D&pn=2&m=91877349&app=4647153&sync=0&v=beta&ut=3GSFILXAS0Caw1" +"lastByte": 5253879 +"firstByte": 4194304 } ] +"uploadToken": "" } } Calling Upload Urls public function uploadChunkedVideo($fileName, $uploadInstructions) { try { $fileHandler = fopen(storage_path('app') . '/' . $fileName, 'r'); $client = new Client(); $uploadPartIds = []; foreach ($uploadInstructions as $instruction) { $chunkedUpload = $client->put($instruction->uploadUrl, [ 'headers' => [ 'Content-Type' => 'application/octet-stream', 'Authorization' => 'Bearer ' . $this->accessToken, 'LinkedIn-Version' => config('services.linkedIn.version'), ], 'multipart' => [ [ 'name' => $fileName, 'contents' => fread($fileHandler, $instruction->lastByte - $instruction->firstByte), ] ], ]); //Push etag $uploadPartIds[] = $chunkedUpload->getHeader('ETag')[0]; } } catch (\GuzzleHttp\Exception\ServerException $e) { dd($e->getResponse()->getBody()->getContents()); } } A: This worked for me. $fileHandler = fopen(storage_path('app') . '/' . $fileName, 'r'); $client = new Client(); $uploadPartIds = []; foreach ($uploadInstructions as $instruction) { $chunkedUpload = $client->put($instruction->uploadUrl, [ 'headers' => [ 'Content-Type' => 'application/octet-stream', 'Authorization' => 'Bearer ' . $this->accessToken, 'LinkedIn-Version' => config('services.linkedIn.version'), ], 'body' => fread($fileHandler, $instruction->lastByte - $instruction->firstByte), ]); //Push etag $uploadPartIds[] = $chunkedUpload->getHeader('ETag')[0]; info($uploadPartIds); } return $uploadPartIds;
Linkedin Video api upload throwing error 500
I am trying to create Video ad Campaigns using LinkedIn API. I am using Laravel, Guzzle and I have followed steps given in documentation to upload video Initialize Upload for Video Upload the Video Finalize Video Upload On Initialization I receive success response with multiple upload Urls depending on the file size. Using these uploadUrl I am making request to upload the file chunk using Guzzle but in response it is throwing INternal server error 500. I understand it can be server error but not sure if that raised to any header, param or token. Please help If anyone has faced similar issue and got resolved. I am sharing the response of the initialize upload and request being made using Guzzle. Doc Link Response of Initialize APi {#1344 // app/Services/LinkedIn/LinkedInCampaignService.php:328 +"value": {#1324 +"uploadUrlsExpireAt": 1669664693370 +"video": "urn:li:video:C4D10AQGwksU16dn3Zw" +"uploadInstructions": array:2 [ 0 => {#1325 +"uploadUrl": "https://www.linkedin.com/dms-uploads/C4D10AQGwksU16dn3Zw/uploadedVideo?sau=aHR0cHM6Ly93d3cubGlua2VkaW4uY29tL2FtYnJ5L2FtYnJ5LXZpZGVvLz94LWxpLWFtYnJ5LWVwPUFRTE9OS2RwQmlCOS1BQUFBWVM2bndCNTd1MTR1Yjh5bVVKQ1BERDhFcVhIN1hxcXl1OHBiM3BuUVVCLV82dng2cjZscGVkWmNJajZFZXR2c2trZ1pKM1Z2MVJwWDRxQnQ4T1Z1SWxHNGlUbk85eF9tX082dE11MHhySnhod0RmbFNzUlBvWV90b1Fjdmd0TlZUTlNOQ2RlQkZKR2Zodk8tSktkcWlGMUFpa3pDZjVveDFMcnBQbkY3TXBaYkVkdlpKQXJnMGQ4R3gxQmFZWGR2SFA4aXdtRWRGdGlrSGNLRXVTa283eDhvWnNOZXRVX3I2WVlQa2dXaC1rZlVGbkh0MnNqVW03akItLVFtaGpzX3lwYTdiaEtMd0oxRFZyaEhvUE9KeGl2eFZKSEFELWVFM2txd2tHOWlvblByVm9IMU9tM2N4NXdTMU9TLUgtbjZyMmo4aHZIMFg5ckdlNWNSQkNjdUt0RmVCRkpGMzVoYnN1ZXdCZ3k1UkdxMjdpT1ZFVWRVVUdOelpxRWRKQXlGQXlFTTgteUtwbmVQalpORnFQcFVnVHl6cG80c3hqMGo2VDl6Nlp2cWNlcE1SaDBoZDRhY2Vhc1luUHNfUTc1cWNjbFBXQ2hKclpWU2NhaktRWk9WNlAyc3ZUU190cWFNQkZ1VGtWQ2Q3a0RIY2o5VmVaam1YY3hFREdpWVQzVmM4Vy1ieDdqZFRXMHBpRk9ZcURPTndTcjJZajNBZU4tcXVmRThtQy1qMzA4eEdic3NVQ0wxTTVSZTJjVmVxOS1pbDVQWmQ2MDRXU2lBSXhhejNDM09aenZmaXYtRkRwWlBIaEdscGVCYTdadFAycGRJMXR4eUpUdzVtcFFMTExiN1o2WGNscWoybWFlWkJwUkxZU0VIZXZ0Q29qMXNSUDRrcHQ0ZFluTUw1U0J6RV9qU2ZacW1pS21SS29RcnNrYWZrcUtUY0tMV1o5ZmZKTWZvaGNHTVE2ZTlRdlBJaGJHZ3ZSSFlWdFBjd1BOOG5uSG5rXzFIcE1SWWtWeGdoTVlhQm5KVGJrWDIxaExzYVBNVHlyM2FTQnlTME54c0c2UENMX202eDN2NlBmUC1nQXo1UDZOUGkxUHRubDE2Nm92Tm5ZNHNTdWxrNDdlaGVfZ1FZcnJDalZIbGVYZW4wU0g4TGUwaDNLczFLckdsekpyY0pqMjhQa29NQUdXN1ZCdktZN0ctdFZ0T0Q1cEswTTNsMUthUUQteGxsWnpVV3JLN2V2ZmtfZnEtZWNENTR5aHpKb1FVTzVmbWhDeDVIWERjT1ZYckhlWXlqVFlpbVQ5R1ZmdU9fbkdSZWtTczZENHZ6eVljejN1S0QxVmEzY3dnTU9heWQ4S3RBaVRjMGU3ZFVPdmJaMklNUnhnUlNKQnBaTmtDaHRybGRKQlJJUjRlakdMUkxJS2FpRElRUDlzMGkwUHZhNHVPOG8%253D&pn=1&m=91877349&app=4647153&sync=0&v=beta&ut=1cmOzfpLG0Caw1" +"lastByte": 4194303 +"firstByte": 0 } 1 => {#1345 +"uploadUrl": 
"https://www.linkedin.com/dms-uploads/C4D10AQGwksU16dn3Zw/uploadedVideo?sau=aHR0cHM6Ly93d3cubGlua2VkaW4uY29tL2FtYnJ5L2FtYnJ5LXZpZGVvLz94LWxpLWFtYnJ5LWVwPUFRTE9OS2RwQmlCOS1BQUFBWVM2bndCNTd1MTR1Yjh5bVVKQ1BERDhFcVhIN1hxcXl1OHBiM3BuUVVCLV82dng2cjZscGVkWmNJajZFZXR2c2trZ1pKM1Z2MVJwWDRxQnQ4T1Z1SWxHNGlUbk85eF9tX082dE11MHhySnhod0RmbFNzUlBvWV90b1Fjdmd0TlZUTlNOQ2RlQkZKR2Zodk8tSktkcWlGMUFpa3pDZjVveDFMcnBQbkY3TXBaYkVkdlpKQXJnMGQ4R3gxQmFZWGR2SFA4aXdtRWRGdGlrSGNLRXVTa283eDhvWnNOZXRVX3I2WVlQa2dXaC1rZlVGbkh0MnNqVW03akItLVFtaGpzX3lwYTdiaEtMd0oxRFZyaEhvUE9KeGl2eFZKSEFELWVFM2txd2tHOWlvblByVm9IMU9tM2N4NXdTMU9TLUgtbjZyMmo4aHZIMFg5ckdlNWNSQkNjdUt0RmVCRkpGMzVoYnN1ZXdCZ3k1UkdxMjdpT1ZFVWRVVUdOelpxRWRKQXlGQXlFTTgteUtwbmVQalpORnFQcFVnVHl6cG80c3hqMGo2VDl6Nlp2cWNlcE1SaDBoZDRhY2Vhc1luUHNfUTc1cWNjbFBXQ2hKclpWU2NhaktRWk9WNlAyc3ZUU190cWFNQkZ1VGtWQ2Q3a0RIY2o5VmVaam1YY3hFREdpWVQzVmM4Vy1ieDdqZFRXMHBpRk9ZcURPTndTcjJZajNBZU4tcXVmRThtQy1qMzA4eEdic3NVQ0wxTTVSZTJjVmVxOS1pbDVQWmQ2MDRXU2lBSXhhejNDM09aenZmaXYtRkRwWlBIaEdscGVCYTdadFAycGRJMXR4eUpUdzVtcFFMTExiN1o2WGNscWoybWFlWkJwUkxZU0VIZXZ0Q29qMXNSUDRrcHQ0ZFluTUw1U0J6RV9qU2ZacW1pS21SS29RcnNrYWZrcUtUY0tMV1o5ZmZKTWZvaGNHTVE2ZTlRdlBJaGJHZ3ZSSFlWdFBjd1BOOG5uSG5rXzFIcE1SWWtWeGdoTVlhQm5KVGJrWDIxaExzYVBNVHlyM2FTQnlTME54c0c2UENMX202eDN2NlBmUC1nQXo1UDZOUGkxUHRubDE2Nm92Tm5ZNHNTdWxrNDdlaGVfZ1FZcnJDalZIbGVYZW4wU0g4TGUwaDNLczFLckdsekpyY0pqMjhQa29NQUdXN1ZCdktZN0ctdFZ0T0Q1cEswTTNsMUthUUQteGxsWnpVV3JLN2V2ZmtfZnEtZWNENTR5aHpKb1FVTzVmbWhDeDVIWERjT1ZYckhlWXlqVFlpbVQ5R1ZmdU9fbkdSZWtTczZENHZ6eVljejN1S0QxVmEzY3dnTU9heWQ4S3RBaVRjMGU3ZFVPdmJaMklNUnhnUlNKQnBaTmtDaHRybGRKQlJJUjRlakdMUkxJS2FpRElRUDlzMGkwUHZhNHVPOG8%253D&pn=2&m=91877349&app=4647153&sync=0&v=beta&ut=3GSFILXAS0Caw1" +"lastByte": 5253879 +"firstByte": 4194304 } ] +"uploadToken": "" } } Calling Upload Urls public function uploadChunkedVideo($fileName, $uploadInstructions) { try { $fileHandler = fopen(storage_path('app') . '/' . $fileName, 'r'); $client = new Client(); $uploadPartIds = []; foreach ($uploadInstructions as $instruction) { $chunkedUpload = $client->put($instruction->uploadUrl, [ 'headers' => [ 'Content-Type' => 'application/octet-stream', 'Authorization' => 'Bearer ' . $this->accessToken, 'LinkedIn-Version' => config('services.linkedIn.version'), ], 'multipart' => [ [ 'name' => $fileName, 'contents' => fread($fileHandler, $instruction->lastByte - $instruction->firstByte), ] ], ]); //Push etag $uploadPartIds[] = $chunkedUpload->getHeader('ETag')[0]; } } catch (\GuzzleHttp\Exception\ServerException $e) { dd($e->getResponse()->getBody()->getContents()); } }
[ "This worked for me.\n $fileHandler = fopen(storage_path('app') . '/' . $fileName, 'r');\n$client = new Client();\n$uploadPartIds = [];\nforeach ($uploadInstructions as $instruction) {\n $chunkedUpload = $client->put($instruction->uploadUrl, [\n 'headers' => [\n 'Content-Type' => 'application/octet-stream',\n 'Authorization' => 'Bearer ' . $this->accessToken,\n 'LinkedIn-Version' => config('services.linkedIn.version'),\n ],\n 'body' => fread($fileHandler, $instruction->lastByte - $instruction->firstByte),\n ]);\n\n //Push etag\n $uploadPartIds[] = $chunkedUpload->getHeader('ETag')[0];\n info($uploadPartIds);\n \n}\nreturn $uploadPartIds;\n\n" ]
[ 0 ]
[]
[]
[ "file_upload", "guzzle", "linkedin_api", "multipartform_data", "php" ]
stackoverflow_0074593441_file_upload_guzzle_linkedin_api_multipartform_data_php.txt
Q: Trying to add router with bottom navigation bar in CupertinoApp I am trying to add router with bottom navigation bar in CupertinoApp, but Navigator.pushNamed(context,anotherPage) is giving error Could not find a generator for route RouteSettings("/anotherPage", null) in the _CupertinoTabViewState. but Navigator.push(context, CupertinoPageRoute(builder: (context)=>AnotherPage())); is working Sample code: return CupertinoApp( localizationsDelegates: <LocalizationsDelegate<dynamic>>[ DefaultMaterialLocalizations.delegate, DefaultWidgetsLocalizations.delegate, DefaultCupertinoLocalizations.delegate, ], theme: CupertinoThemeData(brightness: Brightness.light), onGenerateRoute: Router.generateRoute, initialRoute: splashScreen, ); }} //router class class Router { static Route<dynamic> generateRoute(RouteSettings settings) { switch (settings.name) { case homeRoute: return CupertinoPageRoute(builder: (_) => CupertinoHomePage()); case productDetails: final ProductDetails args = settings.arguments; return CupertinoPageRoute( builder: (_) => ProductDetails(args.productsPojo, args.userId)); case anotherPage: return MaterialPageRoute(builder: (_) => AnotherPage()); case splashScreen: return MaterialPageRoute(builder: (_) => SplashScreen()); default: return MaterialPageRoute(builder: (_) => UndefinedView(name: settings.name)); } } } A: I struggled with this for some time. CupertinoTabView has a 'routes' property. Put your app routes here return CupertinoTabView( routes: appRoutes, builder: (BuildContext context) { return CupertinoPageScaffold( navigationBar: CupertinoNavigationBar( middle: Text( titles[currentRoute] ), trailing: FlatButton( child: Icon(Icons.search, color: Colors.white,), onPressed: openSearch, ), ), child: Material( child: Center( child: routes[currentRoute], ), ), ); }, ); appRoutes: final appRoutes = { '/exampleRoute': (context) => ExampleRoute(), '/exampleRoute2': (context) => ExampleRoute2(), } You'll basically have to copy the routes you already declared in main.dart A: @applejeewce said right. You can also set onGenerateRoutes in place of routes in this scenario.
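A minimal sketch of the second suggestion, reusing the Router class from the question (note the CupertinoTabView parameter is onGenerateRoute, singular):
CupertinoTabView(
  onGenerateRoute: Router.generateRoute,
  builder: (BuildContext context) => CupertinoHomePage(),
);
This lets each tab's nested Navigator resolve the same named routes that the CupertinoApp itself uses.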
Trying to add router with bottom navigation bar in CupertinoApp
I am trying to add router with bottom navigation bar in CupertinoApp, but Navigator.pushNamed(context,anotherPage) is giving error Could not find a generator for route RouteSettings("/anotherPage", null) in the _CupertinoTabViewState. but Navigator.push(context, CupertinoPageRoute(builder: (context)=>AnotherPage())); is working Sample code: return CupertinoApp( localizationsDelegates: <LocalizationsDelegate<dynamic>>[ DefaultMaterialLocalizations.delegate, DefaultWidgetsLocalizations.delegate, DefaultCupertinoLocalizations.delegate, ], theme: CupertinoThemeData(brightness: Brightness.light), onGenerateRoute: Router.generateRoute, initialRoute: splashScreen, ); }} //router class class Router { static Route<dynamic> generateRoute(RouteSettings settings) { switch (settings.name) { case homeRoute: return CupertinoPageRoute(builder: (_) => CupertinoHomePage()); case productDetails: final ProductDetails args = settings.arguments; return CupertinoPageRoute( builder: (_) => ProductDetails(args.productsPojo, args.userId)); case anotherPage: return MaterialPageRoute(builder: (_) => AnotherPage()); case splashScreen: return MaterialPageRoute(builder: (_) => SplashScreen()); default: return MaterialPageRoute(builder: (_) => UndefinedView(name: settings.name)); } } }
[ "I struggled with this for some time.\nCupertinoTabView has a 'routes' property. Put your app routes here\nreturn CupertinoTabView(\n routes: appRoutes,\n builder: (BuildContext context) {\n return CupertinoPageScaffold(\n navigationBar: CupertinoNavigationBar(\n middle: Text(\n titles[currentRoute]\n ),\n trailing: FlatButton(\n child: Icon(Icons.search, color: Colors.white,),\n onPressed: openSearch,\n ),\n ),\n child: Material(\n child: Center(\n child: routes[currentRoute],\n ),\n ),\n );\n },\n );\n\nappRoutes:\nfinal appRoutes = {\n '/exampleRoute': (context) => ExampleRoute(),\n '/exampleRoute2': (context) => ExampleRoute2(),\n}\n\nYou'll basically have to copy the routes you already declared in main.dart\n", "@applejeewce said right.\nYou can also set onGenerateRoutes in place of routes in this scenario.\n" ]
[ 5, 0 ]
[]
[]
[ "dart", "flutter", "flutter_cupertino", "ios" ]
stackoverflow_0062554769_dart_flutter_flutter_cupertino_ios.txt
Q: Allow to enter only number 1 to 49 in the TextField in Flutter I have a TextField in a Flutter app and I need it to only be able to enter a number from 1 to 49. Thanks in advance for all the tips. TextField( controller: TextEditingController() ..text = (quantity ?? "").toString() ..selection = TextSelection.collapsed(offset: (quantity ?? "").toString().length), inputFormatters: <TextInputFormatter>[ LengthLimitingTextInputFormatter(2), FilteringTextInputFormatter.digitsOnly, ], enabled: true, ), A: To restrict the values that can be entered into a TextField in a Flutter app to only numbers from 1 to 49, you can use a WhitelistingTextInputFormatter and specify the allowed characters as a regular expression. Here is an example of how you could do this: TextField( controller: TextEditingController() ..text = (quantity ?? "").toString() ..selection = TextSelection.collapsed(offset: (quantity ?? "").toString().length), inputFormatters: <TextInputFormatter>[ // Limit the input to 2 characters. LengthLimitingTextInputFormatter(2), // Only allow digits to be entered. FilteringTextInputFormatter.digitsOnly, // Only allow numbers from 1 to 49 to be entered. WhitelistingTextInputFormatter(RegExp("^([1-9]|[1-4][0-9]|49)$")), ], enabled: true, ), In the code above, I have added a WhitelistingTextInputFormatter to the `inputFormatters` list, with a regular expression that only accepts whole numbers from 1 to 49.
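On newer Flutter versions WhitelistingTextInputFormatter is deprecated in favor of FilteringTextInputFormatter.allow, which takes the same kind of pattern; a sketch of the equivalent formatter (the |49 alternative in the answer's regex is redundant, since [1-4][0-9] already covers 10 through 49):
inputFormatters: <TextInputFormatter>[
  LengthLimitingTextInputFormatter(2),
  FilteringTextInputFormatter.allow(RegExp(r'^([1-9]|[1-4][0-9])$')),
],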
Allow to enter only number 1 to 49 in the TextField in Flutter
I have a TextField in a Flutter app and I need it to only be able to enter a number from 1 to 49. Thanks in advance for all the tips. TextField( controller: TextEditingController() ..text = (quantity ?? "").toString() ..selection = TextSelection.collapsed(offset: (quantity ?? "").toString().length), inputFormatters: <TextInputFormatter>[ LengthLimitingTextInputFormatter(2), FilteringTextInputFormatter.digitsOnly, ], enabled: true, ),
[ "To restrict the values that can be entered into a TextField in a Flutter app to only numbers from 1 to 49, you can use a WhitelistingTextInputFormatter and specify the allowed characters as a regular expression. Here is an example of how you could do this:\nTextField(\n controller: TextEditingController()\n ..text = (quantity ?? \"\").toString()\n ..selection = TextSelection.collapsed(offset: (quantity ?? \"\").toString().length),\n inputFormatters: <TextInputFormatter>[\n // Limit the input to 2 characters.\n LengthLimitingTextInputFormatter(2),\n // Only allow digits to be entered.\n FilteringTextInputFormatter.digitsOnly,\n // Only allow numbers from 1 to 49 to be entered.\n WhitelistingTextInputFormatter(RegExp(\"^([1-9]|[1-4][0-9]|49)$\")),\n ],\n enabled: true,\n),\n\nIn the code above, I have added a WhitelistingTextInputFormatter to the `inputFormatters\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "flutter_textinputfield" ]
stackoverflow_0074670003_dart_flutter_flutter_textinputfield.txt
Q: Blazor - call statehaschanged from service how can call from OfferteService.cs to get a StateHasChanged function in detailsOfferta.razor? hello how can I call the StateHasChanged function from a service to update the detailsOfferta.razor page by the function statehaschanged? Thank you very much I tried invokeasync but it does not work A: if you have a BlazorServer, then the state of your components-view is stored on the server. You can do so. Register the service as a scoped, this gives the service lifetime equal to your components, if the lifetime is needed more than the register how singleton. Declare an event in the service, in my case it is RX observable. Inject the service into the component and subscribe on event. public partial class YourComponent : IDisposable { private IDisposable _disposable = null; [Inject] public ITimerService TimerService { get; set; } public string Time { get; set; } protected override async Task OnInitializedAsync() { _disposable = TimerService.Times.Subscribe(OnTimeMessage); } private void OnTimeMessage(string time) { Time = time; StateHasChanged(); } public void Dispose() { _disposable?.Dispose(); } } public interface ITimeService { IObservable<string> Times { get; } } public class TimeService : ITimeService { private readonly Subject<string> _subject = new(); private Timer _timer; public TimeService() { _timer = new Timer(() => { _subject.OnNext(DateTime.UtcNow.ToString("G")); }, null, 1000, 1000); } public void Dispose() { _timer.Dispose(); _subject.Dispose(); } public void PublishError(string error) { _subject.OnNext(error); } public IObservable<string> Times() { return _subject; } } // In host initialization //services.AddSingleton<ITimeService, TimeService>(); services.AddScoped<ITimeService, TimeService>(); A: (How do I) call StateHasChanged from service You don't. What you need to implement is the Notification pattern. Your data, and it's management, should reside in your service. When something changes in that service, a service level event is raised: this invokes any registeted handlers. Components that display data from the service register event handlers that call StateHasChanged when they are invoked. This answer to a similar question describes how to build a notication service for the Blazor WeatherForecast - https://stackoverflow.com/a/69562295/13065781
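A minimal sketch of the notification pattern described in the second answer, using the names from the question; the HandleChange handler and NotifyStateChanged method are illustrative, not part of the original code:
// OfferteService.cs (register as scoped)
public class OfferteService
{
    public event Action? OnChange;
    public void NotifyStateChanged() => OnChange?.Invoke();
}
// detailsOfferta.razor
@implements IDisposable
@inject OfferteService OfferteService
@code {
    protected override void OnInitialized() => OfferteService.OnChange += HandleChange;
    private void HandleChange() => InvokeAsync(StateHasChanged);
    public void Dispose() => OfferteService.OnChange -= HandleChange;
}
The InvokeAsync wrapper matters: StateHasChanged must run on the renderer's synchronization context when the event fires from outside it.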
Blazor - call statehaschanged from service
How can I call StateHasChanged from OfferteService.cs to refresh detailsOfferta.razor? I want to update the detailsOfferta.razor page from the service by triggering StateHasChanged. Thank you very much. I tried InvokeAsync but it does not work.
[ "if you have a BlazorServer, then the state of your components-view is stored on the server. You can do so. Register the service as a scoped, this gives the service lifetime equal to your components, if the lifetime is needed more than the register how singleton.\nDeclare an event in the service, in my case it is RX observable.\nInject the service into the component and subscribe on event.\n public partial class YourComponent : IDisposable\n {\n private IDisposable _disposable = null;\n [Inject] public ITimerService TimerService { get; set; }\n public string Time { get; set; }\n\n protected override async Task OnInitializedAsync()\n {\n _disposable = TimerService.Times.Subscribe(OnTimeMessage);\n }\n\n private void OnTimeMessage(string time)\n {\n Time = time;\n StateHasChanged();\n }\n\n public void Dispose()\n {\n _disposable?.Dispose();\n }\n }\n\n public interface ITimeService\n {\n IObservable<string> Times { get; }\n }\n\n public class TimeService : ITimeService\n {\n private readonly Subject<string> _subject = new();\n private Timer _timer;\n\n public TimeService()\n {\n _timer = new Timer(() =>\n {\n _subject.OnNext(DateTime.UtcNow.ToString(\"G\"));\n }, null, 1000, 1000);\n }\n\n public void Dispose()\n {\n _timer.Dispose();\n _subject.Dispose();\n }\n\n public void PublishError(string error)\n {\n _subject.OnNext(error);\n }\n\n public IObservable<string> Times()\n {\n return _subject;\n }\n }\n\n// In host initialization \n//services.AddSingleton<ITimeService, TimeService>();\nservices.AddScoped<ITimeService, TimeService>();\n\n", "\n(How do I) call StateHasChanged from service\n\nYou don't.\nWhat you need to implement is the Notification pattern.\nYour data, and it's management, should reside in your service. When something changes in that service, a service level event is raised: this invokes any registeted handlers. Components that display data from the service register event handlers that call StateHasChanged when they are invoked.\nThis answer to a similar question describes how to build a notication service for the Blazor WeatherForecast - https://stackoverflow.com/a/69562295/13065781\n" ]
[ 0, 0 ]
[]
[]
[ ".net", "blazor", "c#", "razor" ]
stackoverflow_0074648054_.net_blazor_c#_razor.txt
Q: TypeError converting fahrenheit to celsius in python my code: temperature_f = input('Please enter the temperature :') print('The temperature is' , 1.8 / (temperature_f - 32) ,'centigrade') run code: Please enter the temperature :50 Traceback (most recent call last): File "c:\Users\Aryan\.vscode\py\test1.py", line 2, in <module> print('The temperature is' , 1.8 / (temperature_f - 32) ,'centigrade') ~~~~~~~~~~~~~~^~~~ TypeError: unsupported operand type(s) for -: 'str' and 'int' How can I fix this error? I want to write code that converts Fahrenheit to Celsius, but I am getting this error. Please tell me how I can fix it. A: You have to type cast your input: temperature_f = int(input('Please enter the temperature :')) A: Type cast the temperature string into a float, and divide the difference by 1.8 rather than the other way around: temperature_f = float(input('Please enter the temperature :')) print('The temperature is' , (temperature_f - 32) / 1.8 ,'centigrade')
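Putting the two fixes together, a minimal corrected script; float() accepts decimal input, and the Celsius conversion divides the difference by 1.8 (equivalently, multiplies by 5/9):
temperature_f = float(input('Please enter the temperature :'))
celsius = (temperature_f - 32) / 1.8
print('The temperature is', celsius, 'centigrade')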
TypeError converting fahrenheit to celsius in python
my code: temperature_f = input('Please enter the temperature :') print('The temperature is' , 1.8 / (temperature_f - 32) ,'centigrade') run code: Please enter the temperature :50 Traceback (most recent call last): File "c:\Users\Aryan\.vscode\py\test1.py", line 2, in <module> print('The temperature is' , 1.8 / (temperature_f - 32) ,'centigrade') ~~~~~~~~~~~~~~^~~~ TypeError: unsupported operand type(s) for -: 'str' and 'int' How can I fix this error? I want to write a code that will convert fahrenheit to celsius for me but i am getting this error Please tell me how I can fix this error
[ "You have to type cast your input\ntemperature_f = input(int('Please enter the temperature :'))\n\n", "type cast temperature string into float.\ntemperature_f = float(input('Please enter the temperature :'))\nprint('The temperature is' , 1.8 / (temperature_f - 32) ,'centigrade')\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074669949_python.txt
Q: openAI DALL-E ModuleNotFoundError I installed DALL-E following the instructions on https://github.com/openai/DALL-E and got : ---> 10 from dall_e import map_pixels, unmap_pixels, load_model 11 from IPython.display import display, display_markdown 12 ModuleNotFoundError: No module named 'dall_e' A: I found that it helped when I changed which Python version I was using. It fixed my issue when I changed mine from 3.7.- to 3.10.7.
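One way to apply that fix, assuming a 3.10 interpreter is already installed, is a fresh virtual environment so the package installs against the newer Python (check the repository's README for the exact package name if this fails):
python3.10 -m venv .venv
source .venv/bin/activate
pip install DALL-E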
openAI DALL-E ModuleNotFoundError
I installed DALL-E following the instructions on https://github.com/openai/DALL-E and got : ---> 10 from dall_e import map_pixels, unmap_pixels, load_model 11 from IPython.display import display, display_markdown 12 ModuleNotFoundError: No module named 'dall_e'
[ "I found that it helped when I changed which Python version I was using.\nIt fixed my issue when I changed mine from 3.7.- to 3.10.7.\n" ]
[ 1 ]
[]
[]
[ "openai", "python", "pytorch" ]
stackoverflow_0072078830_openai_python_pytorch.txt
Q: How to render subchild above parent overlay I want a gray overlay above all children except for the selected one. Given the following structure: <div class="parent"> <!-- I have this subparent which is absolute. I cannot remove it... --> <div class="subParent1"> <div class="subParent2"> <!-- This child I want to be above the OVERLAY, aka not greyed out --> <div class="child selected">child</div> <div class="child">child</div> <div class="child">child</div> </div> </div> <!-- This component is underneat subParent in the tree structure --> <div class="grayOverlay"></div> </div> Here's an exact fiddle. Maybe, I could use a pseudo-element instead? PS: I updated the children to be a bit more nested to align with my actual code. A: You can take the reference from below code. I have altered the CSS a bit. I have added z-index wherever required you can optimise that. Also, removed position: absolute; from subParent1 and added top: 0; left: 0; on the grayOverlay. You can optimise it or change it as per you preference. .parent { width: 300px; height: 300px; position: relative; background-color: gray; } .grayOverlay { position: absolute; width: 100%; height: 100%; top: 0; left: 0; background-color: rgb(107 114 128 / 0.8); z-index: 11000; } .subParent1 { display: flex; flex-direction: column; width: 100%; z-index: 12000; } .child { color: black; width: 50px; height: 20px; background-color: white; margin: 10px; z-index: 10000; } .childIWantOverOverlay { background-color: red; z-index: 12000; } <div class="parent"> <div class="subParent1"> <div class="child childIWantOverOverlay">child</div> <div class="child">child</div> <div class="child">child</div> </div> <!-- This component is underneat subParent in the tree structure --> <div class="grayOverlay"></div> </div> A: UPDATE 2 Perhaps also consider use a pseudo-element for this, if it is acceptable in the actual use case. This approach is more isolated, so it might be less likely to have conflict with other existing elements in the actual project. Example with pseudo-element: const btn = document.querySelector("button"); const divs = document.querySelectorAll("div.child"); let i = 0; btn.addEventListener("click", () => { divs[i].classList.toggle("selected"); if (i < 2) { divs[i + 1].classList.toggle("selected") i++; return; }; if (i >= 2) { i = 0; divs[i].classList.toggle("selected"); } }); /* Can Change */ .parent { width: 300px; height: 300px; position: relative; } /* Add this */ .parent::after { content: ""; position: absolute; inset: 0; background-color: rgb(107 114 128 / 0.5); z-index: 50; } /* Disabled for now .grayOverlay { position: absolute; width: 100%; height: 100%; background-color: rgb(107 114 128 / 0.5); z-index: 50; } */ /* CANNOT CHANGE */ .subParent1 { position: absolute; display: flex; flex-direction: column; width: 100%; } /* Can Change */ .child { color: black; width: 50px; height: 20px; background-color: pink; margin: 10px; z-index: 25; position: relative; } /* Can Change */ .selected { background-color: red; /* Add z-index */ z-index: 100; } button { margin-bottom: 1em; padding: 6px; } <button>Toggle</button> <div class="parent"> <div class="subParent1"> <div class="subParent2"> <div class="child selected">child</div> <div class="child">child</div> <div class="child">child</div> </div> </div> <!-- This component is underneat subParent in the tree structure. I cannot move this into subParent1 --> <!-- <div class="grayOverlay"></div> --> </div> Update: also added position: relative on child. 
It seems that this can be achieved by removing the z-index on grayOverlay and subParent1 (the grayOverlay is still stacked on top due to natural placement), and add some z-index on selected. Example: const btn = document.querySelector("button"); const divs = document.querySelectorAll("div.child"); let i = 0; btn.addEventListener("click", () => { divs[i].classList.toggle("selected"); if (i < 2) { divs[i + 1].classList.toggle("selected") i++; return; }; if (i >= 2) { i = 0; divs[i].classList.toggle("selected"); } }); /* Can Change */ .parent { width: 300px; height: 300px; position: relative; } /* Can Change */ .grayOverlay { position: absolute; width: 100%; height: 100%; background-color: rgb(107 114 128 / 0.5); /* Removed z-index */ } /* CANNOT CHANGE */ .subParent1 { position: absolute; display: flex; flex-direction: column; width: 100%; /* Removed z-index */ } /* Can Change */ .child { color: black; width: 50px; height: 20px; background-color: pink; margin: 10px; /* Add position */ position: relative; } /* Can Change */ .selected { background-color: red; /* Add z-index */ z-index: 100; } button { margin-bottom: 1em; padding: 6px; } <button>Toggle</button> <div class="parent"> <div class="subParent1"> <div class="subParent2"> <div class="child selected">child</div> <div class="child">child</div> <div class="child">child</div> </div> </div> <!-- This component is underneat subParent in the tree structure. I cannot move this into subParent1 --> <div class="grayOverlay"></div> </div>
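Both variants above lean on the same stacking rule: z-index has no effect on a statically positioned block element, which is why .child gets position: relative (or counts as a flex item) before the z-index on .selected can lift it over the overlay. Reduced to the essentials:
.child { position: relative; }  /* makes z-index meaningful on the children */
.selected { z-index: 100; }     /* stacks above the overlay */
.grayOverlay { /* no z-index needed; it comes later in source order, so it paints above subParent1 by default */ }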
How to render subchild above parent overlay
I want a gray overlay above all children except for the selected one. Given the following structure: <div class="parent"> <!-- I have this subparent which is absolute. I cannot remove it... --> <div class="subParent1"> <div class="subParent2"> <!-- This child I want to be above the OVERLAY, aka not greyed out --> <div class="child selected">child</div> <div class="child">child</div> <div class="child">child</div> </div> </div> <!-- This component is underneat subParent in the tree structure --> <div class="grayOverlay"></div> </div> Here's an exact fiddle. Maybe, I could use a pseudo-element instead? PS: I updated the children to be a bit more nested to align with my actual code.
[ "You can take the reference from below code. I have altered the CSS a bit. I have added z-index wherever required you can optimise that. Also, removed position: absolute; from subParent1 and added top: 0; left: 0; on the grayOverlay. You can optimise it or change it as per you preference.\n\n\n.parent {\n width: 300px;\n height: 300px;\n position: relative;\n background-color: gray;\n}\n\n.grayOverlay {\n position: absolute;\n width: 100%;\n height: 100%;\n top: 0;\n left: 0;\n background-color: rgb(107 114 128 / 0.8);\n z-index: 11000;\n}\n\n.subParent1 {\n display: flex;\n flex-direction: column;\n width: 100%;\n z-index: 12000;\n}\n\n.child {\n color: black;\n width: 50px;\n height: 20px;\n background-color: white;\n margin: 10px;\n z-index: 10000;\n}\n\n.childIWantOverOverlay {\n background-color: red;\n z-index: 12000;\n}\n<div class=\"parent\">\n\n <div class=\"subParent1\">\n <div class=\"child childIWantOverOverlay\">child</div>\n <div class=\"child\">child</div>\n <div class=\"child\">child</div>\n </div>\n\n <!-- This component is underneat subParent in the tree structure -->\n <div class=\"grayOverlay\"></div>\n\n</div>\n\n\n\n", "UPDATE 2\nPerhaps also consider use a pseudo-element for this, if it is acceptable in the actual use case.\nThis approach is more isolated, so it might be less likely to have conflict with other existing elements in the actual project.\nExample with pseudo-element:\n\n\nconst btn = document.querySelector(\"button\");\nconst divs = document.querySelectorAll(\"div.child\");\n\nlet i = 0;\n\nbtn.addEventListener(\"click\", () => {\n divs[i].classList.toggle(\"selected\");\n if (i < 2) {\n divs[i + 1].classList.toggle(\"selected\")\n i++;\n return;\n };\n if (i >= 2) {\n i = 0;\n divs[i].classList.toggle(\"selected\");\n }\n\n});\n/* Can Change */\n\n.parent {\n width: 300px;\n height: 300px;\n position: relative;\n}\n\n\n/* Add this */\n\n.parent::after {\n content: \"\";\n position: absolute;\n inset: 0;\n background-color: rgb(107 114 128 / 0.5);\n z-index: 50;\n}\n\n\n/* Disabled for now\n.grayOverlay {\n position: absolute;\n width: 100%;\n height: 100%;\n background-color: rgb(107 114 128 / 0.5);\n z-index: 50;\n}\n*/\n\n\n/* CANNOT CHANGE */\n\n.subParent1 {\n position: absolute;\n display: flex;\n flex-direction: column;\n width: 100%;\n}\n\n\n/* Can Change */\n\n.child {\n color: black;\n width: 50px;\n height: 20px;\n background-color: pink;\n margin: 10px;\n z-index: 25;\n position: relative;\n}\n\n\n/* Can Change */\n\n.selected {\n background-color: red;\n /* Add z-index */\n z-index: 100;\n}\n\nbutton {\n margin-bottom: 1em;\n padding: 6px;\n}\n<button>Toggle</button>\n<div class=\"parent\">\n\n <div class=\"subParent1\">\n <div class=\"subParent2\">\n <div class=\"child selected\">child</div>\n <div class=\"child\">child</div>\n <div class=\"child\">child</div>\n </div>\n </div>\n\n <!-- This component is underneat subParent in the tree structure. 
I cannot move this into subParent1 -->\n <!-- <div class=\"grayOverlay\"></div> -->\n\n</div>\n\n\n\nUpdate: also added position: relative on child.\nIt seems that this can be achieved by removing the z-index on grayOverlay and subParent1 (the grayOverlay is still stacked on top due to natural placement), and add some z-index on selected.\nExample:\n\n\nconst btn = document.querySelector(\"button\");\nconst divs = document.querySelectorAll(\"div.child\");\n\nlet i = 0;\n\nbtn.addEventListener(\"click\", () => {\n divs[i].classList.toggle(\"selected\");\n if (i < 2) {\n divs[i + 1].classList.toggle(\"selected\")\n i++;\n return;\n };\n if (i >= 2) {\n i = 0;\n divs[i].classList.toggle(\"selected\");\n }\n\n});\n/* Can Change */\n\n.parent {\n width: 300px;\n height: 300px;\n position: relative;\n}\n\n\n/* Can Change */\n\n.grayOverlay {\n position: absolute;\n width: 100%;\n height: 100%;\n background-color: rgb(107 114 128 / 0.5);\n /* Removed z-index */\n}\n\n\n/* CANNOT CHANGE */\n\n.subParent1 {\n position: absolute;\n display: flex;\n flex-direction: column;\n width: 100%;\n /* Removed z-index */\n}\n\n\n/* Can Change */\n\n.child {\n color: black;\n width: 50px;\n height: 20px;\n background-color: pink;\n margin: 10px;\n /* Add position */\n position: relative;\n}\n\n\n/* Can Change */\n\n.selected {\n background-color: red;\n /* Add z-index */\n z-index: 100;\n}\n\nbutton {\n margin-bottom: 1em;\n padding: 6px;\n}\n<button>Toggle</button>\n<div class=\"parent\">\n\n <div class=\"subParent1\">\n <div class=\"subParent2\">\n <div class=\"child selected\">child</div>\n <div class=\"child\">child</div>\n <div class=\"child\">child</div>\n </div>\n </div>\n \n <!-- This component is underneat subParent in the tree structure. I cannot move this into subParent1 --> \n <div class=\"grayOverlay\"></div>\n \n</div>\n\n\n\n" ]
[ 2, 1 ]
[]
[]
[ "css" ]
stackoverflow_0074669658_css.txt
Q: Android emulator authorization dialog appear "always allow from this computer" everytime When I want to use the android emulator for the first time it needs authorization for USB debugging. But when I turn on the emulator the authorization dialog last for about 300ms and then disappears, and this dialog won't be shown until I restart the emulator again. One time I started to click continuously on the place that I guessed the confirmation button of the dialog will be shown and that time the authorization was successful but I have to do this every time! as I don't have enough time to check "Always trust this computer" in that 300ms. I know it is a weird problem but please help if you have any suggestions. I use : android studio 3.2.1 android emulator 28.0.20 HAXM Installer 7.3.2 virtual device android 9.0 (google play) A: This issue may be caused by a number of different things, including slow performance on your computer or conflicts with other programs running in the background. Some possible solutions to try include: Restart your computer and try again. Make sure that you have the latest version of the Android emulator installed and that it is compatible with your version of Android Studio Check for updates to the HAXM installer and other software that may be related to the issue Close any unnecessary programs or processes running in the background that may be impacting performance Try using a different virtual device or creating a new virtual device to see if that resolves the issue If none of these solutions work, you may want to try using a different emulator or contact the Android emulator support team for further assistance.
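If the dialog keeps vanishing before it can be accepted, a commonly suggested workaround (not guaranteed on every setup) is to regenerate the adb key pair so the authorization prompt is forced again, ideally after a cold boot of the AVD; paths assume the default ~/.android location:
adb kill-server
rm ~/.android/adbkey ~/.android/adbkey.pub
adb start-server
adb devices    # should trigger the authorization dialog again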
Android emulator authorization dialog appear "always allow from this computer" everytime
When I want to use the Android emulator for the first time it needs authorization for USB debugging. But when I turn on the emulator the authorization dialog lasts for about 300ms and then disappears, and this dialog won't be shown again until I restart the emulator. One time I started clicking continuously on the spot where I guessed the confirmation button of the dialog would appear, and that time the authorization was successful, but I have to do this every time, as I don't have enough time to check "Always allow from this computer" in that 300ms. I know it is a weird problem but please help if you have any suggestions. I use : android studio 3.2.1 android emulator 28.0.20 HAXM Installer 7.3.2 virtual device android 9.0 (google play)
[ "This issue may be caused by a number of different things, including slow performance on your computer or conflicts with other programs running in the background.\nSome possible solutions to try include:\n\nRestart your computer and try again.\nMake sure that you have the latest version of the Android emulator\ninstalled and that it is compatible with your version of Android Studio\nCheck for updates to the HAXM installer and other software that may\nbe related to the issue\nClose any unnecessary programs or processes running in the\nbackground that may be impacting performance\nTry using a different virtual device or creating a new virtual\ndevice to see if that resolves the issue\n\nIf none of these solutions work, you may want to try using a different emulator or contact the Android emulator support team for further assistance.\n" ]
[ 0 ]
[]
[]
[ "android_emulator", "android_studio" ]
stackoverflow_0053901441_android_emulator_android_studio.txt
Q: ASP.NET Core, route is not triggered when defined as OpenId SignedOutCallbackPath I have this controller [Route("Authentication")] public class AuthenticationController : Controller { and this action [HttpGet("SignOut")] public async Task<IActionResult> SignOut([FromQuery] string sid) { await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectDefaults.AuthenticationScheme); await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationDefaults.AuthenticationScheme); return View(); This works as expected. But when I configure a SignedOutCallbackPath for my OpenId authentication that has the same route, it doesn't work anymore. The constructor of my controller is not called, the action is not hit and the result in the browser is a blank page (code 200) with html/head/body, but all empty, that doesn't match any template or view. services.AddAuthentication(options => { options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme; }) .AddCookie(options => { options.Cookie.HttpOnly = true; }) .AddOpenIdConnect(options => { options.SignedOutCallbackPath = "/Authentication/SignOut"; Is the SignedOutCallbackPath not supposed to be a view of my own? A: The callback paths in the OpenID Connect authentication scheme are internal paths that are used for the authentication flow of the OpenID Connect protocol. There are three of those: CallbackPath – The path the authentication provider posts back when authenticating. SignedOutCallbackPath – The path the authentication provider posts back after signing out. RemoteSignOutPath – The path the authentication provider posts back after signing out remotely by a third-party application. As you can see from my explanation, these are all URLs that the authentication provider uses: They are part of the authentication flow and not to be directly used by your users. You also don’t need to worry about handling those things. The OpenID Connect authentication handler will automatically respond to these requests when the authentication middleware runs. This means that when you change a callback path to some path that is a route of one of your controller actions, then the authentication middleware will handle that request before your controller gets involved. This is by design, so that you do not need to worry about these routes as they are mostly internal. You just have the option to change those paths if you cannot or do not want to use the default values. Now, there are two possible things I can think of that you could have meant to change instead: SignedOutRedirectUri: This is the URL the user gets redirected to after the sign-out process is completed. This basically allows you to send the user to some view with e.g. a message “you were successfully signed out”, to show that the sign-out is done. You can also set this as part of the AuthenticationProperties that you can pass to the SignOutAsync. CookieAuthenticationOptions.LogoutPath: This is the URL that is configured to the actual URL that users can go to to sign out of the application. This does not really have that much of an effect though. Otherwise, it’s really up to you to send users to your /Authentication/SignOut URL. You can put a button into your layout that goes there for example, to offer users a sign out functionality at all times. 
A: Your action is expecting a parameter which is not passed by your callback; the parameter is seemingly not used within the action either, so you can either omit the parameter or make it optional: [HttpGet("SignOut")] public async Task<IActionResult> SignOut() { await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectDefaults.AuthenticationScheme); await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationDefaults.AuthenticationScheme); return View(); } A: For Azure AD I found this post-logout redirection behavior: It doesn't work for the root account of the tenant, that is, my personal account, which created the Azure subscription. But it works for new accounts I created inside of my subscription.
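A sketch of the two options named in the first answer, wired into the question's configuration (the /Authentication/SignedOut URL is illustrative):
services.AddAuthentication(options => { /* as in the question */ })
    .AddCookie(options =>
    {
        options.Cookie.HttpOnly = true;
        options.LogoutPath = "/Authentication/SignOut";
    })
    .AddOpenIdConnect(options =>
    {
        // leave SignedOutCallbackPath at its default so it does not shadow a controller route
        options.SignedOutRedirectUri = "/Authentication/SignedOut";
    });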
ASP.NET Core, route is not triggered when defined as OpenId SignedOutCallbackPath
I have this controller [Route("Authentication")] public class AuthenticationController : Controller { and this action [HttpGet("SignOut")] public async Task<IActionResult> SignOut([FromQuery] string sid) { await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectDefaults.AuthenticationScheme); await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationDefaults.AuthenticationScheme); return View(); This works as expected. But when I configure a SignedOutCallbackPath for my OpenId authentication that has the same route, it doesn't work anymore. The constructor of my controller is not called, the action is not hit and the result in the browser is a blank page (code 200) with html/head/body, but all empty, that doesn't match any template or view. services.AddAuthentication(options => { options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme; }) .AddCookie(options => { options.Cookie.HttpOnly = true; }) .AddOpenIdConnect(options => { options.SignedOutCallbackPath = "/Authentication/SignOut"; Is the SignedOutCallbackPath not supposed to be a view of my own?
[ "The callback paths in the OpenID Connect authentication scheme are internal paths that are used for the authentication flow of the OpenID Connect protocol. There are three of those:\n\nCallbackPath – The path the authentication provider posts back when authenticating.\nSignedOutCallbackPath – The path the authentication provider posts back after signing out.\nRemoteSignOutPath – The path the authentication provider posts back after signing out remotely by a third-party application.\n\nAs you can see from my explanation, these are all URLs that the authentication provider uses: They are part of the authentication flow and not to be directly used by your users. You also don’t need to worry about handling those things. The OpenID Connect authentication handler will automatically respond to these requests when the authentication middleware runs.\nThis means that when you change a callback path to some path that is a route of one of your controller actions, then the authentication middleware will handle that request before your controller gets involved. This is by design, so that you do not need to worry about these routes as they are mostly internal.\nYou just have the option to change those paths if you cannot or do not want to use the default values.\nNow, there are two possible things I can think of that you could have meant to change instead:\n\nSignedOutRedirectUri: This is the URL the user gets redirected to after the sign-out process is completed. This basically allows you to send the user to some view with e.g. a message “you were successfully signed out”, to show that the sign-out is done.\nYou can also set this as part of the AuthenticationProperties that you can pass to the SignOutAsync.\nCookieAuthenticationOptions.LogoutPath: This is the URL that is configured to the actual URL that users can go to to sign out of the application. This does not really have that much of an effect though.\n\nOtherwise, it’s really up to you to send users to your /Authentication/SignOut URL. You can put a button into your layout that goes there for example, to offer users a sign out functionality at all times.\n", "Your Action is expecting parameter which is not passed by your callback, the parameter is seemingly not used within the action either, so you can either omit the parameter or make it optional\n[HttpGet(\"SignOut\")]\npublic async Task<IActionResult> SignOut()\n{\n await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectDefaults.AuthenticationScheme);\n await ControllerContext.HttpContext.SignOutAsync(Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationDefaults.AuthenticationScheme);\n\n return View();\n}\n\n", "For Azure AD I found this post logout redirection behavior:\n\nIt doens't work for root accounts of the tenant, that is my personal account, which created Azure\nsubscription.\nBut it works for new accounts I created inside of my\nsubscription.\n\n" ]
[ 10, 0, 0 ]
[]
[]
[ "asp.net_core", "asp.net_mvc", "openid", "openid_connect" ]
stackoverflow_0060205375_asp.net_core_asp.net_mvc_openid_openid_connect.txt
Q: Child Group Count values from two columns SSRS I have a datatable stuattrecordAMPM (screenshot omitted). Matrix: (screenshot omitted) Output: (screenshot omitted) How can I write an expression over two columns in the Matrix? I tried "=COUNT(IIF(Fields!INAM.Value="P" and Fields!OUTPM.Value="P",0,0))", but it only counts one field. How do I get an expression over both column fields that counts the "P" values? Could someone help me with this? Thank you... A: Just use =SUM(IIF(Fields!INAM.Value = "P" AND Fields!OUTPM.Value = "P", 1, 0)) The above checks if both AM and PM are "P" and includes the row in the sum only when BOTH are. I did try to explain in your previous question why your expression would not work. Here's the result based on your sample data, where there are 11 days on which both AM and PM are "P". If you want to count all instances of "P" then you can use the following, which sums the two individually and then adds the results together. =SUM(IIF(Fields!INAM.Value = "P", 1, 0)) + SUM(IIF(Fields!OUTPM.Value = "P", 1, 0))
Child Group Count values from two columns SSRS
I have a datatable stuattrecordAMPM (screenshot omitted). Matrix: (screenshot omitted) Output: (screenshot omitted) How can I write an expression over two columns in the Matrix? I tried "=COUNT(IIF(Fields!INAM.Value="P" and Fields!OUTPM.Value="P",0,0))", but it only counts one field. How do I get an expression over both column fields that counts the "P" values? Could someone help me with this? Thank you...
[ "Just use\n=SUM(IIF(Fields!INAM.Value = \"P\" AND Fields!OUTPM.Value = \"P\", 1, 0))\n\nThe above checks is both AM and PM are \"P\" and then includes it in the sum if they BOTH are.\nI did try to explain in your previous question why your expression would not work.\nHere's the result based on your sample data where there are 11 days where both AM and PM and \"P\"\n\nIf you want to count all instances of \"P\" then you can use the following, which sums the two individually and then adds the results together.\n=SUM(IIF(Fields!INAM.Value = \"P\", 1, 0)) + SUM(IIF(Fields!OUTPM.Value = \"P\", 1, 0))\n\n" ]
[ 0 ]
[]
[]
[ "reporting_services", "vb.net" ]
stackoverflow_0074664198_reporting_services_vb.net.txt
Q: How do I force this function to always return six character hex color codes? This code is supposed to provide contrasting fg and bg color codes, However there's a bug: function randomColorPair() { const bg = '#' + Math.floor(Math.random() * 16777215).toString(16); let fg = '#' + Math.floor(Math.random() * 16777215).toString(16); while (Math.abs(parseInt(bg.substring(1), 16) - parseInt(fg.substring(1), 16)) < 0x777777) { fg = '#' + Math.floor(Math.random() * 16777215).toString(16); } return [bg, fg]; } console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); So this function works great, except occassionaly either the bg or fg will only be 4 or 5 characters. Something buggy but it needs to always be six characters for a hex color code. A: Math.floor(Math.random() * 16777215).toString(16) generates a random number between 0 and 16777215 and converts it to a hexadecimal value. So this value can sometimes be less than 6 characters long, which causes the issue you're seeing. Pad the hex value with zeroes until it is 6 characters long. function randomColorPair() { var bg = '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(6, '0'); var fg = '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(6, '0'); while (Math.abs(parseInt(bg.substring(1), 16) - parseInt(fg.substring(1), 16)) < 0x777777) { fg = '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(6, '0'); } return [bg, fg]; } console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); A: Pad the left of the string resulting from .toString(16) with 0s if needed. Put it into a function for reusability. const randomColor = () => ( '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(0, 6) ); function randomColorPair() { const bg = randomColor(); let fg = randomColor(); while (Math.abs(parseInt(bg.substring(1), 16) - parseInt(fg.substring(1), 16)) < 0x777777) { fg = randomColor(); } return [bg, fg]; } for (let i = 0; i < 5; i++) { console.log(randomColorPair()); }
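One detail to watch in the second snippet: String.prototype.padStart takes the target length first and the pad string second, so padding a hex value to six characters is padStart(6, '0'); with the arguments reversed the string comes back unchanged. The helper then reads:
const randomColor = () => (
  '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(6, '0')
);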
How do I force this function to always return six character hex color codes?
This code is supposed to provide contrasting fg and bg color codes, However there's a bug: function randomColorPair() { const bg = '#' + Math.floor(Math.random() * 16777215).toString(16); let fg = '#' + Math.floor(Math.random() * 16777215).toString(16); while (Math.abs(parseInt(bg.substring(1), 16) - parseInt(fg.substring(1), 16)) < 0x777777) { fg = '#' + Math.floor(Math.random() * 16777215).toString(16); } return [bg, fg]; } console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); console.log(randomColorPair()); So this function works great, except occassionaly either the bg or fg will only be 4 or 5 characters. Something buggy but it needs to always be six characters for a hex color code.
[ "Math.floor(Math.random() * 16777215).toString(16) generates a random number between 0 and 16777215 and converts it to a hexadecimal value. So this value can sometimes be less than 6 characters long, which causes the issue you're seeing.\nPad the hex value with zeroes until it is 6 characters long.\nfunction randomColorPair() {\n var bg = '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(6, '0');\n var fg = '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(6, '0');\n\n while (Math.abs(parseInt(bg.substring(1), 16) - parseInt(fg.substring(1), 16)) < 0x777777) {\n fg = '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(6, '0');\n }\n\n return [bg, fg];\n}\n\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\nconsole.log(randomColorPair());\n\n", "Pad the left of the string resulting from .toString(16) with 0s if needed. Put it into a function for reusability.\n\n\nconst randomColor = () => (\n '#' + Math.floor(Math.random() * 16777215).toString(16).padStart(0, 6)\n);\nfunction randomColorPair() {\n const bg = randomColor();\n let fg = randomColor();\n \n while (Math.abs(parseInt(bg.substring(1), 16) - parseInt(fg.substring(1), 16)) < 0x777777) {\n fg = randomColor();\n }\n\n return [bg, fg];\n}\nfor (let i = 0; i < 5; i++) {\n console.log(randomColorPair());\n}\n\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "hex", "javascript" ]
stackoverflow_0074670036_hex_javascript.txt
Q: E/libEGL: validate_display:99 error 3008 (EGL_BAD_DISPLAY) android os 7.1 nougat I am getting the following error while running my app on Android OS 7.1 Nougat. E/libEGL: validate_display:99 error 3008 (EGL_BAD_DISPLAY)[ 04-21 10:19:18.788 4410: 4622 D/ ] HostConnection::get() New Host Connection established 0x7db835ad6200, tid 4622 In build.gradle I am using vectorDrawables.useSupportLibrary = true and the following dependencies: dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' }) compile 'com.android.support:appcompat-v7:26.1.0' compile 'com.android.support.constraint:constraint-layout:1.0.2' testCompile 'junit:junit:4.12' compile 'com.google.android.gms:play-services-location:11.6.0' compile 'com.google.android.gms:play-services-places:11.6.0' compile project(':library') } At the build types I have: buildTypes { release { minifyEnabled true shrinkResources true proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } For the project build.gradle I have: dependencies { classpath 'com.android.tools.build:gradle:3.0.1' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files classpath 'com.google.gms:google-services:3.1.1' } In my UI, I am using imageview, ontop of another imageview. The problem appears both on AVD and Real device. Has anyone faced this issue before? What causes this error to occur? What is the solution for this? A: The error "EGL_BAD_DISPLAY" typically indicates that the Android system is unable to connect to the display system. This can happen for a number of reasons, including a problem with the device's graphics drivers or a mismatch between the version of EGL in your app and the version on the device. One potential solution is to update the graphics drivers on the device. You can also try using a different version of the EGL library in your app, or using the eglInitialize function to explicitly initialize the EGL display before making any EGL calls. Another possible cause of this error is an issue with the AndroidManifest.xml file in your app. Make sure that the android:hardwareAccelerated attribute is set to true in the <application> element. This will enable hardware acceleration for your app, which may help resolve the issue. A: This error message indicates that there is an issue with the EGL (Embedded Graphics Library) display, which is used to create and manage the display and surface of an Android application. The specific error code (3008) indicates that the display being used is not valid. There are a few potential causes for this issue: The device may not support the version of EGL being used by the application. In this case, upgrading the device's operating system to a more recent version that supports the required EGL version may fix the problem. The application may not be properly initializing the EGL display. In this case, checking the code that initializes the EGL display and ensuring that it is being done correctly may fix the problem. There may be an issue with the device itself, such as a hardware malfunction or a driver problem. In this case, trying the app on a different device or attempting to troubleshoot the device's hardware and drivers may fix the problem. It is also possible that this error is being caused by a combination of these factors or by another issue entirely. 
In general, the best way to fix this issue is to carefully review the code and the device's settings to identify the root cause of the problem and implement a solution based on that. A: This error occurs when the Android OS is unable to establish a connection to the display. This can be caused by a variety of factors, such as a misconfigured or outdated graphics driver, a problem with the device's display hardware, or an issue with the app itself. To troubleshoot this issue, you can try the following steps: Check if your device's graphics driver is up to date. If it is not, update it to the latest version. Try restarting your device and see if the issue persists. If the problem still occurs, try using a different Android emulator or device to see if the issue is specific to the current device or emulator. If the problem only occurs on a specific device or emulator, try resetting its configuration or settings to see if that fixes the issue. If the problem occurs on multiple devices or emulators, it is likely an issue with the app itself. In this case, you can try debugging the app to identify the cause of the error.
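For the manifest suggestion in the first answer, the attribute sits on the application element; it defaults to true on modern API levels, so this mainly matters if something has disabled it (other attributes shown here are placeholders, keep your existing ones):
<application
    android:hardwareAccelerated="true"
    android:label="@string/app_name">
    <!-- activities... -->
</application>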
E/libEGL: validate_display:99 error 3008 (EGL_BAD_DISPLAY) android os 7.1 nougat
I am getting the following error while running my app on Android OS 7.1 Nougat. E/libEGL: validate_display:99 error 3008 (EGL_BAD_DISPLAY)[ 04-21 10:19:18.788 4410: 4622 D/ ] HostConnection::get() New Host Connection established 0x7db835ad6200, tid 4622 In build.gradle I am using vectorDrawables.useSupportLibrary = true and the following dependencies: dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' }) compile 'com.android.support:appcompat-v7:26.1.0' compile 'com.android.support.constraint:constraint-layout:1.0.2' testCompile 'junit:junit:4.12' compile 'com.google.android.gms:play-services-location:11.6.0' compile 'com.google.android.gms:play-services-places:11.6.0' compile project(':library') } At the build types I have: buildTypes { release { minifyEnabled true shrinkResources true proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } For the project build.gradle I have: dependencies { classpath 'com.android.tools.build:gradle:3.0.1' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files classpath 'com.google.gms:google-services:3.1.1' } In my UI, I am using imageview, ontop of another imageview. The problem appears both on AVD and Real device. Has anyone faced this issue before? What causes this error to occur? What is the solution for this?
[ "The error \"EGL_BAD_DISPLAY\" typically indicates that the Android system is unable to connect to the display system. This can happen for a number of reasons, including a problem with the device's graphics drivers or a mismatch between the version of EGL in your app and the version on the device.\nOne potential solution is to update the graphics drivers on the device. You can also try using a different version of the EGL library in your app, or using the eglInitialize function to explicitly initialize the EGL display before making any EGL calls.\nAnother possible cause of this error is an issue with the AndroidManifest.xml file in your app. Make sure that the android:hardwareAccelerated attribute is set to true in the <application> element. This will enable hardware acceleration for your app, which may help resolve the issue.\n", "This error message indicates that there is an issue with the EGL (Embedded Graphics Library) display, which is used to create and manage the display and surface of an Android application. The specific error code (3008) indicates that the display being used is not valid.\nThere are a few potential causes for this issue:\n\nThe device may not support the version of EGL being used by the application. In this case, upgrading the device's operating system to a more recent version that supports the required EGL version may fix the problem.\nThe application may not be properly initializing the EGL display. In this case, checking the code that initializes the EGL display and ensuring that it is being done correctly may fix the problem.\nThere may be an issue with the device itself, such as a hardware malfunction or a driver problem. In this case, trying the app on a different device or attempting to troubleshoot the device's hardware and drivers may fix the problem.\nIt is also possible that this error is being caused by a combination of these factors or by another issue entirely. In general, the best way to fix this issue is to carefully review the code and the device's settings to identify the root cause of the problem and implement a solution based on that.\n\n", "This error occurs when the Android OS is unable to establish a connection to the display. This can be caused by a variety of factors, such as a misconfigured or outdated graphics driver, a problem with the device's display hardware, or an issue with the app itself.\nTo troubleshoot this issue, you can try the following steps:\n\nCheck if your device's graphics driver is up to date. If it is not,\nupdate it to the latest version.\nTry restarting your device and see if the issue persists.\nIf the problem still occurs, try using a different Android emulator\nor device to see if the issue is specific to the current device or\nemulator.\nIf the problem only occurs on a specific device or emulator, try\nresetting its configuration or settings to see if that fixes the\nissue.\nIf the problem occurs on multiple devices or emulators, it is likely\nan issue with the app itself. In this case, you can try debugging\nthe app to identify the cause of the error.\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "android", "android_7.1_nougat" ]
stackoverflow_0043534272_android_android_7.1_nougat.txt
Q: Adding an element to an existing XML I'm not really into XSLT and to have an app work i need to modify an existing XML but don't know how to do it. I'm sure it's not that hard but i don't understand much how this thing work so far and i'm a bit in a hurry. So the XML looks like this <Culture id="uc_dummy"> <notable_and_wanderer_templates> <template name="NPCCharacter.spc_wanderer_empire_0" /> </notable_and_wanderer_templates> </Culture> and i want it to look like this <Culture id="uc_dummy"> <notable_and_wanderer_templates> <template name="NPCCharacter.spc_wanderer_empire_0" /> <template name="NPCCharacter.uc_wanderer_empire_0" /> </notable_and_wanderer_templates> </Culture> And of course there are more than one block with different ID in the file and i want only this one to be altered, also using as an element seem to be confusing for XSLT as it's a keyword for this language ... Thanks for your time. A: I suggest you do: XSLT 1.0 <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/> <xsl:strip-space elements="*"/> <!-- identity transform --> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> <xsl:template match="Culture[@id='uc_dummy']/notable_and_wanderer_templates"> <xsl:copy> <xsl:copy-of select="*"/> <template name="NPCCharacter.uc_wanderer_empire_0" /> </xsl:copy> </xsl:template> </xsl:stylesheet> There is no problem with using template as element name, because when you use it as an instruction it is prefixed by xsl so it's in the XSLT namespace.
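To try a stylesheet like the one in the answer from the command line, xsltproc (an XSLT 1.0 processor) is a quick option; the filenames here are placeholders:
xsltproc transform.xsl input.xml > output.xml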
Adding an element to an existing XML
I'm not really into XSLT, and to make an app work I need to modify an existing XML, but I don't know how to do it. I'm sure it's not that hard, but I don't understand much about how this thing works so far, and I'm a bit in a hurry. So the XML looks like this <Culture id="uc_dummy"> <notable_and_wanderer_templates> <template name="NPCCharacter.spc_wanderer_empire_0" /> </notable_and_wanderer_templates> </Culture> and I want it to look like this <Culture id="uc_dummy"> <notable_and_wanderer_templates> <template name="NPCCharacter.spc_wanderer_empire_0" /> <template name="NPCCharacter.uc_wanderer_empire_0" /> </notable_and_wanderer_templates> </Culture> And of course there is more than one block with a different ID in the file, and I want only this one to be altered. Also, using template as an element seems to be confusing for XSLT, as it's a keyword in this language ... Thanks for your time.
[ "I suggest you do:\nXSLT 1.0\n<xsl:stylesheet version=\"1.0\" \nxmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n<xsl:output method=\"xml\" version=\"1.0\" encoding=\"UTF-8\" indent=\"yes\"/>\n<xsl:strip-space elements=\"*\"/>\n\n<!-- identity transform -->\n<xsl:template match=\"@*|node()\">\n <xsl:copy>\n <xsl:apply-templates select=\"@*|node()\"/>\n </xsl:copy>\n</xsl:template>\n\n<xsl:template match=\"Culture[@id='uc_dummy']/notable_and_wanderer_templates\">\n <xsl:copy>\n <xsl:copy-of select=\"*\"/>\n <template name=\"NPCCharacter.uc_wanderer_empire_0\" />\n </xsl:copy>\n</xsl:template>\n\n</xsl:stylesheet>\n\nThere is no problem with using template as element name, because when you use it as an instruction it is prefixed by xsl so it's in the XSLT namespace.\n" ]
[ 1 ]
[]
[]
[ "xslt" ]
stackoverflow_0074669863_xslt.txt
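A note on trying the stylesheet from the answer above outside an XSLT-enabled app: a small Python driver is enough to sanity-check the transform. This is a sketch added for illustration, not part of the original answer; the file names add_template.xsl and cultures.xml are placeholder assumptions, and it relies on the lxml library's XSLT support.

from lxml import etree

# Compile the stylesheet from the answer into a callable transform.
transform = etree.XSLT(etree.parse("add_template.xsl"))

# Apply it to the source document.
result = transform(etree.parse("cultures.xml"))

# Only the Culture block with id="uc_dummy" gains the extra <template>
# element; every other block passes through the identity template unchanged.
print(etree.tostring(result, pretty_print=True).decode("utf-8"))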
Q: planets created from 1d perlin noise terrain look weird i am trying to make planets using pyglet but they end up looking like stars result here is my code also i need a way to convert a batch to a sprite (to move it easily) import pyglet from pyglet import shapes import opensimplex import math import time brtd = 0 ######## planets########### class planetobj(): def __init__(self,seed=1234,age=68,position=(0,0),color=(0,1,0),name="planet",description=" 127.0.0.1 , home sweet home never will thy become infected with the virus that has a closedcure"): self.seed = seed self.age = age self.position = position self.color = color self.name = name self.description = description def gplanet(self,size): opensimplex.seed(self.seed) done = 0 xc = 0 c = 0 self.terrain = [] start = opensimplex.noise2(x=0, y=self.age) while (done == 0 or xc < 50) and not xc > 100 : xc = xc + 1 c = c + size value = opensimplex.noise2(x=xc, y=self.age) self.terrain.append(value * size) if xc > 50: if math.floor(value * 100 ) == math.floor(start * 100): self.done = 1 def mkplanet(self, x,y): self.batch = pyglet.graphics.Batch() corner1 = (x,y) self.trias = [] counter = 0 cornerback = [0,0] for i in self.terrain: counter += 1 radi = (360 / len(self.terrain)) * counter radi2 = (360 / len(self.terrain)) * ((counter + 1 ) % len(self.terrain)) theta = self.terrain[(counter +1 ) % len(self.terrain)] corner3 = (x + math.sin(radi) * ( i ) ,math.cos(radi) * ( i ) + y ) corner2 = (x + math.sin(radi2) * ( theta ) ,math.cos(radi2) * ( theta ) + y ) self.trias.append(shapes.Triangle( x,y,corner2[0], corner2[1], corner3[0], corner3[1], color=(255, counter % 255, 255), batch=self.batch) ) ############ basic game logic & rendering ########### scr_X = 400 scr_Y = 300 window = pyglet.window.Window(scr_X,scr_Y) samplebatch = pyglet.graphics.Batch() earth = planetobj() earth.gplanet(200) planets = [] planets.append(earth) earth.mkplanet( 50 ,50) @window.event def on_draw(): window.clear() earth.batch.draw() pyglet.app.run() i tried changing the values that get divided by 'len(self.terrain)' but i could not find out how to make the planets look round A: OK, I've corrected your trigonometry, but there are some other issues. The random values you get back from the noise generator are between -1 and 1. You are then multiplying that by the planet size, which gives you wild variations from wedge to wedge. What you want is to have a basic wedge size, which you use the noise to adjust bit by bit. Here, I'm saying that the noise should be 3% of the wedge size (size/30). I didn't want to download opensimplex, so I've used a uniform random number generator. I'm also using matplotlib to plot the triangle, but see if this is closer to what you intended. 
import math import random import numpy as np import matplotlib.pyplot as plt class planetobj(): def __init__(self,seed=1234,age=68,position=(0,0),color=(0,1,0),name="planet",description=""): self.seed = seed self.age = age self.position = position self.color = color self.name = name self.description = description def gplanet(self,size): done = 0 xc = 0 self.terrain = [] start = random.uniform(-1,1) while (done == 0 or xc < 50) and not xc > 100 : xc = xc + 1 value = random.uniform(-1,1) self.terrain.append(size + value * size / 30) if xc > 50 and math.floor(value * 100) == math.floor(start * 100): done = 1 def mkplanet(self, x,y): corner1 = (x,y) self.trias = [] deltatheta = 360 / len(self.terrain) for counter,i in enumerate(self.terrain): theta1 = deltatheta * counter theta2 = deltatheta * (counter + 1) radius = self.terrain[counter] corner2 = (x + radius * math.cos(theta1), y + radius * math.sin(theta1)) corner3 = (x + radius * math.cos(theta2), y + radius * math.sin(theta2)) # self.trias.append(shapes.Triangle( x, y, corner2[0], corner2[1], corner3[0], corner3[1], color=(255, counter % 255, 255), batch=self.batch) ) self.trias.append(( x, y, corner2[0], corner2[1], corner3[0], corner3[1], (1.0,(counter%255)/255,1.0) )) earth = planetobj() earth.gplanet(200) earth.mkplanet(50 ,50) print(earth.trias) plt.figure() plt.scatter( [48,48,52,52],[-50,50,-50,50] ) for t in earth.trias: tri = np.array(t[:6]).reshape(3,2) plt.gca().add_patch(plt.Polygon( tri, color=t[6] )) plt.show() Output:
planets created from 1d perlin noise terrain look weird
i am trying to make planets using pyglet but they end up looking like stars result here is my code also i need a way to convert a batch to a sprite (to move it easily) import pyglet from pyglet import shapes import opensimplex import math import time brtd = 0 ######## planets########### class planetobj(): def __init__(self,seed=1234,age=68,position=(0,0),color=(0,1,0),name="planet",description=" 127.0.0.1 , home sweet home never will thy become infected with the virus that has a closedcure"): self.seed = seed self.age = age self.position = position self.color = color self.name = name self.description = description def gplanet(self,size): opensimplex.seed(self.seed) done = 0 xc = 0 c = 0 self.terrain = [] start = opensimplex.noise2(x=0, y=self.age) while (done == 0 or xc < 50) and not xc > 100 : xc = xc + 1 c = c + size value = opensimplex.noise2(x=xc, y=self.age) self.terrain.append(value * size) if xc > 50: if math.floor(value * 100 ) == math.floor(start * 100): self.done = 1 def mkplanet(self, x,y): self.batch = pyglet.graphics.Batch() corner1 = (x,y) self.trias = [] counter = 0 cornerback = [0,0] for i in self.terrain: counter += 1 radi = (360 / len(self.terrain)) * counter radi2 = (360 / len(self.terrain)) * ((counter + 1 ) % len(self.terrain)) theta = self.terrain[(counter +1 ) % len(self.terrain)] corner3 = (x + math.sin(radi) * ( i ) ,math.cos(radi) * ( i ) + y ) corner2 = (x + math.sin(radi2) * ( theta ) ,math.cos(radi2) * ( theta ) + y ) self.trias.append(shapes.Triangle( x,y,corner2[0], corner2[1], corner3[0], corner3[1], color=(255, counter % 255, 255), batch=self.batch) ) ############ basic game logic & rendering ########### scr_X = 400 scr_Y = 300 window = pyglet.window.Window(scr_X,scr_Y) samplebatch = pyglet.graphics.Batch() earth = planetobj() earth.gplanet(200) planets = [] planets.append(earth) earth.mkplanet( 50 ,50) @window.event def on_draw(): window.clear() earth.batch.draw() pyglet.app.run() i tried changing the values that get divided by 'len(self.terrain)' but i could not find out how to make the planets look round
[ "OK, I've corrected your trigonometry, but there are some other issues. The random values you get back from the noise generator are between -1 and 1. You are then multiplying that by the planet size, which gives you wild variations from wedge to wedge. What you want is to have a basic wedge size, which you use the noise to adjust bit by bit. Here, I'm saying that the noise should be 3% of the wedge size (size/30).\nI didn't want to download opensimplex, so I've used a uniform random number generator. I'm also using matplotlib to plot the triangle, but see if this is closer to what you intended.\nimport math\nimport random\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nclass planetobj():\n def __init__(self,seed=1234,age=68,position=(0,0),color=(0,1,0),name=\"planet\",description=\"\"):\n self.seed = seed\n self.age = age\n self.position = position\n self.color = color\n self.name = name\n self.description = description\n\n def gplanet(self,size):\n done = 0\n xc = 0\n self.terrain = []\n start = random.uniform(-1,1)\n while (done == 0 or xc < 50) and not xc > 100 :\n xc = xc + 1\n value = random.uniform(-1,1)\n self.terrain.append(size + value * size / 30)\n if xc > 50 and math.floor(value * 100) == math.floor(start * 100):\n done = 1\n\n def mkplanet(self, x,y):\n corner1 = (x,y)\n self.trias = []\n deltatheta = 360 / len(self.terrain)\n for counter,i in enumerate(self.terrain):\n theta1 = deltatheta * counter\n theta2 = deltatheta * (counter + 1)\n radius = self.terrain[counter] \n corner2 = (x + radius * math.cos(theta1), y + radius * math.sin(theta1))\n corner3 = (x + radius * math.cos(theta2), y + radius * math.sin(theta2))\n# self.trias.append(shapes.Triangle( x, y, corner2[0], corner2[1], corner3[0], corner3[1], color=(255, counter % 255, 255), batch=self.batch) )\n self.trias.append(( x, y, corner2[0], corner2[1], corner3[0], corner3[1], (1.0,(counter%255)/255,1.0) ))\n\n\nearth = planetobj()\nearth.gplanet(200)\n\nearth.mkplanet(50 ,50)\nprint(earth.trias)\nplt.figure()\nplt.scatter( [48,48,52,52],[-50,50,-50,50] )\nfor t in earth.trias:\n tri = np.array(t[:6]).reshape(3,2)\n plt.gca().add_patch(plt.Polygon( tri, color=t[6] ))\nplt.show()\n\nOutput:\n\n" ]
[ 0 ]
[]
[]
[ "procedural_generation", "pyglet", "python" ]
stackoverflow_0074664055_procedural_generation_pyglet_python.txt
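To isolate the geometry fix from the answer above without any rendering dependencies, here is a minimal sketch of just the outline step. It is an illustration under the same assumption the answer makes (a uniform random generator standing in for opensimplex): the radius stays close to a large constant base value and the noise only perturbs it by a few percent, which is what keeps the silhouette round instead of star-shaped. Note the angles are in radians, since math.cos and math.sin expect radians, not degrees.

import math
import random

def planet_outline(cx, cy, base_radius, segments=72, roughness=0.03):
    """Return outline points of a roughly circular planet."""
    points = []
    for i in range(segments):
        theta = 2 * math.pi * i / segments  # angle in radians
        # Perturb the radius by at most +/- roughness * base_radius.
        radius = base_radius * (1 + random.uniform(-roughness, roughness))
        points.append((cx + radius * math.cos(theta),
                       cy + radius * math.sin(theta)))
    return points

# Each adjacent pair of outline points plus the centre is one triangle of
# the fan, i.e. the six coordinates a shapes.Triangle would take.
outline = planet_outline(50, 50, 200)
triangles = [(50, 50, *outline[i], *outline[(i + 1) % len(outline)])
             for i in range(len(outline))]
print(len(triangles), "triangles; first:", triangles[0])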
Q: vscode "#include errors detected. Please update your includePath I'm trying to use vscode with arduino but have no success. The problem seems to be something with the libraries path. But I havent been able to fix that ! I'm on linux. "message": "#include errors detected. Please update your includePath. IntelliSense features for this translation unit (/home/harold/Arduino/Saaf_Curing/Saaf_Curing.ino) will be provided by the Tag Parser.", I don't know how to find my includePath. I'm not able to do any advices given in vscode. I wonder if vs code is the right direction at all as it seems complicated ? A: Although the question mentions Arduino, the following suggestions apply basically any time VSCode tells you to "update your includePath". What is includePath? The includePath is an attribute in c_cpp_settings.json, which is in the .vscode folder of the main folder you have opened in VSCode using File → Open Folder. You can edit c_cpp_settings.json directly, but it is usually easier to use the "C/C++ Configurations GUI". To do that, open the Command Palette (Ctrl+Shift+P) and run "C/C++: Edit Configurations (UI)". Then look for the "Include path" setting. The includePath tells VSCode (specifically the IntelliSense component of the C/C++ extension) where to look when resolving #include "filename" directives. That allows VSCode to see definitions of symbols defined in those files. So should I fiddle with includePath when VSCode tells me to? Not at first! Before changing the include path, if you haven't already, first set the "Compiler path" to point at your C/C++ compiler, and set "IntelliSense mode" to match the compiler as closely as possible. You may also need to adjust the Compiler arguments, particularly if the compiler is capable of generating code for multiple targets, for example, both 32-bit and 64-bit code. (If you don't know what that means, skip it at first.) Next, in Command Palette, run "C/C++: Log Diagnostics". The output will show you which compiler VSCode found and what it detected as its built-in include path and preprocessor defines. Then, run these commands in a shell: $ touch empty.c $ gcc -v -E -dD empty.c Here, I have assumed you are using gcc as your compiler. If not, substitute the actual compiler command name. If your compiler is not a variant of GCC (for example you are using the Microsoft cl.exe compiler), you'll need to look at its documentation or Google to find switches that print the predefined macros and include paths (e.g., see here for cl.exe). Compare the output of the above command to what VSCode shows in its C/C++ diagnostics output. Hopefully they are very similar. If not, try adjusting the Compiler path, IntelliSense mode, or Compiler arguments. Once you've gotten them as close as possible by adjusting just those three settings, go on to the next step. Now adjust includePath if necessary If there are still significant differences between the compiler built-in configuration and what VSCode detects, fix that by (in the C/C++ settings UI) modifying the Include path, Defines, and C/C++ standard fields. Re-run the C/C++ Log Diagnostics command to see the effects. It is probably not necessary to add all of the pre-defined preprocessor symbols. This really only matters if there are #ifdef directives that depend on them, and which are causing VSCode to see the wrong code as active. I suggest only adding predefined symbols if, while browsing your code, you see a specific case that VSCode gets wrong. 
Finally, if your project has header files in places that the compiler does not search by default, that is, you normally have to pass -I switches on the compiler command line, add those as well to the Include path. The same goes for any -D arguments, which must be added to the Defines. A: This is due to the extension is missing some includepath when initialize add the missing lines into your c_cpp_properties.json "includePath": [ "<arduino ide installation folder>\\tools\\**", "<arduino ide installation folder>\\hardware\\arduino\\avr\\**", "<arduino ide installation folder>\\hardware\\tools\\**", "<arduino ide installation folder>\\hardware\\arduino\\avr\\cores\\arduino" ] Also add "defines": [ "USBCON" ] under "configurations" to make Serial class work with intellisense A: Try using platformIO extension it makes your life easier. Personally I use VScode with platformIO for my Arduino and ESP32 projects. A: I just successfully wasted an hour finding solution to this problem on stack overflow but all in vain and now I have founded the solution which is, if you are using linux, just have to install the g++ compiler from your terminal, sudo apt install g++ " and you are good to go. A: For those using WSL, it's a common error, resolved by: installing Remote-WSL VS-Code extension setting c_cpp_properties.json includePath to ["${workspaceFolder}/**"] and intelliSenseMode to "linux-clang-x64" (or other intelliSense mode) close VS-Code and open it again from your WSL termnial, you'll see the Status Bar icon as on the screenshot Now the #include error should be gone For more information follow the link A: I had the same issue and spent hours trying different solutions, and finally, I realized I had a misspelling in "iostream". It's not an answer to this question, but if you have fallen here with the same error, check spelling as well. A: Cause of the Problem: My C/C++ Compiler got saved as /use/bin/clang and this was showing the error in my computer. Operating System: Ubuntu 2022.04 Solutions: Open Your VSCode and click Settings (at bottom left corner generally). Now click at Open Settings (JSON) icon (at top right corner) and open the json file. Add below code in between last part of main {} bracket [If you don't have the bracket then create one]. "C_Cpp.default.compilerPath": "/usr/bin/gcc", "C_Cpp.default.intelliSenseMode": "linux-gcc-x64", This has solved my problem and I think it will solve the problem for any Linux/Unix OS. In case, if this doesn't work then try with: "C_Cpp.default.compilerPath": "/usr/bin/g++", "C_Cpp.default.intelliSenseMode": "linux-gcc-x64", If you have Windows Operating System then Use the path of your GCC compiler instead of /usr/bin/gcc [this will be something like C:\MinGW\bin].
vscode "#include errors detected. Please update your includePath
I'm trying to use VSCode with Arduino, but with no success. The problem seems to be something with the libraries path, but I haven't been able to fix that! I'm on Linux. "message": "#include errors detected. Please update your includePath. IntelliSense features for this translation unit (/home/harold/Arduino/Saaf_Curing/Saaf_Curing.ino) will be provided by the Tag Parser.", I don't know how to find my includePath, and I'm not able to follow any of the advice given in VSCode. I wonder if VS Code is the right direction at all, as it seems complicated?
[ "Although the question mentions Arduino, the following suggestions apply basically any time VSCode tells you to \"update your includePath\".\nWhat is includePath?\nThe includePath is an attribute in c_cpp_settings.json, which is in the .vscode folder of the main folder you have opened in VSCode using File → Open Folder.\nYou can edit c_cpp_settings.json directly, but it is usually easier to use the \"C/C++ Configurations GUI\". To do that, open the Command Palette (Ctrl+Shift+P) and run \"C/C++: Edit Configurations (UI)\". Then look for the \"Include path\" setting.\nThe includePath tells VSCode (specifically the IntelliSense component of the C/C++ extension) where to look when resolving #include \"filename\" directives. That allows VSCode to see definitions of symbols defined in those files.\nSo should I fiddle with includePath when VSCode tells me to?\nNot at first! Before changing the include path, if you haven't already, first set the \"Compiler path\" to point at your C/C++ compiler, and set \"IntelliSense mode\" to match the compiler as closely as possible.\nYou may also need to adjust the Compiler arguments, particularly if the compiler is capable of generating code for multiple targets, for example, both 32-bit and 64-bit code. (If you don't know what that means, skip it at first.)\nNext, in Command Palette, run \"C/C++: Log Diagnostics\". The output will show you which compiler VSCode found and what it detected as its built-in include path and preprocessor defines.\nThen, run these commands in a shell:\n $ touch empty.c\n $ gcc -v -E -dD empty.c\n\nHere, I have assumed you are using gcc as your compiler. If not, substitute the actual compiler command name. If your compiler is not a variant of GCC (for example you are using the Microsoft cl.exe compiler), you'll need to look at its documentation or Google to find switches that print the predefined macros and include paths (e.g., see here for cl.exe).\nCompare the output of the above command to what VSCode shows in its C/C++ diagnostics output. Hopefully they are very similar. If not, try adjusting the Compiler path, IntelliSense mode, or Compiler arguments. Once you've gotten them as close as possible by adjusting just those three settings, go on to the next step.\nNow adjust includePath if necessary\nIf there are still significant differences between the compiler built-in configuration and what VSCode detects, fix that by (in the C/C++ settings UI) modifying the Include path, Defines, and C/C++ standard fields. Re-run the C/C++ Log Diagnostics command to see the effects.\nIt is probably not necessary to add all of the pre-defined preprocessor symbols. This really only matters if there are #ifdef directives that depend on them, and which are causing VSCode to see the wrong code as active. I suggest only adding predefined symbols if, while browsing your code, you see a specific case that VSCode gets wrong.\nFinally, if your project has header files in places that the compiler does not search by default, that is, you normally have to pass -I switches on the compiler command line, add those as well to the Include path. 
The same goes for any -D arguments, which must be added to the Defines.\n", "This is because the extension is missing some include paths when it initializes.\nAdd the missing lines to your c_cpp_properties.json:\n\"includePath\": [\n\"<arduino ide installation folder>\\\\tools\\\\**\",\n\"<arduino ide installation folder>\\\\hardware\\\\arduino\\\\avr\\\\**\",\n\"<arduino ide installation folder>\\\\hardware\\\\tools\\\\**\",\n\"<arduino ide installation folder>\\\\hardware\\\\arduino\\\\avr\\\\cores\\\\arduino\"\n]\n\nAlso add \"defines\": [ \"USBCON\" ] under \"configurations\" to make the Serial class work with IntelliSense.\n", "Try using the PlatformIO extension; it makes your life easier. Personally, I use VSCode with PlatformIO for my Arduino and ESP32 projects.\n", "I wasted an hour looking for a solution to this problem on Stack Overflow, all in vain, and then found the fix: if you are using Linux, you just have to install the g++ compiler from your terminal,\n\nsudo apt install g++\n\nand you are good to go.\n", "For those using WSL, it's a common error, resolved by:\n\ninstalling the Remote-WSL VS Code extension\nsetting the c_cpp_properties.json includePath to [\"${workspaceFolder}/**\"] and intelliSenseMode to \"linux-clang-x64\" (or another IntelliSense mode)\nclosing VS Code and opening it again from your WSL terminal; you'll see the Status Bar icon as on the screenshot\n\nNow the #include error should be gone.\nFor more information follow the link\n", "I had the same issue and spent hours trying different solutions, and finally, I realized I had a misspelling in \"iostream\". It's not an answer to this question, but if you have fallen here with the same error, check spelling as well.\n", "Cause of the Problem:\nMy C/C++ compiler got saved as /use/bin/clang and this was showing the error on my computer.\nOperating System: Ubuntu 22.04\nSolutions:\n\nOpen your VSCode and click Settings (at the bottom left corner generally).\nNow click the Open Settings (JSON) icon (at the top right corner) and open the JSON file.\nAdd the code below near the end of the main {} block [if you don't have the bracket then create one].\n\n \"C_Cpp.default.compilerPath\": \"/usr/bin/gcc\",\n \"C_Cpp.default.intelliSenseMode\": \"linux-gcc-x64\",\n\nThis has solved my problem and I think it will solve the problem for any Linux/Unix OS.\nIn case this doesn't work, then try with:\n \"C_Cpp.default.compilerPath\": \"/usr/bin/g++\",\n \"C_Cpp.default.intelliSenseMode\": \"linux-gcc-x64\",\n\nIf you have a Windows operating system, then use the path of your GCC compiler instead of /usr/bin/gcc [this will be something like C:\\MinGW\\bin].\n" ]
[ 24, 5, 1, 1, 0, 0, 0 ]
[]
[]
[ "arduino", "include_path", "linux", "visual_studio_code" ]
stackoverflow_0051227662_arduino_include_path_linux_visual_studio_code.txt
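A mechanical follow-up to the gcc -v -E -dD technique in the first answer: gcc brackets its built-in search directories with the marker lines "#include <...> search starts here:" and "End of search list.", so they can be scraped with a short script. The sketch below is an illustration that assumes a GCC-compatible compiler on the PATH; the printed directories are what you would paste into the includePath array of c_cpp_properties.json.

import subprocess

def builtin_include_paths(compiler="gcc"):
    """Return the compiler's built-in #include <...> search directories."""
    # Verbosely preprocess an empty translation unit read from stdin;
    # gcc prints its search list on stderr.
    proc = subprocess.run([compiler, "-v", "-E", "-x", "c", "-"],
                          input="", capture_output=True, text=True)
    paths, collecting = [], False
    for line in proc.stderr.splitlines():
        if line.startswith("#include <...> search starts here:"):
            collecting = True
        elif line.startswith("End of search list."):
            collecting = False
        elif collecting:
            paths.append(line.strip())
    return paths

if __name__ == "__main__":
    for path in builtin_include_paths():
        print(path)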
Q: C++: Question on access specifier in inheritance Here's the following code. #include <iostream> using namespace std; class B { private: int a=1; protected: int b=2; public: int c=3; }; class D : protected B { // clause 1 }; class D2 : public D { // clause 2 void x() { c=2; } }; int main() { D d; D2 d2; cout << d2.c; // clause 3 Error due to c being protected return 0; } Note: Clause 1 would make c to be protected. Clause 2 would make c (protected in clause 1) to be public again. Why clause 3 failed? A: Your notes are not entirely correct: private inheritance makes public, protected and private members be private in the derived class protected inheritances makes public and protected members be protected in the derived class, and private members stay private public inheritance changes nothing to the derived members' access. c is protected in D and still protected in D2 because of the public inheritance. There is no way to cheat visibility. What you assume would break logic. Visibility only goes downwards, not up. A: The type of inheritance you use does not affect the protection level of members in the parent class. It only changes the protection level of those members in the child class. When you define class D : protected B it is equivalent to defining class D like this: class D { protected: int b=2; int c=3; }; Then when you define class D2 : public D it is equivalent to defining class D2 like this: class D2 { protected: int b=2; int c=3; };
C++: Question on access specifier in inheritance
Here's the following code. #include <iostream> using namespace std; class B { private: int a=1; protected: int b=2; public: int c=3; }; class D : protected B { // clause 1 }; class D2 : public D { // clause 2 void x() { c=2; } }; int main() { D d; D2 d2; cout << d2.c; // clause 3 Error due to c being protected return 0; } Note: Clause 1 would make c to be protected. Clause 2 would make c (protected in clause 1) to be public again. Why clause 3 failed?
[ "Your notes are not entirely correct:\n\nprivate inheritance makes public, protected and private members be private in the derived class\nprotected inheritances makes public and protected members be protected in the derived class, and private members stay private\npublic inheritance changes nothing to the derived members' access.\n\nc is protected in D and still protected in D2 because of the public inheritance. There is no way to cheat visibility. What you assume would break logic. Visibility only goes downwards, not up.\n", "The type of inheritance you use does not affect the protection level of members in the parent class. It only changes the protection level of those members in the child class.\nWhen you define class D : protected B it is equivalent to defining class D like this:\nclass D\n{\nprotected:\n int b=2;\n int c=3;\n};\n\nThen when you define class D2 : public D it is equivalent to defining class D2 like this:\nclass D2\n{\nprotected:\n int b=2;\n int c=3;\n};\n\n" ]
[ 2, 0 ]
[]
[]
[ "c++", "inheritance" ]
stackoverflow_0074670025_c++_inheritance.txt
Q: launch URL in Flet I'm using Flet and I want for my app to launch a link when clicking on a button. According to the docs, I can use launch_url method. But when I tried, I got the following error: Exception in thread Thread-6 (open_repo): Traceback (most recent call last): File "C:\Users\Iqmal\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner self.run() File "C:\Users\Iqmal\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run self._target(*self._args, **self._kwargs) File "d:\Iqmal\Documents\Python Projects\flet-hello\main.py", line 58, in open_repo ft.page.launch_url('https://github.com/iqfareez/flet-hello') ^^^^^^^^^^^^^^^^^^ AttributeError: 'function' object has no attribute 'launch_url' Code import flet as ft def main(page: ft.Page): page.padding = ft.Padding(20, 35, 20, 20) page.theme_mode = ft.ThemeMode.LIGHT appbar = ft.AppBar( title=ft.Text(value="Flutter using Flet"), bgcolor=ft.colors.BLUE, color=ft.colors.WHITE, actions=[ft.IconButton(icon=ft.icons.CODE, on_click=open_repo)]) page.controls.append(appbar) page.update() def open_repo(e): ft.page.launch_url('https://github.com/iqfareez/flet-hello') ft.app(target=main, assets_dir='assets') A: To fix the error you're getting, you need to import the page module from the flet library and then create an instance of the page class. Then, you can call the launch_url method on that instance to open a URL in the default web browser. Here's how you might update your code to do that: import flet as ft Import the page module from the flet library from flet import page def main(page: ft.Page): # Create a new page object my_page = page.Page() # Set the padding and theme mode for the page my_page.padding = ft.Padding(20, 35, 20, 20) my_page.theme_mode = ft.ThemeMode.LIGHT # Create an app bar and add it to the page appbar = ft.AppBar( title=ft.Text(value="Flutter using Flet"), bgcolor=ft.colors.BLUE, color=ft.colors.WHITE, actions=[ft.IconButton(icon=ft.icons.CODE, on_click=open_repo)]) my_page.controls.append(appbar) # Update the page to display the changes my_page.update() def open_repo(e): # Use the launch_url method to open a URL in the default web browser my_page.launch_url('https://github.com/iqfareez/flet-hello') Run the app using the main function as the target ft.app(target=main, assets_dir='assets') A: From what I'm seeing here and the errors you're getting it's just possible you might have a problem with the installation of Flet. Try installing running in a virtual environment and see if it changes. Good Luck
launch URL in Flet
I'm using Flet and I want for my app to launch a link when clicking on a button. According to the docs, I can use launch_url method. But when I tried, I got the following error: Exception in thread Thread-6 (open_repo): Traceback (most recent call last): File "C:\Users\Iqmal\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner self.run() File "C:\Users\Iqmal\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run self._target(*self._args, **self._kwargs) File "d:\Iqmal\Documents\Python Projects\flet-hello\main.py", line 58, in open_repo ft.page.launch_url('https://github.com/iqfareez/flet-hello') ^^^^^^^^^^^^^^^^^^ AttributeError: 'function' object has no attribute 'launch_url' Code import flet as ft def main(page: ft.Page): page.padding = ft.Padding(20, 35, 20, 20) page.theme_mode = ft.ThemeMode.LIGHT appbar = ft.AppBar( title=ft.Text(value="Flutter using Flet"), bgcolor=ft.colors.BLUE, color=ft.colors.WHITE, actions=[ft.IconButton(icon=ft.icons.CODE, on_click=open_repo)]) page.controls.append(appbar) page.update() def open_repo(e): ft.page.launch_url('https://github.com/iqfareez/flet-hello') ft.app(target=main, assets_dir='assets')
[ "To fix the error you're getting, you need to import the page module from the flet library and then create an instance of the page class. Then, you can call the launch_url method on that instance to open a URL in the default web browser.\nHere's how you might update your code to do that:\nimport flet as ft\nImport the page module from the flet library\nfrom flet import page\ndef main(page: ft.Page):\n# Create a new page object\nmy_page = page.Page()\n\n# Set the padding and theme mode for the page\nmy_page.padding = ft.Padding(20, 35, 20, 20)\nmy_page.theme_mode = ft.ThemeMode.LIGHT\n\n# Create an app bar and add it to the page\nappbar = ft.AppBar(\n title=ft.Text(value=\"Flutter using Flet\"),\n bgcolor=ft.colors.BLUE,\n color=ft.colors.WHITE,\n actions=[ft.IconButton(icon=ft.icons.CODE, on_click=open_repo)])\n\nmy_page.controls.append(appbar)\n\n# Update the page to display the changes\nmy_page.update()\n\ndef open_repo(e):\n# Use the launch_url method to open a URL in the default web browser\nmy_page.launch_url('https://github.com/iqfareez/flet-hello')\nRun the app using the main function as the target\nft.app(target=main, assets_dir='assets')\n", "From what I'm seeing here and the errors you're getting it's just possible you might have a problem with the installation of Flet. Try installing running in a virtual environment and see if it changes.\nGood Luck\n" ]
[ 0, 0 ]
[]
[]
[ "flet", "flutter", "python" ]
stackoverflow_0074661326_flet_flutter_python.txt
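As a complement to the corrected answer above: the handler can also stay outside main, because the click event object carries a reference to the page it came from. This is a minimal sketch assuming a reasonably recent Flet release where control events expose a page attribute; the button label and URL are just placeholders.

import flet as ft

def open_repo(e):
    # e.page is the Page instance that owns the clicked control.
    e.page.launch_url("https://github.com/iqfareez/flet-hello")

def main(page: ft.Page):
    page.add(ft.ElevatedButton("Open repo", on_click=open_repo))

ft.app(target=main)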
Q: Can I run multiple programs in a Docker container? I'm trying to wrap my head around Docker from the point of deploying an application which is intended to run on the users on desktop. My application is simply a flask web application and mongo database. Normally I would install both in a VM and, forward a host port to the guest web app. I'd like to give Docker a try but I'm not sure how I'm meant to use more than one program. The documentations says there can only be only ENTRYPOINT so how can I have Mongo and my flask application. Or do they need to be in separate containers, in which case how do they talk to each other and how does this make distributing the app easy? A: There can be only one ENTRYPOINT, but that target is usually a script that launches as many programs that are needed. You can additionally use for example Supervisord or similar to take care of launching multiple services inside single container. This is an example of a docker container running mysql, apache and wordpress within a single container. Say, You have one database that is used by a single web application. Then it is probably easier to run both in a single container. If You have a shared database that is used by more than one application, then it would be better to run the database in its own container and the applications each in their own containers. There are at least two possibilities how the applications can communicate with each other when they are running in different containers: Use exposed IP ports and connect via them. Recent docker versions support linking. A: I strongly disagree with some previous solutions that recommended to run both services in the same container. It's clearly stated in the documentation that it's not a recommended: It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes. There are good use cases for supervisord or similar programs but running a web application + database is not part of them. You should definitely use docker-compose to do that and orchestrate multiple containers with different responsibilities. A: I had similar requirement of running a LAMP stack, Mongo DB and my own services Docker is OS based virtualisation, which is why it isolates its container around a running process, hence it requires least one process running in FOREGROUND. So you provide your own startup script as the entry point, thus your startup script becomes an extended Docker image script, in which you can stack any number of the services as far as AT LEAST ONE FOREGROUND SERVICE IS STARTED, WHICH TOO TOWARDS THE END So my Docker image file has two line below in the very end: COPY myStartupScript.sh /usr/local/myscripts/myStartupScript.sh CMD ["/bin/bash", "/usr/local/myscripts/myStartupScript.sh"] In my script I run all MySQL, MongoDB, Tomcat etc. In the end I run my Apache as a foreground thread. 
source /etc/apache2/envvars /usr/sbin/apache2 -DFOREGROUND This enables me to start all my services and keep the container alive with the last service started being in the foreground Hope it helps UPDATE: Since I last answered this question, new things have come up like Docker compose, which can help you run each service on its own container, yet bind all of them together as dependencies among those services, try knowing more about docker-compose and use it, it is more elegant way unless your need does not match with it. A: Although it's not recommended you can run 2 processes in foreground by using wait. Just make a bash script with the following content. Eg start.sh: # runs 2 commands simultaneously: mongod & # your first application P1=$! python script.py & # your second application P2=$! wait $P1 $P2 In your Dockerfile, start it with CMD bash start.sh I would recommend to set up a local Kubernetes cluster if you want to run multiple processes simultaneously. You can 'distribute' the app by providing them a simple Kubernetes manifest. A: They can be in separate containers, and indeed, if the application was also intended to run in a larger environment, they probably would be. A multi-container system would require some more orchestration to be able to bring up all the required dependencies, though in Docker v0.6.5+, there is a new facility to help with that built into Docker itself - Linking. With a multi-machine solution, its still something that has to be arranged from outside the Docker environment however. With two different containers, the two parts still communicate over TCP/IP, but unless the ports have been locked down specifically (not recommended, as you'd be unable to run more than one copy), you would have to pass the new port that the database has been exposed as to the application, so that it could communicate with Mongo. This is again, something that Linking can help with. For a simpler, small installation, where all the dependencies are going in the same container, having both the database and Python runtime started by the program that is initially called as the ENTRYPOINT is also possible. This can be as simple as a shell script, or some other process controller - Supervisord is quite popular, and a number of examples exist in the public Dockerfiles. A: Docker provides a couple of examples on how to do it. The lightweight option is to: Put all of your commands in a wrapper script, complete with testing and debugging information. Run the wrapper script as your CMD. This is a very naive example. First, the wrapper script: #!/bin/bash # Start the first process ./my_first_process -D status=$? if [ $status -ne 0 ]; then echo "Failed to start my_first_process: $status" exit $status fi # Start the second process ./my_second_process -D status=$? if [ $status -ne 0 ]; then echo "Failed to start my_second_process: $status" exit $status fi # Naive check runs checks once a minute to see if either of the processes exited. # This illustrates part of the heavy lifting you need to do if you want to run # more than one service in a container. The container will exit with an error # if it detects that either of the processes has exited. # Otherwise it will loop forever, waking up every 60 seconds while /bin/true; do ps aux |grep my_first_process |grep -q -v grep PROCESS_1_STATUS=$? ps aux |grep my_second_process |grep -q -v grep PROCESS_2_STATUS=$? 
# If the greps above find anything, they will exit with 0 status # If they are not both 0, then something is wrong if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then echo "One of the processes has already exited." exit -1 fi sleep 60 done Next, the Dockerfile: FROM ubuntu:latest COPY my_first_process my_first_process COPY my_second_process my_second_process COPY my_wrapper_script.sh my_wrapper_script.sh CMD ./my_wrapper_script.sh A: I agree with the other answers that using two containers is preferable, but if you have your heart set on bunding multiple services in a single container you can use something like supervisord. in Hipache for instance, the included Dockerfile runs supervisord, and the file supervisord.conf specifies for both hipache and redis-server to be run. A: If a dedicated script seems like too much overhead, you can spawn separate processes explicitly with sh -c. For example: CMD sh -c 'mini_httpd -C /my/config -D &' \ && ./content_computing_loop A: In docker, there are two ways you can run a program CMD ENTRYPOINT If you want to know the difference between them, please refer here In CMD/ENTRYPOINT, there are two formats to run a command SHELL format EXEC format SHELL format: CMD executable_first arg1; executable_second arg1 arg2 ENTRYPOINT executable_first arg1; executable_second arg1 arg2 This version will create a shell and executes above command. Here you can use any shell syntax such as ";", "&", "|", etc. So you can run any number of commands here. If you have complex set of commands to run, you can create separate shell script and use it. CMD my_script.sh arg1 ENTRYPOINT my_script.sh arg1 EXEC format: CMD ["executable", "parameter 1", "parameter 2", …] ENTRYPOINT ["executable", "parameter 1", "parameter 2", …] Here you can notice that only first parameter is an executable. From the second parameter, everything become an arguments/parameters for that executable. To run multiple commands in EXEC format CMD ["/bin/sh", "-c", "executable_first arg1; executable_second"] CMD ["/bin/sh", "-c", "executable_first arg1; executable_second"] In above command, we have used shell command as executable to run the command. This is the only way to run multiple commands in EXEC format. Following are WRONG CMD ["executable_first parameter", "executable_second parameter"] ENTRYPOINT ["executable_first parameter", "executable_second parameter"] CMD ["executable_first", "parameter", ";", "executable_second", "parameter"] ENTRYPOINT ["executable_first", "parameter", ";", "executable_second", "parameter"] A: Can I run multiple programs in a Docker container? Yes. But with significant risks. Below is the same answer as above. But with details and a recommended resolution. If you're interested in those. Not Recommended Warning. Using the same container for multiple services is not recommended by the Docker community, though. The Docker documentation reads: "It is generally recommended that you separate areas of concern by using one service per container." Source at: • https://archive.ph/3Roa6#selection-307.2-307.100 • https://docs.docker.com/config/containers/multi-service_container/ If you choose to ignore the recommendation above, you container risk to be with weaker security, increasingly unstable, and in the future a painful growth. 
If you are OK with those risks, the documentation for running multiple services in one container is at: • https://archive.ph/3Roa6#selection-335.0-691.1 • https://docs.docker.com/config/containers/multi-service_container/ Recommended If you need containers with stronger security, more stability, better performance, and room to scale in the future, then the Docker community recommends these two steps: Use one service per Docker container. The end result is that you will have multiple containers. Use Docker's "Networking" feature to connect any of those containers to your liking.
Can I run multiple programs in a Docker container?
I'm trying to wrap my head around Docker from the point of view of deploying an application which is intended to run on the user's desktop. My application is simply a Flask web application and a Mongo database. Normally I would install both in a VM and forward a host port to the guest web app. I'd like to give Docker a try, but I'm not sure how I'm meant to use more than one program. The documentation says there can be only one ENTRYPOINT, so how can I have Mongo and my Flask application? Or do they need to be in separate containers, in which case how do they talk to each other, and how does this make distributing the app easy?
[ "There can be only one ENTRYPOINT, but that target is usually a script that launches as many programs that are needed. You can additionally use for example Supervisord or similar to take care of launching multiple services inside single container. This is an example of a docker container running mysql, apache and wordpress within a single container.\nSay, You have one database that is used by a single web application. Then it is probably easier to run both in a single container.\nIf You have a shared database that is used by more than one application, then it would be better to run the database in its own container and the applications each in their own containers.\nThere are at least two possibilities how the applications can communicate with each other when they are running in different containers:\n\nUse exposed IP ports and connect via them.\nRecent docker versions support linking.\n\n", "I strongly disagree with some previous solutions that recommended to run both services in the same container. It's clearly stated in the documentation that it's not a recommended:\n\nIt is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.\n\nThere are good use cases for supervisord or similar programs but running a web application + database is not part of them.\nYou should definitely use docker-compose to do that and orchestrate multiple containers with different responsibilities.\n", "I had similar requirement of running a LAMP stack, Mongo DB and my own services\nDocker is OS based virtualisation, which is why it isolates its container around a running process, hence it requires least one process running in FOREGROUND.\nSo you provide your own startup script as the entry point, thus your startup script becomes an extended Docker image script, in which you can stack any number of the services as far as AT LEAST ONE FOREGROUND SERVICE IS STARTED, WHICH TOO TOWARDS THE END\nSo my Docker image file has two line below in the very end:\nCOPY myStartupScript.sh /usr/local/myscripts/myStartupScript.sh\nCMD [\"/bin/bash\", \"/usr/local/myscripts/myStartupScript.sh\"]\n\nIn my script I run all MySQL, MongoDB, Tomcat etc. In the end I run my Apache as a foreground thread.\nsource /etc/apache2/envvars\n/usr/sbin/apache2 -DFOREGROUND\n\nThis enables me to start all my services and keep the container alive with the last service started being in the foreground\nHope it helps\nUPDATE: Since I last answered this question, new things have come up like Docker compose, which can help you run each service on its own container, yet bind all of them together as dependencies among those services, try knowing more about docker-compose and use it, it is more elegant way unless your need does not match with it.\n", "Although it's not recommended you can run 2 processes in foreground by using wait. Just make a bash script with the following content. 
Eg start.sh:\n# runs 2 commands simultaneously:\n\nmongod & # your first application\nP1=$!\npython script.py & # your second application\nP2=$!\nwait $P1 $P2\n\nIn your Dockerfile, start it with\nCMD bash start.sh\n\nI would recommend to set up a local Kubernetes cluster if you want to run multiple processes simultaneously. You can 'distribute' the app by providing them a simple Kubernetes manifest.\n", "They can be in separate containers, and indeed, if the application was also intended to run in a larger environment, they probably would be. \nA multi-container system would require some more orchestration to be able to bring up all the required dependencies, though in Docker v0.6.5+, there is a new facility to help with that built into Docker itself - Linking. With a multi-machine solution, its still something that has to be arranged from outside the Docker environment however.\nWith two different containers, the two parts still communicate over TCP/IP, but unless the ports have been locked down specifically (not recommended, as you'd be unable to run more than one copy), you would have to pass the new port that the database has been exposed as to the application, so that it could communicate with Mongo. This is again, something that Linking can help with.\nFor a simpler, small installation, where all the dependencies are going in the same container, having both the database and Python runtime started by the program that is initially called as the ENTRYPOINT is also possible. This can be as simple as a shell script, or some other process controller - Supervisord is quite popular, and a number of examples exist in the public Dockerfiles.\n", "Docker provides a couple of examples on how to do it. The lightweight option is to:\n\nPut all of your commands in a wrapper script, complete with testing\n and debugging information. Run the wrapper script as your CMD. This is\n a very naive example. First, the wrapper script:\n\n#!/bin/bash\n\n# Start the first process\n./my_first_process -D\nstatus=$?\nif [ $status -ne 0 ]; then\n echo \"Failed to start my_first_process: $status\"\n exit $status\nfi\n\n# Start the second process\n./my_second_process -D\nstatus=$?\nif [ $status -ne 0 ]; then\n echo \"Failed to start my_second_process: $status\"\n exit $status\nfi\n\n# Naive check runs checks once a minute to see if either of the processes exited.\n# This illustrates part of the heavy lifting you need to do if you want to run\n# more than one service in a container. 
The container will exit with an error\n# if it detects that either of the processes has exited.\n# Otherwise it will loop forever, waking up every 60 seconds\n\nwhile /bin/true; do\n ps aux |grep my_first_process |grep -q -v grep\n PROCESS_1_STATUS=$?\n ps aux |grep my_second_process |grep -q -v grep\n PROCESS_2_STATUS=$?\n # If the greps above find anything, they will exit with 0 status\n # If they are not both 0, then something is wrong\n if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then\n echo \"One of the processes has already exited.\"\n exit -1\n fi\n sleep 60\ndone\n\n\nNext, the Dockerfile:\n\nFROM ubuntu:latest\nCOPY my_first_process my_first_process\nCOPY my_second_process my_second_process\nCOPY my_wrapper_script.sh my_wrapper_script.sh\nCMD ./my_wrapper_script.sh\n\n", "I agree with the other answers that using two containers is preferable, but if you have your heart set on bunding multiple services in a single container you can use something like supervisord.\nin Hipache for instance, the included Dockerfile runs supervisord, and the file supervisord.conf specifies for both hipache and redis-server to be run.\n", "If a dedicated script seems like too much overhead, you can spawn separate processes explicitly with sh -c. For example:\nCMD sh -c 'mini_httpd -C /my/config -D &' \\\n && ./content_computing_loop\n\n", "In docker, there are two ways you can run a program\n\nCMD\nENTRYPOINT\n\nIf you want to know the difference between them, please refer here\nIn CMD/ENTRYPOINT, there are two formats to run a command\n\nSHELL format\nEXEC format\n\nSHELL format:\nCMD executable_first arg1; executable_second arg1 arg2\nENTRYPOINT executable_first arg1; executable_second arg1 arg2\n\nThis version will create a shell and executes above command. Here you can use any shell syntax such as \";\", \"&\", \"|\", etc. So you can run any number of commands here. If you have complex set of commands to run, you can create separate shell script and use it.\nCMD my_script.sh arg1\nENTRYPOINT my_script.sh arg1\n\nEXEC format:\nCMD [\"executable\", \"parameter 1\", \"parameter 2\", …]\nENTRYPOINT [\"executable\", \"parameter 1\", \"parameter 2\", …]\n\nHere you can notice that only first parameter is an executable. From the second parameter, everything become an arguments/parameters for that executable.\nTo run multiple commands in EXEC format\nCMD [\"/bin/sh\", \"-c\", \"executable_first arg1; executable_second\"]\nCMD [\"/bin/sh\", \"-c\", \"executable_first arg1; executable_second\"]\n\nIn above command, we have used shell command as executable to run the command. This is the only way to run multiple commands in EXEC format.\nFollowing are WRONG\nCMD [\"executable_first parameter\", \"executable_second parameter\"]\nENTRYPOINT [\"executable_first parameter\", \"executable_second parameter\"]\n\nCMD [\"executable_first\", \"parameter\", \";\", \"executable_second\", \"parameter\"]\nENTRYPOINT [\"executable_first\", \"parameter\", \";\", \"executable_second\", \"parameter\"]\n\n", "\nCan I run multiple programs in a Docker container?\n\nYes. But with significant risks.\n\nBelow is the same answer as above. But with details and a recommended resolution. If you're interested in those.\nNot Recommended\nWarning. Using the same container for multiple services is not recommended by the Docker community, though. 
The Docker documentation reads: \"It is generally recommended that you separate areas of concern by using one service per container.\" Source at:\n• https://archive.ph/3Roa6#selection-307.2-307.100\n• https://docs.docker.com/config/containers/multi-service_container/\nIf you choose to ignore the recommendation above, your container risks weaker security, more instability, and painful growth down the road.\nIf you are OK with those risks, the documentation for running multiple services in one container is at:\n• https://archive.ph/3Roa6#selection-335.0-691.1\n• https://docs.docker.com/config/containers/multi-service_container/\nRecommended\nIf you need containers with stronger security, more stability, better performance, and room to scale in the future, then the Docker community recommends these two steps:\n\nUse one service per Docker container. The end result is that you will have multiple containers.\n\nUse Docker's \"Networking\" feature to connect any of those containers to your liking.\n\n\n" ]
[ 133, 29, 24, 14, 7, 5, 3, 1, 0, 0 ]
[]
[]
[ "docker" ]
stackoverflow_0019948149_docker.txt
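Since the question is specifically a Flask app plus MongoDB, here is the wrapper-script pattern from the Docker docs answer re-expressed as a Python entrypoint. It is a hedged sketch, not a recommendation over one service per container: the mongod flags and the app.py name are assumptions to adapt to your image, and the script deliberately exits, and therefore stops the container, as soon as either child process dies.

import subprocess
import sys
import time

# Start both services as children of this entrypoint process.
procs = {
    "mongod": subprocess.Popen(["mongod", "--bind_ip", "127.0.0.1"]),
    "flask": subprocess.Popen([sys.executable, "app.py"]),  # your Flask app
}

try:
    while True:
        for name, proc in procs.items():
            code = proc.poll()  # None while the process is still running
            if code is not None:
                print(f"{name} exited with status {code}; stopping container")
                sys.exit(code or 1)
        time.sleep(5)
finally:
    # Best-effort cleanup of whichever child is still alive.
    for proc in procs.values():
        if proc.poll() is None:
            proc.terminate()

In the Dockerfile this would run as CMD ["python", "entrypoint.py"], with entrypoint.py being this script.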
Q: Could not find a generator for route error when using named routes I'm trying to use CupertinoTabBar together with named routes. I tried adding a CupertinoTabScaffold and have the tab bar with the tab view in it, but when the app push's the app to a different route, the app throws an error: FlutterError (Could not find a generator for route RouteSettings("/randomONE", null) in the _CupertinoTabViewState void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return ChangeNotifierProvider( create: (context) => SomeProvider(), child: MaterialApp( title: 'Cool Title', home: CupertinoTabScaffold( tabBar: CupertinoTabBar( items: [ BottomNavigationBarItem(title: Text('First Page'), icon: Icon(Icons.access_alarms)), BottomNavigationBarItem(title: Text('Second Page'), icon: Icon(Icons.account_balance)), BottomNavigationBarItem(title: Text('Third Page'), icon: Icon(Icons.accessible)), ], ), tabBuilder: (context, index) { CupertinoTabView selectedView; switch (index) { case 0: selectedView = CupertinoTabView(builder: (context) { return CupertinoPageScaffold(child: FirstPage()); }); break; default: } return selectedView; }), initialRoute: NamedRoutes.splashScreen, routes: { '/first': (BuildContext context) => FirstPage(), '/second': (BuildContext context) => SecondPage(), '/randomONE': (BuildContext context) => ThirdPage(), }, ), ); } } A: You simply have to complete your switch: switch (index) { case 0: selectedView = CupertinoTabView(builder: (context) { return CupertinoPageScaffold(child: FirstPage()); }); break; case 1: selectedView = CupertinoTabView(builder: (context) { return CupertinoPageScaffold(child: SecondPage()); }); break; case 2: selectedView = CupertinoTabView(builder: (context) { return CupertinoPageScaffold(child: ThirdPage()); }); break; default: } A: Use routes or onGenerateRoutes poperty in CupertinoTabView of CupertinoTabBar and then pass your routes here. Your issue will be solved.
Could not find a generator for route error when using named routes
I'm trying to use CupertinoTabBar together with named routes. I tried adding a CupertinoTabScaffold and have the tab bar with the tab view in it, but when the app push's the app to a different route, the app throws an error: FlutterError (Could not find a generator for route RouteSettings("/randomONE", null) in the _CupertinoTabViewState void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return ChangeNotifierProvider( create: (context) => SomeProvider(), child: MaterialApp( title: 'Cool Title', home: CupertinoTabScaffold( tabBar: CupertinoTabBar( items: [ BottomNavigationBarItem(title: Text('First Page'), icon: Icon(Icons.access_alarms)), BottomNavigationBarItem(title: Text('Second Page'), icon: Icon(Icons.account_balance)), BottomNavigationBarItem(title: Text('Third Page'), icon: Icon(Icons.accessible)), ], ), tabBuilder: (context, index) { CupertinoTabView selectedView; switch (index) { case 0: selectedView = CupertinoTabView(builder: (context) { return CupertinoPageScaffold(child: FirstPage()); }); break; default: } return selectedView; }), initialRoute: NamedRoutes.splashScreen, routes: { '/first': (BuildContext context) => FirstPage(), '/second': (BuildContext context) => SecondPage(), '/randomONE': (BuildContext context) => ThirdPage(), }, ), ); } }
[ "You simply have to complete your switch:\nswitch (index) {\n case 0:\n selectedView = CupertinoTabView(builder: (context) {\n return CupertinoPageScaffold(child: FirstPage());\n });\n break;\n case 1:\n selectedView = CupertinoTabView(builder: (context) {\n return CupertinoPageScaffold(child: SecondPage());\n });\n break;\n case 2:\n selectedView = CupertinoTabView(builder: (context) {\n return CupertinoPageScaffold(child: ThirdPage());\n });\n break;\n default:\n}\n\n\n", "Use routes or onGenerateRoutes poperty in CupertinoTabView of CupertinoTabBar and then pass your routes here. Your issue will be solved.\n" ]
[ 0, 0 ]
[]
[]
[ "flutter" ]
stackoverflow_0063914098_flutter.txt
Q: Change a ClassName dynamically in React I want to change a class dynamically and this is the component:
import classes from "./Board.module.css"

const Card = (props) => {
  const itemClass = "card" + (props.item.stat ? " active " + props.item.stat : "");
  console.log(itemClass);
  return (
    <div className={classes.itemClass} onClick={() => props.clickHandler(props.id)}>
      <label>{props.item.content}</label>
    </div>
  );
};
export default Card;

itemClass is a class name I want to change dynamically, and this is the CSS file:
.card{
    background-color: #fff;
    display: flex;
    justify-content: center;
    align-items: center;
    border-radius: 5px;
    transform: rotateY(180deg);
    animation: 1s hideCard linear;
    transition: transform 0.5s;
}

.card.wrong{
    background-color: red;
}
.card.correct{
    background-color: green;
}
.card.active{
    transform: rotateY(0);
}
.card.active label{
    transform: scale(1);
}

But I can't seem to change the class name: only classes.card works, and I can't switch to the other classes in the CSS. I hope someone can help me, please. Thank you.
A: To change a class name dynamically in this component, you need to look each class name up on the imported classes object; CSS modules rewrite the names, so raw strings like "card" or "active" will not match the generated class names. For example, you can update the line where you define the itemClass variable as follows:
const itemClass = `${classes.card}${props.item.stat ? ` ${classes.active} ${classes[props.item.stat]}` : ""}`;

Then pass the variable directly to the className attribute of the div element:
<div className={itemClass} onClick={() => props.clickHandler(props.id)}>

This will allow you to dynamically change the class name based on the value of the props.item.stat property. I hope this helps!
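If several conditional classes pile up, a tiny helper keeps the lookup readable; cx below is a hypothetical name, not part of the question's code:
// Hypothetical helper: join only the truthy class names.
const cx = (...parts) => parts.filter(Boolean).join(" ");

const itemClass = cx(
  classes.card,
  props.item.stat && classes.active,
  props.item.stat && classes[props.item.stat] // looks up "wrong" or "correct"
);
// <div className={itemClass} onClick={() => props.clickHandler(props.id)}>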
Change a ClassName dynamically in React
I want to change a class dynamically and this is the component:
import classes from "./Board.module.css"

const Card = (props) => {
  const itemClass = "card" + (props.item.stat ? " active " + props.item.stat : "");
  console.log(itemClass);
  return (
    <div className={classes.itemClass} onClick={() => props.clickHandler(props.id)}>
      <label>{props.item.content}</label>
    </div>
  );
};
export default Card;

itemClass is a class name I want to change dynamically, and this is the CSS file:
.card{
    background-color: #fff;
    display: flex;
    justify-content: center;
    align-items: center;
    border-radius: 5px;
    transform: rotateY(180deg);
    animation: 1s hideCard linear;
    transition: transform 0.5s;
}

.card.wrong{
    background-color: red;
}
.card.correct{
    background-color: green;
}
.card.active{
    transform: rotateY(0);
}
.card.active label{
    transform: scale(1);
}

But I can't seem to change the class name: only classes.card works, and I can't switch to the other classes in the CSS. I hope someone can help me, please. Thank you.
[ "To change a class name dynamically in this component, you can use a template string to reference the classes object and the desired class name. For example, you can update the line where you define the itemClass variable as follows:\nconst itemClass = `card${props.item.stat ? \" active \" + props.item.stat : \"\"}`;\n\nThen, when you use the itemClass variable in the className attribute of the div element, you can use the same template string syntax to reference it. For example:\n<div className={`${classes.${itemClass}}`} onClick={() => props.clickHandler(props.id)}>\n\n{props.item.content}\nThis will allow you to dynamically change the class name based on the value of the props.item.stat property. I hope this helps!\n" ]
[ 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074669934_reactjs.txt
Q: How to fix "TypeError: (0 , _web.default) is not a function" in Express I have a problem with Express. This is my code import express from "express"; import bodyParser from "body-parser"; import { configViewEngine } from "./config/viewEngine.js"; import { initWebRoutes } from './route/web.js'; import connectDB from "./config/connectDB.js"; import cors from 'cors'; require('dotenv').config(); let app = express(); app.use(cors({ origin: true })); app.use(bodyParser.json({ limit: '50mb' })); app.use(bodyParser.urlencoded({ limit: '50mb', extended: true })); configViewEngine(app); initWebRoutes(app); connectDB(); let port = process.env.PORT || 7070; //if port is undefined, default to current 7070 app.listen(port, () => { //callback console.log("BE NodeJS is running on the port" + port); }) I run project and receive a bug with content: "TypeError: (0 , _web.default) is not a function" I'm expecting my code work normally A: It looks like you're using the default import syntax in your code, but you're not using the default property when calling the function. // Replace this line: import { initWebRoutes } from './route/web.js'; // With this: import initWebRoutes from './route/web.js';
How to fix "TypeError: (0 , _web.default) is not a function" in Express
I have a problem with Express. This is my code:
import express from "express";
import bodyParser from "body-parser";
import { configViewEngine } from "./config/viewEngine.js";
import { initWebRoutes } from './route/web.js';
import connectDB from "./config/connectDB.js";
import cors from 'cors';
require('dotenv').config();

let app = express();
app.use(cors({ origin: true }));

app.use(bodyParser.json({ limit: '50mb' }));
app.use(bodyParser.urlencoded({ limit: '50mb', extended: true }));

configViewEngine(app);
initWebRoutes(app);
connectDB();

let port = process.env.PORT || 7070;
//if port is undefined, default to current 7070

app.listen(port, () => {
    //callback
    console.log("BE NodeJS is running on the port" + port);
})

When I run the project I receive an error that says: "TypeError: (0 , _web.default) is not a function"
I'm expecting my code to work normally.
[ "It looks like you're using the default import syntax in your code, but you're not using the default property when calling the function.\n// Replace this line:\nimport { initWebRoutes } from './route/web.js';\n\n// With this:\nimport initWebRoutes from './route/web.js';\n\n" ]
[ 0 ]
[]
[]
[ "express", "javascript", "node.js" ]
stackoverflow_0074668964_express_javascript_node.js.txt
Q: User.Identity.Name is null for Blazor Wasm hosted in asp.net core User.Identity.Name is null for Blazor Wasm hosted in asp.net core. There is no such claim as name or email. I'm using the default template of Visual Studio.
services.AddIdentityServer().AddApiAuthorization<ApplicationUser, ApplicationDbContext>();

services.AddAuthentication().AddIdentityServerJwt();

Am I missing something? Let me know if you need more information.
A: Your code does not make any real sense. You don't host IdentityServer inside a blazor application, instead you have a separate dedicated service that implements OpenIDConnect and that can authenticate users and give out tokens.
In a blazor application you typically have this type of skeleton to configure authentication:
services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
}).AddCookie(opt =>
    //Add configuration here
{
}).AddOpenIdConnect(options =>
{
    //Add configuration here
});

See this article for details:
How do I implement Blazor authentication with OpenID Connect?
A: I had been struggling with the same issue for a few months already. After trying many approaches, I finally came up with the solution.

using System.Linq;

services.AddIdentityServer().AddApiAuthorization<ApplicationUser, 
  ApplicationDbContext>(opt =>
  {
    opt.IdentityResources["openid"].UserClaims.Add("name");
    opt.ApiResources.Single().UserClaims.Add("name");
  });

services.AddAuthentication().AddIdentityServerJwt();

Please let me know if this solves the issue. Thank you :)
A: I had this same issue in an API controller I added to my Blazor WASM hosted server project. Adding this to my program.cs file is what fixed it for me:
services.Configure<IdentityOptions>(options => options.ClaimsIdentity.UserIdClaimType = ClaimTypes.NameIdentifier);
Source: https://github.com/dotnet/AspNetCore.Docs/issues/17517
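Once the "name" claim is mapped (second answer), a sketch of reading it in a hosted-WASM API controller; this is a minimal illustration, not the template's exact code:
[Authorize]
[ApiController]
[Route("[controller]")]
public class WhoAmIController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // Fall back to the raw "name" claim if Identity.Name is not mapped.
        var name = User.Identity?.Name ?? User.FindFirst("name")?.Value;
        return Ok(name ?? "anonymous");
    }
}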
User.Identity.Name is null for Blazor Wasm hosted in asp.net core
User.Identity.Name is null for Blazor Wasm hosted in asp.net core. There is no such claim as name or email. I'm using default template of Visual Studio. services.AddIdentityServer().AddApiAuthorization<ApplicationUser, ApplicationDbContext>(); services.AddAuthentication().AddIdentityServerJwt(); Am I missing something? Let me know if you need more information.
[ "Your code does not make any real sense. You don't host IdentityServer inside a blazor application, instead you have a separate dedicated services that implements OpenIDConnect and that can authenticate users and give out tokens.\nIn a blazor application you typically have this type of skeleton to configure authentication:\nservices.AddAuthentication(options =>\n{\n options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;\n options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;\n}).AddCookie(opt =>\n //Add configuration here\n{\n}).AddOpenIdConnect(options =>\n{\n //Add configuration here\n});\n\nSee this article for details:\nHow do I implement Blazor authentication with OpenID Connect?\n", "I had been struggling with the same issue for few months already. After try many approaches, finally I come out with the solution.\n\nusing System.Linq;\n\nservices.AddIdentityServer().AddApiAuthorization<ApplicationUser, \n ApplicationDbContext>(opt =>\n opt.IdentityResources[\"openid\"].UserClaims.Add(\"name\");\n opt.ApiResources.Single().UserClaims.Add(\"name\");\n });\n\nservices.AddAuthentication().AddIdentityServerJwt();\n\nPlease let me know if this solve the issue. Thank you :)\n", "I had this same issue in an API controller I added to my Blazor WASM hosted server project. Adding this to my program.cs file is what fixed it for me:\nservices.Configure<IdentityOptions>(options => options.ClaimsIdentity.UserIdClaimType = ClaimTypes.NameIdentifier);\nSource: https://github.com/dotnet/AspNetCore.Docs/issues/17517\n" ]
[ 0, 0, 0 ]
[]
[]
[ "asp.net", "c#", "identityserver4" ]
stackoverflow_0070027764_asp.net_c#_identityserver4.txt
Q: Run JavaFX program I just finished a JavaFX program with MySQL and want to run it on another machine. For example, convert it to .exe; or what else can I do?
A: There are some solutions for this.
If the 2 computers can reach each other (i.e. are on the same network) you could just use the already existing MySQL DB and open that one up for the other PC. Then continue to use that DB on the other PC.
This has the effect that you will always have the exact same state on both machines, but it requires the machine with the DB to always be reachable.
Another solution would be to dump the DB and use the dump on the other machine to restore the state. That way the machines can use their own DBs, but their states will diverge if you use both at the same time.
So you can only use your app on 1 machine if you care about keeping the same state on both machines.
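Hedged examples of both parts: a MySQL dump/restore round-trip for the answer's second option, plus jpackage (JDK 14+) for the ".exe" part of the question. All names and paths below are placeholders, and on Windows jpackage's exe/msi types need the WiX toolset installed:
# Dump the database on machine A, restore it on machine B (placeholder names).
mysqldump -u root -p myappdb > app_dump.sql
mysql -u root -p myappdb < app_dump.sql

# Package the JavaFX app as a Windows executable with jpackage (JDK 14+).
jpackage --type exe --name MyApp --input target/ --main-jar myapp.jar \
         --module-path path/to/javafx-mods --add-modules javafx.controls,javafx.fxml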
Run JavaFX program
I just finished a JavaFX program with MySQL and want to run it on another machine. like, convert it to .exe, or what can I do?
[ "There are some solutions for this\nIf the 2 computers can reach each other (i.e. are on the same network) you could just use the already existing MySQL-DB and open that one for the other PC. Then continue to use that DB on the other PC.\nThis has the effect that you will always the exact same state on both machines, but requires the machine with the DB to always reachable.\nAnother solution would be to dump the DB and use the dump on the other machine to restore the state. That way the machines can use their own DB's, but their state will diverge if you use both at the same time.\nSo you can only use your app on 1 machine, if you care about keeping the same state on both machines.\n" ]
[ 0 ]
[]
[]
[ "java", "javafx", "mysql", "netbeans" ]
stackoverflow_0074669765_java_javafx_mysql_netbeans.txt
Q: The event in the fullcalendar is not added to the next year When I add an event for the current year, everything works. But when I add one on any date of the next year, it is not added, and this error appears in the console:
plugins.bundle.js:54 Uncaught TypeError: Cannot set properties of undefined (setting 'tabIndex')
    at V (plugins.bundle.js:54:16706)
    at I (plugins.bundle.js:54:12928)
    at L.b.setDate (plugins.bundle.js:54:32849)
    at populateForm (calendar.js?80967:931:22)
    at handleNewEvent (calendar.js?80967:438:9)
    at t.select (calendar.js?80967:145:17)
    at e.trigger (fullcalendar.bundle.js:20:82181)
    at mr (fullcalendar.bundle.js:20:48529)
    at n.handlePointerUp (fullcalendar.bundle.js:20:188907)
    at e.trigger (fullcalendar.bundle.js:20:82181)

I did not find information on this error on the Internet. I tried to change the order of the script imports, but it does not help. I am using fullcalendar with the Metronic template.
A: It is not clear from the code snippet you provided where exactly this happens, but the error message is saying that the code is trying to set the tabIndex property of an undefined object.
To fix this error, you will need to find the code in the plugins.bundle.js file that is setting the tabIndex property and make sure that the object it is trying to set the property on is defined.
The event in the fullcalendar is not added to the next year
When I add an event for the current year, everything works. But when I add one on any date of the next year, it is not added, and this error appears in the console:
plugins.bundle.js:54 Uncaught TypeError: Cannot set properties of undefined (setting 'tabIndex')
    at V (plugins.bundle.js:54:16706)
    at I (plugins.bundle.js:54:12928)
    at L.b.setDate (plugins.bundle.js:54:32849)
    at populateForm (calendar.js?80967:931:22)
    at handleNewEvent (calendar.js?80967:438:9)
    at t.select (calendar.js?80967:145:17)
    at e.trigger (fullcalendar.bundle.js:20:82181)
    at mr (fullcalendar.bundle.js:20:48529)
    at n.handlePointerUp (fullcalendar.bundle.js:20:188907)
    at e.trigger (fullcalendar.bundle.js:20:82181)

I did not find information on this error on the Internet. I tried to change the order of the script imports, but it does not help. I am using fullcalendar with the Metronic template.
[ "It is not clear from the code snippet you provided how to do this but the error message is saying that the code is trying to set the tabIndex property of an undefined object.\nTo fix this error, you will need to find the code in the plugins.bundle.js file that is setting the tabIndex property and make sure that the object it is trying to set the property on is defined.\n" ]
[ 0 ]
[]
[]
[ "fullcalendar", "javascript", "metronic" ]
stackoverflow_0074670063_fullcalendar_javascript_metronic.txt
Q: Formsubmit.co form not working in reactapp I added the formsubmit service to my email form on my React application, a simple personal website, but it does not seem to be working. It does nothing. Does the extra JavaScript in there mess with the formsubmit service? I actually put my real email in; I just changed it for this post.
import React, { useState } from 'react';
import './styles.contactform.css';

function ContactForm() {
    const [name, setName] = useState('');
    const [email, setEmail] = useState('');
    const [message, setMessage] = useState('');

    const handleSubmit = (e) => {
        e.preventDefault();
        console.log(name, email, message)
        setName('')
        setEmail('')
        setMessage('')
    }

    return (
        <div className="contact-container">
            <form className="contact-form" action="https://formsubmit.co/[email protected]" method="POST" onSubmit={handleSubmit}>
                <input type="hidden" name="_next" value="https://notmicahclark.herokuapp.com/"/>
                <input type="text" value={name} id="name" name="name" placeholder="Name..." onChange={(e) => setName(e.target.value)} required/>
                <input type="email" value={email} id="email" name="email" placeholder="Email..." onChange={(e) => setEmail(e.target.value)} required/>
                <img className="letter-img" src="contact_form_imagepng.png" alt="Mail Letter"/>
                <input id="message" value={message} name="message" placeholder="Your message..." onChange={(e) => setMessage(e.target.value)}/>
                <button type="submit" value="Submit">Submit</button>
            </form>
        </div>
    );
}

export default ContactForm;

A: To overcome this issue, you can use the AJAX form feature provided by FormSubmit.
According to their documentation:

You can easily submit the form using AJAX without ever having your
users leave the page. This even works cross-origin.

Please refer to their documentation about AJAX forms (sample code snippets are available): https://formsubmit.co/ajax-documentation
A: I've tested your code. The problem is that you are both using the form's action and, at the same time, preventing the form's default behavior with the following line:
e.preventDefault();

Furthermore, even removing that line won't fix it, since React seems to ignore the method and action parameters of the form when there is an onSubmit present. Therefore, remove the:
onSubmit={handleSubmit}

for your code to work. Or, as previously stated by user Kesara, handle it with AJAX.
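A sketch of the AJAX route from the first answer, posting JSON to FormSubmit's documented /ajax/ endpoint from the question's own handleSubmit; swap in your real email, and note error handling is omitted for brevity:
const handleSubmit = async (e) => {
    e.preventDefault();
    // FormSubmit's AJAX endpoint: https://formsubmit.co/ajax/<your-email>
    const res = await fetch("https://formsubmit.co/ajax/[email protected]", {
        method: "POST",
        headers: { "Content-Type": "application/json", Accept: "application/json" },
        body: JSON.stringify({ name, email, message }),
    });
    console.log(await res.json());
    setName(''); setEmail(''); setMessage('');
};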
Formsubmit.co form not working in reactapp
I added formsubmit service to my email form on my react application simple personal website. but it does not seem to be working. It does nothing. Does the extra javascript in there mess with the formsubmit service? I actually put my real email in just changed it for this post. import React, { useState } from 'react'; import './styles.contactform.css'; function ContactForm() { const [name, setName] = useState(''); const [email, setEmail] = useState(''); const [message, setMessage] = useState(''); const handleSubmit = (e) => { e.preventDefault(); console.log(name, email, message) setName('') setEmail('') setMessage('') } return ( <div className="contact-container"> <form className="contact-form" action="https://formsubmit.co/[email protected]" method="POST" onSubmit={handleSubmit}> <input type="hidden" name="_next" value="https://notmicahclark.herokuapp.com/"/> <input type="text" value={name} id="name" name="name" placeholder="Name..." onChange={(e) => setName(e.target.value)} required/> <input type="email" value={email} id="email" name="email" placeholder="Email..." onChange={(e) => setEmail(e.target.value)} required/> <img className="letter-img" src="contact_form_imagepng.png" alt="Mail Letter"/> <input id="message" value={message} name="message" placeholder="Your message..." onChange={(e) => setMessage(e.target.value)}/> <button type="submit" value="Submit">Submit</button> </form> </div> ); } export default ContactForm;
[ "To overcome this issue, you can use the AJAX form feature provided by FormSubmit.\nAccording to their documentation:\n\nYou can easily submit the form using AJAX without ever having your\nusers leave the page. — this even works cross-origin.\n\nPlease refer to their documentation about AJAX forms (sample code snippets are available): https://formsubmit.co/ajax-documentation\n", "I've tested your code. The problem is that you are both using the form's action, and at the same time preventing the forms default behavior with the following line:\ne.preventDefault();\n\nFurthermore, even removing that line won't fix it, since React seems to ignore the method and action parameters in the form when there is a onSubmit present. Therefore, remove the:\nonSubmit={handleSubmit}\n\nFor your code to work. Or, as previously stated by user Kesara, handle it with AJAX.\n" ]
[ 2, 0 ]
[]
[]
[ "form_submit", "forms", "reactjs" ]
stackoverflow_0066491991_form_submit_forms_reactjs.txt
Q: How do I assign a timeout variable to the tag in Vuetify? I would like to display an alert box which notifies the user about something. I would like it to disappear after 5 seconds even if the user didn't acknowledge it. I already tried the timeout and :timeout attributes, but none of those seem to work, and according to the Vuetify docs they don't even exist on the tag, so I'm clueless.
Template:
<div>
  <v-alert
    :value="alert"
    v-model="alert"
    dismissible
    color="blue"
    border="left"
    elevation="2"
    colored-border
    icon="mdi-information"
  >Registration successful!</v-alert>
</div>
<div class="text-center">
  <v-dialog v-model="dialog" width="500">
    <template v-slot:activator="{ on }">
      <v-btn color="red lighten-2" dark v-on="on">Click Me</v-btn>
    </template>

    <v-card>
      <v-card-title class="headline grey lighten-2" primary-title>Privacy Policy</v-card-title>

      <v-card-text>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</v-card-text>

      <v-divider></v-divider>

      <v-card-actions>
        <div class="flex-grow-1"></div>
        <v-btn color="primary" text v-if="!alert" @click="dialog = false, alert">I accept</v-btn>
      </v-card-actions>
    </v-card>
  </v-dialog>
</div>

Script:
import Vue from "vue";

export default {
  data() {
    return {
      alert: false,
      dialog: false
    };
  },
  created() {
    setTimeout(() => {
      this.alert = false
    }, 5000)
  }
};

A: In the created hook, add a 5-second timeout that updates the alert property to false:
new Vue({
  el: '#app',
  vuetify: new Vuetify(),
  data(){
    return{
      alert: true,
    }
  },
  created(){
    setTimeout(()=>{
      this.alert=false
    },5000)
  }
})

In the template, bind v-alert's value prop to the alert data property:
<div id="app">
  <v-app id="inspire">
    <div>
      <v-alert type="success" :value="alert">
        I'm a success alert.
      </v-alert>

    </div>
  </v-app>

check this pen
A: You could watch for changes to the alert property and set a timeout whenever the alert is set to true, i.e. when the alert is shown.
import Vue from "vue";

export default {
  data() {
    return {
      alert: false,
      dialog: false
    };
  },
  watch: {
    alert(new_val){
      if(new_val){
        setTimeout(()=>{this.alert=false},3000)
      }
    } 
  }
};

A: I recommend you use a Snackbar instead; that's what Vuetify calls it.
Then you just add this prop:
:timeout="2000"

https://vuetifyjs.com/en/components/snackbars/#timeout
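A minimal sketch of the v-snackbar route from the last answer, reusing the question's 5-second requirement; the component shape here is illustrative:
<template>
  <v-snackbar v-model="snackbar" :timeout="5000">
    Registration successful!
  </v-snackbar>
</template>

<script>
export default {
  data() {
    return { snackbar: false }; // set to true when the user clicks "I accept"
  },
};
</script>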
How do I assign a timeout variable to the tag in Vuetify?
I would like to display an alert box which notifies the user about something. I would like it to disappear after 5 seconds even if the user didn't acknowledge it. I already tried timeout and :timeout attributes but none of those seem to work and according to Vuetify docs they don't even exist in the tag so I'm clueless. Template: <div> <v-alert :value="alert" v-model="alert" dismissible color="blue" border="left" elevation="2" colored-border icon="mdi-information" >Registration successful!</v-alert> </div> <div class="text-center"> <v-dialog v-model="dialog" width="500"> <template v-slot:activator="{ on }"> <v-btn color="red lighten-2" dark v-on="on">Click Me</v-btn> </template> <v-card> <v-card-title class="headline grey lighten-2" primary-title>Privacy Policy</v-card-title> <v-card-text>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</v-card-text> <v-divider></v-divider> <v-card-actions> <div class="flex-grow-1"></div> <v-btn color="primary" text v-if="!alert" @click="dialog = false, alert">I accept</v-btn> </v-card-actions> </v-card> </v-dialog> </div> Script: import Vue from "vue"; export default { data() { return { alert: false, dialog: false }; }, created() { setTimeout(() => { this.alert = false }, 5000) } };
[ "In created hook add timeout range with 5s that update alert property with false:\nnew Vue({\n el: '#app',\n vuetify: new Vuetify(),\n data(){\n return{\n alert: true,\n }\n },\n created(){\n setTimeout(()=>{\n this.alert=false\n },5000)\n }\n})\n\nin template bind v-alert's value prop to alert data property :\n<div id=\"app\">\n <v-app id=\"inspire\">\n <div>\n <v-alert type=\"success\" :value=\"alert\">\n I'm a success alert.\n </v-alert>\n\n </div>\n </v-app>\n\ncheck this pen\n", "You could watch for changes to the alert property and set a timeout whenever the alert is set to true i.e. the alert is shown.\nimport Vue from \"vue\";\n\nexport default {\n data() {\n return {\n alert: false,\n dialog: false\n };\n },\n watch: {\n alert(new_val){\n if(new_val){\n setTimeout(()=>{this.alert=false},3000)\n }\n } \n }\n};\n\n", "I recommend you use Snackbar instead that's what Vuetify called it\nThen you just add this prop\n:timeout=\"2000\"\n\nhttps://vuetifyjs.com/en/components/snackbars/#timeout\n" ]
[ 5, 2, 0 ]
[ "try this\nsetInterval: 5000; \n\nthat should do the trick, its just like timeout!\nbut you still need to make a function for your timeout\nmaybe you should take a look on w3schools.com. and take a look with what you can do with alerts!\n" ]
[ -1 ]
[ "javascript", "vue.js", "vue_component", "vuejs2", "vuetify.js" ]
stackoverflow_0057957906_javascript_vue.js_vue_component_vuejs2_vuetify.js.txt
Q: Include configmap with non-managed helm chart I was wondering if it is possible to include a configmap with its own values.yml file with a helm chart repository that I am not managing locally. This way, I can uninstall the resource with the name of the chart.
Example: I am using New Relic's Helm chart repository and installing the helm charts using their repo name. I want to include a configmap used for infrastructure settings with the same helm deployment without having to use a kubectl apply to add it independently.
I also want to avoid having to manage the repo locally, as I am pinning the version and other values separately from the helm upgrade --install --set invocations.
A: What you could do is use Kustomize. Let me show you with an example that I use for my Prometheus installation.
I'm using the kube-prometheus-stack helm chart, but add some more custom resources like a SecretProviderClass.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: kube-prometheus-stack
    repo: https://prometheus-community.github.io/helm-charts
    version: 39.11.0
    releaseName: prometheus
    namespace: prometheus
    valuesFile: values.yaml
    includeCRDs: true

resources:
  - secretproviderclass.yaml


I can then build the Kustomize yaml by running kustomize build . --enable-helm from within the same folder as where my kustomization.yaml file is.
I use this with my gitops setup, but you can use this standalone as well.
My folder structure would look something like this:
.
├── kustomization.yaml
├── secretproviderclass.yaml
└── values.yaml

A: It is possible to include a configmap with its own values.yml file with a Helm chart that you are not managing locally. In Helm, a chart is a package of pre-configured Kubernetes resources. A chart may include a values.yml file that specifies default values for the chart's configuration. You can override these default values by providing your own values.yml file when you install the chart.
To include a configmap with a Helm chart that you are not managing locally, you can create the configmap in your own values.yml file and specify it as part of the resources section in the file. For example, your values.yml file might look something like this:
configmaps:
  my-configmap:
    data:
      my-key: my-value

resources:
  - name: my-configmap
    type: configmap

When you install the Helm chart, you can specify your own values.yml file using the --values flag. For example:
$ helm install my-chart --values my-values.yml

This will install the chart and include your configmap with the specified values. You can then manage the chart and the configmap together using Helm commands. To uninstall the chart and the associated configmap, you can use the helm uninstall command and specify the name of the chart. For example:
$ helm uninstall my-chart

This will uninstall the chart and any associated resources, including the configmap you included in the values.yml file.
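Another common pattern, as an alternative to the Kustomize answer: a thin local wrapper chart that declares the upstream chart as a dependency and adds your own ConfigMap template, so a single helm install/uninstall manages both. Chart names and the version below are placeholders; run helm dependency update before installing:
# Chart.yaml of the wrapper chart (only this tiny wrapper lives in your repo)
apiVersion: v2
name: newrelic-wrapper
version: 0.1.0
dependencies:
  - name: nri-bundle                     # placeholder: the upstream chart name
    version: "5.0.0"                     # pin the upstream chart version here
    repository: https://helm-charts.newrelic.com

# templates/infra-configmap.yaml of the same wrapper chart:
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: {{ .Release.Name }}-infra-settings
# data:
#   LOG_LEVEL: info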
Include configmap with non-managed helm chart
I was wondering if it is possible to include a configmap with its own values.yml file with a helm chart repository that I am not managing locally. This way, I can uninstall the resource with the name of the chart.
Example: I am using New Relic's Helm chart repository and installing the helm charts using their repo name. I want to include a configmap used for infrastructure settings with the same helm deployment without having to use a kubectl apply to add it independently.
I also want to avoid having to manage the repo locally, as I am pinning the version and other values separately from the helm upgrade --install --set invocations.
[ "What you could do is use Kustomize. Let me show you with an example that I use for my Prometheus installation.\nI'm using the kube-prometheus-stack helm chart, but add some more custom resources like a SecretProviderClass.\nkustomization.yaml:\napiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\n\nhelmCharts:\n - name: kube-prometheus-stack\n repo: https://prometheus-community.github.io/helm-charts\n version: 39.11.0\n releaseName: prometheus\n namespace: prometheus\n valuesFile: values.yaml\n includeCRDs: true\n\nresources:\n - secretproviderclass.yaml\n\n\nI can then build the Kustomize yaml by running kustomize build . --enable-helm from within the same folder as where my kustomization.yaml file is.\nI use this with my gitops setup, but you can use this standalone as well.\nMy folder structure would look something like this:\n.\n├── kustomization.yaml\n├── secretproviderclass.yaml\n└── values.yaml\n\n", "It is possible to include a configmap with its own values.yml file with a Helm chart that you are not managing locally. In Helm, a chart is a package of pre-configured Kubernetes resources. A chart may include a values.yml file that specifies default values for the chart's configuration. You can override these default values by providing your own values.yml file when you install the chart.\nTo include a configmap with a Helm chart that you are not managing locally, you can create the configmap in your own values.yml file and specify it as part of the resources section in the file. For example, your values.yml file might look something like this:\nconfigmaps:\n my-configmap:\n data:\n my-key: my-value\n\nresources:\n - name: my-configmap\n type: configmap\n\nWhen you install the Helm chart, you can specify your own values.yml file using the --values flag. For example:\n$ helm install my-chart --values my-values.yml\n\nThis will install the chart and include your configmap with the specified values. You can then manage the chart and the configmap together using Helm commands. To uninstall the chart and the associated configmap, you can use the helm uninstall command and specify the name of the chart. For example:\n$ helm uninstall my-chart\n\nThis will uninstall the chart and any associated resources, including the configmap you included in the values.yml file.\n" ]
[ 0, 0 ]
[]
[]
[ "kubernetes", "kubernetes_helm" ]
stackoverflow_0074602502_kubernetes_kubernetes_helm.txt
Q: Is it possible to inherit all operators in C++? One can inherit all constructors of a base class using the following code:
class DerivedClass : public BaseClass
{
public:
    using BaseClass::BaseClass;
    ...
};

Is it possible to inherit all operators in a similar way (one line)?
Particular code that does not compile:
#include <string>
#include <memory>
using std::string;

struct A : string
{
    using string::string;
};

int main()
{
    std::shared_ptr<string> str;
    *str = 'a';
    std::shared_ptr<A> a;
    *a = 'a'; // err: E0349 no operator "=" matches these operands
}

If I write the class like this, then the code above compiles and runs:
struct A : string
{
    using string::string;
    using string::operator=;
};

A: All member functions, and that includes operator overloads, are inherited following the regular inheritance rules for member functions.
Like @sklott mentioned, you need to explicitly pull in functions from the base class if they are shadowed in the derived class (as in: same name, but different arguments, etc.).
In your case, the default assignment operator of A shadows the assignment operators of string. Therefore those must be explicitly injected via using.
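A self-contained sketch of the hiding rule the answer describes, reduced to a toy base class; the names here are illustrative, not from the question:
#include <iostream>

struct Base {
    Base& operator=(int v) { value = v; return *this; }  // extra overload
    int value = 0;
};

struct Derived : Base {
    // Derived's implicitly declared copy assignment hides ALL Base::operator=.
    using Base::operator=;  // comment this out and `d = 42;` stops compiling
};

int main() {
    Derived d;
    d = 42;  // resolves to Base::operator=(int) thanks to the using-declaration
    std::cout << d.value << '\n';  // prints 42
}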
Is it possible to inherit all operators in C++?
One can inherit all constructors of a base class using the following code:
class DerivedClass : public BaseClass
{
public:
    using BaseClass::BaseClass;
    ...
};

Is it possible to inherit all operators in a similar way (one line)?
Particular code that does not compile:
#include <string>
#include <memory>
using std::string;

struct A : string
{
    using string::string;
};

int main()
{
    std::shared_ptr<string> str;
    *str = 'a';
    std::shared_ptr<A> a;
    *a = 'a'; // err: E0349 no operator "=" matches these operands
}

If I write the class like this, then the code above compiles and runs:
struct A : string
{
    using string::string;
    using string::operator=;
};
[ "All member functions, that includes operator overloads, are inheritted following the regular inheritance rules for member functions.\nLike @sklott mentioned, you need to implicitly pull in functions from the base class if they are shadowed in the derived (as in same name, but different arguments, etc).\nIn your case, the default assignment operator of A shadows the assignment operators of string. Therefore those must be explicitly injected via using.\n" ]
[ 0 ]
[]
[]
[ "c++", "class", "inheritance", "operators" ]
stackoverflow_0074669977_c++_class_inheritance_operators.txt
Q: How to upload image with Socket Ktor? I want to send an image over a socket. How can I do that with Ktor?
A: AFAIK, you cannot do an HTTP multipart upload over websockets directly.
A few things you can try.

Convert the image into a Base64 string or byte array on the client and send it to the websocket server.
One thing you may need to handle: if you use a byte array, you might need to deal with file headers inside it.

If you read the file from an image you can get the bytes like this and pass them to the websocket.
var arr = File(path).inputStream().use { it.readBytes() }

The downside of doing this is that if the image size is large it may not work as expected. And if your websocket sends it to multiple listening clients, it becomes quite an overhead and leads to some delay while sending the extra file bytes.

Another, usually better, approach is to upload the image to your server using an HTTP multipart upload (without the socket) and send the image URL over the socket. The URL can then be sent to the clients listening on that particular socket, and the image data is loaded by a client only when required.

If you send a byte array over the websocket for a big image, that websocket message will be much larger than sending the image URL.
The recommended approach will be method 2 in most cases, except for some specific use cases.
A: The easiest way to upload an image with Socket Ktor is by using MultipartFormData, which is a Ktor feature that allows you to easily upload files. You can use the following code snippet to upload an image:
val multipart = MultipartFormDataContent()
val filePart = FileDataPart("filename", "image.jpg")
multipart.addPart(filePart)
val request = HttpRequestBuilder().apply {
    url("http://your-server-url")
    method = HttpMethod.Post
    body = multipart
}.build()
val response = client.submitFormWithBinaryData(request, multipart)
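A sketch of the first answer's option 1 with Ktor's WebSocket client, sending the file as a single binary frame; it assumes the ktor client WebSockets plugin is installed, and the host, port, path and file path are placeholders:
import io.ktor.client.*
import io.ktor.client.plugins.websocket.*
import io.ktor.websocket.*
import java.io.File

suspend fun sendImage(client: HttpClient) {
    val bytes = File("/path/to/image.jpg").readBytes() // placeholder path
    client.webSocket(host = "example.com", port = 8080, path = "/upload") {
        // One binary frame; fine for small images, see the size caveats above.
        send(Frame.Binary(fin = true, data = bytes))
    }
}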
How to upload image with Socket Ktor?
I want to send image in Socket. How can I do that with Ktor?
[ "AFAIK , you cannot do like http multi part upload in websockets directly.\nSeveral things you can try.\n\nTry convert the image into Base64/Byte Array in client and send it to websocket server.\nSeveral things you may need to handle is, if you use byte array you might need to handle file headers in the byte array.\n\nIf you read file from image you can get like this and pass it to the websocket.\nvar arr = File(path).inputStream().use { it.readBytes() }\n\nDownside of doing this is , if image size is higher it may not work as expected. And if you websocket sends it to multiple listening clients, it will be quite overload and leads to some delay while sending additional data with file byte array.\n\nAnother best approach is to upload image to you sever using http multipart upload(Without socket) and send the image url to server. So url can be sent to clients listening to the particular socket. So the image data can be loaded in client only when required.\n\nIf you send byte array in webssocket for big image, the particular websocket response size will be higher than sending the image with image url.\nRecommended approach will be method 2 mostly , except some specific use cases.\n", "The easiest way to upload an image with Socket Ktor is by using MultipartFormData, which is a Ktor feature that allows you to easily upload files. You can use the following code snippet to upload an image:\nval multipart = MultipartFormDataContent()\nval filePart = FileDataPart(\"filename\", \"image.jpg\")\nmultipart.addPart(filePart)\nval request = HttpRequestBuilder().apply {\n url(\"http://your-server-url\")\n method = HttpMethod.Post\n body = multipart\n}.build()\nval response = client.submitFormWithBinaryData(request, multipart)\n\n" ]
[ 1, 0 ]
[]
[]
[ "kotlin", "ktor", "websocket" ]
stackoverflow_0074668286_kotlin_ktor_websocket.txt
Q: Player Notification Manager I use Jetpack Compose with Media3 and I have this bug that I need help fixing:
setChannelNameResourceId(R.string.notification_channel)
setChannelDescriptionResourceId(R.string.notification_channel_description)

API 33, Media3: notification_channel not found, notification_channel_description not found
A: You are trying to set the channel name and description for a notification in your Android app, but the resource IDs for those strings are not being found. This can happen if the string resources have not been properly added to your app's strings.xml file.
Make sure that the notification_channel and notification_channel_description strings are added to the strings.xml file, using the correct syntax:
<resources>
    <string name="notification_channel">My Channel</string>
    <string name="notification_channel_description">My description</string>
    ...
</resources>
Player Notification Manager
I use Jetpack Compose with Media3 and I have this bug that I need help fixing:
setChannelNameResourceId(R.string.notification_channel)
setChannelDescriptionResourceId(R.string.notification_channel_description)

API 33, Media3: notification_channel not found, notification_channel_description not found
[ "You are trying to set the channel name and description for a notification in your Android app, but the resource IDs for those strings are not being found. This can happen if the string resources have not been properly added to your app's strings.xml file.\nMake sure that the notification_channel and notification_channel_description strings are added to strings.xml file, using the correct syntax:\n<resources>\n <string name=\"notification_channel\">My Channel</string>\n <string name=\"notification_channel_description\">My description</string>\n ...\n</resources>\n\n" ]
[ 1 ]
[]
[]
[ "android", "android_jetpack_compose", "android_media3", "kotlin" ]
stackoverflow_0074669333_android_android_jetpack_compose_android_media3_kotlin.txt
Q: How can I use pandas datetools in Python because it was removed from newer versions How can I use pandas datetools in Python, given that it was removed from newer versions?
def convert_time(s):
    h, m, s = map(int, s.split(':'))
    return pd.datetools.timedelta(hours=h, minutes=m, seconds=s)
data = pd.read_csv('marathon-data.csv',converters={'split':convert_time, 'final':convert_time})
data.head()

A: You can replace your convert_time() function with pd.to_timedelta(). It is built in to Pandas and understands HH:MM:SS and similar formats.
A: Since pd.datetools.timedelta is deprecated, using datetime.timedelta is a possible workaround.
Try this updated code snippet:
import datetime

def convert_time(s):
    h, m, s = map(int, s.split(':'))
    return datetime.timedelta(hours=h, minutes=m, seconds=s)
data = pd.read_csv('marathon-data.csv',
                   converters={'split':convert_time, 'final':convert_time})
data.head()
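Concretely, the first answer's suggestion removes the custom converter entirely; this assumes the same marathon-data.csv columns as the question:
import pandas as pd

# pd.to_timedelta understands "HH:MM:SS" strings directly, so no parser is needed.
data = pd.read_csv('marathon-data.csv',
                   converters={'split': pd.to_timedelta,
                               'final': pd.to_timedelta})
print(data.head())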
How can I use pandas datetools in Python because it was removed from newer versions
How can I use pandas datetools in Python, given that it was removed from newer versions?
def convert_time(s):
    h, m, s = map(int, s.split(':'))
    return pd.datetools.timedelta(hours=h, minutes=m, seconds=s)
data = pd.read_csv('marathon-data.csv',converters={'split':convert_time, 'final':convert_time})
data.head()
[ "You can replace your convert_time() function with pd.to_timedelta(). It is built in to Pandas and understands HH:MM:SS and similar formats.\n", "Since pd.datetools.timedelta is deprecated, using datetime.timedelta is a possible workaround.\nTry this updated code snippet:\nimport datetime\n\ndef convert_time(s):\n h, m, s = map(int, s.split(':'))\n return datetime.timedelta(hours=h, minutes=m, seconds=s)\ndata = pd.read_csv('marathon-data.csv',\n converters={'split':convert_time, 'final':convert_time})\ndata.head()\n\n" ]
[ 2, 0 ]
[]
[]
[ "pandas", "python_3.x" ]
stackoverflow_0063084782_pandas_python_3.x.txt
Q: Error while creating the Oracle user account on Windows I'm following this guide: https://techdocs.broadcom.com/content/dam/broadcom/techdocs/symantec-security-software/information-security/data-loss-prevention/generated-pdfs/Symantec_DLP_15.x_Oracle19c_Implementation_Guide.pdf
When I try to create the Oracle user account for Symantec Data Loss Prevention on Windows I get the error you can find in the capture; I used the command sqlplus /nolog @oracle_create_user.sql and I got the following error:
c:\temp\Oracle\tools>sqlplus /nolog @oracle_create_user.sql

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Dec 3 15:39:51 2022
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Please enter the password for sys user:
Please enter Oracle service name:protect
Please enter required username to be created : protect
Please enter a password for the new username:
SP2-0306: Invalid option.
Usage: CONN[ECT] [{<logon>|/|<proxy>} [AS {SYSDBA|SYSOPER|SYSASM|SYSBACKUP|SYSDG|SYSKM|SYSRAC}] [edition=value]]
where <logon>  ::= <username>[/<password>][@<connect_identifier>]
      <proxy>  ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected

Does anyone know how I can solve this issue?
A: sqlplus /nolog @oracle_create_user.sql

You are connecting to ORACLE without logging-in (/nolog), so you can't execute a script under "not authenticated"...
If you are running on the server, try sqlplus / as SYSDBA or sqlplus SYS as SYSDBA, then if you are running under a recent ORACLE you should select the PDB with alter session set container = PDB_NAME;, then you can execute the script. You can list the PDBs with show pdbs. If you execute the script without selecting a PDB you will create the user at the CDB level, probably not what is expected.
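A sketch of the answer's sequence on the database server; the PDB name is a placeholder, so use whatever show pdbs actually lists:
-- From the server, authenticate via OS authentication first:
--   sqlplus / as sysdba
SHOW PDBS;                                    -- list the pluggable databases
ALTER SESSION SET CONTAINER = ORCLPDB1;       -- placeholder PDB name
@c:\temp\Oracle\tools\oracle_create_user.sql  -- now run the Symantec script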
Error while creating the Oracle user account on Windows
I'm following this guide: https://techdocs.broadcom.com/content/dam/broadcom/techdocs/symantec-security-software/information-security/data-loss-prevention/generated-pdfs/Symantec_DLP_15.x_Oracle19c_Implementation_Guide.pdf
When I try to create the Oracle user account for Symantec Data Loss Prevention on Windows I get the error you can find in the capture; I used the command sqlplus /nolog @oracle_create_user.sql and I got the following error:
c:\temp\Oracle\tools>sqlplus /nolog @oracle_create_user.sql

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Dec 3 15:39:51 2022
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Please enter the password for sys user:
Please enter Oracle service name:protect
Please enter required username to be created : protect
Please enter a password for the new username:
SP2-0306: Invalid option.
Usage: CONN[ECT] [{<logon>|/|<proxy>} [AS {SYSDBA|SYSOPER|SYSASM|SYSBACKUP|SYSDG|SYSKM|SYSRAC}] [edition=value]]
where <logon>  ::= <username>[/<password>][@<connect_identifier>]
      <proxy>  ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected

Does anyone know how I can solve this issue?
[ "sqlplus /nolog @oracle_create_user.sql\n\nYou are connecting to ORACLE without logging-in (/nolog), so you can't execute a script under \"not authenticated\"...\nIf you are running on the server, try sqlplus / as SYSDBA or sqlplus SYS as SYSDBA, then if you are running under a recent ORACLE you should select the PDB with alter session set container = PDB_NAME;, then you can execute the script. You can list the PDBs with show pdbs. If you execute the script without selecting a PDB you will create the user at the CDB level, probably not what is expected.\n" ]
[ 0 ]
[]
[]
[ "authentication", "connection", "oracle", "sqlplus" ]
stackoverflow_0074667218_authentication_connection_oracle_sqlplus.txt
Q: How to set time only in this format (hh:mm:ss) 00:00:00 with all values 0 I'm able to set the time in other formats, but not in this format; please help me. I tried all the methods I know of, but I'm not able to set the time with the set() methods.
A: const date = new Date(0, 0, 0, 0, 0, 0);

This creates a new Date object with the year, month, day, hours, minutes, and seconds all set to 0. This will also result in a time of 00:00:00.
console.log(date)

// Sun Dec 31 1899 00:00:00 GMT-0800 (Pacific Standard Time)
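If the goal is the literal string "00:00:00", a small formatting sketch on top of that Date:
const date = new Date(0, 0, 0, 0, 0, 0);

// The en-GB locale prints 24-hour "hh:mm:ss"; padStart is the manual fallback.
console.log(date.toLocaleTimeString('en-GB')); // "00:00:00"

const pad = (n) => String(n).padStart(2, '0');
const hhmmss = `${pad(date.getHours())}:${pad(date.getMinutes())}:${pad(date.getSeconds())}`;
console.log(hhmmss); // "00:00:00"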
How to set time only in this format (hh:mm:ss) 00:00:00 with all values 0
I'm able to set the time in other formats, but not in this format; please help me. I tried all the methods I know of, but I'm not able to set the time with the set() methods.
[ "const date = new Date(0, 0, 0, 0, 0, 0);\n\nThis creates a new Date object with the year, month, day, hours, minutes, and seconds all set to 0. This will also result in a time of 00:00:00.\nconsole.log(date)\n\n// Sun Dec 31 1899 00:00:00 GMT-0800 (Pacific Standard Time)\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "javascript" ]
stackoverflow_0074666132_datetime_javascript.txt
Q: Could not find a generator for route RouteSettings("/GeneralAnnouncements", null) in the _WidgetsAppState When I run this and press the button I get the error (Could not find a generator for route RouteSettings("/GeneralAnnouncements", null) in the _WidgetsAppState). Can someone tell me what the problem is here? I want to use only named routes, so please don't recommend .push, as it doesn't suit me.
The problem is in this Navigator.pushNamed(context, '/GeneralAnnouncements'); line, but I don't know why.
This is my code:
class home extends StatelessWidget {
  const home({key}) : super(key: key);
  static const routeName = '/home';

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        backgroundColor: Color(0xFF6D0131),
        body: Column(
          crossAxisAlignment: CrossAxisAlignment.stretch,
          children: [
            CircleAvatar(
                radius: (100),
                child: ClipRRect(
                  borderRadius: BorderRadius.circular(110),
                  child: Image.asset('images/logo.png'),
                )),
            SizedBox(
              height: 10.0,
            ),
            GestureDetector(
              onTap: () {
                Navigator.pushNamed(context, '/GeneralAnnouncements');
              },
              child: Padding(
                padding: const EdgeInsets.fromLTRB(8, 0, 8, 0),
                child: Container(
                  height: 100.0,
                  decoration: BoxDecoration(
                    color: Color(0xFF8D0235),
                    borderRadius: BorderRadius.circular(20.0),
                  ),
                  margin: EdgeInsets.only(bottom: 10.0),
                  padding: EdgeInsets.only(left: 18.0),
                  child: Center(
                    child: Text(
                      '     General \nAnnouncements',
                      style: TextStyle(
                        color: Colors.white,
                        fontSize: 35.0,
                        fontWeight: FontWeight.bold,
                      ),
                    ),
                  ),
                ),
              ),
            ),
          ],
        ),
      ),
    );
  }
}

This is the MaterialApp:
runApp(MaterialApp(
  initialRoute: '/homepage',
  routes: {
    "/homepage": (context) => MyHomepage(),
    '/login': (context) => LoginScreen(),
    '/registration': (context) => RegistrationScreen(),
    "/GeneralAnnouncements": (context) => GeneralAnnouncements(),
    '/MyCalendar': (context) => MyCalendar(),
    "/home": (context) => home(),
  },
));

A: Remove MaterialApp from your class home. Your code currently looks like this:
class home extends StatelessWidget {
  const home({key}) : super(key: key);
  static const routeName = '/home';

  @override
  Widget build(BuildContext context) {
    return MaterialApp( // Remove this and return Scaffold directly
      home: Scaffold(
        backgroundColor: Color(0xFF6D0131),
        body: Column(
Could not find a generator for route RouteSettings("/GeneralAnnouncements", null) in the _WidgetsAppState
When i run this and press on the button it gets the error (Could not find a generator for route RouteSettings("/GeneralAnnouncements", null) in the _WidgetsAppState. So can I know what is the problem here? As i want to use only named routes so please don't recommend for me to use .push as it doesn't suit me. The problem is in this Navigator.pushNamed(context, '/GeneralAnnouncements'); line but I don't know why? This is my code: class home extends StatelessWidget { const home({key}) : super(key: key); static const routeName = '/home'; @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( backgroundColor: Color(0xFF6D0131), body: Column( crossAxisAlignment: CrossAxisAlignment.stretch, children: [ CircleAvatar( radius: (100), child: ClipRRect( borderRadius: BorderRadius.circular(110), child: Image.asset('images/logo.png'), )), SizedBox( height: 10.0, ), GestureDetector( onTap: () { Navigator.pushNamed(context, '/GeneralAnnouncements'); }, child: Padding( padding: const EdgeInsets.fromLTRB(8, 0, 8, 0), child: Container( height: 100.0, decoration: BoxDecoration( color: Color(0xFF8D0235), borderRadius: BorderRadius.circular(20.0), ), margin: EdgeInsets.only(bottom: 10.0), padding: EdgeInsets.only(left: 18.0), child: Center( child: Text( ' General \nAnnouncements', style: TextStyle( color: Colors.white, fontSize: 35.0, fontWeight: FontWeight.bold, ), ), ), ), ), ), ], ), ), ); } } This is the MaterialApp: runApp(MaterialApp( initialRoute: '/homepage', routes: { "/homepage": (context) => MyHomepage(), '/login': (context) => LoginScreen(), '/registration': (context) => RegistrationScreen(), "/GeneralAnnouncements": (context) => GeneralAnnouncements(), '/MyCalendar': (context) => MyCalendar(), "/home": (context) => home(), }, ));
[ "Remove MaterialApp from your class Home. Your code like this:\nclass home extends StatelessWidget {\n const home({key}) : super(key: key);\n static const routeName = '/home';\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp( // Remove this and return Scaffold directly\n home: Scaffold(\n backgroundColor: Color(0xFF6D0131),\n body: Column(\n\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "flutter_layout", "flutter_navigation" ]
stackoverflow_0071457041_dart_flutter_flutter_layout_flutter_navigation.txt
Q: Metrics for monitoring LDA Model We use LDA for topic modelling in production. I was wondering if there are any metrics which we could use to monitor the quality of this model, to understand when the model starts to perform poorly and we need to retrain it (for example, if we have too many new topics).
We are considering calculating the ratio of the number of words from the top topic (the topic which has the highest probability for a document) corpus that were found in the document to the total number of words (after all processing) in the document, with some threshold, but maybe someone can share their experience.
A: You can calculate its coherence value and compare it with the previous one. See Michael Roeder, Andreas Both and Alexander Hinneburg: “Exploring the space of topic coherence measures”, and if you're using gensim with Python, check its implementation at CoherenceModel.
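A sketch of the gensim route the answer points to; it assumes you already have the trained lda_model, the tokenized texts, and the dictionary used for training:
from gensim.models import CoherenceModel

# c_v coherence on the current corpus; log it on every monitoring cycle
cm = CoherenceModel(model=lda_model, texts=texts,
                    dictionary=dictionary, coherence='c_v')
score = cm.get_coherence()
print(f"coherence (c_v): {score:.4f}")  # retrain when it drifts below a baseline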
Metrics for monitoring LDA Model
We use LDA for topic modelling in production. I was wondering if there are any metrics which we could use to monitor the quality of this model, to understand when the model starts to perform poorly and we need to retrain it (for example, if we have too many new topics).
We are considering calculating the ratio of the number of words from the top topic (the topic which has the highest probability for a document) corpus that were found in the document to the total number of words (after all processing) in the document, with some threshold, but maybe someone can share their experience.
[ "You can calculate its coherence value and compare it with previous one. See Michael Roeder, Andreas Both and Alexander Hinneburg: “Exploring the space of topic coherence measures, and if you're using gensim with python, check its implementation at CoherenceModel.\n" ]
[ 0 ]
[]
[]
[ "gensim", "lda", "metrics", "mlops", "monitoring" ]
stackoverflow_0074444916_gensim_lda_metrics_mlops_monitoring.txt
Q: php display error message when input value is empty: so with if(!isset.....) on the same page I'm adding an HTML form to my website. I have an issue with the form validation. I'm trying to display an error message when the user does not write his email address and still submits the form.
The thing is: I'm trying to do it on the same page, so my code, as it is now, displays the error message as soon as the page is loaded (which is normal, because the user has not entered his email address yet).
I would like this message to display on the same page (so here: on registerForm.php) only if the user clicked on the submit btn but did not enter his email address.
Could someone help me please? I put the code below:
registerForm.php
<?php 

session_start();

//include a javascript file
echo "<script type='text/javascript' src='registerForm.js'></script>";

?>

<div class="divCenter">
    <div class="registerform-container">
        <form action="successForm.php" class="registerform-form" method="POST">
            <div>
                <label id="emailInput" for="email" class="registerform-form-label">Email</label>
                <div>
                    <input type="email" class="registerform-form-control" id="email" name="email" aria-describedby="email-subscribe" placeholder="[email protected]">
                    <div id="email-help" class="registerform-form-text">The email used when the account was created.</div>
                </div>
            </div>
            <button id="button" class="registerform-btn disabled" type="submit">Send</button>
        </form>
        <?php if(isset($_SESSION['statut'])){ ?>
        <div class="success">
            <p><?php echo htmlspecialchars($_SESSION['statut']) ?></p>
        </div>';
        <?php
        //once displayed, the session value must be destroyed
        unset($_SESSION['statut']);
        }
        ?>
    </div>
</div>

successForm.php
<?php 
//include CSS Style Sheet
echo "<link rel='stylesheet' type='text/css' href='registerForm.css' />";

//include a javascript file
echo "<script type='text/javascript' src='rfffff.js'></script>";
?>

<?php
if(!isset($_SESSION))
{
    session_start();
}
?>

<?php

if(!isset($_POST['email']))
{
  echo 'you must enter your email address';
  return;
}

try
{
    $db = new PDO('mysql:host=localhost;dbname=mariawebsite;charset=utf8', 'root', 'root');
}
catch (Exception $e)
{
    die('Error: ' . $e->getMessage());
}

$email = $_POST['email'];

// Prepare the statement
$insertEmail = $db->prepare("INSERT INTO users_addresses(email) VALUES (:email)");

// add the line below to make sure all errors are reported.
$db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

// Execute! The row is now in the database
$insertEmail->execute([
    'email' => $email,
]);

$_SESSION['statut'] = "$email submitted successfully";
header('Location: home.php');
?>
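One way to keep everything on registerForm.php is to post the form to itself and render the error inline; a minimal sketch, independent of the session-based flow above:
<?php
// registerForm.php posting to itself (action="" keeps the same page)
$error = '';
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (empty($_POST['email'])) {
        $error = 'You must enter your email address.';
    } else {
        // insert into the database here, then redirect on success
    }
}
?>
<?php if ($error !== ''): ?>
  <div class="msg_error"><p><?php echo htmlspecialchars($error); ?></p></div>
<?php endif; ?>
<form action="" method="POST">
  <input type="email" name="email" placeholder="[email protected]">
  <button type="submit">Send</button>
</form>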
php display error message when input value is empty: so with if(!isset.....) on the same page
I'm adding an HTML form to my website. I have an issue with the form validation. I'm trying to display an error message when the user does not write his email address and still submits the form.
The thing is: I'm trying to do it on the same page, so my code, as it is now, displays the error message as soon as the page is loaded (which is normal, because the user has not entered his email address yet).
I would like this message to display on the same page (so here: on registerForm.php) only if the user clicked on the submit btn but did not enter his email address.
Could someone help me please? I put the code below:
registerForm.php
<?php 

session_start();

//include a javascript file
echo "<script type='text/javascript' src='registerForm.js'></script>";

?>

<div class="divCenter">
    <div class="registerform-container">
        <form action="successForm.php" class="registerform-form" method="POST">
            <div>
                <label id="emailInput" for="email" class="registerform-form-label">Email</label>
                <div>
                    <input type="email" class="registerform-form-control" id="email" name="email" aria-describedby="email-subscribe" placeholder="[email protected]">
                    <div id="email-help" class="registerform-form-text">The email used when the account was created.</div>
                </div>
            </div>
            <button id="button" class="registerform-btn disabled" type="submit">Send</button>
        </form>
        <?php if(isset($_SESSION['statut'])){ ?>
        <div class="success">
            <p><?php echo htmlspecialchars($_SESSION['statut']) ?></p>
        </div>';
        <?php
        //once displayed, the session value must be destroyed
        unset($_SESSION['statut']);
        }
        ?>
    </div>
</div>

successForm.php
<?php 
//include CSS Style Sheet
echo "<link rel='stylesheet' type='text/css' href='registerForm.css' />";

//include a javascript file
echo "<script type='text/javascript' src='rfffff.js'></script>";
?>

<?php
if(!isset($_SESSION))
{
    session_start();
}
?>

<?php

if(!isset($_POST['email']))
{
  echo 'you must enter your email address';
  return;
}

try
{
    $db = new PDO('mysql:host=localhost;dbname=mariawebsite;charset=utf8', 'root', 'root');
}
catch (Exception $e)
{
    die('Error: ' . $e->getMessage());
}

$email = $_POST['email'];

// Prepare the statement
$insertEmail = $db->prepare("INSERT INTO users_addresses(email) VALUES (:email)");

// add the line below to make sure all errors are reported.
$db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

// Execute! The row is now in the database
$insertEmail->execute([
    'email' => $email,
]);

$_SESSION['statut'] = "$email submitted successfully";
header('Location: home.php');
?>
[]
[]
[ "Try this:\nOn successForm.php\nif(!isset($_POST['email']) || empty($_POST['email']))\n{\n $msg = 'you must enter your email address';\n header('Location: registerForm.php?msg='.$msg); // this will redirect to register form with message\n\n} else {\n //try catch and other code here\n}\n\nOn registerForm.php\nAfter and before form starts.\nAdd this:\n<?php if(isset($_GET['msg']) && $_GET['msg'] != \"\") {?>\n <div class=\"msg_error\">\n <p><?php echo $_GET['msg']?></p>\n </div>\n<?php } ?>\n\n" ]
[ -1 ]
[ "forms", "php", "submit" ]
stackoverflow_0074669718_forms_php_submit.txt
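The "validate on POST, then show the message on the same page" flow in the record above generalizes beyond PHP. As a hedged illustration only (a minimal sketch in Python/Flask rather than the asker's stack; the route name, inline template, and secret key are all made up for the demo), the POST/Redirect/GET pattern with flash messages looks like this:

```python
# Minimal sketch of same-page form validation via POST/Redirect/GET + flash.
# All names here are hypothetical; nothing is taken from the PHP code above.
from flask import Flask, flash, redirect, render_template_string, request, url_for

app = Flask(__name__)
app.secret_key = "dev-only"  # required by flash(); placeholder for the demo

PAGE = """
{% with messages = get_flashed_messages() %}
  {% for m in messages %}<p class="error">{{ m }}</p>{% endfor %}
{% endwith %}
<form method="POST"><input name="email"><button>Send</button></form>
"""

@app.route("/register", methods=["GET", "POST"])
def register():
    if request.method == "POST":
        email = request.form.get("email", "").strip()
        if not email:  # same check as empty($_POST['email'])
            flash("you must enter your email address")
            return redirect(url_for("register"))  # re-render same page, now with error
        # ... insert the email into the database here ...
        flash(f"{email} submitted successfully")
        return redirect(url_for("register"))
    return render_template_string(PAGE)
```

The error paragraph only renders after a submit with an empty email, never on the first page load, which is exactly the behavior the question asks for.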
Q: How to crop square inscribed in partial circle? I have frames of a video taken from a microscope. I need to crop them to a square inscribed in the circle but the issue is that the circle isn't whole (like in the following image). How can I do it? My idea was to use contour finding to get the center of the circle and then find the distance from each point over the whole array of coordinates to the center, take the maximum distance as the radius and find the corners of the square analytically but there must be a better way to do it (also I don't really have a formula to find the corners). A: This may not be adequate in terms of being centered at the center of the circle, but using my iterative processing, one can crop to an approximation of the largest rectangle inside your circle area. Input: import cv2 import numpy as np # read image img = cv2.imread('img.jpg') h, w = img.shape[:2] # threshold so border is black and rest is white (invert as needed). # Here I needed to specify the upper threshold at 20 as your black is not pure black. lower = (0,0,0) upper = (20,20,20) mask = cv2.inRange(img, lower, upper) mask = 255 - mask # define top and left starting coordinates and starting width and height top = 0 left = 0 bottom = h right = w # compute the mean of each side of the image and its stop test mean_top = np.mean( mask[top:top+1, left:right] ) mean_left = np.mean( mask[top:bottom, left:left+1] ) mean_bottom = np.mean( mask[bottom-1:bottom, left:right] ) mean_right = np.mean( mask[top:bottom, right-1:right] ) mean_minimum = min(mean_top, mean_left, mean_bottom, mean_right) top_test = "stop" if (mean_top == 255) else "go" left_test = "stop" if (mean_left == 255) else "go" bottom_test = "stop" if (mean_bottom == 255) else "go" right_test = "stop" if (mean_right == 255) else "go" # iterate to compute new side coordinates if mean of given side is not 255 (all white) and it is the current darkest side while top_test == "go" or left_test == "go" or right_test == "go" or bottom_test == "go": # top processing if top_test == "go": if mean_top != 255: if mean_top == mean_minimum: top += 1 mean_top = np.mean( mask[top:top+1, left:right] ) mean_left = np.mean( mask[top:bottom, left:left+1] ) mean_bottom = np.mean( mask[bottom-1:bottom, left:right] ) mean_right = np.mean( mask[top:bottom, right-1:right] ) mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom) #print("top",mean_top) continue else: top_test = "stop" # left processing if left_test == "go": if mean_left != 255: if mean_left == mean_minimum: left += 1 mean_top = np.mean( mask[top:top+1, left:right] ) mean_left = np.mean( mask[top:bottom, left:left+1] ) mean_bottom = np.mean( mask[bottom-1:bottom, left:right] ) mean_right = np.mean( mask[top:bottom, right-1:right] ) mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom) #print("left",mean_left) continue else: left_test = "stop" # bottom processing if bottom_test == "go": if mean_bottom != 255: if mean_bottom == mean_minimum: bottom -= 1 mean_top = np.mean( mask[top:top+1, left:right] ) mean_left = np.mean( mask[top:bottom, left:left+1] ) mean_bottom = np.mean( mask[bottom-1:bottom, left:right] ) mean_right = np.mean( mask[top:bottom, right-1:right] ) mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom) #print("bottom",mean_bottom) continue else: bottom_test = "stop" # right processing if right_test == "go": if mean_right != 255: if mean_right == mean_minimum: right -= 1 mean_top = np.mean( mask[top:top+1, left:right] ) mean_left = np.mean( mask[top:bottom, left:left+1] )
mean_bottom = np.mean( mask[bottom-1:bottom, left:right] ) mean_right = np.mean( mask[top:bottom, right-1:right] ) mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom) #print("right",mean_right) continue else: right_test = "stop" # crop input result = img[top:bottom, left:right] # print crop values print("top: ",top) print("bottom: ",bottom) print("left: ",left) print("right: ",right) print("height:",result.shape[0]) print("width:",result.shape[1]) # save cropped image #cv2.imwrite('border_image1_cropped.png',result) cv2.imwrite('img_cropped.png',result) cv2.imwrite('img_mask.png',mask) # show the images cv2.imshow("mask", mask) cv2.imshow("cropped", result) cv2.waitKey(0) cv2.destroyAllWindows() Result: A: Let's start with an illustration of the problem to help with the explanation. Of course, we have to begin with loading the image. Let's also grab its width and height, since they will be useful later on. img = cv2.imread('TUP74.jpg', cv2.IMREAD_COLOR) height, width = img.shape[:2] First, let's convert the image to grayscale and then apply threshold to make the circle all white, and the background black. I arbitrarily picked a threshold value of 31, which seems to give reasonable results. img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(img_gray, 31, 255, cv2.THRESH_BINARY) The result of those operations looks like this: Now, we can determine the "top" and "bottom" of the circle (first_yd and last_yd), by finding the first and last row that contains at least one white pixel. I chose to use cv2.reduce to find the maximum of each row (since the thresholded image only contains 0's and 255's, a non-zero result means there is at least 1 white pixel), followed by cv2.findNonZero to get the row numbers. reduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX) row_info = cv2.findNonZero(reduced) first_yd, last_yd = row_info[0][0][1], row_info[-1][0][1] This information allows us to determine the diameter of the circle d, its radius r (r = d/2), as well as the Y coordinate of the center of the circle center_y. diameter = last_yd - first_yd radius = int(diameter / 2) center_y = first_yd + radius Next, we need to determine the X coordinate of the center of the circle center_x. Let's take advantage of the fact that the circle is cropped on the left-hand side. The white pixels in the first column of the threshold image represent a chord c of the circle (red in the diagram). Again, we begin with finding the "top" and "bottom" of the chord (first_yc and last_yc), but since we're working with a single column, we only need cv2.findNonZero. row_info = cv2.findNonZero(thresh[:,0]) first_yc, last_yc = row_info[0][0][1], row_info[-1][0][1] c = last_yc - first_yc Now we have a nice right-angled triangle with one side adjacent to the right angle being half of the chord c (red in the diagram), the other adjacent side being the unknown offset o, and the hypotenuse (green in the diagram) being the radius of the circle r. Let's apply Pythagoras' theorem: r² = (c/2)² + o², so o² = r² - (c/2)², i.e. o = sqrt(r² - (c/2)²). And in Python: center_x = int(math.sqrt(radius**2 - (c/2)**2)) Now we're ready to determine the parameters of the inscribed square. Let's keep in mind that the center of the circle and center of its inscribed square are co-located. Here is another illustration: We will again use Pythagoras' theorem. The hypotenuse of the right triangle is again the radius r.
Both of the sides adjacent to the right angle are of equal length, which is half the length of the side of the inscribed square s: r² = (s/2)² + (s/2)², so r² = 2 × (s/2)² = 2 × s²/2² = s²/2, hence s² = 2 × r² and s = sqrt(2) × r. And in Python: s = int(math.sqrt(2) * radius) Finally, we can determine the top-left and bottom-right corners of the inscribed square. Both of those points are offset by s/2 from the common center. half_s = int(s/2) tl = (center_x - half_s, center_y - half_s) br = (center_x + half_s, center_y + half_s) We have determined all the parameters we need. Let's print them out... Circle diameter = 1167 pixels Circle radius = 583 pixels Circle center = (404,1089) Inscribed square side = 824 pixels Inscribed square top-left = (-8,677) Inscribed square bottom-right = (816,1501) and visualize the center (green), the detected circle (red) and the inscribed square (blue) on a copy of the input image: Now we can do the cropping, but first we have to make sure we don't go out of bounds of the source image. crop_left = max(tl[0], 0) crop_top = max(tl[1], 0) # Kinda redundant, but why not crop_right = min(br[0], width) crop_bottom = min(br[1], height) # ditto cropped = img[crop_top:crop_bottom, crop_left:crop_right] And that's it. Here's the cropped image (it's rectangular, since a small part of the inscribed square falls outside the source image, and scaled down for embedding -- click to get the full-sized image): Complete Script import cv2 import numpy as np import math img = cv2.imread('TUP74.jpg', cv2.IMREAD_COLOR) height, width = img.shape[:2] img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(img_gray, 31, 255, cv2.THRESH_BINARY) # Find top/bottom of the circle, to determine radius and center reduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX) row_info = cv2.findNonZero(reduced) first_yd, last_yd = row_info[0][0][1], row_info[-1][0][1] diameter = last_yd - first_yd radius = int(diameter / 2) center_y = first_yd + radius # Repeat again, just on first column, to find length of a chord of the circle row_info = cv2.findNonZero(thresh[:,0]) first_yc, last_yc = row_info[0][0][1], row_info[-1][0][1] c = last_yc - first_yc # Apply Pythagoras theorem to find the X offset of the center from the chord # Since the chord is in row 0, this is also the X coordinate center_x = int(math.sqrt(radius**2 - (c/2)**2)) # Find length of the side of the inscribed square (Pythagoras again) s = int(math.sqrt(2) * radius) # Now find the top-left and bottom-right corners of the square half_s = int(s/2) tl = (center_x - half_s, center_y - half_s) br = (center_x + half_s, center_y + half_s) # Let's print out what we found print("Circle diameter = %d pixels" % diameter) print("Circle radius = %d pixels" % radius) print("Circle center = (%d,%d)" % (center_x, center_y)) print("Inscribed square side = %d pixels" % s) print("Inscribed square top-left = (%d,%d)" % tl) print("Inscribed square bottom-right = (%d,%d)" % br) # And visualize it...
vis = img.copy() cv2.line(vis, (center_x-5,center_y), (center_x+5,center_y), (0,255,0), 3) cv2.line(vis, (center_x,center_y-5), (center_x,center_y+5), (0,255,0), 3) cv2.circle(vis, (center_x,center_y), radius, (0,0,255), 3) cv2.rectangle(vis, tl, br, (255,0,0), 3) # Write some illustration images cv2.imwrite('circ_thresh.png', thresh) cv2.imwrite('circ_vis.png', vis) # Time to do some cropping, but we need to make sure the coordinates are inside the bounds of the image crop_left = max(tl[0], 0) crop_top = max(tl[1], 0) # Kinda redundant, but why not crop_right = min(br[0], width) crop_bottom = min(br[1], height) # ditto cropped = img[crop_top:crop_bottom, crop_left:crop_right] cv2.imwrite('circ_cropped.png', cropped) NB: The main focus of this was the explanation of the algorithm. I've been kinda blunt on rounding the values, and there may be some off-by-one errors. For the sake of brevity, error checking is minimal. It's left as an exercise to the reader to address those issues as necessary. Furthermore, the assumption is that the left-hand side of the circle is cropped as in the sample image. It should be fairly trivial to extend this to handle other possible scenarios, using the techniques I've demonstrated. A: Building on Dan Mašek's answer, here is an alternate method of computing the center and radius in Python/OpenCV/Numpy, in particular, the x-coordinate of the center. The idea is simply to find the coordinate of the column that has the largest non-zero count in the thresholded image. Input: import cv2 import numpy as np import math img = cv2.imread('img_circle.jpg') height, width = img.shape[:2] gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 31, 255, cv2.THRESH_BINARY)[1] # Find top/bottom of the circle, to determine radius and y coordinate center reduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX) row_info = cv2.findNonZero(reduced) first_yd, last_yd = row_info[0][0][1], row_info[-1][0][1] diameter = last_yd - first_yd radius = int(diameter / 2) center_y = first_yd + radius # count non-zero pixels in columns to find the column with the largest count # that will give us the x coordinate center col_counts = np.count_nonzero(thresh, axis=0) max_counts = np.amax(col_counts) # find index (x-coordinate) where col_counts=max_counts max_coords = np.argwhere(col_counts==max_counts) # get number of max values in case more than one num_max = len(max_coords) # compute center_x center_x = max_coords[0][0] + num_max//2 print("radius:", radius, "center_x:", center_x, "center_y:", center_y) print('') Result: radius: 583 center_x: 388 center_y: 1089 The rest is the same as in Dan Mašek's answer. A: Find the edge points of the image circle, and then fit a circle to the edge. Or, you may be able to use minEnclosingCircle() instead of circle fitting. (I omit the explanation of the subsequent steps for obtaining a square.)
How to crop square inscribed in partial circle?
I have frames of a video taken from a microscope. I need to crop them to a square inscribed in the circle but the issue is that the circle isn't whole (like in the following image). How can I do it? My idea was to use contour finding to get the center of the circle and then find the distance from each point over the whole array of coordinates to the center, take the maximum distance as the radius and find the corners of the square analytically but there must be a better way to do it (also I don't really have a formula to find the corners).
[ "This may not be adequate in terms of centered at center of circle, but using my iterative processing, one can crop to an approximation of the largest rectangle inside your circle area.\nInput:\n\nimport cv2\nimport numpy as np\n\n# read image\nimg = cv2.imread('img.jpg')\nh, w = img.shape[:2]\n\n# threshold so border is black and rest is white (invert as needed). \n# Here I needed to specify the upper threshold at 20 as your black is not pure black.\n\nlower = (0,0,0)\nupper = (20,20,20)\nmask = cv2.inRange(img, lower, upper)\nmask = 255 - mask\n\n# define top and left starting coordinates and starting width and height\ntop = 0\nleft = 0\nbottom = h\nright = w\n\n# compute the mean of each side of the image and its stop test\nmean_top = np.mean( mask[top:top+1, left:right] )\nmean_left = np.mean( mask[top:bottom, left:left+1] )\nmean_bottom = np.mean( mask[bottom-1:bottom, left:right] )\nmean_right = np.mean( mask[top:bottom, right-1:right] )\n\nmean_minimum = min(mean_top, mean_left, mean_bottom, mean_right)\n\ntop_test = \"stop\" if (mean_top == 255) else \"go\"\nleft_test = \"stop\" if (mean_left == 255) else \"go\"\nbottom_test = \"stop\" if (mean_bottom == 255) else \"go\"\nright_test = \"stop\" if (mean_right == 255) else \"go\"\n\n# iterate to compute new side coordinates if mean of given side is not 255 (all white) and it is the current darkest side\nwhile top_test == \"go\" or left_test == \"go\" or right_test == \"go\" or bottom_test == \"go\":\n\n # top processing\n if top_test == \"go\":\n if mean_top != 255:\n if mean_top == mean_minimum:\n top += 1\n mean_top = np.mean( mask[top:top+1, left:right] )\n mean_left = np.mean( mask[top:bottom, left:left+1] )\n mean_bottom = np.mean( mask[bottom-1:bottom, left:right] )\n mean_right = np.mean( mask[top:bottom, right-1:right] )\n mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom)\n #print(\"top\",mean_top)\n continue\n else:\n top_test = \"stop\" \n\n # left processing\n if left_test == \"go\":\n if mean_left != 255:\n if mean_left == mean_minimum:\n left += 1\n mean_top = np.mean( mask[top:top+1, left:right] )\n mean_left = np.mean( mask[top:bottom, left:left+1] )\n mean_bottom = np.mean( mask[bottom-1:bottom, left:right] )\n mean_right = np.mean( mask[top:bottom, right-1:right] )\n mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom)\n #print(\"left\",mean_left)\n continue\n else:\n left_test = \"stop\" \n\n # bottom processing\n if bottom_test == \"go\":\n if mean_bottom != 255:\n if mean_bottom == mean_minimum:\n bottom -= 1\n mean_top = np.mean( mask[top:top+1, left:right] )\n mean_left = np.mean( mask[top:bottom, left:left+1] )\n mean_bottom = np.mean( mask[bottom-1:bottom, left:right] )\n mean_right = np.mean( mask[top:bottom, right-1:right] )\n mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom)\n #print(\"bottom\",mean_bottom)\n continue\n else:\n bottom_test = \"stop\" \n\n # right processing\n if right_test == \"go\":\n if mean_right != 255:\n if mean_right == mean_minimum:\n right -= 1\n mean_top = np.mean( mask[top:top+1, left:right] )\n mean_left = np.mean( mask[top:bottom, left:left+1] )\n mean_bottom = np.mean( mask[bottom-1:bottom, left:right] )\n mean_right = np.mean( mask[top:bottom, right-1:right] )\n mean_minimum = min(mean_top, mean_left, mean_right, mean_bottom)\n #print(\"right\",mean_right)\n continue\n else:\n right_test = \"stop\" \n\n\n# crop input\nresult = img[top:bottom, left:right]\n\n# print crop values \nprint(\"top: \",top)\nprint(\"bottom: 
\",bottom)\nprint(\"left: \",left)\nprint(\"right: \",right)\nprint(\"height:\",result.shape[0])\nprint(\"width:\",result.shape[1])\n\n# save cropped image\n#cv2.imwrite('border_image1_cropped.png',result)\ncv2.imwrite('img_cropped.png',result)\ncv2.imwrite('img_mask.png',mask)\n\n# show the images\ncv2.imshow(\"mask\", mask)\ncv2.imshow(\"cropped\", result)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\nResult:\n\n", "Let's start with an illustration of the problem to help with the explanation.\n\n\nOf course, we have to begin with loading the image. Let's also grab its width and height, since they will be useful later on.\nimg = cv2.imread('TUP74.jpg', cv2.IMREAD_COLOR)\nheight, width = img.shape[:2]\n\n\nFirst, let's convert the image to grayscale and then apply threshold to make the circle all white, and the background black. I arbitrarily picked a threshold value of 31, which seems to give reasonable results.\nimg_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n_, thresh = cv2.threshold(img_gray, 31, 255, cv2.THRESH_BINARY)\n\nThe result of those operations looks like this:\n\n\nNow, we can determine the \"top\" and \"bottom\" of the circle (first_yd and last_yd), by finding the first and last row that contains at least one white pixel. I chose to use cv2.reduce to find the maximum of each row (since the thresholded image only contains 0's and 255's, a non-zero result means there is at least 1 white pixel), followed by cv2.findNonZero to get the row numbers.\nreduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX)\nrow_info = cv2.findNonZero(reduced)\nfirst_yd, last_yd = row_info[0][0][1], row_info[-1][0][1]\n\nThis information allows us to determine the diameter of the circle d, its radius r (r = d/2), as well as the Y coordinate of the center of the circle center_y.\ndiameter = last_yd - first_yd\nradius = int(diameter / 2)\ncenter_y = first_yd + radius\n\n\nNext, we need to determine the X coordinate of the center of the circle center_x.\nLet's take advantage of the fact that the circle is cropped on the left-hand side. The white pixels in the first column of the threshold image represent a chord c of the circle (red in the diagram).\nAgain, we begin with finding the \"top\" and \"bottom\" of the chord (first_yc and last_yc), but since we're working with a single column, we only need cv2.findNonZero.\nrow_info = cv2.findNonZero(thresh[:,0])\nfirst_yc, last_yc = row_info[0][0][1], row_info[-1][0][1]\n\nc = last_yc - first_yc\n\nNow we have a nice right-angled triangle with one side adjacent to the right angle being half of the chord c (red in the diagram), the other adjacent side being the unknown offset o, and the hypotenuse (green in the diagram) being the radius of the circle r. Let's apply Pythagoras' theorem:\nr2 = (c/2)2 + o2\no2 = r2 - (c/2)2\no = sqrt(r2 - (c/2)2)\nAnd in Python:\ncenter_x = int(math.sqrt(radius**2 - (c/2)**2))\n\n\nNow we're ready to determine the parameters of the inscribed square. Let's keep in mind that the center of the circle and center of its inscribed square are co-located. Here is another illustration:\n\nWe will again use Pythagoras' theorem. The hypotenuse of the right triangle is again the radius r. Both of the sides adjacent to the right angle are of equal length, which is half the length of the side of inscribed square s.\nr2 = (s/2)2 + (s/2)2\nr2 = 2 × (s/2)2\nr2 = 2 × s2/22\nr2 = s2/2\ns2 = 2 × r2\ns = sqrt(2) × r\n\nAnd in Python:\ns = int(math.sqrt(2) * radius)\n\nFinally, we can determine the top-left and bottom-right corners of the inscribed square. 
Both of those points are offset by s/2 from the common center.\nhalf_s = int(s/2)\ntl = (center_x - half_s, center_y - half_s)\nbr = (center_x + half_s, center_y + half_s)\n\n\nWe have determined all the parameters we need. Let's print them out...\nCircle diameter = 1167 pixels\nCircle radius = 583 pixels\nCircle center = (404,1089)\nInscribed square side = 824 pixels\nInscribed square top-left = (-8,677)\nInscribed square bottom-right = (816,1501)\n\nand visualize the center (green), the detected circle (red) and the inscribed square (blue) on a copy of the input image:\n\n\nNow we can do the cropping, but first we have to make sure we don't go out of bounds of the source image.\ncrop_left = max(tl[0], 0)\ncrop_top = max(tl[1], 0) # Kinda redundant, but why not\ncrop_right = min(br[0], width)\ncrop_bottom = min(br[1], height) # ditto\n\ncropped = img[crop_top:crop_bottom, crop_left:crop_right]\n\nAnd that's it. Here's the cropped image (it's rectangular, since small part of the inscribed square falls outside the source image, and scaled down for embedding -- click to get the full-sized image):\n\n\nComplete Script\nimport cv2\nimport numpy as np\nimport math\n\nimg = cv2.imread('TUP74.jpg', cv2.IMREAD_COLOR)\nheight, width = img.shape[:2]\n\nimg_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n_, thresh = cv2.threshold(img_gray, 31, 255, cv2.THRESH_BINARY)\n\n# Find top/bottom of the circle, to determine radius and center\nreduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX)\nrow_info = cv2.findNonZero(reduced)\nfirst_yd, last_yd = row_info[0][0][1], row_info[-1][0][1]\n\ndiameter = last_yd - first_yd\nradius = int(diameter / 2)\ncenter_y = first_yd + radius\n\n# Repeat again, just on first column, to find length of a chord of the circle\nrow_info = cv2.findNonZero(thresh[:,0])\nfirst_yc, last_yc = row_info[0][0][1], row_info[-1][0][1]\n\nc = last_yc - first_yc\n\n# Apply Pythagoras theorem to find the X offset of the center from the chord\n# Since the chord is in row 0, this is also the X coordinate\ncenter_x = int(math.sqrt(radius**2 - (c/2)**2))\n\n# Find length of the side of the inscribed square (Pythagoras again)\ns = int(math.sqrt(2) * radius)\n\n# Now find the top-left and bottom-right corners of the square\nhalf_s = int(s/2)\ntl = (center_x - half_s, center_y - half_s)\nbr = (center_x + half_s, center_y + half_s)\n\n# Let's print out what we found\nprint(\"Circle diameter = %d pixels\" % diameter)\nprint(\"Circle radius = %d pixels\" % radius)\nprint(\"Circle center = (%d,%d)\" % (center_x, center_y))\nprint(\"Inscribed square side = %d pixels\" % s)\nprint(\"Inscribed square top-left = (%d,%d)\" % tl)\nprint(\"Inscribed square bottom-right = (%d,%d)\" % br)\n\n# And visualize it...\nvis = img.copy()\ncv2.line(vis, (center_x-5,center_y), (center_x+5,center_y), (0,255,0), 3)\ncv2.line(vis, (center_x,center_y-5), (center_x,center_y+5), (0,255,0), 3)\ncv2.circle(vis, (center_x,center_y), radius, (0,0,255), 3)\ncv2.rectangle(vis, tl, br, (255,0,0), 3)\n\n# Write some illustration images\ncv2.imwrite('circ_thresh.png', thresh)\ncv2.imwrite('circ_vis.png', vis)\n\n# Time to do some cropping, but we need to make sure the coordinates are inside the bounds of the image\ncrop_left = max(tl[0], 0)\ncrop_top = max(tl[1], 0) # Kinda redundant, but why not\ncrop_right = min(br[0], width)\ncrop_bottom = min(br[1], height) # ditto\n\ncropped = img[crop_top:crop_bottom, crop_left:crop_right]\ncv2.imwrite('circ_cropped.png', cropped)\n\n\nNB: The main focus of this was the explanation of the algorithm. 
I've been kinda blunt on rounding the values, and there may be some off-by-one errors. For the sake of brevity, error checking is minimal. It's left as an excercise to the reader to address those issues as necessary.\nFurthermore, the assumption is that the left-hand side of the circle is cropped as in the sample image. It should be fairly trivial to extend this to handle other possible scenarios, using the techniques I've demonstrated.\n", "Building on Dan Mašek's answer, here is an alternate method of computing the center and radius in Python/OpenCV/Numpy, in particular, the x-coordinate of the center.\nThe idea is simply find the coordinate of column that has the largest non-zero count in the thresholded image.\nInput:\n\nimport cv2\nimport numpy as np\nimport math\n\nimg = cv2.imread('img_circle.jpg')\nheight, width = img.shape[:2]\n\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\nthresh = cv2.threshold(gray, 31, 255, cv2.THRESH_BINARY)[1]\n\n# Find top/bottom of the circle, to determine radius and y coordinate center\nreduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX)\nrow_info = cv2.findNonZero(reduced)\nfirst_yd, last_yd = row_info[0][0][1], row_info[-1][0][1]\n\ndiameter = last_yd - first_yd\nradius = int(diameter / 2)\ncenter_y = first_yd + radius\n\n# count non-zero pixels in columns to find the column with the largest count\n# that will give us the x coordinate center\ncol_counts = np.count_nonzero(thresh, axis=0)\nmax_counts = np.amax(col_counts)\n\n# find index (x-coordinate) where col_counts=max_counts\nmax_coords = np.argwhere(col_counts==max_counts)\n\n# get number of max values in case more than one\nnum_max = len(max_coords)\n\n# compute center_y\ncenter_x = max_coords[0][0] + num_max//2\n\nprint(\"radius:\", radius, \"center_x:\", center_x, \"center_y:\", center_y)\nprint('')\n\nResult:\nradius: 583 center_x: 388 center_y: 1089\n\nThe rest is the same as in Dan Mašek's answer.\n", "Find the edge points of the image circle, and then fit a circle to the edge.\nOr, you may be able to use minEnclosingCircle() instead of circle fitting.\n(I omit the explanation of the subsequent steps for obtaining a square.)\n" ]
[ 6, 4, 1, 0 ]
[]
[]
[ "image_processing", "opencv", "python" ]
stackoverflow_0074645811_image_processing_opencv_python.txt
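The last answer in the record above stops short of code. A minimal sketch of that minEnclosingCircle() route, reusing the sqrt(2) × r inscribed-square math from the accepted answer, could look like the following; the file names and the threshold value 31 are assumptions carried over from the answers, and the fit is only approximate when a large arc of the circle is cut off:

```python
# Sketch only: fit an enclosing circle to the largest blob, then crop the
# inscribed square. Assumes OpenCV 4.x (findContours returns two values).
import math
import cv2

img = cv2.imread("frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 31, 255, cv2.THRESH_BINARY)

# the largest external contour should be the (partial) circle
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
pts = max(contours, key=cv2.contourArea)

(cx, cy), r = cv2.minEnclosingCircle(pts)

half_s = int(math.sqrt(2) * r / 2)            # half the inscribed-square side
x0, y0 = int(cx) - half_s, int(cy) - half_s
x1, y1 = int(cx) + half_s, int(cy) + half_s

h, w = img.shape[:2]
crop = img[max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)]  # clamp to image bounds
cv2.imwrite("frame_cropped.png", crop)
```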
Q: Can a valid GeoJSON Polygon represent an island in a lake in an island? I was a bit puzzled to find the GeoJSON { "type": "Polygon", "coordinates": [ [ [-0.8, -0.8], [ 0.8, -0.8], [ 0.8, 0.8], [-0.8, 0.8], [-0.8, -0.8] ], [ [-0.6, -0.6], [-0.6, 0.6], [ 0.6, 0.6], [ 0.6, -0.6], [-0.6, -0.6] ], [ [-0.4, -0.4], [ 0.4, -0.4], [ 0.4, 0.4], [-0.4, 0.4], [-0.4, -0.4] ] ] } an "island in a lake in an island" if you will, rejected by one online validator, but accepted by another. Looking again at RFC 7946: For Polygons with more than one of these rings, the first MUST be the exterior ring, and any others MUST be interior rings. The exterior ring bounds the surface, and the interior rings (if present) bound holes within the surface. It looks to me that the first validator was correct: the final ring does not bound a hole in the surface defined by the first ring, so it is invalid. So am I right in thinking that a valid GeoJSON Polygon cannot represent an island in a lake in an island? (So one would need to use a MultiPolygon to represent it?) A: Correct: the inner rings of a Polygon are holes in the exterior ring, so you cannot have an island in a lake, because an inner ring that is not a hole is invalid. A valid representation of an island would be a MultiPolygon where the first polygon is the larger polygon with a hole that forms the lake and the second polygon is the smaller polygon inside the first, which is the island. Here is the GeoJSON: { "type": "Feature", "geometry": { "type": "MultiPolygon", "coordinates": [ [ [ [-0.8, -0.8], [ 0.8, -0.8], [ 0.8, 0.8], [-0.8, 0.8], [-0.8, -0.8] ], [ [-0.6, -0.6], [-0.6, 0.6], [ 0.6, 0.6], [ 0.6, -0.6], [-0.6, -0.6] ] ], [ [ [-0.4, -0.4], [ 0.4, -0.4], [ 0.4, 0.4], [-0.4, 0.4], [-0.4, -0.4] ] ] ] } }
Can a valid GeoJSON Polygon represent an island in a lake in an island?
I was a bit puzzled to find the GeoJSON { "type": "Polygon", "coordinates": [ [ [-0.8, -0.8], [ 0.8, -0.8], [ 0.8, 0.8], [-0.8, 0.8], [-0.8, -0.8] ], [ [-0.6, -0.6], [-0.6, 0.6], [ 0.6, 0.6], [ 0.6, -0.6], [-0.6, -0.6] ], [ [-0.4, -0.4], [ 0.4, -0.4], [ 0.4, 0.4], [-0.4, 0.4], [-0.4, -0.4] ] ] } an "island in a lake in an island" if you will, rejected by one online validator, but accepted by another. Looking again at RFC 7946: For Polygons with more than one of these rings, the first MUST be the exterior ring, and any others MUST be interior rings. The exterior ring bounds the surface, and the interior rings (if present) bound holes within the surface. It looks to me that the first validator was correct: the final ring does not bound a hole in the surface defined by the first ring, so it is invalid. So am I right in thinking that a valid GeoJSON Polygon cannot represent an island in a lake in an island? (So one would need to use a MultiPolygon to represent it?)
[ "Correct, the inner rings are holes in the exterior ring so cannot have an island in a lake where an inner ring is not a hole.\n\nA valid representation of an island would be a MultiPolygon where the first polygon is the larger polygon with a hole that forms the lake and the second polygon is the smaller polygon inside the first which is the island.\nHere is the GeoJSON:\n{\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"MultiPolygon\",\n \"coordinates\": [\n [\n [\n [-0.8, -0.8],\n [ 0.8, -0.8],\n [ 0.8, 0.8],\n [-0.8, 0.8],\n [-0.8, -0.8]\n ],\n [\n [-0.6, -0.6],\n [-0.6, 0.6],\n [ 0.6, 0.6],\n [ 0.6, -0.6],\n [-0.6, -0.6]\n ] \n ], [\n [\n [-0.4, -0.4],\n [ 0.4, -0.4],\n [ 0.4, 0.4],\n [-0.4, 0.4],\n [-0.4, -0.4]\n ]\n ]\n ]\n }\n} \n\n" ]
[ 1 ]
[]
[]
[ "geojson" ]
stackoverflow_0074660981_geojson.txt
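As a quick cross-check of the accepted answer above (using Shapely, which the question never mentions, so treat the tooling choice as an assumption), the island-in-a-lake Polygon is reported invalid while the equivalent MultiPolygon is valid:

```python
# Sketch: validate both geometries from the record above with Shapely/GEOS.
from shapely.geometry import shape
from shapely.validation import explain_validity

outer = [[-0.8, -0.8], [0.8, -0.8], [0.8, 0.8], [-0.8, 0.8], [-0.8, -0.8]]
lake = [[-0.6, -0.6], [-0.6, 0.6], [0.6, 0.6], [0.6, -0.6], [-0.6, -0.6]]
island = [[-0.4, -0.4], [0.4, -0.4], [0.4, 0.4], [-0.4, 0.4], [-0.4, -0.4]]

bad = shape({"type": "Polygon", "coordinates": [outer, lake, island]})
print(bad.is_valid)              # False: GEOS flags the ring nested inside a hole
print(explain_validity(bad))     # human-readable reason for the invalidity

good = shape({"type": "MultiPolygon",
              "coordinates": [[outer, lake], [island]]})
print(good.is_valid)             # True
```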
Q: android ndk gdb loaded sharedlibraries missing *.oat Both gdb 7.7 and gbd 7.11 missed some shared libraries when debugging my device (oppo r7s). I've pulled all libraries to local. Here is a complete list of libraries shown by info shared (gdb) info shared From To Syms Read Shared Object Library 0x40000980 0x40009640 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\linker 0x401c7940 0x401ce6e8 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libutils.so No libstdc++.so No libm.so 0x4013bbb0 0x4017329c Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so No libbinder.so No liblog.so No libhardware.so No libcutils.so No libc++.so No libLLVM.so No libbcinfo.so No libunwind.so No libz.so No libpng.so No libpowermanager.so No libcommon_time_client.so No libstlport.so No libui.so No libsync.so No libgui.so No libft2.so No libbcc.so No libGLESv2.so No libGLESv1_CM.so No libEGL.so No libunwind-ptrace.so No libgccdemangle.so No libcrypto.so No libicuuc.so No libicui18n.so No libjpeg.so No libexpat.so No libpcre.so No libharfbuzz_ng.so No libstagefright_foundation.so No libsonivox.so No libnbaio.so No libcamera_client.so No libaudioutils.so No libaudioparameter.so No libinput.so No libhardware_legacy.so No libcamera_metadata.so No libgabi++.so No libskia.so No libRScpp.so No libRS.so No libwpa_client.so No libnetutils.so No libspeexresampler.so 0x402635b0 0x402724a4 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroidfw.so No libGLES_trace.so No libbacktrace.so No libusbhost.so No libssl.so No libsqlite.so No libsoundtrigger.so No libselinux.so No libprocessgroup.so No libpdfium.so No libnetd_client.so No libnativehelper.so No libnativebridge.so No libminikin.so No libmemtrack.so No libmedia.so No libinputflinger.so No libimg_utils.so No libhwui.so No libassert_tip_service.so No libETC1.so 0x4006d230 0x400ca9dc Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid_runtime.so No libNimsWrap.so No libsigchain.so No libvendorconn.so No libbacktrace_libc++.so 0x41d4baa0 0x41f9ee24 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libart.so No libjavacore.so No memtrack.msm8916.so No libqti-perfd-client.so No libtinyxml.so No libqservice.so No libmm-abl-oem.so No libdiag.so No libmm-abl.so No libprotecteyes.so No libgsl.so No libadreno_utils.so No libEGL_adreno.so No libGLESv1_CM_adreno.so No libGLESv2_adreno.so 0x68246388 0x68249184 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid.so No libcompiler_rt.so No libjnigraphics.so No libvorbisidec.so No libstagefright_yuv.so No libstagefright_omx.so No libstagefright_enc_common.so No libstagefright_avc_common.so No libopus.so No libdrmframework.so No libstagefright_amrnb_common.so No libstagefright.so No libmtp.so No libjhead.so No libexif.so No libmedia_jni.so No libjavacrypto.so No libsoundpool.so No libaudioeffect_jni.so No librs_jni.so No libthwsplit.so No libwebviewchromium_loader.so No eglsubAndroid.so No libsc-a3xx.so No libqdutils.so No libqdMetaData.so No libmemalloc.so No gralloc.msm8916.so No libfmodex.so No libfmodevent.so No libstagefright_http_support.so No libeffects.so No libwilhelm.so No libOpenSLES.so 0x7f9ceb40 0x8102f72c Yes Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libclient.so No libwebviewchromium.so No libwebviewchromium_plat_support.so (*): Shared library is missing 
debugging information. But oat files are loaded when debugging other devices like Huawei (FRD-AL00). Following is an excerpt of the output of show shared on such a device. 0x71867000 0x71cc76d6 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0x721dc000 0x725657c4 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0x725dc000 0x7262d9cc Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0x726c3000 0x727291ea Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0xea0de584 0xea0e5714 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid.so 0xe1b15da0 0xe1cdc3ec Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libart-compiler.so No /system/lib/libvixl.so 0xc2b0ab40 0xc416b72c Yes Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libclient.so No /data/dalvik-cache/arm/system@app@[email protected]@classes.dex And without the libraries being loaded, gdb cannot unwind the stack correctly, the backtrace in oppo: (gdb) bt #0 0x40168698 in __epoll_pwait () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so #1 0x4013f746 in epoll_pwait () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so #2 0x4013f754 in epoll_wait () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so #3 0x401cdf56 in android::Looper::pollInner(int) () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libutils.so #4 0x401ce180 in android::Looper::pollOnce(int, int*, int*, void**) () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libutils.so #5 0x4009c7dc in android::NativeMessageQueue::pollOnce(_JNIEnv*, int) () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid_runtime.so #6 0x72403cdc in ?? () The last address is in the system@[email protected], which will also work correctly when the oat file is loaded. Can anyone give some advice? A: It looks like gdb is not able to find and load some of the shared libraries that are required for your program. To fix this, you can try adding the directories where the shared libraries are located to gdb's search path using the set solib-search-path command. For example: (gdb) set solib-search-path Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a This will tell gdb to search in the specified directory for shared libraries when it is trying to load them. You can also specify multiple directories by separating them with a colon (:) on Linux or a semicolon (;) on Windows. Alternatively, you can try using the sharedlibrary command in gdb to manually load the shared libraries that are missing. For example: (gdb) sharedlibrary libutils.so This will load the libutils.so library and make it available for gdb to use. You can use this command to load any other missing libraries as well. Once you have added the directories to the search path or manually loaded the missing libraries, you should be able to use gdb to debug your program properly. A: It looks like you are having trouble getting gdb to properly load and display the shared libraries in your Android application. 
There are a few possible solutions to this issue: Make sure that gdb is properly configured to use the correct shared library search paths. You can specify these paths using the set solib-search-path command in gdb, followed by the path to your shared libraries. For example: set solib-search-path Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a Use the info shared command in gdb to see if all of the required shared libraries are actually being loaded by the debugger. If any are missing, you can use the sharedlibrary command to manually load them. For example: sharedlibrary libm.so If gdb still isn't able to properly load and display the shared libraries, you may need to try using a different version of gdb. Some versions of gdb are known to have issues with Android applications, so switching to a newer or older version may help.
android ndk gdb loaded sharedlibraries missing *.oat
Both gdb 7.7 and gbd 7.11 missed some shared libraries when debugging my device (oppo r7s). I've pulled all libraries to local. Here is a complete list of libraries shown by info shared (gdb) info shared From To Syms Read Shared Object Library 0x40000980 0x40009640 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\linker 0x401c7940 0x401ce6e8 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libutils.so No libstdc++.so No libm.so 0x4013bbb0 0x4017329c Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so No libbinder.so No liblog.so No libhardware.so No libcutils.so No libc++.so No libLLVM.so No libbcinfo.so No libunwind.so No libz.so No libpng.so No libpowermanager.so No libcommon_time_client.so No libstlport.so No libui.so No libsync.so No libgui.so No libft2.so No libbcc.so No libGLESv2.so No libGLESv1_CM.so No libEGL.so No libunwind-ptrace.so No libgccdemangle.so No libcrypto.so No libicuuc.so No libicui18n.so No libjpeg.so No libexpat.so No libpcre.so No libharfbuzz_ng.so No libstagefright_foundation.so No libsonivox.so No libnbaio.so No libcamera_client.so No libaudioutils.so No libaudioparameter.so No libinput.so No libhardware_legacy.so No libcamera_metadata.so No libgabi++.so No libskia.so No libRScpp.so No libRS.so No libwpa_client.so No libnetutils.so No libspeexresampler.so 0x402635b0 0x402724a4 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroidfw.so No libGLES_trace.so No libbacktrace.so No libusbhost.so No libssl.so No libsqlite.so No libsoundtrigger.so No libselinux.so No libprocessgroup.so No libpdfium.so No libnetd_client.so No libnativehelper.so No libnativebridge.so No libminikin.so No libmemtrack.so No libmedia.so No libinputflinger.so No libimg_utils.so No libhwui.so No libassert_tip_service.so No libETC1.so 0x4006d230 0x400ca9dc Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid_runtime.so No libNimsWrap.so No libsigchain.so No libvendorconn.so No libbacktrace_libc++.so 0x41d4baa0 0x41f9ee24 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libart.so No libjavacore.so No memtrack.msm8916.so No libqti-perfd-client.so No libtinyxml.so No libqservice.so No libmm-abl-oem.so No libdiag.so No libmm-abl.so No libprotecteyes.so No libgsl.so No libadreno_utils.so No libEGL_adreno.so No libGLESv1_CM_adreno.so No libGLESv2_adreno.so 0x68246388 0x68249184 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid.so No libcompiler_rt.so No libjnigraphics.so No libvorbisidec.so No libstagefright_yuv.so No libstagefright_omx.so No libstagefright_enc_common.so No libstagefright_avc_common.so No libopus.so No libdrmframework.so No libstagefright_amrnb_common.so No libstagefright.so No libmtp.so No libjhead.so No libexif.so No libmedia_jni.so No libjavacrypto.so No libsoundpool.so No libaudioeffect_jni.so No librs_jni.so No libthwsplit.so No libwebviewchromium_loader.so No eglsubAndroid.so No libsc-a3xx.so No libqdutils.so No libqdMetaData.so No libmemalloc.so No gralloc.msm8916.so No libfmodex.so No libfmodevent.so No libstagefright_http_support.so No libeffects.so No libwilhelm.so No libOpenSLES.so 0x7f9ceb40 0x8102f72c Yes Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libclient.so No libwebviewchromium.so No libwebviewchromium_plat_support.so (*): Shared library is missing debugging information. 
But oat files are loaded when debugging other devices like Huawei (FRD-AL00). Following is an excerpt of the output of show shared on such a device. 0x71867000 0x71cc76d6 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0x721dc000 0x725657c4 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0x725dc000 0x7262d9cc Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0x726c3000 0x727291ea Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\system@[email protected] 0xea0de584 0xea0e5714 Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid.so 0xe1b15da0 0xe1cdc3ec Yes (*) Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libart-compiler.so No /system/lib/libvixl.so 0xc2b0ab40 0xc416b72c Yes Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libclient.so No /data/dalvik-cache/arm/system@app@[email protected]@classes.dex And without the libraries being loaded, gdb cannot unwind the stack correctly, the backtrace in oppo: (gdb) bt #0 0x40168698 in __epoll_pwait () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so #1 0x4013f746 in epoll_pwait () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so #2 0x4013f754 in epoll_wait () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libc.so #3 0x401cdf56 in android::Looper::pollInner(int) () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libutils.so #4 0x401ce180 in android::Looper::pollOnce(int, int*, int*, void**) () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libutils.so #5 0x4009c7dc in android::NativeMessageQueue::pollOnce(_JNIEnv*, int) () from Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a\libandroid_runtime.so #6 0x72403cdc in ?? () The last address is in the system@[email protected], which will also work correctly when the oat file is loaded. Can anyone give some advice?
[ "It looks like gdb is not able to find and load some of the shared libraries that are required for your program. To fix this, you can try adding the directories where the shared libraries are located to gdb's search path using the set solib-search-path command. For example:\n(gdb) set solib-search-path Z:\\program\\program\\target\\android_RelWithDebInfo\\obj\\local\\armeabi-v7a\n\nThis will tell gdb to search in the specified directory for shared libraries when it is trying to load them. You can also specify multiple directories by separating them with a colon (:) on Linux or a semicolon (;) on Windows.\nAlternatively, you can try using the sharedlibrary command in gdb to manually load the shared libraries that are missing. For example:\n(gdb) sharedlibrary libutils.so\n\nThis will load the libutils.so library and make it available for gdb to use. You can use this command to load any other missing libraries as well.\nOnce you have added the directories to the search path or manually loaded the missing libraries, you should be able to use gdb to debug your program properly.\n", "It looks like you are having trouble getting gdb to properly load and display the shared libraries in your Android application. There are a few possible solutions to this issue:\nMake sure that gdb is properly configured to use the correct shared library search paths. You can specify these paths using the set solib-search-path command in gdb, followed by the path to your shared libraries. For example:\nset solib-search-path Z:\\program\\program\\target\\android_RelWithDebInfo\\obj\\local\\armeabi-v7a\n\nUse the info shared command in gdb to see if all of the required shared libraries are actually being loaded by the debugger. If any are missing, you can use the sharedlibrary command to manually load them. For example:\nsharedlibrary libm.so\n\nIf gdb still isn't able to properly load and display the shared libraries, you may need to try using a different version of gdb. Some versions of gdb are known to have issues with Android applications, so switching to a newer or older version may help.\n" ]
[ 0, 0 ]
[ "Do you have multiple flavors in your project? Faced a similar issue where Android Studio has bug in it's iml file due to doing the gradle sync incorrectly.\n" ]
[ -1 ]
[ "android", "android_ndk", "gdb", "gdbserver" ]
stackoverflow_0048615324_android_android_ndk_gdb_gdbserver.txt
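Both answers above boil down to the same two gdb commands. If you have to repeat them on every session, gdb's built-in Python scripting can automate them; this is a hedged sketch (the search path is the one from the question, and it assumes a Python-enabled gdb build) that you would load with `source fix_solibs.py` inside gdb:

```python
# Sketch for gdb's embedded Python: set the symbol search path, re-match
# shared libraries, and print what actually loaded. Adjust SYMBOL_DIR.
import gdb

SYMBOL_DIR = r"Z:\program\program\target\android_RelWithDebInfo\obj\local\armeabi-v7a"

gdb.execute(f"set solib-search-path {SYMBOL_DIR}")
gdb.execute("sharedlibrary")  # (re)try loading symbols for every mapped library
print(gdb.execute("info sharedlibrary", to_string=True))  # verify the result
```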
Q: Azure Pipeline Creating blank folder for drop Artifact with PowerShell In the Azure Build Pipeline, I have a PowerShell task that creates a folder New-Item -ItemType directory -Path $(Build.ArtifactStagingDirectory)\changedfiles And I populate it by using this line of code if we have $file. Copy-Item -Path $file -Destination $(Build.ArtifactStagingDirectory)\changedfiles The problem is that when there is no item to put into the changedfiles folder, the changedfiles folder is not created at all. I need it to be there for the release pipeline. I have a Copy Artifact to Machine Server target and it will try to check for that specific folder from the drop artifact. But it will throw an error because the folder is not created when there are no files in it. My goal is just to make sure that the Copy Artifact task does not return any error. If there are some files added to the changedfiles folder then I will have this If I did not add anything to it, this is the result; of course there is a change on the repository that indicates 1 because I have changed the yml file, which I didn't want to put in the changedfiles folder. Hence the changedfiles folder is blank and it will not be in the 'drop' artifact as indicated in my previous screenshot. A: I have done some tests and the problem seems to be in the PublishBuildArtifacts task. It simply cannot copy empty folders. I have looked in the source code, and this could easily be a RoboCopy thing. I can however offer you two options to overcome this issue. 1 Add a placeholder file in your folder with this single line of PowerShell: New-Item -Name .placeholder -ItemType File -Path $(Build.ArtifactStagingDirectory)/changedfiles The result: 2 Another option is to use the StoreAsTar option in the PublishBuildArtifacts task: StoreAsTar: true This will TAR the artifact folder and preserve the empty folder:
Azure Pipeline Creating blank folder for drop Artifact with PowerShell
In the Azure Build Pipeline, I have a PowerShell task that creates a folder New-Item -ItemType directory -Path $(Build.ArtifactStagingDirectory)\changedfiles And I populate it by using this line of code if we have $file. Copy-Item -Path $file -Destination $(Build.ArtifactStagingDirectory)\changedfiles The problem is that when there is no item to put into the changedfiles folder, the changedfiles folder is not created at all. I need it to be there for the release pipeline. I have a Copy Artifact to Machine Server target and it will try to check for that specific folder from the drop artifact. But it will throw an error because the folder is not created when there are no files in it. My goal is just to make sure that the Copy Artifact task does not return any error. If there are some files added to the changedfiles folder then I will have this If I did not add anything to it, this is the result; of course there is a change on the repository that indicates 1 because I have changed the yml file, which I didn't want to put in the changedfiles folder. Hence the changedfiles folder is blank and it will not be in the 'drop' artifact as indicated in my previous screenshot.
[ "I have done some tests and the problem seems to be in the PublishBuildArtifacts task. It simply cannot copy empty folders. I have looked in the source code, and this could easily be a RoboCopy thing.\nI can however offer you two options to overcome this issue.\n1\nAdd a place holder file in your folder with this single line of PowerShell:\nNew-Item -Name .placeholder -ItemType File -Path (Build.ArtifactStagingDirectory)/changedfiles\nThe result:\n\n2\nAnother option is to use the StoreAsTar option in the PublishBuildArtifacts task:\nStoreAsTar: true\nThis will TAR the artifact folder and preserves the empty folder:\n\n" ]
[ 0 ]
[]
[]
[ "azure_pipelines", "powershell" ]
stackoverflow_0074653532_azure_pipelines_powershell.txt
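The placeholder trick in option 1 above can also be scripted generically. A hedged sketch in Python (not from the answer; it assumes the standard Azure Pipelines mapping of $(Build.ArtifactStagingDirectory) to the BUILD_ARTIFACTSTAGINGDIRECTORY environment variable) that drops a .placeholder into every empty folder under the staging directory:

```python
# Sketch: ensure PublishBuildArtifacts has something to copy in every folder.
import os
import sys
from pathlib import Path

# take the staging path from argv, else from the pipeline environment variable
staging = Path(sys.argv[1] if len(sys.argv) > 1
               else os.environ["BUILD_ARTIFACTSTAGINGDIRECTORY"])

for folder in [staging, *staging.rglob("*")]:
    if folder.is_dir() and not any(folder.iterdir()):
        (folder / ".placeholder").touch()     # empty file keeps the folder alive
        print(f"placeholder added: {folder}")
```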
Q: Could not find a generator for route RouteSettings("chat", null) in the _CustomTabViewState I have a set of routes that I have defined in the routes.dart and these routes are linked in the main.dart file as below. @override Widget build(BuildContext context) { return MultiProvider( providers: [ StreamProvider<ConnectivityStatus>( create: (_) => ConnectionService().connectionStatusController.stream, ), ... ChangeNotifierProvider<AuthNotifier>( create: (_) => AuthNotifier(), ), ], child: MaterialApp( debugShowCheckedModeBanner: false, routes: Routes.routes, home: SplashScreen(), ), ); } Routes.dart file: class Routes { Routes._(); static const chat = '/chat'; static final routes = <String, WidgetBuilder>{ chat: (BuildContext ctx) => CircleChat(), }; } I have a button that triggers the above route, but it throws an error. FlatButton( Navigator.of(context).pushReplacementNamed(Routes.chat); ) Error - The following assertion was thrown while handling a gesture: Could not find a generator for route RouteSettings("chat", null) in the _CustomTabViewState. Generators for routes are searched for in the following order: For the "/" route, the "builder" property, if non-null, is used. Otherwise, the "routes" table is used, if it has an entry for the route. Otherwise, onGenerateRoute is called. It should return a non-null value for any valid route not handled by "builder" and "routes". Finally if all else fails onUnknownRoute is called. Unfortunately, onUnknownRoute was not set. When the exception was thrown, this was the stack: #0 _CustomTabViewState._onUnknownRoute. P.S. - For a similar error I went through this and this, but I didn't find an explanation of why it is not working when all the semantics are correct. A: Use the routes or onGenerateRoute property of the CupertinoTabView in your CupertinoTabBar and pass your routes or onGenerateRoute arguments there.
Could not find a generator for route RouteSettings("chat", null) in the _CustomTabViewState
I have a set of routes that I have defined in the routes.dart and these routes are linked in the main.dart file as below. @override Widget build(BuildContext context) { return MultiProvider( providers: [ StreamProvider<ConnectivityStatus>( create: (_) => ConnectionService().connectionStatusController.stream, ), ... ChangeNotifierProvider<AuthNotifier>( create: (_) => AuthNotifier(), ), ], child: MaterialApp( debugShowCheckedModeBanner: false, routes: Routes.routes, home: SplashScreen(), ), ); } Routes.dart file: class Routes { Routes._(); static const chat = '/chat'; static final routes = <String, WidgetBuilder>{ chat: (BuildContext ctx) => CircleChat(), }; } I have a button that triggers the above route, but it throws an error. FlatButton( Navigator.of(context).pushReplacementNamed(Routes.chat); ) Error - The following assertion was thrown while handling a gesture: Could not find a generator for route RouteSettings("chat", null) in the _CustomTabViewState. Generators for routes are searched for in the following order: For the "/" route, the "builder" property, if non-null, is used. Otherwise, the "routes" table is used, if it has an entry for the route. Otherwise, onGenerateRoute is called. It should return a non-null value for any valid route not handled by "builder" and "routes". Finally if all else fails onUnknownRoute is called. Unfortunately, onUnknownRoute was not set. When the exception was thrown, this was the stack: #0 _CustomTabViewState._onUnknownRoute. P.S. - For a similar error I went through this and this, but I didn't find an explanation of why it is not working when all the semantics are correct.
[ "Use routes or onGenerateRoutes property in CuptinoTabView of CuptinoTabBar and pass your routes or onGenerateRoutes arguments here.\n" ]
[ 0 ]
[]
[]
[ "flutter", "flutter_routes" ]
stackoverflow_0064051765_flutter_flutter_routes.txt
Q: How to make a scheduled celery task that starts multiple jobs with different params? I have a celery task like: # Inside tasks.py from .models import Animal @shared_task def process_animals(): animals = Animal.objects.filter(age=5) for animal in animals: utils.register_animal(animal) I have a schedule like: # Inside celery.py from celery import Celery from celery.schedules import crontab app = Celery("core") app.conf.beat_schedule = { "runs-every-1-min": { "task": "my_app.core.tasks.process_animals", "schedule": crontab(), }, } There is no reason to process the Animals one at a time, they're all independent. Is it possible to "multiprocess" or "multi-task" this list? A: The following code will create a thread for each animal and process it independently. #tasks.py from .models import Animal from threading import Thread @shared_task def process_animals(): animals = Animal.objects.filter(age=5) for animal in animals: t = Thread(target=utils.register_animal, args=(animal,)) t.start() And if you want to do the same in a single process with a thread pool (if the processing is light): #tasks.py from .models import Animal from multiprocessing.dummy import Pool as ThreadPool @shared_task def process_animals(): animals = Animal.objects.filter(age=5) pool = ThreadPool() results = pool.map(utils.register_animal, animals) pool.close() pool.join()
How to make a scheduled celery task that starts multiple jobs with different params?
I have a celery task like: # Inside tasks.py from .models import Animal @shared_task def process_animals(): animals = Animal.objects.filter(age=5) for animal in animals: utils.register_animal(animal) I have a schedule like: # Inside celery.py from celery import Celery from celery.schedules import crontab app = Celery("core") app.conf.beat_schedule = { "runs-every-1-min": { "task": "my_app.core.tasks.process_animals", "schedule": crontab(), }, } There is no reason to process the Animals one at a time, they're all independent. Is it possible to "multiprocess" or "multi-task" this list?
[ "Following code will create a thread for each animal and process it independtly.\n#tasks.py\nfrom .models import Animal\nfrom threading import Thread\n\n@shared_task\ndef process_animals():\n animals = Animal.ojbects.filter(age=5)\n for animal in animals:\n t = Thread(target=utils.register_animal, args=(animal,))\n t.start()\n\nAnd if you want to do the same in single core (if processing is less):\n#tasks.py\nfrom .models import Animal\nfrom multiprocessing.dummy import Pool as ThreadPool\n \n@shared_task\ndef process_animals():\n animals = Animal.ojbects.filter(age=5)\n pool = ThreadPool()\n\n results = pool.map(utils.register_animal, animals)\n\n pool.close()\n pool.join()\n\n" ]
[ 0 ]
[]
[]
[ "celery", "django", "multiprocessing" ]
stackoverflow_0074670119_celery_django_multiprocessing.txt
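An alternative to in-task threads that stays inside Celery's own concurrency model is to fan the list out as one subtask per animal, letting the worker pool run them in parallel. A sketch under the question's assumptions (the Animal model and utils.register_animal exist as shown); passing the primary key instead of the ORM object keeps the task message serializable:

# Inside tasks.py
from celery import shared_task

from . import utils  # assumed location of register_animal
from .models import Animal

@shared_task
def process_animals():
    # Enqueue one independent task per animal instead of looping inline.
    for animal_id in Animal.objects.filter(age=5).values_list("id", flat=True):
        register_animal_task.delay(animal_id)

@shared_task
def register_animal_task(animal_id):
    utils.register_animal(Animal.objects.get(pk=animal_id))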
Q: How to kill zombie process I launched my program in the foreground (a daemon program), and then I killed it with kill -9, but I get a zombie remaining and I m not able to kill it with kill -9. How to kill a zombie process? If the zombie is a dead process (already killed), how I remove it from the output of ps aux? root@OpenWrt:~# anyprogramd & root@OpenWrt:~# ps aux | grep anyprogram 1163 root 2552 S anyprogramd 1167 root 2552 S anyprogramd 1169 root 2552 S anyprogramd 1170 root 2552 S anyprogramd 10101 root 944 S grep anyprogram root@OpenWrt:~# pidof anyprogramd 1170 1169 1167 1163 root@OpenWrt:~# kill -9 1170 1169 1167 1163 root@OpenWrt:~# ps aux |grep anyprogram 1163 root 0 Z [anyprogramd] root@OpenWrt:~# kill -9 1163 root@OpenWrt:~# ps aux |grep anyprogram 1163 root 0 Z [anyprogramd] A: A zombie is already dead, so you cannot kill it. To clean up a zombie, it must be waited on by its parent, so killing the parent should work to eliminate the zombie. (After the parent dies, the zombie will be inherited by pid 1, which will wait on it and clear its entry in the process table.) If your daemon is spawning children that become zombies, you have a bug. Your daemon should notice when its children die and wait on them to determine their exit status. An example of how you might send a signal to every process that is the parent of a zombie (note that this is extremely crude and might kill processes that you do not intend. I do not recommend using this sort of sledge hammer): # Don't do this. Incredibly risky sledge hammer! kill $(ps -A -ostat,ppid | awk '/[zZ]/ && !a[$2]++ {print $2}') A: You can clean up a zombie process by killing its parent process with the following command: kill -HUP $(ps -A -ostat,ppid | awk '{/[zZ]/{ print $2 }') A: I tried: ps aux | grep -w Z # returns the zombies pid ps o ppid {returned pid from previous command} # returns the parent kill -1 {the parent id from previous command} this will work :) A: Found it at http://www.linuxquestions.org/questions/suse-novell-60/howto-kill-defunct-processes-574612/ 2) Here a great tip from another user (Thxs Bill Dandreta): Sometimes kill -9 <pid> will not kill a process. Run ps -xal the 4th field is the parent process, kill all of a zombie's parents and the zombie dies! Example 4 0 18581 31706 17 0 2664 1236 wait S ? 0:00 sh -c /usr/bin/gcc -fomit-frame-pointer -O -mfpmat 4 0 18582 18581 17 0 2064 828 wait S ? 0:00 /usr/i686-pc-linux-gnu/gcc-bin/3.3.6/gcc -fomit-fr 4 0 18583 18582 21 0 6684 3100 - R ? 0:00 /usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.6/cc1 -quie 18581, 18582, 18583 are zombies - kill -9 18581 18582 18583 has no effect. kill -9 31706 removes the zombies. A: I tried kill -9 $(ps -A -ostat,ppid | grep -e '[zZ]'| awk '{ print $2 }') and it works for me. A: Sometimes the parent ppid cannot be killed, hence kill the zombie pid kill -9 $(ps -A -ostat,pid | awk '/[zZ]/{ print $2 }') A: On mac non of the above commands/instructions worked. To remove zombie processes you can right click on docker-icon->troubleshot->clean/purge Data. A: I do not dare to try above methods. My solution is htop then detect which process have multiprocessing.spawn and kill -9 it. A: Combining a few answers here into an elegant approach where you confirm which process is gone zombie before killing it. Add the script to .bashrc/.zshrc and run the killZombie command. 
killZombie() { pid=$(ps -A -ostat,ppid | awk '/[zZ]/ && !a[$2]++ {print $2}'); if [ "$pid" = "" ]; then echo "No zombie processes found."; else cmd=$(ps -p $pid -o cmd | sed '1d'); echo "Found zombie process PID: $pid"; echo "$cmd"; echo "Kill it? Return to continue… (ctrl+c to cancel)"; read -r; sudo kill -9 $pid; fi }
How to kill zombie process
I launched my program in the foreground (a daemon program), and then I killed it with kill -9, but I get a zombie remaining and I'm not able to kill it with kill -9. How to kill a zombie process? If the zombie is a dead process (already killed), how do I remove it from the output of ps aux?
root@OpenWrt:~# anyprogramd &
root@OpenWrt:~# ps aux | grep anyprogram
1163 root 2552 S anyprogramd
1167 root 2552 S anyprogramd
1169 root 2552 S anyprogramd
1170 root 2552 S anyprogramd
10101 root 944 S grep anyprogram
root@OpenWrt:~# pidof anyprogramd
1170 1169 1167 1163
root@OpenWrt:~# kill -9 1170 1169 1167 1163
root@OpenWrt:~# ps aux |grep anyprogram
1163 root 0 Z [anyprogramd]
root@OpenWrt:~# kill -9 1163
root@OpenWrt:~# ps aux |grep anyprogram
1163 root 0 Z [anyprogramd]
[ "A zombie is already dead, so you cannot kill it. To clean up a zombie, it must be waited on by its parent, so killing the parent should work to eliminate the zombie. (After the parent dies, the zombie will be inherited by pid 1, which will wait on it and clear its entry in the process table.) If your daemon is spawning children that become zombies, you have a bug. Your daemon should notice when its children die and wait on them to determine their exit status.\nAn example of how you might send a signal to every process that is the parent of a zombie (note that this is extremely crude and might kill processes that you do not intend. I do not recommend using this sort of sledge hammer):\n# Don't do this. Incredibly risky sledge hammer!\nkill $(ps -A -ostat,ppid | awk '/[zZ]/ && !a[$2]++ {print $2}')\n\n", "You can clean up a zombie process by killing its parent process with the following command:\nkill -HUP $(ps -A -ostat,ppid | awk '{/[zZ]/{ print $2 }')\n\n", "I tried:\nps aux | grep -w Z # returns the zombies pid\nps o ppid {returned pid from previous command} # returns the parent\nkill -1 {the parent id from previous command}\n\nthis will work :)\n", "Found it at http://www.linuxquestions.org/questions/suse-novell-60/howto-kill-defunct-processes-574612/\n2) Here a great tip from another user (Thxs Bill Dandreta):\nSometimes\nkill -9 <pid>\n\nwill not kill a process. Run\nps -xal\n\nthe 4th field is the parent process, kill all of a zombie's parents and the zombie dies!\nExample\n4 0 18581 31706 17 0 2664 1236 wait S ? 0:00 sh -c /usr/bin/gcc -fomit-frame-pointer -O -mfpmat\n4 0 18582 18581 17 0 2064 828 wait S ? 0:00 /usr/i686-pc-linux-gnu/gcc-bin/3.3.6/gcc -fomit-fr\n4 0 18583 18582 21 0 6684 3100 - R ? 0:00 /usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.6/cc1 -quie\n\n18581, 18582, 18583 are zombies -\nkill -9 18581 18582 18583\n\nhas no effect.\nkill -9 31706\n\nremoves the zombies.\n", "I tried \nkill -9 $(ps -A -ostat,ppid | grep -e '[zZ]'| awk '{ print $2 }')\n\nand it works for me. \n", "Sometimes the parent ppid cannot be killed, hence kill the zombie pid\nkill -9 $(ps -A -ostat,pid | awk '/[zZ]/{ print $2 }')\n\n", "On mac non of the above commands/instructions worked. To remove zombie processes you can right click on docker-icon->troubleshot->clean/purge Data.\n\n", "I do not dare to try above methods.\nMy solution is htop then detect which process have multiprocessing.spawn and kill -9 it.\n", "Combining a few answers here into an elegant approach where you confirm which process is gone zombie before killing it. Add the script to .bashrc/.zshrc and run the killZombie command.\nkillZombie() {\n pid=$(ps -A -ostat,ppid | awk '/[zZ]/ && !a[$2]++ {print $2}');\n if [ \"$pid\" = \"\" ]; then\n echo \"No zombie processes found.\";\n else\n cmd=$(ps -p $pid -o cmd | sed '1d');\n echo \"Found zombie process PID: $pid\";\n echo \"$cmd\";\n echo \"Kill it? Return to continue… (ctrl+c to cancel)\";\n read -r;\n sudo kill -9 $pid;\n fi\n}\n\n" ]
[ 290, 72, 47, 31, 27, 1, 0, 0, 0 ]
[]
[]
[ "debian", "linux", "shell", "ubuntu", "zombie_process" ]
stackoverflow_0016944886_debian_linux_shell_ubuntu_zombie_process.txt
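A gentler route than the sledge-hammer one-liners above is to list each zombie with its parent first and decide case by case. A sketch for a full procps ps (note that BusyBox ps on OpenWrt supports fewer options, so it may need adapting there):

# List zombies together with the parent that should reap them:
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/ { print "zombie " $1 " parent " $2 }'

# Nudge a specific parent to wait() on its dead children; killing the
# parent is only needed if it ignores SIGCHLD:
kill -s SIGCHLD <parent-pid>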
Q: RTK Query with two different apis works only with one I have two different Api. Both uses RTK query to do different set of CRUD operations. One of them works perfectly fine but the other one shows error TypeError: (0 , _api_addressApi__WEBPACK_IMPORTED_MODULE_2__.useGetAddress) is not a function Here is my addressApi.js which does not work import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react"; export const addressApi = createApi({ reducerPath: "address", baseQuery: fetchBaseQuery({ baseUrl: "https://random-data-api.com/api/v2", }), endpoints: (builder) => ({ getAddress: builder.query({ query: () => { return { url: "/addresses" }; }, }), }), }); export const { useGetAddress } = addressApi; Here is my store.js import { configureStore } from "@reduxjs/toolkit"; import { addressApi } from "../api/addressApi"; import { destinationApi } from "../api/destinationApi"; export const store = configureStore({ reducer: { // destinationApi works but addressApi does not works [destinationApi.reducerPath]: destinationApi.reducer, [addressApi.reducerPath]: addressApi.reducer, }, // Adding the api middleware enables caching, invalidation, polling, // and other useful features of `rtk-query`. middleware: (getDefaultMiddleware) => getDefaultMiddleware() .concat(destinationApi.middleware) .concat(addressApi.middleware), }); In the file where I am calling useGetAddress() import { useGetAddress } from "../api/addressApi"; function AddDestination() { .... const { data, error } = useGetAddress(); console.log(data); A: Provide method:'GET' to query Like getAddress: builder.query({ query: () => { return { url: "/addresses" , method:'GET', }; },
RTK Query with two different apis works only with one
I have two different Api. Both uses RTK query to do different set of CRUD operations. One of them works perfectly fine but the other one shows error TypeError: (0 , _api_addressApi__WEBPACK_IMPORTED_MODULE_2__.useGetAddress) is not a function Here is my addressApi.js which does not work import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react"; export const addressApi = createApi({ reducerPath: "address", baseQuery: fetchBaseQuery({ baseUrl: "https://random-data-api.com/api/v2", }), endpoints: (builder) => ({ getAddress: builder.query({ query: () => { return { url: "/addresses" }; }, }), }), }); export const { useGetAddress } = addressApi; Here is my store.js import { configureStore } from "@reduxjs/toolkit"; import { addressApi } from "../api/addressApi"; import { destinationApi } from "../api/destinationApi"; export const store = configureStore({ reducer: { // destinationApi works but addressApi does not works [destinationApi.reducerPath]: destinationApi.reducer, [addressApi.reducerPath]: addressApi.reducer, }, // Adding the api middleware enables caching, invalidation, polling, // and other useful features of `rtk-query`. middleware: (getDefaultMiddleware) => getDefaultMiddleware() .concat(destinationApi.middleware) .concat(addressApi.middleware), }); In the file where I am calling useGetAddress() import { useGetAddress } from "../api/addressApi"; function AddDestination() { .... const { data, error } = useGetAddress(); console.log(data);
[ "Provide method:'GET' to query\nLike\ngetAddress: builder.query({\n query: () => {\n return { url: \"/addresses\" ,\n method:'GET',\n };\n },\n\n" ]
[ 0 ]
[]
[]
[ "react_redux", "reactjs", "redux", "redux_toolkit", "rtk_query" ]
stackoverflow_0074670009_react_redux_reactjs_redux_redux_toolkit_rtk_query.txt
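One detail worth checking against the error message itself: for an endpoint defined as getAddress: builder.query(...), Redux Toolkit auto-generates a hook named useGetAddressQuery (with the Query suffix), not useGetAddress, so importing the latter yields undefined and produces exactly this "is not a function" error. A sketch of the corrected export and call site:

// addressApi.js: the generated hook is the endpoint name plus "Query"
export const { useGetAddressQuery } = addressApi;

// AddDestination.js
import { useGetAddressQuery } from "../api/addressApi";

function AddDestination() {
  const { data, error } = useGetAddressQuery();
  // ...
}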
Q: Why retention.ms of Kaka Streams repartition topic is set to -1 by default? Isn't this infinitely retain messages in repartition topic? I think it's related to the below links, but I don't understand. https://issues.apache.org/jira/browse/KAFKA-6535 https://issues.apache.org/jira/browse/KAFKA-6150 Kafka Streams deleting consumed repartition records, to reduce disk usage It's possible to provide topic configurations like "retention.ms", "cleanup.policy" for kafka streams internal topics like *-changelog topics to delete useless logs. But when it comes to internal topics like *-repartition topics, it's not possible to provide topic configuration values, even though the default "retention.ms" for repartition topic is "-1" which means infinite retention. How can I delete or manage repartition topics? Otherwise the repartition topic's size is going to be too large and disk malfunction problems might occur. How can I manage repartition topics? What is purgeData? Couldn't find any related explanations on the documentation. A: Fact retention.ms for the repartition topics is -1 by default and there's no way to override this value in kafka-streams client code. What I misunderstood Size of the repartition topic would be increasing infinitely since the retentions.ms for the repartition topics is -1. Fix misunderstanding There's a method called maybeCommit in the StreamThread class. maybeCommit method is called iteratively inside the loop that handles stream records. Inside the maybeCommit method (version 2.7.1), there's a comment like below. try to purge the committed records for repartition topics if possible https://github.com/apache/kafka/blob/2.7.1/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L923-L926 Based on this, what I understand is that when the record in the repartition topics is streamed down to the changelog topic, then the records already sent are purged periodically. Therefore, there's no need to clear or manage retention.ms for the repartition topics. Reference https://issues.apache.org/jira/browse/KAFKA-6150 Please leave a comment or correct this if I'm wrong. A: I was facing the same issue with ksqldb. Internal topics grew up like TB of data in a matter of days with infinite retention by default. We altered them setting retention.ms to some value instead of infinite (-1) but after that everything broke. Today I executed this command: set topic.retention.ms=3600000 After that, I created a table and all internal topics were created with retention.ms=1h instead of infinite. Will try next week in prd environment to see if ksqldb (0.28.2) evicts segments and everything is ok. Source: https://docs.confluent.io/platform/current/streams/developer-guide/config-streams.html#internal-topic-parameters Hope it helps Regards
Why is retention.ms of the Kafka Streams repartition topic set to -1 by default? Doesn't this infinitely retain messages in the repartition topic?
I think it's related to the below links, but I don't understand. https://issues.apache.org/jira/browse/KAFKA-6535 https://issues.apache.org/jira/browse/KAFKA-6150 Kafka Streams deleting consumed repartition records, to reduce disk usage It's possible to provide topic configurations like "retention.ms", "cleanup.policy" for kafka streams internal topics like *-changelog topics to delete useless logs. But when it comes to internal topics like *-repartition topics, it's not possible to provide topic configuration values, even though the default "retention.ms" for repartition topic is "-1" which means infinite retention. How can I delete or manage repartition topics? Otherwise the repartition topic's size is going to be too large and disk malfunction problems might occur. How can I manage repartition topics? What is purgeData? Couldn't find any related explanations on the documentation.
[ "Fact\n\nretention.ms for the repartition topics is -1 by default and there's no way to override this value in kafka-streams client code.\n\nWhat I misunderstood\n\nSize of the repartition topic would be increasing infinitely since the retentions.ms for the repartition topics is -1.\n\nFix misunderstanding\n\nThere's a method called maybeCommit in the StreamThread class.\nmaybeCommit method is called iteratively inside the loop that handles stream records.\nInside the maybeCommit method (version 2.7.1), there's a comment like below.\n\ntry to purge the committed records for repartition topics if possible\n\n\nhttps://github.com/apache/kafka/blob/2.7.1/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L923-L926\n\n\nBased on this, what I understand is that when the record in the repartition topics is streamed down to the changelog topic, then the records already sent are purged periodically.\nTherefore, there's no need to clear or manage retention.ms for the repartition topics.\n\nReference\n\nhttps://issues.apache.org/jira/browse/KAFKA-6150\n\nPlease leave a comment or correct this if I'm wrong.\n", "I was facing the same issue with ksqldb. Internal topics grew up like TB of data in a matter of days with infinite retention by default. We altered them setting retention.ms to some value instead of infinite (-1) but after that everything broke.\nToday I executed this command:\nset topic.retention.ms=3600000\nAfter that, I created a table and all internal topics were created with retention.ms=1h instead of infinite.\nWill try next week in prd environment to see if ksqldb (0.28.2) evicts segments and everything is ok.\nSource: https://docs.confluent.io/platform/current/streams/developer-guide/config-streams.html#internal-topic-parameters\nHope it helps\nRegards\n" ]
[ 0, 0 ]
[]
[]
[ "apache_kafka", "apache_kafka_streams", "spring_cloud_stream_binder_kafka" ]
stackoverflow_0065971984_apache_kafka_apache_kafka_streams_spring_cloud_stream_binder_kafka.txt
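To verify the purge behaviour described above on a live cluster, one can inspect a repartition topic's effective configuration and watch its earliest offset advance as committed records are deleted. A sketch with the stock Kafka CLI; the topic name is hypothetical, and GetOffsetShell used --broker-list in the 2.x tooling contemporary with the answer:

# Show retention.ms / cleanup.policy as the broker sees them:
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics \
  --entity-name my-app-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition \
  --describe

# The earliest offset (--time -2) moves forward as Streams purges data:
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 \
  --topic my-app-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition \
  --time -2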
Q: Skia Failed to create Image decoder with message 'unimplemented' I have a CutImagesPage where users in the App should be allowed to crop thweir images before uploading, there i am Using Skiasharp to crop the Images. After Cropping the Image gets Saved as a byte[] as you can see in my code. The AppManager, where the byte[[] gets saved, notifies the CreateAccount Page which gets the byte[] from the Manager and sets its own ProfileImage Property. In the View i am usign the ByteArrayToImageConverter to displayx the page. The Problem is now that i get following error during the Process of showing the image. I looked it up on Google but all SOlutions where with an Image beeing loaded from the Ressources which is not the case here. [skia] --- Failed to create image decoder with message 'unimplemented' [skia] --- Failed to create image decoder with message 'unimplemented' [System] A resource failed to call close. [System] A resource failed to call close. The Image Cropping Page: public SKBitmap ProfileImage { get; set; } public async void SetImages() { try { if (isProfileImage) { foreach (CutImages cutImage in ImageObjects) { ProfileImage = cutImage.CroppedImage.Resize(new SKImageInfo(360, 360), SKFilterQuality.High); } AppManager.ProfileImage = this.ProfileImage.Bytes; await NavigationService.GoBackAsync(); ImageObjects.Clear(); return; } foreach (CutImages cutImage in ImageObjects) { PostImages.Add(cutImage.CroppedImage.Resize(new SKImageInfo(1080,1080),SKFilterQuality.High)); } NavigationService.InsertNavigateBefore<CreatePostViewModel>(); ImageObjects.Clear(); } catch (Exception ex) { Console.WriteLine(ex); } } The Method Setting the Image: public override void AppManager_PropertyChanged(object sender, PropertyChangedEventArgs e) { base.AppManager_PropertyChanged(sender, e); if(e.PropertyName == nameof(AppManager.CroppedImage)) { ProfileImage = AppManager.CroppedImage; } } The View: <ContentPage.Resources> <xct:ByteArrayToImageSourceConverter x:Key="ByteArrayToImageSourceConverter"/> </ContentPage.Resources> <ContentPage.Content> <ScrollView> <StackLayout Grid.Column="1" Grid.Row="1"> <ffimageloading:CachedImage Margin="0,0,0,0" HorizontalOptions="CenterAndExpand" VerticalOptions="CenterAndExpand" Source="{Binding ProfileImage, Converter={StaticResource ByteArrayToImageSourceConverter}}" HeightRequest="100" WidthRequest="100" > </ffimageloading:CachedImage> A: from the docs SKBitmap Bytes Gets a copy of all the pixel data as a byte array. this is the raw pixel data, which Xamarin Forms cannot work with. XF supports encoded image formats like PNG, JPG, etc. You need to encode SKBitmap as one of those image formats, and then get the byte data from that encoded image
Skia Failed to create Image decoder with message 'unimplemented'
I have a CutImagesPage where users of the app should be allowed to crop their images before uploading; there I am using SkiaSharp to crop the images. After cropping, the image gets saved as a byte[], as you can see in my code. The AppManager, where the byte[] gets saved, notifies the CreateAccount page, which gets the byte[] from the manager and sets its own ProfileImage property. In the view I am using the ByteArrayToImageSourceConverter to display the image. The problem is that I get the following error while showing the image. I looked it up on Google, but all solutions were about an image being loaded from the resources, which is not the case here.
[skia] --- Failed to create image decoder with message 'unimplemented'
[skia] --- Failed to create image decoder with message 'unimplemented'
[System] A resource failed to call close.
[System] A resource failed to call close.

The image cropping page:
public SKBitmap ProfileImage { get; set; }

public async void SetImages()
{
    try
    {
        if (isProfileImage)
        {
            foreach (CutImages cutImage in ImageObjects)
            {
                ProfileImage = cutImage.CroppedImage.Resize(new SKImageInfo(360, 360), SKFilterQuality.High);
            }
            AppManager.ProfileImage = this.ProfileImage.Bytes;
            await NavigationService.GoBackAsync();
            ImageObjects.Clear();
            return;
        }
        foreach (CutImages cutImage in ImageObjects)
        {
            PostImages.Add(cutImage.CroppedImage.Resize(new SKImageInfo(1080, 1080), SKFilterQuality.High));
        }
        NavigationService.InsertNavigateBefore<CreatePostViewModel>();
        ImageObjects.Clear();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}

The method setting the image:
public override void AppManager_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    base.AppManager_PropertyChanged(sender, e);
    if (e.PropertyName == nameof(AppManager.CroppedImage))
    {
        ProfileImage = AppManager.CroppedImage;
    }
}

The view:
<ContentPage.Resources>
    <xct:ByteArrayToImageSourceConverter x:Key="ByteArrayToImageSourceConverter"/>
</ContentPage.Resources>
<ContentPage.Content>
    <ScrollView>
        <StackLayout Grid.Column="1" Grid.Row="1">
            <ffimageloading:CachedImage Margin="0,0,0,0" HorizontalOptions="CenterAndExpand" VerticalOptions="CenterAndExpand"
                Source="{Binding ProfileImage, Converter={StaticResource ByteArrayToImageSourceConverter}}"
                HeightRequest="100" WidthRequest="100">
            </ffimageloading:CachedImage>
[ "from the docs\n\nSKBitmap Bytes Gets a copy of all the pixel data as a byte array.\n\nthis is the raw pixel data, which Xamarin Forms cannot work with. XF supports encoded image formats like PNG, JPG, etc. You need to encode SKBitmap as one of those image formats, and then get the byte data from that encoded image\n" ]
[ 2 ]
[]
[]
[ "xamarin.forms" ]
stackoverflow_0074669334_xamarin.forms.txt
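Following the answer, the missing step is to encode the SKBitmap into a real image format before taking its bytes. A minimal C# sketch using the documented SkiaSharp API; PNG and quality 100 are arbitrary choices here:

using SkiaSharp;

static byte[] ToPngBytes(SKBitmap bitmap)
{
    // Wrap the raw bitmap in an SKImage, then encode it to PNG.
    using (var image = SKImage.FromBitmap(bitmap))
    using (var data = image.Encode(SKEncodedImageFormat.Png, 100))
    {
        // These bytes are decodable by Xamarin.Forms image converters.
        return data.ToArray();
    }
}

// In SetImages(), instead of this.ProfileImage.Bytes:
// AppManager.ProfileImage = ToPngBytes(this.ProfileImage);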
Q: I can't use queries params with rtk query So I'm very new to rtk query I'm building a web store app and I'm trying to fetch products based on category, and sort them, everything works fine until I pass the query params in useGetProductsQuery hook at first it works but when I refresh the page or wait a few seconds it shows errors the errors is just not getting the data when I'm selecting them with useSelector(state => selecProductById(state, productId)) in the product component I'm getting the ids but no products if there's a simple solution or another way please help me That's my apiSlice: import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react' export const apiSlice = createApi({ reducerPath: 'api', baseQuery: fetchBaseQuery({ baseUrl: 'http://localhost:4500/' }), tagTypes: ['Product'], endpoints: (builder) => ({}), }) that's my products slice import { createSelector, createEntityAdapter } from '@reduxjs/toolkit' import { apiSlice } from '../../app/api/apiSlice' const productsAdapter = createEntityAdapter({}) const initialState = productsAdapter.getInitialState() export const productsApiSlice = apiSlice.injectEndpoints({ endpoints: (builder) => ({ getProducts: builder.query({ query: (args) => ({ url: '/products', params: { ...args }, method: 'GET', validateStatus: (response, result) => { return response.status === 200 && !result.isError }, }), transformResponse: (responseData) => { const loadedProducts = responseData.map((product) => { product.id = product._id return product }) return productsAdapter.setAll(initialState, loadedProducts) }, providesTags: (result, error, arg) => { if (result?.ids) { return [ { type: 'Product', id: 'LIST' }, ...result.ids.map((id) => ({ type: 'Product', id })), ] } else return [{ type: 'Product', id: 'LIST' }] }, }), }), }) export const { useGetProductsQuery } = productsApiSlice export const selectProductsResult = productsApiSlice.endpoints.getProducts.select() const selectProductsData = createSelector( selectProductsResult, (productsResult) => productsResult.data ) export const { selectAll: selectAllProducts, selectById: selectPostById, selectIds: selectPostIds, } = productsAdapter.getSelectors( (state) => selectProductsData(state) ?? 
initialState ) the product component: const Products = ({ productId }) => { const product = useSelector((state) => selectPostById(state, productId)) const content = ( <Container> <ProductContainer> <ImageSection> <Image src={product.img} /> <LinksContainer> <LinksWrapper> <Links> <ShoppingCartOutlinedIcon style={{ fontSize: '1.6em' }} /> </Links> <Links> <SearchIcon style={{ fontSize: '1.6em' }} /> </Links> <Links> <FavoriteBorderOutlinedIcon style={{ fontSize: '1.6em' }} /> </Links> </LinksWrapper> </LinksContainer> </ImageSection> <InfoSection> <Title>{product.title}</Title> <StarsSection> <Stars> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> </Stars> </StarsSection> <Price>{product.price}</Price> </InfoSection> </ProductContainer> </Container> ) return content } export default Products that's my api product controller: const getAllProducts = async (req, res) => { const queryNewest = req.query.new const queryCategory = req.query.category let products if (queryNewest && queryCategory) { products = await Product.find({ categories: { $in: [queryCategory] }, }) .sort({ createdAt: -1 }) .limit(3) return res.json(products) } if (queryNewest) { products = await Product.find().sort({ createdAt: -1 }) return res.json(products) } if (queryCategory) { products = await Product.find({ categories: { $in: [queryCategory] }, }) return res.json(products) } products = await Product.find().lean() if (!products?.length) { return res.status(400).json({ message: 'No product found' }) } res.json(products) } my app component function App() { return ( <Routes> <Route path="/" element={<Layout />}> <Route index element={<Home />} /> <Route path="shop"> <Route index element={<Shop />} /> </Route> <Route path="register" element={<Register />} /> <Route path="login" element={<Login />} /> <Route path="cart" element={<Cart />} /> <Route path="product" element={<ProductView />} /> </Route> </Routes> ) } A: You are trying to use the selectPostById selector to get a product by its id, but it's not returning the correct product. You are using the setAll method to set the initial state of your products, but you are not providing an initial state to the getSelectors method when you create your selectors. So that the initial state will always be an empty object, and the selectPostById selector will not be able to find any products in that empty state. Pass the initial state to the getSelectors method when you create your selectors, like this: const { selectAll: selectAllProducts, selectById: selectPostById, selectIds: selectPostIds, } = productsAdapter.getSelectors( (state) => selectProductsData(state) ?? initialState, // <-- pass initial state here initialState // <-- initial state here ) Now selectPostById selector should be able to find the products in the initial state, and it should return the correct product when you pass a product id to it.
I can't use queries params with rtk query
So I'm very new to rtk query I'm building a web store app and I'm trying to fetch products based on category, and sort them, everything works fine until I pass the query params in useGetProductsQuery hook at first it works but when I refresh the page or wait a few seconds it shows errors the errors is just not getting the data when I'm selecting them with useSelector(state => selecProductById(state, productId)) in the product component I'm getting the ids but no products if there's a simple solution or another way please help me That's my apiSlice: import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react' export const apiSlice = createApi({ reducerPath: 'api', baseQuery: fetchBaseQuery({ baseUrl: 'http://localhost:4500/' }), tagTypes: ['Product'], endpoints: (builder) => ({}), }) that's my products slice import { createSelector, createEntityAdapter } from '@reduxjs/toolkit' import { apiSlice } from '../../app/api/apiSlice' const productsAdapter = createEntityAdapter({}) const initialState = productsAdapter.getInitialState() export const productsApiSlice = apiSlice.injectEndpoints({ endpoints: (builder) => ({ getProducts: builder.query({ query: (args) => ({ url: '/products', params: { ...args }, method: 'GET', validateStatus: (response, result) => { return response.status === 200 && !result.isError }, }), transformResponse: (responseData) => { const loadedProducts = responseData.map((product) => { product.id = product._id return product }) return productsAdapter.setAll(initialState, loadedProducts) }, providesTags: (result, error, arg) => { if (result?.ids) { return [ { type: 'Product', id: 'LIST' }, ...result.ids.map((id) => ({ type: 'Product', id })), ] } else return [{ type: 'Product', id: 'LIST' }] }, }), }), }) export const { useGetProductsQuery } = productsApiSlice export const selectProductsResult = productsApiSlice.endpoints.getProducts.select() const selectProductsData = createSelector( selectProductsResult, (productsResult) => productsResult.data ) export const { selectAll: selectAllProducts, selectById: selectPostById, selectIds: selectPostIds, } = productsAdapter.getSelectors( (state) => selectProductsData(state) ?? 
initialState ) the product component: const Products = ({ productId }) => { const product = useSelector((state) => selectPostById(state, productId)) const content = ( <Container> <ProductContainer> <ImageSection> <Image src={product.img} /> <LinksContainer> <LinksWrapper> <Links> <ShoppingCartOutlinedIcon style={{ fontSize: '1.6em' }} /> </Links> <Links> <SearchIcon style={{ fontSize: '1.6em' }} /> </Links> <Links> <FavoriteBorderOutlinedIcon style={{ fontSize: '1.6em' }} /> </Links> </LinksWrapper> </LinksContainer> </ImageSection> <InfoSection> <Title>{product.title}</Title> <StarsSection> <Stars> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> <StarOutlinedIcon style={{ fontSize: '1em', color: 'orange' }} /> </Stars> </StarsSection> <Price>{product.price}</Price> </InfoSection> </ProductContainer> </Container> ) return content } export default Products that's my api product controller: const getAllProducts = async (req, res) => { const queryNewest = req.query.new const queryCategory = req.query.category let products if (queryNewest && queryCategory) { products = await Product.find({ categories: { $in: [queryCategory] }, }) .sort({ createdAt: -1 }) .limit(3) return res.json(products) } if (queryNewest) { products = await Product.find().sort({ createdAt: -1 }) return res.json(products) } if (queryCategory) { products = await Product.find({ categories: { $in: [queryCategory] }, }) return res.json(products) } products = await Product.find().lean() if (!products?.length) { return res.status(400).json({ message: 'No product found' }) } res.json(products) } my app component function App() { return ( <Routes> <Route path="/" element={<Layout />}> <Route index element={<Home />} /> <Route path="shop"> <Route index element={<Shop />} /> </Route> <Route path="register" element={<Register />} /> <Route path="login" element={<Login />} /> <Route path="cart" element={<Cart />} /> <Route path="product" element={<ProductView />} /> </Route> </Routes> ) }
[ "You are trying to use the selectPostById selector to get a product by its id, but it's not returning the correct product.\nYou are using the setAll method to set the initial state of your products, but you are not providing an initial state to the getSelectors method when you create your selectors. So that the initial state will always be an empty object, and the selectPostById selector will not be able to find any products in that empty state.\nPass the initial state to the getSelectors method when you create your selectors, like this:\nconst {\n selectAll: selectAllProducts,\n selectById: selectPostById,\n selectIds: selectPostIds,\n} = productsAdapter.getSelectors(\n (state) => selectProductsData(state) ?? initialState, // <-- pass initial state here\n initialState // <-- initial state here\n)\n\nNow selectPostById selector should be able to find the products in the initial state, and it should return the correct product when you pass a product id to it.\n" ]
[ 0 ]
[]
[]
[ "javascript", "react_redux", "react_router", "redux_toolkit", "rtk_query" ]
stackoverflow_0074670110_javascript_react_redux_react_router_redux_toolkit_rtk_query.txt
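A likely culprit in the selector wiring above, independent of the answer given: productsApiSlice.endpoints.getProducts.select() is parameterized by the query argument, so calling select() with no argument reads a different cache entry than useGetProductsQuery(args) wrote, which matches the symptom of ids resolving while products come back empty. A sketch that keys the selector to the same args; the args value is hypothetical:

// Build the result selector for the exact args the component queried with.
const selectProductsResult = productsApiSlice.endpoints.getProducts.select({
  category: "coat", // hypothetical: must match the useGetProductsQuery(args) call
});

const selectProductsData = createSelector(
  selectProductsResult,
  (productsResult) => productsResult.data
);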
Q: I'm trying to upload a file to sanity client with client.assets.upload I catch this error when I've try to upload to sanity client. Is it the lengthComputable or something else that is the problem and if so how do I fix it? Caught Error client.assets .upload('file', selectedFile, { contentType: selectedFile.type, filename: selectedFile.name }) .then((data) => { console.log(data) }) .catch((error) => { console.log("Upload failed: ", error) }) selected File is a vid btw. I am not sure what I'm doing wrong. I've tried to use await but the documentation does not say I have to. A: It looks like the error you're encountering is caused by the selectedFile object not having a lengthComputable property. This property is used to determine the size of the file, so it's necessary in order to upload the file using the client.assets.upload method. One way to fix this issue would be to check if the selectedFile object has a lengthComputable property, and if not, add one and set it to true: client.assets .upload("file", selectedFile, { contentType: selectedFile.type, filename: selectedFile.name, lengthComputable: true, }) .then((data) => { console.log(data); }) .catch((error) => { console.log("Upload failed: ", error); });
I'm trying to upload a file to sanity client with client.assets.upload
I catch this error when I try to upload to the Sanity client. Is it lengthComputable or something else that is the problem, and if so, how do I fix it?
Caught Error
client.assets
  .upload('file', selectedFile, {
    contentType: selectedFile.type,
    filename: selectedFile.name
  })
  .then((data) => {
    console.log(data)
  })
  .catch((error) => {
    console.log("Upload failed: ", error)
  })

The selected file is a video, by the way. I am not sure what I'm doing wrong. I've tried to use await, but the documentation does not say I have to.
[ "It looks like the error you're encountering is caused by the selectedFile object not having a lengthComputable property. This property is used to determine the size of the file, so it's necessary in order to upload the file using the client.assets.upload method.\nOne way to fix this issue would be to check if the selectedFile object has a lengthComputable property, and if not, add one and set it to true:\nclient.assets\n .upload(\"file\", selectedFile, {\n contentType: selectedFile.type,\n filename: selectedFile.name,\n lengthComputable: true,\n })\n .then((data) => {\n console.log(data);\n })\n .catch((error) => {\n console.log(\"Upload failed: \", error);\n });\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "sanity" ]
stackoverflow_0074665191_javascript_sanity.txt
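Independent of the lengthComputable suggestion (which does not appear among the documented upload options), client.assets.upload expects a File/Blob in the browser or a Buffer/stream in Node, so it is worth guarding the input before uploading. A browser-side sketch; the input handler is hypothetical:

// e.g. wired to <input type="file" onChange={handleChange} />
const handleChange = (event) => {
  const selectedFile = event.target.files[0];
  if (!(selectedFile instanceof Blob)) {
    console.log("Expected a File/Blob, got:", selectedFile);
    return;
  }
  client.assets
    .upload("file", selectedFile, {
      contentType: selectedFile.type,
      filename: selectedFile.name,
    })
    .then((data) => console.log(data))
    .catch((error) => console.log("Upload failed: ", error));
};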
Q: Layer nav over text So I havd a problem 1 Im trying to make the website responsive and the paragraph ( and the World's Biggest University Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam. Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat Visit Us To Know More ) is showing in the menu and it should be behind the nav it so the red nav should show only half the text this is the problem enter image description here and it should look like this enter image description here this is the code @media(max-width: 700px) { .text-box h1 { font-size: 20px; } .nav-links ul li { display: block; } .nav-links { position: absolute; background: #f44336; height: 100vh; width: 200px; top: 0; right: 0; text-align: left; z-index: 2; } } this is the whole css code * { margin: 0; padding: 0; font-family: 'Poppins', sans-serif; } .header { min-height: 100vh; width: 100%; background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png); background-position: center; background-size: cover; position: relative; } nav { display: flex; padding: 2% 6%; justify-content: space-between; align-items: center; position: sticky; } nav img { width: 150px; } .nav-links { flex: 1; text-align: right; } .nav-links ul li { list-style: none; display: inline-block; padding: 8px 12px; position: relative; } .nav-links ul li a { color: #fff; text-decoration: none; font-size: 13px; } .nav-links ul li::after { content: ''; width: 0%; height: 2px; background: #a85d58; display: block; margin: auto; transition: 0.5s; } .nav-links ul li:hover::after { width: 100%; } .text-box { width: 90%; color: #fff; position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); text-align: center; } .text-box h1 { font-size: 62px; } .text-box p { margin: 10px 0 40px; font-size: 14px; color: #fff; } .hero-btn { display: inline-block; text-decoration: none; color: #fff; border: 1px solid #fff; padding: 12px 34px; font-size: 13px; background: transparent; position: relative; cursor: pointer; } .hero-btn:hover { border: 1px solid #f44336; background: #f44336; transition: 1s; } @media(max-width: 700px) { .text-box h1 { font-size: 20px; } .nav-links ul li { display: block; } .nav-links { position: absolute; background: #f44336; height: 100vh; width: 200px; top: 0; right: 0; text-align: left; z-index: 2; } } and this is the html code <!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>University</title> <link rel="stylesheet" href="style.css"> <link rel="preconnect" href="https://fonts.googleapis.com"> <link href="https://fonts.googleapis.com/css2?family=Poppins:ital,wght@0,700;1,700&display=swap" rel="stylesheet"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/[email protected]/css/fontawesome.min.css"> </head> <body> <section class="header"> <nav> <a href="index.html"><img src="img/logo.png"></a> <div class="nav-links"> <i class="fa fa-times"></i> <ul> <li><a href="">HOME</a></li> <li><a href="">ABOUT</a></li> <li><a href="">COURSE</a></li> <li><a href="">BLOG</a></li> <li><a href="">CONTACT</a></li> </ul> </div> <i class="fa fa-bars"></i> </nav> <div class="text-box"> <h1>World's Biggest University</h1> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam. 
Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat</p> <a href="" class="hero-btn">Visit Us To Know More</a> </div> </section> </body> </html> A: If the goal is to have the test "Visit Us To Know More" behind the red nav then you can just add a z-index: 1000; to the nav. This will make the nav layer over the text. But then you won't be able to read the text of course. Edit: so after a little playing around I added some grid-template-areas to your code so the text isn't overflowing into eachother. <!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>University</title> <link rel="stylesheet" href="style.css"> <link rel="preconnect" href="https://fonts.googleapis.com"> <link href="https://fonts.googleapis.com/css2?family=Poppins:ital,wght@0,700;1,700&display=swap" rel="stylesheet"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/[email protected]/css/fontawesome.min.css"> </head> <body> <header class="header"> <nav> <a href="index.html"><img src="img/logo.png"></a> <div class="nav-links"> <i class="fa fa-times"></i> <ul> <li><a href="">HOME</a></li> <li><a href="">ABOUT</a></li> <li><a href="">COURSE</a></li> <li><a href="">BLOG</a></li> <li><a href="">CONTACT</a></li> </ul> </div> <i class="fa fa-bars"></i> </nav> </header> <main> <div class="text-box"> <h1>World's Biggest University</h1> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam. Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat</p> <a href="" class="hero-btn">Visit Us To Know More</a> </div> </main> </body> </html> * { margin: 0; padding: 0; font-family: 'Poppins', sans-serif; } body { display: grid; height: fit-content; grid-template-areas: 'header' 'section' ; } main { grid-area: section; display: flex; width: auto; background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png); height: 100vh; vertical-align: middle; } .text-box { background-color: gray; } header { grid-area: header; width: auto; background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png); background-position: center; background-size: cover; position: relative; } nav { display: flex; padding: 2% 6%; justify-content: space-between; align-items: center; position: sticky; z-index: 1000; } nav img { width: 150px; } .nav-links { flex: 1; text-align: right; } .nav-links ul li { list-style: none; display: inline-block; padding: 8px 12px; position: relative; } .nav-links ul li a { color: #fff; text-decoration: none; font-size: 13px; } .nav-links ul li::after { content: ''; width: 0%; height: 2px; background: #a85d58; display: block; margin: auto; transition: 0.5s; } .nav-links ul li:hover::after { width: 100%; } .text-box { padding-top: 5rem; height: 100%; width: 100%; color: #fff; margin: 0 auto; text-align: center; } .text-box h1 { font-size: 62px; } .text-box p { margin: 10px 0 40px; font-size: 14px; color: #fff; } .hero-btn { display: inline-block; text-decoration: none; color: #fff; border: 1px solid #fff; padding: 12px 34px; font-size: 13px; background: transparent; position: relative; cursor: pointer; } .hero-btn:hover { border: 1px solid #f44336; background: #f44336; transition: 1s; } @media all and (max-width: 700px) { body { display: grid; grid-template-areas: 'section header' 'section header' ; } .text-box h1 { padding-top: 10rem; font-size: 20px; } .nav-links ul li { display: block; } header { 
background: #f44336; text-align: left; } } A: This is very closely related to the other answer, but styled in the way that you showed in the image. I commented in sections I changed. If the other answer is what you were looking for, please select it as your answer to your question. If this answer is the correct response, please do the same. * { margin: 0; padding: 0; font-family: 'Poppins', sans-serif; } body{ z-index: 0; } .header { min-height: 100vh; width: 100%; background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png); background-position: center; background-size: cover; position: relative; } nav { display: flex; padding: 2% 6%; justify-content: space-between; align-items: center; position: sticky; } nav img { width: 150px; } .nav-links { flex: 1; text-align: right; } .nav-links ul li { list-style: none; display: inline-block; padding: 8px 12px; position: relative; } .nav-links ul li a { color: #fff; text-decoration: none; font-size: 13px; } .nav-links ul li::after { content: ''; width: 0%; height: 2px; background: #a85d58; display: block; margin: auto; transition: 0.5s; } .nav-links ul li:hover::after { width: 100%; } .text-box { width: 90%; color: #fff; position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); text-align: center; } .text-box h1 { font-size: 62px; } .text-box p { margin: 10px 0 40px; font-size: 14px; color: #fff; } .hero-btn { display: inline-block; text-decoration: none; color: #fff; border: 1px solid #fff; padding: 12px 34px; font-size: 13px; background: transparent; position: relative; cursor: pointer; } .hero-btn:hover { border: 1px solid #f44336; background: #f44336; transition: 1s; } @media(max-width: 700px) { .text-box h1 { font-size: 20px; } .nav-links ul li { display: block; } .nav-links{ position: absolute; background: #f44336; height: 100vh; width: 200px; top: 0; right: 0; text-align: left; z-index: 20; /*This index brings forth the nav links */ } nav { z-index: 19; /* this brings the nav forward as well */ } } <!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>University</title> <link rel="stylesheet" href="style.css"> <link rel="preconnect" href="https://fonts.googleapis.com"> <link href="https://fonts.googleapis.com/css2?family=Poppins:ital,wght@0,700;1,700&display=swap" rel="stylesheet"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.2.1/css/all.min.css"> <!--This is the correct link to the fontawesome icons you want--> </head> <body> <section class="header"> <nav> <a href="index.html"><img src="img/logo.png"></a> <div class="nav-links"> <i class="fa-solid fa-xmark"></i><!--This is a 6.2.1 version icon--> <ul> <li><a href="">HOME</a></li> <li><a href="">ABOUT</a></li> <li><a href="">COURSE</a></li> <li><a href="">BLOG</a></li> <li><a href="">CONTACT</a></li> </ul> </div> <i class="fa-solid fa-bars"></i><!--This is a 6.2.1 version icon--> </nav> <div class="text-box"> <h1>World's Biggest University</h1> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam. Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat</p> <a href="" class="hero-btn">Visit Us To Know More</a> </div> </section> </body> </html>
Layer nav over text
So I havd a problem 1 Im trying to make the website responsive and the paragraph ( and the World's Biggest University Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam. Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat Visit Us To Know More ) is showing in the menu and it should be behind the nav it so the red nav should show only half the text this is the problem enter image description here and it should look like this enter image description here this is the code @media(max-width: 700px) { .text-box h1 { font-size: 20px; } .nav-links ul li { display: block; } .nav-links { position: absolute; background: #f44336; height: 100vh; width: 200px; top: 0; right: 0; text-align: left; z-index: 2; } } this is the whole css code * { margin: 0; padding: 0; font-family: 'Poppins', sans-serif; } .header { min-height: 100vh; width: 100%; background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png); background-position: center; background-size: cover; position: relative; } nav { display: flex; padding: 2% 6%; justify-content: space-between; align-items: center; position: sticky; } nav img { width: 150px; } .nav-links { flex: 1; text-align: right; } .nav-links ul li { list-style: none; display: inline-block; padding: 8px 12px; position: relative; } .nav-links ul li a { color: #fff; text-decoration: none; font-size: 13px; } .nav-links ul li::after { content: ''; width: 0%; height: 2px; background: #a85d58; display: block; margin: auto; transition: 0.5s; } .nav-links ul li:hover::after { width: 100%; } .text-box { width: 90%; color: #fff; position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); text-align: center; } .text-box h1 { font-size: 62px; } .text-box p { margin: 10px 0 40px; font-size: 14px; color: #fff; } .hero-btn { display: inline-block; text-decoration: none; color: #fff; border: 1px solid #fff; padding: 12px 34px; font-size: 13px; background: transparent; position: relative; cursor: pointer; } .hero-btn:hover { border: 1px solid #f44336; background: #f44336; transition: 1s; } @media(max-width: 700px) { .text-box h1 { font-size: 20px; } .nav-links ul li { display: block; } .nav-links { position: absolute; background: #f44336; height: 100vh; width: 200px; top: 0; right: 0; text-align: left; z-index: 2; } } and this is the html code <!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>University</title> <link rel="stylesheet" href="style.css"> <link rel="preconnect" href="https://fonts.googleapis.com"> <link href="https://fonts.googleapis.com/css2?family=Poppins:ital,wght@0,700;1,700&display=swap" rel="stylesheet"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/[email protected]/css/fontawesome.min.css"> </head> <body> <section class="header"> <nav> <a href="index.html"><img src="img/logo.png"></a> <div class="nav-links"> <i class="fa fa-times"></i> <ul> <li><a href="">HOME</a></li> <li><a href="">ABOUT</a></li> <li><a href="">COURSE</a></li> <li><a href="">BLOG</a></li> <li><a href="">CONTACT</a></li> </ul> </div> <i class="fa fa-bars"></i> </nav> <div class="text-box"> <h1>World's Biggest University</h1> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam. Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat</p> <a href="" class="hero-btn">Visit Us To Know More</a> </div> </section> </body> </html>
[ "If the goal is to have the test \"Visit Us To Know More\" behind the red nav then you can just add a z-index: 1000; to the nav. This will make the nav layer over the text. But then you won't be able to read the text of course.\nEdit: so after a little playing around I added some grid-template-areas to your code so the text isn't overflowing into eachother.\n<!DOCTYPE html>\n<html>\n\n<head>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>University</title>\n <link rel=\"stylesheet\" href=\"style.css\">\n <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\">\n <link href=\"https://fonts.googleapis.com/css2?family=Poppins:ital,wght@0,700;1,700&display=swap\" rel=\"stylesheet\">\n <link rel=\"stylesheet\"\n href=\"https://cdn.jsdelivr.net/npm/@fortawesome/[email protected]/css/fontawesome.min.css\">\n</head>\n\n<body>\n <header class=\"header\">\n <nav>\n <a href=\"index.html\"><img src=\"img/logo.png\"></a>\n <div class=\"nav-links\">\n <i class=\"fa fa-times\"></i>\n <ul>\n <li><a href=\"\">HOME</a></li>\n <li><a href=\"\">ABOUT</a></li>\n <li><a href=\"\">COURSE</a></li>\n <li><a href=\"\">BLOG</a></li>\n <li><a href=\"\">CONTACT</a></li>\n </ul>\n </div>\n <i class=\"fa fa-bars\"></i>\n </nav>\n </header>\n <main>\n <div class=\"text-box\">\n <h1>World's Biggest University</h1>\n <p>\n Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam.\n Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat</p>\n <a href=\"\" class=\"hero-btn\">Visit Us To Know More</a>\n </div>\n </main>\n</body>\n\n</html>\n\n* {\n margin: 0;\n padding: 0;\n font-family: 'Poppins', sans-serif;\n}\n\nbody {\n display: grid;\n height: fit-content;\n grid-template-areas:\n 'header'\n 'section'\n ;\n}\n\nmain {\n grid-area: section;\n display: flex;\n width: auto;\n background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png);\n height: 100vh;\n vertical-align: middle;\n}\n\n.text-box {\n background-color: gray;\n}\n\nheader {\n grid-area: header;\n width: auto;\n background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png);\n background-position: center;\n background-size: cover;\n position: relative;\n}\n\n\nnav {\n display: flex;\n padding: 2% 6%;\n justify-content: space-between;\n align-items: center;\n position: sticky;\n z-index: 1000;\n}\n\nnav img {\n width: 150px;\n}\n\n.nav-links {\n flex: 1;\n text-align: right;\n}\n\n.nav-links ul li {\n list-style: none;\n display: inline-block;\n padding: 8px 12px;\n position: relative;\n}\n\n.nav-links ul li a {\n color: #fff;\n text-decoration: none;\n font-size: 13px;\n}\n\n.nav-links ul li::after {\n content: '';\n width: 0%;\n height: 2px;\n background: #a85d58;\n display: block;\n margin: auto;\n transition: 0.5s;\n}\n\n.nav-links ul li:hover::after {\n width: 100%;\n}\n\n.text-box {\n padding-top: 5rem;\n height: 100%;\n width: 100%;\n color: #fff;\n margin: 0 auto;\n text-align: center;\n\n}\n\n.text-box h1 {\n font-size: 62px;\n}\n\n.text-box p {\n margin: 10px 0 40px;\n font-size: 14px;\n color: #fff;\n}\n\n.hero-btn {\n display: inline-block;\n text-decoration: none;\n color: #fff;\n border: 1px solid #fff;\n padding: 12px 34px;\n font-size: 13px;\n background: transparent;\n position: relative;\n cursor: pointer;\n}\n\n.hero-btn:hover {\n border: 1px solid #f44336;\n background: #f44336;\n transition: 1s;\n}\n\n@media all and (max-width: 700px) {\n body {\n display: 
grid;\n grid-template-areas:\n 'section header'\n 'section header'\n ;\n }\n\n .text-box h1 {\n padding-top: 10rem;\n font-size: 20px;\n }\n\n .nav-links ul li {\n display: block;\n }\n\n header {\n background: #f44336;\n text-align: left;\n }\n}\n\n", "This is very closely related to the other answer, but styled in the way that you showed in the image. I commented in sections I changed. If the other answer is what you were looking for, please select it as your answer to your question. If this answer is the correct response, please do the same.\n\n\n* {\n margin: 0;\n padding: 0;\n font-family: 'Poppins', sans-serif;\n}\n\nbody{ \n z-index: 0;\n}\n\n.header {\n min-height: 100vh;\n width: 100%;\n background-image: linear-gradient(rgba(4, 9, 30, 0.7), rgba(4, 9, 30, 0.7)), url(./img/banner.png);\n background-position: center;\n background-size: cover;\n position: relative;\n}\n\nnav {\n display: flex;\n padding: 2% 6%;\n justify-content: space-between;\n align-items: center;\n position: sticky;\n}\n\nnav img {\n width: 150px;\n}\n\n.nav-links {\n flex: 1;\n text-align: right;\n}\n\n.nav-links ul li {\n list-style: none;\n display: inline-block;\n padding: 8px 12px;\n position: relative;\n}\n\n.nav-links ul li a {\n color: #fff;\n text-decoration: none;\n font-size: 13px;\n}\n\n.nav-links ul li::after {\n content: '';\n width: 0%;\n height: 2px;\n background: #a85d58;\n display: block;\n margin: auto;\n transition: 0.5s;\n}\n\n.nav-links ul li:hover::after {\n width: 100%;\n}\n\n.text-box {\n width: 90%;\n color: #fff;\n position: absolute;\n top: 50%;\n left: 50%;\n transform: translate(-50%, -50%);\n text-align: center;\n}\n\n.text-box h1 {\n font-size: 62px;\n}\n\n.text-box p {\n margin: 10px 0 40px;\n font-size: 14px;\n color: #fff;\n}\n\n.hero-btn {\n display: inline-block;\n text-decoration: none;\n color: #fff;\n border: 1px solid #fff;\n padding: 12px 34px;\n font-size: 13px;\n background: transparent;\n position: relative;\n cursor: pointer;\n}\n\n.hero-btn:hover {\n border: 1px solid #f44336;\n background: #f44336;\n transition: 1s;\n}\n\n@media(max-width: 700px) {\n .text-box h1 {\n font-size: 20px;\n }\n\n .nav-links ul li {\n display: block;\n }\n\n .nav-links{\n position: absolute;\n background: #f44336;\n height: 100vh;\n width: 200px;\n top: 0;\n right: 0;\n text-align: left;\n z-index: 20; /*This index brings forth the nav links */\n }\n nav {\n z-index: 19; /* this brings the nav forward as well */\n }\n}\n<!DOCTYPE html>\n<html>\n\n<head>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>University</title>\n <link rel=\"stylesheet\" href=\"style.css\">\n <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\">\n <link href=\"https://fonts.googleapis.com/css2?family=Poppins:ital,wght@0,700;1,700&display=swap\" rel=\"stylesheet\">\n <link rel=\"stylesheet\" \nhref=\"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.2.1/css/all.min.css\">\n<!--This is the correct link to the fontawesome icons you want-->\n</head>\n\n<body>\n\n <section class=\"header\">\n <nav>\n <a href=\"index.html\"><img src=\"img/logo.png\"></a>\n <div class=\"nav-links\">\n <i class=\"fa-solid fa-xmark\"></i><!--This is a 6.2.1 version icon-->\n <ul>\n <li><a href=\"\">HOME</a></li>\n <li><a href=\"\">ABOUT</a></li>\n <li><a href=\"\">COURSE</a></li>\n <li><a href=\"\">BLOG</a></li>\n <li><a href=\"\">CONTACT</a></li>\n </ul>\n </div>\n <i class=\"fa-solid fa-bars\"></i><!--This is a 6.2.1 version icon-->\n </nav>\n\n <div class=\"text-box\">\n <h1>World's 
Biggest University</h1>\n <p>\n Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec eget iaculis dui, quis dapibus diam.\n Etiam tellus erat, consectetur eget eros sit amet, tincidunt consectetur erat</p>\n <a href=\"\" class=\"hero-btn\">Visit Us To Know More</a>\n </div>\n\n\n </section>\n\n</body>\n\n</html>\n\n\n\n" ]
[ 2, 1 ]
[]
[]
[ "css", "html", "menu", "nav", "responsive" ]
stackoverflow_0074669479_css_html_menu_nav_responsive.txt
Q: How to unpivot date range in Excel/ PowerQuery Would someone be able to tell me the best way of unpivoting the date range that currently appears in each row: start date and end date, so that the 'Title' appears row by row for each date included in the date range. I expect the title not to appear once, but line by line as many times as the date range indicates - so if the date range runs from 01/12/2022 to 03/12/2022 the Title will appear three times with the calendar date in the column next to it: 01/12/2022, 02/12/2022, 03/12/2022. Image of the dataset: My current thinking is to use PowerQuery but I'm stuck. Maybe there's a clever Excel function that can do what I need. Any suggestion/ help would be awesome. Many thanks! A: The trick is to read the two dates as numbers. Read the start and end dates as numbers Int64.Type Make a new column and create a list/range with {[Date start]..[Date end]} Expand the list to multiple rows with Table.ExpandListColumn Switch back the three dates to date type Try this : let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content], #"Changed Type" = Table.TransformColumnTypes(Source,{{"A C Title", type text}, {"Date start", Int64.Type}, {"Date end", Int64.Type}}), #"Added Custom" = Table.AddColumn(#"Changed Type", "Custom", each try {[Date start]..[Date end]} otherwise null), #"Expanded Custom" = Table.ExpandListColumn(#"Added Custom", "Custom"), #"Changed Type1" = Table.TransformColumnTypes(#"Expanded Custom",{{"Custom", type date}, {"Date start", type date}, {"Date end", type date}}) in #"Changed Type1" # Output :
How to unpivot date range in Excel/ PowerQuery
Would someone be able to tell me the best way of unpivoting the date range that currently appears in each row: start date and end date, so that the 'Title' appears row by row for each date included in the date range. I expect the title not to appear once, but line by line as many times as the date range indicates - so if the date range runs from 01/12/2022 to 03/12/2022 the Title will appear three times with the calendar date in the column next to it: 01/12/2022, 02/12/2022, 03/12/2022. Image of the dataset: My current thinking is to use PowerQuery but I'm stuck. Maybe there's a clever Excel function that can do what I need. Any suggestion/ help would be awesome. Many thanks!
[ "The trick is to read the two dates as numbers.\n\nRead the start and end dates as numbers Int64.Type\nMake a new column and create a list/range with {[Date start]..[Date end]}\nExpand the list to multiple rows with Table.ExpandListColumn\nSwitch back the three dates to date type\n\nTry this :\nlet\n Source = Excel.CurrentWorkbook(){[Name=\"Table1\"]}[Content],\n #\"Changed Type\" = Table.TransformColumnTypes(Source,{{\"A C Title\", type text}, {\"Date start\", Int64.Type}, {\"Date end\", Int64.Type}}),\n #\"Added Custom\" = Table.AddColumn(#\"Changed Type\", \"Custom\", each try {[Date start]..[Date end]} otherwise null),\n #\"Expanded Custom\" = Table.ExpandListColumn(#\"Added Custom\", \"Custom\"),\n #\"Changed Type1\" = Table.TransformColumnTypes(#\"Expanded Custom\",{{\"Custom\", type date}, {\"Date start\", type date}, {\"Date end\", type date}})\nin\n #\"Changed Type1\"\n\n# Output :\n\n" ]
[ 0 ]
[]
[]
[ "date", "date_range", "excel", "powerquery", "unpivot" ]
stackoverflow_0074669924_date_date_range_excel_powerquery_unpivot.txt
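A note on the M code above: the Int64 retype works because M's {a..b} list syntax needs numbers, but the same row expansion can also be done with date-typed columns via List.Dates, so no type round-trip is needed. A minimal sketch, assuming the same table and column names as the answer (only the custom-column step changes):

#"Added Custom" = Table.AddColumn(#"Changed Type", "Custom", each
    List.Dates(
        [Date start],                                  // first day of the range
        Duration.Days([Date end] - [Date start]) + 1,  // day count, inclusive
        #duration(1, 0, 0, 0)                          // step of one day
    )),
#"Expanded Custom" = Table.ExpandListColumn(#"Added Custom", "Custom")

With this variant the start and end columns can stay typed as date throughout, so the two conversion steps around the expansion fall away.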
Q: Passing a function's output as a parameter of another function I'm having a hard time figuring out how to pass a function's return as a parameter to another function. I've searched a lot of threads that are deviations of this problem but I can't think of a solution from them. My code isn't good yet, but I just need help on the line where the error is occurring to start with. Instructions: create a function that asks the user to enter their birthday and returns a date object. Validate user input as well. This function must NOT take any parameters. create another function that takes the date object as a parameter. Calculate the age of the user using their birth year and the current year. def func1(): bd = input("When is your birthday? ") try: dt.datetime.strptime(bd, "%m/%d/%Y") except ValueError as e: print("There is a ValueError. Please format as MM/DD/YYY") except Exception as e: print(e) return bd def func2(bd): today = dt.datetime.today() age = today.year - bd.year return age This is the Error I get: TypeError: func2() missing 1 required positional argument: 'bday' So far, I've tried: assigning the func1 to a variable and passing the variable as func2 parameter calling func1 inside func2 defining func1 inside func2 A: You can't pass the function object itself as the argument; you need to call the function and pass its return value. You can do it like this: import datetime as dt def func1(): bd = input("When is your birthday? ") try: dt.datetime.strptime(bd, "%m/%d/%Y") except ValueError as e: print("There is a ValueError. Please format as MM/DD/YYYY") except Exception as e: print(e) return bd def func2(bd): today = dt.datetime.today() age = today.year - bd.year return age func2(func1()) There are still some errors to be solved (func1 returns a string rather than a datetime object, as the next answer explains), but the TypeError you had should be solved there A: You're almost there, a few subtleties to consider: The datetime object must be assigned to a variable and returned. Your code was not assigning the datetime object, but returning a str object for input into func2. Which would have thrown an attribute error as a str has no year attribute. Simply subtracting the years will not always give the age. What if the individual's date of birth has not yet come? In this case, 1 must be subtracted. (Notice the code update below). For example: from datetime import datetime as dt def func1(): bday = input("When is your birthday? Enter as MM/DD/YYYY: ") try: # Assign the datetime object. dte = dt.strptime(bday, "%m/%d/%Y") except ValueError as e: print("There is a ValueError. Please format as MM/DD/YYYY") except Exception as e: print(e) return dte # <-- Return the datetime, not a string. def func2(bdate): today = dt.today() # Account for the date of birth not yet arriving. age = today.year - bdate.year - ((today.month, today.day) < (bdate.month, bdate.day)) return age Can be called using: func2(bdate=func1())
Passing a function's output as a parameter of another function
I'm having a hard time figuring out how to pass a function's return as a parameter to another function. I've searched a lot of threads that are deviations of this problem but I can't think of a solution from them. My code isn't good yet, but I just need help on the line where the error is occurring to start with. Instructions: create a function that asks the user to enter their birthday and returns a date object. Validate user input as well. This function must NOT take any parameters. create another function that takes the date object as a parameter. Calculate the age of the user using their birth year and the current year. def func1(): bd = input("When is your birthday? ") try: dt.datetime.strptime(bd, "%m/%d/%Y") except ValueError as e: print("There is a ValueError. Please format as MM/DD/YYY") except Exception as e: print(e) return bd def func2(bd): today = dt.datetime.today() age = today.year - bd.year return age This is the Error I get: TypeError: func2() missing 1 required positional argument: 'bday' So far, I've tried: assigning the func1 to a variable and passing the variable as func2 parameter calling func1 inside func2 defining func1 inside func2
[ "You can't use a function as a parameter, I think what you want to do is use it as an argument. You can do it like this:\nimport datetime as dt\n\ndef func1():\n bd = input(\"When is your birthday? \")\n try:\n dt.datetime.strptime(bd, \"%m/%d/%Y\")\n except ValueError as e:\n print(\"There is a ValueError. Please format as MM/DD/YYY\")\n except Exception as e:\n print(e)\n return bd\n\ndef func2(bd)\n today = dt.datetime.today()\n age = today.year - bd.year\n return age\n\nfunc2(func1)\n\nThere are still some errors to be solved but the problem you had should be solved there\n", "You're almost there, a few subtleties to consider:\n\nThe datetime object must be assigned to a variable and returned.\nYour code was not assigning the datetime object, but returning a str object for input into func2. Which would have thrown an attribute error as a str has no year attribute.\nSimply subtracting the years will not always give the age. What if the individual's date of birth has not yet come? In this case, 1 must be subtracted. (Notice the code update below).\n\nFor example:\nfrom datetime import datetime as dt\n\ndef func1():\n bday = input(\"When is your birthday? Enter as MM/DD/YYYY: \")\n try:\n # Assign the datetime object.\n dte = dt.strptime(bday, \"%m/%d/%Y\")\n except ValueError as e:\n print(\"There is a ValueError. Please format as MM/DD/YYYY\")\n except Exception as e:\n print(e)\n return dte # <-- Return the datetime, not a string.\n\ndef func2(bdate):\n today = dt.today()\n # Account for the date of birth not yet arriving.\n age = today.year - bdate.year - ((today.month, today.day) < (bdate.month, bdate.day))\n return age\n\nCan be called using:\nfunc2(bdate=func1())\n\n" ]
[ 0, 0 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074663224_function_python.txt
Q: Isotope v2 filtering with Infinite Scroll - Filter not finding all items and Window not resizing on filter Head's up! There's a pending feature-request issue on Isotope's GitHub repo that you should add a "" reaction to if you're interested in seeing official docs and demos for this (how to combine Isotope, Infinite Scroll, filtering, and sorting). It was opened by Isotope's creator to gauge interest. If interested, please upvote! **TL;DR: To help get official docs and demos for this, go here and add a "" reaction.** Trying to piece together a filterable layout using the Isotope JS plugin and Paul Irish's (sadly unmaintained) Infinite Scroll plugin. Filtering is somewhat working. Initially it filters the page 1 content. For it to filter items not on page 1 I need to scroll down (I suppose this is bringing the elements in the browser's memory, thus making it available to the filtering script?) via a set of select boxes for the initial page content (ie: the content on page 1). Question 1: How to get the filter to work for all page items? ie: How to reference all elements in the filter script, even those not yet brought onto the page via infinite scroll? Question 2: Once I have scrolled down and all the elements are filterable, the window does not resize on filtering. so when there are only one or two elements shown via the filter, it's still possible to scroll way down the page (even though no elements are shown). Upon inspection of these elements, I see that they're still in the DOM. Filtering Script function filterTags(){ isotopeInit(); var $checkboxes = $('#tag-wrap input') $checkboxes.change(function(){ var arr = []; $checkboxes.filter(':checked').each(function(){ var $dataToFilter = $(this).attr('data-filter'); arr.push( $dataToFilter ); }); arr = arr.join(', '); $container.isotope({ filter: arr }); }); }; Isotope Init function isotopeInit(){ var $container = $('.post-excerpts').imagesLoaded( function() { $container.isotope({ itemSelector: '.post-excerpt-block-wrap', layoutMode: 'masonry', animationEngine: "best-available", masonry: { columnWidth: '.post-excerpt-block-wrap' }, transitionDuration: '2.0', hiddenStyle: { opacity: 0 }, visibleStyle: { opacity: 1, transform: 'scale(1)' } }); }); }; Infinite Scroll Init $container.infinitescroll({ loading: { finished: undefined, finishedMsg: "<em>No more posts to load.</em>", img: "http://www.infinite-scroll.com/loading.gif", msg: null, msgText: "<em>Loading the next set of posts...</em>", selector: '.infinite-loader', speed: 'fast', start: undefined }, binder: $(window), //pixelsFromNavToBottom: Math.round($(window).height() * 0.9), //bufferPx: Math.round($(window).height() * 0.9), nextSelector: "a.older-posts", navSelector: "nav.pagination", contentSelector: ".content", itemSelector: ".post-excerpt-block-wrap", maxPage: {{pagination.pages}}, appendCallback: true }, // Callback for initializing scripts to added post excerpts function(newElements) { var $newElems = $( newElements ); loadImages(); checkForFeatured(); makeFontResponsive(); addReadMoreLinks(); fitVidInit(); $newElems.imagesLoaded(function(){ $container.isotope( 'appended', $newElems ); }); } ); Any ideas, suggestions, or other insights are incredibly welcomed. Many thanks in advance. ##Update: Regarding Questions 2: Seems the problem is related to how Isotope is being told to filter the items. 
Specifically, this code from the isotope init function: transitionDuration: '2.0', hiddenStyle: { opacity: 0 }, visibleStyle: { opacity: 1, transform: 'scale(1)' } I've tried changing it to the following, though this removes the completely from the DOM (fixing the spacing issue) and they're not returned into the DOM upon "unfiltering" them. So it's not a solution: hiddenStyle: { display: 'none' }, visibleStyle: { display: 'visible', transform: 'scale(1)' } I've also tried simply removing these config lines altogether, which seems like the obvious "clean" solution, but this too still leaves lots of white space on the page upon filtering. Not sure whether the problem here is with my Isotope or Infinite Scroll implementation. A: For question 2, one thing you could do is applying the display:none style to all hidden elements (and remove from all the visible ones) after isotope filtering. I think you should be able to use the "on layoutComplete" event listener of isotope to apply it at the right time, like this: $container.isotope( 'on', 'layoutComplete', function( isoInstance, laidOutItems ) { $('.my-elements-class.hiddenStyle').addClass('reallyHiddenStyle'); $('.my-elements-class.visibleStyle').removeClass('reallyHiddenStyle'); } ); Where, of course, the elements you want to filter are of css class my-elements-class, you applied isotope filtering to $container and you define reallyHiddenStyle: { display: 'none' } in your CSS. For question 1, perhaps you need to use a similar strategy with infinitescrolling callback, adding new elements to the filter once they appear because of scrolling. You already have the callback passed as last parameter of the infinitescroll method, so at a quick look it seems that something like this might work: $container.isotope('destroy'); $.each(newElements, function (i, el){/** add new elements to arr */}); $container.isotope({ filter: arr }); Do you have a working example you can share? So that I can check it out, if you'd like me to. A: To answer Question 1, you can try using the stamp option in the Isotope initialization. This option allows you to specify elements that are not to be laid out by Isotope, but are still considered in the filter. You can try adding the following to your isotopeInit() function: function isotopeInit(){ var $container = $('.post-excerpts').imagesLoaded( function() { $container.isotope({ // other options... stamp: ".infinite-loader", // add this line }); }); } This will tell Isotope to include elements with the class infinite-loader in the filtering, but not lay them out. As for Question 2, the issue is likely with the way you are calling Isotope's filter method. In your filterTags() function, you are concatenating all selected filters into a string using arr.join(', '). This will result in a filter string that looks like ".filter1, .filter2, .filter3". However, this is not the correct format for Isotope's filter. To correctly filter multiple items, you need to use a filter string that looks like this: ".filter1.filter2.filter3". To fix this, you can modify your filterTags() function to concatenate the selected filters using the . 
character instead of , like so: function filterTags(){ isotopeInit(); var $checkboxes = $('#tag-wrap input') $checkboxes.change(function(){ var arr = []; $checkboxes.filter(':checked').each(function(){ var $dataToFilter = $(this).attr('data-filter'); arr.push( $dataToFilter ); }); arr = arr.join('.'); // modify this line $container.isotope({ filter: arr }); }); }; Note that this change may require you to update your HTML data-filter attributes to include the . character at the beginning, like so: instead of . A: To answer your first question, you can try calling the filterTags() function from inside the infinite scroll's callback function. This way, when new items are added to the page via infinite scroll, they will be filtered according to the selected tags. $container.infinitescroll({ // Infinite scroll options... }, // Callback for initializing scripts to added post excerpts function(newElements) { var $newElems = $( newElements ); loadImages(); checkForFeatured(); makeFontResponsive(); addReadMoreLinks(); fitVidInit(); $newElems.imagesLoaded(function(){ $container.isotope( 'appended', $newElems ); filterTags(); }); } ); For your second question, it looks like the issue is that the items that are hidden by the filter are still present in the DOM, but with a style of display: none applied to them. This means that they still take up space in the layout, even though they are not visible. To fix this, you can try calling the isotope() function on the container again after the items have been filtered. This will cause Isotope to adjust the layout and remove the space occupied by the hidden items. You can try adding the following code to the filterTags() function: $checkboxes.change(function(){ var arr = []; $checkboxes.filter(':checked').each(function(){ var $dataToFilter = $(this).attr('data-filter'); arr.push( $dataToFilter ); }); arr = arr.join(', '); $container.isotope({ filter: arr }); $container.isotope(); }); This should cause Isotope to adjust the layout after the items have been filtered, which should fix the issue with the window not resizing. A: It sounds like you are encountering two separate issues with using Isotope and Infinite Scroll together. For question 1, it sounds like the issue is that Infinite Scroll is only loading elements onto the page as the user scrolls, and so the filtering script is only able to filter the elements that are already loaded in the browser. One solution to this would be to use Infinite Scroll's appendCallback option to run the filtering script on each set of newly-loaded elements. This way, the filtering script can filter all of the elements that have been loaded onto the page, including those loaded by Infinite Scroll. For question 2, it sounds like the issue is that the page is not resizing when the filter is applied. This is likely because the Isotope layout is not being updated to reflect the changes in the size of the elements after the filter is applied. One solution to this would be to use Isotope's layout method to manually update the layout after the filter is applied. This will cause the page to resize to fit the elements that remain after the filter is applied. I would recommend looking at the Isotope documentation and Infinite Scroll documentation for more information on how to use these options to solve your issues.
Isotope v2 filtering with Infinite Scroll - Filter not finding all items and Window not resizing on filter
Head's up! There's a pending feature-request issue on Isotope's GitHub repo that you should add a "" reaction to if you're interested in seeing official docs and demos for this (how to combine Isotope, Infinite Scroll, filtering, and sorting). It was opened by Isotope's creator to gauge interest. If interested, please upvote! **TL;DR: To help get official docs and demos for this, go here and add a "" reaction.** Trying to piece together a filterable layout using the Isotope JS plugin and Paul Irish's (sadly unmaintained) Infinite Scroll plugin. Filtering is somewhat working. Initially it filters the page 1 content. For it to filter items not on page 1 I need to scroll down (I suppose this is bringing the elements in the browser's memory, thus making it available to the filtering script?) via a set of select boxes for the initial page content (ie: the content on page 1). Question 1: How to get the filter to work for all page items? ie: How to reference all elements in the filter script, even those not yet brought onto the page via infinite scroll? Question 2: Once I have scrolled down and all the elements are filterable, the window does not resize on filtering. so when there are only one or two elements shown via the filter, it's still possible to scroll way down the page (even though no elements are shown). Upon inspection of these elements, I see that they're still in the DOM. Filtering Script function filterTags(){ isotopeInit(); var $checkboxes = $('#tag-wrap input') $checkboxes.change(function(){ var arr = []; $checkboxes.filter(':checked').each(function(){ var $dataToFilter = $(this).attr('data-filter'); arr.push( $dataToFilter ); }); arr = arr.join(', '); $container.isotope({ filter: arr }); }); }; Isotope Init function isotopeInit(){ var $container = $('.post-excerpts').imagesLoaded( function() { $container.isotope({ itemSelector: '.post-excerpt-block-wrap', layoutMode: 'masonry', animationEngine: "best-available", masonry: { columnWidth: '.post-excerpt-block-wrap' }, transitionDuration: '2.0', hiddenStyle: { opacity: 0 }, visibleStyle: { opacity: 1, transform: 'scale(1)' } }); }); }; Infinite Scroll Init $container.infinitescroll({ loading: { finished: undefined, finishedMsg: "<em>No more posts to load.</em>", img: "http://www.infinite-scroll.com/loading.gif", msg: null, msgText: "<em>Loading the next set of posts...</em>", selector: '.infinite-loader', speed: 'fast', start: undefined }, binder: $(window), //pixelsFromNavToBottom: Math.round($(window).height() * 0.9), //bufferPx: Math.round($(window).height() * 0.9), nextSelector: "a.older-posts", navSelector: "nav.pagination", contentSelector: ".content", itemSelector: ".post-excerpt-block-wrap", maxPage: {{pagination.pages}}, appendCallback: true }, // Callback for initializing scripts to added post excerpts function(newElements) { var $newElems = $( newElements ); loadImages(); checkForFeatured(); makeFontResponsive(); addReadMoreLinks(); fitVidInit(); $newElems.imagesLoaded(function(){ $container.isotope( 'appended', $newElems ); }); } ); Any ideas, suggestions, or other insights are incredibly welcomed. Many thanks in advance. ##Update: Regarding Questions 2: Seems the problem is related to how Isotope is being told to filter the items. 
Specifically, this code from the isotope init function: transitionDuration: '2.0', hiddenStyle: { opacity: 0 }, visibleStyle: { opacity: 1, transform: 'scale(1)' } I've tried changing it to the following, though this removes the completely from the DOM (fixing the spacing issue) and they're not returned into the DOM upon "unfiltering" them. So it's not a solution: hiddenStyle: { display: 'none' }, visibleStyle: { display: 'visible', transform: 'scale(1)' } I've also tried simply removing these config lines altogether, which seems like the obvious "clean" solution, but this too still leaves lots of white space on the page upon filtering. Not sure whether the problem here is with my Isotope or Infinite Scroll implementation.
[ "For question 2, one thing you could do is applying the display:none style to all hidden elements (and remove from all the visible ones) after isotope filtering.\nI think you should be able to use the \"on layoutComplete\" event listener of isotope to apply it at the right time, like this:\n$container.isotope( 'on', 'layoutComplete',\n function( isoInstance, laidOutItems ) {\n\n $('.my-elements-class.hiddenStyle').addClass('reallyHiddenStyle');\n $('.my-elements-class.visibleStyle').removeClass('reallyHiddenStyle');\n }\n);\n\nWhere, of course, the elements you want to filter are of css class my-elements-class, you applied isotope filtering to $container and you define\nreallyHiddenStyle: {\n display: 'none'\n}\n\nin your CSS.\nFor question 1, perhaps you need to use a similar strategy with infinitescrolling callback, adding new elements to the filter once they appear because of scrolling.\nYou already have the callback passed as last parameter of the infinitescroll method, so at a quick look it seems that something like this might work:\n$container.isotope('destroy');\n$.each(newElements, function (i, el){/** add new elements to arr */});\n$container.isotope({ filter: arr });\n\nDo you have a working example you can share? So that I can check it out, if you'd like me to.\n", "To answer Question 1, you can try using the stamp option in the Isotope initialization. This option allows you to specify elements that are not to be laid out by Isotope, but are still considered in the filter. You can try adding the following to your isotopeInit() function:\nfunction isotopeInit(){\n var $container = $('.post-excerpts').imagesLoaded( function() {\n $container.isotope({\n // other options...\n stamp: \".infinite-loader\", // add this line\n });\n });\n}\n\nThis will tell Isotope to include elements with the class infinite-loader in the filtering, but not lay them out.\nAs for Question 2, the issue is likely with the way you are calling Isotope's filter method. In your filterTags() function, you are concatenating all selected filters into a string using arr.join(', '). This will result in a filter string that looks like \".filter1, .filter2, .filter3\". However, this is not the correct format for Isotope's filter. To correctly filter multiple items, you need to use a filter string that looks like this: \".filter1.filter2.filter3\".\nTo fix this, you can modify your filterTags() function to concatenate the selected filters using the . character instead of , like so:\nfunction filterTags(){\n isotopeInit();\n\n var $checkboxes = $('#tag-wrap input')\n\n $checkboxes.change(function(){\n var arr = [];\n $checkboxes.filter(':checked').each(function(){\n var $dataToFilter = $(this).attr('data-filter');\n arr.push( $dataToFilter );\n });\n arr = arr.join('.'); // modify this line\n $container.isotope({ filter: arr });\n });\n};\n\nNote that this change may require you to update your HTML data-filter attributes to include the . character at the beginning, like so: instead of .\n", "To answer your first question, you can try calling the filterTags() function from inside the infinite scroll's callback function. 
This way, when new items are added to the page via infinite scroll, they will be filtered according to the selected tags.\n$container.infinitescroll({\n // Infinite scroll options...\n},\n // Callback for initializing scripts to added post excerpts\n function(newElements) {\n var $newElems = $( newElements );\n loadImages();\n checkForFeatured();\n makeFontResponsive();\n addReadMoreLinks();\n fitVidInit();\n $newElems.imagesLoaded(function(){\n $container.isotope( 'appended', $newElems );\n filterTags();\n });\n }\n);\n\nFor your second question, it looks like the issue is that the items that are hidden by the filter are still present in the DOM, but with a style of display: none applied to them. This means that they still take up space in the layout, even though they are not visible. To fix this, you can try calling the isotope() function on the container again after the items have been filtered. This will cause Isotope to adjust the layout and remove the space occupied by the hidden items. You can try adding the following code to the filterTags() function:\n$checkboxes.change(function(){\n var arr = [];\n $checkboxes.filter(':checked').each(function(){\n var $dataToFilter = $(this).attr('data-filter');\n arr.push( $dataToFilter );\n });\n arr = arr.join(', ');\n $container.isotope({ filter: arr });\n $container.isotope();\n});\n\nThis should cause Isotope to adjust the layout after the items have been filtered, which should fix the issue with the window not resizing.\n", "It sounds like you are encountering two separate issues with using Isotope and Infinite Scroll together.\nFor question 1, it sounds like the issue is that Infinite Scroll is only loading elements onto the page as the user scrolls, and so the filtering script is only able to filter the elements that are already loaded in the browser. One solution to this would be to use Infinite Scroll's appendCallback option to run the filtering script on each set of newly-loaded elements. This way, the filtering script can filter all of the elements that have been loaded onto the page, including those loaded by Infinite Scroll.\nFor question 2, it sounds like the issue is that the page is not resizing when the filter is applied. This is likely because the Isotope layout is not being updated to reflect the changes in the size of the elements after the filter is applied. One solution to this would be to use Isotope's layout method to manually update the layout after the filter is applied. This will cause the page to resize to fit the elements that remain after the filter is applied.\nI would recommend looking at the Isotope documentation and Infinite Scroll documentation for more information on how to use these options to solve your issues.\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "filter", "infinite_scroll", "javascript", "jquery", "jquery_isotope" ]
stackoverflow_0023895457_filter_infinite_scroll_javascript_jquery_jquery_isotope.txt
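The last answer above suggests calling Isotope's layout method after filtering but leaves it in prose. A rough sketch of what that looks like with the question's own $container and arr variables (Isotope v2's jQuery API takes method names as string commands):

$checkboxes.change(function() {
  // ... build arr from the checked boxes as in the question ...
  $container.isotope({ filter: arr }); // apply the filter
  $container.isotope('layout');        // then re-lay out the remaining items
});

Whether the extra 'layout' call is needed depends on the transition styles in use; with stock hiddenStyle/visibleStyle settings Isotope normally reflows on its own after filtering.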
Q: how to connect docker mongodb which is in oracle linux virtual machine to mongodb compass I installed mongodb inside docker container in oracle linux virtual machine. I need to connect this mongodb with mongodb compass. I don't have user account for mongodb and don't have mongod config file in the docker or in virtual machine. More Information: I created oracle linux virtual machine from oracle cloud. Then, using my current system's(windows) command prompt, I connected to that machine. Through that, I created a docker container inside the virtual machine. And then, I installed mongodb-6.0.2 inside the docker container. Now, when I try to connect this mongodb database(in virtual machine) to the mongodb compass, It is connecting to my current system's mongodb database(which was installed before the creation of this virtual machine). I also attached the screenshot of my virtual machine's docker and mongodb's information in cmd. Here is the screenshot of ip of my docker inside the virtual machine Can anyone solve this issue and tell me how can I make connection of this virtual machine database to mongodb compass? A: If you want to use Compass from Windows and connect to a MongoDB container running on an Oracle cloud compute instance, you'll need to make sure networking allows for connections to port 27017 via an ingress rule in your network security group. Another option is to use port forwarding over ssh.
how to connect docker mongodb which is in oracle linux virtual machine to mongodb compass
I installed mongodb inside docker container in oracle linux virtual machine. I need to connect this mongodb with mongodb compass. I don't have user account for mongodb and don't have mongod config file in the docker or in virtual machine. More Information: I created oracle linux virtual machine from oracle cloud. Then, using my current system's(windows) command prompt, I connected to that machine. Through that, I created a docker container inside the virtual machine. And then, I installed mongodb-6.0.2 inside the docker container. Now, when I try to connect this mongodb database(in virtual machine) to the mongodb compass, It is connecting to my current system's mongodb database(which was installed before the creation of this virtual machine). I also attached the screenshot of my virtual machine's docker and mongodb's information in cmd. Here is the screenshot of ip of my docker inside the virtual machine Can anyone solve this issue and tell me how can I make connection of this virtual machine database to mongodb compass?
[ "If wanting to use Compass from Windows and connect to a MongoDB container running an Oracle cloud compute instance, you'll need to make sure networking allows for connections to port 27017 via an ingress rule in your network security group.\nAnother option is to use port forwarding over ssh.\n" ]
[ 0 ]
[]
[]
[ "docker", "docker_container", "mongodb_compass", "oracle_cloud_infrastructure", "virtual_machine" ]
stackoverflow_0074660297_docker_docker_container_mongodb_compass_oracle_cloud_infrastructure_virtual_machine.txt
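To make the two options from the answer above concrete, a sketch of the commands involved (key path, user and IP are placeholders; opc is the usual default user on Oracle Linux OCI images):

# on the OCI instance: publish the container port to the VM
docker run -d --name mongo -p 27017:27017 mongo:6.0.2

# option A: add an ingress rule for TCP 27017 in the subnet's security list or
# network security group, then point Compass at mongodb://<instance-public-ip>:27017

# option B: from the Windows machine, tunnel over SSH instead
ssh -i <key.pem> -L 27018:localhost:27017 opc@<instance-public-ip>
# then point Compass at mongodb://localhost:27018

Forwarding to local port 27018 rather than 27017 avoids colliding with the MongoDB service already installed on the Windows machine, which is why Compass kept connecting to the local database in the question.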
Q: How to pull the current user's email and insert into a script placeholder? I am using an affiliate tracker called Rewardful, and it requires me to pass the users email to this script <script> rewardful('convert', { email: '[email protected]' }) </script> The rewardful script is third party hosted by them, so I cannot modify this. I have found some script to pull/echo the current user's email but I cannot figure out how to insert it into the placeholder of another script. I have no idea if this matters in terms of how this question is answered, but this script will only be used on one page. Any help is greatly appreciated. A: Put the email in an HTML tag inside your output (via PHP). <div data-email="[email protected]"></div> Then pull it out from the JavaScript side. rewardful('convert', { email: document.querySelector("div[data-email]").dataset.email }) This selector will find the element and extract the attribute's value [email protected].
How to pull the current user's email and insert into a script placeholder?
I am using an affiliate tracker called Rewardful, and it requires me to pass the users email to this script <script> rewardful('convert', { email: '[email protected]' }) </script> The rewardful script is third party hosted by them, so I cannot modify this. I have found some script to pull/echo the current user's email but I cannot figure out how to insert it into the placeholder of another script. I have no idea if this matters in terms of how this question is answered, but this script will only be used on one page. Any help is greatly appreciated.
[ "Put the email in a HTML tag inside your output (via PHP).\n<div data-email=\"[email protected]\"></div>\n\nThen from Javascript side pull it out.\nrewardful('convert', { email: document.querySelector(\"div[data-email]\").dataset.email; })\n\nThis selector will find the element and extract the attributes value [email protected].\n" ]
[ 0 ]
[]
[]
[ "javascript", "php", "wordpress" ]
stackoverflow_0074656656_javascript_php_wordpress.txt
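For the WordPress case implied by the question's tags, the PHP side of the answer above could look like the following sketch; wp_get_current_user() and esc_attr() are standard WordPress functions, while the surrounding markup is only illustrative:

<?php $user = wp_get_current_user(); ?>
<?php if ( $user->exists() ) : ?>
  <div data-email="<?php echo esc_attr( $user->user_email ); ?>"></div>
  <script>
    rewardful('convert', {
      email: document.querySelector('div[data-email]').dataset.email
    });
  </script>
<?php endif; ?>

The exists() guard skips the snippet for logged-out visitors, who have no email to report.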
Q: MaterialUI slider with multiple colours I am getting into MaterialUI at the moment and specifically I want to create a range slider. I am using this example https://material-ui.com/components/slider/#range-sliders What I am trying to achieve is to have different colours for "high", "medium" and "low" range. The result should be something similar to: The colour of the thumbs doesn't really matter but I want to clearly distinguish the different ranges on the rail. Is there anyway to achieve that? A: Hey I think I solved this in a decent way using the ThumbComponent component input on the slider which I noticed when having two thumbs their data-index properties were different. function MyThumbComponent(props) { if (props['data-index'] == 0) { props.style.backgroundColor = "green" } else if (props['data-index'] == 1) { props.style.backgroundColor = "red" } return ( <span {...props}> </span> ); } Output screenshot sorry I'm not use to answering on stack: https://imgur.com/YybkFK6 (I can't embed in answer) Finally and most usefully here is the codepen: https://codesandbox.io/s/dualrange-two-colors-fzifz Hope this helps!!! A: Check this coloured range MUI Slider (normal/reversed) implemented in the codepen: https://codesandbox.io/s/metrics-color-threshold-slider-reversed-o58nu6.
MaterialUI slider with multiple colours
I am getting into MaterialUI at the moment and specifically I want to create a range slider. I am using this example https://material-ui.com/components/slider/#range-sliders What I am trying to achieve is to have different colours for "high", "medium" and "low" range. The result should be something similar to: The colour of the thumbs doesn't really matter but I want to clearly distinguish the different ranges on the rail. Is there anyway to achieve that?
[ "Hey I think I solved this in a decent way using the ThumbComponent component input on the slider which I noticed when having two thumbs their data-index properties were different. \nfunction MyThumbComponent(props) {\n if (props['data-index'] == 0) {\n props.style.backgroundColor = \"green\"\n } else if (props['data-index'] == 1) {\n props.style.backgroundColor = \"red\"\n }\n return (\n <span {...props}>\n </span>\n );\n}\n\nOutput screenshot sorry I'm not use to answering on stack:\nhttps://imgur.com/YybkFK6 (I can't embed in answer)\nFinally and most usefully here is the codepen: https://codesandbox.io/s/dualrange-two-colors-fzifz\nHope this helps!!!\n", "Check this coloured range MUI Slider (normal/reversed) implemented in the codepen: https://codesandbox.io/s/metrics-color-threshold-slider-reversed-o58nu6.\n" ]
[ 2, 0 ]
[]
[]
[ "css", "material_ui" ]
stackoverflow_0060389875_css_material_ui.txt
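Since the question is about colouring ranges of the rail rather than the thumbs, one approach along the lines of the linked sandboxes is to paint the rail with a CSS gradient. A sketch for Material-UI v4 (the version the question's docs link points to); the rail/track rule names and the 33%/66% zone boundaries are assumptions to adapt:

import Slider from "@material-ui/core/Slider";
import { withStyles } from "@material-ui/core/styles";

const ZonedSlider = withStyles({
  rail: {
    opacity: 1,
    background: "linear-gradient(to right, " +
      "#4caf50 0%, #4caf50 33%, " +  // low zone
      "#ff9800 33%, #ff9800 66%, " + // medium zone
      "#f44336 66%, #f44336 100%)"   // high zone
  },
  track: { background: "transparent" } // let the gradient show through
})(Slider);

// used like any other slider: <ZonedSlider value={range} onChange={handleChange} />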
Q: The package com.fasterxml.jackson.databind is not accessible Java (268436910) I have recently explored handling JSON data with the org.json library and all went well. Now I started a bigger Maven project, for which I intend to use the Jackson libraries in stead. Sadly, it does not seem to work for me. I wanted to try out the ObjectMapper class, that VScode autocompleted for me, which also automatically adds the required import: import com.fasterxml.jackson.databind.ObjectMapper; However, I also immediately get an error on that line: "The type com.fasterxml.jackson.databind.ObjectMapper is not accessible Java (16778666)" I have added the necessary dependencies to my pom.xml file like so: <dependencies> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-controls</artifactId> <version>13</version> </dependency> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-fxml</artifactId> <version>13</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-core</artifactId> <version>2.14.0</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-annotations</artifactId> <version>2.14.0</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.14.0</version> </dependency> </dependencies> Am I missing something? Are there any other steps that I should have taken? A: This is not a Maven problem. You probably have created a package-info.java for your project and so all your dependencies ended up on the module path but your package-info.java is missing the corresponding declarations. You have to add a line like this: requires com.fasterxml.jackson.databind; See also: How to fix 'Package is declared in module, but module does not read it' error in IntelliJ JavaFX? A: The only dependency needed for ObjectMapper is <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.14.0</version> </dependency> This error occurs, because Jackson library is not included in your project's classpath. Probably your project is using the Java Platform Module System (JPMS); then log would also contain: ... declared in the "unnamed module" ... module does not read it. If this is the case, add a requires directive to module-info.java file to specify that this module requires the jackson-databind library: module correct.module.name { requires com.fasterxml.jackson.databind; } After adding this requires directive recompile the project again.
The package com.fasterxml.jackson.databind is not accessible Java (268436910)
I have recently explored handling JSON data with the org.json library and all went well. Now I started a bigger Maven project, for which I intend to use the Jackson libraries in stead. Sadly, it does not seem to work for me. I wanted to try out the ObjectMapper class, that VScode autocompleted for me, which also automatically adds the required import: import com.fasterxml.jackson.databind.ObjectMapper; However, I also immediately get an error on that line: "The type com.fasterxml.jackson.databind.ObjectMapper is not accessible Java (16778666)" I have added the necessary dependencies to my pom.xml file like so: <dependencies> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-controls</artifactId> <version>13</version> </dependency> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-fxml</artifactId> <version>13</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-core</artifactId> <version>2.14.0</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-annotations</artifactId> <version>2.14.0</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.14.0</version> </dependency> </dependencies> Am I missing something? Are there any other steps that I should have taken?
[ "This is not a Maven problem. You probably have created a package-info.java for your project and so all your dependencies ended up on the module path but your package-info.java is missing the corresponding declarations.\nYou have to add a line like this:\nrequires com.fasterxml.jackson.databind;\n\nSee also: How to fix 'Package is declared in module, but module does not read it' error in IntelliJ JavaFX?\n", "The only dependency needed for ObjectMapper is\n <dependency>\n <groupId>com.fasterxml.jackson.core</groupId>\n <artifactId>jackson-databind</artifactId>\n <version>2.14.0</version>\n </dependency>\n\nThis error occurs, because Jackson library is not included in your project's classpath. Probably your project is using the Java Platform Module System (JPMS); then log would also contain:\n\n... declared in the \"unnamed module\" ... module does not read it.\n\nIf this is the case, add a requires directive to module-info.java file to specify that this module requires the jackson-databind library:\nmodule correct.module.name {\n requires com.fasterxml.jackson.databind;\n}\n\nAfter adding this requires directive recompile the project again.\n" ]
[ 0, 0 ]
[]
[]
[ "dependencies", "jackson", "java", "json", "maven" ]
stackoverflow_0074666546_dependencies_jackson_java_json_maven.txt
Q: How to hide collapsible navbar menu when clicking outside of it (Bootstrap 5)? I have a collapsible navbar menu: <div class="collapse navbar-collapse" id="navbarCollapsible"> .. menu items .. </div> I want to hide it whenever a user clicks outside of it (so not just when he clicks the menu button again): $(document).ready(function() { const navbarCollapsible = document.getElementById('navbarCollapsible'); $('.container').on("click", function() { navbarCollapsible.hide(); }); }); But I get: Uncaught TypeError: navbarCollapsible.hide is not a function Edit: I made it work using the following: $(navbarCollapsible).collapse("hide"); But how can I do it without jQuery? I know I am using jQuery here, but I know that it's not required for Bootstrap 5 - so how am I supposed to use the hide method as described in the docs without jQuery? A: The error is because navbarCollapsible is an element of the DOM and not a jQuery object, so it has no hide() method of its own. Try this: $('body').on("click", function() { $(navbarCollapsible).hide(); });
How to hide collapsible navbar menu when clicking outside of it (Bootstrap 5)?
I have a collapsible navbar menu: <div class="collapse navbar-collapse" id="navbarCollapsible"> .. menu items .. </div> I want to hide it whenever a user clicks outside of it (so not just when he clicks the menu button again): $(document).ready(function() { const navbarCollapsible = document.getElementById('navbarCollapsible'); $('.container').on("click", function() { navbarCollapsible.hide(); }); }); But I get: Uncaught TypeError: navbarCollapsible.hide is not a function Edit: I made it work using the following: $(navbarCollapsible).collapse("hide"); But how can I do it without jQuery? I know I am using jQuery here, but I know that it's not required for Bootstrap 5 - so how am I supposed to use the hide method as described in the docs without jQuery?
[ "The error is because navbarCollapsible is a element of the DOM and not a jQuery Object. There is no hide available.\nTry this:\n$('body').on(\"click\", function() {\n $(navbarCollapsible).hide();\n});\n\n" ]
[ 0 ]
[]
[]
[ "bootstrap_5", "javascript", "twitter_bootstrap" ]
stackoverflow_0074670132_bootstrap_5_javascript_twitter_bootstrap.txt
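The question's edit specifically asks how to do this without jQuery. Bootstrap 5 ships its own Collapse class for that; a sketch assuming the bundle build, which exposes a global bootstrap object:

const el = document.getElementById('navbarCollapsible');
// toggle: false stops the constructor from opening the menu immediately
const collapse = new bootstrap.Collapse(el, { toggle: false });

document.addEventListener('click', (event) => {
  // hide only when the menu is open and the click landed outside the navbar
  if (el.classList.contains('show') && !event.target.closest('.navbar')) {
    collapse.hide();
  }
});

Unlike hiding the element with jQuery's .hide(), Collapse.hide() runs Bootstrap's own transition and keeps the collapse state in sync with the toggler button.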
Q: How to include zone.js, reflect-metadata etc. in Systemjs builder task? I'm using system.js and systemjs builder to create a dist folder with all the packed javascript files of my angular2 application. It works pretty nicely, except that it does not include the following files, which are currently statically included in the index.html: node_modules/zone.js/dist/zone.js node_modules/reflect-metadata/Reflect.js node_modules/systemjs/dist/system.src.js node_modules/esri-system-js/dist/esriSystem.js How can I force systemjs builder to include these dependencies? libs-bundle.js: var SystemBuilder = require('systemjs-builder'); var builder = new SystemBuilder(); builder.loadConfig('./systemjs.config.js').then(function() { return builder.bundle( 'app - [app/**/*]', // build app and remove the app code - this leaves only 3rd party dependencies 'dist/libs-bundle.js' ); }).then(function() { console.log('library bundles built successfully!'); }); app-bundle.js var SystemBuilder = require('systemjs-builder'); var builder = new SystemBuilder(); builder.loadConfig('./systemjs.config.js').then(function() { return builder.bundle( 'app - dist/libs-bundle.js', // build the app only, exclude everything already included in dependencies 'dist/app-bundle.js' ); }).then(function() { console.log('Application bundles built successfully!'); }); systemjs.config.js: /** * System configuration for Angular 2 samples * Adjust as necessary for your application needs. */ (function(global) { System.config({ paths: { // paths serve as alias 'npm:': 'node_modules/' }, // map tells the System loader where to look for things map: { // our app is within the app folder app: 'dist', // angular bundles '@angular/core': 'npm:@angular/core/bundles/core.umd.js', '@angular/common': 'npm:@angular/common/bundles/common.umd.js', '@angular/compiler': 'npm:@angular/compiler/bundles/compiler.umd.js', '@angular/platform-browser': 'npm:@angular/platform-browser/bundles/platform-browser.umd.js', '@angular/platform-browser-dynamic': 'npm:@angular/platform-browser-dynamic/bundles/platform-browser-dynamic.umd.js', '@angular/http': 'npm:@angular/http/bundles/http.umd.js', '@angular/router': 'npm:@angular/router/bundles/router.umd.js', '@angular/forms': 'npm:@angular/forms/bundles/forms.umd.js', // other libraries 'rxjs': 'npm:rxjs', 'angular2-in-memory-web-api': 'npm:angular2-in-memory-web-api', 'ng2-slim-loading-bar': 'npm:/ng2-slim-loading-bar', 'ng2-toasty': 'npm:/ng2-toasty', 'primeng': 'npm:/primeng', '@angular2-material/core': 'npm:/@angular2-material/core', '@angular2-material/grid-list': 'npm:/@angular2-material/grid-list' }, // packages tells the System loader how to load when no filename and/or no extension packages: { app: { main: './main.js', defaultExtension: 'js' }, rxjs: { defaultExtension: 'js' }, 'esri-mods': { defaultExtension: 'js' }, 'angular2-in-memory-web-api': { main: './index.js', defaultExtension: 'js' }, 'ng2-slim-loading-bar': { main: 'index.js', defaultExtension: 'js' }, 'ng2-toasty': { main: 'index.js', defaultExtension: 'js' }, 'primeng': { defaultExtension: 'js' }, '@angular2-material/core': { main: './core.umd.js', defaultExtension: 'js' }, '@angular2-material/grid-list': { main: './grid-list.umd.js', defaultExtension: 'js' } }, meta: { 'esri/*': { build: false }, 'esri-mods': { build: false }, 'dojo/*': { build: false }, } }); })(this); A: I would try doing an npm install in your app directory for each of the missing packages npm i zone.js npm i reflect-js npm i systemjs npm i esri-system-js A: To include the 
files you listed, you can simply add them to the packages section of your systemjs.config.js file, like this: // other libraries 'rxjs': 'npm:rxjs', 'angular2-in-memory-web-api': 'npm:angular2-in-memory-web-api', 'ng2-slim-loading-bar': 'npm:/ng2-slim-loading-bar', 'ng2-toasty': 'npm:/ng2-toasty', 'primeng': 'npm:/primeng', '@angular2-material/core': 'npm:/@angular2-material/core', '@angular2-material/grid-list': 'npm:/@angular2-material/grid-list', 'zone.js': 'npm:zone.js/dist/zone.js', 'reflect-metadata': 'npm:reflect-metadata/Reflect.js', 'systemjs': 'npm:systemjs/dist/system.src.js', 'esri-system-js': 'npm:esri-system-js/dist/esriSystem.js' Note that you will need to add a defaultExtension property to each of these entries, as you have done for the other libraries in your systemjs.config.js file. After you've added these entries to your configuration file, you can run systemjs-builder as you have before to create your bundles, and it should include these files.
How to include zone.js, reflect-metadata etc. in Systemjs builder task?
I'm using system.js and systemjs builder to create a dist folder with all the packed javascript files of my angular2 application. It works pretty nicely, except that it does not include the following files, which are currently statically included in the index.html: node_modules/zone.js/dist/zone.js node_modules/reflect-metadata/Reflect.js node_modules/systemjs/dist/system.src.js node_modules/esri-system-js/dist/esriSystem.js How can I force systemjs builder to include these dependencies? libs-bundle.js: var SystemBuilder = require('systemjs-builder'); var builder = new SystemBuilder(); builder.loadConfig('./systemjs.config.js').then(function() { return builder.bundle( 'app - [app/**/*]', // build app and remove the app code - this leaves only 3rd party dependencies 'dist/libs-bundle.js' ); }).then(function() { console.log('library bundles built successfully!'); }); app-bundle.js var SystemBuilder = require('systemjs-builder'); var builder = new SystemBuilder(); builder.loadConfig('./systemjs.config.js').then(function() { return builder.bundle( 'app - dist/libs-bundle.js', // build the app only, exclude everything already included in dependencies 'dist/app-bundle.js' ); }).then(function() { console.log('Application bundles built successfully!'); }); systemjs.config.js: /** * System configuration for Angular 2 samples * Adjust as necessary for your application needs. */ (function(global) { System.config({ paths: { // paths serve as alias 'npm:': 'node_modules/' }, // map tells the System loader where to look for things map: { // our app is within the app folder app: 'dist', // angular bundles '@angular/core': 'npm:@angular/core/bundles/core.umd.js', '@angular/common': 'npm:@angular/common/bundles/common.umd.js', '@angular/compiler': 'npm:@angular/compiler/bundles/compiler.umd.js', '@angular/platform-browser': 'npm:@angular/platform-browser/bundles/platform-browser.umd.js', '@angular/platform-browser-dynamic': 'npm:@angular/platform-browser-dynamic/bundles/platform-browser-dynamic.umd.js', '@angular/http': 'npm:@angular/http/bundles/http.umd.js', '@angular/router': 'npm:@angular/router/bundles/router.umd.js', '@angular/forms': 'npm:@angular/forms/bundles/forms.umd.js', // other libraries 'rxjs': 'npm:rxjs', 'angular2-in-memory-web-api': 'npm:angular2-in-memory-web-api', 'ng2-slim-loading-bar': 'npm:/ng2-slim-loading-bar', 'ng2-toasty': 'npm:/ng2-toasty', 'primeng': 'npm:/primeng', '@angular2-material/core': 'npm:/@angular2-material/core', '@angular2-material/grid-list': 'npm:/@angular2-material/grid-list' }, // packages tells the System loader how to load when no filename and/or no extension packages: { app: { main: './main.js', defaultExtension: 'js' }, rxjs: { defaultExtension: 'js' }, 'esri-mods': { defaultExtension: 'js' }, 'angular2-in-memory-web-api': { main: './index.js', defaultExtension: 'js' }, 'ng2-slim-loading-bar': { main: 'index.js', defaultExtension: 'js' }, 'ng2-toasty': { main: 'index.js', defaultExtension: 'js' }, 'primeng': { defaultExtension: 'js' }, '@angular2-material/core': { main: './core.umd.js', defaultExtension: 'js' }, '@angular2-material/grid-list': { main: './grid-list.umd.js', defaultExtension: 'js' } }, meta: { 'esri/*': { build: false }, 'esri-mods': { build: false }, 'dojo/*': { build: false }, } }); })(this);
[ "I would try doing an npm install in your app directory for each of the missing packages\nnpm i zone.js\nnpm i reflect-js\nnpm i systemjs\nnpm i esri-system-js\n\n", "To include the files you listed, you can simply add them to the packages section of your systemjs.config.js file, like this:\n// other libraries\n'rxjs': 'npm:rxjs',\n'angular2-in-memory-web-api': 'npm:angular2-in-memory-web-api',\n'ng2-slim-loading-bar': 'npm:/ng2-slim-loading-bar',\n'ng2-toasty': 'npm:/ng2-toasty',\n'primeng': 'npm:/primeng',\n'@angular2-material/core': 'npm:/@angular2-material/core',\n'@angular2-material/grid-list': 'npm:/@angular2-material/grid-list',\n'zone.js': 'npm:zone.js/dist/zone.js',\n'reflect-metadata': 'npm:reflect-metadata/Reflect.js',\n'systemjs': 'npm:systemjs/dist/system.src.js',\n'esri-system-js': 'npm:esri-system-js/dist/esriSystem.js'\n\nNote that you will need to add a defaultExtension property to each of these entries, as you have done for the other libraries in your systemjs.config.js file.\nAfter you've added these entries to your configuration file, you can run systemjs-builder as you have before to create your bundles, and it should include these files.\n" ]
[ 0, 0 ]
[]
[]
[ "angular", "javascript", "systemjs", "systemjs_builder" ]
stackoverflow_0040528480_angular_javascript_systemjs_systemjs_builder.txt
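Building on the map entries from the second answer above, the polyfills can then be bundled with the same builder API the question already uses; systemjs-builder's bundle arithmetic combines modules with +, and 'dist/polyfills-bundle.js' is just an assumed output name:

var SystemBuilder = require('systemjs-builder');
var builder = new SystemBuilder();

builder.loadConfig('./systemjs.config.js').then(function() {
  return builder.bundle(
    'zone.js + reflect-metadata + esri-system-js', // modules mapped in the config
    'dist/polyfills-bundle.js'
  );
}).then(function() {
  console.log('polyfill bundle built successfully!');
});

Note that system.src.js itself cannot be bundled this way: it is the loader that executes the bundles, so it still needs its own <script> tag (or a separate concatenation step).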
Q: How can I turn a xy-meshgrid and a 1D array into a contour? So I have a csv file (which I have read with pandas), which has 3 columns the first column corresponds to the x-axis, the second column the y-axis and the third column is the value for the free energy, we can interpret that as the z-axis or the height of the xy-plane. I have create a meshgrid with my x,y colums which has the shape (6105,6105). The z-axis is just a 1D array with the length 6105. I can't find a way to combine my z-axis with the xy grid, in order for me to depict it in a contour. Does anybody have an idea of how to do it? My code: import matplotlib.pyplot as plt import pandas as pd from mpl_toolkits import mplot3d import os import numpy as np import math current_dir = os.path.abspath(os.getcwd()) file_path = (current_dir + '/__files/fes.csv') df = pd.read_csv(file_path, sep=r'\t', engine='python') df.drop(df[df['free energy (kJ/mol)'] == math.inf].index, inplace=True) x = df['CV1'] y = df['CV2'] z = df['free energy (kJ/mol)'] x_array = np.array(x) y_array = np.array(y) z_array = np.array(z) z_reshaped = z_array.reshape(int(math.sqrt(len(x_array))), int(math.sqrt(len(y_array)))) X , Y = np.meshgrid(x, y, sparse=True) plt.contour(X, Y, z_reshaped) plt.show() And I get the following error z_reshaped = z_array.reshape(int(math.sqrt(len(x_array))), int(math.sqrt(len(y_array)))) ValueError: cannot reshape array of size 6105 into shape (78,78) After checking the other stack overflow threads on the topic of reshaping, I couldn't find the answer of how to reshape my z-array to my xy-meshgrid. Each x, y, z have the size 6105. I want the output to be the left image I tried using the reshape function: z_reshape = z.reshape(-1,1) # => 2D array but I didn't work. Afterwords I tried putting all three in a mesh grid like X, Y, Z =np.meshgrid(x, y, z) but it could not compile. At last I created a mesh grid with both inputs being z but of course it didn't work Z, Z = np.meshgrid(z, z) A: Use the contour() function from the matplotlib library. This function takes in three arguments: x,y,z. You can reshape your z array to be the same size as your X and Y arrays using the reshape() method. Then, you can pass all three arrays to the contour() function to generate your desired contour plot. import matplotlib.pyplot as plt # Reshape your z array to be the same size as X and Y z_reshaped = z.reshape(X.shape) # Generate the contour plot using contour() plt.contour(X, Y, z_reshaped) # Show the plot plt.show() customize the contour plot by setting options such as the number of contour levels, the color map, and the line styles.
How can I turn an xy-meshgrid and a 1D array into a contour?
So I have a csv file (which I have read with pandas), which has 3 columns: the first column corresponds to the x-axis, the second column to the y-axis, and the third column is the value for the free energy, which we can interpret as the z-axis or the height of the xy-plane. I have created a meshgrid with my x, y columns, which has the shape (6105,6105). The z-axis is just a 1D array with the length 6105. I can't find a way to combine my z-axis with the xy grid, in order to depict it in a contour plot. Does anybody have an idea of how to do it? My code: import matplotlib.pyplot as plt import pandas as pd from mpl_toolkits import mplot3d import os import numpy as np import math current_dir = os.path.abspath(os.getcwd()) file_path = (current_dir + '/__files/fes.csv') df = pd.read_csv(file_path, sep=r'\t', engine='python') df.drop(df[df['free energy (kJ/mol)'] == math.inf].index, inplace=True) x = df['CV1'] y = df['CV2'] z = df['free energy (kJ/mol)'] x_array = np.array(x) y_array = np.array(y) z_array = np.array(z) z_reshaped = z_array.reshape(int(math.sqrt(len(x_array))), int(math.sqrt(len(y_array)))) X , Y = np.meshgrid(x, y, sparse=True) plt.contour(X, Y, z_reshaped) plt.show() And I get the following error z_reshaped = z_array.reshape(int(math.sqrt(len(x_array))), int(math.sqrt(len(y_array)))) ValueError: cannot reshape array of size 6105 into shape (78,78) After checking the other stack overflow threads on the topic of reshaping, I couldn't find the answer of how to reshape my z-array to my xy-meshgrid. Each of x, y, z has the size 6105. I want the output to be the left image I tried using the reshape function: z_reshape = z.reshape(-1,1) # => 2D array but it didn't work. Afterwards I tried putting all three in a mesh grid like X, Y, Z = np.meshgrid(x, y, z) but it failed. At last I created a mesh grid with both inputs being z, but of course it didn't work: Z, Z = np.meshgrid(z, z)
[ "Use the contour() function from the matplotlib library. This function takes in three arguments: x,y,z.\nYou can reshape your z array to be the same size as your X and Y arrays using the reshape() method. Then, you can pass all three arrays to the contour() function to generate your desired contour plot.\nimport matplotlib.pyplot as plt\n\n# Reshape your z array to be the same size as X and Y\nz_reshaped = z.reshape(X.shape)\n\n# Generate the contour plot using contour()\nplt.contour(X, Y, z_reshaped)\n\n# Show the plot\nplt.show()\n\ncustomize the contour plot by setting options such as the number of contour levels, the color map, and the line styles.\n" ]
[ 0 ]
[]
[]
[ "3d", "contour", "matplotlib", "numpy", "python" ]
stackoverflow_0074670095_3d_contour_matplotlib_numpy_python.txt
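A minimal runnable sketch of the reshape-then-contour idea from the entry above, using synthetic data in place of fes.csv; the 111 x 55 grid shape, the value range, and the assumption that the points form a complete rectangular grid are illustrative, not taken from the original data.

import numpy as np
import matplotlib.pyplot as plt

# 55 * 111 = 6105, matching the array length in the question; the split
# into (ny, nx) is an assumed grid shape, since 6105 is not a perfect square.
nx, ny = 55, 111
xs = np.linspace(-3, 3, nx)
ys = np.linspace(-3, 3, ny)
X, Y = np.meshgrid(xs, ys)          # both arrays have shape (ny, nx)
Z = np.exp(-(X**2 + Y**2))          # stand-in "free energy" surface

# Flatten to mimic the three CSV columns, then recover the grid:
x_flat, y_flat, z_flat = X.ravel(), Y.ravel(), Z.ravel()
Z_grid = z_flat.reshape(ny, nx)     # reshape to the grid shape, not (78, 78)

plt.contour(x_flat.reshape(ny, nx), y_flat.reshape(ny, nx), Z_grid)
plt.show()

The point of the sketch is that reshape must target the actual (rows, columns) shape of the scan grid; int(math.sqrt(len(x))) only works when the grid happens to be square.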
Q: Does userEvent.hover() not work with mouseEnter event when testing react components? I am using the tippy js library to handle tooltips in my app. Now I want to test whether the component shows the tooltip content when hovering over an element. The tippy js library says that the tooltip is triggered by either a mouseEnter or focus event. When testing it, I use the fireEvent.mouseEnter() event to trigger the tooltip. That works fine enough to pass. But when I use userEvent.hover(), it didn't work. Doesn't userEvent.hover() support the mouseEnter event? Or help me understand why it doesn't work here. Note: fireEvent.mouseOver() doesn't work here. I know the tippy js lib has already been tested. I am just curious why it is not working with userEvent.hover(). The following is contrived/reproducible code. CodeSandbox import React from "react"; import Tippy from "@tippyjs/react"; const App = () => ( <Tippy content={<span>Tooltip</span>}> <button>My button</button> </Tippy> ); export default App; import React from "react"; import { fireEvent, render, screen } from "@testing-library/react"; import "@testing-library/jest-dom"; import user from "@testing-library/user-event"; import App from "./App"; test("first", async () => { render(<App />); const button = screen.getByRole("button", { name: /my button/i }); expect(button).toBeInTheDocument(); user.hover(button); expect(await screen.findByText(/tooltip/i)).toBeInTheDocument(); screen.debug(); }); test("second", async () => { render(<App />); const button = screen.getByRole("button", { name: /my button/i }); expect(button).toBeInTheDocument(); fireEvent.mouseEnter(button); expect(screen.getByText(/tooltip/i)).toBeInTheDocument(); screen.debug(); }); A: The user.hover() function in the testing-library is a combination of focus and mouseEnter events. However, it seems like the Tippy library that you are using only listens for the mouseEnter event, which is why user.hover() does not work in this case. You can modify the Tippy library to also listen for the focus event. Here is an example of how you could modify the Tippy library to listen for the focus event: const App = () => ( <Tippy content={<span>Tooltip</span>} trigger="focus"> <button>My button</button> </Tippy> ); This way, the user.hover() function will work as expected. You can also use the trigger prop to specify which events should trigger the tooltip, so you can combine multiple events if needed. I hope this helps! Let me know if you have any other questions. A: The userEvent.hover() method simulates a user hovering over an element by triggering the mouseover and mouseout events on the element in sequence. In your code, you are using the Tippy library, which, according to its documentation, is triggered by the mouseEnter event. Since the userEvent.hover() method does not trigger the mouseEnter event, it cannot be used to trigger the Tippy tooltip. Instead, you can use the fireEvent.mouseEnter() method, which triggers a mouseEnter event on the specified element, to test the Tippy tooltip. Here is an updated version of your test code that uses fireEvent.mouseEnter() to test the Tippy tooltip: test("second", async () => { render(<App />); const button = screen.getByRole("button", { name: /my button/i }); expect(button).toBeInTheDocument(); fireEvent.mouseEnter(button); expect(screen.getByText(/tooltip/i)).toBeInTheDocument(); screen.debug(); }); A: The userEvent.hover() function simulates a hover event by firing a mouseEnter event followed by a mouseLeave event after a short delay. 
This means that if a tooltip is only triggered by a mouseEnter event, it will quickly appear and then disappear when using userEvent.hover(). In contrast, the fireEvent.mouseEnter() function only fires a mouseEnter event, which will trigger the tooltip to appear without it disappearing immediately. One solution to this issue is to change the event that triggers the tooltip in the Tippy component. For example, you could trigger the tooltip on both mouseEnter and focus events: <Tippy content={<span>Tooltip</span>} trigger={['mouseEnter', 'focus']}> <button>My button</button> </Tippy> This way, when using userEvent.hover(), the mouseEnter event will trigger the tooltip to appear, and the subsequent mouseLeave event will not cause it to disappear because it is also triggered on focus. Alternatively, you could use fireEvent.mouseEnter() in your test instead of userEvent.hover() to trigger the tooltip.
Does userEvent.hover() not work with mouseEnter event when testing react components?
I am using the tippy js library to handle tooltips in my app. Now I want to test whether the component shows the tooltip content when hovering over an element. The tippy js library says that the tooltip is triggered by either a mouseEnter or focus event. When testing it, I use the fireEvent.mouseEnter() event to trigger the tooltip. That works fine enough to pass. But when I use userEvent.hover(), it didn't work. Doesn't userEvent.hover() support the mouseEnter event? Or help me understand why it doesn't work here. Note: fireEvent.mouseOver() doesn't work here. I know the tippy js lib has already been tested. I am just curious why it is not working with userEvent.hover(). The following is contrived/reproducible code. CodeSandbox import React from "react"; import Tippy from "@tippyjs/react"; const App = () => ( <Tippy content={<span>Tooltip</span>}> <button>My button</button> </Tippy> ); export default App; import React from "react"; import { fireEvent, render, screen } from "@testing-library/react"; import "@testing-library/jest-dom"; import user from "@testing-library/user-event"; import App from "./App"; test("first", async () => { render(<App />); const button = screen.getByRole("button", { name: /my button/i }); expect(button).toBeInTheDocument(); user.hover(button); expect(await screen.findByText(/tooltip/i)).toBeInTheDocument(); screen.debug(); }); test("second", async () => { render(<App />); const button = screen.getByRole("button", { name: /my button/i }); expect(button).toBeInTheDocument(); fireEvent.mouseEnter(button); expect(screen.getByText(/tooltip/i)).toBeInTheDocument(); screen.debug(); });
[ "The user.hover() function in the testing-library is a combination of focus and mouseEnter events. However, it seems like the Tippy library that you are using only listens for the mouseEnter event, which is why user.hover() does not work in this case.\nYou can modify the Tippy library to also listen for the focus event.\nHere is an example of how you could modify the Tippy library to listen for the focus event:\nconst App = () => (\n <Tippy content={<span>Tooltip</span>} trigger=\"focus\">\n <button>My button</button>\n </Tippy>\n);\n\nThis way, the user.hover() function will work as expected. You can also use the trigger prop to specify which events should trigger the tooltip, so you can combine multiple events if needed.\nI hope this helps! Let me know if you have any other questions.\n", "The userEvent.hover() method simulates a user hovering over an element by triggering the mouseover and mouseout events on the element in sequence. In your code, you are using the Tippy library, which, according to its documentation, is triggered by the mouseEnter event.\nSince the userEvent.hover() method does not trigger the mouseEnter event, it cannot be used to trigger the Tippy tooltip. Instead, you can use the fireEvent.mouseEnter() method, which triggers a mouseEnter event on the specified element, to test the Tippy tooltip.\nHere is an updated version of your test code that uses fireEvent.mouseEnter() to test the Tippy tooltip:\ntest(\"second\", async () => {\n render(<App />);\n\n const button = screen.getByRole(\"button\", { name: /my button/i });\n\n expect(button).toBeInTheDocument();\n\n fireEvent.mouseEnter(button);\n expect(screen.getByText(/tooltip/i)).toBeInTheDocument();\n screen.debug();\n});\n\n", "The userEvent.hover() function simulates a hover event by firing a mouseEnter event followed by a mouseLeave event after a short delay. This means that if a tooltip is only triggered by a mouseEnter event, it will quickly appear and then disappear when using userEvent.hover().\nIn contrast, the fireEvent.mouseEnter() function only fires a mouseEnter event, which will trigger the tooltip to appear without it disappearing immediately.\nOne solution to this issue is to change the event that triggers the tooltip in the Tippy component. For example, you could trigger the tooltip on both mouseEnter and focus events:\n<Tippy content={<span>Tooltip</span>} trigger={['mouseEnter', 'focus']}>\n <button>My button</button>\n</Tippy>\n\nThis way, when using userEvent.hover(), the mouseEnter event will trigger the tooltip to appear, and the subsequent mouseLeave event will not cause it to disappear because it is also triggered on focus.\nAlternatively, you could use fireEvent.mouseEnter() in your test instead of userEvent.hover() to trigger the tooltip.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "jestjs", "react_testing_library", "reactjs", "tippyjs", "user_event" ]
stackoverflow_0074506809_jestjs_react_testing_library_reactjs_tippyjs_user_event.txt
Q: I have an array of variables, can I attach a word to each variable element so they turn into another existing set of variables? Let's say I have an array of variables: let fruits = [apple, orange, banana]; These variables target specific DOM elements apple = document.getElementById("apple"); orange = document.getElementById("orange"); banana = document.getElementById("banana"); I also happen to have another set of variables that basically have "Rot" added to them appleRot = document.getElementById("appleRot"); orangeRot = document.getElementById("orangeRot"); bananaRot = document.getElementById("bananaRot"); Is it possible to loop through the existing array and add "Rot" to each element so they target the other set of existing variables? let fruits = [apple, orange, banana]; let fruitsRot = fruits.map((x) => x + "Rot"); is something like this possible, with the result being not an array of strings but an array of elements that target the second set of variables? A: Almost there. Use: const fruits = [ document.getElementById("apple"), document.getElementById("orange"), document.getElementById("banana"),]; const fruitsRot = fruits.map((x) => document.getElementById(x.id + "Rot")); A: If you want to get a new element based on a known element's id, try this: let fruits = [apple, orange, banana]; let fruitsRot = fruits.map((x) => { const {id} = x; const newId = id + "Rot"; return document.getElementById(newId); }); A: In a classic way: let fruits = ['apple', 'orange', 'banana']; let fruitsRot = []; for (let i = 0; i < fruits.length; i++) { fruitsRot.push(fruits[i] + "Rot"); } console.log(fruitsRot) To loop through an existing array and add "Rot" to each element, you can use a for loop and the push() method. The for loop allows you to iterate over the elements of the array, and the push() method allows you to add a new element to the end of the array. Finally you can use your document.getElementById(...) A: I'd use the names in the array instead: const fruitNames = ['apple', 'orange', 'banana']; const fruits = fruitNames.map(fruitName => document.getElementById(fruitName)); const rottenFruits = fruitNames.map(fruitName => document.getElementById(`${fruitName}Rot`));
I have an array of variables, can I attach a word to each variable element so they turn into another existing set of variables?
Let's say I have an array of variables: let fruits = [apple, orange, banana]; These variables target specific DOM elements apple = document.getElementById("apple"); orange = document.getElementById("orange"); banana = document.getElementById("banana"); I also happen to have another set of variables that basically have "Rot" added to them appleRot = document.getElementById("appleRot"); orangeRot = document.getElementById("orangeRot"); bananaRot = document.getElementById("bananaRot"); Is it possible to loop through the existing array and add "Rot" to each element so they target the other set of existing variables? let fruits = [apple, orange, banana]; let fruitsRot = fruits.map((x) => x + "Rot"); is something like this possible, with the result being not an array of strings but an array of elements that target the second set of variables?
[ "Almost there. Use:\nconst fruits = [\n document.getElementById(\"apple\"),\n document.getElementById(\"orange\"),\n document.getElementById(\"banana\"),];\nconst fruitsRot = fruits.map((x) =>\n document.getElementById(x.id + \"Rot\"));\n\n", "If you want to get a new element based on a known element's id, try this:\nlet fruits = [apple, orange, banana];\nlet fruitsRot = fruits.map((x) => {\n const {id} = x;\n const newId = id + \"Rot\";\n return document.getElementById(newId);\n});\n\n", "In a classic way:\n\n\nlet fruits = ['apple', 'orange', 'banana'];\nlet fruitsRot = [];\n\nfor (let i = 0; i < fruits.length; i++) {\n fruitsRot.push(fruits[i] + \"Rot\");\n}\n\nconsole.log(fruitsRot)\n\n\n\nTo loop through an existing array and add \"Rot\" to each element, you can use a for loop and the push() method. The for loop allows you to iterate over the elements of the array, and the push() method allows you to add a new element to the end of the array.\nFinally you can use your document.getElementById(...)\n", "I'd use the names in the array instead:\nconst fruitNames = ['apple', 'orange', 'banana'];\nconst fruits = fruitNames.map(fruitName => document.getElementById(fruitName));\nconst rottenFruits = fruitNames.map(fruitName => document.getElementById(`${fruitName}Rot`));\n\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0074670082_javascript.txt
Q: How to compare and count the number of unique values across multiple columns I am currently looking at something similar to df; what I would like to be able to do is produce something that looks like df2. The specified column values are compared next to each other, the number of specific occurrences is counted, and the count is placed into a new column in a new dataframe. For example: in df the combination 1, 5, and 9 occurs 3 times. df <- data.frame( col1 = c(1,2,3,4,1,2,3,4,1), col2 = c(5,6,7,8,5,6,7,8,5), col3 = c(9,10,11,12,9,10,11,13,9)) df2 <- data.frame( col1 = c(1,2,3,4,4), col2 = c(5,6,7,8,8), col3 = c(9,10,11,12,13), count = c(3,2,2,1,1)) I tried using dplyr df2 <- df %>% distinct(col1,col2, col3) %>% group_by(col3) %>% summarize("count" = n()) with no success A: library(dplyr) df %>% count(col1,col2,col3) col1 col2 col3 n 1 1 5 9 3 2 2 6 10 2 3 3 7 11 2 4 4 8 12 1 5 4 8 13 1 A: Is using plyr fine? library(plyr) ddply(df,.(col1,col2,col3),nrow) Output: col1 col2 col3 V1 1 1 5 9 3 2 2 6 10 2 3 3 7 11 2 4 4 8 12 1 5 4 8 13 1 A: The best way to do it with dplyr is using count() as suggested by Vinícius Félix's response However, here is a fix using the syntax you started. You were thinking in the right direction. Library library(dplyr) Solution to your code df %>% # distinct(col1,col2, col3) # you don't need this row, remove it. group_by(col1, col2, col3) %>% # you have to group by all columns you want to check summarize(count = n()) %>% # quotes are not needed, but are not wrong ungroup() # Always add ungroup() at the end to solve future problems Output #> # A tibble: 5 × 4 #> col1 col2 col3 count #> <dbl> <dbl> <dbl> <int> #> 1 1 5 9 3 #> 2 2 6 10 2 #> 3 3 7 11 2 #> 4 4 8 12 1 #> 5 4 8 13 1 Created on 2022-12-03 with reprex v2.0.2
How to compare and count the number of unique values across multiple columns
I am currently looking at something similar to df; what I would like to be able to do is produce something that looks like df2. The specified column values are compared next to each other, the number of specific occurrences is counted, and the count is placed into a new column in a new dataframe. For example: in df the combination 1, 5, and 9 occurs 3 times. df <- data.frame( col1 = c(1,2,3,4,1,2,3,4,1), col2 = c(5,6,7,8,5,6,7,8,5), col3 = c(9,10,11,12,9,10,11,13,9)) df2 <- data.frame( col1 = c(1,2,3,4,4), col2 = c(5,6,7,8,8), col3 = c(9,10,11,12,13), count = c(3,2,2,1,1)) I tried using dplyr df2 <- df %>% distinct(col1,col2, col3) %>% group_by(col3) %>% summarize("count" = n()) with no success
[ "library(dplyr)\n\ndf %>% \n count(col1,col2,col3)\n\n col1 col2 col3 n\n1 1 5 9 3\n2 2 6 10 2\n3 3 7 11 2\n4 4 8 12 1\n5 4 8 13 1\n\n", "Is using plyr fine?\nlibrary(plyr)\nddply(df,.(col1,col2,col3),nrow)\n\nOutput:\n col1 col2 col3 V1\n1 1 5 9 3\n2 2 6 10 2\n3 3 7 11 2\n4 4 8 12 1\n5 4 8 13 1\n\n", "The best way to do it with dplyr is using count() as suggested by Vinícius Félix's response\nHowever, here is a fix using the syntax you started. You were thinking in the right direction.\nLibrary\nlibrary(dplyr)\n\nSolution to your code\ndf %>%\n# distinct(col1,col2, col3) # you don't need this row, remove it.\n group_by(col1, col2, col3) %>% # you have to group by all columns you want to check\n summarize(count = n()) %>% # quotes are not needed, but are not wrong\n ungroup() # Always add ungroup() at the end to solve future problems\n\n\nOutput\n\n#> # A tibble: 5 × 4\n#> col1 col2 col3 count\n#> <dbl> <dbl> <dbl> <int>\n#> 1 1 5 9 3\n#> 2 2 6 10 2\n#> 3 3 7 11 2\n#> 4 4 8 12 1\n#> 5 4 8 13 1\n\nCreated on 2022-12-03 with reprex v2.0.2\n" ]
[ 1, 1, 1 ]
[]
[]
[ "r" ]
stackoverflow_0074670052_r.txt
Q: Problem with searching vector of user defined class by value I have created a class called HexagonTile, which stores three integers to represent cubic coordinates and has an array to store its neighbours. All created HexagonTile objects get stored in a vector std::vector <HexagonTile> map. The problem is I cannot assign the neighbours to the HexagonTile neighbour array, because I do not really know how I would search the vector in the first place :D, that is, to see if it contains some value, like a HexagonTile with coordinates 0, 0, 0 for example. This is what I tried std::vector <HexagonTile> map; // Map gets populated here for(HexagonTile hexagon : map) { HexagonTile hexagonNeighbourTop(hexagon.q, hexagon.r - 1, hexagon.s + 1); if(std::find(map.begin(), map.end(), hexagonNeighbourTop) != map.end()) { // How do I even assign it if it is found? :( } } I followed John's advice to get rid of the pointers and now the map stores hexagons directly. However, I still cannot figure out how to get the result and assign it. A: Aha! So I finally figured it out with iterators! (Their error messages are so cryptic though :D ) Thanks to John, who advised me to simplify things for myself. This is the result std::vector<HexagonField>::iterator it; for(HexagonField hexagon : map) { HexagonField hexagonNeighbourTop(hexagon.q, hexagon.r - 1, hexagon.s + 1); it = std::find(map.begin(), map.end(), hexagonNeighbourTop); if(it != map.end()) { std::cout << it - map.begin() << std::endl; hexagon.neighbours[0] = &map[it - map.begin()]; } }
Problem with searching vector of user defined class by value
I have created a class called HexagonTile, which stores three integers to represent cubic coordinates and has an array to store its neighbours. All created HexagonTile objects get stored in a vector std::vector <HexagonTile> map. The problem is I cannot assign the neighbours to the HexagonTile neighbour array, because I do not really know how I would search the vector in the first place :D, that is, to see if it contains some value, like a HexagonTile with coordinates 0, 0, 0 for example. This is what I tried std::vector <HexagonTile> map; // Map gets populated here for(HexagonTile hexagon : map) { HexagonTile hexagonNeighbourTop(hexagon.q, hexagon.r - 1, hexagon.s + 1); if(std::find(map.begin(), map.end(), hexagonNeighbourTop) != map.end()) { // How do I even assign it if it is found? :( } } I followed John's advice to get rid of the pointers and now the map stores hexagons directly. However, I still cannot figure out how to get the result and assign it.
[ "Aha! So I finally figured it out with iterators! (Their error messages are so cryptic though :D ) Thanks to John, who advised me to simplify things for myself. This is the result\nstd::vector<HexagonField>::iterator it;\nfor(HexagonField hexagon : map)\n{\n HexagonField hexagonNeighbourTop(hexagon.q, hexagon.r - 1, hexagon.s + 1);\n it = std::find(map.begin(), map.end(), hexagonNeighbourTop);\n if(it != map.end())\n {\n std::cout << it - map.begin() << std::endl;\n hexagon.neighbours[0] = &map[it - map.begin()];\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "c++", "vector" ]
stackoverflow_0074669826_c++_vector.txt
Q: How to find if elements of a column in a data frame are string-contained by the elements of a column of another data frame? I have a data frame tweets_df that looks like this: sentiment id date text 0 0 1502071360117424136 2022-03-10 23:58:14+00:00 AngelaRaeBoon1 Same Alabama Republicans charge... 1 0 1502070916318121994 2022-03-10 23:56:28+00:00 This ’ w/the sentencing JussieSmollett But mad... 2 0 1502057466267377665 2022-03-10 23:03:01+00:00 DannyClayton Not hard find takes smallest amou... 3 0 1502053718711316512 2022-03-10 22:48:08+00:00 I make fake scenarios getting fights protectin... 4 0 1502045714486022146 2022-03-10 22:16:19+00:00 WipeHomophobia Well people lands wildest thing... .. ... ... ... ... 94 0 1501702542899691525 2022-03-09 23:32:41+00:00 There 's reason deep look things kill bad peop... 95 0 1501700281729433606 2022-03-09 23:23:42+00:00 Shame UN United Dictators Shame NATO Repeat We... 96 0 1501699859803516934 2022-03-09 23:22:01+00:00 GayleKing The difference Ukrainian refugees IL... 97 0 1501697172441550848 2022-03-09 23:11:20+00:00 hrkbenowen And includes new United States I un... 98 0 1501696149853511687 2022-03-09 23:07:16+00:00 JLaw_OTD A world women minorities POC LGBTQ÷ d... And the second dataFrame globe_df that looks like this: Country Region 0 Andorra Europe 1 United Arab Emirates Middle east 2 Afghanistan Asia & Pacific 3 Antigua and Barbuda South/Latin America 4 Anguilla South/Latin America .. ... ... 243 Guernsey Europe 244 Isle of Man Europe 245 Jersey Europe 246 Saint Barthelemy South/Latin America 247 Saint Martin South/Latin America I want to delete all rows of the dataframe tweets_df which have 'text' that does not contain a 'Country' or 'Region'. This was my attempt: globe_df = pd.read_csv('countriesAndRegions.csv') tweets_df = pd.read_csv('tweetSheet.csv') for entry in globe_df['Country']: tweet_index = tweets_df[entry in tweets_df['text']].index # if tweets that *contain*, not equal...... entry in tweets_df['text] .... (in)or (not in)? tweets_df.drop(tweet_index , inplace=True) print(tweets_df) Edit: Also, fuzzy, case-insensitive matching with stemming would be preferred when searching the 'text' for countries and regions. Ex) If the text contained 'Ukrainian', 'british', 'engliSH', etc... then it would not be deleted A: Convert country and region values to a list and use str.contains to filter out rows that do not contain these values. #with case insensitive vals=globe_df.stack().to_list() tweets_df = tweets_df[tweets_df ['text'].str.contains('|'.join(vals), regex=True, case=False)] or (with case insensitive) vals="({})".format('|'.join(globe_df.stack().str.lower().to_list())) #make all letters lowercase tweets_df['matched'] = tweets_df.text.str.lower().str.extract(vals, expand=False) tweets_df = tweets_df.dropna()
How to find if elements of a column in a data frame are string-contained by the elements of a column of another data frame?
I have a data frame tweets_df that looks like this: sentiment id date text 0 0 1502071360117424136 2022-03-10 23:58:14+00:00 AngelaRaeBoon1 Same Alabama Republicans charge... 1 0 1502070916318121994 2022-03-10 23:56:28+00:00 This ’ w/the sentencing JussieSmollett But mad... 2 0 1502057466267377665 2022-03-10 23:03:01+00:00 DannyClayton Not hard find takes smallest amou... 3 0 1502053718711316512 2022-03-10 22:48:08+00:00 I make fake scenarios getting fights protectin... 4 0 1502045714486022146 2022-03-10 22:16:19+00:00 WipeHomophobia Well people lands wildest thing... .. ... ... ... ... 94 0 1501702542899691525 2022-03-09 23:32:41+00:00 There 's reason deep look things kill bad peop... 95 0 1501700281729433606 2022-03-09 23:23:42+00:00 Shame UN United Dictators Shame NATO Repeat We... 96 0 1501699859803516934 2022-03-09 23:22:01+00:00 GayleKing The difference Ukrainian refugees IL... 97 0 1501697172441550848 2022-03-09 23:11:20+00:00 hrkbenowen And includes new United States I un... 98 0 1501696149853511687 2022-03-09 23:07:16+00:00 JLaw_OTD A world women minorities POC LGBTQ÷ d... And the second dataFrame globe_df that looks like this: Country Region 0 Andorra Europe 1 United Arab Emirates Middle east 2 Afghanistan Asia & Pacific 3 Antigua and Barbuda South/Latin America 4 Anguilla South/Latin America .. ... ... 243 Guernsey Europe 244 Isle of Man Europe 245 Jersey Europe 246 Saint Barthelemy South/Latin America 247 Saint Martin South/Latin America I want to delete all rows of the dataframe tweets_df which have 'text' that does not contain a 'Country' or 'Region'. This was my attempt: globe_df = pd.read_csv('countriesAndRegions.csv') tweets_df = pd.read_csv('tweetSheet.csv') for entry in globe_df['Country']: tweet_index = tweets_df[entry in tweets_df['text']].index # if tweets that *contain*, not equal...... entry in tweets_df['text] .... (in)or (not in)? tweets_df.drop(tweet_index , inplace=True) print(tweets_df) Edit: Also, fuzzy, case-insensitive matching with stemming would be preferred when searching the 'text' for countries and regions. Ex) If the text contained 'Ukrainian', 'british', 'engliSH', etc... then it would not be deleted
[ "Convert country and region values to a list and use str.contains to filter out rows that do not contain these values.\n#with case insensitive\nvals=globe_df.stack().to_list()\n\ntweets_df = tweets_df[tweets_df ['text'].str.contains('|'.join(vals), regex=True, case=False)]\n\nor (with case insensitive)\nvals=\"({})\".format('|'.join(globe_df.stack().str.lower().to_list())) #make all letters lowercase\ntweets_df['matched'] = tweets_df.text.str.lower().str.extract(vals, expand=False)\ntweets_df = tweets_df.dropna()\n\n" ]
[ 0 ]
[ "You can try using pandas.Series.str.contains to find the values.\ntweets_df[tweets_df['text'].contains('{}|{}'.format(entry['Country'],entry['Region'])]\n\nAnd after creating a new column with boolean values, you can remove rows with the value True.\n", "# Import data\nglobe_df = pd.read_csv('countriesAndRegions.csv')\ntweets_df = pd.read_csv('tweetSheet.csv')\n# Get country and region column as list\nglobe_df_country = globe_df['Country'].values.tolist()\nglobe_df_region = globe_df['Region'].values.tolist()\n# merge_lists, cause you want to check with or operator\nmerged_list = globe_df_country + globe_df_region\n# If you want to update df while iterating it, best way to do it with using copy df\ndf_tweets2 = tweets_df.copy()\nfor index,row in tweets_df.iterrows():\n # Check if splitted row's text values are intersecting with merged_list\n if [i for i in merged_list if i in row['text'].split()] == []:\n df_tweets2 = df_tweets2.drop[index]\ntweets_df_new = df_tweets2.copy()\nprint(tweets_df_new) \n \n\n" ]
[ -1, -1 ]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074669715_dataframe_pandas_python.txt
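A small variation on the str.contains approach above, shown as a sketch with toy frames (the column names follow the question, the data are made up): word boundaries tighten the match so that a country or region name only matches as a whole word. Note this does not add the fuzzy/stemmed matching the question also asks about; that would need extra tooling.

import re
import pandas as pd

# Toy stand-ins for the frames in the question (assumed data).
tweets_df = pd.DataFrame({"text": ["refugees arrive in Europe",
                                   "European markets rally",
                                   "no match here"]})
globe_df = pd.DataFrame({"Country": ["Ukraine"], "Region": ["Europe"]})

vals = globe_df.stack().tolist()
# \b keeps "Europe" from matching inside "European";
# re.escape guards names that contain regex metacharacters.
pattern = r"\b(?:" + "|".join(map(re.escape, vals)) + r")\b"
mask = tweets_df["text"].str.contains(pattern, case=False, regex=True)
print(tweets_df[mask])   # keeps only the first row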
Q: Best practice to render a custom component in an area in three.js Angular2+? What is the best practice to render a certain component in an area in a three.js scene? A: To render a custom component in an area in three.js with Angular2+, you can: Import the @angular/core module and the THREE object from the three.js library. Create a new component using the @Component decorator and specify the selector name and template for the component. In the component class, create a renderer object and a camera object. In the ngAfterViewInit() lifecycle hook, create a scene object and add your custom 3D objects to it. Create a render() function that will be called repeatedly to update the scene and render the 3D objects. In the ngOnDestroy() lifecycle hook, stop the rendering loop and clean up any references to ensure proper garbage collection.
Best practice to render a custom component in an area in three.js Angular2+?
What is the best practice to render a certain component in an area in a three.js scene?
[ "To render a custom component in an area in three.js with Angular2+, you can:\n\nImport the @angular/core module and the THREE object from the three.js library.\nCreate a new component using the @Component decorator and specify the selector name and template for the component.\nIn the component class, create a renderer object and a camera object.\nIn the ngAfterViewInit() lifecycle hook, create a scene object and add your custom 3D objects to it.\nCreate a render() function that will be called repeatedly to update the scene and render the 3D objects.\nIn the ngOnDestroy() lifecycle hook, stop the rendering loop and clean up any references to ensure proper garbage collection.\n\n" ]
[ 0 ]
[]
[]
[ "angular", "javascript", "three.js", "typescript" ]
stackoverflow_0074665112_angular_javascript_three.js_typescript.txt
Q: Order the coefficients in specific rows of facet_grid I am using facet_grid() to display a 2x2 of different combinations of model types for racial groups and levels of participation in a program. By using scales = "free" I am able to separate out the y axes for each row and only display the relevant coefficients. But, how can I then specify the model/variable order within each panel row? Typically, I would do something like: model_order <- c("White", "Black", "Hispanic") And then pass that through to scale_x_discrete(). (And would have High, then Medium, then Low in that order). But that does not seem to work in this case because of using scales = "free". Is there a workaround for controlling the order? Code: mylabels <- c("1" = "Linear", "2" = "Logit", "3" = "Race", "4" = "Level") ggplot(dx, aes(x = var, y = coef, ymin = ci_lower, ymax = ci_upper)) + geom_point(size = 2) + geom_errorbar(width = 0.1, size = 1) + facet_grid(effect~model, scales = "free", labeller = as_labeller(mylabels)) + scale_y_continuous(breaks = seq(-3, 3, by = 1)) + coord_flip() + theme_bw(base_size = 15) + theme(legend.position = "none") Data: structure(list(var = c("White", "Black", "Hispanic", "White", "Black", "Hispanic", "High", "Medium", "Low", "High", "Medium", "Low"), coef = c(1.64, 1.2, 0.4, 1.45, 0.17, 0.6, 1.04, 0.05, -0.74, -0.99, -0.45, -0.3045), ci_lower = c(1.3, 0.86, 0.06, 1.11, -0.17, 0.26, 0.7, -0.29, -1.08, -1.33, -0.79, -0.6445), ci_upper = c(1.98, 1.54, 0.74, 1.79, 0.51, 0.94, 1.38, 0.39, -0.4, -0.65, -0.11, 0.0355), model = c(1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2), effect = c(3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4)), class = c("spec_tbl_df", "tbl_df", "tbl", "data.frame" ), row.names = c(NA, -12L), spec = structure(list(cols = list( var = structure(list(), class = c("collector_character", "collector")), coef = structure(list(), class = c("collector_double", "collector")), ci_lower = structure(list(), class = c("collector_double", "collector")), ci_upper = structure(list(), class = c("collector_double", "collector")), model = structure(list(), class = c("collector_double", "collector")), effect = structure(list(), class = c("collector_double", "collector"))), default = structure(list(), class = c("collector_guess", "collector")), skip = 1L), class = "col_spec")) A: You can define your variable as a factor, and then reorder their levels: library(dplyr) library(ggplot2) mylabels <- c("1" = "Linear", "2" = "Logit", "3" = "Race", "4" = "Level") dx %>% mutate(var = forcats::fct_relevel(var,"High","Medium")) %>% ggplot(aes(x = var, y = coef, ymin = ci_lower, ymax = ci_upper)) + geom_point(size = 2) + geom_errorbar(width = 0.1, size = 1) + facet_grid(effect~model, scales = "free", labeller = as_labeller(mylabels)) + scale_y_continuous(breaks = seq(-3, 3, by = 1)) + coord_flip() + theme_bw(base_size = 15) + theme(legend.position = "none")
Order the coefficients in specific rows of facet_grid
I am using facet_grid() to display a 2x2 of different combinations of model types for racial groups and levels of participation in a program. By using scales = "free" I am able to separate out the y axes for each row and only display the relevant coefficients. But, how can I then specify the model/variable order within each panel row? Typically, I would do something like: model_order <- c("White", "Black", "Hispanic") And then pass that through to scale_x_discrete(). (And would have High, then Medium, then Low in that order). But that does not seem to work in this case because of using scales = "free". Is there a workaround for controlling the order? Code: mylabels <- c("1" = "Linear", "2" = "Logit", "3" = "Race", "4" = "Level") ggplot(dx, aes(x = var, y = coef, ymin = ci_lower, ymax = ci_upper)) + geom_point(size = 2) + geom_errorbar(width = 0.1, size = 1) + facet_grid(effect~model, scales = "free", labeller = as_labeller(mylabels)) + scale_y_continuous(breaks = seq(-3, 3, by = 1)) + coord_flip() + theme_bw(base_size = 15) + theme(legend.position = "none") Data: structure(list(var = c("White", "Black", "Hispanic", "White", "Black", "Hispanic", "High", "Medium", "Low", "High", "Medium", "Low"), coef = c(1.64, 1.2, 0.4, 1.45, 0.17, 0.6, 1.04, 0.05, -0.74, -0.99, -0.45, -0.3045), ci_lower = c(1.3, 0.86, 0.06, 1.11, -0.17, 0.26, 0.7, -0.29, -1.08, -1.33, -0.79, -0.6445), ci_upper = c(1.98, 1.54, 0.74, 1.79, 0.51, 0.94, 1.38, 0.39, -0.4, -0.65, -0.11, 0.0355), model = c(1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2), effect = c(3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4)), class = c("spec_tbl_df", "tbl_df", "tbl", "data.frame" ), row.names = c(NA, -12L), spec = structure(list(cols = list( var = structure(list(), class = c("collector_character", "collector")), coef = structure(list(), class = c("collector_double", "collector")), ci_lower = structure(list(), class = c("collector_double", "collector")), ci_upper = structure(list(), class = c("collector_double", "collector")), model = structure(list(), class = c("collector_double", "collector")), effect = structure(list(), class = c("collector_double", "collector"))), default = structure(list(), class = c("collector_guess", "collector")), skip = 1L), class = "col_spec"))
[ "You can define your variable as a factor, and then reorder their levels:\nlibrary(dplyr)\nlibrary(ggplot2)\n\nmylabels <- c(\"1\" = \"Linear\",\n \"2\" = \"Logit\",\n \"3\" = \"Race\",\n \"4\" = \"Level\")\ndx %>% \n mutate(var = forcats::fct_relevel(var,\"High\",\"Medium\")) %>%\n ggplot(aes(x = var, y = coef,\n ymin = ci_lower, ymax = ci_upper)) +\n geom_point(size = 2) +\n geom_errorbar(width = 0.1,\n size = 1) +\n facet_grid(effect~model,\n scales = \"free\",\n labeller = as_labeller(mylabels)) + \n scale_y_continuous(breaks = seq(-3, 3, by = 1)) +\n coord_flip() +\n theme_bw(base_size = 15) +\n theme(legend.position = \"none\")\n\n" ]
[ 1 ]
[]
[]
[ "facet_grid", "ggplot2", "r" ]
stackoverflow_0074670170_facet_grid_ggplot2_r.txt
Q: Can I add a custom modebar button? I have a function that auto scales the y axis. It takes in the figure and relayout data objects and then spits out a y axis that fits some criteria that I pre-determined. I want to add a new modebar button in dash plotly on python that fires the function and updates the figure any time I click on it. Is that possible? One simple example would be to be able to resize the chart, but only reset the y-axis. The resize button that comes in the modebar out of the box resets the y axis and sets autosize = true. The example custom modebar button for the purpose of this question would do the first function, but not the second. A: Adding/removing/customizing modebar buttons is quite simple, one just needs to define the figure config accordingly using the modeBarButtonsToAdd and/or modeBarButtonsToRemove options (documented here). Note that these configuration options are consistent across dash, plotly.py and plotly.js. The .show() method that you use to display your figures also accepts a config parameter. You can set the configuration options for your figure by passing a dictionary to this parameter which contains the options you want to set. With Dash, the same config dictionary can be passed to the config property of a dcc.Graph component : dcc.Graph(figure=go.Figure(), config=config) Here is a Plotly.js live example which removes the pre-defined 'autoscale2d' button and adds a custom one with a specific handler that triggers Plotly.relayout() : const data = [{ mode: 'lines', x: [-2, 0, 2], y: [2, -2, 3], line: {shape: 'spline'} }] const layout = { title: 'Mode Bar Custom Button' } const config = { displayModeBar: true, modeBarButtonsToRemove: ['autoscale2d'], modeBarButtonsToAdd: [{ name: 'autoscale2d-custom', title: 'Autoscale (custom)', icon: Plotly.Icons.autoscale, click: function(gd) { Plotly.relayout(gd, { 'xaxis.range': [-3, 3], 'yaxis.range': [-3, 3] }); } }] } Plotly.newPlot('graph', data, layout, config) <script src="https://cdn.plot.ly/plotly-2.16.2.min.js"></script> <div id="graph"></div>
Can I add a custom modebar button?
I have a function that auto scales the y axis. It takes in the figure and relayout data objects and then spits out a y axis that fits some criteria that I pre-determined. I want to add a new modebar button in dash plotly on python that fires the function and updates the figure any time I click on it. Is that possible? One simple example would be to be able to resize the chart, but only reset the y-axis. The resize button that comes in the modebar out of the box resets the y axis and sets autosize = true. The example custom modebar button for the purpose of this question would do the first function, but not the second.
[ "Adding/removing/customizing modebar buttons is quite simple, one just need to define the figure config accordingly using the modeBarButtonsToAdd and/or modeBarButtonsToRemove options (documented here).\nNote that these configuration options are consistent across dash, plotly.py and plotly.js.\n\nThe .show() method that you use to display your figures also accepts a\nconfig parameter.\nYou can set the configuration options for your figure by passing a\ndictionary to this parameter which contains the options you want to\nset.\n\nWith Dash, the same config dictionary can be passed to the config property of a dcc.Graph component :\ndcc.Graph(figure=go.Figure(), config=config)\n\nHere is a Plotly.js live example which removes the pre-defined 'autoscale2d' button and add a custom one with a specific handler that triggers Plotly.relayout() :\n\n\nconst data = [{\n mode: 'lines',\n x: [-2, 0, 2],\n y: [2, -2, 3], \n line: {shape: 'spline'}\n}]\n\nconst layout = { title: 'Mode Bar Custom Button' }\n\nconst config = {\n displayModeBar: true,\n modeBarButtonsToRemove: ['autoscale2d'],\n modeBarButtonsToAdd: [{\n name: 'autoscale2d-custom',\n title: 'Autoscale (custom)',\n icon: Plotly.Icons.autoscale,\n click: function(gd) {\n Plotly.relayout(gd, {\n 'xaxis.range': [-3, 3],\n 'yaxis.range': [-3, 3]\n });\n }\n }]\n}\n\nPlotly.newPlot('graph', data, layout, config)\n<script src=\"https://cdn.plot.ly/plotly-2.16.2.min.js\"></script>\n<div id=\"graph\"></div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "plotly", "plotly_dash", "plotly_python" ]
stackoverflow_0074533206_plotly_plotly_dash_plotly_python.txt
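For completeness, a sketch of the pure-Python side of the answer above, assuming a recent Dash 2.x install; the component id and the figure data are placeholders. Removing built-in buttons works from Python directly, while a custom button with its own click handler still needs the JavaScript route shown above, because the handler runs client-side.

from dash import Dash, dcc
import plotly.graph_objects as go

app = Dash(__name__)

config = {
    "displayModeBar": True,
    # The documented plotly.js name for this button is 'autoScale2d'.
    "modeBarButtonsToRemove": ["autoScale2d"],
}

app.layout = dcc.Graph(
    id="graph",  # placeholder id
    figure=go.Figure(go.Scatter(x=[-2, 0, 2], y=[2, -2, 3], mode="lines")),
    config=config,
)

if __name__ == "__main__":
    app.run_server(debug=True)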
Q: Docker - unable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx I just downloaded Docker Toolbox for Windows 10 64bit today. I'm going through the tutorial. I'm receiving the following error when trying to build an image using a Dockerfile. Steps: Launched Docker Quickstart terminal. Changed into testdocker after creating it. Prepare Dockerfile as documented in "Build your own image" web link ran below command docker build -t docker-whale . Error: $ docker build -t docker-whale . unable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx C:\Users\Villanueva\Test\testdocker\Dockerfile: The system cannot find the file specified. BTW: I tried several options mentioned @ https://github.com/docker/docker/issues/14339 $ docker info Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 2 Server Version: 1.10.1 Storage Driver: aufs Root Dir: /mnt/sda1/var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Plugins: Volume: local Network: bridge null host Kernel Version: 4.1.17-boot2docker Operating System: Boot2Docker 1.10.1 (TCL 6.4.1); master : b03e158 - Thu Feb 11 22:34:01 UTC 2016 OSType: linux Architecture: x86_64 CPUs: 1 Total Memory: 996.2 MiB Name: default ID: C7DS:CIAJ:FTSN:PCGD:ZW25:MQNG:H3HK:KRJL:G6FC:VPRW:SEWW:KP7B Debug mode (server): true File Descriptors: 32 Goroutines: 44 System Time: 2016-02-19T17:37:37.706076803Z EventsListeners: 0 Init SHA1: Init Path: /usr/local/bin/docker Docker Root Dir: /mnt/sda1/var/lib/docker Labels: provider=virtualbox Thanks. A: while executing following command: docker build -t docker-whale . check that Dockerfile is present in your current working directory. A: The error message is misleading The problem has nothing to do with symlinks really. Usually, the problem is only that docker cannot find the Dockerfile describing the build. Typical reasons are these: Dockerfile has wrong name. It must be called Dockerfile. If it is called, for instance, dockerfile, .Dockerfile, Dockerfile.txt, or other, it will not be found. Dockerfile is not in context. If you say docker build contextdir, the Dockerfile must be at contextdir/Dockerfile. If you have it in, say, ./Dockerfile instead, it will not be found. Dockerfile does not exist at all. Sounds silly? Well, I got the above error message from my GitLab CI after I had written a nice Dockerfile, but forgotten to check it in. Silly? Sure. Unlikely? No. A: If you are working on windows 8 you would be using Docker toolbox. From the mydockerbuild directory run the below command as your Dockerfile is a textfile docker build -t docker-whale -f ./Dockerfile.txt . A: The name of the file should be Dockerfile and not .Dockerfile. The file should not have any extension. A: Make sure you moved to the directory where Dockerfile is located. Make sure your Dockerfile is extension-less. That is, not Dockerfile.txt, Dockerfile.rtf, or any other. Make sure you named Dockerfile, and not DockerFile, dockerfile or any other. A: I had named my file dockerfile instead of Dockerfile (capitalized), and once I changed that, it started processing my "Dockerfile". A: Just Remove the extension .txt from Dockerfile and run the command docker build -t image-name It will work for sure. A: I have got this error (in MacBook) though I used correct command to create image, docker build -t testimg . Later I found that path is the problem. Just navigate to the correct path that contains docker file. 
Just double check your current working directory .Nothing to panic! A: That's just because Notepad add ".txt" at the end of Dockerfile A: This command worked for me: docker build -t docker-whale -f Dockerfile.txt . A: I had created my DockerFile by VS2017 Docker Support tool and had the same error. After a while I realised I was not in the correct directory that contains the Dockerfile (~\source\repos\DockerWebApplication\). cd'ed to the correct file (~/source/repos/DockerWebApplication/DockerWebApplication) which was inside the project and successfully created the docker image. A: In WSL, there seems to be a problem with path conversion. The location of the Dockerfile in Ubuntu (where I'm running docker and where Dockerfile lives) is "/home/sxw455/App1", but neither of these commands worked: $ pwd /home/sxw455/App1 $ ll total 4 drwxrwxrwx 0 sxw455 sxw455 4096 Dec 11 19:28 ./ drwxr-xr-x 0 sxw455 sxw455 4096 Dec 11 19:25 ../ -rwxrwxrwx 1 sxw455 sxw455 531 Dec 11 19:26 Dockerfile* -rwxrwxrwx 1 sxw455 sxw455 666 Dec 11 19:28 app.py* -rwxrwxrwx 1 sxw455 sxw455 12 Dec 11 19:27 requirements.txt* $ docker build -t friendlyhello . unable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx C:\Windows\System32\Dockerfile: The system cannot find the file specified. $ docker build -t friendlyhello "/home/sxw455/App1" unable to prepare context: path "/home/sxw455/App1" not found But in Windows, the actual path is: C:\Users\sxw455\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\home\sxw455\App1 And so I had to do this (even though I ran it from bash): $ docker build -t friendlyhello "C:\Users\sxw455\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\home\sxw455\App1" Sending build context to Docker daemon 5.12kB Step 1/7 : FROM python:2.7-slim ---> 0dc3d8d47241 Step 2/7 : WORKDIR /app ---> Using cache ---> f739aa02ce04 Step 3/7 : COPY . /app ---> Using cache ---> 88686c524ae9 Step 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt ---> Using cache ---> b95f02b14f78 Step 5/7 : EXPOSE 80 ---> Using cache ---> 0924dbc3f695 Step 6/7 : ENV NAME World ---> Using cache ---> 85c145785b87 Step 7/7 : CMD ["python", "app.py"] ---> Using cache ---> c2e43b7f0d4a Successfully built c2e43b7f0d4a Successfully tagged friendlyhello:latest SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories. I had similar problems with environment variables during the initial installation, and followed some advice that said to install the Windows DockerCE and hack the environment variables rather than installing the Ubuntu DockerCE, because (I hope I remembered this correctly) that WSL does not fully implement systemctl. Once the Windows Docker CE installation is done and environment variables are set, docker then works fine under WSL/Ubuntu. A: In windows 10, period is first parameter docker build . 
-t docker-whale A: Two ways to build a dockerfile: You can decide not to specify the file name of which to build from and just build it specifying a path (doing it this way the file name must be Dockerfile with no extension appended, eg: docker build -t docker-whale:tag path/to/Dockerfile or You can specify a file with -f and it doesn't matter what extension (within reason .txt, .dockerfile, .Dockerfile etc..) you decide to use, eg docker build -t docker-whale:tag /path/to/file -f docker-whale.dockerfile. A: I had originally created my Dockerfile in PowerShell and though I didn't see an extension on the file it showed as a PS File Type...once I created the file from Notepad++ being sure to select the "All types (.)" File Type with no extension on the File Name (Dockerfile). That allowed my image build command to complete successfully....Just make sure your Dockerfile has a Type of "File"... A: The problem is that the file name should be Dockerfile and not DockerFile or dockerfile it should be D capital followed by ockerfile in lower-case pls note A: Be sure your DOCKERfile is in the ROOT of the application directory, I had mine in src which resulted in this error because Docker was not finding the path to DOCKERfile A: The error means that docker build is either using a PATH | URL that are incorrectly input or that the Dockerfile cannot be found in the current directory. Also, make sure that when running the command from an integrated terminal (e.g. bash inside your IDE or text editor) you have the admin permissions to do so. Best if you can check the PATH from your terminal with pwd (in bash shell or dir if using a simple cli on windows) and copy the exact path where you want the image to be build. docker build C:\windows\your_amazing_directory docker build --help will also show you available options to use in case of malformed or illegal commands. A: To build Dockerfile save automated content in Dockerfile. not Dockerfile because while opening a file command: $ notepad Dockerfile (A text file is written so file cannot build) To build file run: $ notepad Dockerfile and Now run: $ docker build -t docker-whale . Make sure you are in current directory of Dockerfile. A: Most importantly make sure your file name is Dockerfile if you use another name it won't work(at least it did not for me.) Also if you are in the same dir where the Dockerfile is use a . i.e. docker build -t Myubuntu1:v1 . or use the absolute path i.e docker build -t Myubuntu1:v1 /Users/<username>/Desktop/Docker A: I my case (run from Windows 10) 1) Rename the file myDockerFile.Dockerfile to Dockerfile (without file extension). Then run from outside the folder this command: docker build .\Docker-LocalNifi\ This is working for me and for my colleagues at work, hope that will also work for you A: Make sure file name "Dockerfile" is not saved with any extension. Just create a file without any extension. And make sure Dockerfile is in same directory from where you are trying to building docker image. A: In case if we have multiple docker files in our environment just Dockerfile wont suffice our requirement. docker build -t ihub -f Dockerfile.ihub . So use the file (-f argument) command to specify your docker file(Dockerfile.ihub) A: I got this on Windows when the path I was working in was under a Junction directory. So my fix was to not work under that path. A: On Mac it works for below command. (hope your .Dockerfile is in your root directory). docker build -t docker-whale -f .Dockerfile . 
A: The issue is related to the DockerFile creation procedure. To make this work, open cmd, cd to the directory of interest and type: abc>DockerFile This will create a file called DockerFile inside your folder. Now type: notepad DockerFile This will open the DockerFile file in notepad and you will have to copy/paste the standard code provided. Save the file and now, finally, build your image with Docker typing: docker build -t docker-whale . This is working for me and I hope it helps others A: I erroneously created Dockerfile.txt in my working directory, leading to the above-mentioned error while building. The fix was to remove the .txt extension from the file. The file name should be Dockerfile only, without any extension. A: Execute docker build -t getting-started . in your project directory and make sure Dockerfile is present and has no .txt extension. If you are on Windows, check the 'file name extensions' option under the View tab in the File Explorer to show whether .txt is there or not, and remove it if the former is true. Good Luck. A: I also faced the same issue; it was resolved when I created a file named DockerFile containing all the commands to be executed while creating the image. A: If you have mounted a second drive to an NTFS folder as a 'mounted volume' then you can get this issue. Move your files to a drive location outside of the mounted volume. A: Please check whether Docker is running on your Windows machine or not. I was trying to find the solution and then accidentally checked this and found the issue. A: Make sure you run the command docker build . -t docker-whale from the directory that has the dockerfile A: In Linux, folders are case sensitive. I was getting this error because the folder name was TestAPI and I was typing TestApi. A: docker build -t docker-whale -f DockerFile . A: Installing docker.io instead of docker helped me, i.e. apt install docker.io
Docker - unable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx
I just downloaded Docker Toolbox for Windows 10 64bit today. I'm going through the tutorial. I'm receiving the following error when trying to build an image using a Dockerfile. Steps: Launched Docker Quickstart terminal. Changed into testdocker after creating it. Prepare Dockerfile as documented in "Build your own image" web link ran below command docker build -t docker-whale . Error: $ docker build -t docker-whale . unable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx C:\Users\Villanueva\Test\testdocker\Dockerfile: The system cannot find the file specified. BTW: I tried several options mentioned @ https://github.com/docker/docker/issues/14339 $ docker info Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 2 Server Version: 1.10.1 Storage Driver: aufs Root Dir: /mnt/sda1/var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Plugins: Volume: local Network: bridge null host Kernel Version: 4.1.17-boot2docker Operating System: Boot2Docker 1.10.1 (TCL 6.4.1); master : b03e158 - Thu Feb 11 22:34:01 UTC 2016 OSType: linux Architecture: x86_64 CPUs: 1 Total Memory: 996.2 MiB Name: default ID: C7DS:CIAJ:FTSN:PCGD:ZW25:MQNG:H3HK:KRJL:G6FC:VPRW:SEWW:KP7B Debug mode (server): true File Descriptors: 32 Goroutines: 44 System Time: 2016-02-19T17:37:37.706076803Z EventsListeners: 0 Init SHA1: Init Path: /usr/local/bin/docker Docker Root Dir: /mnt/sda1/var/lib/docker Labels: provider=virtualbox Thanks.
[ "while executing following command:\ndocker build -t docker-whale .\n\ncheck that Dockerfile is present in your current working directory.\n", "The error message is misleading\nThe problem has nothing to do with symlinks really.\nUsually, the problem is only that docker cannot find the Dockerfile describing the build.\nTypical reasons are these:\n\nDockerfile has wrong name.\nIt must be called Dockerfile. If it is called, for instance, dockerfile, .Dockerfile, Dockerfile.txt, or other, it will not be found.\nDockerfile is not in context.\nIf you say docker build contextdir, the Dockerfile must be at contextdir/Dockerfile. If you have it in, say, ./Dockerfile instead, it will not be found.\nDockerfile does not exist at all.\nSounds silly? Well, I got the above error message from my GitLab CI after I had written a nice Dockerfile, but forgotten to check it in. Silly? Sure. Unlikely? No.\n\n", "If you are working on windows 8 you would be using Docker toolbox.\nFrom the mydockerbuild directory run the below command as your Dockerfile is a textfile\ndocker build -t docker-whale -f ./Dockerfile.txt .\n\n", "The name of the file should be Dockerfile and not .Dockerfile. The file should not have any extension. \n", "\nMake sure you moved to the directory where Dockerfile is located.\nMake sure your Dockerfile is extension-less. That is, not Dockerfile.txt, Dockerfile.rtf, or any other.\nMake sure you named Dockerfile, and not DockerFile, dockerfile or any other.\n\n", "I had named my file dockerfile instead of Dockerfile (capitalized), and once I changed that, it started processing my \"Dockerfile\".\n", "Just Remove the extension .txt from Dockerfile and run the command \ndocker build -t image-name \n\nIt will work for sure.\n", "I have got this error (in MacBook) though I used correct command to create image,\ndocker build -t testimg .\n\nLater I found that path is the problem. Just navigate to the correct path that contains docker file. Just double check your current working directory .Nothing to panic!\n", "That's just because Notepad add \".txt\" at the end of Dockerfile\n", "This command worked for me:\ndocker build -t docker-whale -f Dockerfile.txt .\n\n", "I had created my DockerFile by VS2017 Docker Support tool and had the same error. After a while I realised I was not in the correct directory that contains the Dockerfile (~\\source\\repos\\DockerWebApplication\\). cd'ed to the correct file (~/source/repos/DockerWebApplication/DockerWebApplication) which was inside the project and successfully created the docker image.\n", "In WSL, there seems to be a problem with path conversion. 
The location of the Dockerfile in Ubuntu (where I'm running docker and where Dockerfile lives) is \"/home/sxw455/App1\", but neither of these commands worked:\n$ pwd\n/home/sxw455/App1\n$ ll\ntotal 4\ndrwxrwxrwx 0 sxw455 sxw455 4096 Dec 11 19:28 ./\ndrwxr-xr-x 0 sxw455 sxw455 4096 Dec 11 19:25 ../\n-rwxrwxrwx 1 sxw455 sxw455 531 Dec 11 19:26 Dockerfile*\n-rwxrwxrwx 1 sxw455 sxw455 666 Dec 11 19:28 app.py*\n-rwxrwxrwx 1 sxw455 sxw455 12 Dec 11 19:27 requirements.txt*\n\n$ docker build -t friendlyhello .\nunable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx C:\\Windows\\System32\\Dockerfile: The system cannot find the file specified.\n\n$ docker build -t friendlyhello \"/home/sxw455/App1\"\nunable to prepare context: path \"/home/sxw455/App1\" not found\n\nBut in Windows, the actual path is:\nC:\\Users\\sxw455\\AppData\\Local\\Packages\\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\\LocalState\\rootfs\\home\\sxw455\\App1\n\nAnd so I had to do this (even though I ran it from bash):\n$ docker build -t friendlyhello \n\"C:\\Users\\sxw455\\AppData\\Local\\Packages\\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\\LocalState\\rootfs\\home\\sxw455\\App1\"\n\nSending build context to Docker daemon 5.12kB\nStep 1/7 : FROM python:2.7-slim\n ---> 0dc3d8d47241\nStep 2/7 : WORKDIR /app\n ---> Using cache\n ---> f739aa02ce04\nStep 3/7 : COPY . /app\n ---> Using cache\n ---> 88686c524ae9\nStep 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt\n ---> Using cache\n ---> b95f02b14f78\nStep 5/7 : EXPOSE 80\n ---> Using cache\n ---> 0924dbc3f695\nStep 6/7 : ENV NAME World\n ---> Using cache\n ---> 85c145785b87\nStep 7/7 : CMD [\"python\", \"app.py\"]\n ---> Using cache\n ---> c2e43b7f0d4a\nSuccessfully built c2e43b7f0d4a\nSuccessfully tagged friendlyhello:latest\nSECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.\n\nI had similar problems with environment variables during the initial installation, and followed some advice that said to install the Windows DockerCE and hack the environment variables rather than installing the Ubuntu DockerCE, because (I hope I remembered this correctly) that WSL does not fully implement systemctl. Once the Windows Docker CE installation is done and environment variables are set, docker then works fine under WSL/Ubuntu.\n", "In windows 10, period is first parameter\ndocker build . -t docker-whale\n", "Two ways to build a dockerfile:\nYou can decide not to specify the file name of which to build from and just build it specifying a path (doing it this way the file name must be Dockerfile with no extension appended, eg: docker build -t docker-whale:tag path/to/Dockerfile\nor\nYou can specify a file with -f and it doesn't matter what extension (within reason .txt, .dockerfile, .Dockerfile etc..) you decide to use, eg docker build -t docker-whale:tag /path/to/file -f docker-whale.dockerfile.\n", "I had originally created my Dockerfile in PowerShell and though I didn't see an extension on the file it showed as a PS File Type...once I created the file from Notepad++ being sure to select the \"All types (.)\" File Type with no extension on the File Name (Dockerfile). 
That allowed my image build command to complete successfully....Just make sure your Dockerfile has a Type of \"File\"...\n", "The problem is that the file name should be Dockerfile and not DockerFile or dockerfile it should be D capital followed by ockerfile in lower-case pls note\n", "Be sure your DOCKERfile is in the ROOT of the application directory, I had mine in src which resulted in this error because Docker was not finding the path to DOCKERfile\n", "The error means that docker build is either using a PATH | URL that are incorrectly input or that the Dockerfile cannot be found in the current directory. Also, make sure that when running the command from an integrated terminal (e.g. bash inside your IDE or text editor) you have the admin permissions to do so.\nBest if you can check the PATH from your terminal with pwd (in bash shell or dir if using a simple cli on windows) and copy the exact path where you want the image to be build.\ndocker build C:\\windows\\your_amazing_directory\ndocker build --help will also show you available options to use in case of malformed or illegal commands.\n", "To build Dockerfile save automated content in Dockerfile. not Dockerfile because while opening a file command:\n$ notepad Dockerfile \n\n(A text file is written so file cannot build)\nTo build file run:\n$ notepad Dockerfile\n\nand Now run:\n$ docker build -t docker-whale .\n\nMake sure you are in current directory of Dockerfile.\n", "Most importantly make sure your file name is Dockerfile if you use another name it won't work(at least it did not for me.)\nAlso if you are in the same dir where the Dockerfile is use a . i.e. \n docker build -t Myubuntu1:v1 .\nor use the absolute path i.e\n docker build -t Myubuntu1:v1 /Users/<username>/Desktop/Docker\n", "I my case (run from Windows 10)\n1) Rename the file myDockerFile.Dockerfile to Dockerfile (without file extension).\nThen run from outside the folder this command: \ndocker build .\\Docker-LocalNifi\\ \n\nThis is working for me and for my colleagues at work, hope that will also work for you\n", "Make sure file name \"Dockerfile\" is not saved with any extension. \nJust create a file without any extension.\nAnd make sure Dockerfile is in same directory from where you are trying to building docker image.\n", "In case if we have multiple docker files in our environment just Dockerfile wont suffice our requirement.\ndocker build -t ihub -f Dockerfile.ihub .\n\nSo use the file (-f argument) command to specify your docker file(Dockerfile.ihub)\n", "I got this on Windows when the path I was working in was under a Junction directory. So my fix was to not work under that path.\n", "On Mac it works for below command. (hope your .Dockerfile is in your root directory).\ndocker build -t docker-whale -f .Dockerfile .\n\n", "The issue is related to the DockerFile creation procedure. \nIn order to work, open cmd, cd to the directory of interest and type: \nabc>DockerFile\n\nThis will create a file called DockerFile inside your folder.\nNow type: \nnotepad DockerFile \n\nThis will open the DockerFile file in notepad and you will have to copy/paste the standard code provided. \nSave the file and now, finally, build your image with Docker typing: \ndocker build -t docker-whale . 
\n\nThis is working for me and I hope it helps others \n", "I erroneously created Dockerfile.txt in my working directory leading to the above-mentioned error while build\nThe fix was to remove the .txt extension from the file.\nThe file name should be Dockerfile only without any extension.\n", "Execute docker build -t getting-started . in your project directory and make sure Dockerfile is present and having no .txt extension. \nIf you are on Windows, check the 'file name extension' in the under the view tab in the \nFile Explorer to show whether .txt is there or not and remove it if the former is true.\nGood Luck.\n", "I also faced the same issues and it was resolved when i created file named with DockerFile and mentioned all the command which wanted to get executed while creation of any image.\n", "If you have mounted a second drive to an NTFS folder as a 'mounted volume' then you can get this issue.\nMove you files to a drive location outside of the mounted volume.\n", "please check whether docker is running on your windows or not, I try to find the solution and then accidently checked and find the issue\n", "Make sure you run the command\ndocker build . -t docker-whale \n\nfrom the directory that has the dockerfile\n", "In Linux, folders are case sensitive. I was getting this error because folder name TestAPI and was putting TestApi.\n", "docker build -t docker-whale -f DockerFile .\n", "Installing docker.io instead of docker helped me\ni.e.\napt install docker.io\n\n" ]
[ 307, 215, 46, 28, 27, 19, 15, 9, 5, 5, 4, 4, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "To build an image from command-line in windows/linux. \n1. Create a docker file in your current directory.\n eg: FROM ubuntu\n RUN apt-get update\n RUN apt-get -y install apache2\n ADD . /var/www/html\n ENTRYPOINT apachectl -D FOREGROUND\n ENV name Devops_Docker\n2. Don't save it with .txt extension.\n3. Under command-line run the command\n docker build . -t apache2image \n" ]
[ -1 ]
[ "docker", "docker_build", "docker_toolbox", "dockerfile" ]
stackoverflow_0035511604_docker_docker_build_docker_toolbox_dockerfile.txt
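For readers hitting the same error, a minimal sketch of the layout the build expects may help. The Dockerfile contents below are roughly what the old "Build your own image" tutorial used; treat the exact contents as an assumption, since the tutorial has changed over the years. The two key points are the exact file name Dockerfile (no extension) and running the build from the directory that contains it:

cd /c/Users/Villanueva/Test/testdocker    # the build context
ls                                        # must list a file named exactly "Dockerfile"

# Dockerfile (no extension), roughly as in the old tutorial:
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay

docker build -t docker-whale .            # the trailing dot makes the current dir the context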
Q: How handle NULL in visualization I have two tables (it is example, of course) that I loaded to app from different sources by script Table 1: ID Attribute T1 1 100 3 200 Table 2: ID Attribute T2 1 Value 1 2 Value 2 On a list I create table: ID Attribute T1 Attribute T2 Finally I have table ID Attribute T1 Attribute T2 1 100 Value 1 2 - Value 2 3 200 - So, as You know it limits me in filtering and analyzing data, for example I can't show all data that isn't represented in Table 1, or all data for Attribute T1 not equal 100. I try to use NullAsValue, but it didn't help. Would be appreciate for idea how to manage my case. A: To achieve what you're attempting, you'll need to Join or Concatenate your tables. The reason is because Null means something different depending on how the data is loaded. There's basically two "types" of Null: "Implied" Null When you associate several tables in your data model, as you've done in your example, Qlik is essentially treating that as a natural outer join between the tables. But since it's not an actual join that happens when the script executes, the Nulls that arise from data incongruencies (like in your example) are basically implied, since there really is an absence of data there. There's nothing in the data or script that actually says "there are no Attribute T1 values for ID of 2." Because of that, you can't use a function like NullAsValue() or Coalesce() to replace Nulls with another value because those Nulls aren't even there -- there's nothing to actually replace. The above tables don't have any actual Nulls -- just implied ones from their association and the fact that the ID fields in either table don't have all the same values. "Realized" Null If, instead of just using associations, you actually combine the tables using the Join or Concatenate prefixes, then Qlik is forced to actually generate a Null value in the absence of data. Instead of Null being implied, it's actually there in the data model -- it's been realized. In this case, we can actually use functions like NullAsValue() or Coalesce() or Alt() to replace Nulls with another value since we actually have something in our table to replace. The above joined table has actual Nulls that are realized in the data model, so they can be replaced. To replace Nulls at that point, you can use the NullAsValue() or Coalesce() functions like this in the Data Load Editor: table1: load * inline [ ID , Attribute T1 1 , 100 3 , 200 ]; table2: join load * inline [ ID , Attribute T2 1 , Value 1 2 , Value 2 ]; NullAsValue [Attribute T1]; Set NullValue = '-NULL-'; new_table: NoConcatenate load ID , [Attribute T1] , Coalesce([Attribute T2], '-AlsoNULL-') as [Attribute T2] Resident table1; Drop Table table1; That will result in a table like this: The Coalesce() and Alt() functions are also available in chart expressions. Here are some quick links to the things discussed here: Qlik Null interpretation Qlik table associations NullAsValue() function Coalesce() function Alt() function
How handle NULL in visualization
I have two tables (this is an example, of course) that I loaded into the app from different sources by script Table 1: ID Attribute T1 1 100 3 200 Table 2: ID Attribute T2 1 Value 1 2 Value 2 On a sheet I create a table: ID Attribute T1 Attribute T2 Finally I have this table ID Attribute T1 Attribute T2 1 100 Value 1 2 - Value 2 3 200 - As you can see, this limits me in filtering and analyzing the data; for example, I can't show all data that isn't represented in Table 1, or all data where Attribute T1 is not equal to 100. I tried to use NullAsValue, but it didn't help. I would appreciate any ideas on how to manage my case.
[ "To achieve what you're attempting, you'll need to Join or Concatenate your tables. The reason is because Null means something different depending on how the data is loaded.\nThere's basically two \"types\" of Null:\n\"Implied\" Null\nWhen you associate several tables in your data model, as you've done in your example, Qlik is essentially treating that as a natural outer join between the tables. But since it's not an actual join that happens when the script executes, the Nulls that arise from data incongruencies (like in your example) are basically implied, since there really is an absence of data there. There's nothing in the data or script that actually says \"there are no Attribute T1 values for ID of 2.\" Because of that, you can't use a function like NullAsValue() or Coalesce() to replace Nulls with another value because those Nulls aren't even there -- there's nothing to actually replace.\n\n\nThe above tables don't have any actual Nulls -- just implied ones from their association and the fact that the ID fields in either table don't have all the same values.\n\"Realized\" Null\nIf, instead of just using associations, you actually combine the tables using the Join or Concatenate prefixes, then Qlik is forced to actually generate a Null value in the absence of data. Instead of Null being implied, it's actually there in the data model -- it's been realized. In this case, we can actually use functions like NullAsValue() or Coalesce() or Alt() to replace Nulls with another value since we actually have something in our table to replace.\n\nThe above joined table has actual Nulls that are realized in the data model, so they can be replaced.\nTo replace Nulls at that point, you can use the NullAsValue() or Coalesce() functions like this in the Data Load Editor:\ntable1:\nload * inline [\nID , Attribute T1\n1 , 100\n3 , 200\n];\n\n\ntable2:\njoin load * inline [\nID , Attribute T2\n1 , Value 1\n2 , Value 2\n];\n\n\nNullAsValue [Attribute T1];\nSet NullValue = '-NULL-';\n\nnew_table:\nNoConcatenate load\n ID\n , [Attribute T1]\n , Coalesce([Attribute T2], '-AlsoNULL-') as [Attribute T2]\nResident table1;\n\nDrop Table table1;\n\nThat will result in a table like this:\n\nThe Coalesce() and Alt() functions are also available in chart expressions.\nHere are some quick links to the things discussed here:\n\nQlik Null interpretation\nQlik table associations\nNullAsValue() function\nCoalesce() function\nAlt() function\n\n" ]
[ 0 ]
[]
[]
[ "qliksense" ]
stackoverflow_0074669762_qliksense.txt
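The same replacement also works directly in a chart expression once the tables are joined, which is handy when you cannot change the load script. A small sketch, assuming the joined table from the answer above; '-NULL-' and 0 are arbitrary placeholder fallbacks:

// dimension or measure expression in a chart object
Coalesce([Attribute T2], '-NULL-')

// Alt() is the numeric counterpart
Alt([Attribute T1], 0)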
Q: Jenkins Port 8080 to another alternative Port I wanted to install Jenkins on my PC, but it uses port 8080, which is already used by my XAMPP server. Can anybody tell me what alternative port I can use that will work? I have already tried 8081 and 8443, but the next Java step gives a warning. A: There are 65,535 possible port numbers. It could be your firewall blocking the ports. Apps will often use more than one port. One for inbound and one for outbound etc.
Jenkins Port 8080 to another alternative Port
I wanted to install Jenkins on my PC, but it uses port 8080, which is already used by my XAMPP server. Can anybody tell me what alternative port I can use that will work? I have already tried 8081 and 8443, but the next Java step gives a warning.
[ "There are 65,535 possible port numbers. It could be your firewall blocking the ports. Apps will often use more than one port. One for inbound and one for outbound etc.\n" ]
[ 0 ]
[]
[]
[ "jenkins_pipeline" ]
stackoverflow_0074670151_jenkins_pipeline.txt
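To actually move Jenkins off 8080, you normally tell Jenkins itself which HTTP port to bind. A hedged sketch, assuming a WAR-based launch; for a Windows service install the same argument lives in jenkins.xml:

# run the WAR on an alternative port, e.g. 9090
java -jar jenkins.war --httpPort=9090

# Windows service install: edit jenkins.xml so the service arguments include
# --httpPort=9090, then restart the Jenkins service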
Q: How can I see local history changes in Visual Studio Code? I'm looking for a way to see my local history changes. Kind of the equivalent way in WebStorm: A: Visual Studio Code now offers this in the Timeline view. See Mark's answer. Or alternatively, if you want a plugin to give you similar functionality, for example: Checkpoints Or the more famous: Local History Some details may need to be configured because the Visual Studio Code search gets confused sometimes because of additional folders created by this type of plugins. To fix this, you can: Add the history folder to your .gitignore file. Change the history folder location in the chosen plugin configuration. Configure the Visual Studio Code search to ignore the history folder. A: Local File History Local history of files is now available from the Timeline view. Depending on the configured settings, every time you save an editor, a new entry is added to the list: Each local history entry contains the full contents of the file at the time the entry was created and in certain cases can provide more semantic information (for example indicate refactorings). From an entry you can: compare the changes to the local file or previous entry restore the contents delete or rename the entry see https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_66.md#local-history Local file history is being actively worked on and is in the Insiders' Build v1.66. The results will be available in the Timeline view. Here are the current applicable settings: Workbench > Local History: Enabled Controls whether the local file history is enabled. When enabled, the file contents of an editor that is saved will be stored to a backup location and can be restored or reviewed later. Changing this setting has no effect on existing file history entries. Workbench > Local History: Max File Entries Controls the maximum number of local file history entries per file. When the number of local file history entries exceeds this number for a file, the oldest entries will be discarded. Workbench > Local History: Max File Size Controls the maximum size of a file (in KB) to be considered for local history. Files that are larger will not be added to the local history unless explicitly added by via user gesture. Changing this setting has no effect on existing file history entries. 
And these commands: timeline.toggleExcludeSource:timeline.localHistory workbench.action.localHistory.compareWithFile workbench.action.localHistory.compareWithPrevious workbench.action.localHistory.selectForCompare // compare any 2 entries workbench.action.localHistory.compareWithSelected workbench.action.localHistory.delete // delete this entry workbench.action.localHistory.deleteAll // delete all entries of all files from local history workbench.action.localHistory.open workbench.action.localHistory.restore workbench.action.localHistory.restoreViaEditor workbench.action.localHistory.rename // rename this entry New global commands have been added to work with local history: workbench.action.localHistory.create: create a new history entry for the active file with a custom name workbench.action.localHistory.deleteAll: delete all history entries across all files workbench.action.localHistory.restoreViaPicker: find a history entry to restore across all files A bunch of new settings have been introduced to work with local history: workbench.localHistory.enabled: enable or disable local history (default: true) workbench.localHistory.maxFileSize: a limit of file size to create a local history entry (default: 256kb) workbench.localHistory.maxFileEntries: a limit of local history entries per file (default: 50) workbench.localHistory.exclude: glob patterns for excluding certain files from local history workbench.localHistory.mergeWindow: interval in seconds during which the last entry in local file history is replaced with the entry that is being added (default 10s) A: I built an extension called Checkpoints, an alternative to Local History. Checkpoints has support for viewing history for all files (that has checkpoints) in the tree view, not just the currently active file. There are some other minor differences aswell, but overall they are pretty similar. A: it's pretty simple, just open a file and check timeline tab A: Basic Functionality Automatically saved local edit history is available with the Local History extension. Manually saved local edit history is available with the Checkpoints extension (this is the IntelliJ equivalent to adding tags to the local history). Advanced Functionality None of the extensions mentioned above support edit history when a file is moved or renamed. The extensions above only support edit history. They do not support move/delete history, for example, like IntelliJ does. Open Request If you'd like to see this feature added natively, along with all of the advanced functionality, I'd suggest upvoting the open GitHub issue here. A: right-click the file and select Show History. Other day I lost my git changes because I've clicked that graphic undo option of Git. This option saved me so I could get back my code.
How can I see local history changes in Visual Studio Code?
I'm looking for a way to see my local history changes, something like the equivalent feature in WebStorm:
[ "Visual Studio Code now offers this in the Timeline view. See Mark's answer.\nOr alternatively, if you want a plugin to give you similar functionality, for example:\nCheckpoints\nOr the more famous:\nLocal History\nSome details may need to be configured because the Visual Studio Code search gets confused sometimes because of additional folders created by this type of plugins. To fix this, you can:\n\nAdd the history folder to your .gitignore file.\nChange the history folder location in the chosen plugin\nconfiguration.\nConfigure the Visual Studio Code search to ignore the history folder.\n\n", "Local File History\n\nLocal history of files is now available from the Timeline view.\nDepending on the configured settings, every time you save an editor, a\nnew entry is added to the list:\n\nEach local history entry contains the full contents of the file at the time the entry was created and in certain cases can provide more semantic information (for example indicate refactorings).\n\nFrom an entry you can:\ncompare the changes to the local file or previous entry restore the\ncontents delete or rename the entry\n\nsee https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_66.md#local-history\nLocal file history is being actively worked on and is in the Insiders' Build v1.66. The results will be available in the Timeline view.\nHere are the current applicable settings:\nWorkbench > Local History: Enabled\n\nControls whether the local file history is enabled. When enabled, the\nfile contents of an editor that is saved will be stored to a backup\nlocation and can be restored or reviewed later. Changing this setting\nhas no effect on existing file history entries.\n\nWorkbench > Local History: Max File Entries\n\nControls the maximum number of local file history entries per file.\nWhen the number of local file history entries exceeds this number for\na file, the oldest entries will be discarded.\n\nWorkbench > Local History: Max File Size\n\nControls the maximum size of a file (in KB) to be considered for local\nhistory. Files that are larger will not be added to the local history\nunless explicitly added by via user gesture. 
Changing this setting has\nno effect on existing file history entries.\n\n\nAnd these commands:\ntimeline.toggleExcludeSource:timeline.localHistory \n\nworkbench.action.localHistory.compareWithFile\nworkbench.action.localHistory.compareWithPrevious\nworkbench.action.localHistory.selectForCompare // compare any 2 entries\nworkbench.action.localHistory.compareWithSelected\n\nworkbench.action.localHistory.delete // delete this entry\nworkbench.action.localHistory.deleteAll // delete all entries of all files from local history\n\nworkbench.action.localHistory.open\nworkbench.action.localHistory.restore\nworkbench.action.localHistory.restoreViaEditor\nworkbench.action.localHistory.rename // rename this entry\n\nNew global commands have been added to work with local history:\nworkbench.action.localHistory.create: create a new history entry for the active file with a custom name\nworkbench.action.localHistory.deleteAll: delete all history entries across all files\nworkbench.action.localHistory.restoreViaPicker: find a history entry to restore across all files\n\nA bunch of new settings have been introduced to work with local\nhistory:\nworkbench.localHistory.enabled: enable or disable local history\n(default: true) workbench.localHistory.maxFileSize: a limit of\nfile size to create a local history entry (default: 256kb)\nworkbench.localHistory.maxFileEntries: a limit of local history\nentries per file (default: 50)\nworkbench.localHistory.exclude:\nglob patterns for excluding certain files from local history\nworkbench.localHistory.mergeWindow: interval in seconds during which\nthe last entry in local file history is replaced with the entry that\nis being added (default 10s)\n\n\n", "I built an extension called Checkpoints, an alternative to Local History. Checkpoints has support for viewing history for all files (that has checkpoints) in the tree view, not just the currently active file. There are some other minor differences aswell, but overall they are pretty similar.\n", "it's pretty simple, just open a file and check timeline tab\n\n", "Basic Functionality\n\nAutomatically saved local edit history is available with the Local History extension.\nManually saved local edit history is available with the Checkpoints extension (this is the IntelliJ equivalent to adding tags to the local history).\n\nAdvanced Functionality\n\nNone of the extensions mentioned above support edit history when a file is moved or renamed.\nThe extensions above only support edit history. They do not support move/delete history, for example, like IntelliJ does.\n\nOpen Request\nIf you'd like to see this feature added natively, along with all of the advanced functionality, I'd suggest upvoting the open GitHub issue here.\n", "right-click the file and select Show History.\nOther day I lost my git changes because I've clicked that graphic undo option of Git.\nThis option saved me so I could get back my code.\n" ]
[ 93, 48, 38, 31, 18, 0 ]
[ "There isn’t any option in Visual Studio Code to see file history. If you are using Git, then you can use Visual Studio Code extension Git History to see the file changes after each commit and compare with previous commits.\n" ]
[ -3 ]
[ "file", "local", "revision_history", "visual_studio_code" ]
stackoverflow_0046446901_file_local_revision_history_visual_studio_code.txt
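Putting the settings named in the accepted answer together, a settings.json sketch might look like the following. The values shown are the documented defaults from the answer; the exclude pattern shape is an illustrative assumption:

{
  "workbench.localHistory.enabled": true,
  "workbench.localHistory.maxFileSize": 256,
  "workbench.localHistory.maxFileEntries": 50,
  "workbench.localHistory.mergeWindow": 10,
  "workbench.localHistory.exclude": { "**/node_modules/**": true }
}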
Q: Does js-ipfs have a readonly gateway server? When I start my local ipfs node with ipfs daemon, in the cmd I get this: Gateway (readonly) sever listening on /ip4/127.0.0.1/tcp/8080 With this, I can say 127.0.0.1:8080/ipfs/CID and read files from IPFS. In my Node.js app, when I run ipfs.create(), in the console I get logs about swarms, but not about a readonly gateway server. I have found out that the ipfs.create() function has an option Gateway that on default is set to /ip4/127.0.0.1/tcp/9090. But when I run my node and keep my app running, when I try to retrieve something with 127.0.0.1:9090/ipfs/CID, I get an ERR_CONNECTION_REFUSED. Why is that? While the app is running, I scanned my ports and nothing was attached to 9090. A: I have found the answer. Yes, js-ipfs has a readonly gateway server, but it's not starting implicitly togheter with the node, you have to use ipfs-http-gateway package. The package doesn't really have good instructions, but here is how you do it. You import HttpGateway class from the package and give your ipfs instance to it as a constructor, then you call .start() from the HttpGateway instance. The .start() will take the config options from your ipfs instance, and will search for Adresses -> Gateway options that defaults to /ip4/127.0.0.1/tcp/9090 and start the gateway to that port. You can read the code from the package where the HttpGateway class is written, and you'll figure it all out.
Does js-ipfs have a readonly gateway server?
When I start my local ipfs node with ipfs daemon, in the cmd I get this: Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080 With this, I can request 127.0.0.1:8080/ipfs/CID and read files from IPFS. In my Node.js app, when I run ipfs.create(), in the console I get logs about swarms, but not about a readonly gateway server. I have found out that the ipfs.create() function has a Gateway option that by default is set to /ip4/127.0.0.1/tcp/9090. But when I run my node and keep my app running, and I try to retrieve something with 127.0.0.1:9090/ipfs/CID, I get an ERR_CONNECTION_REFUSED. Why is that? While the app is running, I scanned my ports and nothing was attached to 9090.
[ "I have found the answer. Yes, js-ipfs has a readonly gateway server, but it's not started implicitly together with the node; you have to use the ipfs-http-gateway package. The package doesn't really have good instructions, but here is how you do it. You import the HttpGateway class from the package and pass your ipfs instance to its constructor, then you call .start() on the HttpGateway instance. The .start() call will take the config options from your ipfs instance, look for the Addresses -> Gateway option, which defaults to /ip4/127.0.0.1/tcp/9090, and start the gateway on that port. You can read the code in the package where the HttpGateway class is written, and you'll figure it all out.\n" ]
[ 0 ]
[]
[]
[ "ipfs", "js_ipfs", "node.js" ]
stackoverflow_0074641274_ipfs_js_ipfs_node.js.txt
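A minimal sketch of what the answer describes; the exact import shape of ipfs-http-gateway may differ between versions, so treat the names here as assumptions:

import { create } from 'ipfs'
import { HttpGateway } from 'ipfs-http-gateway'

const ipfs = await create({
  config: { Addresses: { Gateway: '/ip4/127.0.0.1/tcp/9090' } }
})

const gateway = new HttpGateway(ipfs) // reads Addresses.Gateway from the node's config
await gateway.start()
// http://127.0.0.1:9090/ipfs/<CID> should now serve content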
Q: can anyone tell me what is happening here? I am trying to create an API and here is the code. When I run the server on localhost:8888, the response looks like an empty array []. When I run the script (npm run start or dev) the terminal shows this result in picture: this is the terminal error const PORT = process.env.PORT || 8888 const express = require('express') const cheerio = require('cheerio') const axios = require('axios') const app = express() const articles = [] app.get('/', (req,res) => { res.json('Live Foreign Exchange News API') }) app.get('/fxstreet', (req,res) => { axios.get('https://www.fxstreet.com/news') .then((response) => { const html = response.data const $ = cheerio.load(html) $('h4:contains("fxs_headline_tiny")', html).each(function (){ const title = $(this).text() const url = $(this).attr('href') articles.push({ title, url }) }) res.json(articles) }).catch((err) => console.log(err)) }) app.listen(PORT, () => console.log(`server running on PORT ${PORT}`)) A: You need to add Accept-Encoding with text/html; charset=UTF-8 in axios.get header. The default of it is gzip. That is axios v1.2.0 errror Change in your code line 10 From axios.get('https://www.fxstreet.com/news') To axios.get("https://www.fxstreet.com/news",{ headers: { 'Accept-Encoding': 'text/html; charset=UTF-8'}}) You can simple test const axios = require("axios"); axios.get("https://www.fxstreet.com/news",{ headers: { 'Accept-Encoding': 'text/html; charset=UTF-8'}}) .then(response => console.log(response.data)) and const axios = require("axios"); axios.get("https://www.fxstreet.com/news").then(response => console.log(response.data)) First code will give correct HTML, Second will shows strange strings. I get the new title by puppeteer instead of cheerio const PORT = process.env.PORT || 8888 const express = require('express') const puppeteer = require('puppeteer'); const app = express() app.get('/news', (req, res) => { (async () => { const browser = await puppeteer.launch() const page = await browser.newPage() await page.goto('https://www.fxstreet.com/news') const articles = await page.$$eval('.fxs_headline_tiny > a', elements => elements.map(el => el.innerText)) console.log(articles) res.json(articles) await browser.close() })(); }) app.listen(PORT, () => console.log(`server running on PORT ${PORT}`)) This result Reference puppeteer example
can anyone tell me what is happening here?
I am trying to create an API and here is the code. When I run the server on localhost:8888, the response looks like an empty array []. When I run the script (npm run start or dev) the terminal shows this result in picture: this is the terminal error const PORT = process.env.PORT || 8888 const express = require('express') const cheerio = require('cheerio') const axios = require('axios') const app = express() const articles = [] app.get('/', (req,res) => { res.json('Live Foreign Exchange News API') }) app.get('/fxstreet', (req,res) => { axios.get('https://www.fxstreet.com/news') .then((response) => { const html = response.data const $ = cheerio.load(html) $('h4:contains("fxs_headline_tiny")', html).each(function (){ const title = $(this).text() const url = $(this).attr('href') articles.push({ title, url }) }) res.json(articles) }).catch((err) => console.log(err)) }) app.listen(PORT, () => console.log(`server running on PORT ${PORT}`))
[ "You need to add an Accept-Encoding header with text/html; charset=UTF-8 to the axios.get call.\nThe default is gzip; that is an axios v1.2.0 error.\nChange line 10 of your code\nFrom\n axios.get('https://www.fxstreet.com/news')\n\nTo\naxios.get(\"https://www.fxstreet.com/news\",{ headers: { 'Accept-Encoding': 'text/html; charset=UTF-8'}})\n\nYou can test this simply:\nconst axios = require(\"axios\");\naxios.get(\"https://www.fxstreet.com/news\",{ headers: { 'Accept-Encoding': 'text/html; charset=UTF-8'}})\n .then(response => console.log(response.data))\n\nand\nconst axios = require(\"axios\");\n\naxios.get(\"https://www.fxstreet.com/news\").then(response => console.log(response.data))\n\nThe first snippet will give correct HTML; the second will show strange strings.\nI get the news titles with puppeteer instead of cheerio:\nconst PORT = process.env.PORT || 8888\nconst express = require('express')\nconst puppeteer = require('puppeteer');\nconst app = express()\n\napp.get('/news', (req, res) => {\n (async () => {\n const browser = await puppeteer.launch()\n const page = await browser.newPage()\n await page.goto('https://www.fxstreet.com/news')\n const articles = await page.$$eval('.fxs_headline_tiny > a', elements => elements.map(el => el.innerText))\n console.log(articles)\n res.json(articles)\n await browser.close()\n })();\n})\n\napp.listen(PORT, () => console.log(`server running on PORT ${PORT}`))\n\nThis is the result\n\nReference\npuppeteer example\n" ]
[ 0 ]
[]
[]
[ "cheerio", "express", "javascript", "json", "node.js" ]
stackoverflow_0074668982_cheerio_express_javascript_json_node.js.txt
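For completeness, here is the Accept-Encoding workaround applied to the cheerio route from the question, as a drop-in replacement sketch (same express/axios/cheerio setup as above). Note that fxs_headline_tiny is a CSS class, so it needs a class selector rather than :contains(); the site's markup can change at any time, so the selector is an assumption:

app.get('/fxstreet', (req, res) => {
  axios.get('https://www.fxstreet.com/news', {
    headers: { 'Accept-Encoding': 'text/html; charset=UTF-8' } // axios v1.2.0 defaults to gzip
  }).then((response) => {
    const $ = cheerio.load(response.data)
    const articles = [] // local, so repeated requests don't accumulate results
    $('.fxs_headline_tiny > a').each(function () {
      articles.push({ title: $(this).text(), url: $(this).attr('href') })
    })
    res.json(articles)
  }).catch((err) => console.log(err))
})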
Q: How does Xamarin.MediaGallery work in PopupPage in Xamarin I am using Rg.Plugins.Popup + Xamarin.MediaGallery. However there is one problem that Xamarin.MediaGallery doesn't work if I add in Rg.Plugins.Popup. <popup:PopupPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:popup="clr-namespace:Rg.Plugins.Popup.Pages;assembly=Rg.Plugins.Popup" xmlns:animations="clr-namespace:Rg.Plugins.Popup.Animations;assembly=Rg.Plugins.Popup" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" ....> <popup:PopupPage.Animation> <animations:ScaleAnimation PositionIn="Bottom" PositionOut="Bottom" ScaleIn="1.2" ScaleOut="0.8" DurationIn="400" DurationOut="300" EasingIn="SinOut" EasingOut="SinIn" HasBackgroundAnimation="False"/> </popup:PopupPage.Animation> <StackLayout HorizontalOptions="Fill" VerticalOptions="EndAndExpand" Margin="0" Spacing="0"> <Image Margin="0" x:Name="pickimg"> <Image.Source> <FontImageSource Color="#ddd" Size="22" FontFamily="MaterIcon" Glyph="{x:Static local:FontIconsClass.Camera}"/> </Image.Source> </Image> <Image.GestureRecognizers> <TapGestureRecognizer Tapped="pickimg_Tapped" /> </Image.GestureRecognizers> </StackLayout> </popup:PopupPage> async void pickimg_Tapped(System.Object sender, EventArgs e) { var result = await MediaGallery.PickAsync(5, MediaFileType.Image, MediaFileType.Video); if(result?.Files == null) { return; } foreach (var img in result.Files) { var filename = img.NameWithoutExtension; } } This is how I use it. Please Note: If I add in ContentPage it works fine. It doesn't work so I add it in PopupPage I'm checking on Xamarin iOS, Android I haven't tried yet. How can I use Xamarin.MediaGallery inside Rg.Plugins.Popup? Has anyone encountered this problem please help me with the solution. Thank you A: I suggest you update all the packages/plugins to the latest . Xamarin.forms 5.0.0.2244 Xamarin.Essentials 1.7.0 Rg.Plugins.Popup 2.0.0.14 Xamarin.MediaGallery 2.0.0 I test on iPhone12/iOS15 simulator , it works fine without any problem .
How does Xamarin.MediaGallery work in PopupPage in Xamarin
I am using Rg.Plugins.Popup + Xamarin.MediaGallery. However, there is one problem: Xamarin.MediaGallery doesn't work if I add it in Rg.Plugins.Popup. <popup:PopupPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:popup="clr-namespace:Rg.Plugins.Popup.Pages;assembly=Rg.Plugins.Popup" xmlns:animations="clr-namespace:Rg.Plugins.Popup.Animations;assembly=Rg.Plugins.Popup" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" ....> <popup:PopupPage.Animation> <animations:ScaleAnimation PositionIn="Bottom" PositionOut="Bottom" ScaleIn="1.2" ScaleOut="0.8" DurationIn="400" DurationOut="300" EasingIn="SinOut" EasingOut="SinIn" HasBackgroundAnimation="False"/> </popup:PopupPage.Animation> <StackLayout HorizontalOptions="Fill" VerticalOptions="EndAndExpand" Margin="0" Spacing="0"> <Image Margin="0" x:Name="pickimg"> <Image.Source> <FontImageSource Color="#ddd" Size="22" FontFamily="MaterIcon" Glyph="{x:Static local:FontIconsClass.Camera}"/> </Image.Source> </Image> <Image.GestureRecognizers> <TapGestureRecognizer Tapped="pickimg_Tapped" /> </Image.GestureRecognizers> </StackLayout> </popup:PopupPage> async void pickimg_Tapped(System.Object sender, EventArgs e) { var result = await MediaGallery.PickAsync(5, MediaFileType.Image, MediaFileType.Video); if(result?.Files == null) { return; } foreach (var img in result.Files) { var filename = img.NameWithoutExtension; } } This is how I use it. Please note: if I add it in a ContentPage it works fine; it does not work when I add it in a PopupPage. I'm checking on Xamarin iOS; I haven't tried Android yet. How can I use Xamarin.MediaGallery inside Rg.Plugins.Popup? Has anyone encountered this problem? Please help me with the solution. Thank you
[ "I suggest you update all the packages/plugins to the latest .\n\nXamarin.forms 5.0.0.2244\nXamarin.Essentials 1.7.0\nRg.Plugins.Popup 2.0.0.14\nXamarin.MediaGallery 2.0.0\n\nI test on iPhone12/iOS15 simulator , it works fine without any problem .\n" ]
[ 0 ]
[ "Maybe I'm late. See: https://github.com/xamarin/Essentials/pull/1846#issuecomment-975207765\nThis plugin supports such initialization\n" ]
[ -1 ]
[ "xamarin" ]
stackoverflow_0070233374_xamarin.txt
Q: Read from file as from console I am doing a bit of competitive programming in Kotlin. Most of the time I use input from the console, but sometimes I want to use files. Is there a way to make readln() work from a file? The goal is to avoid writing the same code twice. From here: Reading console input in Kotlin I tried fun <T : Closeable, R> T.useWith(block: T.() -> R): R = use { with(it, block) } File("a.in").bufferedReader().useWith { File("a.out").printWriter().useWith { val (a, b) = readLine()!!.split(' ').map(String::toInt) println(a + b) } } Scanner(File("b.in")).useWith { PrintWriter("b.out").useWith { val a = nextInt() val b = nextInt() println(a + b) } } But I was not able to make it work. Thanks for any answer. A: Thanks to @aSemy's comment, I made it work: val seq = File("./src/ts1_input.txt").readLines().listIterator() fun readString() = seq.next() // readln()
Read from file as from console
I am doing a bit of competitive programming in Kotlin. Most of the time I use input from the console, but sometimes I want to use files. Is there a way to make readln() work from a file? The goal is to avoid writing the same code twice. From here: Reading console input in Kotlin I tried fun <T : Closeable, R> T.useWith(block: T.() -> R): R = use { with(it, block) } File("a.in").bufferedReader().useWith { File("a.out").printWriter().useWith { val (a, b) = readLine()!!.split(' ').map(String::toInt) println(a + b) } } Scanner(File("b.in")).useWith { PrintWriter("b.out").useWith { val a = nextInt() val b = nextInt() println(a + b) } } But I was not able to make it work. Thanks for any answer.
[ "Thanks to @aSemy's comment, I made it work:\nval seq = File(\"./src/ts1_input.txt\").readLines().listIterator() \n\nfun readString() = seq.next() // readln()\n\n" ]
[ 0 ]
[]
[]
[ "io", "kotlin" ]
stackoverflow_0074669388_io_kotlin.txt
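A slightly more general sketch of the same idea, so the switch between console and file input is a single flag; LOCAL_RUN is a made-up environment variable name:

import java.io.File

val fileLines: Iterator<String>? =
    if (System.getenv("LOCAL_RUN") != null) File("./src/ts1_input.txt").readLines().iterator()
    else null

fun readString(): String = fileLines?.next() ?: readln()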
Q: R: Find Out First Non-Consecutive Year I have this dataset in R: name = c("john", "john", "john", "john", "john", "alex", "alex", "alex", "alex", "alex" ) year = c(2010, 2011, 2012, 2015, 2017, 2014, 2015, 2016, 2017, 2018) my_data = data.frame(name, year) > my_data name year 1 john 2010 2 john 2011 3 john 2012 4 john 2015 5 john 2017 6 alex 2014 7 alex 2015 8 alex 2016 9 alex 2017 10 alex 2018 For each person in this dataset - I want to select all rows until the first non-consecutive year appears. As an example, the final output would look something like this: # DESIRED OUTPUT > my_data name year 1 john 2010 2 john 2011 3 john 2012 6 alex 2014 7 alex 2015 8 alex 2016 9 alex 2017 10 alex 2018 To do this, I thought of the following approach: > agg <- aggregate(year ~ name, my_data, c) > agg name year.1 year.2 year.3 year.4 year.5 1 alex 2014 2015 2016 2017 2018 2 john 2010 2011 2012 2015 2017 library(stringr) agg = data.frame(as.matrix(agg)) agg$all_years = paste(agg$year.1 ,agg$year.2, agg$year.3, agg$year.4, agg$year.5) agg$y_2010 = str_detect(agg$all_years, "2010") agg$y_2011 = str_detect(agg$all_years, "2011") agg$y_2012 = str_detect(agg$all_years, "2012") agg$y_2013 = str_detect(agg$all_years, "2013") agg$y_2014 = str_detect(agg$all_years, "2014") agg$y_2015 = str_detect(agg$all_years, "2015") agg$y_2016 = str_detect(agg$all_years, "2016") agg$y_2017 = str_detect(agg$all_years, "2017") agg$y_2018 = str_detect(agg$all_years, "2018") agg$y_2019 = str_detect(agg$all_years, "2019") name year.1 year.2 year.3 year.4 year.5 all_years y_2010 y_2011 y_2012 y_2013 y_2014 y_2015 y_2016 y_2017 y_2018 y_2019 1 alex 2014 2015 2016 2017 2018 2014 2015 2016 2017 2018 FALSE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE FALSE 2 john 2010 2011 2012 2015 2017 2010 2011 2012 2015 2017 TRUE TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE FALSE Now, the idea would be for each row - find out the first time when a "TRUE" is followed by a "FALSE" - and then I would try to find some way to accomplish my task. But I am not sure how to proceed from here. Can someone please show me how to do this? Thanks! A: library(dplyr) my_data %>% group_by(name) %>% filter(c(1,diff(year)) == 1) # A tibble: 8 x 2 # Groups: name [2] name year <chr> <dbl> 1 john 2010 2 john 2011 3 john 2012 4 alex 2014 5 alex 2015 6 alex 2016 7 alex 2017 8 alex 2018
R: Find Out First Non-Consecutive Year
I have this dataset in R: name = c("john", "john", "john", "john", "john", "alex", "alex", "alex", "alex", "alex" ) year = c(2010, 2011, 2012, 2015, 2017, 2014, 2015, 2016, 2017, 2018) my_data = data.frame(name, year) > my_data name year 1 john 2010 2 john 2011 3 john 2012 4 john 2015 5 john 2017 6 alex 2014 7 alex 2015 8 alex 2016 9 alex 2017 10 alex 2018 For each person in this dataset - I want to select all rows until the first non-consecutive year appears. As an example, the final output would look something like this: # DESIRED OUTPUT > my_data name year 1 john 2010 2 john 2011 3 john 2012 6 alex 2014 7 alex 2015 8 alex 2016 9 alex 2017 10 alex 2018 To do this, I thought of the following approach: > agg <- aggregate(year ~ name, my_data, c) > agg name year.1 year.2 year.3 year.4 year.5 1 alex 2014 2015 2016 2017 2018 2 john 2010 2011 2012 2015 2017 library(stringr) agg = data.frame(as.matrix(agg)) agg$all_years = paste(agg$year.1 ,agg$year.2, agg$year.3, agg$year.4, agg$year.5) agg$y_2010 = str_detect(agg$all_years, "2010") agg$y_2011 = str_detect(agg$all_years, "2011") agg$y_2012 = str_detect(agg$all_years, "2012") agg$y_2013 = str_detect(agg$all_years, "2013") agg$y_2014 = str_detect(agg$all_years, "2014") agg$y_2015 = str_detect(agg$all_years, "2015") agg$y_2016 = str_detect(agg$all_years, "2016") agg$y_2017 = str_detect(agg$all_years, "2017") agg$y_2018 = str_detect(agg$all_years, "2018") agg$y_2019 = str_detect(agg$all_years, "2019") name year.1 year.2 year.3 year.4 year.5 all_years y_2010 y_2011 y_2012 y_2013 y_2014 y_2015 y_2016 y_2017 y_2018 y_2019 1 alex 2014 2015 2016 2017 2018 2014 2015 2016 2017 2018 FALSE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE FALSE 2 john 2010 2011 2012 2015 2017 2010 2011 2012 2015 2017 TRUE TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE FALSE Now, the idea would be for each row - find out the first time when a "TRUE" is followed by a "FALSE" - and then I would try to find some way to accomplish my task. But I am not sure how to proceed from here. Can someone please show me how to do this? Thanks!
[ "library(dplyr)\n\nmy_data %>% \n group_by(name) %>% \n filter(c(1,diff(year)) == 1)\n\n# A tibble: 8 x 2\n# Groups: name [2]\n name year\n <chr> <dbl>\n1 john 2010\n2 john 2011\n3 john 2012\n4 alex 2014\n5 alex 2015\n6 alex 2016\n7 alex 2017\n8 alex 2018\n\n" ]
[ 2 ]
[]
[]
[ "data_manipulation", "r" ]
stackoverflow_0074670181_data_manipulation_r.txt
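One caveat with the diff() filter above: it keeps any row whose year follows its predecessor by exactly one, even after a gap (e.g. years 2010, 2012, 2013 would drop 2012 but keep 2013). If you strictly want everything up to the first non-consecutive year, dplyr::cumall() stops at the first break; a sketch:

library(dplyr)

my_data %>%
  group_by(name) %>%
  filter(cumall(c(1, diff(year)) == 1)) %>%
  ungroup()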
Q: Simple question, Finding word count within file trouble I've looked up a simple way to get the word count of a file, but it keeps giving me zero output: script A: This should hopefully solve your problem. The split is only set to count words separated by spaces. Hope this helps! fileName = input('Enter File Name: ') with open(fileName,'r') as file: lineCnt = 0 wordCnt = 0 for i in file: lineCnt += 1 for j in i.split(): wordCnt += 1 print(lineCnt,wordCnt)
Simple question, Finding word count within file trouble
I've looked up a simple way to get the word count of a file, but it keeps giving me zero output: script
[ "This should hopefully solve your problem. The split is only set to count words separated by spaces. Hope this helps!\nfileName = input('Enter File Name: ')\nwith open(fileName,'r') as file:\n lineCnt = 0\n wordCnt = 0\n for i in file:\n lineCnt += 1\n for j in i.split():\n wordCnt += 1\nprint(lineCnt,wordCnt)\n\n" ]
[ 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0074670150_file_python.txt
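A compact variant of the same counting logic, assuming whitespace-separated words as in the answer:

file_name = input('Enter File Name: ')
with open(file_name) as file:
    lines = file.readlines()
word_count = sum(len(line.split()) for line in lines)  # words per line, summed
print(len(lines), word_count)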
Q: View time taken of last command in fish shell I'm aware that I can use time as time <some command> But this requires me to remember to type time before <some command> I'm wondering if it's possible create some sort of hook, so that time is run for every command, but only displayed if I enter last-time or something at the cli. An example usage might be: $ sleep 2 $ last-time ________________________________________________________ Executed in 2.00 secs fish external usr time 2.28 millis 912.00 micros 1.37 millis sys time 0.30 millis 296.00 micros 0.00 millis A: The $CMD_DURATION variable contains the duration of the last-run interactive command, in milliseconds. $ sleep 2 $ echo $CMD_DURATION 2014 Docs are here.
View time taken of last command in fish shell
I'm aware that I can use time as time <some command> But this requires me to remember to type time before <some command> I'm wondering if it's possible create some sort of hook, so that time is run for every command, but only displayed if I enter last-time or something at the cli. An example usage might be: $ sleep 2 $ last-time ________________________________________________________ Executed in 2.00 secs fish external usr time 2.28 millis 912.00 micros 1.37 millis sys time 0.30 millis 296.00 micros 0.00 millis
[ "The $CMD_DURATION variable contains the duration of the last-run interactive command, in milliseconds.\n$ sleep 2\n$ echo $CMD_DURATION\n2014\n\nDocs are here.\n" ]
[ 1 ]
[]
[]
[ "fish", "time" ]
stackoverflow_0074668999_fish_time.txt
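Wrapping the variable in a function gives roughly the last-time command the question asks for. $CMD_DURATION is only updated after an interactive command finishes, so inside the function it still refers to the previous command; a sketch:

# ~/.config/fish/functions/last-time.fish
function last-time
    echo "Executed in $CMD_DURATION ms"
end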
Q: Unity: How to destroy an Instantiated object in a different function I'm trying to make a game which is spawning an object and after the object is destroyed, another object spawns right away. But right now I'm trying to destroy an instantiate object in a different function, and it is not working. ` public GameObject[] food; public Vector3Int spawnPosition; public void Start() { SpawnFood(); } //Spawning food public void SpawnFood() { int random = Random.Range(0, food.Length); GameObject clone = (GameObject)Instantiate(food[random], this.spawnPosition, Quaternion.identity); } private void Update() { if(Input.GetKeyDown(KeyCode.C)) { Destroy(this.gameObject); } } ` I have tried to do some research on this and still, I can only find the solution for destroying an object inside the same function as the Instantiate. A: When you call Destroy(this.gameObject), the game object the script attacting to will be destroyed, and after that you cannot call the script. I guess what you want to destroy is the food game object, not the game object the script you showed here attacting to. A quick adjustment to suit your need maybe : ... private GameObject clone ... private void Update() { if(Input.GetKeyDown(KeyCode.C)) { if (clone != null) Destroy(clone); } } And as @Daniel seggested, if you will repeatedly instantiate/destroy food game object, the better way would be just use the same food object and change it's properties (e.g. location...) to create the new food game object pop-up illusion. The key idea here is called Object Pooling. A: Here is your answer : declare a public Gameobject: public GameObject clone; and replace GameObject clone = (GameObject)Instantiate(food[random], this.spawnPosition, Quaternion.identity); with GameObject clone = Instantiate(food[random], this.spawnPosition, Quaternion.identity); and then you can destroy in another function or class public void DestroyFood() { Destroy(clone); //you can instantiate another gameobject here }
Unity: How to destroy an Instantiated object in a different function
I'm trying to make a game which is spawning an object and after the object is destroyed, another object spawns right away. But right now I'm trying to destroy an instantiate object in a different function, and it is not working. ` public GameObject[] food; public Vector3Int spawnPosition; public void Start() { SpawnFood(); } //Spawning food public void SpawnFood() { int random = Random.Range(0, food.Length); GameObject clone = (GameObject)Instantiate(food[random], this.spawnPosition, Quaternion.identity); } private void Update() { if(Input.GetKeyDown(KeyCode.C)) { Destroy(this.gameObject); } } ` I have tried to do some research on this and still, I can only find the solution for destroying an object inside the same function as the Instantiate.
[ "When you call Destroy(this.gameObject), the game object the script is attached to will be destroyed, and after that you cannot call the script.\nI guess what you want to destroy is the food game object, not the game object the script you showed here is attached to.\nA quick adjustment to suit your need may be:\n...\nprivate GameObject clone\n...\nprivate void Update()\n{\n    if(Input.GetKeyDown(KeyCode.C))\n    {\n        if (clone != null)\n            Destroy(clone);\n    }\n}\n\nAnd as @Daniel suggested, if you will repeatedly instantiate/destroy the food game object, the better way would be to just reuse the same food object and change its properties (e.g. location...) to create the new-food pop-up illusion.\nThe key idea here is called Object Pooling.\n", "Here is your answer:\ndeclare a public GameObject field:\npublic GameObject clone;\n\nand replace\nGameObject clone = (GameObject)Instantiate(food[random], this.spawnPosition, Quaternion.identity);\n\nwith\nclone = Instantiate(food[random], this.spawnPosition, Quaternion.identity);\n\nso the result is assigned to the field instead of a new local variable, and then you can destroy it in another function or class\npublic void DestroyFood()\n{\n    Destroy(clone);\n    //you can instantiate another gameobject here\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "c#", "unity3d" ]
stackoverflow_0074663905_c#_unity3d.txt
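Putting the answers together with the question's respawn-after-destroy goal, a C# sketch; field and method names follow the question's script, and the immediate respawn is an assumption about the desired behaviour:

public GameObject[] food;
public Vector3Int spawnPosition;
private GameObject clone; // reference to the currently spawned food

public void Start()
{
    SpawnFood();
}

public void SpawnFood()
{
    int random = Random.Range(0, food.Length);
    clone = Instantiate(food[random], spawnPosition, Quaternion.identity);
}

private void Update()
{
    if (Input.GetKeyDown(KeyCode.C) && clone != null)
    {
        Destroy(clone);
        SpawnFood(); // spawn the next object right away
    }
}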
Q: Paypal refund using uc_paypal.modue in drupal with API drupal has a module called uc_paypal.module which has this part of the code. // Sends a request to PayPal and returns a response array. function uc_paypal_api_request($request, $server) { $request['USER'] = variable_get('uc_paypal_api_username', ''); $request['PWD'] = variable_get('uc_paypal_api_password', ''); $request['VERSION'] = '3.0'; $request['SIGNATURE'] = variable_get('uc_paypal_api_signature', ''); $data = ''; foreach ($request as $key => $value) { $data .= $key .'='. urlencode(ereg_replace(',', '', $value)) .'&'; } $data = substr($data, 0, -1); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $server); curl_setopt($ch, CURLOPT_VERBOSE, 0); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $data); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0); curl_setopt($ch, CURLOPT_NOPROGRESS, 1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION,0); $response = curl_exec($ch); if ($error = curl_error($ch)) { watchdog('uc_paypal', $error, WATCHDOG_ERROR); } curl_close($ch); return _uc_paypal_nvp_to_array($response); } What I would like to do is create a link button that can refund a completed order. I found this API code example. curl -v -X POST https://api-m.sandbox.paypal.com/v2/payments/captures/2GG279541U471931P/refund \ -H "Content-Type: application/json" \ -H "Authorization: Bearer <Access-Token>" \ -H "PayPal-Request-Id: 123e4567-e89b-12d3-a456-426655440020" \ -d '{ "amount": { "value": "10.00", "currency_code": "USD" }, "invoice_id": "INVOICE-123", "note_to_payer": "DefectiveProduct", "payment_instruction": { "platform_fees": [ { "amount": { "currency_code": "USD", "value": "1.00" } } ] } }' What I would like to do is load the data from the database using this code to get the TxnId for the order. $result = db_query("SELECT order_id, txn_id FROM uc_payment_paypal_ipn WHERE status = 'Completed' AND uid = %d AND order_id = %d", $uid, $order->order_id); And join all three items into one PHP call to Paypal to refund the order. I just don't know how to join those codes together. A: That code example is for the current REST API, which uses a clientid and secret for authentication to first obtain an access token. The uc_paypal.module uses the much older (15+ years) classic NVP API, which authenticates with a USER, PWD, SIGNATURE. It's generally not a good idea to mix very old and current APIs. Here's the API reference for a classic API RefundTransaction call.
Paypal refund using uc_paypal.module in drupal with API
drupal has a module called uc_paypal.module which has this part of the code. // Sends a request to PayPal and returns a response array. function uc_paypal_api_request($request, $server) { $request['USER'] = variable_get('uc_paypal_api_username', ''); $request['PWD'] = variable_get('uc_paypal_api_password', ''); $request['VERSION'] = '3.0'; $request['SIGNATURE'] = variable_get('uc_paypal_api_signature', ''); $data = ''; foreach ($request as $key => $value) { $data .= $key .'='. urlencode(ereg_replace(',', '', $value)) .'&'; } $data = substr($data, 0, -1); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $server); curl_setopt($ch, CURLOPT_VERBOSE, 0); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $data); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0); curl_setopt($ch, CURLOPT_NOPROGRESS, 1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION,0); $response = curl_exec($ch); if ($error = curl_error($ch)) { watchdog('uc_paypal', $error, WATCHDOG_ERROR); } curl_close($ch); return _uc_paypal_nvp_to_array($response); } What I would like to do is create a link button that can refund a completed order. I found this API code example. curl -v -X POST https://api-m.sandbox.paypal.com/v2/payments/captures/2GG279541U471931P/refund \ -H "Content-Type: application/json" \ -H "Authorization: Bearer <Access-Token>" \ -H "PayPal-Request-Id: 123e4567-e89b-12d3-a456-426655440020" \ -d '{ "amount": { "value": "10.00", "currency_code": "USD" }, "invoice_id": "INVOICE-123", "note_to_payer": "DefectiveProduct", "payment_instruction": { "platform_fees": [ { "amount": { "currency_code": "USD", "value": "1.00" } } ] } }' What I would like to do is load the data from the database using this code to get the TxnId for the order. $result = db_query("SELECT order_id, txn_id FROM uc_payment_paypal_ipn WHERE status = 'Completed' AND uid = %d AND order_id = %d", $uid, $order->order_id); And join all three items into one PHP call to Paypal to refund the order. I just don't know how to join those codes together.
[ "That code example is for the current REST API, which uses a clientid and secret for authentication to first obtain an access token.\nThe uc_paypal.module uses the much older (15+ years) classic NVP API, which authenticates with a USER, PWD, SIGNATURE.\nIt's generally not a good idea to mix very old and current APIs. Here's the API reference for a classic API RefundTransaction call.\n" ]
[ 1 ]
[]
[]
[ "button", "drupal", "paypal", "php" ]
stackoverflow_0074670206_button_drupal_paypal_php.txt
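For orientation only, the classic NVP RefundTransaction call the answer points to boils down to a single signed form POST. Below is a minimal sketch in Python; the sandbox endpoint, VERSION value, and placeholder credentials are assumptions to verify against the linked NVP reference, and in the Drupal module you would route the same parameter set through uc_paypal_api_request instead:

import requests  # any HTTP client works; shown with the requests library

# Parameter names follow the classic NVP RefundTransaction reference;
# endpoint and VERSION below are assumptions to double-check.
params = {
    "METHOD": "RefundTransaction",
    "VERSION": "94.0",
    "USER": "<api-username>",
    "PWD": "<api-password>",
    "SIGNATURE": "<api-signature>",
    "TRANSACTIONID": "<txn_id from uc_payment_paypal_ipn>",
    "REFUNDTYPE": "Full",  # or "Partial" together with AMT and CURRENCYCODE
}
resp = requests.post("https://api-3t.sandbox.paypal.com/nvp", data=params, timeout=30)
print(resp.text)  # NVP response string, e.g. ACK=Success&REFUNDTRANSACTIONID=...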
Q: How might I get detailed database error messages from dplyr::tbl? I'm using R to plot some data I pull out of a database (the Stack Exchange data dump, to be specific): dplyr::tbl(serverfault, dbplyr::sql(" select year(p.CreationDate) year, avg(p.AnswerCount*1.0) answers_per_question, sum(iif(ClosedDate is null, 0.0, 100.0))/count(*) close_rate from Posts p where PostTypeId = 1 group by year(p.CreationDate) order by year(p.CreationDate) ")) The query works fine on SEDE, but I get this error in the R console: Error: <SQL> 'SELECT * FROM ( select year(p.CreationDate) year, avg(p.AnswerCount*1.0) answers_per_question, sum(iif(ClosedDate is null, 0.0, 100.0))/count(*) close_rate from Posts p where PostTypeId = 1 group by year(p.CreationDate) order by year(p.CreationDate) ) "zzz11" WHERE (0 = 1)' nanodbc/nanodbc.cpp:1587: 42000: [FreeTDS][SQL Server]Statement(s) could not be prepared. I reckoned "Statement(s) could not be prepared." meant that SQL Server didn't like the query for some reason. Unfortunately, it didn't give any hint about what went wrong. After fiddling with the query for a bit, I noticed it was wrapped in a subselect, according to the error message. Copying and executing the full query as constructed by one of the libraries in the chain, SQL Server gave me this more informative error message: The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified. Now the solution is obvious: remove (or comment out) the order by clause. But where is the detailed error message in the R console? I'm using Rstudio, should that matter. If I could get the full exception right next to the code I'm working on, it would help me fix bug a lot quicker. (And just to be clear, I get cryptic errors from dplyr::tbl often and typically use binary search debugging to fix them.) A: The error message that you are seeing in the R console is likely coming from the nanodbc package, which is a popular R package for connecting to databases. The message is saying that the SQL statement that was sent to the database couldn't be prepared, which means that the database rejected it because it was syntactically incorrect or otherwise not valid. The reason that the full error message from the database isn't being shown in the R console is likely because the nanodbc package is catching the error and formatting it in a way that is more user-friendly. However, it sounds like in this case the error message isn't very helpful. If you want to see the full error message from the database, you can try adding the following line of code after your tbl call: dbplyr::last_query(serverfault) This will print out the last SQL query that was sent to the database. This should include the full error message from the database, which should give you more information about why the query is failing. Alternatively, you can try using the dbGetQuery function from the DBI package instead of tbl from the dplyr package. This will send the query directly to the database without any additional processing, so you should be able to see the full error message in the R console. 
Here's how you would use dbGetQuery to execute your query: library(DBI) query <- " SELECT year(p.CreationDate) year, avg(p.AnswerCount*1.0) answers_per_question, sum(iif(ClosedDate is null, 0.0, 100.0))/count(*) close_rate FROM Posts p WHERE PostTypeId = 1 GROUP BY year(p.CreationDate) -- ORDER BY year(p.CreationDate) " dbGetQuery(serverfault, query) I've commented out the ORDER BY clause in the example above, but you can remove the comment if you want to include it in your query. I hope this helps!
How might I get detailed database error messages from dplyr::tbl?
I'm using R to plot some data I pull out of a database (the Stack Exchange data dump, to be specific): dplyr::tbl(serverfault, dbplyr::sql(" select year(p.CreationDate) year, avg(p.AnswerCount*1.0) answers_per_question, sum(iif(ClosedDate is null, 0.0, 100.0))/count(*) close_rate from Posts p where PostTypeId = 1 group by year(p.CreationDate) order by year(p.CreationDate) ")) The query works fine on SEDE, but I get this error in the R console: Error: <SQL> 'SELECT * FROM ( select year(p.CreationDate) year, avg(p.AnswerCount*1.0) answers_per_question, sum(iif(ClosedDate is null, 0.0, 100.0))/count(*) close_rate from Posts p where PostTypeId = 1 group by year(p.CreationDate) order by year(p.CreationDate) ) "zzz11" WHERE (0 = 1)' nanodbc/nanodbc.cpp:1587: 42000: [FreeTDS][SQL Server]Statement(s) could not be prepared. I reckoned "Statement(s) could not be prepared." meant that SQL Server didn't like the query for some reason. Unfortunately, it didn't give any hint about what went wrong. After fiddling with the query for a bit, I noticed it was wrapped in a subselect, according to the error message. Copying and executing the full query as constructed by one of the libraries in the chain, SQL Server gave me this more informative error message: The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified. Now the solution is obvious: remove (or comment out) the order by clause. But where is the detailed error message in the R console? I'm using Rstudio, should that matter. If I could get the full exception right next to the code I'm working on, it would help me fix bug a lot quicker. (And just to be clear, I get cryptic errors from dplyr::tbl often and typically use binary search debugging to fix them.)
[ "The error message that you are seeing in the R console is likely coming from the nanodbc package, which is a popular R package for connecting to databases. The message is saying that the SQL statement that was sent to the database couldn't be prepared, which means that the database rejected it because it was syntactically incorrect or otherwise not valid.\nThe reason that the full error message from the database isn't being shown in the R console is likely because the nanodbc package is catching the error and formatting it in a way that is more user-friendly. However, it sounds like in this case the error message isn't very helpful.\nIf you want to see the full error message from the database, you can try adding the following line of code after your tbl call:\ndbplyr::last_query(serverfault)\n\nThis will print out the last SQL query that was sent to the database. This should include the full error message from the database, which should give you more information about why the query is failing.\nAlternatively, you can try using the dbGetQuery function from the DBI package instead of tbl from the dplyr package. This will send the query directly to the database without any additional processing, so you should be able to see the full error message in the R console. Here's how you would use dbGetQuery to execute your query:\nlibrary(DBI)\n\nquery <- \"\nSELECT year(p.CreationDate) year,\n avg(p.AnswerCount*1.0) answers_per_question,\n sum(iif(ClosedDate is null, 0.0, 100.0))/count(*) close_rate\nFROM Posts p\nWHERE PostTypeId = 1\nGROUP BY year(p.CreationDate)\n-- ORDER BY year(p.CreationDate)\n\"\n\ndbGetQuery(serverfault, query)\n\nI've commented out the ORDER BY clause in the example above, but you can remove the comment if you want to include it in your query.\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "dbplyr", "dplyr", "r" ]
stackoverflow_0051792750_dbplyr_dplyr_r.txt
Q: iCloud and Core Data Error (Ubiquity: Didn't get baseline metadata back from metadata url) I'm having a bit of trouble trying to get iCloud working with my app. I tried following Tim Roadley's example here, but still get the log below appearing whenever the app is launched via Xcode (syncing did work briefly, but has ceased to work now). [PFUbiquityBaseline metadataFromCurrentBaselineForStoreWithName:modelVersionHash:andUbiquityRootLocation:withError:](1091): CoreData: Ubiquity: Didn't get baseline metadata back from metadata url: file://localhost/private/var/mobile/Library/Mobile%20Documents/<TEAM ID>~samburnstone~Staff-Manager/Logs/.baseline/current.nosync/<TEAM ID>.samburnstone.StaffManager/g9TNo_uNFuNyltbjcAmDaFE7wl~6a2eGmW6uKIZCC1s= /baseline.meta Error: (null) (TEAM ID is my alphanumeric sequence of characters that can be found in Apple's Member Center) If anyone has any idea what could be causing this I'd be very grateful. Thanks! A: Check if you the following applies to your issue - it helped me: http://mentalfaculty.tumblr.com/post/25241910449/under-the-sheets-with-icloud-and-core-data Without seeing your code it's hard to comment on the actual issue. A: You can also try this below code and call it every time you launch the app: PFUbiquityBaseline metadataFromCurrentBaselineForStoreWithName:modelVersionHash:andUbiquityRootLocation:withError: if (![[NSFileManager defaultManager] fileExistsAtPath:[[NSURL fileURLWithPath:[[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:@"Logs"]] URLByAppendingPathComponent:@"current.nosync"].path]) { [PFUbiquityBaseline metadataFromCurrentBaselineForStoreWithName:@"AppStore" modelVersionHash:[[NSBundle mainBundle] objectForInfoDictionaryKey:@"CFBundleShortVersionString"] andUbiquityRootLocation:[[NSURL fileURLWithPath:[[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:@"Logs"]] URLByAppendingPathComponent:@"current.nosync"] withError:nil]; } A: This error may occur if there is a problem with the metadata file for your iCloud-enabled Core Data app. The metadata file contains information about the version of the Core Data model and the location of the data store, and it is used by iCloud to manage data synchronization. To fix this error, you can try the following steps: Check the location of the metadata file specified in your app's code. Make sure that it is correct and that the file exists at that location. If the file does not exist, try creating it manually. You can do this by using the -createUbiquityBaseline option with the mdd command-line tool. For example: mdd -createUbiquityBaseline /path/to/metadata/file If the file already exists, try deleting it and then running the app again. This will force iCloud to re-create the metadata file, which may fix any issues with it. If the error persists, you may need to reset the iCloud data for your app. To do this, you can use the -resetUbiquityMetadata option with the mdd tool. For example: mdd -resetUbiquityMetadata /path/to/metadata/file This will reset the iCloud data for your app, including the metadata file. Note that this will also delete any data that is currently being synced with iCloud, so you should only use this option as a last resort.
iCloud and Core Data Error (Ubiquity: Didn't get baseline metadata back from metadata url)
I'm having a bit of trouble trying to get iCloud working with my app. I tried following Tim Roadley's example here, but still get the log below appearing whenever the app is launched via Xcode (syncing did work briefly, but has ceased to work now). [PFUbiquityBaseline metadataFromCurrentBaselineForStoreWithName:modelVersionHash:andUbiquityRootLocation:withError:](1091): CoreData: Ubiquity: Didn't get baseline metadata back from metadata url: file://localhost/private/var/mobile/Library/Mobile%20Documents/<TEAM ID>~samburnstone~Staff-Manager/Logs/.baseline/current.nosync/<TEAM ID>.samburnstone.StaffManager/g9TNo_uNFuNyltbjcAmDaFE7wl~6a2eGmW6uKIZCC1s= /baseline.meta Error: (null) (TEAM ID is my alphanumeric sequence of characters that can be found in Apple's Member Center) If anyone has any idea what could be causing this I'd be very grateful. Thanks!
[ "Check if you the following applies to your issue - it helped me:\nhttp://mentalfaculty.tumblr.com/post/25241910449/under-the-sheets-with-icloud-and-core-data\nWithout seeing your code it's hard to comment on the actual issue.\n", "You can also try this below code and call it every time you launch the app:\n\nPFUbiquityBaseline metadataFromCurrentBaselineForStoreWithName:modelVersionHash:andUbiquityRootLocation:withError:\n\nif (![[NSFileManager defaultManager] fileExistsAtPath:[[NSURL fileURLWithPath:[[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:@\"Logs\"]] URLByAppendingPathComponent:@\"current.nosync\"].path])\n {\n [PFUbiquityBaseline metadataFromCurrentBaselineForStoreWithName:@\"AppStore\" modelVersionHash:[[NSBundle mainBundle] objectForInfoDictionaryKey:@\"CFBundleShortVersionString\"] andUbiquityRootLocation:[[NSURL fileURLWithPath:[[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:@\"Logs\"]] URLByAppendingPathComponent:@\"current.nosync\"] withError:nil];\n }\n\n", "This error may occur if there is a problem with the metadata file for your iCloud-enabled Core Data app. The metadata file contains information about the version of the Core Data model and the location of the data store, and it is used by iCloud to manage data synchronization.\nTo fix this error, you can try the following steps:\n\nCheck the location of the metadata file specified in your app's\ncode. Make sure that it is correct and that the file exists at that\nlocation.\n\nIf the file does not exist, try creating it manually. You can do\nthis by using the -createUbiquityBaseline option with the mdd\ncommand-line tool. For example:\nmdd -createUbiquityBaseline /path/to/metadata/file\n\nIf the file already exists, try deleting it and then running the app\nagain. This will force iCloud to re-create the metadata file, which\nmay fix any issues with it.\n\nIf the error persists, you may need to reset the iCloud data for\nyour app. To do this, you can use the -resetUbiquityMetadata option\nwith the mdd tool. For example:\nmdd -resetUbiquityMetadata /path/to/metadata/file\n\n\nThis will reset the iCloud data for your app, including the metadata file. Note that this will also delete any data that is currently being synced with iCloud, so you should only use this option as a last resort.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "core_data", "icloud", "ios" ]
stackoverflow_0011827565_core_data_icloud_ios.txt
Q: Typescript function return type as the return type of the function in a param I have a function that accepts a callback as the parameter, then the function will return the result of the callback. The type returned by the first function is the return type of the callback function:
Here is an example:
const useMethod = <
 Data,
 Error = unknown,
 TReturn = unknown,
>(
 fn: (options?: Options<Data, Error>) => TReturn,
 options?: Options<Data, Error>,
): TReturn => {
 return fn(options);
};

const callback1 = () => {
 return "Hello"
}

const callback2 = () => {
 return {hello: "World"}
}

const callback3 = () => {
 return 100;
}

useMethod(callback1); // This will be string
useMethod(callback2); // This will be {hello: string}
useMethod(callback3); // This will be number

The issue I'm getting into is that if I pass a type to the first type argument, the return type becomes the default ( unknown in this case ) and not the one of the callback.
useMethod<{
 param: string;
}>(callback1); // This should be string but getting unknown

useMethod<string>(callback2); // This should be {hello: string} but getting unknown

useMethod<boolean>(callback3); // This should be number but getting unknown

Why does this happen? How can I avoid this?
Thanks for any help

A: Typescript generics cannot be inferred partially
You either write no <T> at all, or V in <T, V=D> will be set to D as if you wrote <T, D> instead of inferring as you expect
Typescript: infer type of generic after optional first generic
Typescript function return type as the return type of the function in a param
I have a function that accepts a callback as the parameter, then the function will return the result of the callback. The type returned by the first function is the return type of the callback function:
Here is an example:
const useMethod = <
 Data,
 Error = unknown,
 TReturn = unknown,
>(
 fn: (options?: Options<Data, Error>) => TReturn,
 options?: Options<Data, Error>,
): TReturn => {
 return fn(options);
};

const callback1 = () => {
 return "Hello"
}

const callback2 = () => {
 return {hello: "World"}
}

const callback3 = () => {
 return 100;
}

useMethod(callback1); // This will be string
useMethod(callback2); // This will be {hello: string}
useMethod(callback3); // This will be number

The issue I'm getting into is that if I pass a type to the first type argument, the return type becomes the default ( unknown in this case ) and not the one of the callback.
useMethod<{
 param: string;
}>(callback1); // This should be string but getting unknown

useMethod<string>(callback2); // This should be {hello: string} but getting unknown

useMethod<boolean>(callback3); // This should be number but getting unknown

Why does this happen? How can I avoid this?
Thanks for any help
[ "Typescript generics cannot be inferred partially\nYou either write no <T> at all, or V in <T, V=D> will be set to D as if you wrote <T, D> instead of inferring as you expect\nTypescript: infer type of generic after optional first generic\n" ]
[ 0 ]
[]
[]
[ "typescript", "typescript_generics" ]
stackoverflow_0074668027_typescript_typescript_generics.txt
Q: generate random Poisson numbers with given expected value M(x) I need to generate a hundred random numbers, and each number's max value is 80; I can't figure out how to connect the Poisson method with an expected value that equals 28. That means the sum is 28 * 100 = 2800. So M(x) is 28. But that means I need to track every generated number before? I noticed that D(x) equals M(x), but in some examples these values are different.
I found this answer that could help me, but it doesn't run
const maxLimit = 80;

function poissonRandomNumber(lambda) {
 var L = Math.exp(-lambda),
 k = 0,
 p = 1;

 do {
 k = k + 1;
 p = p * Math.random();
 console.log(p, k);
 } while (p > L);

 return Math.min(k, maxLimit);
 // return k - 1;
}

var res = poissonRandomNumber(maxLimit);
console.log("res", res);

A: It looks like you are trying to generate random numbers from a Poisson distribution with a mean of 28 and a maximum value of 80.
The Poisson distribution is a probability distribution that is often used to model the number of times an event occurs within a given time interval. It is a discrete distribution, which means that the possible values are whole numbers (integers) rather than a continuous range of values.
To generate random numbers from a Poisson distribution, you can use the following algorithm:

Calculate the expected value (mean) of the distribution, which is
denoted by λ. In your case, the expected value is 28.
Calculate the probability of each possible value using the formula
P(x) = e^-λ * λ^x / x!, where x is the possible value and e is the
base of the natural logarithm (about 2.718).
Generate a random number between 0 and 1.
Calculate the cumulative probability of each possible value by
adding up the probabilities of all the values from 0 to x.
If the random number generated in step 3 is less than or equal to
the cumulative probability of a given value, return that value as
the random number.
Repeat the process until you have generated the desired number of
random numbers.

In your code, you are using a do-while loop to generate the random numbers, but you are not using the formula to calculate the probabilities or the cumulative probabilities. Instead, you are just generating random numbers and checking if they are less than or equal to e^-lambda. This is not correct and will not give you the desired result.
To fix your code, you can try implementing the steps described above. For example, you could add a function that calculates the probability of a given value x:
function poissonProbability(lambda, x) {
 return Math.exp(-lambda) * Math.pow(lambda, x) / factorial(x);
}

where factorial(x) is a function that calculates the factorial of a number (the product of all the positive integers from 1 to x).
You can then use this function to calculate the cumulative probability of each value, and return the first value for which the cumulative probability is greater than or equal to the random number generated in step 3.
generate random Poisson numbers with given expected value M(x)
I need to generate a hundred random numbers, and each number's max value is 80; I can't figure out how to connect the Poisson method with an expected value that equals 28. That means the sum is 28 * 100 = 2800. So M(x) is 28. But that means I need to track every generated number before? I noticed that D(x) equals M(x), but in some examples these values are different.
I found this answer that could help me, but it doesn't run
const maxLimit = 80;

function poissonRandomNumber(lambda) {
 var L = Math.exp(-lambda),
 k = 0,
 p = 1;

 do {
 k = k + 1;
 p = p * Math.random();
 console.log(p, k);
 } while (p > L);

 return Math.min(k, maxLimit);
 // return k - 1;
}

var res = poissonRandomNumber(maxLimit);
console.log("res", res);
[ "It looks like you are trying to generate random numbers from a Poisson distribution with a mean of 28 and a maximum value of 80.\nThe Poisson distribution is a probability distribution that is often used to model the number of times an event occurs within a given time interval. It is a discrete distribution, which means that the possible values are whole numbers (integers) rather than a continuous range of values.\nTo generate random numbers from a Poisson distribution, you can use the following algorithm:\n\nCalculate the expected value (mean) of the distribution, which is\ndenoted by λ. In your case, the expected value is 28.\nCalculate the probability of each possible value using the formula\nP(x) = e^-λ * λ^x / x!, where x is the possible value and e is the\nbase of the natural logarithm (about 2.718).\nGenerate a random number between 0 and 1.\nCalculate the cumulative probability of each possible value by\nadding up the probabilities of all the values from 0 to x.\nIf the random number generated in step 3 is less than or equal to\nthe cumulative probability of a given value, return that value as\nthe random number.\nRepeat the process until you have generated the desired number of\nrandom numbers.\n\nIn your code, you are using a do-while loop to generate the random numbers, but you are not using the formula to calculate the probabilities or the cumulative probabilities. Instead, you are just generating random numbers and checking if they are less than or equal to e^-lambda. This is not correct and will not give you the desired result.\nTo fix your code, you can try implementing the steps described above. For example, you could add a function that calculates the probability of a given value x:\nfunction poissonProbability(lambda, x) {\n return Math.exp(-lambda) * Math.pow(lambda, x) / factorial(x);\n}\n\nwhere factorial(x) is a function that calculates the factorial of a number (the product of all the positive integers from 1 to x).\nYou can then use this function to calculate the cumulative probability of each value, and return the first value for which the cumulative probability is greater than or equal to the random number generated in step 3.\n" ]
[ 1 ]
[]
[]
[ "javascript", "poisson" ]
stackoverflow_0074670114_javascript_poisson.txt
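For comparison, here is a minimal Python sketch of the same Knuth-style generator the question's JavaScript attempts. The key point the question misses is that the mean is set by lambda (28 here), not by the cap of 80 — the question's code passes maxLimit as lambda, so its mean comes out near 80; the cap only needs to truncate an already-correct sample:

import math
import random

def poisson_knuth(lam, max_value=80):
    # Knuth's method: multiply uniform draws until the product drops below e^-lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return min(k - 1, max_value)

samples = [poisson_knuth(28) for _ in range(100)]
# The sample mean should land near lam = 28; capping at 80 barely matters,
# since P(X > 80) is negligible for lam = 28.
print(sum(samples) / len(samples))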
Q: How to regex replace from a showdown extension in js? I am making a markdown (showdown js) extension for mathematics, expressions and plotting graphs. however I faced a problem here. I cannot replace a pattern by regex in the extension code, but I can replace it with the replace method! what I've tried: //load showdown module // https://cdnjs.cloudflare.com/ajax/libs/showdown/2.1.0 // i don't know how to load from js, i loaded from <script src="..."> // we replace {graph:<expression} into a div of class graph function mathext(){ return [{ type:"lang", regex:/{graph:(.*)}/gi, replace:"<div class='graph'>$1</div>" } ];} // load the extension showdown.extension("mathext",mathext); // create a converter let converter = new showdown.Converter(); // make a function to convert markdown to html with pre-configured extension function mathmd(md){ return converter.makeHtml(md); } // convert a ready markdown string to html. let mathmds="# hello\n`x^2+y^2=9`\n##hello2\n{graph:x^2+y^2=9}"; document.getElementById("mathmd").innerText=mathmd(mathmds); an html that runs this script looks like this: <head> <meta charset="utf-8"> <title>md</title> </head> <body> <div id="mathmd"></div> <script src="https://cdnjs.cloudflare.com/ajax/libs/showdown/2.1.0/showdown.min.js"></script> <script src="mark.js"></script> </body> </html> manual Here is the manual replacement that correctly works: let mathmds="# hello\n`x^2+y^2=9`\n##hello2\n{graph:x^2+y^2=9}"; let reg=/{graph:(.*)}/gi; let toreplace="<div class='graph'>$1</div>" alert(mathmds.replace(reg,toreplace)); Why does that happen? How to replace the pattern correctly? Thanks in advance. A: The problem with your code is that you are trying to use the replace method on the regex object, but regex does not have a replace method. You need to call the replace method on the string that you want to perform the replacement on. function mathext() { return [{ type: "lang", regex: /{graph:(.*)}/gi, replace: function(match, p1) { // p1 is the first capture group in the regex // In this case, it is the content inside the "{graph: }" braces return "<div class='graph'>" + p1 + "</div>"; } }]; } // load the extension showdown.extension("mathext", mathext); // create a converter let converter = new showdown.Converter(); // make a function to convert markdown to html with pre-configured extension function mathmd(md) { return converter.makeHtml(md); } // convert a ready markdown string to html. let mathmds = "# hello\n`x^2+y^2=9`\n##hello2\n{graph:x^2+y^2=9}"; document.getElementById("mathmd").innerText = mathmd(mathmds);
How to regex replace from a showdown extension in js?
I am making a markdown (showdown js) extension for mathematics, expressions and plotting graphs. however I faced a problem here. I cannot replace a pattern by regex in the extension code, but I can replace it with the replace method! what I've tried: //load showdown module // https://cdnjs.cloudflare.com/ajax/libs/showdown/2.1.0 // i don't know how to load from js, i loaded from <script src="..."> // we replace {graph:<expression} into a div of class graph function mathext(){ return [{ type:"lang", regex:/{graph:(.*)}/gi, replace:"<div class='graph'>$1</div>" } ];} // load the extension showdown.extension("mathext",mathext); // create a converter let converter = new showdown.Converter(); // make a function to convert markdown to html with pre-configured extension function mathmd(md){ return converter.makeHtml(md); } // convert a ready markdown string to html. let mathmds="# hello\n`x^2+y^2=9`\n##hello2\n{graph:x^2+y^2=9}"; document.getElementById("mathmd").innerText=mathmd(mathmds); an html that runs this script looks like this: <head> <meta charset="utf-8"> <title>md</title> </head> <body> <div id="mathmd"></div> <script src="https://cdnjs.cloudflare.com/ajax/libs/showdown/2.1.0/showdown.min.js"></script> <script src="mark.js"></script> </body> </html> manual Here is the manual replacement that correctly works: let mathmds="# hello\n`x^2+y^2=9`\n##hello2\n{graph:x^2+y^2=9}"; let reg=/{graph:(.*)}/gi; let toreplace="<div class='graph'>$1</div>" alert(mathmds.replace(reg,toreplace)); Why does that happen? How to replace the pattern correctly? Thanks in advance.
[ "The problem with your code is that you are trying to use the replace method on the regex object, but regex does not have a replace method. You need to call the replace method on the string that you want to perform the replacement on.\nfunction mathext() {\n return [{\n type: \"lang\",\n regex: /{graph:(.*)}/gi,\n replace: function(match, p1) {\n // p1 is the first capture group in the regex\n // In this case, it is the content inside the \"{graph: }\" braces\n return \"<div class='graph'>\" + p1 + \"</div>\";\n }\n }];\n}\n\n// load the extension\nshowdown.extension(\"mathext\", mathext);\n\n// create a converter\nlet converter = new showdown.Converter();\n\n// make a function to convert markdown to html with pre-configured extension\nfunction mathmd(md) {\n return converter.makeHtml(md);\n}\n\n// convert a ready markdown string to html.\nlet mathmds = \"# hello\\n`x^2+y^2=9`\\n##hello2\\n{graph:x^2+y^2=9}\";\ndocument.getElementById(\"mathmd\").innerText = mathmd(mathmds);\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "markdown", "regex", "showdown" ]
stackoverflow_0074665061_javascript_markdown_regex_showdown.txt
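The underlying mechanism in the accepted fix — a regex whose match feeds a callback that builds the replacement string — is language-agnostic. A minimal Python sketch of the same pattern-plus-callback idea, for comparison (the names here are illustrative, not part of showdown):

import re

md = "# hello\n{graph:x^2+y^2=9}"
# The callback receives the match object; group(1) is the expression
# captured inside the {graph: } braces.
html = re.sub(r"\{graph:(.*?)\}",
              lambda m: "<div class='graph'>" + m.group(1) + "</div>",
              md)
print(html)  # # hello
             # <div class='graph'>x^2+y^2=9</div>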
Q: how to convert this code substr in php to javascript how to convert this code substr in php to javascript? This code runs in PHP, but it cannot run in JavaScript.
PHP
<?php
$header='XII-40';
$next=1022;
$len=6;
$id = $header.substr("000000000$next", -$len);
print $id;

https://www.tehplayground.com/Fa2f6Jk0bP4UaLLz
const str = 'XII-14';
const next=123
const text1='00000000001'+next
const len =6
console.log(text1.substr(next, len));

A: This seems to work:
const str = 'XII-14';
const next = 123;
const text1 = '00000000001' + next;
const len = 6;
console.log(str + text1.substr(-len));

It returns:
XII-14001123
how to convert this code substr in php to javascript
how to convert this code substr in php to javascript? This code runs in PHP, but it cannot run in JavaScript.
PHP
<?php
$header='XII-40';
$next=1022;
$len=6;
$id = $header.substr("000000000$next", -$len);
print $id;

https://www.tehplayground.com/Fa2f6Jk0bP4UaLLz
const str = 'XII-14';
const next=123
const text1='00000000001'+next
const len =6
console.log(text1.substr(next, len));
[ "This seems to work:\nconst str = 'XII-14';\nconst next = 123;\nconst text1 = '00000000001' + next;\nconst len = 6;\nconsole.log(str + text1.substr(-len));\n\nIt returns:\nXII-14001123\n\n" ]
[ -1 ]
[]
[]
[ "javascript", "javascript_objects", "php" ]
stackoverflow_0074670161_javascript_javascript_objects_php.txt
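Both snippets are really just zero-padding a number to a fixed width before appending it to a prefix; modern JavaScript also offers String.prototype.padStart for exactly that. A minimal Python sketch of the underlying operation, for comparison:

header, nxt, width = "XII-40", 1022, 6
# :0{width}d zero-pads the integer to the requested width before concatenation
print(f"{header}{nxt:0{width}d}")  # XII-40001022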
Q: python selenium alternative for web actions I am constantly waiting for the page to load in web actions. Is there a way to do web actions faster without waiting for the page to load? When working with Selenium, it takes time to navigate from page to page.

A: If your use case does not require opening an actual browser and interacting with the webpage through simulated user input, then you can use HTTP requests to extract/manipulate data from the page. Popular modules for this are Requests and BeautifulSoup.
python selenium alternative for web actions
I am constantly waiting for the page to load in web actions. Is there a way to do web actions faster without waiting for the page to load? When working with Selenium, it takes time to navigate from page to page.
[ "If your use case does not require opening an actual browser and interacting with the webpage through simulated user input, then you can use HTTP requests to extract/manipulate data from the page. Popular modules for this are Requests and BeautifulSoup.\n" ]
[ 0 ]
[]
[]
[ "automation", "bots", "javascript", "python", "selenium" ]
stackoverflow_0074669684_automation_bots_javascript_python_selenium.txt
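A minimal sketch of the request-based approach the answer suggests, assuming the data you need is present in the server-rendered HTML (it will not see content that only appears after in-browser JavaScript runs; the URL is a placeholder):

import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com", timeout=10)  # placeholder URL
resp.raise_for_status()

# Parse the static HTML and pull out whatever you would otherwise
# have located with Selenium selectors.
soup = BeautifulSoup(resp.text, "html.parser")
for link in soup.select("a[href]"):
    print(link["href"])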
Q: PHP Mail - receiving wrong words on the mail server When I try to send mail from a web page - on the mail.bg and abv.bg mail I receive letters like this: РќРѕРІРѕ where I must receive "Ново"
What can I do so I can fix it - on gmail.com I receive the normal "Ново" and when I use echo - I also receive the normal "Ново", but on the mail.bg and abv.bg - I receive these strange words... is there something wrong with the encoding?
<?php
header('Content-Type: text/html; charset=utf-8');
header('Content-Transfer-Encoding: base64');

$errors = '';
$myemail = '[email protected]';//<-----Put Your email address here.
if(empty($_POST['name']) ||
 empty($_POST['email']) ||
 empty($_POST['message']))
{
 $errors .= "\n Error: all fields are required";
}

$name = $_POST['name'];
$email_address = $_POST['email'];
$message = $_POST['message'];
$subject = $_POST['subject'];

if (!preg_match(
"/^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,3})$/i",
$email_address))
{
 $errors .= "\n Error: Invalid email address";
}

if( empty($errors))
{
 $to = $myemail;
 $email_subject = "$subject, $name";
 $email_body = "You have received a new message. ".
 " Here are the details:\n Name: $name \n Email: $email_address \n Message \n $message";

 $headers = "From: $myemail\n";
 $headers .= "Reply-To: $email_address";
 $headers .= 'Content-Type: text/plain; charset=utf-8' . "\r\n";
 $headers .= 'Content-Transfer-Encoding: base64'. "\n\r\n";

 mail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);
 //redirect to the 'thank you' page
 header('Location: contact-form-thank-you.html');
}
?>

A: Your headers are specifying Content-Transfer-Encoding: base64 but you have not applied base64_encode on your contents. (in your case $email_body)
On the other hand, please use \r\n to separate respective header lines
Hence, change the block
$headers = "From: $myemail\n"; 
$headers .= "Reply-To: $email_address";
$headers .= 'Content-Type: text/plain; charset=utf-8' . "\r\n";
$headers .= 'Content-Transfer-Encoding: base64'. "\n\r\n";
mail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);

to
$email_body=base64_encode($email_body);

$headers = "From: $myemail\r\n"; 
$headers .= "Reply-To: $email_address\r\n";
$headers .= 'Content-Type: text/plain; charset=utf-8' . "\r\n";
$headers .= 'Content-Transfer-Encoding: base64'. "\r\n";
mail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);

Alternatively, remove Content-Transfer-Encoding: base64 specification, like this:
//$email_body=base64_encode($email_body);

$headers = "From: $myemail\r\n"; 
$headers .= "Reply-To: $email_address\r\n";
$headers .= 'Content-Type: text/plain; charset=utf-8' . "\r\n";
//$headers .= 'Content-Transfer-Encoding: base64'. "\r\n";
mail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);
PHP Mail - receiving wrong words on the mail server
When I try to send mail from a web page - on the mail.bg and abv.bg mail I receive letters like this: РќРѕРІРѕ where I must receive "Ново"
What can I do so I can fix it - on gmail.com I receive the normal "Ново" and when I use echo - I also receive the normal "Ново", but on the mail.bg and abv.bg - I receive these strange words... is there something wrong with the encoding?
<?php
header('Content-Type: text/html; charset=utf-8');
header('Content-Transfer-Encoding: base64');

$errors = '';
$myemail = '[email protected]';//<-----Put Your email address here.
if(empty($_POST['name']) ||
 empty($_POST['email']) ||
 empty($_POST['message']))
{
 $errors .= "\n Error: all fields are required";
}

$name = $_POST['name'];
$email_address = $_POST['email'];
$message = $_POST['message'];
$subject = $_POST['subject'];

if (!preg_match(
"/^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,3})$/i",
$email_address))
{
 $errors .= "\n Error: Invalid email address";
}

if( empty($errors))
{
 $to = $myemail;
 $email_subject = "$subject, $name";
 $email_body = "You have received a new message. ".
 " Here are the details:\n Name: $name \n Email: $email_address \n Message \n $message";

 $headers = "From: $myemail\n";
 $headers .= "Reply-To: $email_address";
 $headers .= 'Content-Type: text/plain; charset=utf-8' . "\r\n";
 $headers .= 'Content-Transfer-Encoding: base64'. "\n\r\n";

 mail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);
 //redirect to the 'thank you' page
 header('Location: contact-form-thank-you.html');
}
?>
[ "Your headers are specifying Content-Transfer-Encoding: base64 but you have not applied base64_encode on your contents. (in your case $email_body)\nOn the other hand, please use \\r\\n to separate respective header lines\nHence, change the block\n$headers = \"From: $myemail\\n\"; \n$headers .= \"Reply-To: $email_address\";\n$headers .= 'Content-Type: text/plain; charset=utf-8' . \"\\r\\n\";\n$headers .= 'Content-Transfer-Encoding: base64'. \"\\n\\r\\n\";\nmail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);\n\n\nto\n$email_body=base64_encode($email_body);\n\n$headers = \"From: $myemail\\r\\n\"; \n$headers .= \"Reply-To: $email_address\\r\\n\";\n$headers .= 'Content-Type: text/plain; charset=utf-8' . \"\\r\\n\";\n$headers .= 'Content-Transfer-Encoding: base64'. \"\\r\\n\";\nmail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);\n\nAlternatively, remove Content-Transfer-Encoding: base64 specification, like this:\n//$email_body=base64_encode($email_body);\n\n$headers = \"From: $myemail\\r\\n\"; \n$headers .= \"Reply-To: $email_address\\r\\n\";\n$headers .= 'Content-Type: text/plain; charset=utf-8' . \"\\r\\n\";\n//$headers .= 'Content-Transfer-Encoding: base64'. \"\\r\\n\";\nmail($to, '=?utf-8?B?'.base64_encode($email_subject).'?=', $email_body, $headers);\n\n" ]
[ 0 ]
[ "It looks like you are having an issue with the encoding of your email messages when you send them from your web page. In order to fix this, you can try adding the following line of code to your PHP script:\nmb_internal_encoding(\"UTF-8\");\n\nThis line of code sets the internal encoding of your script to UTF-8, which is a common character encoding that supports most languages, including Bulgarian. This should ensure that your email messages are properly encoded and displayed correctly on the mail.bg and abv.bg mail servers.\n" ]
[ -1 ]
[ "email", "encoding", "php", "phpmailer" ]
stackoverflow_0074669460_email_encoding_php_phpmailer.txt
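The general lesson — keep the declared Content-Transfer-Encoding consistent with what you actually send, or let a library manage the MIME headers for you — carries across languages. A minimal Python sketch for comparison (addresses and the SMTP host are placeholders):

from email.message import EmailMessage
import smtplib

msg = EmailMessage()
msg["From"] = "[email protected]"          # placeholder address
msg["To"] = "[email protected]"
msg["Subject"] = "Ново"                 # non-ASCII headers are RFC 2047-encoded for you
msg.set_content("Ново съобщение")       # charset and transfer encoding set automatically

with smtplib.SMTP("localhost") as server:  # placeholder SMTP host
    server.send_message(msg)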
Q: CSS padding property for svg elements I can't figure out how the CSS padding property is interpreted for svg elements. The following snippet (jsFiddle): <!DOCTYPE html> <meta charset="utf-8"> <title>noob d3</title> <style> svg{background-color:beige; padding:0px 0px 50px 50px;} rect{fill:red; stroke:none; shape-rendering:crispEdges;} </style> <body> <script src="//d3js.org/d3.v3.min.js"></script> <script> d3.select("body") .append("svg") .attr("width", 155) .attr("height", 105) .append("g") .append("rect") .attr("class", "frame") .attr("x", 50) .attr("y", 50) .attr("width", 50) .attr("height", 50); </script> </body> ... displays significantly differently in Firefox and Chrome. What's worse, neither display really makes sense to me: the size of the displayed svg element (the "beige" rectangle) looks to be significantly bigger than what I expected. So my question is two-fold: 1) How is the padding property of an svg element supposed to affect where things get drawn within it? 2) Is there a polyfill that will ensure that both Chrome and Firefox both handle padding in the same way? A: AFAIK, the SVG standard doesn't specify anything like padding, which is why it's handled inconsistently. Just set the SVG to the size you want (with padding) and maybe add a rect to make it appear like you want it to appear. A: From my experience (granted, still very little as I am still learning SVG), I have strayed away from using padding wherever that I could do so. It was suggested to me when I was first learning SVG that I use margin in place of padding, if possible. This is also because you can use display: block; and margin: 0 auto; to make the left and right sides of an SVG to fit directly into the middle of the screen. A: There is no padding or margin, but you can set x and y attributes such that the elements inside or outside get a padding and margin. For example, if an element starts at (0,0), starting at (10, 10) will automatically give a margin of 10. A: The best solution is open Inkscape (or other SVG editor) and change dimension A: Based on what I was able to try on firefox and chromium: the specified width and height for an svg include the padding. In other terms, if you want an image of 20*20px with a padding of 10px on each side, you should set the width to 20+10*2 = 40px (same thing with the height) and the padding to 10px Note : 20+10*2 : 20 is the width you want, 10 is your padding and you double it because you want it on both sides. A: You can apply padding to parent svg elements The padding as described by the OP actually works – albeit, not as desired. Outermost <svg> will be rendered with padding (won't work for nested svgs). But: child elements (e.g the <rect>) won't be re-aligned according to – unlike HTML DOM elements. 
svg { background-color: beige; max-height:20em; } .pdd{ padding: 0px 0px 50px 50px; } rect { fill: red; stroke: none; shape-rendering: crispEdges; } .borderBox{ box-sizing: border-box; } .overflow{ overflow:visible } <p>Rendered size: 205 x 155 – padding added to initial dimensions </p> <svg class="pdd" width="155" height="105"> <g> <rect class="frame" x="50" y="50" width="50" height="50" /> </g> </svg> <p>Rendered size: 155 x 105; cropped</p> <svg class="pdd borderBox" width="155" height="105"> <g> <rect class="frame" x="50" y="50" width="50" height="50" /> </g> </svg> <p>Rendered size: 155 x 105; cropped; overflow visible</p> <svg class="pdd borderBox overflow" width="155" height="105"> <g> <rect class="frame" x="50" y="50" width="50" height="50" /> </g> </svg> Usecase: padding for fluid svg layouts So, padding doesn't work well for fixed widths/heights. However, it can be handy for flexible/fluid layouts – provided you're using relative (percentage) units for svg child elements. *{ box-sizing:border-box; } svg{ border:1px solid #ccc; } svg { background-color: lightblue; padding:0 10px; overflow:visible; } .svg2 { padding:10px; } .svg3 { padding:0px; } .resize{ resize:both; overflow:auto; padding:1em; border:1px solid #ccc; } <p>resize me :</p> <div class="resize"> <svg id="svg" width="100%" height="40" xmlns="http://www.w3.org/2000/svg"> <circle cx="0" cy="10" r="5" /> <circle cx="0" cy="30" r="5" /> <circle cx="50%" cy="10" r="5" /> <circle cx="50%" cy="30" r="5" /> <circle cx="100%" cy="10" r="5" /> <circle cx="100%" cy="30" r="5" /> </svg> </div> <div class="resize"> <svg class="svg2" width="100%" height="100%" xmlns="http://www.w3.org/2000/svg"> <!-- align path center to x/y =0 by adding viewBox offset width/2 height/2 --> <symbol class="icon icon-home" id="iconHome" viewBox="20 20 40 40" overflow="visible"> <path d="M36.4 22.2l-5.2 0l0 13l-3.4 0l0-16.7l-7.7-8.7l-7.7 8.7l0 16.7l-3.4 0l0-13l-5.2 0l16.4-17.4z"></path> </symbol> <use x="0" y="0%" href="#iconHome" width="20" height="20" /> <use x="0" y="100%" href="#iconHome" width="20" height="20" /> <use x="50%" y="0%" href="#iconHome" width="20" height="20" /> <use x="50%" y="100%" href="#iconHome" width="20" height="20" /> <use x="100%" y="0%" href="#iconHome" width="20" height="20" /> <use x="100%" y="100%" href="#iconHome" width="20" height="20" /> </svg> </div>
CSS padding property for svg elements
I can't figure out how the CSS padding property is interpreted for svg elements. The following snippet (jsFiddle): <!DOCTYPE html> <meta charset="utf-8"> <title>noob d3</title> <style> svg{background-color:beige; padding:0px 0px 50px 50px;} rect{fill:red; stroke:none; shape-rendering:crispEdges;} </style> <body> <script src="//d3js.org/d3.v3.min.js"></script> <script> d3.select("body") .append("svg") .attr("width", 155) .attr("height", 105) .append("g") .append("rect") .attr("class", "frame") .attr("x", 50) .attr("y", 50) .attr("width", 50) .attr("height", 50); </script> </body> ... displays significantly differently in Firefox and Chrome. What's worse, neither display really makes sense to me: the size of the displayed svg element (the "beige" rectangle) looks to be significantly bigger than what I expected. So my question is two-fold: 1) How is the padding property of an svg element supposed to affect where things get drawn within it? 2) Is there a polyfill that will ensure that both Chrome and Firefox both handle padding in the same way?
[ "AFAIK, the SVG standard doesn't specify anything like padding, which is why it's handled inconsistently. Just set the SVG to the size you want (with padding) and maybe add a rect to make it appear like you want it to appear.\n", "From my experience (granted, still very little as I am still learning SVG), I have strayed away from using padding wherever that I could do so. It was suggested to me when I was first learning SVG that I use margin in place of padding, if possible.\nThis is also because you can use display: block; and margin: 0 auto; to make the left and right sides of an SVG to fit directly into the middle of the screen.\n", "There is no padding or margin, but you can set x and y attributes such that the elements inside or outside get a padding and margin. For example, if an element starts at (0,0), starting at (10, 10) will automatically give a margin of 10.\n", "The best solution is open Inkscape (or other SVG editor) and change dimension\n", "Based on what I was able to try on firefox and chromium: the specified width and height for an svg include the padding.\nIn other terms, if you want an image of 20*20px with a padding of 10px on each side, you should set the width to 20+10*2 = 40px (same thing with the height) and the padding to 10px\nNote : 20+10*2 : 20 is the width you want, 10 is your padding and you double it because you want it on both sides.\n", "You can apply padding to parent svg elements\nThe padding as described by the OP actually works – albeit, not as desired.\nOutermost <svg> will be rendered with padding (won't work for nested svgs).\nBut: child elements (e.g the <rect>) won't be re-aligned according to – unlike HTML DOM elements.\n\n\nsvg {\n background-color: beige;\n max-height:20em;\n}\n\n.pdd{\n padding: 0px 0px 50px 50px;\n}\n\nrect {\n fill: red;\n stroke: none;\n shape-rendering: crispEdges;\n}\n\n.borderBox{\n box-sizing: border-box;\n}\n\n.overflow{\noverflow:visible\n}\n<p>Rendered size: 205 x 155 – padding added to initial dimensions </p>\n<svg class=\"pdd\" width=\"155\" height=\"105\">\n <g>\n <rect class=\"frame\" x=\"50\" y=\"50\" width=\"50\" height=\"50\" />\n </g>\n</svg>\n\n<p>Rendered size: 155 x 105; cropped</p>\n<svg class=\"pdd borderBox\" width=\"155\" height=\"105\">\n <g>\n <rect class=\"frame\" x=\"50\" y=\"50\" width=\"50\" height=\"50\" />\n </g>\n</svg>\n\n<p>Rendered size: 155 x 105; cropped; overflow visible</p>\n<svg class=\"pdd borderBox overflow\" width=\"155\" height=\"105\">\n <g>\n <rect class=\"frame\" x=\"50\" y=\"50\" width=\"50\" height=\"50\" />\n </g>\n</svg>\n\n\n\nUsecase: padding for fluid svg layouts\nSo, padding doesn't work well for fixed widths/heights.\nHowever, it can be handy for flexible/fluid layouts – provided you're using relative (percentage) units for svg child elements.\n\n\n*{\n box-sizing:border-box;\n}\nsvg{\n border:1px solid #ccc;\n}\n\nsvg {\n background-color: lightblue;\n padding:0 10px;\n overflow:visible;\n}\n\n.svg2 {\n padding:10px;\n}\n\n.svg3 {\n padding:0px;\n}\n\n.resize{\n resize:both;\n overflow:auto;\n padding:1em;\n border:1px solid #ccc;\n}\n<p>resize me :</p>\n<div class=\"resize\">\n <svg id=\"svg\" width=\"100%\" height=\"40\" xmlns=\"http://www.w3.org/2000/svg\">\n <circle cx=\"0\" cy=\"10\" r=\"5\" />\n <circle cx=\"0\" cy=\"30\" r=\"5\" />\n\n <circle cx=\"50%\" cy=\"10\" r=\"5\" />\n <circle cx=\"50%\" cy=\"30\" r=\"5\" />\n\n <circle cx=\"100%\" cy=\"10\" r=\"5\" />\n <circle cx=\"100%\" cy=\"30\" r=\"5\" />\n </svg>\n\n</div>\n\n<div class=\"resize\">\n <svg 
class=\"svg2\" width=\"100%\" height=\"100%\" xmlns=\"http://www.w3.org/2000/svg\">\n <!-- align path center to x/y =0 by adding viewBox offset width/2 height/2 -->\n <symbol class=\"icon icon-home\" id=\"iconHome\" viewBox=\"20 20 40 40\" overflow=\"visible\">\n <path d=\"M36.4 22.2l-5.2 0l0 13l-3.4 0l0-16.7l-7.7-8.7l-7.7 8.7l0 16.7l-3.4 0l0-13l-5.2 0l16.4-17.4z\"></path>\n </symbol>\n\n <use x=\"0\" y=\"0%\" href=\"#iconHome\" width=\"20\" height=\"20\" />\n <use x=\"0\" y=\"100%\" href=\"#iconHome\" width=\"20\" height=\"20\" />\n \n <use x=\"50%\" y=\"0%\" href=\"#iconHome\" width=\"20\" height=\"20\" />\n <use x=\"50%\" y=\"100%\" href=\"#iconHome\" width=\"20\" height=\"20\" />\n \n <use x=\"100%\" y=\"0%\" href=\"#iconHome\" width=\"20\" height=\"20\" />\n <use x=\"100%\" y=\"100%\" href=\"#iconHome\" width=\"20\" height=\"20\" />\n </svg>\n</div>\n\n\n\n" ]
[ 34, 6, 1, 1, 0, 0 ]
[]
[]
[ "css", "d3.js", "firefox", "google_chrome", "svg" ]
stackoverflow_0019695136_css_d3.js_firefox_google_chrome_svg.txt
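One concrete way to get the effect the question wants without relying on CSS padding at all is to bake the breathing room into the viewBox, as several answers hint. A minimal Python sketch that generates such an SVG — the same arithmetic applies whatever produces the markup:

pad = 10
x, y, w, h = 0, 0, 100, 100
# Enlarging the viewBox by pad on every side leaves a uniform inner margin
# around content drawn in the original 0..100 coordinate space.
viewbox = f"{x - pad} {y - pad} {w + 2 * pad} {h + 2 * pad}"
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="155" height="105" '
       f'viewBox="{viewbox}">'
       f'<rect x="25" y="25" width="50" height="50" fill="red"/></svg>')
print(svg)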
Q: Transform For loop into while loop s=0
for i in range(3,20,2):
    if i>10:
        break
    else:
        s=s+i
print(s)

How can I transform this code into a while loop? I don't know how to include the step.

A: s = 0
i = 3
while i<10:
    s+=i
    i+=2
print(s)

A: If you want to break the loop when i>10, then why you're running the loop till 20? Any way you can try this
s,i=0,3
while i<=20:
    if i>10:
        break
    else:
        s=s+i
    i+=2
print(s)

A: Here's how you can transform the for loop into a while loop:
s = 0
i = 3
while i < 20:
    if i > 10:
        break
    else:
        s = s + i
    i += 2
print(s)

A: Why reinvent something when you could do this using a range and the sum functions:
>>> sum(range(3,10, 2))
24
Transform For loop into while loop
s=0
for i in range(3,20,2):
    if i>10:
        break
    else:
        s=s+i
print(s)

How can I transform this code into a while loop? I don't know how to include the step.
[ "s = 0\ni = 3\nwhile i<10:\n s+=i\n i+=2\nprint(s)\n\n", "If you want to break the loop when i>10, then why you're running the loop till 20? Any way you can try this\ns,i=0,3\nwhile i<=20:\n if i>10:\n break\n else:\n s=s+i\n i+=2\nprint(s)\n\n", "Here's how you can transform the for loop into a while loop:\ns = 0\ni = 3\nwhile i < 20:\n if i > 10:\n break\n else:\n s = s + i\n i += 2\nprint(s)\n\n", "Why reinvent something when you could do this using a range and the sum functions:\n>>> sum(range(3,10, 2))\n24\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "for_loop", "python", "while_loop" ]
stackoverflow_0074669923_for_loop_python_while_loop.txt
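The general recipe behind those answers: for i in range(start, stop, step) becomes an explicit counter that is initialized before the loop and advanced at the end of the body — that is where the step goes. A sketch that keeps the original break condition:

start, stop, step = 3, 20, 2
s = 0
i = start
while i < stop:
    if i > 10:
        break
    s = s + i
    i += step  # the range() step becomes an explicit increment
print(s)  # 24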
Q: Binding element 'children' implicitly has an 'any' type.ts(7031) in Next.js While developing my personal portfolio using Next.js, I ran into an error where the prop "children" was not working. I am using tsx.
Layout.tsx
import styles from "./layout.module.css";

export default function Layout({ children, ...props }) {
 return (
 <>
 <main className={styles.main} {...props}>{children}</main>
 </>
 );
}

I tried using other props, defining the props, like I saw in tutorials online, but none of them worked. ;(

A: Your filetype is .tsx, which is a TypeScript filetype. You have to either define the types of the props, or change the file's name to Layout.js to convert it to vanilla JavaScript.
If you want to use TypeScript:
import { ReactNode } from "react";
import styles from "./layout.module.css";

type LayoutProps = {
 children: ReactNode;
 // Your other props here.
}

export default function Layout({ children, ...props }: LayoutProps) {
 return (
 <>
 <main className={styles.main} {...props}>{children}</main>
 </>
 );
}

The issue is that when TypeScript doesn't know the type of something, it will assume the type is any and complain. You could explicitly set the type as any to solve this problem, as well, but this is considered bad practice:
export default function Layout({ children, ...props }: any) {
 return (
 <>
 <main className={styles.main} {...props}>{children}</main>
 </>
 );
}
Binding element 'children' implicitly has an 'any' type.ts(7031) in Next.js
While developing my personal portfolio using Next.js, I ran into an error where the prop "children" was not working. I am using tsx.
Layout.tsx
import styles from "./layout.module.css";

export default function Layout({ children, ...props }) {
 return (
 <>
 <main className={styles.main} {...props}>{children}</main>
 </>
 );
}

I tried using other props, defining the props, like I saw in tutorials online, but none of them worked. ;(
[ "Your filetype is .tsx, which is a TypeScript filetype. You have to either define the types of the props, or change the file's name to Layout.js to convert it to vanilla JavaScript.\nIf you want to use TypeScript:\nimport { ReactNode } from \"react\";\nimport styles from \"./layout.module.css\";\n\ntype LayoutProps = {\n children: ReactNode;\n // Your other props here.\n}\n\nexport default function Layout({ children, ...props }: LayoutProps) {\n return (\n <>\n <main className={styles.main} {...props}>{children}</main>\n </>\n );\n}\n\nThe issue is that when TypeScript doesn't know the type of something, it will assume the type is any and complain. You could explicitly set the type as any to solve this problem, as well, but this is considered bad practice:\nexport default function Layout({ children, ...props }: any) {\n return (\n <>\n <main className={styles.main} {...props}>{children}</main>\n </>\n );\n}\n" ]
[ 2 ]
[ "You are not destructuring the props object correctly. When you use the spread operator (...), it will spread out all the key-value pairs in the object as individual props. Since you have already destructured the children prop, it will not be included in the object that you spread.\nYou can either add the children prop to the spread operator, or destruct all.\nimport styles from \"./layout.module.css\";\n\nexport default function Layout({ children, ...props }) {\nreturn (\n<>\n<main className={styles.main} {...props} {...{children}}>{children}</main>\n</>\n);\n}\n\nOr destructure all the props in one:\nimport styles from \"./layout.module.css\";\n\nexport default function Layout({ children, ...props }) {\nreturn (\n<>\n<main className={styles.main} {...props}>{children}</main>\n</>\n);\n}\n\nEither way will fix the children prop.\n" ]
[ -1 ]
[ "javascript", "next.js", "reactjs", "tsx", "typescript" ]
stackoverflow_0074670210_javascript_next.js_reactjs_tsx_typescript.txt
Q: Check for None in pandas dataframe I would like to find where None is found in the dataframe. pd.DataFrame([None,np.nan]).isnull() OUT: 0 0 True 1 True isnull() finds both numpy Nan and None values. I only want the None values and not numpy Nan. Is there an easier way to do that without looping through the dataframe? Edit: After reading the comments, I realized that in my dataframe in my work also include strings, so the None were not coerced to numpy Nan. So the answer given by Pisdom works. A: If you want to get True/False for each line, you can use the following code. Here is an example as a result for the following DataFrame: df = pd.DataFrame([[None, 3], ["", np.nan]]) df # 0 1 #0 None 3.0 #1 NaN How to check None Available: .isnull() >>> df[0].isnull() 0 True 1 False Name: 0, dtype: bool Available: .apply == or is None >>> df[0].apply(lambda x: x == None) 0 True 1 False Name: 0, dtype: bool >>> df[0].apply(lambda x: x is None) 0 True 1 False Name: 0, dtype: bool Available: .values == None >>> df[0].values == None array([ True, False]) Unavailable: is or == >>> df[0] is None False >>> df[0] == None 0 False 1 False Name: 0, dtype: bool Unavailable: .values is None >>> df[0].values is None False How to check np.nan Available: .isnull() >>> df[1].isnull() 0 False 1 True Name: 1, dtype: bool Available: np.isnan >>> np.isnan(df[1]) 0 False 1 True Name: 1, dtype: bool >>> np.isnan(df[1].values) array([False, True]) >>> df[1].apply(lambda x: np.isnan(x)) 0 False 1 True Name: 1, dtype: bool Unavailable: is or == np.nan >>> df[1] is np.nan False >>> df[1] == np.nan 0 False 1 False Name: 1, dtype: bool >>> df[1].values is np.nan False >>> df[1].values == np.nan array([False, False]) >>> df[1].apply(lambda x: x is np.nan) 0 False 1 False Name: 1, dtype: bool >>> df[1].apply(lambda x: x == np.nan) 0 False 1 False Name: 1, dtype: bool A: You could use applymap with a lambda to check if an element is None as follows, (constructed a different example, as in your original one, None is coerced to np.nan because the data type is float, you will need an object type column to hold None as is, or as commented by @Evert, None and NaN are indistinguishable in numeric type columns): df = pd.DataFrame([[None, 3], ["", np.nan]]) df # 0 1 #0 None 3.0 #1 NaN df.applymap(lambda x: x is None) # 0 1 #0 True False #1 False False A: Q: How check for None in DataFrame / Series A: isna works but also catches nan. Two suggestions: Use x.isna() and replace none with nan If you really care about None: x.applymap(type) == type(None) I prefer comparing type since for example nan == nan is false. In my case the Nones appeared unintentionally so x[x.isna()] = nan solved the problem. Example: x = pd.DataFrame([12, False, 0, nan, None]).T x.isna() 0 1 2 3 4 0 False False False True True x.applymap(type) == type(None) 0 1 2 3 4 0 False False False False True x 0 1 2 3 4 0 12 False 0 NaN None x[x.isna()] = nan 0 1 2 3 4 0 12 False 0 NaN NaN
Check for None in pandas dataframe
I would like to find where None is found in the dataframe.
pd.DataFrame([None,np.nan]).isnull()

OUT:
0
0 True
1 True

isnull() finds both numpy NaN and None values.
I only want the None values and not numpy NaN. Is there an easier way to do that without looping through the dataframe?
Edit:
After reading the comments, I realized that my dataframe at work also includes strings, so the None values were not coerced to numpy NaN. So the answer given by Pisdom works.
[ "If you want to get True/False for each line, you can use the following code. Here is an example as a result for the following DataFrame:\ndf = pd.DataFrame([[None, 3], [\"\", np.nan]])\n\ndf\n# 0 1\n#0 None 3.0\n#1 NaN\n\nHow to check None\nAvailable: .isnull()\n>>> df[0].isnull()\n0 True\n1 False\nName: 0, dtype: bool\n\nAvailable: .apply == or is None\n>>> df[0].apply(lambda x: x == None)\n0 True\n1 False\nName: 0, dtype: bool\n\n>>> df[0].apply(lambda x: x is None)\n0 True\n1 False\nName: 0, dtype: bool\n\nAvailable: .values == None\n>>> df[0].values == None\narray([ True, False])\n\nUnavailable: is or ==\n>>> df[0] is None\nFalse\n\n>>> df[0] == None\n0 False\n1 False\nName: 0, dtype: bool\n\nUnavailable: .values is None\n>>> df[0].values is None\nFalse\n\nHow to check np.nan\nAvailable: .isnull()\n>>> df[1].isnull()\n0 False\n1 True\nName: 1, dtype: bool\n\nAvailable: np.isnan\n>>> np.isnan(df[1])\n0 False\n1 True\nName: 1, dtype: bool\n\n>>> np.isnan(df[1].values)\narray([False, True])\n\n>>> df[1].apply(lambda x: np.isnan(x))\n0 False\n1 True\nName: 1, dtype: bool\n\nUnavailable: is or == np.nan\n>>> df[1] is np.nan\nFalse\n\n>>> df[1] == np.nan\n0 False\n1 False\nName: 1, dtype: bool\n\n>>> df[1].values is np.nan\nFalse\n\n>>> df[1].values == np.nan\narray([False, False])\n\n>>> df[1].apply(lambda x: x is np.nan)\n0 False\n1 False\nName: 1, dtype: bool\n\n>>> df[1].apply(lambda x: x == np.nan)\n0 False\n1 False\nName: 1, dtype: bool\n\n", "You could use applymap with a lambda to check if an element is None as follows, (constructed a different example, as in your original one, None is coerced to np.nan because the data type is float, you will need an object type column to hold None as is, or as commented by @Evert, None and NaN are indistinguishable in numeric type columns):\ndf = pd.DataFrame([[None, 3], [\"\", np.nan]])\n\ndf\n# 0 1\n#0 None 3.0\n#1 NaN\n\ndf.applymap(lambda x: x is None)\n\n# 0 1\n#0 True False\n#1 False False\n\n", "Q: How check for None in DataFrame / Series\nA: isna works but also catches nan. Two suggestions:\n\nUse x.isna() and replace none with nan\nIf you really care about None: x.applymap(type) == type(None)\n\nI prefer comparing type since for example nan == nan is false.\nIn my case the Nones appeared unintentionally so x[x.isna()] = nan solved the problem.\nExample:\nx = pd.DataFrame([12, False, 0, nan, None]).T\nx.isna()\n 0 1 2 3 4\n0 False False False True True\n\nx.applymap(type) == type(None)\n 0 1 2 3 4\n0 False False False False True\n\nx\n 0 1 2 3 4\n0 12 False 0 NaN None\n\nx[x.isna()] = nan\n 0 1 2 3 4\n0 12 False 0 NaN NaN\n\n" ]
[ 13, 8, 0 ]
[]
[]
[ "nan", "numpy", "pandas", "python" ]
stackoverflow_0045271309_nan_numpy_pandas_python.txt
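Building on the applymap approach from the answers above, here is a small end-to-end sketch that flags only genuine None cells while leaving NaN alone; the frame and column names are illustrative (and note that applymap is named DataFrame.map in newer pandas versions):

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [None, "x", np.nan], "b": [1.0, np.nan, 3.0]})

# 'x is None' is True only for real None objects; float NaN fails this
# identity test, so NaN cells in numeric columns are not flagged
none_mask = df.applymap(lambda x: x is None)

print(df[none_mask.any(axis=1)])   # rows containing at least one true None
print(none_mask.any(axis=0))       # which columns hold a true None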
Q: PowerShell - Get everything after the first "-" I have the following code:
$TodayDate = Get-Date -Format "dd-MM-yyyy"
$Student = "Student01 - Project 01-02 - $TodayDate"

Write-Host -NoNewline -ForegroundColor White "$Student"; Write-Host -ForegroundColor Green " - was delivered!"

This script returns in the console:
Student01 - Project 01-02 - dd-MM-yyyy - was delivered

How is it possible to return only everything after the first "-", that is, Project 01-02 - dd-MM-yyyy - was delivered?
I thought about using .split, but I couldn't make it work so that it returns everything after the first "-".

A: Your problem boils down to wanting to remove a prefix from a string.
Given that the prefix to remove cannot be defined as a static, literal string, but is defined by the (included) first occurrence of a separator, you have two PowerShell-idiomatic options:

Use the -split operator, which allows you to limit the number of resulting tokens; if you limit them to 2, everything after the first separator is returned as-is; thus, the 2nd (and last) token (accessible by index [-1]) then contains the suffix of interest:
$str = 'Student01 - Project 01-02 - dd-MM-yyyy - was delivered'

# Split by ' - ', and return at most 2 tokens, then extract the 2nd (last) token.
# -> 'Project 01-02 - dd-MM-yyyy - was delivered'
($str -split ' - ', 2)[-1]

Use the -replace operator, which (like -split by default) is regex-based, and allows you to formulate a pattern that matches any prefix up to and including the first separator, which can then be removed:
$str = 'Student01 - Project 01-02 - dd-MM-yyyy - was delivered'

# Match everything up to the *first* ' - ', and remove it.
# Note that not specifying a *replacement string* (second RHS operator)
# implicitly uses '' and therefore *removes* what was matched.
# -> 'Project 01-02 - dd-MM-yyyy - was delivered'
$str -replace '^.+? - '

For an explanation of the regex (^.+? - ) and the ability to experiment with it, see this regex101.com page.
PowerShell - Get everything after the first "-"
I have the following code:
$TodayDate = Get-Date -Format "dd-MM-yyyy"
$Student = "Student01 - Project 01-02 - $TodayDate"

Write-Host -NoNewline -ForegroundColor White "$Student"; Write-Host -ForegroundColor Green " - was delivered!"

This script returns in the console:
Student01 - Project 01-02 - dd-MM-yyyy - was delivered

How is it possible to return only everything after the first "-", that is, Project 01-02 - dd-MM-yyyy - was delivered?
I thought about using .split, but I couldn't make it work so that it returns everything after the first "-".
[ "\nYour problem boils down to wanting to remove a prefix from a string.\nGiven that the prefix to remove cannot be defined as a static, literal string, but is defined by the (included) first occurrence of a separator, you have two PowerShell-idiomatic options:\n\nUse the -split operator, which allows you to limit the number of resulting tokens; if you limit them to 2, everything after the first separator is returned as-is; thus, the 2nd (and last) token (accessible by index [-1]) then contains the suffix of interest:\n$str = 'Student01 - Project 01-02 - dd-MM-yyyy - was delivered'\n\n# Split by ' - ', and return at most 2 tokens, then extract the 2nd (last) token.\n# -> 'Project 01-02 - dd-MM-yyyy - was delivered'\n($str -split ' - ', 2)[-1]\n\n\nUse the -replace operator, which (like -split by default) is regex-based, and allows you to formulate a pattern that matches any prefix up to and including the first separator, which can then be removed:\n$str = 'Student01 - Project 01-02 - dd-MM-yyyy - was delivered'\n\n# Match everything up to the *first* ' - ', and remove it.\n# Note that not specifying a *replacement string* (second RHS operator)\n# implicitly uses '' and therefore *removes* what was matched.\n# -> 'Project 01-02 - dd-MM-yyyy - was delivered'\n$str -replace '^.+? - '\n\n\nFor an explanation of the regex (^.+? - ) and the ability to experiment with it, see this regex101.com page.\n\n\n\n" ]
[ 2 ]
[]
[]
[ "powershell", "windows" ]
stackoverflow_0074669265_powershell_windows.txt
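If this comes up often, a reusable wrapper around the -split idiom from the answer above may be handy; the function name is an illustrative assumption, and escaping the separator keeps the split literal even when the separator contains regex metacharacters:

function Get-SuffixAfterFirstSeparator {
    param(
        [string] $Text,
        [string] $Separator = ' - '
    )
    # -split treats its pattern as a regex, so escape the separator first,
    # then limit the split to 2 tokens and keep the last one: everything
    # after the FIRST occurrence of the separator.
    $pattern = [regex]::Escape($Separator)
    ($Text -split $pattern, 2)[-1]
}

# -> 'Project 01-02 - 05-12-2022 - was delivered'
Get-SuffixAfterFirstSeparator -Text 'Student01 - Project 01-02 - 05-12-2022 - was delivered'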
Q: Spring batch filtering data inside item reader I'm writing a batch job that reads log files, which can come in many formats. I want to read each file and keep only certain lines, based on some characters inside the log lines, for example:
15:31:44,437 INFO <NioProcessor-32> Send to <SLE-
15:31:44,437 INFO <NioProcessor-32> [{2704=5, 604=1, {0=023pdu88mW00007z}]
15:31:44,437 DEBUG <NioProcessor-32> SCRecord 2944

In such a log file I want to read only the log lines which contain ' [{}] ' and ignore all the others. I have tried to read the file in an item reader and split each line into an object, but I can't figure out how.
I think that I should create a custom item reader or something like that; my LogLine class is quite simple:
public class LogLine {
    String idOrder;
    String time;
    String tags;
}

and my item reader looks like:
public FlatFileItemReader<LogLine> customerItemReader() {
    FlatFileItemReader<LogLine> reader = new FlatFileItemReader<>();
    reader.setResource(new ClassPathResource("/data/customer.log"));

    DefaultLineMapper<LogLine> customerLineMapper = new DefaultLineMapper<>();

    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames(new String[] {"idOrder", "date", "tags"});
    customerLineMapper.setLineTokenizer(tokenizer);

    customerLineMapper.setFieldSetMapper(new CustomerFieldSetMapper());
    reader.setLineMapper(customerLineMapper);

    return reader;
}

How can I add a filter in this item reader to read only lines which contain [{, without doing the job in the item processor?

A: Filtering should be the responsibility of the processor, not the reader. You can use a composite item processor and add the first processor as a filtering step.
The filtering processor should return null for log lines which do not contain ' [{}] '.
These rows will then be automatically ignored by the next processor and by the writer.

A: You can implement a custom file reader extending FlatFileItemReader, with a partition number or filter criteria passed into the constructor from config, and override the read() method -> https://github.com/spring-projects/spring-batch/blob/main/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/support/AbstractItemCountingItemStreamItemReader.java#L90
Each slave step would then be instantiated with a different constructor parameter.
Spring batch filtering data inside item reader
I'm writing a batch job that reads log files, which can come in many formats. I want to read each file and keep only certain lines, based on some characters inside the log lines, for example:
15:31:44,437 INFO <NioProcessor-32> Send to <SLE-
15:31:44,437 INFO <NioProcessor-32> [{2704=5, 604=1, {0=023pdu88mW00007z}]
15:31:44,437 DEBUG <NioProcessor-32> SCRecord 2944

In such a log file I want to read only the log lines which contain ' [{}] ' and ignore all the others. I have tried to read the file in an item reader and split each line into an object, but I can't figure out how.
I think that I should create a custom item reader or something like that; my LogLine class is quite simple:
public class LogLine {
    String idOrder;
    String time;
    String tags;
}

and my item reader looks like:
public FlatFileItemReader<LogLine> customerItemReader() {
    FlatFileItemReader<LogLine> reader = new FlatFileItemReader<>();
    reader.setResource(new ClassPathResource("/data/customer.log"));

    DefaultLineMapper<LogLine> customerLineMapper = new DefaultLineMapper<>();

    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames(new String[] {"idOrder", "date", "tags"});
    customerLineMapper.setLineTokenizer(tokenizer);

    customerLineMapper.setFieldSetMapper(new CustomerFieldSetMapper());
    reader.setLineMapper(customerLineMapper);

    return reader;
}

How can I add a filter in this item reader to read only lines which contain [{, without doing the job in the item processor?
[ "filtering should be responsibility of processor and not the reader. You can use composite item processor and add First processor as Filtering. \nFiltering processor should return null for log lines which does not contain ' [{}] ' .\nThese rows will be automatically ignore in next processor and in writer.\n", "can implement a customfilereader extending FlatFileItemReader with partition num or filter criteria passed in constructor from config, and override the read() method -> https://github.com/spring-projects/spring-batch/blob/main/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/support/AbstractItemCountingItemStreamItemReader.java#L90\nEvery slave step would instantiate based on a different constructor param.\n" ]
[ 2, 0 ]
[]
[]
[ "java", "spring_batch" ]
stackoverflow_0049579991_java_spring_batch.txt
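Following the first answer's advice, here is a sketch of the filtering processor it describes; returning null from process() tells Spring Batch to drop the item, so it is never passed to the writer. The getTags() accessor is an assumption about the question's LogLine class, which only shows fields:

import org.springframework.batch.item.ItemProcessor;

public class TagLineFilterProcessor implements ItemProcessor<LogLine, LogLine> {

    @Override
    public LogLine process(LogLine item) {
        // Keep only lines whose tags column contains the '[{' marker;
        // returning null filters the item out of the chunk entirely.
        if (item.getTags() != null && item.getTags().contains("[{")) {
            return item;
        }
        return null;
    }
}

Wiring it into the step (e.g. .processor(new TagLineFilterProcessor()) on the step builder) keeps the reader a plain FlatFileItemReader, which is the design point of the first answer.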
Q: How to summarize the values by their latest timestamps using QUERY? I've tried to do it using something like:
=UNIQUE(query(J2:L,"select J, K, MAX(L) where K matches 'Pending' or K matches 'Finished' group by J, K, L"))

but it doesn't get the unique values, as the expected result shows:
Here is a test file.

A: added solution here
=ARRAYFORMULA(MAP(SORT(UNIQUE(FILTER(J3:J,J3:J<>"")),1,1),LAMBDA(jx,vlookup(jx,{SORT(J:L,3,0)},{1,2,3},))))

A: With one QUERY you can't limit to one result per item in J. One option is to use a formula like this:
=filter({unique(J3:J),byrow(unique(J3:J),LAMBDA(each,query(J3:L,"Select K,L where J = "&each&" order by L desc limit 1",0)))},unique(J3:J)<>"")
How to summarize the values by their latest timestamps using QUERY?
I've tried to do it using something like:
=UNIQUE(query(J2:L,"select J, K, MAX(L) where K matches 'Pending' or K matches 'Finished' group by J, K, L"))

but it doesn't get the unique values, as the expected result shows:
Here is a test file.
[ "added solution here\n=ARRAYFORMULA(MAP(SORT(UNIQUE(FILTER(J3:J,J3:J<>\"\")),1,1),LAMBDA(jx,vlookup(jx,{SORT(J:L,3,0)},{1,2,3},))))\n\n", "With one QUERY you can't limit to one result per item in J. One option is to use a formula like this:\n=filter({unique(J3:J),byrow(unique(J3:J),LAMBDA(each,query(J3:L,\"Select K,L where J = \"&each&\" order by L desc limit 1\",0)))},unique(J3:J)<>\"\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "formula", "google_sheets", "google_sheets_formula" ]
stackoverflow_0074670016_formula_google_sheets_google_sheets_formula.txt
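An untested alternative worth comparing with the answers above: sort by the timestamp column descending, then let SORTN keep just the first (latest) row per unique value in column J, since tie-display mode 2 discards rows that duplicate the sort column. The ranges mirror the question; verify the output against the test file before relying on it:

=SORTN(SORT(FILTER(J3:L,J3:J<>""),3,FALSE),9^9,2,1,TRUE)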