Q: How do I remove an item from an array based on the difference between two items

I'm trying to remove outliers from a dataset, where an outlier is if the difference between one item and the next one is larger than 3 * the uncertainty on the item.

def remove_outliers(data):
    for i in data:
        x = np.where(abs(i[1] - (i+1)[1]) > 3( * data[:,2]))
        data_outliers_removed = np.delete(data, x, axis=1)
    return data_outliers_removed

is the function which I tried to use; however, it either deletes no values or all values when I've played around with it.

A: I would maybe do something like this, by working with a new empty list:

def remove_outliers(dataset):
    filtered_dataset = []
    for index, item in enumerate(dataset):
        if index == 0:
            filtered_dataset.append(item)
        else:
            if abs(item[0] - dataset[index - 1][0]) <= 3 * dataset[index - 1][1]:
                filtered_dataset.append(item)
    return filtered_dataset

Of course, the same can be achieved easily with numpy. Hope that helps.

A: Iterating over a numpy array is usually a code smell, since you reject numpy's super-fast indexing and slicing abilities for Python's slow loops. I'm assuming data is a numpy array, since you've used it like one.

Your criterion for an outlier is:

    if the difference between one item and the next one is larger than 3 * the uncertainty on the item

From your usage, it appears the "items" are in the data[:, 1] column, and the uncertainties are in the data[:, 2] column. The difference between an item and the next one is easy to obtain using np.diff, so our condition becomes:

np.diff(data[:, 1]) > 3 * data[:-1, 2]

I skipped the last uncertainty by doing data[:-1, 2] because the last uncertainty doesn't matter -- the last item doesn't have a "next element". I'm going to consider it an outlier and filter it out, but I've also shown how to filter it in if you want.

We will use boolean indexing to filter out the rows we don't want in our array:

def remove_outliers(data):
    select_mask = np.zeros(data[:, 1].shape, dtype=bool)  # Make an array of Falses
    # Since the default value of the mask is False, items are considered outliers
    # and therefore filtered out unless we calculate the value for the mask.
    # If you want to consider the opposite, do np.ones(...)

    # Only calculate the value for the mask for the first through the second-last item
    select_mask[:-1] = np.diff(data[:, 1]) > 3 * data[:-1, 2]

    # Select only those rows where select_mask is True, and select all columns
    filtered_data = data[select_mask, :]
    return filtered_data
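A quick, self-contained sketch of the boolean-mask idea. The column layout ([x, value, uncertainty]) and the sample numbers are made up for illustration. Note that, unlike the comparison in the answer, this sketch keeps the rows that pass the 3x-uncertainty check and takes the absolute difference:

```python
import numpy as np

def remove_outliers(data):
    # Keep a row when the jump from its value to the next one stays within
    # 3x its uncertainty; the last row has no "next", so it is kept here.
    keep = np.ones(data.shape[0], dtype=bool)
    keep[:-1] = np.abs(np.diff(data[:, 1])) <= 3 * data[:-1, 2]
    return data[keep, :]

data = np.array([
    [0.0, 1.0, 0.1],   # jump to next value is 0.1  <= 0.3 -> keep
    [1.0, 1.1, 0.1],   # jump to next value is 50.0 >  0.3 -> drop
    [2.0, 51.1, 0.1],  # jump to next value is 49.9 >  0.3 -> drop
    [3.0, 1.2, 0.1],   # last row: no next value         -> keep
])

filtered = remove_outliers(data)
print(filtered)
```

With the sample array above, only the first and last rows survive the filter.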
stackoverflow_0074656777_arrays_numpy_python.txt
Q: Reactive forms with interfaces

More of a question on best practice. Let's say you have a form asking for Name and Age:

<form [formGroup]="myForm" novalidate (ngSubmit)="sendMyForm(myForm)">
    <label>Name</label>
    <input type="text" />
    <hr />
    <label>Age</label>
    <select>
        <option value="20">20</option>
        <option value="30">30</option>
    </select>
</form>

Note that the dropdown is a number. Now, of course, this will by default be a string, not a number, as it's in a <select>. My question is: should I make an interface such as:

export interface Person {
    name: string;
    age: number;
}

And would you assign the interface somehow to the form first? Or on the model.value when you submit the form? I ask as the API would expect a number, not a string of a number. If you would assign the interface to the form upfront, please could you show how to do this? The form set-up would be:

this.myForm = this.fBuilder.group({
    name: null,
    age: null
});

A: In our application we have interfaces representing the form type, and we have mappers which know how to map a form value to an entity and an entity back to a form value. If we didn't have the types for the form, our mapper would not be type safe. IMHO it is worth the effort to have types everywhere.

You can either add mapping between the form value and the service, so that the service expects the mapped type, or make your service expect the form type and do the mapping inside the service.

A: Just use [ngValue] instead of value, like below:

<option [ngValue]="20">20</option>
<option [ngValue]="30">30</option>

Also, you need to put the formControlName on the select tag:

<select formControlName="age">

You don't need to use any mappings and conversions. To check the type, put the below in the method which is triggered on submit:

console.log('type of age is', typeof this.myForm.controls['age'].value) // gives number

A: I know it's a long time after the question was asked. I just came across this question and I think I have a solution. Please check whether the answer below is what you are expecting.

<form [formGroup]="myForm" novalidate (ngSubmit)="sendMyForm()">
    <label>Name</label>
    <input type="text" formControlName="name"/>
    <hr />
    <label>Age</label>
    <select formControlName="age">
        <option value="20">20</option>
        <option value="30">30</option>
    </select>
    <button type="submit">Submit</button>
</form>

I have added a button to your code.

fBuilder = new FormBuilder();
personModel: Person;

myForm = this.fBuilder.group({
    name: this.fBuilder.control(''),
    age: this.fBuilder.control(0)
});

constructor() { }

ngOnInit(): void {
}

sendMyForm() {
    this.personModel = Object.assign({}, this.myForm.value);
    console.log(this.personModel);
}

Regarding your question, "And would you assign the interface somehow to the form first?" -- No, as reactive forms manage the data binding on their own. Instead, once the form is submitted, I took the values and mapped them to the interface.
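The mapper idea from the first answer can be sketched without Angular at all. The PersonFormValue type and toPerson function below are hypothetical names, not from the question; the point is that the raw form value carries age as a string (as a <select> yields), and one type-safe mapper produces the typed Person:

```typescript
interface Person {
  name: string;
  age: number;
}

// Shape of the raw form value: everything coming out of a <select> is a string.
interface PersonFormValue {
  name: string;
  age: string;
}

// A mapper keeps the string-to-number conversion in one type-safe place.
function toPerson(value: PersonFormValue): Person {
  return { name: value.name, age: Number(value.age) };
}

const raw: PersonFormValue = { name: "Ada", age: "30" };
const person = toPerson(raw);
console.log(person.age + 1); // 31 -- age is a number, not "301"
```

In an Angular service you would call such a mapper on myForm.value before handing the object to the API.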
stackoverflow_0054763664_angular.txt
Q: Dropdown Button only working when clicking on very edges

I have been following Web Dev Simplified's tutorial on dropdowns. My dropdown works, but it only opens when you click the part of the button outside the text (the very edges of the button). When you click on the text, it doesn't do anything.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Document</title>
</head>
<body>
    <style>
        .content {
            display: none;
        }

        .dropdown.active>.dropdown-menu+.content {
            display: block;
        }
    </style>
    <div class="dropdown" data-dropdown>
        <button class="dropdown-menu" id="phoneButton" data-dropdown-button>
            <h4>Therapy</h4>
            <i class="fa-solid fa-angle-down"></i>
        </button>
        <div class="content" id="contentPhone">
            <a href="person-centred.html">Person Centred Therapy</a>
            <a href="play.html">Play Therapy</a>
            <a href="music.html">Music Therapy</a>
            <a href="art.html">Art Therapy</a>
        </div>
    </div>
    <script>
        document.addEventListener("click", e => {
            const isDropdownButton = e.target.matches("[data-dropdown-button]")
            if (!isDropdownButton && e.target.closest("[data-dropdown]") != null) return
            let currentDropdown
            if (isDropdownButton) {
                currentDropdown = e.target.closest("[data-dropdown]")
                currentDropdown.classList.toggle("active")
            }
            document.querySelectorAll("[data-dropdown].active").forEach(dropdown => {
                if (dropdown === currentDropdown) return
                dropdown.classList.remove("active")
            })
        })
    </script>
</body>
</html>

A: The problem is e.target.matches("[data-dropdown-button]"): it only matches the button element itself, but your button contains an h4 tag. When you click on the text, e.target is the h4, the selector doesn't match, and nothing is triggered. By selecting the button by its id and attaching the click listener directly to it, clicks on any element inside the button are handled:

document.getElementById('phoneButton').addEventListener('click', (e) => {
    const currentDropdown = e.target.closest("[data-dropdown]");
    currentDropdown.classList.toggle("active");
});
stackoverflow_0074659368_dropdown_html.txt
Q: cordova camera plugin is not loading after moving to targetSdkVersion 31

After updating my app to targetSdkVersion 31, the camera is not loading anymore. When clicking on cameraBtn as shown below, my app displays the alert "Camera unavailable"; navigator.camera is undefined. I'm familiar with the plugin usage and have been using it successfully for the last years, but for some reason, now that I'm moving to targetSdkVersion 31, it is not working anymore. I'd appreciate it if anybody could give me directions here!

document.addEventListener('deviceready', function() {
    cameraBtn.addEventListener('click', function() {
        if (!navigator.camera)
            alert("Camera unavailable")
        else if (!navigator.camera.getPicture)
            alert(navigator.camera)

        navigator.camera.getPicture(
            function(imgData) {
                // success
            },
            function(msg) {
                // fail
            },
            {
                quality: 60,
                destinationType: Camera.DestinationType.DATA_URL,
                sourceType: Camera.PictureSourceType.CAMERA,
                mediaType: Camera.MediaType.PICTURE,
                encodingType: Camera.EncodingType.JPEG,
                cameraDirection: Camera.Direction.BACK,
                correctOrientation: true,
                targetWidth: 512,
                targetHeight: 512
            }
        );
    });
});

Here is my app plugin list:

C:\htdocs\app\myapp>cordova plugin list
cordova-plugin-barcodescanner 0.7.4 "BarcodeScanner"
cordova-plugin-camera 6.0.0 "Camera"
cordova-plugin-compat 1.2.0 "Compat"
cordova-plugin-device 2.1.0 "Device"
cordova-plugin-geolocation 4.1.0 "Geolocation"
cordova-plugin-whitelist 1.3.4 "Whitelist"

And this is my config.xml:

<?xml version='1.0' encoding='utf-8'?>
<widget id="io.cordova.myapp" version="1.8.1" android-versionCode="10036" xmlns="http://www.w3.org/ns/widgets" xmlns:android="http://schemas.android.com/apk/res/android" xmlns:cdv="http://cordova.apache.org/ns/1.0">
    <name>Myapp</name>
    <description> </description>
    <author email="[email protected]" href="http://www.mydomain.com.br"> </author>
    <content src="index.html" />
    <access origin="*" />
    <allow-intent href="http://*/*" />
    <allow-intent href="https://*/*" />
    <allow-intent href="tel:*" />
    <allow-intent href="sms:*" />
    <allow-intent href="mailto:*" />
    <allow-intent href="geo:*" />
    <platform name="android">
        <allow-intent href="market:*" />
        <config-file after="uses-permission" parent="/manifest" target="AndroidManifest.xml">
            <uses-permission android:name="android.permission.INTERNET" />
            <uses-permission android:name="android.permission.CAMERA" />
            <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
            <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
            <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
        </config-file>
        <edit-config file="app/src/main/AndroidManifest.xml" mode="merge" target="/manifest/application">
            <application android:usesCleartextTraffic="true" />
        </edit-config>
        <edit-config file="app/src/main/AndroidManifest.xml" mode="merge" target="/manifest/application/activity">
            <activity android:exported="true" />
        </edit-config>
    </platform>
    <platform name="ios">
        <allow-intent href="itms:*" />
        <allow-intent href="itms-apps:*" />
        <edit-config file="*-Info.plist" mode="merge" target="NSLocationAlwaysAndWhenInUseUsageDescription">
            <string>Myapp precisa da permissão para acessar a sua geolocalização para identificar a localização do seu dispositivo no módulo de Documentos de RH</string>
        </edit-config>
        <edit-config file="*-Info.plist" mode="merge" target="NSLocationWhenInUseUsageDescription">
            <string>Myapp precisa da permissão para acessar a sua geolocalização para identificar a localização do seu dispositivo no módulo de Documentos de RH</string>
        </edit-config>
        <edit-config file="*-Info.plist" mode="merge" target="NSLocationAlwaysUsageDescription">
            <string>Myapp precisa da permissão para acessar a sua geolocalização para identificar a localização do seu dispositivo no módulo de Documentos de RH</string>
        </edit-config>
        <edit-config file="*-Info.plist" mode="merge" target="NSCameraUsageDescription">
            <string>Myapp precisa da permissão para acessar a sua câmera para anexar fotos aos seus chamados</string>
        </edit-config>
        <edit-config file="*-Info.plist" mode="merge" target="NSPhotoLibraryUsageDescription">
            <string>Myapp precisa da permissão para acessar a sua galeria para anexar fotos aos seus chamados</string>
        </edit-config>
        <edit-config file="*-Info.plist" mode="merge" target="NSPhotoLibraryAddUsageDescription ">
            <string>Myapp precisa da permissão para acessar a sua galeria para salvar as fotos que serão anexadas aos seus chamados</string>
        </edit-config>
    </platform>
    <plugin name="cordova-plugin-whitelist" spec="1" />
    <plugin name="cordova-plugin-geolocation" spec="~4" />
    <plugin name="cordova-plugin-camera" spec="^6" />
    <plugin name="cordova-plugin-barcodescanner" spec="^0.7.4" />
    <preference name="android-minSdkVersion" value="30" />
    <preference name="android-targetSdkVersion" value="31" />
    <icon src="www/icon.png" />
    <icon height="57" platform="ios" src="ios/icon-57.png" width="57" />
    <icon height="72" platform="ios" src="ios/icon-72.png" width="72" />
    <icon height="114" platform="ios" src="ios/icon-114.png" width="114" />
    <icon height="120" platform="ios" src="ios/icon-120.png" width="120" />
    <icon height="144" platform="ios" src="ios/icon-144.png" width="144" />
    <icon height="152" platform="ios" src="ios/icon-152.png" width="152" />
    <icon height="1024" platform="ios" src="ios/icon-1024.png" width="1024" />
</widget>

A: I was able to solve the problem by removing the camera plugin and adding it back with ANDROIDX_CORE_VERSION 1.8.0, as suggested on the cordova-plugin-camera GitHub:

cordova plugin add cordova-plugin-camera --variable ANDROIDX_CORE_VERSION=1.8.0

Previously, package.json was showing ANDROIDX_CORE_VERSION 1.6.+. Can anybody tell me what exactly ANDROIDX_CORE_VERSION stands for?
stackoverflow_0074649235_cordova_cordova_plugin_camera.txt
Q: Show preloader until content is loaded

I've got a preloader for my site that works, but it just counts from 0-100 with a timer and then the content is displayed. Is it possible to amend my code so that it runs from 0-100 while the content is actually loading, and only shows the content once loaded?

let counter = 0;

const loaderTimer = setInterval(function() {
    counter++;
    jQuery(".preloader__container__percent").text(counter + "");
    if (counter == 100) {
        clearInterval(loaderTimer);
        gsap.to(".preloader", 1, {
            delay: 0.5,
            y: "-100%"
        });
    }
}, 25);

.elementor-editor-active .preloader {
    display: none;
}

.preloader {
    position: fixed;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    color: #f2f2f2;
    background-color: #ff635e;
    z-index: 999;
}

.preloader__container {
    display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;
    width: 100vw;
    height: 100vh;
}

.preloader__container__percent {
    font-family: "Alliance Black Italic", Sans-serif;
    font-size: 15vw;
}

@media only screen and (max-width: 1024px) {
    .preloader__container__percent {
        font-family: "Alliance Black Italic", Sans-serif;
        font-size: 20vw;
    }

    .preloader__container__preload {
        display: flex;
    }
}

<script src="https://cdnjs.cloudflare.com/ajax/libs/gsap/3.2.6/gsap.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="preloader">
    <div class="preloader__container">
        <h1 class="preloader__container__percent"></h1>
    </div>
</div>
A (score -1): Take and display this loader over the content, but make it position: fixed with z-index 100, and wrap the rest of the content in a div with z-index 50.

<div class="content">
    <!--your content-->
</div>
<div class="loader">
    <!--your loader-->
</div>

CSS:

.content {
    position: relative;
    z-index: 50;
}
.loader {
    position: fixed;
    z-index: 100;
}

JS -- now delete the loader:

setTimeout(() => {
    document.querySelector(".loader").remove();
}, 1000)
stackoverflow_0074642930_css_gsap_html_javascript.txt
Q: How to create a subclass with more specific generics In the following code, I've outlined two problems (see comments): I need a static method to return an instance of the current class. When it is called on a subclass (without being overwritten), it should return an instance of the subclass. I need a generic to be more specific in a subclass (Base should take BaseOptions while Sub should take SubOptions which are a superset of BaseOptions). Solutions with extends BaseOptions | SubOptions don't work for me, as I cannot enumerate all of the possible, more specific types of options to keep the code generic. Am I approaching this the wrong way altogether or is there a solution to my problems? interface BaseOptions { a: string } interface SubOptions extends BaseOptions { a: string b: string } class Base<T extends BaseOptions> { options: T constructor(options: T) { this.options = options; } // Problem 1: When TypeScript is concerned, this method always returns an // instance of "Base", even when calling 'create' on a subclass (that does not // override 'create'). How can this be fixed? static create<T extends BaseOptions>(options: T): Base<T> { return new this(options); } } // Problem 2: Class 'Sub' requires more specific options than class 'Base', but // TypeScript does not allow me to validate it: // 2417: Class static side 'typeof Sub' // incorrectly extends base class static // side 'typeof Base'. class Sub<T extends SubOptions> extends Base<T> { static create<T extends SubOptions>(options: T): Sub<T> { return new this(options); } } A: Short Answer: Remove all the unnecessary stuff and keep it real. 
This works: type BaseOptions = { a: string; }; type SubOptions = { a: string; b: string; } & BaseOptions; class Base<BO extends BaseOptions> { options: BO; constructor(options: BO) { this.options = options; } static create<BO extends BaseOptions>(options: BO): Base<BO> { return new this(options); } } class Sub<SO extends SubOptions> extends Base<SO> {} You can force TypeScript to use the correct type like this: const sub = Sub.create({ a: 'a', b: 'b' }) as Sub<...>; Long Answer: This might be a TypeScript error - that's it; I really said that. Here is why I think so: Using the code from the short answer we can create an object called sub like this: const sub = Sub.create({ a: 'a', b: 'b' }); TypeScript will choose to type infer Base<{ a: string, b: string } - I am guessing you want to have the more explicit definition and infer the type Sub<...>. To see what the underlying JS-Interpreter does, let's see what happens in node when using plain JavaScript. class Base { constructor(options) { this.options = options } static create(options) { return new this(options) } } class Sub extends Base {} Sub.create({ a: 'a', b: 'b' }) This outputs > Sub { options: { a: 'a', b: 'b' } } There you have it. That's the object with the correct class assigned; perfectly safe to overwrite the TypeScript inferred type. A: interface IGenericMap<T> { [x: string]: T; } class Base<T extends IGenericMap<string>> { options: T constructor(options: T) { this.options = options; } static create<T extends IGenericMap<string>>(options: T): Base<T> { return new this(options); } } class Sub<T extends IGenericMap<string>> extends Base<T> { constructor(options: T) { super(options); } } console.log(Base.create({ a: 'one' })); console.log(Sub.create({ a: 'one', b: 'two' })); Now you can use the inheritance properly and the create methods returns the appropirate class. 
If you use the object mapping inside IGenericMap instead of BaseOptions and SubOptions, you can have any key with a generic value of your choice. In the example above it is string, and you can change it to any other type: number, Object, any, string | number ... UPDATE: an example with types: interface BaseOptions { a: string } interface SubOptions extends BaseOptions { a: string b: string } class Base<T extends BaseOptions> { options: T constructor(options: T) { this.options = options; } static create<T extends BaseOptions>(options: T): Base<T> { return new this(options); } } class Sub<T extends SubOptions> extends Base<T> { constructor(options: T) { super(options); } } console.log(Base.create({ a: 'one' })); console.log(Sub.create({ a: 'one', b: 'two', c: '' }));
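For Problem 1 specifically, a common TypeScript idiom is to type the static method's `this` parameter as a constructor, so the return type follows whichever class `create` is called on. Below is a minimal sketch with illustrative names (Options, Base, Sub are simplified stand-ins for the question's types, not the asker's exact code):

```typescript
interface Options { [key: string]: string; }

class Base<T extends Options> {
  options: T;
  constructor(options: T) { this.options = options; }

  // Typing `this` as a constructor lets TypeScript infer the subclass:
  // Sub.create(...) is then typed as Sub<...>, not Base<...>.
  static create<T extends Options, C extends Base<T>>(
    this: new (options: T) => C,
    options: T,
  ): C {
    return new this(options);
  }
}

class Sub<T extends Options> extends Base<T> {}

const sub = Sub.create({ a: 'one', b: 'two' });
console.log(sub instanceof Sub); // true
```

At runtime `new this(options)` already instantiates the calling class (as the "Long Answer" above demonstrates in plain JavaScript); the `this` parameter only teaches the type checker about it.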
How to create a subclass with more specific generics
In the following code, I've outlined two problems (see comments): I need a static method to return an instance of the current class. When it is called on a subclass (without being overwritten), it should return an instance of the subclass. I need a generic to be more specific in a subclass (Base should take BaseOptions while Sub should take SubOptions which are a superset of BaseOptions). Solutions with extends BaseOptions | SubOptions don't work for me, as I cannot enumerate all of the possible, more specific types of options to keep the code generic. Am I approaching this the wrong way altogether or is there a solution to my problems? interface BaseOptions { a: string } interface SubOptions extends BaseOptions { a: string b: string } class Base<T extends BaseOptions> { options: T constructor(options: T) { this.options = options; } // Problem 1: When TypeScript is concerned, this method always returns an // instance of "Base", even when calling 'create' on a subclass (that does not // override 'create'). How can this be fixed? static create<T extends BaseOptions>(options: T): Base<T> { return new this(options); } } // Problem 2: Class 'Sub' requires more specific options than class 'Base', but // TypeScript does not allow me to validate it: // 2417: Class static side 'typeof Sub' // incorrectly extends base class static // side 'typeof Base'. class Sub<T extends SubOptions> extends Base<T> { static create<T extends SubOptions>(options: T): Sub<T> { return new this(options); } }
[ "Short Answer:\nRemove all the unnecessary stuff and keep it real. This works:\ntype BaseOptions = {\n a: string;\n};\n\ntype SubOptions = {\n a: string;\n b: string;\n} & BaseOptions;\n\nclass Base<BO extends BaseOptions> {\n options: BO;\n\n constructor(options: BO) {\n this.options = options;\n }\n\n static create<BO extends BaseOptions>(options: BO): Base<BO> {\n return new this(options);\n }\n}\n\nclass Sub<SO extends SubOptions> extends Base<SO> {}\n\nYou can force TypeScript to use the correct type like this:\nconst sub = Sub.create({ a: 'a', b: 'b' }) as Sub<...>;\n\n\nLong Answer:\nThis might be a TypeScript error - that's it; I really said that. Here is why I think so:\nUsing the code from the short answer we can create an object called sub like this:\nconst sub = Sub.create({ a: 'a', b: 'b' });\n\nTypeScript will choose to type infer Base<{ a: string, b: string } - I am guessing you want to have the more explicit definition and infer the type Sub<...>.\nTo see what the underlying JS-Interpreter does, let's see what happens in node when using plain JavaScript.\nclass Base {\n constructor(options) {\n this.options = options\n }\n\n static create(options) {\n return new this(options)\n }\n}\n\nclass Sub extends Base {}\n\nSub.create({ a: 'a', b: 'b' })\n\nThis outputs\n> Sub { options: { a: 'a', b: 'b' } }\n\nThere you have it. 
That's the object with the correct class assigned; perfectly safe to overwrite the TypeScript inferred type.\n", "interface IGenericMap<T> {\n [x: string]: T;\n}\n \nclass Base<T extends IGenericMap<string>> {\n options: T\n\n constructor(options: T) {\n this.options = options;\n }\n\n static create<T extends IGenericMap<string>>(options: T): Base<T> {\n return new this(options);\n }\n}\n\nclass Sub<T extends IGenericMap<string>> extends Base<T> {\n constructor(options: T) {\n super(options);\n }\n}\n\nconsole.log(Base.create({ a: 'one' }));\nconsole.log(Sub.create({ a: 'one', b: 'two' }));\n\nNow you can use the inheritance properly and the create methods returns the appropirate class.\nIf use the object mapping instead of BaseOptions and SubOptions inside the IGenericMap you are going to have any key with a generic value by choise. In the example above string and you can change it with any other type: number, Object, any, string | number ...\nUPDATE: an example with types:\ninterface BaseOptions {\n a: string\n}\n\ninterface SubOptions extends BaseOptions {\n a: string\n b: string\n}\n \nclass Base<T extends BaseOptions> {\n options: T\n\n constructor(options: T) {\n this.options = options;\n }\n\n static create<T extends BaseOptions>(options: T): Base<T> {\n return new this(options);\n }\n}\n\nclass Sub<T extends SubOptions> extends Base<T> {\n constructor(options: T) {\n super(options);\n }\n}\n\nconsole.log(Base.create({ a: 'one' }));\nconsole.log(Sub.create({ a: 'one', b: 'two', c: '' }));\n\n" ]
[ 1, 0 ]
[]
[]
[ "typescript", "typescript_generics" ]
stackoverflow_0074550014_typescript_typescript_generics.txt
Q: Serverless: Importing file to custom + other variables I have a serverless.common.yml, with properties that should be shared by all the services, with that: service: ixxxx custom: stage: ${opt:stage, self:provider.stage} resourcesStages: prod: prod dev: dev resourcesStage: ${self:custom.resourcesStages.${self:custom.stage}, self:custom.resourcesStages.dev} lambdaPolicyXRay: Effect: Allow Action: - xray:PutTraceSegments - xray:PutTelemetryRecords Resource: "*" And, another serverless.yml inside a services folder, which uses properties on the common file: ... custom: ${file(../../serverless.common.yml):custom} ... environment: stage: ${self:custom.stage} ... In that way, I can access the custom variables (from the common file) without a problem. Now, I want to continue to import this file to custom, but adding new variables, related to this service, to it, so I tried that: custom: common: ${file(../../serverless.common.yml):custom} wsgi: app: app.app packRequirements: false pythonRequirements: dockerizePip: non-linux And it seems it's possible to access, for example: environment: stage: ${self:custom.common.stage} But now, I'm receiving the error: Serverless Warning -------------------------------------- A valid service attribute to satisfy the declaration 'self:custom.stage' could not be found. Serverless Warning -------------------------------------- A valid service attribute to satisfy the declaration 'self:custom.stage' could not be found. Serverless Error --------------------------------------- Trying to populate non string value into a string for variable ${self:custom.stage}. Please make sure the value of the property is a strin What am I doing wrong? A: In your serverless.common.yml you must reference as if it were serverless.yml. In this case ${self:custom.stage} does not exist, but ${self:custom.common.stage} does exist. 
service: ixxxx custom: stage: ${opt:stage, self:provider.stage} resourcesStages: prod: prod dev: dev resourcesStage: ${self:custom.common.resourcesStages.${self:custom.common.stage}, self:custom.resourcesStages.dev} lambdaPolicyXRay: Effect: Allow Action: - xray:PutTraceSegments - xray:PutTelemetryRecords Resource: "*"
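On the service side, the nesting that makes ${self:custom.common.stage} resolvable looks like this sketch (service name and paths are illustrative, following the question's layout):

```yaml
# services/<name>/serverless.yml (illustrative)
service: my-service

custom:
  # shared settings imported under the `common` key
  common: ${file(../../serverless.common.yml):custom}
  # service-specific settings live alongside it
  wsgi:
    app: app.app

provider:
  name: aws
  environment:
    # note the extra `common` segment in every lookup
    stage: ${self:custom.common.stage}
```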
Serverless: Importing file to custom + other variables
I have a serverless.common.yml, with properties that should be shared by all the services, with that: service: ixxxx custom: stage: ${opt:stage, self:provider.stage} resourcesStages: prod: prod dev: dev resourcesStage: ${self:custom.resourcesStages.${self:custom.stage}, self:custom.resourcesStages.dev} lambdaPolicyXRay: Effect: Allow Action: - xray:PutTraceSegments - xray:PutTelemetryRecords Resource: "*" And, another serverless.yml inside a services folder, which uses properties on the common file: ... custom: ${file(../../serverless.common.yml):custom} ... environment: stage: ${self:custom.stage} ... In that way, I can access the custom variables (from the common file) without a problem. Now, I want to continue to import this file to custom, but adding new variables, related to this service, to it, so I tried that: custom: common: ${file(../../serverless.common.yml):custom} wsgi: app: app.app packRequirements: false pythonRequirements: dockerizePip: non-linux And it seems it's possible to access, for example: environment: stage: ${self:custom.common.stage} But now, I'm receiving the error: Serverless Warning -------------------------------------- A valid service attribute to satisfy the declaration 'self:custom.stage' could not be found. Serverless Warning -------------------------------------- A valid service attribute to satisfy the declaration 'self:custom.stage' could not be found. Serverless Error --------------------------------------- Trying to populate non string value into a string for variable ${self:custom.stage}. Please make sure the value of the property is a strin What am I doing wrong?
[ "In your serverless.common.yml you must reference as if it were serverless.yml. In this case ${self:custom.stage} does not exist, but ${self:custom.common.stage} does exist.\nservice: ixxxx\ncustom:\n stage: ${opt:stage, self:provider.stage}\n resourcesStages:\n prod: prod\n dev: dev\n resourcesStage: ${self:custom.common.resourcesStages.${self:custom.common.stage}, self:custom.resourcesStages.dev}\n\nlambdaPolicyXRay:\n Effect: Allow\n Action:\n - xray:PutTraceSegments\n - xray:PutTelemetryRecords\n Resource: \"*\"\n\n" ]
[ 0 ]
[]
[]
[ "serverless", "serverless_framework" ]
stackoverflow_0063508975_serverless_serverless_framework.txt
Q: Why won't my square move right? I'm trying all different methods to move it in turtle module it works for up and down but not left and right? # Game creation import turtle wn = turtle.Screen() wn.title("Pong") wn.bgcolor("Black") wn.setup(width=800, height=800) wn.tracer(0) # paddle a paddle_a = turtle.Turtle() paddle_a.speed(0) paddle_a.shape("square") paddle_a.color("white") paddle_a.penup() paddle_a.goto(0, 0) # Functions def paddle_a_right(): turtle.forward(100) wn.onkeypress(paddle_a_right, 'd') while True: wn.update() Want the square to move to the right or left using 'a' or 'd' I don't know very much about turtle, I just want to program a simple game. A: There are three major issues with your code. First, you need to call wn.listen() to allow the window to receive keyboard input. Second, you do turtle.forward(100) when you mean paddle_a.forward(100). Finally, since you did tracer(0), you now need to call wn.update() anytime a change is made that you want your user to see. Here's a simplified example: from turtle import Screen, Turtle def paddle_right(): paddle.forward(10) screen.update() screen = Screen() screen.title("Pong") screen.bgcolor("Black") screen.setup(width=800, height=800) screen.tracer(0) paddle = Turtle() paddle.shape("square") paddle.color("white") paddle.penup() screen.onkeypress(paddle_right, 'd') screen.listen() screen.update() screen.mainloop()
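The key-to-movement mapping itself can be checked without a GUI. This is a plain-Python sketch of the coordinate logic the 'a'/'d' handlers should implement (turtle and the screen setup are deliberately omitted; names are illustrative):

```python
# Pure-Python sketch of the paddle movement logic (no turtle/GUI needed).
# Each bound key maps to a horizontal delta: 'a' moves left, 'd' moves right.
STEP = 10
KEY_DELTAS = {"a": -STEP, "d": +STEP}

def move_paddle(x, key):
    """Return the paddle's new x coordinate after a key press."""
    return x + KEY_DELTAS.get(key, 0)

x = 0
x = move_paddle(x, "d")   # right
x = move_paddle(x, "d")   # right again
x = move_paddle(x, "a")   # left
print(x)  # 10
```

In the real game, each handler would apply the delta with `paddle.forward(...)` (or `paddle.setx(...)`) and then call `screen.update()`, as shown in the answer above.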
Why won't my square move right? I'm trying all different methods to move it in turtle module it works for up and down but not left and right?
# Game creation import turtle wn = turtle.Screen() wn.title("Pong") wn.bgcolor("Black") wn.setup(width=800, height=800) wn.tracer(0) # paddle a paddle_a = turtle.Turtle() paddle_a.speed(0) paddle_a.shape("square") paddle_a.color("white") paddle_a.penup() paddle_a.goto(0, 0) # Functions def paddle_a_right(): turtle.forward(100) wn.onkeypress(paddle_a_right, 'd') while True: wn.update() Want the square to move to the right or left using 'a' or 'd' I don't know very much about turtle, I just want to program a simple game.
[ "There are three major issues with your code. First, you need to call wn.listen() to allow the window to receive keyboard input. Second, you do turtle.forward(100) when you mean paddle_a.forward(100). Finally, since you did tracer(0), you now need to call wn.update() anytime a change is made that you want your user to see.\nHere's a simplified example:\nfrom turtle import Screen, Turtle\n\ndef paddle_right():\n paddle.forward(10)\n screen.update()\n\nscreen = Screen()\nscreen.title(\"Pong\")\nscreen.bgcolor(\"Black\")\nscreen.setup(width=800, height=800)\nscreen.tracer(0)\n\npaddle = Turtle()\npaddle.shape(\"square\")\npaddle.color(\"white\")\npaddle.penup()\n\nscreen.onkeypress(paddle_right, 'd')\nscreen.listen()\nscreen.update()\nscreen.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074649562_python_python_turtle_turtle_graphics.txt
Q: Discord.py Showing User Badges I am trying to do a command that shows a user's badges. This is my code: @bot.command(pass_context=True) async def test(ctx, user: discord.Member): test = discord.Embed(title=f"{user.name} User's Badges", description=f"{user.public_flags}", color=0xff0000 ) await ctx.channel.send(embed=test) And the bot is responding like this <PublicUserFlags value=64> I want it to respond like this Hype Squad ... How do I do that? A: You could do str(user.public_flags.all()) to obtain a string value of all the badges an user has. Although this is an improvement, your output will still be something like: [<UserFlags.hypesquad_brilliance: 128>]. But the advantage here is that the words hypesquad and brilliance are clearly indicated in the string. Now, all you have to do is to remove [<UserFlags., _ and : 128>] from the string. Here is a way to re-define your code: @client.command(pass_context=True) async def test(ctx, user: discord.Member): # Remove unnecessary characters hypesquad_class = str(user.public_flags.all()).replace('[<UserFlags.', '').replace('>]', '').replace('_', ' ').replace( ':', '').title() # Remove digits from string hypesquad_class = ''.join([i for i in hypesquad_class if not i.isdigit()]) # Output test = discord.Embed(title=f"{user.name} User's Badges", description=f"{hypesquad_class}", color=0xff0000) await ctx.channel.send(embed=test) A: user.public_flags is not the way to access the user's profile. From the documentation, you need to use user.profile() to get attributes like premium, staff, hypesquad. Since discord.py 1.7 it is impossible to get info from the user's profile using await user.profile(). In the documentation it states that this functionality is deprecated. 
If you try it you get an error Forbidden: 403 Forbidden (error code: 20001): Bots cannot use this endpoint A: here's a lil correction to the code from @GGBerry @client.command(pass_context=True) async def test(ctx, user: discord.Member): userFlags = user.public_flags.all() for flag in userFlags: print(flag.name) user.public_flags.all() returns a list that can be iterated. The list contains flag objects of type discord.UserFlag, which cover all sorts of badges. Here is the documentation for the UserFlags: https://discordpy.readthedocs.io/en/stable/api.html?highlight=userflag#discord.UserFlags Greetings, DasMoorhuhn
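Since the flag list stringifies like [<UserFlags.hypesquad_brilliance: 128>], the string cleanup from the first answer can be exercised on a plain literal, no bot or discord connection needed (the sample value is illustrative; a real bot would pass str(user.public_flags.all())):

```python
# Demonstrates the repr cleanup on a hand-written sample string.
def badges_to_text(raw: str) -> str:
    cleaned = (raw.replace('[<UserFlags.', '')
                  .replace('>]', '')
                  .replace('_', ' ')
                  .replace(':', '')
                  .title())
    # strip the numeric flag values left over from the repr
    return ''.join(ch for ch in cleaned if not ch.isdigit()).strip()

sample = "[<UserFlags.hypesquad_brilliance: 128>]"
print(badges_to_text(sample))  # Hypesquad Brilliance
```

Iterating user.public_flags.all() and reading each flag's .name (as in the correction above) avoids this string munging entirely, so prefer that when you have the flag objects.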
Discord.py Showing User Badges
I am trying to do a command that shows a user's badges. This is my code: @bot.command(pass_context=True) async def test(ctx, user: discord.Member): test = discord.Embed(title=f"{user.name} User's Badges", description=f"{user.public_flags}", color=0xff0000 ) await ctx.channel.send(embed=test) And the bot is responding like this <PublicUserFlags value=64> I want it to respond like this Hype Squad ... How do I do that?
[ "You could do str(user.public_flags.all()) to obtain a string value of all the badges an user has. Although this is an improvement, your output will still be something like: [<UserFlags.hypesquad_brilliance: 128>]. But the advantage here is that the words hypesquad and brilliance are clearly indicated in the string. Now, all you have to do is to remove [<UserFlags., _ and : 128>] from the string.\nHere is a way to re-define your code:\[email protected](pass_context=True)\nasync def test(ctx, user: discord.Member):\n # Remove unnecessary characters\n hypesquad_class = str(user.public_flags.all()).replace('[<UserFlags.', '').replace('>]', '').replace('_',\n ' ').replace(\n ':', '').title()\n\n # Remove digits from string\n hypesquad_class = ''.join([i for i in hypesquad_class if not i.isdigit()])\n\n # Output\n test = discord.Embed(title=f\"{user.name} User's Badges\", description=f\"{hypesquad_class}\", color=0xff0000)\n await ctx.channel.send(embed=test)\n\n", "user.public_flags is not the way to access the user's profile.\nFrom the documentation, you need to use user.profile() to get attributes like premium,\nstaff, hypesquad.\nSince discord.py 1.7 it is impossible to get info from the user's profile using await user.profile(). In the documentation it states that this functionality is deprecated. If you try it you get an error Forbidden: 403 Forbidden (error code: 20001): Bots cannot use this endpoint\n", "here a lil correction from the code from @GGBerry\[email protected](pass_context=True)\nasync def test(ctx, user: discord.Member):\n userFlags = user.public_flags.all()\n for flag in userFlags:\n print(flag.name)\n\nuser.public_flags.all() returns a list, that can be iterated. in the list are flag object from the type discord.UserFlag. This object contains all sorts of badges. Here is the documentation for the UserFlags: https://discordpy.readthedocs.io/en/stable/api.html?highlight=userflag#discord.UserFlags\nGreetings, DasMoorhuhn\n" ]
[ 1, 0, 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0066951118_discord_discord.py_python.txt
Q: How to deal with NGSI-LD tenants ? Create ? List ? Delete? I have a hard time trying to find concrete information about how to deal with tenants in an NGSI-LD Context Broker. The ETSI specification defines multi-tenancy but it seems it doesn't specify any operations to create a tenant, list available tenants or remove a tenant. I presume each broker is free to implement multi-tenancy its own way, but I've searched for the tenant keyword in broker documentation (at least for Orion-LD, Stellio and Scorpio) with no success. Thanks to this stackoverflow post I've successfully created a tenant in Orion-LD. I'd like to know if there are some tenant operations (documented or undocumented) exposed by brokers. Especially any facilities to remove a tenant - along with all resources that have been created "inside". Thanks. A: So, first of all, tenants are created "on the fly". If a request comes in to create an entity, subscription, or registration, and the HTTP header "NGSILD-Tenant" specifies a tenant that does not already exist, then the tenant is created and the entity/sub/reg is created "under it". For ALL operations, the HTTP header is used. jsonldContexts are different, those are "omnipresent" and exist regardless of the tenant used. GET operations (like all other operations) can use the NGSILD-Tenant header to indicate on which tenant the GET is to be performed, but if that tenant does not exist, it will naturally not be created - an error is returned instead. There are no official endpoints to list tenants (nor delete - that would be a bit dangerous!), but in the case of Orion-LD, I implemented an endpoint for debugging purposes: GET /ngsi-ld/ex/v1/tenants. That one you can use if you please. Just remember, no other NGSI-LD broker supports that endpoint
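Since the tenant travels purely in the NGSILD-Tenant HTTP header, the request shape can be sketched without a live broker. The broker URL, tenant name, and entity below are illustrative (and the payload is stripped to a minimum - a real NGSI-LD entity would also carry an @context):

```python
import json
import urllib.request

# Build (but don't send) an entity-creation request against a hypothetical
# local Orion-LD. The tenant is selected only via the NGSILD-Tenant header;
# if it doesn't exist yet, the broker creates it on the fly.
entity = {
    "id": "urn:ngsi-ld:Building:001",
    "type": "Building",
}
req = urllib.request.Request(
    "http://localhost:1026/ngsi-ld/v1/entities",
    data=json.dumps(entity).encode(),
    headers={
        "Content-Type": "application/json",
        "NGSILD-Tenant": "tenant-a",
    },
    method="POST",
)
# urllib normalizes header names to "Capitalized-lowercase" form
print(req.get_header("Ngsild-tenant"))  # tenant-a
```

Sending the same header with a GET selects the tenant to read from; omitting it targets the default tenant.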
How to deal with NGSI-LD tenants ? Create ? List ? Delete?
I have a hard time trying to find concrete information about how to deal with tenants in a NGSI-LD Context Broker. The ETSI specification defines multi-tenancy but it seems it doesn't specify any operations to create a tenant, list available tenants and remove a tenant. I presume each broker is free to implement multi-tenancy its own way but I've searched the tenant keyword in broker documentations (at least for Orion-LD, Stellio and Scorpio) with no success. Thanks to this stackoverflow post I've successfully created a tenant in Orion-LD. I'd like to know if there are some tenants operations (documented or undocumented) exposed by brokers. Especially any facilities to remove a tenant - along with all resources that have been created "inside". Thanks.
[ "So, first of all, tenants are created \"on the fly\". In comes a request to create an entity, subscription, or registration, and the HTTP header \"NGSILD-Tenant\" specifies a tenant that does not already exists, then the tenant is created and the entity/sub/reg is created \"under it\". For ALL operations, the HTTP header is used. jsonldContexts are different, those are \"omnipresent\" and exist regardless of the tenant used.\nGET operations (like all other operations) can use the NGSILD-Tenant header to indicate on which tenant the GET is to be performed, but if that tenant does not exist, it will naturally not be created - an error is retuirned instead.\nThere are no official endpoints to list tenants (nor delete - that would be a bit dangerous!), but in the case of Orion-LD, I implemented an endpoint for debugging purposes: GET /ngsi-ld/ex/v1/tenants. That one you can use if you please. Just remember, no other NGSI-LD broker is supporting that endpoint\n" ]
[ 0 ]
[]
[]
[ "fiware", "fiware_orion", "fiware_scorpio", "fiware_stellio" ]
stackoverflow_0074641013_fiware_fiware_orion_fiware_scorpio_fiware_stellio.txt
Q: Parse context from Task to DAG in case of failure There are multiple tasks running inside a DAG according to below code. import logging from airflow import DAG from datetime import datetime, timedelta from util.email_util import Email from util.slack_alert_util import task_failure_alert from airflow.operators.dummy import DummyOperator from airflow.operators.postgres_operator import PostgresOperator def dag_failure_notification_alert(context): # Slack notification logging.info("Sending DAG Slack notification") task_failure_alert(context) # Email notification subject = 'DAG Failure Alert' from_email = '[email protected]' to_email = ['[email protected]'] dag_name = str(context['dag'])[6:-1] dag_run = str(context['dag_run'])[8:-1] message_body = """ <html> <body> <strong>Airflow DAG Failure Report</strong><br /><br /> Dag Name: {}<br /> Dag run details: {}<br /> Execution date and time: {}<br /> Run ID: {}<br /> Task Instance Key: {}<br /> Exception: {}<br /> </body> </html> """.format(dag_name, dag_run, str(context['execution_date']), str(context['run_id']), str(context['task_instance_key_str']), str(context.get('exception'))) logging.info("Message body created for DAG as: %s", message_body) email_obj = Email( {'Subject': subject, 'From': from_email, 'To': to_email, 'body': message_body, 'file': None, 'filename': '', 'body_type': 'html'}) email_obj.send() def task_failure_notification_alert(context): # Slack notification logging.info("Sending Task Slack notification") task_failure_alert(context) default_args = { "owner": "analytics", "start_date": datetime(2021, 12, 12), 'retries': 0, 'retry_delay': timedelta(), "schedule_interval": "@daily" } dag = DAG('test_alert_notification', default_args=default_args, catchup=False, on_failure_callback=dag_failure_notification_alert ) start_task = DummyOperator(task_id="start_task", dag=dag, on_failure_callback=task_failure_notification_alert) end_task = DummyOperator(task_id="end_task", dag=dag, 
on_failure_callback=task_failure_notification_alert) create_table_sql_query = ''' CREATE TABLE dummy_table (id INT NOT NULL, name VARCHAR(250) NOT NULL); ''' for i in range(5): create_table_task = PostgresOperator( sql=create_table_sql_query, task_id=str(i), postgres_conn_id="postgres_dummy_test", dag=dag, on_failure_callback=task_failure_notification_alert ) start_task >> create_table_task >> end_task DAG graph according to the above code. As we can see in the above DAG graph image that if parallel Postgres tasks i.e. 0,1,2,3,4 is failing then on_failure_callback will call the python function(task_failure_notification_alert) with context to send a slack notification. In the end, it is sending slack and email notifications both in case of DAG failure with context having on_failure_callback with dag_failure_notification_alert function call. In case of Task failure, The output seems to be like this: DAG FAIL ALERT dag: <DAG: test_alert_notification>, dag_run: <DagRun test_alert_notification @ 2022-11-29 12:03:13.245324+00:00: manual__2022-11-29T12:03:13.245324+00:00, externally triggered: True>, execution_date: 2022-11-29T12:03:13.245324+00:00, run_id: manual__2022-11-29T12:03:13.245324+00:00, task_instance_key_str: test_alert_notification__4__20221129 exception: The conn_id postgres_dummy_test isn't defined or DAG FAIL ALERT dag: <DAG: test_alert_notification>, dag_run: <DagRun test_alert_notification @ 2022-11-29 12:03:13.245324+00:00: manual__2022-11-29T12:03:13.245324+00:00, externally triggered: True>, execution_date: 2022-11-29T12:03:13.245324+00:00, run_id: manual__2022-11-29T12:03:13.245324+00:00, task_instance_key_str: test_alert_notification__5__20221129 exception: The conn_id postgres_dummy_test isn't defined for each different task. In DAG failure, the context contains an exception as None and only a single task instance key which is the last success ID. 
DAG failure Output format: DAG FAIL ALERT dag: <DAG: test_alert_notification>, dag_run: <DagRun test_alert_notification @ 2022-11-30 09:33:02.032456+00:00: manual__2022-11-30T09:33:02.032456+00:00, externally triggered: True>, execution_date: 2022-11-30T09:33:02.032456+00:00, run_id: manual__2022-11-30T09:33:02.032456+00:00, task_instance_key_str: test_alert_notification__start_task__20221130 exception: None I want to pass task failure information, i.e. exceptions and task instances, to dag_failure_notification_alert to send an email with accumulated information of all failed tasks. I tried using common global variables, i.e. exceptions and task_instances as lists, appending all task exceptions and task instances to them inside the task_failure_notification_alert function, and later using the same variables inside the dag_failure_notification_alert function, but it didn't work. I tried using a python callback as mentioned here but it works with PythonOperator only. I read about the XCOM push and pull mechanism but it focuses on sharing data between tasks (if I understand it correctly) and I'm unsure how to use it here. As I am new to Airflow, kindly suggest the best way to do it. Is there any other method which suits best this kind of requirement? A: Here is the solution I found for it, from this Stack Overflow answer. We can get the list of failed tasks by using the passed context only. e.g.
ti = context['task_instance'] for t in ti.get_dagrun().get_task_instances(state=TaskInstanceState.FAILED): # type: TaskInstance logging.info(f'failed dag: {t.dag_id}, task: {t.task_id}, url: {t.log_url}') Updating the dag_failure_notification_alert as def dag_failure_notification_alert(context): # Slack notification logging.info("Sending DAG Slack notification") task_failure_alert(context) failed_tasks = [] dag_name = None ti = context['task_instance'] for t in ti.get_dagrun().get_task_instances(state=TaskInstanceState.FAILED): # type: TaskInstance logging.info(f'failed dag: {t.dag_id}, task: {t.task_id}, url: {t.log_url}') dag_name = t.dag_id failed_tasks.append({'id': t.task_id, 'url': t.log_url}) if failed_tasks: # Email notification subject = 'DAG Failure Alert' from_email = '[email protected]' to_email = ['[email protected]'] task_url_link = "" for failed_task in failed_tasks: task_url_link += """<a href="{}">{}</a>, """.format(failed_task['url'], failed_task['id']) task_url_link = task_url_link[:-2] message_body = """ <html> <body> <strong>Airflow DAG Failure Report</strong><br /><br /> <b>Dag Name:</b> {}<br /> <b>Task Details:</b> [{}]<br /> <br /> Thanks,<br /> </body> </html> """.format(dag_name, task_url_link) logging.info("Message body created for DAG as: %s", message_body) email_obj = Email( {'Subject': subject, 'From': from_email, 'To': to_email, 'body': message_body, 'file': None, 'filename': '', 'body_type': 'html'}) email_obj.send() else: logging.info("No failure Tasks fetched.") Hope this helps anyone who faces the same issue; that's why I posted the answer.
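The link-building loop inside that callback is plain string work and can be sanity-checked on its own, without Airflow (the task ids and URLs below are made up):

```python
# Builds the comma-separated '<a href="...">id</a>' list used in the
# email body, the same way the callback above does, on hand-made data.
def build_task_links(failed_tasks):
    task_url_link = ""
    for failed_task in failed_tasks:
        task_url_link += '<a href="{}">{}</a>, '.format(
            failed_task['url'], failed_task['id'])
    return task_url_link[:-2]  # drop the trailing ", "

links = build_task_links([
    {'id': '3', 'url': 'http://localhost:8080/log?task_id=3'},
    {'id': '4', 'url': 'http://localhost:8080/log?task_id=4'},
])
print(links)
```

Keeping this formatting in a small helper like this also makes the on_failure_callback itself shorter and easier to unit test.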
Parse context from Task to DAG in case of failure
There are multiple tasks running inside a DAG according to below code. import logging from airflow import DAG from datetime import datetime, timedelta from util.email_util import Email from util.slack_alert_util import task_failure_alert from airflow.operators.dummy import DummyOperator from airflow.operators.postgres_operator import PostgresOperator def dag_failure_notification_alert(context): # Slack notification logging.info("Sending DAG Slack notification") task_failure_alert(context) # Email notification subject = 'DAG Failure Alert' from_email = '[email protected]' to_email = ['[email protected]'] dag_name = str(context['dag'])[6:-1] dag_run = str(context['dag_run'])[8:-1] message_body = """ <html> <body> <strong>Airflow DAG Failure Report</strong><br /><br /> Dag Name: {}<br /> Dag run details: {}<br /> Execution date and time: {}<br /> Run ID: {}<br /> Task Instance Key: {}<br /> Exception: {}<br /> </body> </html> """.format(dag_name, dag_run, str(context['execution_date']), str(context['run_id']), str(context['task_instance_key_str']), str(context.get('exception'))) logging.info("Message body created for DAG as: %s", message_body) email_obj = Email( {'Subject': subject, 'From': from_email, 'To': to_email, 'body': message_body, 'file': None, 'filename': '', 'body_type': 'html'}) email_obj.send() def task_failure_notification_alert(context): # Slack notification logging.info("Sending Task Slack notification") task_failure_alert(context) default_args = { "owner": "analytics", "start_date": datetime(2021, 12, 12), 'retries': 0, 'retry_delay': timedelta(), "schedule_interval": "@daily" } dag = DAG('test_alert_notification', default_args=default_args, catchup=False, on_failure_callback=dag_failure_notification_alert ) start_task = DummyOperator(task_id="start_task", dag=dag, on_failure_callback=task_failure_notification_alert) end_task = DummyOperator(task_id="end_task", dag=dag, on_failure_callback=task_failure_notification_alert) create_table_sql_query = ''' 
CREATE TABLE dummy_table (id INT NOT NULL, name VARCHAR(250) NOT NULL); ''' for i in range(5): create_table_task = PostgresOperator( sql=create_table_sql_query, task_id=str(i), postgres_conn_id="postgres_dummy_test", dag=dag, on_failure_callback=task_failure_notification_alert ) start_task >> create_table_task >> end_task DAG graph according to the above code. As we can see in the above DAG graph image that if parallel Postgres tasks i.e. 0,1,2,3,4 is failing then on_failure_callback will call the python function(task_failure_notification_alert) with context to send a slack notification. In the end, it is sending slack and email notifications both in case of DAG failure with context having on_failure_callback with dag_failure_notification_alert function call. In case of Task failure, The output seems to be like this: DAG FAIL ALERT dag: <DAG: test_alert_notification>, dag_run: <DagRun test_alert_notification @ 2022-11-29 12:03:13.245324+00:00: manual__2022-11-29T12:03:13.245324+00:00, externally triggered: True>, execution_date: 2022-11-29T12:03:13.245324+00:00, run_id: manual__2022-11-29T12:03:13.245324+00:00, task_instance_key_str: test_alert_notification__4__20221129 exception: The conn_id postgres_dummy_test isn't defined or DAG FAIL ALERT dag: <DAG: test_alert_notification>, dag_run: <DagRun test_alert_notification @ 2022-11-29 12:03:13.245324+00:00: manual__2022-11-29T12:03:13.245324+00:00, externally triggered: True>, execution_date: 2022-11-29T12:03:13.245324+00:00, run_id: manual__2022-11-29T12:03:13.245324+00:00, task_instance_key_str: test_alert_notification__5__20221129 exception: The conn_id postgres_dummy_test isn't defined for each different task. In DAG failure, the context contains an exception as None and only a single task instance key which is the last success ID. 
DAG failure output format: DAG FAIL ALERT dag: <DAG: test_alert_notification>, dag_run: <DagRun test_alert_notification @ 2022-11-30 09:33:02.032456+00:00: manual__2022-11-30T09:33:02.032456+00:00, externally triggered: True>, execution_date: 2022-11-30T09:33:02.032456+00:00, run_id: manual__2022-11-30T09:33:02.032456+00:00, task_instance_key_str: test_alert_notification__start_task__20221130 exception: None I want to pass task failure information, i.e. exceptions and task instances, to dag_failure_notification_alert to send an email with accumulated information about all failed tasks. I tried using common global variables, i.e. exceptions and task_instances as lists, appending all task exceptions and task instances to them inside the task_failure_notification_alert function, and later using the same variables inside the dag_failure_notification_alert function, but it didn't work. I tried using a Python callback as mentioned here, but it works with PythonOperator only. I read about the XCom push and pull mechanism, but it focuses on sharing data between tasks (if I understand it correctly) and I am unsure how to use it here. As I am new to Airflow, kindly suggest the best way to do this. Is there any other method that best suits this kind of requirement?
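For reference, the accumulation step being asked about is independent of Airflow itself: once the failed-task records have been collected (however that is done), the e-mail body can be assembled with a plain helper. The function below is a hypothetical sketch — build_failure_report and its dict field names ('id', 'url') are assumptions, not part of Airflow's API.

```python
def build_failure_report(dag_name, failed_tasks):
    """Assemble an HTML failure summary from accumulated task records.

    failed_tasks is a list of dicts like {'id': task_id, 'url': log_url};
    this is a hypothetical helper, not part of Airflow's API.
    """
    # Turn each failed task into a clickable link to its log.
    links = ", ".join(
        '<a href="{url}">{id}</a>'.format(**task) for task in failed_tasks
    )
    return (
        "<html><body>"
        "<strong>Airflow DAG Failure Report</strong><br />"
        "Dag Name: {}<br />"
        "Task Details: [{}]"
        "</body></html>"
    ).format(dag_name, links)
```

The resulting string could then be handed to the same Email object used in the question's dag_failure_notification_alert.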
[ "Here is the solution I find for it from the stack overflow answer.\nWe can get the list of failed tasks by using passed context only.\ne.g.\nti = context['task_instance']\nfor t in ti.get_dagrun().get_task_instances(state=TaskInstanceState.FAILED): # type: TaskInstance\nlogging.info(f'failed dag: {t.dag_id}, task: {t.task_id}, url: {t.log_url}')\n\nUpdating the dag_failure_notification_alert as\ndef dag_failure_notification_alert(context):\n # Slack notification\n logging.info(\"Sending DAG Slack notification\")\n task_failure_alert(context)\n\n failed_tasks = []\n dag_name = None\n ti = context['task_instance']\n for t in ti.get_dagrun().get_task_instances(state=TaskInstanceState.FAILED): # type: TaskInstance\n logging.info(f'failed dag: {t.dag_id}, task: {t.task_id}, url: {t.log_url}')\n dag_name = t.dag_id\n failed_tasks.append({'id': t.task_id, 'url': t.log_url})\n\n if failed_tasks:\n # Email notification\n subject = 'DAG Failure Alert'\n from_email = '[email protected]'\n to_email = ['[email protected]']\n task_url_link = \"\"\n for failed_task in failed_tasks:\n task_url_link += \"\"\"<a href=\"{}\">{}</a>, \"\"\".format(failed_task['url'], failed_task['id'])\n task_url_link = task_url_link[:-2]\n message_body = \"\"\"\n <html>\n <body>\n <strong>Airflow DAG Failure Report</strong><br /><br />\n <b>Dag Name:</b> {}<br />\n <b>Task Details:</b> [{}]<br />\n <br />\n Thanks,<br />\n </body>\n </html>\n \"\"\".format(dag_name, task_url_link)\n logging.info(\"Message body created for DAG as: %s\", message_body)\n email_obj = Email(\n {'Subject': subject, 'From': from_email, 'To': to_email, 'body': message_body, 'file': None, 'filename': '',\n 'body_type': 'html'})\n email_obj.send()\n else:\n logging.info(\"No failure Tasks fetched.\")\n\n\nHope this will help anyone if someone faces the same issue that's why I post the answer.\n" ]
[ 0 ]
[]
[]
[ "airflow", "directed_acyclic_graphs" ]
stackoverflow_0074625725_airflow_directed_acyclic_graphs.txt
Q: git: generate a single patch across multiple commits This is the situation: We created a "private" repo (say our-repo) based off an existing open-source git repo (say source-repo). We have been developing code and have around 20 merges into our repo. So, the repo moved from "State_Initial" to "State_Current". Now, for business reasons, we want to hand-over all our development to a third party. Based on some legal issues, the only option is we give them a "single" patch file with all our changes. That is a squashed patch between "State_Initial" and "State_Current". I looked around, and found git format-patch -X But, it generates "n" .patch files. Is there a way to create a single patch file, so that if we create a repo based off "source-repo" and apply the patch, it takes us to "State_Current"? A: The following command creates a single .patch file that contains multiple commits. git format-patch cc1dde0dd^..6de6d4b06 --stdout > foo.patch You can then apply it like so: git am foo.patch Note: Be sure to use ^.. instead of .. if you want the first commit SHA to be included. A: You can use git diff: git diff 0f3063094850 > ./test.patch A: Create a new branch named squashed that has a single squashed commit. This branch will have the exact same contents as your normal branch but none of the history. Look it over and if you're satisfied, use format-patch to create a patch file. $ git checkout -b squashed $(git commit-tree HEAD^{tree} -m 'Squashed history') $ git format-patch --root HEAD This is a non-destructive operation and you can switch right back to your normal development branch afterwards. You can tag the squashed branch to save a reference to what you e-mailed them, or use branch -D to delete it if you no longer need it. $ git branch -D squashed A: If for whatever reason you do not wish to create a throwaway branch and squash all your commits between state_initial and state_current and then use git format-patch, there is an alternative. 
If you branched out from state_initial and your branch is rebased on top of the source branch: git format-patch source_branch <patch_file_name> When you do git am <patch_file_name>, it will not reconstruct your entire commit chain. But this patch file will be a sequential list of changes. If you have changed and changed back things across multiple commits, it can still be visible if someone examines the patch file. A: use this command: $ git format-patch -n <commit string> -n means how many commits you want to include in this patch, and <commit string> means which commit you want to generate from. A: "The last 10 patches from HEAD in a single patch file:" git format-patch -10 HEAD --stdout > 0001-last-10-commits.patch Source: https://stackoverflow.com/a/16172120/9478470 an answer to a similar question: How can I generate a Git patch for a specific commit?
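As a worked example of the squashed-commit approach described above (a sketch run in a throwaway repository; the repo contents and file names are illustrative):

```shell
#!/bin/sh
# Build a throwaway repo with two commits, then emit ONE patch file that
# captures the entire history as a single squashed change.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email 'you@example.com'
git config user.name 'you'
echo one > a.txt && git add a.txt && git commit -qm 'first'
echo two >> a.txt && git commit -qam 'second'
# Squash the whole tree into a single parentless commit, then format it:
squashed=$(git commit-tree 'HEAD^{tree}' -m 'Squashed history')
git format-patch --root --stdout "$squashed" > all-changes.patch
```

Applying all-changes.patch with git am on a fresh clone of the source repo then reproduces the final state in one commit, with none of the intermediate history.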
git: generate a single patch across multiple commits
This is the situation: We created a "private" repo (say our-repo) based off an existing open-source git repo (say source-repo). We have been developing code and have around 20 merges into our repo. So, the repo moved from "State_Initial" to "State_Current". Now, for business reasons, we want to hand-over all our development to a third party. Based on some legal issues, the only option is we give them a "single" patch file with all our changes. That is a squashed patch between "State_Initial" and "State_Current". I looked around, and found git format-patch -X But, it generates "n" .patch files. Is there a way to create a single patch file, so that if we create a repo based off "source-repo" and apply the patch, it takes us to "State_Current"?
[ "The following command creates a single .patch file that contains multiple commits.\ngit format-patch cc1dde0dd^..6de6d4b06 --stdout > foo.patch\n\nYou can then apply it like so:\ngit am foo.patch\n\nNote: Be sure to use ^.. instead of .. if you want the first commit SHA to be included.\n", "You can use git diff:\ngit diff 0f3063094850 > ./test.patch\n", "Create a new branch named squashed that has a single squashed commit. This branch will have the exact same contents as your normal branch but none of the history.\nLook it over and if you're satisfied, use format-patch to create a patch file.\n$ git checkout -b squashed $(git commit-tree HEAD^{tree} -m 'Squashed history')\n$ git format-patch --root HEAD\n\nThis is a non-destructive operation and you can switch right back to your normal development branch afterwards. You can tag the squashed branch to save a reference to what you e-mailed them, or use branch -D to delete it if you no longer need it.\n$ git branch -D squashed\n\n", "If for whatever reason you do not wish to create a throwaway branch and squash all your commits between state_initial and state_current and then use git format-patch, there is an alternative. If you branched out from state_initial and your branch is rebased on top of the source branch:\ngit format-patch source_branch <patch_file_name>\nWhen you do git am <patch_file_name>, it will not reconstruct your entire commit chain. But this patch file will be a sequential list of changes. 
If you have changed and changed back things across multiple commits, it can still be visible if someone examines the patch file.\n", "use this command:\n$ git format-patch -n <commit string>\n\n-n means how many commits you want to generate for this patch, and means which commit you want to generate from.\n", "\"The last 10 patches from head in a single patch files:\"\ngit format-patch -10 HEAD --stdout > 0001-last-10-commits.patch\nSource: https://stackoverflow.com/a/16172120/9478470 an answer to similar question: How can I generate a Git patch for a specific commit?\n" ]
[ 84, 3, 2, 0, 0, 0 ]
[ "git format-patch can take a revision range as an argument. See git help format-patch:\n\nSYNOPSIS\ngit format-patch [-k] [(-o|--output-directory) <dir> | --stdout]\n ... (many flags omitted)\n [--progress]\n [<common diff options>]\n [ <since> | <revision range> ] \n\nDESCRIPTION\n…\nThere are two ways to specify which commits to operate on.\n\nA single commit, <since>, specifies that the commits leading to the tip of the current branch that are not in the history that leads to the <since> to be output.\nGeneric <revision range> expression (see \"SPECIFYING REVISIONS\" section in gitrevisions(7)) means the commits in the specified range.\n\n\nFor example, the following command generates a patch for the last three commits on the current branch:\ngit format-patch HEAD~3..HEAD\n\n" ]
[ -2 ]
[ "git", "patch" ]
stackoverflow_0052884437_git_patch.txt
Q: How to create script for exact search excluding % sign I'm creating a table that I'd like to search by he first column (number + % sign). I've got it set up so I can search for the number, but need it to return the exact number without necessitating the % to be included. I've found a close solution here, but it doesn't account for excluding the % in the search. Here's an portion of the table along with the search script: function myFunction() { var input, filter, table, tr, td, i, txtValue; input = document.getElementById("myInput"); filter = input.value.toUpperCase(); table = document.getElementById("myTable"); tr = table.getElementsByTagName("tr"); for (i = 0; i < tr.length; i++) { td = tr[i].getElementsByTagName("td")[0]; if (td) { txtValue = td.textContent || td.innerText; if (txtValue.toUpperCase().indexOf(filter) > -1) { tr[i].style.display = ""; } else { tr[i].style.display = "none"; } } } } <input type="text" id="myInput" onkeyup="myFunction()" placeholder="Search ABV" title="Type in ABV"> <table id="myTable"> <tr class="header"> <th style="width:100%;">ABV</th> <th><strong>1 oz.</strong></th> <th><strong>2 oz.</strong></th> <th><strong>3 oz.</strong></th> <th><strong>4 oz.</strong></th> <th><strong>5 oz.</strong></th> <th><strong>6 oz.</strong></th> <th><strong>7 oz.</strong></th> <th><strong>8 oz.</strong></th> <th><strong>9 oz.</strong></th> <th><strong>10 oz.</strong></th> <th><strong>11 oz.</strong></th> <th><strong>12 oz.</strong></th> <th><strong>13 oz.</strong></th> <th><strong>14 oz.</strong></th> <th><strong>15 oz.</strong></th> <th><strong>16 oz.</strong></th> <th><strong>17 oz.</strong></th> <th><strong>18 oz.</strong></th> <th><strong>19 oz.</strong></th> <th><strong>20 oz.</strong></th> <th><strong>21 oz.</strong></th> <th><strong>22 oz.</strong></th> </tr> <tr> <td><strong>3%</strong></td> <td>8</td> <td>15</td> <td>23</td> <td>30</td> <td>38</td> <td>45</td> <td>53</td> <td>60</td> <td>68</td> <td>75</td> <td>83</td> <td>90</td> 
<td>98</td> <td>105</td> <td>113</td> <td>120</td> <td>128</td> <td>135</td> <td>143</td> <td>150</td> <td>158</td> <td>165</td> </tr> <tr> <td><strong>3.5%</strong></td> <td>9</td> <td>18</td> <td>27</td> <td>35</td> <td>44</td> <td>53</td> <td>62</td> <td>70</td> <td>79</td> <td>88</td> <td>97</td> <td>105</td> <td>114</td> <td>123</td> <td>132</td> <td>140</td> <td>149</td> <td>158</td> <td>167</td> <td>175</td> <td>184</td> <td>193</td> </tr> <tr> <td><strong>4%</strong></td> <td>10</td> <td>20</td> <td>30</td> <td>40</td> <td>50</td> <td>60</td> <td>70</td> <td>80</td> <td>90</td> <td>100</td> <td>110</td> <td>120</td> <td>130</td> <td>140</td> <td>150</td> <td>160</td> <td>170</td> <td>180</td> <td>190</td> <td>200</td> <td>210</td> <td>220</td> </tr> <tr> <td><strong>4.5%</strong></td> <td>12</td> <td>23</td> <td>34</td> <td>45</td> <td>57</td> <td>68</td> <td>79</td> <td>90</td> <td>101</td> <td>113</td> <td>124</td> <td>135</td> <td>147</td> <td>158</td> <td>169</td> <td>180</td> <td>192</td> <td>203</td> <td>214</td> <td>225</td> <td>237</td> <td>248</td> </tr> <tr> <td><strong>4.6%</strong></td> <td>12</td> <td>23</td> <td>35</td> <td>46</td> <td>58</td> <td>69</td> <td>81</td> <td>92</td> <td>104</td> <td>115</td> <td>127</td> <td>138</td> <td>150</td> <td>161</td> <td>173</td> <td>184</td> <td>196</td> <td>207</td> <td>219</td> <td>230</td> <td>242</td> <td>253</td> </tr> <tr> <td><strong>4.7%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>47</td> <td>59</td> <td>71</td> <td>83</td> <td>94</td> <td>106</td> <td>118</td> <td>130</td> <td>141</td> <td>153</td> <td>165</td> <td>177</td> <td>188</td> <td>200</td> <td>212</td> <td>224</td> <td>235</td> <td>247</td> <td>259</td> </tr> <tr> <td><strong>4.8%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>48</td> <td>60</td> <td>72</td> <td>84</td> <td>96</td> <td>108</td> <td>120</td> <td>132</td> <td>144</td> <td>156</td> <td>168</td> <td>180</td> <td>192</td> <td>204</td> <td>216</td> 
<td>228</td> <td>240</td> <td>252</td> <td>264</td> </tr> <tr> <td><strong>4.9%</strong></td> <td>13</td> <td>25</td> <td>37</td> <td>49</td> <td>62</td> <td>74</td> <td>86</td> <td>98</td> <td>110</td> <td>123</td> <td>135</td> <td>147</td> <td>160</td> <td>172</td> <td>184</td> <td>196</td> <td>209</td> <td>221</td> <td>233</td> <td>245</td> <td>258</td> <td>270</td> </tr> <tr> <td><strong>5%</strong></td> <td>13</td> <td>25</td> <td>38</td> <td>50</td> <td>63</td> <td>75</td> <td>88</td> <td>100</td> <td>113</td> <td>125</td> <td>138</td> <td>150</td> <td>163</td> <td>175</td> <td>188</td> <td>200</td> <td>213</td> <td>225</td> <td>238</td> <td>250</td> <td>263</td> <td>275</td> </tr> <tr> <td><strong>5.1%</strong></td> <td>13</td> <td>26</td> <td>39</td> <td>51</td> <td>64</td> <td>77</td> <td>90</td> <td>102</td> </tr> </table> A: A quick and dirty solution would simply update the if statement to compare the value of the TD along with the % attached to the filter variable. Plus adding an || or statement for an empty filter allows you to reset the table too. 
if (txtValue.toUpperCase() == (filter + "%") || filter == "") { function myFunction() { var input, filter, table, tr, td, i, txtValue; input = document.getElementById("myInput"); filter = input.value.toUpperCase(); table = document.getElementById("myTable"); tr = table.getElementsByTagName("tr"); for (i = 0; i < tr.length; i++) { td = tr[i].getElementsByTagName("td")[0]; if (td) { txtValue = td.textContent || td.innerText; if (txtValue.toUpperCase() == (filter + "%") || filter == "") { tr[i].style.display = ""; } else { tr[i].style.display = "none"; } } } } <input type="text" id="myInput" onkeyup="myFunction()" placeholder="Search ABV" title="Type in ABV"> <table id="myTable"> <tr class="header"> <th style="width:100%;">ABV</th> <th><strong>1 oz.</strong></th> <th><strong>2 oz.</strong></th> <th><strong>3 oz.</strong></th> <th><strong>4 oz.</strong></th> <th><strong>5 oz.</strong></th> <th><strong>6 oz.</strong></th> <th><strong>7 oz.</strong></th> <th><strong>8 oz.</strong></th> <th><strong>9 oz.</strong></th> <th><strong>10 oz.</strong></th> <th><strong>11 oz.</strong></th> <th><strong>12 oz.</strong></th> <th><strong>13 oz.</strong></th> <th><strong>14 oz.</strong></th> <th><strong>15 oz.</strong></th> <th><strong>16 oz.</strong></th> <th><strong>17 oz.</strong></th> <th><strong>18 oz.</strong></th> <th><strong>19 oz.</strong></th> <th><strong>20 oz.</strong></th> <th><strong>21 oz.</strong></th> <th><strong>22 oz.</strong></th> </tr> <tr> <td><strong>3%</strong></td> <td>8</td> <td>15</td> <td>23</td> <td>30</td> <td>38</td> <td>45</td> <td>53</td> <td>60</td> <td>68</td> <td>75</td> <td>83</td> <td>90</td> <td>98</td> <td>105</td> <td>113</td> <td>120</td> <td>128</td> <td>135</td> <td>143</td> <td>150</td> <td>158</td> <td>165</td> </tr> <tr> <td><strong>3.5%</strong></td> <td>9</td> <td>18</td> <td>27</td> <td>35</td> <td>44</td> <td>53</td> <td>62</td> <td>70</td> <td>79</td> <td>88</td> <td>97</td> <td>105</td> <td>114</td> <td>123</td> <td>132</td> 
<td>140</td> <td>149</td> <td>158</td> <td>167</td> <td>175</td> <td>184</td> <td>193</td> </tr> <tr> <td><strong>4%</strong></td> <td>10</td> <td>20</td> <td>30</td> <td>40</td> <td>50</td> <td>60</td> <td>70</td> <td>80</td> <td>90</td> <td>100</td> <td>110</td> <td>120</td> <td>130</td> <td>140</td> <td>150</td> <td>160</td> <td>170</td> <td>180</td> <td>190</td> <td>200</td> <td>210</td> <td>220</td> </tr> <tr> <td><strong>4.5%</strong></td> <td>12</td> <td>23</td> <td>34</td> <td>45</td> <td>57</td> <td>68</td> <td>79</td> <td>90</td> <td>101</td> <td>113</td> <td>124</td> <td>135</td> <td>147</td> <td>158</td> <td>169</td> <td>180</td> <td>192</td> <td>203</td> <td>214</td> <td>225</td> <td>237</td> <td>248</td> </tr> <tr> <td><strong>4.6%</strong></td> <td>12</td> <td>23</td> <td>35</td> <td>46</td> <td>58</td> <td>69</td> <td>81</td> <td>92</td> <td>104</td> <td>115</td> <td>127</td> <td>138</td> <td>150</td> <td>161</td> <td>173</td> <td>184</td> <td>196</td> <td>207</td> <td>219</td> <td>230</td> <td>242</td> <td>253</td> </tr> <tr> <td><strong>4.7%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>47</td> <td>59</td> <td>71</td> <td>83</td> <td>94</td> <td>106</td> <td>118</td> <td>130</td> <td>141</td> <td>153</td> <td>165</td> <td>177</td> <td>188</td> <td>200</td> <td>212</td> <td>224</td> <td>235</td> <td>247</td> <td>259</td> </tr> <tr> <td><strong>4.8%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>48</td> <td>60</td> <td>72</td> <td>84</td> <td>96</td> <td>108</td> <td>120</td> <td>132</td> <td>144</td> <td>156</td> <td>168</td> <td>180</td> <td>192</td> <td>204</td> <td>216</td> <td>228</td> <td>240</td> <td>252</td> <td>264</td> </tr> <tr> <td><strong>4.9%</strong></td> <td>13</td> <td>25</td> <td>37</td> <td>49</td> <td>62</td> <td>74</td> <td>86</td> <td>98</td> <td>110</td> <td>123</td> <td>135</td> <td>147</td> <td>160</td> <td>172</td> <td>184</td> <td>196</td> <td>209</td> <td>221</td> <td>233</td> <td>245</td> <td>258</td> 
<td>270</td> </tr> <tr> <td><strong>5%</strong></td> <td>13</td> <td>25</td> <td>38</td> <td>50</td> <td>63</td> <td>75</td> <td>88</td> <td>100</td> <td>113</td> <td>125</td> <td>138</td> <td>150</td> <td>163</td> <td>175</td> <td>188</td> <td>200</td> <td>213</td> <td>225</td> <td>238</td> <td>250</td> <td>263</td> <td>275</td> </tr> <tr> <td><strong>5.1%</strong></td> <td>13</td> <td>26</td> <td>39</td> <td>51</td> <td>64</td> <td>77</td> <td>90</td> <td>102</td> </tr> </table> A: I would suggest using data attributes to both contain the values and to toggle the visual with CSS - whatever you want that to be. Here I just add some color to the rows found/not found. Now this can be HTML table or a div or whatever without changing the code function keyUpEventHandler(event) { const input = event.currentTarget; const filterValue = input.value.toUpperCase(); const table = document.getElementById("myTable"); const searchRows = table.querySelectorAll(".searchable-row"); Array.from(searchRows).forEach((rowElement) => { rowElement.dataset.hasMatch = "no-match"; }); Array.from(searchRows).filter(rowElement => rowElement.querySelector(".search-column").dataset.searchValue.toUpperCase() === filterValue ).forEach((rowElement) => { rowElement.dataset.hasMatch = "has-match"; }); } const searchABV = document.getElementById("myInput"); searchABV.addEventListener('keyup', keyUpEventHandler, false); .searchable-row[data-has-match="no-match"] { border: solid blue 1px; background-color: #ddddff; } .searchable-row[data-has-match="has-match"] { border: solid green 1px; background-color: #ddffdd; } <input type="text" id="myInput" placeholder="Search ABV" title="Type in ABV"> <table id="myTable"> <thead> <tr class="header"> <th style="width:100%;">ABV</th> <th><strong>1 oz.</strong></th> <th><strong>2 oz.</strong></th> <th><strong>3 oz.</strong></th> <th><strong>4 oz.</strong></th> <th><strong>5 oz.</strong></th> <th><strong>6 oz.</strong></th> <th><strong>7 oz.</strong></th> 
<th><strong>8 oz.</strong></th> <th><strong>9 oz.</strong></th> <th><strong>10 oz.</strong></th> <th><strong>11 oz.</strong></th> <th><strong>12 oz.</strong></th> <th><strong>13 oz.</strong></th> <th><strong>14 oz.</strong></th> <th><strong>15 oz.</strong></th> <th><strong>16 oz.</strong></th> <th><strong>17 oz.</strong></th> <th><strong>18 oz.</strong></th> <th><strong>19 oz.</strong></th> <th><strong>20 oz.</strong></th> <th><strong>21 oz.</strong></th> <th><strong>22 oz.</strong></th> </tr> </thead> <tr class="searchable-row"> <td class="search-column" data-search-value="3"><strong>3%</strong></td> <td>8</td> <td>15</td> <td>23</td> <td>30</td> <td>38</td> <td>45</td> <td>53</td> <td>60</td> <td>68</td> <td>75</td> <td>83</td> <td>90</td> <td>98</td> <td>105</td> <td>113</td> <td>120</td> <td>128</td> <td>135</td> <td>143</td> <td>150</td> <td>158</td> <td>165</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="3.5"><strong>3.5%</strong></td> <td>9</td> <td>18</td> <td>27</td> <td>35</td> <td>44</td> <td>53</td> <td>62</td> <td>70</td> <td>79</td> <td>88</td> <td>97</td> <td>105</td> <td>114</td> <td>123</td> <td>132</td> <td>140</td> <td>149</td> <td>158</td> <td>167</td> <td>175</td> <td>184</td> <td>193</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="4"><strong>4%</strong></td> <td>10</td> <td>20</td> <td>30</td> <td>40</td> <td>50</td> <td>60</td> <td>70</td> <td>80</td> <td>90</td> <td>100</td> <td>110</td> <td>120</td> <td>130</td> <td>140</td> <td>150</td> <td>160</td> <td>170</td> <td>180</td> <td>190</td> <td>200</td> <td>210</td> <td>220</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="4.5"><strong>4.5%</strong></td> <td>12</td> <td>23</td> <td>34</td> <td>45</td> <td>57</td> <td>68</td> <td>79</td> <td>90</td> <td>101</td> <td>113</td> <td>124</td> <td>135</td> <td>147</td> <td>158</td> <td>169</td> <td>180</td> <td>192</td> <td>203</td> <td>214</td> 
<td>225</td> <td>237</td> <td>248</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="4.6"><strong>4.6%</strong></td> <td>12</td> <td>23</td> <td>35</td> <td>46</td> <td>58</td> <td>69</td> <td>81</td> <td>92</td> <td>104</td> <td>115</td> <td>127</td> <td>138</td> <td>150</td> <td>161</td> <td>173</td> <td>184</td> <td>196</td> <td>207</td> <td>219</td> <td>230</td> <td>242</td> <td>253</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="4.7"><strong>4.7%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>47</td> <td>59</td> <td>71</td> <td>83</td> <td>94</td> <td>106</td> <td>118</td> <td>130</td> <td>141</td> <td>153</td> <td>165</td> <td>177</td> <td>188</td> <td>200</td> <td>212</td> <td>224</td> <td>235</td> <td>247</td> <td>259</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="4.8"><strong>4.8%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>48</td> <td>60</td> <td>72</td> <td>84</td> <td>96</td> <td>108</td> <td>120</td> <td>132</td> <td>144</td> <td>156</td> <td>168</td> <td>180</td> <td>192</td> <td>204</td> <td>216</td> <td>228</td> <td>240</td> <td>252</td> <td>264</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="4.9"><strong>4.9%</strong></td> <td>13</td> <td>25</td> <td>37</td> <td>49</td> <td>62</td> <td>74</td> <td>86</td> <td>98</td> <td>110</td> <td>123</td> <td>135</td> <td>147</td> <td>160</td> <td>172</td> <td>184</td> <td>196</td> <td>209</td> <td>221</td> <td>233</td> <td>245</td> <td>258</td> <td>270</td> </tr> <tr class="searchable-row"> <td class="search-column" data-search-value="5"><strong>5%</strong></td> <td>13</td> <td>25</td> <td>38</td> <td>50</td> <td>63</td> <td>75</td> <td>88</td> <td>100</td> <td>113</td> <td>125</td> <td>138</td> <td>150</td> <td>163</td> <td>175</td> <td>188</td> <td>200</td> <td>213</td> <td>225</td> <td>238</td> <td>250</td> <td>263</td> <td>275</td> </tr> <tr 
class="searchable-row"> <td class="search-column" data-search-value="5.1"><strong>5.1%</strong></td> <td>13</td> <td>26</td> <td>39</td> <td>51</td> <td>64</td> <td>77</td> <td>90</td> <td>102</td> </tr> </table> A: When filtering, you have two options: Exact matches Cell visibility const filterText = document.querySelector('#filterText'), exactMatch = document.querySelector('#exactMatch'), hideCells = document.querySelector('#hideCells'), table = document.querySelector('#myTable'); const findAll = (selector, parent = document) => [...parent.querySelectorAll(selector)]; function doFilter() { const hide = hideCells.checked; const filter = filterText.value.trim().toUpperCase(); findAll('tr', table).slice(1).forEach(row => { const showRow = (filter === '') || (filter.length && findAll('td', row).reduce((acc, td, index) => { const value = td.textContent.toUpperCase(); const matches = exactMatch.checked ? value === filter : value.includes(filter); td.style.visibility = !hide || (hide && matches) || index === 0 ? 'visible' : 'hidden'; return matches ? true : acc; }, false)); row.style.display = showRow ? 
'' : 'none'; }); } #myTable { border-collapse: collapse; } #myTable tr:nth-child(odd) { background: hsl(120, 75%, 90%); } #myTable tr td:nth-child(even) { background: #DDD; } #myTable tr:nth-child(odd) td:nth-child(even) { background: hsl(120, 75%, 80%); } #myTable .header { background: #AAA !important; } #myTable .header th:nth-child(even) { background: #888 !important; } <form> <input type="search" id="filterText" onInput="doFilter()" placeholder="Search ABV" title="Type in ABV" /> <input type="checkbox" id="exactMatch" onChange="doFilter()" /> <label for="exactMatch">Exact Match</label> <input type="checkbox" id="hideCells" onChange="doFilter()" /> <label for="hideCells">Hide Cells</label> </form> <hr /> <table id="myTable"> <tr class="header"> <th style="width:100%;">ABV</th> <th><strong>1 oz.</strong></th> <th><strong>2 oz.</strong></th> <th><strong>3 oz.</strong></th> <th><strong>4 oz.</strong></th> <th><strong>5 oz.</strong></th> <th><strong>6 oz.</strong></th> <th><strong>7 oz.</strong></th> <th><strong>8 oz.</strong></th> <th><strong>9 oz.</strong></th> <th><strong>10 oz.</strong></th> <th><strong>11 oz.</strong></th> <th><strong>12 oz.</strong></th> <th><strong>13 oz.</strong></th> <th><strong>14 oz.</strong></th> <th><strong>15 oz.</strong></th> <th><strong>16 oz.</strong></th> <th><strong>17 oz.</strong></th> <th><strong>18 oz.</strong></th> <th><strong>19 oz.</strong></th> <th><strong>20 oz.</strong></th> <th><strong>21 oz.</strong></th> <th><strong>22 oz.</strong></th> </tr> <tr> <td><strong>3%</strong></td> <td>8</td> <td>15</td> <td>23</td> <td>30</td> <td>38</td> <td>45</td> <td>53</td> <td>60</td> <td>68</td> <td>75</td> <td>83</td> <td>90</td> <td>98</td> <td>105</td> <td>113</td> <td>120</td> <td>128</td> <td>135</td> <td>143</td> <td>150</td> <td>158</td> <td>165</td> </tr> <tr> <td><strong>3.5%</strong></td> <td>9</td> <td>18</td> <td>27</td> <td>35</td> <td>44</td> <td>53</td> <td>62</td> <td>70</td> <td>79</td> <td>88</td> <td>97</td> 
<td>105</td> <td>114</td> <td>123</td> <td>132</td> <td>140</td> <td>149</td> <td>158</td> <td>167</td> <td>175</td> <td>184</td> <td>193</td> </tr> <tr> <td><strong>4%</strong></td> <td>10</td> <td>20</td> <td>30</td> <td>40</td> <td>50</td> <td>60</td> <td>70</td> <td>80</td> <td>90</td> <td>100</td> <td>110</td> <td>120</td> <td>130</td> <td>140</td> <td>150</td> <td>160</td> <td>170</td> <td>180</td> <td>190</td> <td>200</td> <td>210</td> <td>220</td> </tr> <tr> <td><strong>4.5%</strong></td> <td>12</td> <td>23</td> <td>34</td> <td>45</td> <td>57</td> <td>68</td> <td>79</td> <td>90</td> <td>101</td> <td>113</td> <td>124</td> <td>135</td> <td>147</td> <td>158</td> <td>169</td> <td>180</td> <td>192</td> <td>203</td> <td>214</td> <td>225</td> <td>237</td> <td>248</td> </tr> <tr> <td><strong>4.6%</strong></td> <td>12</td> <td>23</td> <td>35</td> <td>46</td> <td>58</td> <td>69</td> <td>81</td> <td>92</td> <td>104</td> <td>115</td> <td>127</td> <td>138</td> <td>150</td> <td>161</td> <td>173</td> <td>184</td> <td>196</td> <td>207</td> <td>219</td> <td>230</td> <td>242</td> <td>253</td> </tr> <tr> <td><strong>4.7%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>47</td> <td>59</td> <td>71</td> <td>83</td> <td>94</td> <td>106</td> <td>118</td> <td>130</td> <td>141</td> <td>153</td> <td>165</td> <td>177</td> <td>188</td> <td>200</td> <td>212</td> <td>224</td> <td>235</td> <td>247</td> <td>259</td> </tr> <tr> <td><strong>4.8%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>48</td> <td>60</td> <td>72</td> <td>84</td> <td>96</td> <td>108</td> <td>120</td> <td>132</td> <td>144</td> <td>156</td> <td>168</td> <td>180</td> <td>192</td> <td>204</td> <td>216</td> <td>228</td> <td>240</td> <td>252</td> <td>264</td> </tr> <tr> <td><strong>4.9%</strong></td> <td>13</td> <td>25</td> <td>37</td> <td>49</td> <td>62</td> <td>74</td> <td>86</td> <td>98</td> <td>110</td> <td>123</td> <td>135</td> <td>147</td> <td>160</td> <td>172</td> <td>184</td> <td>196</td> <td>209</td> 
<td>221</td> <td>233</td> <td>245</td> <td>258</td> <td>270</td> </tr> <tr> <td><strong>5%</strong></td> <td>13</td> <td>25</td> <td>38</td> <td>50</td> <td>63</td> <td>75</td> <td>88</td> <td>100</td> <td>113</td> <td>125</td> <td>138</td> <td>150</td> <td>163</td> <td>175</td> <td>188</td> <td>200</td> <td>213</td> <td>225</td> <td>238</td> <td>250</td> <td>263</td> <td>275</td> </tr> <tr> <td><strong>5.1%</strong></td> <td>13</td> <td>26</td> <td>39</td> <td>51</td> <td>64</td> <td>77</td> <td>90</td> <td>102</td> </tr> </table>
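The comparison at the heart of all three answers can be isolated into one small predicate (a hypothetical helper, not taken from any of the snippets above): strip a trailing % from the cell text and require full equality with the typed filter, so a substring like "4" no longer matches "4.5%".

```javascript
// Hypothetical helper: exact ABV match that ignores the trailing "%" sign,
// so typing "4.5" matches the cell text "4.5%" but not "14.5%".
function matchesAbv(cellText, filter) {
  const wanted = filter.trim();
  if (wanted === "") return false;                  // empty filter: no match
  const value = cellText.trim().replace(/%$/, "");  // drop one trailing "%"
  return value === wanted;
}
```

A row-filtering loop like the one in the question could then show a row when matchesAbv returns true, or when the filter is empty (to reset the table).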
How to create script for exact search excluding % sign
I'm creating a table that I'd like to search by he first column (number + % sign). I've got it set up so I can search for the number, but need it to return the exact number without necessitating the % to be included. I've found a close solution here, but it doesn't account for excluding the % in the search. Here's an portion of the table along with the search script: function myFunction() { var input, filter, table, tr, td, i, txtValue; input = document.getElementById("myInput"); filter = input.value.toUpperCase(); table = document.getElementById("myTable"); tr = table.getElementsByTagName("tr"); for (i = 0; i < tr.length; i++) { td = tr[i].getElementsByTagName("td")[0]; if (td) { txtValue = td.textContent || td.innerText; if (txtValue.toUpperCase().indexOf(filter) > -1) { tr[i].style.display = ""; } else { tr[i].style.display = "none"; } } } } <input type="text" id="myInput" onkeyup="myFunction()" placeholder="Search ABV" title="Type in ABV"> <table id="myTable"> <tr class="header"> <th style="width:100%;">ABV</th> <th><strong>1 oz.</strong></th> <th><strong>2 oz.</strong></th> <th><strong>3 oz.</strong></th> <th><strong>4 oz.</strong></th> <th><strong>5 oz.</strong></th> <th><strong>6 oz.</strong></th> <th><strong>7 oz.</strong></th> <th><strong>8 oz.</strong></th> <th><strong>9 oz.</strong></th> <th><strong>10 oz.</strong></th> <th><strong>11 oz.</strong></th> <th><strong>12 oz.</strong></th> <th><strong>13 oz.</strong></th> <th><strong>14 oz.</strong></th> <th><strong>15 oz.</strong></th> <th><strong>16 oz.</strong></th> <th><strong>17 oz.</strong></th> <th><strong>18 oz.</strong></th> <th><strong>19 oz.</strong></th> <th><strong>20 oz.</strong></th> <th><strong>21 oz.</strong></th> <th><strong>22 oz.</strong></th> </tr> <tr> <td><strong>3%</strong></td> <td>8</td> <td>15</td> <td>23</td> <td>30</td> <td>38</td> <td>45</td> <td>53</td> <td>60</td> <td>68</td> <td>75</td> <td>83</td> <td>90</td> <td>98</td> <td>105</td> <td>113</td> <td>120</td> <td>128</td> 
<td>135</td> <td>143</td> <td>150</td> <td>158</td> <td>165</td> </tr> <tr> <td><strong>3.5%</strong></td> <td>9</td> <td>18</td> <td>27</td> <td>35</td> <td>44</td> <td>53</td> <td>62</td> <td>70</td> <td>79</td> <td>88</td> <td>97</td> <td>105</td> <td>114</td> <td>123</td> <td>132</td> <td>140</td> <td>149</td> <td>158</td> <td>167</td> <td>175</td> <td>184</td> <td>193</td> </tr> <tr> <td><strong>4%</strong></td> <td>10</td> <td>20</td> <td>30</td> <td>40</td> <td>50</td> <td>60</td> <td>70</td> <td>80</td> <td>90</td> <td>100</td> <td>110</td> <td>120</td> <td>130</td> <td>140</td> <td>150</td> <td>160</td> <td>170</td> <td>180</td> <td>190</td> <td>200</td> <td>210</td> <td>220</td> </tr> <tr> <td><strong>4.5%</strong></td> <td>12</td> <td>23</td> <td>34</td> <td>45</td> <td>57</td> <td>68</td> <td>79</td> <td>90</td> <td>101</td> <td>113</td> <td>124</td> <td>135</td> <td>147</td> <td>158</td> <td>169</td> <td>180</td> <td>192</td> <td>203</td> <td>214</td> <td>225</td> <td>237</td> <td>248</td> </tr> <tr> <td><strong>4.6%</strong></td> <td>12</td> <td>23</td> <td>35</td> <td>46</td> <td>58</td> <td>69</td> <td>81</td> <td>92</td> <td>104</td> <td>115</td> <td>127</td> <td>138</td> <td>150</td> <td>161</td> <td>173</td> <td>184</td> <td>196</td> <td>207</td> <td>219</td> <td>230</td> <td>242</td> <td>253</td> </tr> <tr> <td><strong>4.7%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>47</td> <td>59</td> <td>71</td> <td>83</td> <td>94</td> <td>106</td> <td>118</td> <td>130</td> <td>141</td> <td>153</td> <td>165</td> <td>177</td> <td>188</td> <td>200</td> <td>212</td> <td>224</td> <td>235</td> <td>247</td> <td>259</td> </tr> <tr> <td><strong>4.8%</strong></td> <td>12</td> <td>24</td> <td>36</td> <td>48</td> <td>60</td> <td>72</td> <td>84</td> <td>96</td> <td>108</td> <td>120</td> <td>132</td> <td>144</td> <td>156</td> <td>168</td> <td>180</td> <td>192</td> <td>204</td> <td>216</td> <td>228</td> <td>240</td> <td>252</td> <td>264</td> </tr> <tr> 
<td><strong>4.9%</strong></td> <td>13</td> <td>25</td> <td>37</td> <td>49</td> <td>62</td> <td>74</td> <td>86</td> <td>98</td> <td>110</td> <td>123</td> <td>135</td> <td>147</td> <td>160</td> <td>172</td> <td>184</td> <td>196</td> <td>209</td> <td>221</td> <td>233</td> <td>245</td> <td>258</td> <td>270</td> </tr> <tr> <td><strong>5%</strong></td> <td>13</td> <td>25</td> <td>38</td> <td>50</td> <td>63</td> <td>75</td> <td>88</td> <td>100</td> <td>113</td> <td>125</td> <td>138</td> <td>150</td> <td>163</td> <td>175</td> <td>188</td> <td>200</td> <td>213</td> <td>225</td> <td>238</td> <td>250</td> <td>263</td> <td>275</td> </tr> <tr> <td><strong>5.1%</strong></td> <td>13</td> <td>26</td> <td>39</td> <td>51</td> <td>64</td> <td>77</td> <td>90</td> <td>102</td> </tr> </table>
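The comparison logic both answers rely on (match the typed number against the cell text with a "%" appended, and treat an empty query as "show everything") can be factored into a small standalone helper. This is a hedged sketch; the helper name `matchesAbv` is mine and does not appear in the original code:

```javascript
// Hypothetical helper: exact-match an ABV cell ("3.5%") against a typed
// query ("3.5") without requiring the user to type the % sign.
function matchesAbv(cellText, query) {
  const q = query.trim().toUpperCase();
  if (q === "") return true;                       // empty query shows every row
  return cellText.trim().toUpperCase() === q + "%"; // exact match, % appended
}

console.log(matchesAbv("3.5%", "3.5")); // true
console.log(matchesAbv("3%", "3.5"));   // false
console.log(matchesAbv("5%", ""));      // true
```

Inside the row loop, the `indexOf(filter) > -1` test would then be replaced by `matchesAbv(txtValue, filter)`.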
[ "A quick and dirty solution would simply update the if statement to compare the value of the TD along with the % attached to the filter variable. Plus adding an || or statement for an empty filter allows you to reset the table too.\nif (txtValue.toUpperCase() == (filter + \"%\") || filter == \"\") {\n\n\nfunction myFunction() {\n var input, filter, table, tr, td, i, txtValue;\n input = document.getElementById(\"myInput\");\n filter = input.value.toUpperCase();\n table = document.getElementById(\"myTable\");\n tr = table.getElementsByTagName(\"tr\");\n for (i = 0; i < tr.length; i++) {\n td = tr[i].getElementsByTagName(\"td\")[0];\n if (td) {\n txtValue = td.textContent || td.innerText;\n if (txtValue.toUpperCase() == (filter + \"%\") || filter == \"\") {\n tr[i].style.display = \"\";\n } else {\n tr[i].style.display = \"none\";\n }\n }\n }\n}\n<input type=\"text\" id=\"myInput\" onkeyup=\"myFunction()\" placeholder=\"Search ABV\" title=\"Type in ABV\">\n\n<table id=\"myTable\">\n <tr class=\"header\">\n <th style=\"width:100%;\">ABV</th>\n <th><strong>1 oz.</strong></th>\n <th><strong>2 oz.</strong></th>\n <th><strong>3 oz.</strong></th>\n <th><strong>4 oz.</strong></th>\n <th><strong>5 oz.</strong></th>\n <th><strong>6 oz.</strong></th>\n <th><strong>7 oz.</strong></th>\n <th><strong>8 oz.</strong></th>\n <th><strong>9 oz.</strong></th>\n <th><strong>10 oz.</strong></th>\n <th><strong>11 oz.</strong></th>\n <th><strong>12 oz.</strong></th>\n <th><strong>13 oz.</strong></th>\n <th><strong>14 oz.</strong></th>\n <th><strong>15 oz.</strong></th>\n <th><strong>16 oz.</strong></th>\n <th><strong>17 oz.</strong></th>\n <th><strong>18 oz.</strong></th>\n <th><strong>19 oz.</strong></th>\n <th><strong>20 oz.</strong></th>\n <th><strong>21 oz.</strong></th>\n <th><strong>22 oz.</strong></th>\n </tr>\n <tr>\n <td><strong>3%</strong></td>\n <td>8</td>\n <td>15</td>\n <td>23</td>\n <td>30</td>\n <td>38</td>\n <td>45</td>\n <td>53</td>\n <td>60</td>\n <td>68</td>\n 
<td>75</td>\n <td>83</td>\n <td>90</td>\n <td>98</td>\n <td>105</td>\n <td>113</td>\n <td>120</td>\n <td>128</td>\n <td>135</td>\n <td>143</td>\n <td>150</td>\n <td>158</td>\n <td>165</td>\n </tr>\n <tr>\n <td><strong>3.5%</strong></td>\n <td>9</td>\n <td>18</td>\n <td>27</td>\n <td>35</td>\n <td>44</td>\n <td>53</td>\n <td>62</td>\n <td>70</td>\n <td>79</td>\n <td>88</td>\n <td>97</td>\n <td>105</td>\n <td>114</td>\n <td>123</td>\n <td>132</td>\n <td>140</td>\n <td>149</td>\n <td>158</td>\n <td>167</td>\n <td>175</td>\n <td>184</td>\n <td>193</td>\n </tr>\n <tr>\n <td><strong>4%</strong></td>\n <td>10</td>\n <td>20</td>\n <td>30</td>\n <td>40</td>\n <td>50</td>\n <td>60</td>\n <td>70</td>\n <td>80</td>\n <td>90</td>\n <td>100</td>\n <td>110</td>\n <td>120</td>\n <td>130</td>\n <td>140</td>\n <td>150</td>\n <td>160</td>\n <td>170</td>\n <td>180</td>\n <td>190</td>\n <td>200</td>\n <td>210</td>\n <td>220</td>\n </tr>\n <tr>\n <td><strong>4.5%</strong></td>\n <td>12</td>\n <td>23</td>\n <td>34</td>\n <td>45</td>\n <td>57</td>\n <td>68</td>\n <td>79</td>\n <td>90</td>\n <td>101</td>\n <td>113</td>\n <td>124</td>\n <td>135</td>\n <td>147</td>\n <td>158</td>\n <td>169</td>\n <td>180</td>\n <td>192</td>\n <td>203</td>\n <td>214</td>\n <td>225</td>\n <td>237</td>\n <td>248</td>\n </tr>\n <tr>\n <td><strong>4.6%</strong></td>\n <td>12</td>\n <td>23</td>\n <td>35</td>\n <td>46</td>\n <td>58</td>\n <td>69</td>\n <td>81</td>\n <td>92</td>\n <td>104</td>\n <td>115</td>\n <td>127</td>\n <td>138</td>\n <td>150</td>\n <td>161</td>\n <td>173</td>\n <td>184</td>\n <td>196</td>\n <td>207</td>\n <td>219</td>\n <td>230</td>\n <td>242</td>\n <td>253</td>\n </tr>\n <tr>\n <td><strong>4.7%</strong></td>\n <td>12</td>\n <td>24</td>\n <td>36</td>\n <td>47</td>\n <td>59</td>\n <td>71</td>\n <td>83</td>\n <td>94</td>\n <td>106</td>\n <td>118</td>\n <td>130</td>\n <td>141</td>\n <td>153</td>\n <td>165</td>\n <td>177</td>\n <td>188</td>\n <td>200</td>\n <td>212</td>\n <td>224</td>\n 
<td>235</td>\n <td>247</td>\n <td>259</td>\n </tr>\n <tr>\n <td><strong>4.8%</strong></td>\n <td>12</td>\n <td>24</td>\n <td>36</td>\n <td>48</td>\n <td>60</td>\n <td>72</td>\n <td>84</td>\n <td>96</td>\n <td>108</td>\n <td>120</td>\n <td>132</td>\n <td>144</td>\n <td>156</td>\n <td>168</td>\n <td>180</td>\n <td>192</td>\n <td>204</td>\n <td>216</td>\n <td>228</td>\n <td>240</td>\n <td>252</td>\n <td>264</td>\n </tr>\n <tr>\n <td><strong>4.9%</strong></td>\n <td>13</td>\n <td>25</td>\n <td>37</td>\n <td>49</td>\n <td>62</td>\n <td>74</td>\n <td>86</td>\n <td>98</td>\n <td>110</td>\n <td>123</td>\n <td>135</td>\n <td>147</td>\n <td>160</td>\n <td>172</td>\n <td>184</td>\n <td>196</td>\n <td>209</td>\n <td>221</td>\n <td>233</td>\n <td>245</td>\n <td>258</td>\n <td>270</td>\n </tr>\n <tr>\n <td><strong>5%</strong></td>\n <td>13</td>\n <td>25</td>\n <td>38</td>\n <td>50</td>\n <td>63</td>\n <td>75</td>\n <td>88</td>\n <td>100</td>\n <td>113</td>\n <td>125</td>\n <td>138</td>\n <td>150</td>\n <td>163</td>\n <td>175</td>\n <td>188</td>\n <td>200</td>\n <td>213</td>\n <td>225</td>\n <td>238</td>\n <td>250</td>\n <td>263</td>\n <td>275</td>\n </tr>\n <tr>\n <td><strong>5.1%</strong></td>\n <td>13</td>\n <td>26</td>\n <td>39</td>\n <td>51</td>\n <td>64</td>\n <td>77</td>\n <td>90</td>\n <td>102</td>\n </tr>\n</table>\n\n\n\n", "I would suggest using data attributes to both contain the values and to toggle the visual with CSS - whatever you want that to be. 
Here I just add some color to the rows found/not found.\nNow this can be HTML table or a div or whatever without changing the code\n\n\nfunction keyUpEventHandler(event) {\n const input = event.currentTarget;\n const filterValue = input.value.toUpperCase();\n const table = document.getElementById(\"myTable\");\n const searchRows = table.querySelectorAll(\".searchable-row\");\n Array.from(searchRows).forEach((rowElement) => {\n rowElement.dataset.hasMatch = \"no-match\";\n });\n Array.from(searchRows).filter(rowElement =>\n rowElement.querySelector(\".search-column\").dataset.searchValue.toUpperCase() === filterValue\n ).forEach((rowElement) => {\n rowElement.dataset.hasMatch = \"has-match\";\n });\n}\nconst searchABV = document.getElementById(\"myInput\");\nsearchABV.addEventListener('keyup', keyUpEventHandler, false);\n.searchable-row[data-has-match=\"no-match\"] {\n border: solid blue 1px;\n background-color: #ddddff;\n}\n\n.searchable-row[data-has-match=\"has-match\"] {\n border: solid green 1px;\n background-color: #ddffdd;\n}\n<input type=\"text\" id=\"myInput\" placeholder=\"Search ABV\" title=\"Type in ABV\">\n\n<table id=\"myTable\">\n <thead>\n <tr class=\"header\">\n <th style=\"width:100%;\">ABV</th>\n <th><strong>1 oz.</strong></th>\n <th><strong>2 oz.</strong></th>\n <th><strong>3 oz.</strong></th>\n <th><strong>4 oz.</strong></th>\n <th><strong>5 oz.</strong></th>\n <th><strong>6 oz.</strong></th>\n <th><strong>7 oz.</strong></th>\n <th><strong>8 oz.</strong></th>\n <th><strong>9 oz.</strong></th>\n <th><strong>10 oz.</strong></th>\n <th><strong>11 oz.</strong></th>\n <th><strong>12 oz.</strong></th>\n <th><strong>13 oz.</strong></th>\n <th><strong>14 oz.</strong></th>\n <th><strong>15 oz.</strong></th>\n <th><strong>16 oz.</strong></th>\n <th><strong>17 oz.</strong></th>\n <th><strong>18 oz.</strong></th>\n <th><strong>19 oz.</strong></th>\n <th><strong>20 oz.</strong></th>\n <th><strong>21 oz.</strong></th>\n <th><strong>22 oz.</strong></th>\n 
</tr>\n </thead>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"3\"><strong>3%</strong></td>\n <td>8</td>\n <td>15</td>\n <td>23</td>\n <td>30</td>\n <td>38</td>\n <td>45</td>\n <td>53</td>\n <td>60</td>\n <td>68</td>\n <td>75</td>\n <td>83</td>\n <td>90</td>\n <td>98</td>\n <td>105</td>\n <td>113</td>\n <td>120</td>\n <td>128</td>\n <td>135</td>\n <td>143</td>\n <td>150</td>\n <td>158</td>\n <td>165</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"3.5\"><strong>3.5%</strong></td>\n <td>9</td>\n <td>18</td>\n <td>27</td>\n <td>35</td>\n <td>44</td>\n <td>53</td>\n <td>62</td>\n <td>70</td>\n <td>79</td>\n <td>88</td>\n <td>97</td>\n <td>105</td>\n <td>114</td>\n <td>123</td>\n <td>132</td>\n <td>140</td>\n <td>149</td>\n <td>158</td>\n <td>167</td>\n <td>175</td>\n <td>184</td>\n <td>193</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"4\"><strong>4%</strong></td>\n <td>10</td>\n <td>20</td>\n <td>30</td>\n <td>40</td>\n <td>50</td>\n <td>60</td>\n <td>70</td>\n <td>80</td>\n <td>90</td>\n <td>100</td>\n <td>110</td>\n <td>120</td>\n <td>130</td>\n <td>140</td>\n <td>150</td>\n <td>160</td>\n <td>170</td>\n <td>180</td>\n <td>190</td>\n <td>200</td>\n <td>210</td>\n <td>220</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"4.5\"><strong>4.5%</strong></td>\n <td>12</td>\n <td>23</td>\n <td>34</td>\n <td>45</td>\n <td>57</td>\n <td>68</td>\n <td>79</td>\n <td>90</td>\n <td>101</td>\n <td>113</td>\n <td>124</td>\n <td>135</td>\n <td>147</td>\n <td>158</td>\n <td>169</td>\n <td>180</td>\n <td>192</td>\n <td>203</td>\n <td>214</td>\n <td>225</td>\n <td>237</td>\n <td>248</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"4.6\"><strong>4.6%</strong></td>\n <td>12</td>\n <td>23</td>\n <td>35</td>\n <td>46</td>\n <td>58</td>\n <td>69</td>\n <td>81</td>\n 
<td>92</td>\n <td>104</td>\n <td>115</td>\n <td>127</td>\n <td>138</td>\n <td>150</td>\n <td>161</td>\n <td>173</td>\n <td>184</td>\n <td>196</td>\n <td>207</td>\n <td>219</td>\n <td>230</td>\n <td>242</td>\n <td>253</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"4.7\"><strong>4.7%</strong></td>\n <td>12</td>\n <td>24</td>\n <td>36</td>\n <td>47</td>\n <td>59</td>\n <td>71</td>\n <td>83</td>\n <td>94</td>\n <td>106</td>\n <td>118</td>\n <td>130</td>\n <td>141</td>\n <td>153</td>\n <td>165</td>\n <td>177</td>\n <td>188</td>\n <td>200</td>\n <td>212</td>\n <td>224</td>\n <td>235</td>\n <td>247</td>\n <td>259</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"4.8\"><strong>4.8%</strong></td>\n <td>12</td>\n <td>24</td>\n <td>36</td>\n <td>48</td>\n <td>60</td>\n <td>72</td>\n <td>84</td>\n <td>96</td>\n <td>108</td>\n <td>120</td>\n <td>132</td>\n <td>144</td>\n <td>156</td>\n <td>168</td>\n <td>180</td>\n <td>192</td>\n <td>204</td>\n <td>216</td>\n <td>228</td>\n <td>240</td>\n <td>252</td>\n <td>264</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"4.9\"><strong>4.9%</strong></td>\n <td>13</td>\n <td>25</td>\n <td>37</td>\n <td>49</td>\n <td>62</td>\n <td>74</td>\n <td>86</td>\n <td>98</td>\n <td>110</td>\n <td>123</td>\n <td>135</td>\n <td>147</td>\n <td>160</td>\n <td>172</td>\n <td>184</td>\n <td>196</td>\n <td>209</td>\n <td>221</td>\n <td>233</td>\n <td>245</td>\n <td>258</td>\n <td>270</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"5\"><strong>5%</strong></td>\n <td>13</td>\n <td>25</td>\n <td>38</td>\n <td>50</td>\n <td>63</td>\n <td>75</td>\n <td>88</td>\n <td>100</td>\n <td>113</td>\n <td>125</td>\n <td>138</td>\n <td>150</td>\n <td>163</td>\n <td>175</td>\n <td>188</td>\n <td>200</td>\n <td>213</td>\n <td>225</td>\n <td>238</td>\n <td>250</td>\n <td>263</td>\n 
<td>275</td>\n </tr>\n <tr class=\"searchable-row\">\n <td class=\"search-column\" data-search-value=\"5.1\"><strong>5.1%</strong></td>\n <td>13</td>\n <td>26</td>\n <td>39</td>\n <td>51</td>\n <td>64</td>\n <td>77</td>\n <td>90</td>\n <td>102</td>\n </tr>\n</table>\n\n\n\n", "When filtering, you have two options:\n\nExact matches\nCell visibility\n\n\n\nconst\n filterText = document.querySelector('#filterText'),\n exactMatch = document.querySelector('#exactMatch'),\n hideCells = document.querySelector('#hideCells'),\n table = document.querySelector('#myTable');\n\nconst findAll = (selector, parent = document) => \n [...parent.querySelectorAll(selector)];\n\nfunction doFilter() {\n const hide = hideCells.checked;\n const filter = filterText.value.trim().toUpperCase();\n findAll('tr', table).slice(1).forEach(row => {\n const showRow =\n (filter === '') ||\n (filter.length && findAll('td', row).reduce((acc, td, index) => {\n const value = td.textContent.toUpperCase();\n const matches = exactMatch.checked\n ? value === filter\n : value.includes(filter);\n td.style.visibility = !hide || (hide && matches) || index === 0\n ? 'visible'\n : 'hidden';\n return matches ? true : acc;\n }, false));\n row.style.display = showRow ? 
'' : 'none';\n });\n}\n#myTable {\n border-collapse: collapse;\n}\n\n#myTable tr:nth-child(odd) {\n background: hsl(120, 75%, 90%);\n}\n\n#myTable tr td:nth-child(even) {\n background: #DDD;\n}\n\n#myTable tr:nth-child(odd) td:nth-child(even) {\n background: hsl(120, 75%, 80%);\n}\n\n#myTable .header {\n background: #AAA !important;\n}\n\n#myTable .header th:nth-child(even) {\n background: #888 !important;\n}\n<form>\n <input type=\"search\" id=\"filterText\" onInput=\"doFilter()\" placeholder=\"Search ABV\" title=\"Type in ABV\" />\n <input type=\"checkbox\" id=\"exactMatch\" onChange=\"doFilter()\" />\n <label for=\"exactMatch\">Exact Match</label>\n <input type=\"checkbox\" id=\"hideCells\" onChange=\"doFilter()\" />\n <label for=\"hideCells\">Hide Cells</label>\n</form>\n<hr />\n<table id=\"myTable\">\n <tr class=\"header\">\n <th style=\"width:100%;\">ABV</th>\n <th><strong>1 oz.</strong></th>\n <th><strong>2 oz.</strong></th>\n <th><strong>3 oz.</strong></th>\n <th><strong>4 oz.</strong></th>\n <th><strong>5 oz.</strong></th>\n <th><strong>6 oz.</strong></th>\n <th><strong>7 oz.</strong></th>\n <th><strong>8 oz.</strong></th>\n <th><strong>9 oz.</strong></th>\n <th><strong>10 oz.</strong></th>\n <th><strong>11 oz.</strong></th>\n <th><strong>12 oz.</strong></th>\n <th><strong>13 oz.</strong></th>\n <th><strong>14 oz.</strong></th>\n <th><strong>15 oz.</strong></th>\n <th><strong>16 oz.</strong></th>\n <th><strong>17 oz.</strong></th>\n <th><strong>18 oz.</strong></th>\n <th><strong>19 oz.</strong></th>\n <th><strong>20 oz.</strong></th>\n <th><strong>21 oz.</strong></th>\n <th><strong>22 oz.</strong></th>\n </tr>\n <tr>\n <td><strong>3%</strong></td>\n <td>8</td>\n <td>15</td>\n <td>23</td>\n <td>30</td>\n <td>38</td>\n <td>45</td>\n <td>53</td>\n <td>60</td>\n <td>68</td>\n <td>75</td>\n <td>83</td>\n <td>90</td>\n <td>98</td>\n <td>105</td>\n <td>113</td>\n <td>120</td>\n <td>128</td>\n <td>135</td>\n <td>143</td>\n <td>150</td>\n <td>158</td>\n 
<td>165</td>\n </tr>\n <tr>\n <td><strong>3.5%</strong></td>\n <td>9</td>\n <td>18</td>\n <td>27</td>\n <td>35</td>\n <td>44</td>\n <td>53</td>\n <td>62</td>\n <td>70</td>\n <td>79</td>\n <td>88</td>\n <td>97</td>\n <td>105</td>\n <td>114</td>\n <td>123</td>\n <td>132</td>\n <td>140</td>\n <td>149</td>\n <td>158</td>\n <td>167</td>\n <td>175</td>\n <td>184</td>\n <td>193</td>\n </tr>\n <tr>\n <td><strong>4%</strong></td>\n <td>10</td>\n <td>20</td>\n <td>30</td>\n <td>40</td>\n <td>50</td>\n <td>60</td>\n <td>70</td>\n <td>80</td>\n <td>90</td>\n <td>100</td>\n <td>110</td>\n <td>120</td>\n <td>130</td>\n <td>140</td>\n <td>150</td>\n <td>160</td>\n <td>170</td>\n <td>180</td>\n <td>190</td>\n <td>200</td>\n <td>210</td>\n <td>220</td>\n </tr>\n <tr>\n <td><strong>4.5%</strong></td>\n <td>12</td>\n <td>23</td>\n <td>34</td>\n <td>45</td>\n <td>57</td>\n <td>68</td>\n <td>79</td>\n <td>90</td>\n <td>101</td>\n <td>113</td>\n <td>124</td>\n <td>135</td>\n <td>147</td>\n <td>158</td>\n <td>169</td>\n <td>180</td>\n <td>192</td>\n <td>203</td>\n <td>214</td>\n <td>225</td>\n <td>237</td>\n <td>248</td>\n </tr>\n <tr>\n <td><strong>4.6%</strong></td>\n <td>12</td>\n <td>23</td>\n <td>35</td>\n <td>46</td>\n <td>58</td>\n <td>69</td>\n <td>81</td>\n <td>92</td>\n <td>104</td>\n <td>115</td>\n <td>127</td>\n <td>138</td>\n <td>150</td>\n <td>161</td>\n <td>173</td>\n <td>184</td>\n <td>196</td>\n <td>207</td>\n <td>219</td>\n <td>230</td>\n <td>242</td>\n <td>253</td>\n </tr>\n <tr>\n <td><strong>4.7%</strong></td>\n <td>12</td>\n <td>24</td>\n <td>36</td>\n <td>47</td>\n <td>59</td>\n <td>71</td>\n <td>83</td>\n <td>94</td>\n <td>106</td>\n <td>118</td>\n <td>130</td>\n <td>141</td>\n <td>153</td>\n <td>165</td>\n <td>177</td>\n <td>188</td>\n <td>200</td>\n <td>212</td>\n <td>224</td>\n <td>235</td>\n <td>247</td>\n <td>259</td>\n </tr>\n <tr>\n <td><strong>4.8%</strong></td>\n <td>12</td>\n <td>24</td>\n <td>36</td>\n <td>48</td>\n <td>60</td>\n <td>72</td>\n 
<td>84</td>\n <td>96</td>\n <td>108</td>\n <td>120</td>\n <td>132</td>\n <td>144</td>\n <td>156</td>\n <td>168</td>\n <td>180</td>\n <td>192</td>\n <td>204</td>\n <td>216</td>\n <td>228</td>\n <td>240</td>\n <td>252</td>\n <td>264</td>\n </tr>\n <tr>\n <td><strong>4.9%</strong></td>\n <td>13</td>\n <td>25</td>\n <td>37</td>\n <td>49</td>\n <td>62</td>\n <td>74</td>\n <td>86</td>\n <td>98</td>\n <td>110</td>\n <td>123</td>\n <td>135</td>\n <td>147</td>\n <td>160</td>\n <td>172</td>\n <td>184</td>\n <td>196</td>\n <td>209</td>\n <td>221</td>\n <td>233</td>\n <td>245</td>\n <td>258</td>\n <td>270</td>\n </tr>\n <tr>\n <td><strong>5%</strong></td>\n <td>13</td>\n <td>25</td>\n <td>38</td>\n <td>50</td>\n <td>63</td>\n <td>75</td>\n <td>88</td>\n <td>100</td>\n <td>113</td>\n <td>125</td>\n <td>138</td>\n <td>150</td>\n <td>163</td>\n <td>175</td>\n <td>188</td>\n <td>200</td>\n <td>213</td>\n <td>225</td>\n <td>238</td>\n <td>250</td>\n <td>263</td>\n <td>275</td>\n </tr>\n <tr>\n <td><strong>5.1%</strong></td>\n <td>13</td>\n <td>26</td>\n <td>39</td>\n <td>51</td>\n <td>64</td>\n <td>77</td>\n <td>90</td>\n <td>102</td>\n </tr>\n</table>\n\n\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "datatables", "html", "javascript" ]
stackoverflow_0074659010_datatables_html_javascript.txt
Q: rabbitmqctl stop_app just hangs I am using Windows and I'm trying to stop the RabbitMQ app with the command rabbitmqctl stop_app, but it just hangs at the command prompt. I also tried rabbitmqctl stop_app and the same thing happens; it just hangs at the command line. I installed it on 3 other servers and ran the same command before joining the servers to a cluster with no issues, so I'm not sure why one of the servers is just hanging at the command prompt. Also, when I try to start I get the following error: Stop?? C:\Program Files\RabbitMQ Server\rabbitmq_server-3.6.9\sbin>rabbitmqctl start_app Starting node 'rabbit@server1' ... Error: stop A: You can try to run it with the --timeout parameter. This is not a solution, but at least it will not wait indefinitely. As a solution, you can clean (rm -rf) the mnesia folder, which by default is in /var/lib/rabbitmq/mnesia/ Then re-run RabbitMQ and it will work, but don't forget that it will remove all the data! If you are running a cluster and had this issue with some of the nodes in the cluster then you can do this: first: clean the mnesia folder at the nodes which are not working and not connecting to the cluster. (After doing this, a reboot is a good idea) rm -rf /var/lib/rabbitmq/mnesia/* second: remove the node from the cluster (run this command in the node which is running) rabbitmqctl forget_cluster_node rabbit@not-running-node-name later, run these in the node which is not running: rabbitmqctl stop_app rabbitmqctl reset rabbitmqctl join_cluster rabbit@running-node-name rabbitmqctl start_app A: It sounds like there may be an issue with the RabbitMQ installation on that particular server. One potential solution is to try stopping the RabbitMQ service from the Windows Services Manager. To do this, follow these steps: Press the Windows key and type "services" (without quotes), then press Enter to open the Services Manager.
In the Services Manager, scroll down until you find the "RabbitMQ" service, then right-click on it and select "Stop" from the context menu. Wait for the service to stop, then try starting it again using the "rabbitmqctl start_app" command. If this doesn't work, you may need to uninstall and then re-install RabbitMQ on that server to resolve the issue. Alternatively, you can try using the Windows Task Manager to kill the RabbitMQ process. To do this, follow these steps: Press the Ctrl+Shift+Esc keys to open the Task Manager. In the Task Manager, click on the "Details" tab and scroll down until you find the "beam.smp" process. Right-click on the "beam.smp" process and select "End Task" to kill the process. Try starting the RabbitMQ service again using the "rabbitmqctl start_app" command. If you continue to experience issues with the RabbitMQ service on that server, you may want to seek further assistance from the RabbitMQ community or from the vendor of the software.
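The "--timeout, then force-kill" idea from the first answer can be sketched as a generic shell pattern. This is a hedged illustration, not RabbitMQ-specific: `sleep 3` stands in for a hanging `rabbitmqctl stop_app`, and the `graceful_stop` function name is mine; on a real server you would substitute the actual command and a kill of the beam process.

```shell
# Try a graceful stop with a time limit; fall back to a force-kill step if it hangs.
graceful_stop() {
  limit="$1"; shift
  if timeout "$limit" "$@"; then          # coreutils timeout: exit 124 on hang
    echo "stopped cleanly"
  else
    echo "timed out after ${limit}s - force-kill would go here"
  fi
}

graceful_stop 1 sleep 3   # hangs past the limit -> times out
graceful_stop 3 sleep 0   # finishes in time    -> clean stop
```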
rabbitmqctl stop_app just hangs
I am using windows and am Im trying to stop the rabbitmq app with the command rabbitmqctl stop_app but it just hangs at the command prompt. I also tried rabbitmqctl stop_app and same thing happens , it just hangs at the command line I installed it on 3 other servers and ran the same command before joining the servers to a cluster with no issues so not sure why one of the servers is just hanging at the command prompt Also when i try to start i get the following error : Stop?? C:\Program Files\RabbitMQ Server\rabbitmq_server-3.6.9\sbin>rabbitmqctl start_app Starting node 'rabbit@server1' ... Error: stop
[ "You can try to run it with --timeout parameter.\nThis is not a solution but at least will not wait for infinitely.\nAs a solution, you can clean(rm -rf) the mnesia folder which is default in /var/lib/rabbitmq/mnesia/\nThen re-run the rabbitmq and it will work but don't forget that it will remove all the data!\nIf you are running a cluster and had this issue with some of the nodes in the cluster then you can do this:\n\nfirst: clean the mnesia folder at the nodes which are not working and not connecting to the cluster. (After doing this, reboot is a good idea)\n\n\nrm -rf /var/lib/rabbitmq/mnesia/*\n\n\n\nsecond: remove the node from the cluster (run this command in the node which is running)\n\n\nrabbitmqctl forget_cluster_node rabbit@not-running-node-name\n\n\n\nlater run these in the node which is not running\n\n\nrabbitmqctl stop_app\nrabbitmqctl reset\nrabbitmqctl join_cluster rabbit@running-node-name\nrabbitmqctl start_app\n\n\n", "It sounds like there may be an issue with the RabbitMQ installation on that particular server. One potential solution is to try stopping the RabbitMQ service from the Windows Services Manager.\nTo do this, follow these steps:\n\nPress the Windows key and type \"services\" (without quotes), then press Enter to open the Services Manager.\nIn the Services Manager, scroll down until you find the \"RabbitMQ\" service, then right-click on it and select \"Stop\" from the context menu.\nWait for the service to stop, then try starting it again using the \"rabbitmqctl start_app\" command.\n\nIf this doesn't work, you may need to uninstall and then re-install RabbitMQ on that server to resolve the issue.\nAlternatively, you can try using the Windows Task Manager to kill the RabbitMQ process. 
To do this, follow these steps:\n\nPress the Ctrl+Shift+Esc keys to open the Task Manager.\n2.In the Task Manager, click on the \"Details\" tab and scroll down until you find the \"beam.smp\" process.\nRight-click on the \"beam.smp\" process and select \"End Task\" to kill the process.\nTry starting the RabbitMQ service again using the \"rabbitmqctl start_app\" command.\n\nIf you continue to experience issues with the RabbitMQ service on that server, you may want to seek further assistance from the RabbitMQ community or from the vendor of the software.\n" ]
[ 0, 0 ]
[]
[]
[ "rabbitmq" ]
stackoverflow_0050416134_rabbitmq.txt
Q: error CS0029: Cannot implicitly convert type 'float' to 'UnityEngine.Quaternion' I have the error: error CS0029: Cannot implicitly convert type 'float' to 'UnityEngine.Quaternion' My code: case Operation.Angle: Quaternion quaternion1 = new Quaternion(values[0], values[1], values[2], values[3]); Quaternion quaternion2 = new Quaternion(values2[0], values2[1], values2[2], values2[3]); quaternion = Quaternion.Angle(quaternion1, quaternion2); break; Even using basic quaternions instead of a float, I get the same error. Documentation: https://docs.unity3d.com/ScriptReference/Quaternion.Angle.html A: Thanks to @hijinxbassist for jogging my memory; I've been working too much. Thanks again. From the documentation: Returns the angle in degrees between two rotations a and b. Example: Think of two GameObjects (A and B) moving around a third GameObject (C). Lines from C to A and C to B create a triangle which can change over time. The angle between CA and CB is the value Quaternion.Angle provides. It returns a float, not a Quaternion. Documentation: https://docs.unity3d.com/ScriptReference/Quaternion.Angle.html
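Concretely, the fix for the snippet in the question is to change the type of the variable receiving the result: `Quaternion.Angle` returns the angle in degrees as a `float`. A minimal sketch (the variable name `angleDegrees` is illustrative, not from the original code):

```csharp
// Quaternion.Angle returns the angle in degrees as a float,
// so assign the result to a float rather than a Quaternion.
Quaternion quaternion1 = new Quaternion(values[0], values[1], values[2], values[3]);
Quaternion quaternion2 = new Quaternion(values2[0], values2[1], values2[2], values2[3]);
float angleDegrees = Quaternion.Angle(quaternion1, quaternion2);
```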
error CS0029: Cannot implicitly convert type 'float' to 'UnityEngine.Quaternion'
I have error error CS0029: Cannot implicitly convert type 'float' to 'UnityEngine.Quaternion' My code : case Operation.Angle: Quaternion quaternion1 = new Quaternion(values[0], values[1], values[2], values[3]); Quaternion quaternion2 = new Quaternion(values2[0], values2[1], values2[2], values2[3]); quaternion = Quaternion.Angle(quaternion1, quaternion2); break; Even using the basic quaternions instead of a float I get the same error. Documentation : https://docs.unity3d.com/ScriptReference/Quaternion.Angle.html
[ "Thanks to @hijinxbassist it reflects on my memory, that's working too much.\nThanks again.\nReturns the angle in degrees between two rotations a and b.\nExample: Think of two GameObjects (A and B) moving around a third GameObject (C). Lines from C to A and C to B create a triangle which can change over time. The angle between CA and CB is the value Quaternion.Angle provides.\nIS RETURN A FLOAT.\nDocumentation : https://docs.unity3d.com/ScriptReference/Quaternion.Angle.html\n" ]
[ 0 ]
[]
[]
[ "c#", "unity3d" ]
stackoverflow_0074634713_c#_unity3d.txt
Q: Highlighting multiple hex_tiles by hovering in bokeh I try to visualize my data in a hex map. For this I use python bokeh and the corresponding hex_tile function in the figure class. My data belongs to one of 8 different classes, each having a different color. The image below shows the current visualization: I would like to add the possibility to change the color of the element (and ideally all its class members) when the mouse hovers over it. I know, that it is somewhat possible, as bokeh themselves provide the following example: https://docs.bokeh.org/en/latest/docs/gallery/hexbin.html However, I do not know how to implement this myself (as this seems to be a feature for the hexbin function and not the simple hex_tile function) Currently I provide my data in a ColumnDataSource: source = ColumnDataSource(data=dict( r=x_row, q=y_col, color=colors_array, ipc_class=ipc_array )) where "ipc_class" describes one of the 8 classes the element belongs to. For the mouse hover tooltip I used the following code: TOOLTIPS = [ ("index", "$index"), ("(r,q)", "(@r, @q)"), ("ipc_class", "@ipc_class") ] and then I visualized everything with: p = figure(plot_width=1600, plot_height=1000, title="Ipc to Hexes with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154', tooltips=TOOLTIPS) p.grid.visible = False p.hex_tile('q', 'r', source=source, fill_color='color') I would like the visualization to add a function, where hovering over one element will result in one of the following: 1. Highlight the current element by changing its color 2. Highlight multiple elements of the same class when one is hovered over by changing its color 3. Change the color of the outer line of the hex_tile element (or complete class) when the element is hovered over Which of these features is possible with bokeh and how would I go about it? 
EDIT: After trying to reimplement the suggestion by Tony, all elements will turn pink as soon as my mouse hits the graph and the color won't turn back. My code looks like this: source = ColumnDataSource(data=dict( x=x_row, y=y_col, color=colors_array, ipc_class=ipc_array )) p = figure(plot_width=800, plot_height=800, title="Ipc to Square with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154') p.grid.visible = False p.hex_tile('x', 'y', source=source, fill_color='color') ################################### code = ''' for (i in cb_data.renderer.data_source.data['color']) cb_data.renderer.data_source.data['color'][i] = colors[i]; if (cb_data.index.indices != null) { hovered_index = cb_data.index.indices[0]; hovered_color = cb_data.renderer.data_source.data['color'][hovered_index]; for (i = 0; i < cb_data.renderer.data_source.data['color'].length; i++) { if (cb_data.renderer.data_source.data['color'][i] == hovered_color) cb_data.renderer.data_source.data['color'][i] = 'pink'; } } cb_data.renderer.data_source.change.emit(); ''' TOOLTIPS = [ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("ipc_class", "@ipc_class") ] callback = CustomJS(args=dict(colors=colors), code=code) hover = HoverTool(tooltips=TOOLTIPS, callback=callback) p.add_tools(hover) ######################################## output_file("hexbin.html") show(p) Basically, I removed the tooltips from the figure function and moved them down to the hover tool. As I already have red in my graph, I replaced the hover color with "pink". As I am not quite sure what each line in the "code" variable is supposed to do, I am quite helpless with this. I think one mistake may be that my ColumnDataSource looks somewhat different from Tony's and I do not know what was done to "classify" the first and third element, as well as the second and fourth element, together. For me, it would be perfect if the classification were done by the "ipc_class" variable.
A: Following the discussion from the previous post, here comes the solution targeted at the OP's code (Bokeh v1.1.0). What I did is: 1) Added a HoverTool 2) Added a JS callback to the HoverTool which: Resets the hex colors to the original ones (colors_array passed in the callback) Inspects the index of the currently hovered hex (hovered_index) Gets the ipc_class of the currently hovered hex (hovered_ipc_class) Walks through data_source.data['ipc_class'] and finds all hexagons with the same ipc_class as the hovered one and sets a new color for them (pink) Sends the source.change.emit() signal to BokehJS to update the model The code: from bokeh.plotting import figure, show, output_file from bokeh.models import ColumnDataSource, CustomJS, HoverTool colors_array = ["green", "green", "blue", "blue"] x_row = [0, 1, 2, 3] y_col = [1, 1, 1, 1] ipc_array = ['A', 'B', 'A', 'B'] source = ColumnDataSource(data = dict( x = x_row, y = y_col, color = colors_array, ipc_class = ipc_array )) p = figure(plot_width = 800, plot_height = 800, title = "Ipc to Square with colors", match_aspect = True, tools = "wheel_zoom,reset,pan", background_fill_color = '#440154') p.grid.visible = False p.hex_tile('x', 'y', source = source, fill_color = 'color') ################################### code = ''' for (let i in cb_data.renderer.data_source.data['color']) cb_data.renderer.data_source.data['color'][i] = colors[i]; if (cb_data.index.indices != null) { const hovered_index = cb_data.index.indices[0]; const hovered_ipc_class = cb_data.renderer.data_source.data['ipc_class'][hovered_index]; for (let i = 0; i < cb_data.renderer.data_source.data['ipc_class'].length; i++) { if (cb_data.renderer.data_source.data['ipc_class'][i] == hovered_ipc_class) cb_data.renderer.data_source.data['color'][i] = 'pink'; } } cb_data.renderer.data_source.change.emit(); ''' TOOLTIPS = [ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("ipc_class", "@ipc_class") ] callback = CustomJS(args = dict(ipc_array = ipc_array, colors = colors_array),
code = code) hover = HoverTool(tooltips = TOOLTIPS, callback = callback) p.add_tools(hover) ######################################## output_file("hexbin.html") show(p) Result: A: Maybe something like this to start with (Bokeh v1.1.0): from bokeh.plotting import figure, show from bokeh.models import ColumnDataSource, CustomJS, HoverTool colors = ["green", "blue", "green", "blue"] source = ColumnDataSource(dict(r = [0, 1, 2, 3], q = [1, 1, 1, 1], color = colors)) plot = figure(plot_width = 300, plot_height = 300, match_aspect = True) plot.hex_tile('r', 'q', fill_color = 'color', source = source) code = ''' for (i in cb_data.renderer.data_source.data['color']) cb_data.renderer.data_source.data['color'][i] = colors[i]; if (cb_data.index.indices != null) { hovered_index = cb_data.index.indices[0]; hovered_color = cb_data.renderer.data_source.data['color'][hovered_index]; for (i = 0; i < cb_data.renderer.data_source.data['color'].length; i++) { if (cb_data.renderer.data_source.data['color'][i] == hovered_color) cb_data.renderer.data_source.data['color'][i] = 'red'; } } cb_data.renderer.data_source.change.emit(); ''' callback = CustomJS(args = dict(colors = colors), code = code) hover = HoverTool(tooltips = [('R', '@r')], callback = callback) plot.add_tools(hover) show(plot) Result: A: Another approach is to update cb_data.index.indices to include all those indices that have ipc_class in common, and add hover_color="pink" to hex_tile. So in the CustomJS code one would loop over the ipc_class column and get the indices that match the ipc_class of the currently hovered item. In this setup there is no need to update the color column in the data source. The code below was tested with Bokeh version 3.0.2.
from bokeh.plotting import figure, show, output_file from bokeh.models import ColumnDataSource, CustomJS, HoverTool colors_array = ["green", "green", "blue", "blue"] x_row = [0, 1, 2, 3] y_col = [1, 1, 1, 1] ipc_array = ['A', 'B', 'A', 'B'] source = ColumnDataSource(data = dict( x = x_row, y = y_col, color = colors_array, ipc_class = ipc_array )) plot = figure( width = 800, height = 800, title = "Ipc to Square with colors", match_aspect = True, tools = "wheel_zoom,reset,pan", background_fill_color = '#440154' ) plot.grid.visible = False plot.hex_tile( 'x', 'y', source = source, fill_color = 'color', hover_color = 'pink' # Added! ) code = ''' const hovered_index = cb_data.index.indices; const src_data = cb_data.renderer.data_source.data; if (hovered_index.length > 0) { const hovered_ipc_class = src_data['ipc_class'][hovered_index]; var idx_common_ipc_class = hovered_index; for (let i = 0; i < src_data['ipc_class'].length; i++) { if (i === hovered_index[0]) { continue; } if (src_data['ipc_class'][i] === hovered_ipc_class) { idx_common_ipc_class.push(i); } } cb_data.index.indices = idx_common_ipc_class; cb_data.renderer.data_source.change.emit(); } ''' TOOLTIPS = [ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("ipc_class", "@ipc_class") ] callback = CustomJS(code = code) hover = HoverTool( tooltips = TOOLTIPS, callback = callback ) plot.add_tools(hover) output_file("hexbin.html") show(plot)
Highlighting multiple hex_tiles by hovering in bokeh
I am trying to visualize my data in a hex map. For this I use python bokeh and the corresponding hex_tile function in the figure class. My data belongs to one of 8 different classes, each having a different color. The image below shows the current visualization: I would like to add the possibility to change the color of the element (and ideally all its class members) when the mouse hovers over it. I know that it is somewhat possible, as bokeh themselves provide the following example: https://docs.bokeh.org/en/latest/docs/gallery/hexbin.html However, I do not know how to implement this myself (as this seems to be a feature for the hexbin function and not the simple hex_tile function). Currently I provide my data in a ColumnDataSource: source = ColumnDataSource(data=dict( r=x_row, q=y_col, color=colors_array, ipc_class=ipc_array )) where "ipc_class" describes one of the 8 classes the element belongs to. For the mouse hover tooltip I used the following code: TOOLTIPS = [ ("index", "$index"), ("(r,q)", "(@r, @q)"), ("ipc_class", "@ipc_class") ] and then I visualized everything with: p = figure(plot_width=1600, plot_height=1000, title="Ipc to Hexes with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154', tooltips=TOOLTIPS) p.grid.visible = False p.hex_tile('q', 'r', source=source, fill_color='color') I would like the visualization to add a function where hovering over one element will result in one of the following: 1. Highlight the current element by changing its color 2. Highlight multiple elements of the same class when one is hovered over by changing their color 3. Change the color of the outer line of the hex_tile element (or complete class) when the element is hovered over Which of these features is possible with bokeh and how would I go about it? EDIT: After trying to reimplement the suggestion by Tony, all elements will turn pink as soon as my mouse hits the graph and the color won't turn back.
My code looks like this: source = ColumnDataSource(data=dict( x=x_row, y=y_col, color=colors_array, ipc_class=ipc_array )) p = figure(plot_width=800, plot_height=800, title="Ipc to Square with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154') p.grid.visible = False p.hex_tile('x', 'y', source=source, fill_color='color') ################################### code = ''' for (i in cb_data.renderer.data_source.data['color']) cb_data.renderer.data_source.data['color'][i] = colors[i]; if (cb_data.index.indices != null) { hovered_index = cb_data.index.indices[0]; hovered_color = cb_data.renderer.data_source.data['color'][hovered_index]; for (i = 0; i < cb_data.renderer.data_source.data['color'].length; i++) { if (cb_data.renderer.data_source.data['color'][i] == hovered_color) cb_data.renderer.data_source.data['color'][i] = 'pink'; } } cb_data.renderer.data_source.change.emit(); ''' TOOLTIPS = [ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("ipc_class", "@ipc_class") ] callback = CustomJS(args=dict(colors=colors), code=code) hover = HoverTool(tooltips=TOOLTIPS, callback=callback) p.add_tools(hover) ######################################## output_file("hexbin.html") show(p) Basically, I removed the tooltips from the figure function and moved them down to the hover tool. As I already have red in my graph, I changed the hover color to "pink". As I am not quite sure what each line in the "code" variable is supposed to do, I am quite helpless with this. I think one mistake may be that my ColumnDataSource looks somewhat different from Tony's and I do not know what was done to "classify" the first and third element, as well as the second and fourth element, together. For me, it would be perfect if the classification were done by the "ipc_class" variable.
[ "Following the discussion from previous post here comes the solution targeted for the OP code (Bokeh v1.1.0). What I did is:\n1) Added a HoverTool\n2) Added a JS callback to the HoverTool which:\n\nResets the hex colors to the original ones (colors_array passed in the callback)\nInspects the index of currently hovered hex (hovered_index)\nGets the ip_class of currently hovered hex (hovered_ip_class)\nWalks through the data_source.data['ip_class'] and finds all hexagons with the same ip_class as the hovered one and sets a new color for it (pink)\nSend source.change.emit() signal to the BokehJS to update the model\n\n\nThe code:\n\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import ColumnDataSource, CustomJS, HoverTool\n\ncolors_array = [\"green\", \"green\", \"blue\", \"blue\"]\nx_row = [0, 1, 2, 3]\ny_col = [1, 1, 1, 1]\nipc_array = ['A', 'B', 'A', 'B']\n\nsource = ColumnDataSource(data = dict(\n x = x_row,\n y = y_col,\n color = colors_array,\n ipc_class = ipc_array\n))\n\np = figure(plot_width = 800, plot_height = 800, title = \"Ipc to Square with colors\", match_aspect = True,\n tools = \"wheel_zoom,reset,pan\", background_fill_color = '#440154')\np.grid.visible = False\np.hex_tile('x', 'y', source = source, fill_color = 'color')\n\n###################################\ncode = ''' \nfor (let i in cb_data.renderer.data_source.data['color'])\n cb_data.renderer.data_source.data['color'][i] = colors[i];\n\nif (cb_data.index.indices != null) {\n const hovered_index = cb_data.index.indices[0];\n const hovered_ipc_class = cb_data.renderer.data_source.data['ipc_class'][hovered_index];\n for (let i = 0; i < cb_data.renderer.data_source.data['ipc_class'].length; i++) {\n if (cb_data.renderer.data_source.data['ipc_class'][i] == hovered_ipc_class)\n cb_data.renderer.data_source.data['color'][i] = 'pink';\n }\n}\ncb_data.renderer.data_source.change.emit();\n'''\n\nTOOLTIPS = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"(@x, @y)\"),\n 
(\"ipc_class\", \"@ipc_class\")\n]\n\ncallback = CustomJS(args = dict(ipc_array = ipc_array, colors = colors_array), code = code)\nhover = HoverTool(tooltips = TOOLTIPS, callback = callback)\np.add_tools(hover)\n########################################\n\noutput_file(\"hexbin.html\")\n\nshow(p)\n\nResult:\n\n", "Maybe something like this to start with (Bokeh v1.1.0):\nfrom bokeh.plotting import figure, show\nfrom bokeh.models import ColumnDataSource, CustomJS, HoverTool\n\ncolors = [\"green\", \"blue\", \"green\", \"blue\"]\nsource = ColumnDataSource(dict(r = [0, 1, 2, 3], q = [1, 1, 1, 1], color = colors))\nplot = figure(plot_width = 300, plot_height = 300, match_aspect = True)\nplot.hex_tile('r', 'q', fill_color = 'color', source = source)\n\ncode = ''' \nfor (i in cb_data.renderer.data_source.data['color'])\n cb_data.renderer.data_source.data['color'][i] = colors[i];\n\nif (cb_data.index.indices != null) {\n hovered_index = cb_data.index.indices[0];\n hovered_color = cb_data.renderer.data_source.data['color'][hovered_index];\n for (i = 0; i < cb_data.renderer.data_source.data['color'].length; i++) {\n if (cb_data.renderer.data_source.data['color'][i] == hovered_color)\n cb_data.renderer.data_source.data['color'][i] = 'red';\n }\n}\ncb_data.renderer.data_source.change.emit();\n'''\ncallback = CustomJS(args = dict(colors = colors), code = code)\nhover = HoverTool(tooltips = [('R', '@r')], callback = callback)\nplot.add_tools(hover)\nshow(plot)\n\nResult:\n\n", "Another approach is to update cb_data.index.indices to include all those indices that have ipc_class in common, and add hover_color=\"pink\" to hex_tile. 
So in the CustomJS code one would loop the ipc_class column and get the indices that match the ipc_class of the currently hovered item.\nIn this setup there is not need to update the color column in the data source.\nCode below tested used Bokeh version 3.0.2.\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import ColumnDataSource, CustomJS, HoverTool\n\ncolors_array = [\"green\", \"green\", \"blue\", \"blue\"]\nx_row = [0, 1, 2, 3]\ny_col = [1, 1, 1, 1]\nipc_array = ['A', 'B', 'A', 'B']\n\nsource = ColumnDataSource(data = dict(\n x = x_row,\n y = y_col,\n color = colors_array,\n ipc_class = ipc_array\n))\n\nplot = figure(\n width = 800,\n height = 800, \n title = \"Ipc to Square with colors\",\n match_aspect = True,\n tools = \"wheel_zoom,reset,pan\",\n background_fill_color = '#440154'\n)\nplot.grid.visible = False\nplot.hex_tile(\n 'x', 'y',\n source = source,\n fill_color = 'color',\n hover_color = 'pink' # Added!\n)\n\ncode = '''\n const hovered_index = cb_data.index.indices;\n const src_data = cb_data.renderer.data_source.data;\n if (hovered_index.length > 0) {\n const hovered_ipc_class = src_data['ipc_class'][hovered_index];\n var idx_common_ipc_class = hovered_index;\n for (let i = 0; i < src_data['ipc_class'].length; i++) {\n if (i === hovered_index[0]) {\n continue;\n }\n if (src_data['ipc_class'][i] === hovered_ipc_class) {\n idx_common_ipc_class.push(i);\n }\n }\n cb_data.index.indices = idx_common_ipc_class;\n cb_data.renderer.data_source.change.emit();\n }\n'''\n\nTOOLTIPS = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"(@x, @y)\"),\n (\"ipc_class\", \"@ipc_class\")\n]\n\ncallback = CustomJS(code = code)\nhover = HoverTool(\n tooltips = TOOLTIPS,\n callback = callback\n)\nplot.add_tools(hover)\n\noutput_file(\"hexbin.html\")\nshow(p)\n\n" ]
[ 7, 2, 0 ]
[]
[]
[ "bokeh", "python" ]
stackoverflow_0055947149_bokeh_python.txt
Q: SQL - Update multiple records in one query I have a table - config. Schema: config_name | config_value And I would like to update multiple records in one query. I tried this: UPDATE config SET t1.config_value = 'value' , t2.config_value = 'value2' WHERE t1.config_name = 'name1' AND t2.config_name = 'name2'; but that query is wrong :( Can you help me? A: Try either multi-table update syntax UPDATE config t1 JOIN config t2 ON t1.config_name = 'name1' AND t2.config_name = 'name2' SET t1.config_value = 'value', t2.config_value = 'value2'; Here is a SQLFiddle demo or conditional update UPDATE config SET config_value = CASE config_name WHEN 'name1' THEN 'value' WHEN 'name2' THEN 'value2' ELSE config_value END WHERE config_name IN('name1', 'name2'); Here is a SQLFiddle demo A: You can accomplish it with INSERT as below: INSERT INTO mytable (id, a, b, c) VALUES (1, 'a1', 'b1', 'c1'), (2, 'a2', 'b2', 'c2'), (3, 'a3', 'b3', 'c3'), (4, 'a4', 'b4', 'c4'), (5, 'a5', 'b5', 'c5'), (6, 'a6', 'b6', 'c6') ON DUPLICATE KEY UPDATE id=VALUES(id), a=VALUES(a), b=VALUES(b), c=VALUES(c); This inserts new values into the table, but if the primary key is duplicated (already in the table), the values you specify are updated instead and the same record is not inserted a second time. A: in my case I had to update more than 1000 records; instead of issuing a separate update query each time, I preferred this: UPDATE mst_users SET base_id = CASE user_id WHEN 78 THEN 999 WHEN 77 THEN 88 ELSE base_id END WHERE user_id IN(78, 77) 78 and 77 are the user ids, and for those user ids I need to update base_id to 999 and 88 respectively. This works for me.
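The conditional CASE-based UPDATE above can be sanity-checked quickly outside MySQL. A minimal sketch using Python's built-in sqlite3 module (the question targets MySQL; SQLite is used here only because it ships with Python and accepts the same CASE syntax — table and values mirror the question):

```python
import sqlite3

# In-memory table mirroring the question's schema
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config (config_name TEXT PRIMARY KEY, config_value TEXT)")
conn.executemany(
    "INSERT INTO config VALUES (?, ?)",
    [("name1", "old1"), ("name2", "old2"), ("name3", "old3")],
)

# One UPDATE touching two rows, each getting its own value
conn.execute("""
    UPDATE config
       SET config_value = CASE config_name
                            WHEN 'name1' THEN 'value'
                            WHEN 'name2' THEN 'value2'
                            ELSE config_value
                          END
     WHERE config_name IN ('name1', 'name2')
""")

print(dict(conn.execute("SELECT config_name, config_value FROM config")))
# → {'name1': 'value', 'name2': 'value2', 'name3': 'old3'}
```

The same UPDATE statement runs unchanged on MySQL; only the connection setup differs.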
A: instead of this UPDATE staff SET salary = 1200 WHERE name = 'Bob'; UPDATE staff SET salary = 1200 WHERE name = 'Jane'; UPDATE staff SET salary = 1200 WHERE name = 'Frank'; UPDATE staff SET salary = 1200 WHERE name = 'Susan'; UPDATE staff SET salary = 1200 WHERE name = 'John'; you can use UPDATE staff SET salary = 1200 WHERE name IN ('Bob', 'Frank', 'John'); A: maybe it will be useful for someone: for PostgreSQL 9.5 this works like a charm INSERT INTO tabelname(id, col2, col3, col4) VALUES (1, 1, 1, 'text for col4'), (DEFAULT,1,4,'another text for col4') ON CONFLICT (id) DO UPDATE SET col2 = EXCLUDED.col2, col3 = EXCLUDED.col3, col4 = EXCLUDED.col4 this SQL updates an existing record and inserts a new one if it does not exist (2 in 1) A: Camille's solution worked. Turned it into a basic PHP function, which writes up the SQL statement. Hope this helps someone else. function _bulk_sql_update_query($table, $array) { /* * Example: INSERT INTO mytable (id, a, b, c) VALUES (1, 'a1', 'b1', 'c1'), (2, 'a2', 'b2', 'c2'), (3, 'a3', 'b3', 'c3'), (4, 'a4', 'b4', 'c4'), (5, 'a5', 'b5', 'c5'), (6, 'a6', 'b6', 'c6') ON DUPLICATE KEY UPDATE id=VALUES(id), a=VALUES(a), b=VALUES(b), c=VALUES(c); */ $sql = ""; $columns = array_keys($array[0]); $columns_as_string = implode(', ', $columns); $sql .= " INSERT INTO $table (" . $columns_as_string . ") VALUES "; $len = count($array); foreach ($array as $index => $values) { $sql .= '("'; $sql .= implode('", "', $array[$index]) . "\""; $sql .= ')'; $sql .= ($index == $len - 1) ? "" : ", \n"; } $sql .= "\nON DUPLICATE KEY UPDATE \n"; $len = count($columns); foreach ($columns as $index => $column) { $sql .= "$column=VALUES($column)"; $sql .= ($index == $len - 1) ? "" : ", \n"; } $sql .= ";"; return $sql; } A: Execute the code below to update any number of rows, where the parent ID is the id you want to copy the data from and the child IDs are the ids of the rows that need to be updated; you just need to supply the parent id and the child ids to update all the rows you need with a small script.
UPDATE [Table] SET column1 = (SELECT column1 FROM Table WHERE IDColumn = [Parent ID]), column2 = (SELECT column2 FROM Table WHERE IDColumn = [Parent ID]), column3 = (SELECT column3 FROM Table WHERE IDColumn = [Parent ID]), column4 = (SELECT column4 FROM Table WHERE IDColumn = [Parent ID]) WHERE IDColumn IN ([List of child Ids]) A: Execute the code below if you want to update all records in all columns: update config set column1='value',column2='value'...columnN='value'; and if you want to update all columns of a particular row, then execute the code below: update config set column1='value',column2='value'...columnN='value' where column1='value' A: Assuming you have the list of values to update in an Excel spreadsheet with config_value in column A1 and config_name in B1, you can easily write up the query there using an Excel formula like =CONCAT("UPDATE config SET config_value = ","'",A1,"'", " WHERE config_name = ","'",B1,"'") A: INSERT INTO tablename (name, salary) VALUES ('Bob', 1125), ('Jane', 1200), ('Frank', 1100), ('Susan', 1175), ('John', 1150) ON DUPLICATE KEY UPDATE salary = VALUES(salary); A: UPDATE 2021 / MySql v8.0.20 and later The most upvoted answer advises using the VALUES function, which is now DEPRECATED for the ON DUPLICATE KEY UPDATE syntax. With v8.0.20 you get a deprecation warning with the VALUES function: INSERT INTO chart (id, flag) VALUES (1, 'FLAG_1'),(2, 'FLAG_2') ON DUPLICATE KEY UPDATE id = VALUES(id), flag = VALUES(flag); [HY000][1287] 'VALUES function' is deprecated and will be removed in a future release. Please use an alias (INSERT INTO ... VALUES (...)
AS alias) and replace VALUES(col) in the ON DUPLICATE KEY UPDATE clause with alias.col instead. Use the new alias syntax instead: official MySQL worklog Docs INSERT INTO chart (id, flag) VALUES (1, 'FLAG_1'),(2, 'FLAG_2') AS aliased ON DUPLICATE KEY UPDATE flag=aliased.flag; A: Try either multi-table update syntax. Try copying and running this SQL query: CREATE TABLE #temp (id int, name varchar(50)) CREATE TABLE #temp2 (id int, name varchar(50)) INSERT INTO #temp (id, name) VALUES (1,'abc'), (2,'xyz'), (3,'mno'), (4,'abc') INSERT INTO #temp2 (id, name) VALUES (2,'def'), (1,'mno1') SELECT * FROM #temp SELECT * FROM #temp2 UPDATE t SET name = CASE WHEN t.id = t1.id THEN t1.name ELSE t.name END FROM #temp t INNER JOIN #temp2 t1 on t.id = t1.id select * from #temp select * from #temp2 drop table #temp drop table #temp2 A: just make a transaction, with multiple update statements, and commit. In the error case, you can just roll back the modifications, since they are wrapped in the transaction. START TRANSACTION; /*Multiple update statement*/ COMMIT; (This syntax is for MySQL, for PostgreSQL, replace 'START TRANSACTION' by 'BEGIN') A: UPDATE table name SET field name = 'value' WHERE table name.primary key A: If you need to update several rows at a time, the alternative is a prepared statement: the database compiles a query pattern you provide the first time, and keeps the compiled result for the current connection (depends on implementation). then you update all the rows by sending the short label of the prepared statement with different parameters in SQL syntax, instead of sending the entire UPDATE statement several times for several updates the database parses the short label of the prepared statement, which is linked to the pre-compiled result, then performs the updates. the next time you perform row updates, the database may still use the pre-compiled result and complete the operations quickly (so the first step above can be omitted, since it may take time to compile).
PostgreSQL supports prepared statements via PREPARE/EXECUTE; many other SQL databases (e.g. MariaDB, MySQL, Oracle) also support them.
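A minimal PostgreSQL sketch of the flow described above, using the question's config table (the statement name set_config is made up for this illustration):

```sql
-- Compile the UPDATE once per session; $1 and $2 are the parameters
PREPARE set_config (text, text) AS
    UPDATE config SET config_value = $1 WHERE config_name = $2;

-- Re-run the pre-compiled plan with different parameters
EXECUTE set_config('value',  'name1');
EXECUTE set_config('value2', 'name2');

-- Prepared statements live until the session ends or they are deallocated
DEALLOCATE set_config;
```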
SQL - Update multiple records in one query
I have a table - config. Schema: config_name | config_value And I would like to update multiple records in one query. I tried this: UPDATE config SET t1.config_value = 'value' , t2.config_value = 'value2' WHERE t1.config_name = 'name1' AND t2.config_name = 'name2'; but that query is wrong :( Can you help me?
[ "Try either multi-table update syntax\nUPDATE config t1 JOIN config t2\n ON t1.config_name = 'name1' AND t2.config_name = 'name2'\n SET t1.config_value = 'value',\n t2.config_value = 'value2';\n\nHere is a SQLFiddle demo\nor conditional update\nUPDATE config\n SET config_value = CASE config_name \n WHEN 'name1' THEN 'value' \n WHEN 'name2' THEN 'value2' \n ELSE config_value\n END\n WHERE config_name IN('name1', 'name2');\n\nHere is a SQLFiddle demo\n", "You can accomplish it with INSERT as below:\nINSERT INTO mytable (id, a, b, c)\nVALUES (1, 'a1', 'b1', 'c1'),\n(2, 'a2', 'b2', 'c2'),\n(3, 'a3', 'b3', 'c3'),\n(4, 'a4', 'b4', 'c4'),\n(5, 'a5', 'b5', 'c5'),\n(6, 'a6', 'b6', 'c6')\nON DUPLICATE KEY UPDATE id=VALUES(id),\na=VALUES(a),\nb=VALUES(b),\nc=VALUES(c);\n\nThis insert new values into table, but if primary key is duplicated (already inserted into table) that values you specify would be updated and same record would not be inserted second time.\n", "in my case I have to update the records which are more than 1000, for this instead of hitting the update query each time I preferred this, \n UPDATE mst_users \n SET base_id = CASE user_id \n WHEN 78 THEN 999 \n WHEN 77 THEN 88 \n ELSE base_id END WHERE user_id IN(78, 77)\n\n78,77 are the user Ids and for those user id I need to update the base_id 999 and 88 respectively.This works for me.\n", "instead of this \nUPDATE staff SET salary = 1200 WHERE name = 'Bob';\nUPDATE staff SET salary = 1200 WHERE name = 'Jane';\nUPDATE staff SET salary = 1200 WHERE name = 'Frank';\nUPDATE staff SET salary = 1200 WHERE name = 'Susan';\nUPDATE staff SET salary = 1200 WHERE name = 'John';\n\nyou can use \nUPDATE staff SET salary = 1200 WHERE name IN ('Bob', 'Frank', 'John');\n\n", "maybe for someone it will be useful\nfor Postgresql 9.5 works as a charm\nINSERT INTO tabelname(id, col2, col3, col4)\nVALUES\n (1, 1, 1, 'text for col4'),\n (DEFAULT,1,4,'another text for col4')\nON CONFLICT (id) DO UPDATE SET\n col2 = EXCLUDED.col2,\n 
col3 = EXCLUDED.col3,\n col4 = EXCLUDED.col4\n\nthis SQL updates existing record and inserts if new one (2 in 1)\n", "Camille's solution worked. Turned it into a basic PHP function, which writes up the SQL statement. Hope this helps someone else.\n function _bulk_sql_update_query($table, $array)\n {\n /*\n * Example:\n INSERT INTO mytable (id, a, b, c)\n VALUES (1, 'a1', 'b1', 'c1'),\n (2, 'a2', 'b2', 'c2'),\n (3, 'a3', 'b3', 'c3'),\n (4, 'a4', 'b4', 'c4'),\n (5, 'a5', 'b5', 'c5'),\n (6, 'a6', 'b6', 'c6')\n ON DUPLICATE KEY UPDATE id=VALUES(id),\n a=VALUES(a),\n b=VALUES(b),\n c=VALUES(c);\n */\n $sql = \"\";\n\n $columns = array_keys($array[0]);\n $columns_as_string = implode(', ', $columns);\n\n $sql .= \"\n INSERT INTO $table\n (\" . $columns_as_string . \")\n VALUES \";\n\n $len = count($array);\n foreach ($array as $index => $values) {\n $sql .= '(\"';\n $sql .= implode('\", \"', $array[$index]) . \"\\\"\";\n $sql .= ')';\n $sql .= ($index == $len - 1) ? \"\" : \", \\n\";\n }\n\n $sql .= \"\\nON DUPLICATE KEY UPDATE \\n\";\n\n $len = count($columns);\n foreach ($columns as $index => $column) {\n\n $sql .= \"$column=VALUES($column)\";\n $sql .= ($index == $len - 1) ? 
\"\" : \", \\n\";\n }\n\n $sql .= \";\";\n\n return $sql;\n }\n\n", "Execute the code below to update n number of rows, where Parent ID is the id you want to get the data from and Child ids are the ids u need to be updated so it's just u need to add the parent id and child ids to update all the rows u need using a small script.\n UPDATE [Table]\n SET column1 = (SELECT column1 FROM Table WHERE IDColumn = [PArent ID]),\n column2 = (SELECT column2 FROM Table WHERE IDColumn = [PArent ID]),\n column3 = (SELECT column3 FROM Table WHERE IDColumn = [PArent ID]),\n column4 = (SELECT column4 FROM Table WHERE IDColumn = [PArent ID]),\n WHERE IDColumn IN ([List of child Ids])\n\n", "Execute the below code if you want to update all record in all columns:\nupdate config set column1='value',column2='value'...columnN='value';\n\nand if you want to update all columns of a particular row then execute below code:\nupdate config set column1='value',column2='value'...columnN='value' where column1='value'\n\n", "Assuming you have the list of values to update in an Excel spreadsheet with config_value in column A1 and config_name in B1 you can easily write up the query there using an Excel formula like\n=CONCAT(\"UPDATE config SET config_value = \",\"'\",A1,\"'\", \" WHERE config_name = \",\"'\",B1,\"'\")\n", "INSERT INTO tablename\n (name, salary)\n VALUES \n ('Bob', 1125),\n ('Jane', 1200),\n ('Frank', 1100),\n ('Susan', 1175),\n ('John', 1150)\n ON DUPLICATE KEY UPDATE salary = VALUES(salary);\n\n", "UPDATE 2021 / MySql v8.0.20 and later\nThe most upvoted answer advises to use the VALUES function which is now DEPRECATED for the ON DUPLICATE KEY UPDATE syntax. With v8.0.20 you get a deprecation warning with the VALUES function:\nINSERT INTO chart (id, flag)\nVALUES (1, 'FLAG_1'),(2, 'FLAG_2')\nON DUPLICATE KEY UPDATE id = VALUES(id), flag = VALUES(flag);\n\n\n[HY000][1287] 'VALUES function' is deprecated and will be removed in a future release. Please use an alias (INSERT INTO ... 
VALUES (...) AS alias) and replace VALUES(col) in the ON DUPLICATE KEY UPDATE clause with alias.col instead\n\nUse the new alias syntax instead:\n\nofficial MySQL worklog\nDocs\n\nINSERT INTO chart (id, flag) \nVALUES (1, 'FLAG_1'),(2, 'FLAG_2') AS aliased\nON DUPLICATE KEY UPDATE flag=aliased.flag;\n\n", "Try either multi-table update syntax\nTry it copy and SQL query:\nCREATE TABLE #temp (id int, name varchar(50))\nCREATE TABLE #temp2 (id int, name varchar(50))\n\nINSERT INTO #temp (id, name)\nVALUES (1,'abc'), (2,'xyz'), (3,'mno'), (4,'abc')\n\nINSERT INTO #temp2 (id, name) \nVALUES (2,'def'), (1,'mno1')\n\nSELECT * FROM #temp\nSELECT * FROM #temp2\n\nUPDATE t\nSET name = CASE WHEN t.id = t1.id THEN t1.name ELSE t.name END\nFROM #temp t \nINNER JOIN #temp2 t1 on t.id = t1.id\n \nselect * from #temp\nselect * from #temp2\n\ndrop table #temp\ndrop table #temp2\n\n", "just make a transaction statement, with multiple update statement and commit. In error case, you can just rollback modification handle by starting transaction.\nSTART TRANSACTION;\n/*Multiple update statement*/\nCOMMIT;\n\n(This syntax is for MySQL, for PostgreSQL, replace 'START TRANSACTION' by 'BEGIN')\n", "UPDATE table name SET field name = 'value' WHERE table name.primary key\n", "If you need to update several rows at a time, the alternative is prepared statement:\n\ndatabase complies a query pattern you provide the first time, keep the compiled result for current connection (depends on implementation).\nthen you updates all the rows, by sending shortened label of the prepared function with different parameters in SQL syntax, instead of sending entire UPDATE statement several times for several updates\nthe database parse the shortened label of the prepared function , which is linked to the pre-compiled result, then perform the updates.\nnext time when you perform row updates, the database may still use the pre-compiled result and quickly complete the operations (so the first step above can be 
omitted since it may take time to compile).\n\nHere is PostgreSQL example of prepare statement, many of SQL databases (e.g. MariaDB,MySQL, Oracle) also support it.\n" ]
[ 217, 190, 32, 27, 10, 8, 6, 5, 4, 3, 3, 0, 0, 0, 0 ]
[]
[]
[ "mysql", "record" ]
stackoverflow_0020255138_mysql_record.txt
Q: EBS Snapshot; Restarting EC2 Instance before Snapshot complete It is my understanding that when you create an EBS Snapshot, any data written to the volume up to the time that the snapshot was started will be included. I.e., even if the snapshot is in the "pending" state, it is still safe to write to the EBS volume. Is this still considered the case for taking EBS snapshots of root volumes? In the documentation, it is stated that when you create a snapshot for Amazon EBS volumes that serve as root devices, you should stop the instance before taking the snapshot. Does this imply that we can stop the instance, begin the snapshot, and then restart the instance before the snapshot has completed? The reason I'm asking is because our snapshots are taking >15 minutes, which is timing out our Snapshot Management Lambda before it can restart the instance. A: It should be safe, see https://serverfault.com/questions/548731/can-i-re-attach-my-ebs-while-a-snapshot-is-pending You should stop the instance when creating a root volume snapshot because you cannot unmount the root volume beforehand; otherwise you would risk a corrupted snapshot.
EBS Snapshot; Restarting EC2 Instance before Snapshot complete
It is my understanding that when you create an EBS Snapshot, any data written to the volume up to the time that the snapshot was started, will be included. I.e, even if the snapshot is in the "pending" state, it is still safe to write to the EBS volume. Is this still considered the case for taking EBS snapshots of root volumes? In the documentation, it is stated that when you create a snapshot for Amazon EBS volumes that serve as root devices, you should stop the instance before taking the snapshot. Does this imply that we can stop the instance, begin the snapshot, and then restart the instance before the snapshot has completed? The reason I'm asking is because our snapshots are taking >15 minutes, which is timing out our Snapshot Management Lambda before it can restart the instance.
[ "It should be safe, see https://serverfault.com/questions/548731/can-i-re-attach-my-ebs-while-a-snapshot-is-pending\nYou should stop the instance when creating a root volume snapshot because you can not unmount root volume beforehand, hence you would risk your snapshot would be corrupted.\n" ]
[ 0 ]
[]
[]
[ "amazon_ebs", "amazon_ec2", "amazon_web_services" ]
stackoverflow_0072984491_amazon_ebs_amazon_ec2_amazon_web_services.txt
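The flow discussed in the question and answer can be sketched with boto3's EC2 client calls (`stop_instances`, `create_snapshot`, `start_instances`, and the `instance_stopped` waiter): stop the instance, start the snapshot, and restart immediately while the snapshot is still "pending", since an EBS snapshot captures the volume state as of the moment it is initiated. This is a hedged sketch, not the asker's actual Snapshot Management Lambda; the client is passed in as a parameter so the logic stays testable.

```python
def snapshot_root_and_restart(ec2, instance_id, volume_id):
    """Stop the instance, start a root-volume snapshot, restart immediately.

    An EBS snapshot is point-in-time as of its start, so the instance can
    be restarted while the snapshot is still 'pending' -- the Lambda does
    not have to wait the >15 minutes for completion.
    """
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    snapshot = ec2.create_snapshot(
        VolumeId=volume_id,
        Description="root snapshot of " + instance_id,
    )

    # Restart right away instead of waiting for the snapshot to complete.
    ec2.start_instances(InstanceIds=[instance_id])
    return snapshot["SnapshotId"]
```

In a real Lambda, `ec2` would be `boto3.client("ec2")`; here it is any object exposing those four methods.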
Q: setuptools pyproject.toml equivalent to `python setup.py clean --all` I'm migrating from setup.py to pyproject.toml. The commands to install my package appear to be the same, but I can't find what the pyproject.toml command for cleaning up build artifacts is. What is the equivalent to python setup.py clean --all? A: The distutils command clean is not needed for a pyproject.toml based build. Modern tools invoking PEP517/PEP518 hooks, such as build, create a temporary directory or a cache directory to store intermediate files while building, rather than littering the project directory with a build subdirectory. Anyway, it was not really an exciting command in the first place and rm -rf build does the same job. A: I ran into this same issue when I was migrating. What wim answered seems to be mostly true. If you do as the setuptools documentation says and use python -m build then the build directory will not be created, but a dist will. However if you do pip install . a build directory will be left behind even if you are using a pyproject.toml file. This can cause issues if you change your package structure or rename files as sometimes the old version that is in the build directory will be installed instead of your current changes. Personally I run pip install . && rm -rf build or pip install . && rmdir /s /q build for Windows. This could be expanded to remove any other unwanted artifacts.
setuptools pyproject.toml equivalent to `python setup.py clean --all`
I'm migrating from setup.py to pyproject.toml. The commands to install my package appear to be the same, but I can't find what the pyproject.toml command for cleaning up build artifacts is. What is the equivalent to python setup.py clean --all?
[ "The distutils command clean is not needed for a pyproject.toml based build. Modern tools invoking PEP517/PEP518 hooks, such as build, create a temporary directory or a cache directory to store intermediate files while building, rather than littering the project directory with a build subdirectory.\nAnyway, it was not really an exciting command in the first place and rm -rf build does the same job.\n", "I ran into this same issue when I was migrating. What wim answered seems to be mostly true. If you do as the setuptools documentation says and use python -m build then the build directory will not be created, but a dist will. However if you do pip install . a build directory will be left behind even if you are using a pyproject.toml file. This can cause issues if you change your package structure or rename files as sometimes the old version that is in the build directory will be installed instead of your current changes. Personally I run pip install . && rm -rf build or pip install . && rmdir /s /q build for Windows. This could be expanded to remove any other unwanted artifacts.\n" ]
[ 4, 1 ]
[]
[]
[ "pyproject.toml", "python", "setuptools" ]
stackoverflow_0072468946_pyproject.toml_python_setuptools.txt
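Since there is no pyproject.toml equivalent of `clean --all`, the `rm -rf build` step from the answers can be expressed as a small portable helper. This is only a sketch; the directory names are the usual setuptools/pip leftovers, so adjust the list for your project.

```python
import shutil
from pathlib import Path

def clean_build_artifacts(project_root="."):
    """Remove build/, dist/ and *.egg-info left behind by setuptools/pip."""
    root = Path(project_root)
    removed = []
    candidates = [root / "build", root / "dist"]
    candidates += list(root.glob("*.egg-info"))
    for path in candidates:
        if path.is_dir():
            shutil.rmtree(path)
            removed.append(path.name)
    return sorted(removed)
```

Because it uses `shutil.rmtree`, the same call works on Windows too, avoiding the `rm -rf` vs `rmdir /s /q` split mentioned above.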
Q: Blur an image from JavaScript Javascript Code: const activeSlide = document.querySelector('.slide.active'); const slideBgImg = getComputedStyle(activeSlide).backgroundImage; container.style.backgroundImage =slideBgImg; HTML <div class="slide-container"> <div class="slide" data-slide-no="1"></div> <div class="slide" data-slide-no="2"></div> <div class="slide" data-slide-no="3"></div> <div class="slide" data-slide-no="4"></div> <div class="slide" data-slide-no="5"></div> <div class="slide" data-slide-no="6"></div> <div class="slide" data-slide-no="7"></div> <div class="slide" data-slide-no="8"></div> <div class="slide" data-slide-no="9"></div> <div class="slide" data-slide-no="10"></div> <div class="slide" data-slide-no="11"></div> <div class="slide" data-slide-no="12"></div> </div> I want to change the background of slide.active container and it must be equal to the image that is clicked on the page. This code changes the background, but I want to add a blur effect to the image. Is there any way to do so? I don't want to blur the container; I actually want to blur the slideBgImage A: background-image does not accept a blur modifier, so "url('url') blur" is invalid CSS and has no effect. Apply the CSS filter property to an element that carries only the image instead, e.g. layer.style.filter = "blur(5px)" on a dedicated background layer, or blur a ::before pseudo-element that holds the background so the slide's foreground content stays sharp.
Blur an image from JavaScript
Javascript Code: const activeSlide = document.querySelector('.slide.active'); const slideBgImg = getComputedStyle(activeSlide).backgroundImage; container.style.backgroundImage =slideBgImg; HTML <div class="slide-container"> <div class="slide" data-slide-no="1"></div> <div class="slide" data-slide-no="2"></div> <div class="slide" data-slide-no="3"></div> <div class="slide" data-slide-no="4"></div> <div class="slide" data-slide-no="5"></div> <div class="slide" data-slide-no="6"></div> <div class="slide" data-slide-no="7"></div> <div class="slide" data-slide-no="8"></div> <div class="slide" data-slide-no="9"></div> <div class="slide" data-slide-no="10"></div> <div class="slide" data-slide-no="11"></div> <div class="slide" data-slide-no="12"></div> </div> I want to change the background of slide.active container and it must be equal to the image that is clicked on the page. This code changes the background, but I want to add a blur effect to the image. Is there any way to do so? I don't want to blur the container; I actually want to blur the slideBgImage
[ "document.querySelector(\"div\").style.backgroundImage = \"url('url') blur\"\n\n" ]
[ 0 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0074619681_css_html_javascript.txt
Q: imaplib STORE failed - Mailbox has read-only access. Cannot Delete Yahoo Email trying to delete emails in my yahoo account using imaplib. i'm new to python, figured out most of the code but unable to find anything that works relating to this error. imap = imaplib.IMAP4_SSL(imap_server) imap.login(email_address, password) imap.select("Learn", readonly=False) con = imaplib.IMAP4_SSL('imap.mail.yahoo.com',993) con.login(email_address, password) con.select('Learn',readonly=False) imap.select('"Learn"', "(UNSEEN)") for i in '1': typ, msg_data = imap.fetch('1', '(RFC822)') for response_part in msg_data: if isinstance(response_part, tuple): msg = email.message_from_bytes(response_part[1]) for header in [ 'from' ]: print('%-8s: %s' % (header.upper(), msg[header])) imap.store(i, "+FLAGS", "\\Deleted") #tried commented codes below and same error #imap.expunge() #result, data = imap.uid('STORE', str(i) , '+FLAGS', '(\\Deleted)') #imap.uid('STORE', i, '+X-GM-LABELS', '\\Trash') con.close() con.logout() i get the error below STORE command error: BAD [b'[CANNOT] STORE failed - Mailbox has read-only access'] any help would be greatly appreciated A: imap.select('"Learn"', "(UNSEEN)") Select does not take a search criterion. The second parameter is “readonly”, so this is the same as: imap.select('"Learn"', readonly="(UNSEEN)") Which as a non-empty string is the same as: imap.select('"Learn"', readonly=True) Which is why you can’t make any changes to that mailbox. Delete the second parameter: imap.select('"Learn"') You appear to be wanting to do a search for unseen messages. Use search for this.
imaplib STORE failed - Mailbox has read-only access. Cannot Delete Yahoo Email
trying to delete emails in my yahoo account using imaplib. i'm new to python, figured out most of the code but unable to find anything that works relating to this error. imap = imaplib.IMAP4_SSL(imap_server) imap.login(email_address, password) imap.select("Learn", readonly=False) con = imaplib.IMAP4_SSL('imap.mail.yahoo.com',993) con.login(email_address, password) con.select('Learn',readonly=False) imap.select('"Learn"', "(UNSEEN)") for i in '1': typ, msg_data = imap.fetch('1', '(RFC822)') for response_part in msg_data: if isinstance(response_part, tuple): msg = email.message_from_bytes(response_part[1]) for header in [ 'from' ]: print('%-8s: %s' % (header.upper(), msg[header])) imap.store(i, "+FLAGS", "\\Deleted") #tried commented codes below and same error #imap.expunge() #result, data = imap.uid('STORE', str(i) , '+FLAGS', '(\\Deleted)') #imap.uid('STORE', i, '+X-GM-LABELS', '\\Trash') con.close() con.logout() i get the error below STORE command error: BAD [b'[CANNOT] STORE failed - Mailbox has read-only access'] any help would be greatly appreciated
[ "imap.select('\"Learn\"', \"(UNSEEN)\")\n\nSelect does not take a search criterion. The second parameter is “readonly”, so this is the same as:\nimap.select('\"Learn\"', readonly=\"(UNSEEN)\")\n\nWhich as a non-empty string is the same as:\nimap.select('\"Learn\"', readonly=True)\n\nWhich is why you can’t make any changes to that mailbox. Delete the second parameter:\nimap.select('\"Learn\"')\n\nYou appear to be wanting to do a search for unseen messages. Use search for this.\n" ]
[ 0 ]
[]
[]
[ "email", "imap", "imaplib", "python" ]
stackoverflow_0074650452_email_imap_imaplib_python.txt
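Putting the accepted fix together: select the mailbox with no second positional argument (so it stays writable) and move the UNSEEN filter into a SEARCH command. This is a sketch; in real use the connection passed in would be an `imaplib.IMAP4_SSL` logged in to Yahoo, as in the question.

```python
def delete_unseen(imap, mailbox="Learn"):
    """Flag every UNSEEN message in `mailbox` as deleted, then expunge."""
    imap.select(mailbox)                     # readonly defaults to False
    typ, data = imap.search(None, "UNSEEN")  # SEARCH filters, not SELECT
    msg_nums = data[0].split()
    for num in msg_nums:
        imap.store(num, "+FLAGS", "\\Deleted")
    imap.expunge()
    return len(msg_nums)
```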
Q: How to retrieve custom JWT claims from within Lambda with Identity Pool? I have the following scenario and am trying to understand the right way to implement it. I have Okta as my IDP. Amazon API gateway for managing my APIs and some lambdas which handle the API requests. Identity Pool is used to provide AWS credentials to the client accessing the APIs. When the client accesses the API, I need my lambda (which handles the request) to fetch the data from DynamoDB, and filter it based on a few attributes that are specific to the user that has logged in to the client. e.g. I need to retrieve accounts for a customer using the API, but the user only has access to certain accounts and so the lambda should filter the result. I am thinking of having some custom claims defined for each user in Okta. When the client authenticates with Okta, it receives a JWT token with these claims. And it fetches the AWS credentials from Identity Pool with this token, to access the API. The API would trigger the lambda. Here, I would want to retrieve the claims and use them for filtering the data. Any thoughts on how this can be achieved? Or is there a better way to address this? Thank you. A: We can use Lambda authorizers for such a scenario. Please refer one of the following documents based on your API type. REST APIs HTTP APIs (Conceptually both Lambda Authorizers are more or less same.) What you have to do is: In the Lambda Authorizer validate the incoming JWT (which is generated by Okta). Then follow below steps only if the token is valid. Based on the custom claim(s) (which you configured in the Okta for every user), create a key value pair(s) in the context of the output of the Lambda Authorizer (as mentioned in here or here) Then those context details are available for your Lambda which does the DB lookup. With that you can do the filtering. A: In your Amazon API Gateway, configure the authorization type for your APIs to be "JWT". 
This will allow the gateway to verify the JWT token that the client receives from Okta, and ensure that it is valid and has not been tampered with. In your Lambda function, use the "aws-sdk" npm package to retrieve the JWT token that was sent by the client as part of the API request. You can access the JWT token from the "event" object that is passed to the Lambda function. The token will be available in the "event.requestContext.authorizer.jwt" property. Use the "jsonwebtoken" npm package to verify the JWT token and extract the claims from it. The "jsonwebtoken" package provides a "verify" method that you can use to verify the signature of the JWT token, and a "decode" method that you can use to extract the claims from the token. Once you have extracted the claims from the JWT token, you can use them in your logic for filtering the data from DynamoDB. Make sure that your logic is secure and that it properly enforces the access controls that you have defined in Okta.
How to retrieve custom JWT claims from within Lambda with Identity Pool?
I have the following scenario and am trying to understand the right way to implement it. I have Okta as my IDP. Amazon API gateway for managing my APIs and some lambdas which handle the API requests. Identity Pool is used to provide AWS credentials to the client accessing the APIs. When the client accesses the API, I need my lambda (which handles the request) to fetch the data from DynamoDB, and filter it based on a few attributes that are specific to the user that has logged in to the client. e.g. I need to retrieve accounts for a customer using the API, but the user only has access to certain accounts and so the lambda should filter the result. I am thinking of having some custom claims defined for each user in Okta. When the client authenticates with Okta, it receives a JWT token with these claims. And it fetches the AWS credentials from Identity Pool with this token, to access the API. The API would trigger the lambda. Here, I would want to retrieve the claims and use them for filtering the data. Any thoughts on how this can be achieved? Or is there a better way to address this? Thank you.
[ "We can use Lambda authorizers for such a scenario. Please refer one of the following documents based on your API type.\n\nREST APIs\nHTTP APIs\n\n(Conceptually both Lambda Authorizers are more or less same.)\nWhat you have to do is:\n\nIn the Lambda Authorizer validate the incoming JWT (which is generated by Okta). Then follow below steps only if the token is valid.\nBased on the custom claim(s) (which you configured in the Okta for every user), create a key value pair(s) in the context of the output of the Lambda Authorizer (as mentioned in here or here)\nThen those context details are available for your Lambda which does the DB lookup. With that you can do the filtering.\n\n", "\nIn your Amazon API Gateway, configure the authorization type for your APIs to be \"JWT\". This will allow the gateway to verify the JWT token that the client receives from Okta, and ensure that it is valid and has not been tampered with.\nIn your Lambda function, use the \"aws-sdk\" npm package to retrieve the JWT token that was sent by the client as part of the API request. You can access the JWT token from the \"event\" object that is passed to the Lambda function. The token will be available in the \"event.requestContext.authorizer.jwt\" property.\nUse the \"jsonwebtoken\" npm package to verify the JWT token and extract the claims from it. The \"jsonwebtoken\" package provides a \"verify\" method that you can use to verify the signature of the JWT token, and a \"decode\" method that you can use to extract the claims from the token.\nOnce you have extracted the claims from the JWT token, you can use them in your logic for filtering the data from DynamoDB. Make sure that your logic is secure and that it properly enforces the access controls that you have defined in Okta.\n\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_cognito", "aws_identitypools", "aws_lambda", "jwt" ]
stackoverflow_0074647428_amazon_cognito_aws_identitypools_aws_lambda_jwt.txt
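For the HTTP API case, the claims land in the Lambda event under requestContext.authorizer.jwt.claims (API Gateway payload format 2.0). The sketch below assumes a hypothetical custom claim named allowed_accounts (a comma-separated list configured in Okta) and a placeholder fetch_accounts standing in for the DynamoDB query; both names are illustrative, not part of any real API.

```python
def fetch_accounts():
    # Placeholder for the real DynamoDB lookup (e.g. a boto3 Table query).
    return [{"id": "acc-1"}, {"id": "acc-2"}, {"id": "acc-3"}]

def handler(event, context=None):
    # Walk the payload-format-2.0 event down to the verified claims.
    claims = (
        event.get("requestContext", {})
             .get("authorizer", {})
             .get("jwt", {})
             .get("claims", {})
    )
    # "allowed_accounts" is a hypothetical custom claim set in Okta.
    allowed = {a for a in claims.get("allowed_accounts", "").split(",") if a}
    return [acct for acct in fetch_accounts() if acct["id"] in allowed]
```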
Q: Inner Join not working for R sentiment analysis I'm trying to conduct sentiment analysis on a Simpsons' episode using the afinn library but for some reason when I inner join sentiment with my tidy_text and filter for words labelled -5, it returns words in the afinn library that are not in my dataframe. I double checked my df ('tidy_text') and it definitely is a subset of words from the episode. Any idea what I'm doing wrong here? afinn <- tidy_text %>% inner_join(get_sentiments("afinn")) %>% filter(value == -5) %>% count(word, value, sort = TRUE) A: Sounds like an interesting project. Try adding , by = c("word" = "word")) %>% afinn <- tidy_text %>% inner_join(get_sentiments("afinn"), by = c("word" = "word")) %>% filter(value == -5) %>% count(word, value, sort = TRUE)
Inner Join not working for R sentiment analysis
I'm trying to conduct sentiment analysis on a Simpsons' episode using the afinn library but for some reason when I inner join sentiment with my tidy_text and filter for words labelled -5, it returns words in the afinn library that are not in my dataframe. I double checked my df ('tidy_text') and it definitely is a subset of words from the episode. Any idea what I'm doing wrong here? afinn <- tidy_text %>% inner_join(get_sentiments("afinn")) %>% filter(value == -5) %>% count(word, value, sort = TRUE)
[ "Sounds like an interesting project.\nTry adding , by = c(\"word\" = \"word\")) %>%\nafinn <- tidy_text %>%\n inner_join(get_sentiments(\"afinn\"), by = c(\"word\" = \"word\")) %>%\n filter(value == -5) %>%\n count(word, value, sort = TRUE)\n\n" ]
[ 1 ]
[]
[]
[ "r", "sentiment_analysis", "text_mining" ]
stackoverflow_0074659657_r_sentiment_analysis_text_mining.txt
Q: Add reverse log transformation to binned scale on ggplot2 I have some constraints for my plot: x axis should be reversed and logarithmic y axis should be binned, but: bins should be displayed in reverse order bins size should have logarithmic scale or something similar (0-10 bin should be bigger than 10-20, and so on) For both x and y, 0 tick should appear on the axis (which we usually achieve with limits=c(0, 0)) Here is some sample data: set.seed(123) dat <- data.frame( a=sample(seq(0, 100), 1e4, replace=TRUE), b=sample(1e6, 1e4), t=sample(letters[seq(2)], 1e4, replace=TRUE) ) I can achieve most constraints on x axis, and some on y: dat |> ggplot(aes(y=a, x=b, colour=t)) + geom_jitter() + scale_x_continuous( trans=c("log10", "reverse"), breaks=seq(0, 6) |> purrr::map(~c(2.5, 5, 10)*10^.x) |> unlist(), expand=c(0, 0) ) + scale_y_binned(expand=c(0, 0), limits=c(0, 100)) + ggthemes::theme_clean() What's missing here is: 0 tick on x: using limits=c(0, 100) with log scale produces an error. Using scales::pseudo_log_trans instead of scales::log10 doesn't work. I tried to use ggallin::pseudolog10_trans that also keeps 0 and negatives, but couldn't figure how to mix it with another transformer. log scale on y axis. The issue here is that scale_y_binned discretizes the data, and log transformation can only be applied to continuous data. reversed y axis. The issue here is similar, because reversing an axis is not just a cosmetic operation for ggplot2 like coord_flip would be; it is actually also a transformation that requires continuous data. Cheers! 
A: Perhaps the easiest way to do this is to bin the scale manually and use a continuous rather than binned y scale with custom labels: dat |> within(abin <- cut(a, c(0, 10^seq(0, 2, 0.25)), include.lowest = TRUE, right = FALSE, labels = seq(-0.1072541, by = 0.25, len = 9)))|> within(abin <- as.numeric(as.character(abin))) |> ggplot(aes(y = abin, x = log10(b + 1), colour = t)) + geom_jitter() + scale_x_reverse(breaks = c(log10(c(5e5, 25e4, 1e5, 5e4, 25e3, 1e4, 5e3, 25e2, 1e3, 5e2, 250, 100, 50, 10)+1), 0.3), limits = c(6, 0.3), expand = c(0, 0), name = "b", labels = ~c(head(comma(10^(.x) - 1), -1), 0)) + scale_y_reverse('a', breaks = seq(-0.25, 2, 0.25), labels = ~ c(0, round(10^(tail(.x, -1))))) + ggthemes::theme_clean()
Add reverse log transformation to binned scale on ggplot2
I have some constraints for my plot: x axis should be reversed and logarithmic y axis should be binned, but: bins should be displayed in reverse order bins size should have logarithmic scale or something similar (0-10 bin should be bigger than 10-20, and so on) For both x and y, 0 tick should appear on the axis (which we usually achieve with limits=c(0, 0)) Here is some sample data: set.seed(123) dat <- data.frame( a=sample(seq(0, 100), 1e4, replace=TRUE), b=sample(1e6, 1e4), t=sample(letters[seq(2)], 1e4, replace=TRUE) ) I can achieve most constraints on x axis, and some on y: dat |> ggplot(aes(y=a, x=b, colour=t)) + geom_jitter() + scale_x_continuous( trans=c("log10", "reverse"), breaks=seq(0, 6) |> purrr::map(~c(2.5, 5, 10)*10^.x) |> unlist(), expand=c(0, 0) ) + scale_y_binned(expand=c(0, 0), limits=c(0, 100)) + ggthemes::theme_clean() What's missing here is: 0 tick on x: using limits=c(0, 100) with log scale produces an error. Using scales::pseudo_log_trans instead of scales::log10 doesn't work. I tried to use ggallin::pseudolog10_trans that also keeps 0 and negatives, but couldn't figure how to mix it with another transformer. log scale on y axis. The issue here is that scale_y_binned discretizes the data, and log transformation can only be applied to continuous data. reversed y axis. The issue here is similar, because reversing an axis is not just a cosmetic operation for ggplot2 like coord_flip would be; it is actually also a transformation that requires continuous data. Cheers!
[ "Perhaps the easiest way to do this is to bin the scale manually and use a continuous rather than binned y scale with custom labels:\ndat |>\n within(abin <- cut(a, c(0, 10^seq(0, 2, 0.25)), include.lowest = TRUE,\n right = FALSE, \n labels = seq(-0.1072541, by = 0.25, len = 9)))|>\n within(abin <- as.numeric(as.character(abin))) |>\n ggplot(aes(y = abin, x = log10(b + 1), colour = t)) + \n geom_jitter() + \n scale_x_reverse(breaks = c(log10(c(5e5, 25e4, 1e5, 5e4, 25e3, 1e4, 5e3,\n 25e2, 1e3, 5e2, 250, 100, 50, 10)+1), 0.3),\n limits = c(6, 0.3), expand = c(0, 0), name = \"b\",\n labels = ~c(head(comma(10^(.x) - 1), -1), 0)) + \n scale_y_reverse('a', breaks = seq(-0.25, 2, 0.25),\n labels = ~ c(0, round(10^(tail(.x, -1))))) +\n ggthemes::theme_clean()\n\n\n" ]
[ 1 ]
[]
[]
[ "ggplot2", "r" ]
stackoverflow_0074659090_ggplot2_r.txt
Q: Unable to fetch data in request.form in flask I am using flask as the backend and i have my html template as <div id="tablediv"> <form action="{{ url_for('useraction') }}" method="post"> <table class="table" id="displaytable"> <thead> <tr> <th scope="col">Name</th> <th scope="col">Email</th> <th scope="col">Address</th> <th scope="col">Action</th> </tr> </thead> <tbody> {%for i in range(0, len)%} <tr> <td id="username" name = "username">{{unverifieduser[i][1]}}</td> <td id="useremail" name="useremail">{{unverifieduser[i][2]}}</td> <td id="useraddress" name="useraddress">{{unverifieduser[i][3]}}</td> <td > <button type="submit" class="btn btn-danger" id="delete" name="delete">Delete</button> <button type="submit" class="btn btn-warning" id="verify" name="verify">Verify</button> </td> </tr> {%endfor%} </tbody> </table> </form> </div> My API call looks like @app.route('/useraction', methods = ['GET', 'POST']) def useraction(): print(request.form) if request.method =='POST': cursor = mysql.connection.cursor() cursor.execute('SELECT * FROM user where isVerified = false'); user = cursor.fetchall() return render_template('user.html', len = len(user), unverifieduser = user) The output gives only the button which i click for example 'verify' enter image description here Is there any html attribute I am missing to get all the elements of the table for example username, useremail, useraddress A: An HTML table or HTML table elements are not Form elements. In HTML, only Form elements are sent along with such a POST request. Here is an overview of all HTML Form elements. Note that these elements should be inside a form element, which is in your case the following line: <form action="{{ url_for('useraction') }}" method="post"> As for why you only have verify in your form: the sole other input element in your form is the delete button, with its type set to submit. While you can have multiple submit buttons, only the one you click is sent along with the form.
Unable to fetch data in request.form in flask
I am using flask as the backend and i have my html template as <div id="tablediv"> <form action="{{ url_for('useraction') }}" method="post"> <table class="table" id="displaytable"> <thead> <tr> <th scope="col">Name</th> <th scope="col">Email</th> <th scope="col">Address</th> <th scope="col">Action</th> </tr> </thead> <tbody> {%for i in range(0, len)%} <tr> <td id="username" name = "username">{{unverifieduser[i][1]}}</td> <td id="useremail" name="useremail">{{unverifieduser[i][2]}}</td> <td id="useraddress" name="useraddress">{{unverifieduser[i][3]}}</td> <td > <button type="submit" class="btn btn-danger" id="delete" name="delete">Delete</button> <button type="submit" class="btn btn-warning" id="verify" name="verify">Verify</button> </td> </tr> {%endfor%} </tbody> </table> </form> </div> My API call looks like @app.route('/useraction', methods = ['GET', 'POST']) def useraction(): print(request.form) if request.method =='POST': cursor = mysql.connection.cursor() cursor.execute('SELECT * FROM user where isVerified = false'); user = cursor.fetchall() return render_template('user.html', len = len(user), unverifieduser = user) The output gives only the button which i click for example 'verify' enter image description here Is there any html attribute I am missing to get all the elements of the table for example username, useremail, useraddress
[ "An HTML table or HTML table elements are not Form elements. In HTML, only Form elements are sent along with such a POST request. Here is an overview of all HTML Form elements. Note that these elements should be inside a form element, which is in your case the following line:\n<form action=\"{{ url_for('useraction') }}\" method=\"post\">\nAs for why you only have verify in your form: the sole other input element in your form is the delete button, with its type set to submit. While you can have multiple submit buttons, only the one you click is sent along with the form.\n" ]
[ 0 ]
[]
[]
[ "flask", "html" ]
stackoverflow_0074659291_flask_html.txt
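One common fix for the situation in the question is to put the per-row data on the submit buttons themselves, since only form controls are posted, e.g. <button type="submit" name="action" value="verify:{{ unverifieduser[i][2] }}">. The server side of that (hypothetical) name/value convention is easy to factor out and check on its own:

```python
def parse_action(form):
    """Split a submitted 'action' value like 'verify:user@x.com' into parts.

    Returns (verb, row_key); (None, None) when no button value was posted.
    """
    verb, sep, row_key = form.get("action", "").partition(":")
    if not sep:
        return (None, None)
    return (verb, row_key)
```

In the Flask view this would be called as `parse_action(request.form)` inside the POST branch.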
Q: CondaError: Downloaded bytes did not match Content-Length while trying to download cudnn using conda This is the error I'm having in the anaconda prompt, after executing the command: conda install cudnn==7.6.5 Error: CondaError: Downloaded bytes did not match Content-Length url: https://repo.anaconda.com/pkgs/main/win-64/cudnn-7.6.5-cuda10.1_0.conda target_path: C:\Users\User\anaconda3\pkgs\cudnn-7.6.5-cuda10.1_0.conda Content-Length: 187807360 downloaded bytes: 165935561 Note that I am not installing it in a new environment but in the base. I would appreciate any help!! A: using curl download package conda install --offline your_download_package A: try go to target_path and uninstall cudnn or just install cudnn in it. It works for me when i install cudatoolkit.
CondaError: Downloaded bytes did not match Content-Length while trying to download cudnn using conda
This is the error I'm having in the anaconda prompt, after executing the command: conda install cudnn==7.6.5 Error: CondaError: Downloaded bytes did not match Content-Length url: https://repo.anaconda.com/pkgs/main/win-64/cudnn-7.6.5-cuda10.1_0.conda target_path: C:\Users\User\anaconda3\pkgs\cudnn-7.6.5-cuda10.1_0.conda Content-Length: 187807360 downloaded bytes: 165935561 Note that I am not installing it in a new environment but in the base. I would appreciate any help!!
[ "\nusing curl download package\nconda install --offline your_download_package\n\n", "try go to target_path and uninstall cudnn or just install cudnn in it. It works for me when i install cudatoolkit.\n" ]
[ 0, 0 ]
[]
[]
[ "anaconda", "python" ]
stackoverflow_0065130985_anaconda_python.txt
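The second answer's suggestion (clear the broken file from the package cache so the next install re-downloads it) can be done either with `conda clean --packages` or by deleting the partial file directly. A hedged sketch of the latter; the pkgs path and filename come from the error message above, so adjust them for your setup:

```python
from pathlib import Path

def purge_partial_download(pkgs_dir, package_stem):
    """Delete cached package files whose name starts with `package_stem`."""
    removed = []
    for path in Path(pkgs_dir).glob(package_stem + "*"):
        if path.is_file():
            path.unlink()
            removed.append(path.name)
    return sorted(removed)

# e.g. purge_partial_download(r"C:\Users\User\anaconda3\pkgs", "cudnn-7.6.5")
```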
Q: matplotlib (equal unit length): with 'equal' aspect ratio z-axis is not equal to x- and y- When I set up an equal aspect ratio for a 3d graph, the z-axis does not change to 'equal'. So this: fig = pylab.figure() mesFig = fig.gca(projection='3d', adjustable='box') mesFig.axis('equal') mesFig.plot(xC, yC, zC, 'r.') mesFig.plot(xO, yO, zO, 'b.') pyplot.show() Gives me the following: Where obviously the unit length of z-axis is not equal to x- and y- units. How can I make the unit length of all three axes equal? All the solutions I found did not work. A: I like the above solutions, but they do have the drawback that you need to keep track of the ranges and means over all your data. This could be cumbersome if you have multiple data sets that will be plotted together. To fix this, I made use of the ax.get_[xyz]lim3d() methods and put the whole thing into a standalone function that can be called just once before you call plt.show(). Here is the new version: from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np def set_axes_equal(ax): '''Make axes of 3D plot have equal scale so that spheres appear as spheres, cubes as cubes, etc.. This is one possible solution to Matplotlib's ax.set_aspect('equal') and ax.axis('equal') not working for 3D. Input ax: a matplotlib axis, e.g., as output from plt.gca(). ''' x_limits = ax.get_xlim3d() y_limits = ax.get_ylim3d() z_limits = ax.get_zlim3d() x_range = abs(x_limits[1] - x_limits[0]) x_middle = np.mean(x_limits) y_range = abs(y_limits[1] - y_limits[0]) y_middle = np.mean(y_limits) z_range = abs(z_limits[1] - z_limits[0]) z_middle = np.mean(z_limits) # The plot bounding box is a sphere in the sense of the infinity # norm, hence I call half the max range the plot radius. 
plot_radius = 0.5*max([x_range, y_range, z_range]) ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius]) ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius]) ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius]) fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.set_aspect('equal') X = np.random.rand(100)*10+5 Y = np.random.rand(100)*5+2.5 Z = np.random.rand(100)*50+25 scat = ax.scatter(X, Y, Z) set_axes_equal(ax) plt.show() A: I believe matplotlib does not yet set correctly equal axis in 3D... But I found a trick some times ago (I don't remember where) that I've adapted using it. The concept is to create a fake cubic bounding box around your data. You can test it with the following code: from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.set_aspect('equal') X = np.random.rand(100)*10+5 Y = np.random.rand(100)*5+2.5 Z = np.random.rand(100)*50+25 scat = ax.scatter(X, Y, Z) # Create cubic bounding box to simulate equal aspect ratio max_range = np.array([X.max()-X.min(), Y.max()-Y.min(), Z.max()-Z.min()]).max() Xb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][0].flatten() + 0.5*(X.max()+X.min()) Yb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][1].flatten() + 0.5*(Y.max()+Y.min()) Zb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][2].flatten() + 0.5*(Z.max()+Z.min()) # Comment or uncomment following both lines to test the fake bounding box: for xb, yb, zb in zip(Xb, Yb, Zb): ax.plot([xb], [yb], [zb], 'w') plt.grid() plt.show() z data are about an order of magnitude larger than x and y, but even with equal axis option, matplotlib autoscale z axis: But if you add the bounding box, you obtain a correct scaling: A: Simple fix! I've managed to get this working in version 3.3.1. 
It looks like this issue has perhaps been resolved in PR#17172; You can use the ax.set_box_aspect([1,1,1]) function to ensure the aspect is correct (see the notes for the set_aspect function). When used in conjunction with the bounding box function(s) provided by @karlo and/or @Matee Ulhaq, the plots now look correct in 3D! Minimum Working Example import matplotlib.pyplot as plt import mpl_toolkits.mplot3d import numpy as np # Functions from @Mateen Ulhaq and @karlo def set_axes_equal(ax: plt.Axes): """Set 3D plot axes to equal scale. Make axes of 3D plot have equal scale so that spheres appear as spheres and cubes as cubes. Required since `ax.axis('equal')` and `ax.set_aspect('equal')` don't work on 3D. """ limits = np.array([ ax.get_xlim3d(), ax.get_ylim3d(), ax.get_zlim3d(), ]) origin = np.mean(limits, axis=1) radius = 0.5 * np.max(np.abs(limits[:, 1] - limits[:, 0])) _set_axes_radius(ax, origin, radius) def _set_axes_radius(ax, origin, radius): x, y, z = origin ax.set_xlim3d([x - radius, x + radius]) ax.set_ylim3d([y - radius, y + radius]) ax.set_zlim3d([z - radius, z + radius]) # Generate and plot a unit sphere u = np.linspace(0, 2*np.pi, 100) v = np.linspace(0, np.pi, 100) x = np.outer(np.cos(u), np.sin(v)) # np.outer() -> outer vector product y = np.outer(np.sin(u), np.sin(v)) z = np.outer(np.ones(np.size(u)), np.cos(v)) fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.plot_surface(x, y, z) ax.set_box_aspect([1,1,1]) # IMPORTANT - this is the new, key line # ax.set_proj_type('ortho') # OPTIONAL - default is perspective (shown in image above) set_axes_equal(ax) # IMPORTANT - this is also required plt.show() A: I simplified Remy F's solution by using the set_x/y/zlim functions. 
from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.set_aspect('equal') X = np.random.rand(100)*10+5 Y = np.random.rand(100)*5+2.5 Z = np.random.rand(100)*50+25 scat = ax.scatter(X, Y, Z) max_range = np.array([X.max()-X.min(), Y.max()-Y.min(), Z.max()-Z.min()]).max() / 2.0 mid_x = (X.max()+X.min()) * 0.5 mid_y = (Y.max()+Y.min()) * 0.5 mid_z = (Z.max()+Z.min()) * 0.5 ax.set_xlim(mid_x - max_range, mid_x + max_range) ax.set_ylim(mid_y - max_range, mid_y + max_range) ax.set_zlim(mid_z - max_range, mid_z + max_range) plt.show() A: As of matplotlib 3.3.0, Axes3D.set_box_aspect seems to be the recommended approach. import numpy as np xs, ys, zs = <your data> ax = <your axes> # Option 1: aspect ratio is 1:1:1 in data space ax.set_box_aspect((np.ptp(xs), np.ptp(ys), np.ptp(zs))) # Option 2: aspect ratio 1:1:1 in view space ax.set_box_aspect((1, 1, 1)) A: Adapted from @karlo's answer to make things even cleaner: def set_axes_equal(ax: plt.Axes): """Set 3D plot axes to equal scale. Make axes of 3D plot have equal scale so that spheres appear as spheres and cubes as cubes. Required since `ax.axis('equal')` and `ax.set_aspect('equal')` don't work on 3D. """ limits = np.array([ ax.get_xlim3d(), ax.get_ylim3d(), ax.get_zlim3d(), ]) origin = np.mean(limits, axis=1) radius = 0.5 * np.max(np.abs(limits[:, 1] - limits[:, 0])) _set_axes_radius(ax, origin, radius) def _set_axes_radius(ax, origin, radius): x, y, z = origin ax.set_xlim3d([x - radius, x + radius]) ax.set_ylim3d([y - radius, y + radius]) ax.set_zlim3d([z - radius, z + radius]) Usage: fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.set_aspect('equal') # important! # ...draw here... set_axes_equal(ax) # important! 
plt.show() EDIT: This answer does not work on more recent versions of Matplotlib due to the changes merged in pull-request #13474, which is tracked in issue #17172 and issue #1077. As a temporary workaround to this, one can remove the newly added lines in lib/matplotlib/axes/_base.py: class _AxesBase(martist.Artist): ... def set_aspect(self, aspect, adjustable=None, anchor=None, share=False): ... + if (not cbook._str_equal(aspect, 'auto')) and self.name == '3d': + raise NotImplementedError( + 'It is not currently possible to manually set the aspect ' + 'on 3D axes') A: EDIT: user2525140's code should work perfectly fine, although this answer supposedly attempted to fix a non-existent error. The answer below is just a duplicate (alternative) implementation: def set_aspect_equal_3d(ax): """Fix equal aspect bug for 3D plots.""" xlim = ax.get_xlim3d() ylim = ax.get_ylim3d() zlim = ax.get_zlim3d() from numpy import mean xmean = mean(xlim) ymean = mean(ylim) zmean = mean(zlim) plot_radius = max([abs(lim - mean_) for lims, mean_ in ((xlim, xmean), (ylim, ymean), (zlim, zmean)) for lim in lims]) ax.set_xlim3d([xmean - plot_radius, xmean + plot_radius]) ax.set_ylim3d([ymean - plot_radius, ymean + plot_radius]) ax.set_zlim3d([zmean - plot_radius, zmean + plot_radius]) A: As of matplotlib 3.6.0, this feature has been added with the command ax.set_aspect('equal'). Other options are 'equalxy', 'equalxz', and 'equalyz', to set only two directions to equal aspect ratios. This changes the data limits, example below. In the upcoming 3.7.0, you will be able to change the plot box aspect ratios rather than the data limits via the command ax.set_aspect('equal', adjustable='box'). To get the original behavior, use adjustable='datalim'. A: I think this feature has been added to matplotlib since these answers have been posted.
In case anyone is still searching for a solution, this is how I do it: import matplotlib.pyplot as plt import numpy as np fig = plt.figure(figsize=plt.figaspect(1)*2) ax = fig.add_subplot(projection='3d', proj_type='ortho') X = np.random.rand(100) Y = np.random.rand(100) Z = np.random.rand(100) ax.scatter(X, Y, Z, color='b') The key bit of code is figsize=plt.figaspect(1) which sets the aspect ratio of the figure to 1 by 1. The *2 after figaspect(1) scales the figure by a factor of two. You can set this scaling factor to whatever you want. NOTE: This only works for figures with one plot. A: For the time being ax.set_aspect('equal') raises an error (version 3.5.1 with Anaconda). ax.set_aspect('auto',adjustable='datalim') did not give a convincing solution either. A lean work-around with ax.set_box_aspect((asx,asy,asz)) and asx, asy, asz = np.ptp(X), np.ptp(Y), np.ptp(Z) seems to be feasible (see my code snippet) Let's hope that version 3.7 with the features @Scott mentioned will be successful soon. import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D #---- generate data nn = 100 X = np.random.randn(nn)*20 + 0 Y = np.random.randn(nn)*50 + 30 Z = np.random.randn(nn)*10 + -5 #---- check aspect ratio asx, asy, asz = np.ptp(X), np.ptp(Y), np.ptp(Z) fig = plt.figure(figsize=(15,15)) ax = fig.add_subplot(projection='3d') #---- set box aspect ratio ax.set_box_aspect((asx,asy,asz)) scat = ax.scatter(X, Y, Z, c=X+Y+Z, s=500, alpha=0.8) ax.set_xlabel('X-axis'); ax.set_ylabel('Y-axis'); ax.set_zlabel('Z-axis') plt.show()
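Several of the answers above share the same limit arithmetic: centre each axis on its midpoint and give all three axes the same half-width, namely half of the largest range. That arithmetic can be checked without matplotlib; below is a minimal sketch (the `equal_limits` helper is my own name for illustration, not part of matplotlib):

```python
def equal_limits(*coords):
    # Half of the largest per-axis range becomes the common radius,
    # and each axis is re-centred on its own midpoint -- the same
    # arithmetic as set_axes_equal() / the bounding-box trick above.
    radius = 0.5 * max(max(c) - min(c) for c in coords)
    return [((max(c) + min(c)) * 0.5 - radius,
             (max(c) + min(c)) * 0.5 + radius) for c in coords]

# Ranges 10, 5 and 50, like the X/Y/Z data used in the answers
limits = equal_limits([5, 15], [2.5, 7.5], [25, 75])
print(limits)  # [(-15.0, 35.0), (-20.0, 30.0), (25.0, 75.0)]
```

Every axis now spans the same width (50), so a sphere drawn inside these limits keeps its shape regardless of how lopsided the data ranges are.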
matplotlib (equal unit length): with 'equal' aspect ratio z-axis is not equal to x- and y-
When I set up an equal aspect ratio for a 3d graph, the z-axis does not change to 'equal'. So this: fig = pylab.figure() mesFig = fig.gca(projection='3d', adjustable='box') mesFig.axis('equal') mesFig.plot(xC, yC, zC, 'r.') mesFig.plot(xO, yO, zO, 'b.') pyplot.show() Gives me the following: Where obviously the unit length of z-axis is not equal to x- and y- units. How can I make the unit length of all three axes equal? All the solutions I found did not work.
Q: Iterate through characters in a string R I have a dataset a bit like this, but with around 50,000 rows: # some data subj <- c(1, 1, 1, 2, 2) session <- c(1, 1, 2, 1, 2) items <- c("hfg", "hrfg", "thflk", "plht", "sdrpv") df <- data.frame(subj, session, items) For each subject (across sessions), I want to compare each character in each row of df$items with all the characters in all other rows, using a feature matrix of 11 values, which will be used to calculate Euclidean distance for each value between each character and each other character: feature_matrix <- tribble(~char, ~val1, ~val2, ~val3, ~val4, ~val5, ~val6, ~val7, ~val8, ~val9, ~val10, ~val11, "p", -1, 1, -1, -1, 1, 1, 0, -1, 1, 0, 0, "b", -1, 1, 0, -1, 1, 1, 0, -1, 1, 0, 0, "t", -1, 1, -1, -1, 1, -1, 1, -1, -1, 1, 0, "d", -1, 1, 0, -1, 1, -1, 1, -1, -1, 1, 0, "k", -1, 1, -1, -1, 1, -1, -1, -1, -1, -1, 0, "ɡ", -1, 1, 0, -1, 1, -1, -1, -1, -1, -1, 0, "f", -0.5, 1, -1, -1, 0, -1, 1, -1, 1, 0, 0, "v", -0.5, 1, 0, -1, 0, -1, 1, -1, 1, 0, 0, "s", -0.5, 1, -1, -1, 0, -1, 1, -1, -1, 1, 0, "c", -0.5, 1, 0, -1, 0, -1, 1, -1, -1, -1, 0, "z", -0.5, 1, 0, -1, 0, -1, 1, -1, -1, 1, 0, "h", -0.5, 1, 0, -1, 0, -1, -1, 1, -1, -1, -1, "m", 0, 0, 1, 1, 1, 1, 0, -1, 1, 0, 0, "n", 0, 0, 1, 1, 1, -1, 1, -1, -1, 1, 0, "r", 0.5, 0, 1, 0, -1, -1, -1, 1, 1, -1, -1, "l", 0.5, 0, 1, 0, -1, -1, 1, -1, -1, 1, 0, "w", 0.8, 0, 1, 0, 0, 1, -1, -1, 1, -1, 0, "j", 0.8, 0, 1, 0, 0, -1, 0, -1, -1, 0, 1) As an example, to compare "hfg" with "hrfg" for feature_matrix$val1 only: h_h1 <- (-0.5--0.5)^2 h_r1 <- (-0.5-0.5)^2 h_f1 <- (-0.5--0.5)^2 h_g1 <- (-0.5--1)^2 f_h1 <- (-0.5--0.5)^2 f_r1 <- (-0.5-0.5)^2 ... and so on, for all 11 values. These will then be added together, and the square root of each of these 11 values is then added together again to create a final value, the Euclidean distance: val1_sum <- h_h1 + h_r1 + h_f1 + h_g1 + f_h1 + f_r1... val2_sum <- h_h2 + h_r2 + h_f2 + h_g2 + f_h2 + f_r2... val3_sum <- .... 
etc distance <- sqrt(val1_sum) + sqrt(val2_sum) + sqrt(val3_sum) + ... This will leave me with a distance matrix. As a bonus, I'd also like to compare each character in each string with each other character in the same string, though I expect this would need to be done in a separate section of code. Any help gratefully appreciated! A: Here is a solution with dplyr / tidyr. However, whether it works will depend on data volume: as noted in the comments, it quickly becomes unmanageable as the number of comparisons grows. Some optimizations are feasible, for instance grouping character pairs and counting them, or even grouping pairs regardless of order. library(dplyr) library(tidyr) # some data df <- data.frame(subj = c(1, 1, 1, 2, 2), session = c(1, 1, 2, 1, 2), items = c("hfg", "hrfg", "thflk", "plht", "sdrpv")) # features feature_matrix <- data.frame(char=c("p","b","t","d","k","ɡ","f","v", "s","c","z","h","m","n","r","l","w","j"),val1=c(-1,-1,-1,-1,-1,-1, -0.5,-0.5,-0.5,-0.5,-0.5,-0.5,0,0,0.5,0.5,0.8,0.8),val2=c(1,1,1,1, 1,1,1,1,1,1,1,1,0,0,0,0,0,0),val3=c(-1,0,-1,0,-1,0,-1,0,-1, 0,0,0,1,1,1,1,1,1),val4=c(-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,1,1, 0,0,0,0),val5=c(1,1,1,1,1,1,0,0,0,0,0,0,1,1,-1,-1,0,0),val6=c(1,1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,1,-1,-1,-1,1,-1),val7=c(0,0,1,1,-1,-1, 1,1,1,1,1,-1,0,1,-1,1,-1,0),val8=c(-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,1,-1,-1,1,-1,-1,-1),val9=c(1,1,-1,-1,-1,-1,1,1,-1,-1,-1,-1,1,-1, 1,-1,1,-1),val10=c(0,0,1,1,-1,-1,0,0,1,-1,1,-1,0,1,-1,1,-1,0), val11=c(0,0,0,0,0,0,0,0,0,0,0,-1,0,0,-1,0,0,1)) # decompose items in chars and add column "char" df_char <- df %>% mutate(char = strsplit(items, "")) %>% unnest(char) head(df_char) #> # A tibble: 6 × 4 #> subj session items char #> <dbl> <dbl> <chr> <chr> #> 1 1 1 hfg h #> 2 1 1 hfg f #> 3 1 1 hfg g #> 4 1 1 hrfg h #> 5 1 1 hrfg r #> 6 1 1 hrfg f # calculate distances distances <- select(df_char, -items) %>% inner_join( select(df_char, -items), by=c("subj"="subj", "session"="session"), suffix = c("_l",
"_r")) %>% filter(char_l != char_r # because distance same char == 0 & char_l %in% feature_matrix$char # only chars that are in feature_matrix & char_r %in% feature_matrix$char) %>% # (there are missings) mutate( distance = rowSums(( slice(feature_matrix, match(char_r, char))[,-1] - slice(feature_matrix, match(char_l, char))[,-1]) ^ 2)) distances #> # A tibble: 68 × 5 #> subj session char_l char_r distance #> <dbl> <dbl> <chr> <chr> <dbl> #> 1 1 1 h f 15 #> 2 1 1 h r 9 #> 3 1 1 h f 15 #> 4 1 1 f h 15 #> 5 1 1 f h 15 #> 6 1 1 f r 18 #> 7 1 1 h f 15 #> 8 1 1 h r 9 #> 9 1 1 h f 15 #> 10 1 1 r h 9 #> # … with 58 more rows
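As a sanity check of the distance column in the output above (h↔f = 15, h↔r = 9), the same sum-of-squared-differences can be reproduced in a few lines; the example below uses Python rather than R purely for illustration, with the feature rows copied from the question's feature_matrix:

```python
# Feature vectors copied from the question's feature_matrix (11 values each)
features = {
    "h": [-0.5, 1, 0, -1, 0, -1, -1, 1, -1, -1, -1],
    "f": [-0.5, 1, -1, -1, 0, -1, 1, -1, 1, 0, 0],
    "r": [0.5, 0, 1, 0, -1, -1, -1, 1, 1, -1, -1],
}

def sq_dist(a, b):
    # Sum of squared differences over the 11 feature values --
    # the same quantity as rowSums((left - right)^2) in the dplyr answer
    return sum((x - y) ** 2 for x, y in zip(features[a], features[b]))

print(sq_dist("h", "f"))  # 15.0, matching the h/f rows of `distances`
print(sq_dist("h", "r"))  # 9.0, matching the h/r rows
```

Note this is the squared Euclidean distance, which is symmetric, so each unordered pair is computed twice in the join above.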
Q: Updated StatsForecast Library shows error 'forecasts' is not defined in Python I was trying to replicate this code for stat forecasting in python, and I came across an odd error "name 'forecasts' is not defined", which is quite strange as I was able to replicate the code without any errors before. I believe this was resolved in the latest update of this library StatsForecast but I still run into the same error. Can you please help me out here. The code I am replicating is from this: https://towardsdatascience.com/time-series-forecasting-with-statistical-models-f08dcd1d24d1 A similar question was asked earlier for the same error, and the solution was updated, but this error still comes up after the new solution as well; attached is the link to the question: Error in Data frame definition while Multiple TS Stat Forecasting in Python import random from itertools import product from IPython.display import display, Markdown from multiprocessing import cpu_count import matplotlib.pyplot as plt import numpy as np import pandas as pd from statsforecast import StatsForecast from nixtlats.data.datasets.m4 import M4, M4Info from statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB from statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters from statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage from statsforecast.models import MSTL from statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta from statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta df = pd.read_excel ('C:/X/X/X/Data.xlsx',sheet_name='Transpose') df.rename(columns = {'Row Labels':'Key'}, inplace=True) df['Key'] = df['Key'].astype(str) df = pd.melt(df,id_vars='Key',value_vars=list(df.columns[1:]),var_name ='ds') df.columns = 
df.columns.str.replace('Key', 'unique_id') df.columns = df.columns.str.replace('value', 'y') df["ds"] = pd.to_datetime(df["ds"],format='%Y-%m-%d') df=df[["ds","unique_id","y"]] df['unique_id'] = df['unique_id'].astype('object') df = df.set_index('unique_id') df.reset_index() seasonality = 30 #Monthly data models = [ ADIDA, CrostonClassic(), CrostonSBA(), CrostonOptimized(), IMAPA, (TSB,0.3,0.2), MSTL, Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta, AutoARIMA, AutoETS, AutoCES, AutoTheta, HistoricAverage, Naive, RandomWalkWithDrift, (SeasonalNaive, seasonality), (SeasonalExponentialSmoothing, seasonality, 0.2), (SeasonalWindowAverage, seasonality, 2 * seasonality), (WindowAverage, 2 * seasonality) ] fcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=cpu_count()) %time forecasts = fcst.forecast(9) forecasts.reset_index() forecasts = forecasts.round(0) forecasts.to_excel("C:/X/X/X/Forecast_Output.xlsx",sheet_name='Sheet1') The dataset I am working with is given below: {'Row Labels': {0: 'XYZ-912750', 1: 'XYZ-461356', 2: 'XYZ-150591', 3: 'XYZ-627885', 4: 'XYZ-582638', 5: 'XYZ-631691', 6: 'XYZ-409952', 7: 'XYZ-245245', 8: 'XYZ-230662', 9: 'XYZ-533388', 10: 'XYZ-248225', 11: 'XYZ-582912', 12: 'XYZ-486079', 13: 'XYZ-867685', 14: 'XYZ-873555', 15: 'XYZ-375397', 16: 'XYZ-428066', 17: 'XYZ-774244', 18: 'XYZ-602796', 19: 'XYZ-267306', 20: 'XYZ-576156', 21: 'XYZ-775994', 22: 'XYZ-226742', 23: 'XYZ-641711', 24: 'XYZ-928543', 25: 'XYZ-217200', 26: 'XYZ-971921', 27: 'XYZ-141388', 28: 'XYZ-848360', 29: 'XYZ-864999', 30: 'XYZ-821384', 31: 'XYZ-516339', 32: 'XYZ-462488', 33: 'XYZ-140964', 34: 'XYZ-225559', 35: 'XYZ-916534', 36: 'XYZ-389683', 37: 'XYZ-247803', 38: 'XYZ-718639', 39: 'XYZ-512944', 40: 'XYZ-727601', 41: 'XYZ-315757', 42: 'XYZ-764867', 43: 'XYZ-918344', 44: 'XYZ-430939', 45: 'XYZ-204784', 46: 'XYZ-415285', 47: 'XYZ-272089', 48: 'XYZ-812045', 49: 'XYZ-889257', 50: 'XYZ-275863', 51: 'XYZ-519930', 52: 'XYZ-102141', 53: 'XYZ-324473', 54: 'XYZ-999148', 
55: 'XYZ-514915', 56: 'XYZ-932751', 57: 'XYZ-669878', 58: 'XYZ-233459', 59: 'XYZ-289984', 60: 'XYZ-150061', 61: 'XYZ-355028', 62: 'XYZ-881803', 63: 'XYZ-721426', 64: 'XYZ-522174', 65: 'XYZ-790172', 66: 'XYZ-744677', 67: 'XYZ-617017', 68: 'XYZ-982812', 69: 'XYZ-940695', 70: 'XYZ-119041', 71: 'XYZ-313844', 72: 'XYZ-868117', 73: 'XYZ-791717', 74: 'XYZ-100742', 75: 'XYZ-259687', 76: 'XYZ-688842', 77: 'XYZ-247326', 78: 'XYZ-360939', 79: 'XYZ-185017', 80: 'XYZ-244773', 81: 'XYZ-289058', 82: 'XYZ-477846', 83: 'XYZ-305072', 84: 'XYZ-828236', 85: 'XYZ-668927', 86: 'XYZ-616913', 87: 'XYZ-874876', 88: 'XYZ-371693', 89: 'XYZ-951238', 90: 'XYZ-371675', 91: 'XYZ-736997', 92: 'XYZ-922244', 93: 'XYZ-883225', 94: 'XYZ-267555', 95: 'XYZ-704013', 96: 'XYZ-874917', 97: 'XYZ-567402', 98: 'XYZ-167338', 99: 'XYZ-592671', 100: 'XYZ-130168', 101: 'XYZ-492522', 102: 'XYZ-696211', 103: 'XYZ-310469', 104: 'XYZ-973277', 105: 'XYZ-841356', 106: 'XYZ-389440', 107: 'XYZ-613876', 108: 'XYZ-662850', 109: 'XYZ-800625', 110: 'XYZ-500125', 111: 'XYZ-539949', 112: 'XYZ-576121', 113: 'XYZ-339006', 114: 'XYZ-247314', 115: 'XYZ-129049', 116: 'XYZ-980653', 117: 'XYZ-678520', 118: 'XYZ-584841', 119: 'XYZ-396755', 120: 'XYZ-409502', 121: 'XYZ-824561', 122: 'XYZ-825996', 123: 'XYZ-820540', 124: 'XYZ-264710', 125: 'XYZ-241176', 126: 'XYZ-491386', 127: 'XYZ-914132', 128: 'XYZ-496194', 129: 'XYZ-941615', 130: 'XYZ-765328', 131: 'XYZ-540602', 132: 'XYZ-222660', 133: 'XYZ-324367', 134: 'XYZ-583764', 135: 'XYZ-248478', 136: 'XYZ-379180', 137: 'XYZ-628462', 138: 'XYZ-454262'}, '2021-03-01': {0: 0, 1: 951, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 487, 9: 501, 10: 0, 11: 0, 12: 0, 13: 0, 14: 715, 15: 726, 16: 235, 17: 340, 18: 0, 19: 0, 20: 0, 21: 960, 22: 127, 23: 92, 24: 0, 25: 0, 26: 170, 27: 0, 28: 0, 29: 0, 30: 0, 31: 133, 32: 0, 33: 0, 34: 105, 35: 168, 36: 0, 37: 500, 38: 0, 39: 0, 40: 61, 41: 0, 42: 212, 43: 101, 44: 0, 45: 0, 46: 0, 47: 83, 48: 185, 49: 0, 50: 131, 51: 67, 52: 0, 53: 141, 54: 0, 55: 140, 56: 
0, 57: 0, 58: 180, 59: 0, 60: 0, 61: 99, 62: 63, 63: 0, 64: 0, 65: 1590, 66: 0, 67: 0, 68: 15, 69: 113, 70: 0, 71: 0, 72: 0, 73: 54, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 108, 80: 0, 81: 62, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 29, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 69, 98: 0, 99: 0, 100: 0, 101: 62, 102: 30, 103: 42, 104: 0, 105: 0, 106: 0, 107: 67, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 52, 116: 36, 117: 0, 118: 110, 119: 0, 120: 44, 121: 0, 122: 102, 123: 0, 124: 71, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 77, 132: 56, 133: 0, 134: 0, 135: 103, 136: 0, 137: 0, 138: 53}, '2021-04-01': {0: 0, 1: 553, 2: 0, 3: 0, 4: 18, 5: 0, 6: 0, 7: 0, 8: 313, 9: 1100, 10: 0, 11: 0, 12: 0, 13: 0, 14: 336, 15: 856, 16: 216, 17: 415, 18: 0, 19: 0, 20: 0, 21: 1363, 22: 148, 23: 171, 24: 0, 25: 0, 26: 260, 27: 0, 28: 0, 29: 0, 30: 0, 31: 229, 32: 0, 33: 0, 34: 286, 35: 215, 36: 0, 37: 381, 38: 0, 39: 0, 40: 171, 41: 0, 42: 261, 43: 211, 44: 0, 45: 0, 46: 0, 47: 94, 48: 167, 49: 0, 50: 171, 51: 111, 52: 0, 53: 229, 54: 0, 55: 104, 56: 0, 57: 0, 58: 158, 59: 0, 60: 0, 61: 142, 62: 156, 63: 0, 64: 0, 65: 1152, 66: 0, 67: 0, 68: 19, 69: 160, 70: 0, 71: 0, 72: 0, 73: 50, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 146, 80: 0, 81: 25, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 69, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 49, 98: 0, 99: 0, 100: 0, 101: 22, 102: 46, 103: 48, 104: 0, 105: 0, 106: 0, 107: 60, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 24, 116: 51, 117: 0, 118: 112, 119: 0, 120: 73, 121: 0, 122: 155, 123: 0, 124: 57, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 59, 132: 62, 133: 0, 134: 0, 135: 132, 136: 0, 137: 0, 138: 70}, '2021-05-01': {0: 0, 1: 439, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 119, 9: 735, 10: 0, 11: 0, 12: 0, 13: 0, 14: 183, 15: 70, 16: 79, 17: 244, 18: 0, 19: 0, 20: 0, 21: 2842, 22: 30, 23: 76, 24: 0, 25: 0, 26: 95, 27: 0, 28: 0, 29: 0, 30: 0, 31: 
38, 32: 0, 33: 0, 34: 197, 35: 114, 36: 0, 37: 140, 38: 0, 39: 0, 40: 91, 41: 0, 42: 82, 43: 83, 44: 0, 45: 0, 46: 0, 47: 35, 48: 126, 49: 0, 50: 83, 51: 101, 52: 0, 53: 94, 54: 0, 55: 100, 56: 0, 57: 0, 58: 89, 59: 0, 60: 0, 61: 94, 62: 112, 63: 0, 64: 0, 65: 1903, 66: 0, 67: 0, 68: 61, 69: 91, 70: 0, 71: 0, 72: 0, 73: 30, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 116, 80: 0, 81: 12, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 56, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 0, 98: 0, 99: 0, 100: 0, 101: 20, 102: 42, 103: 35, 104: 0, 105: 0, 106: 0, 107: 59, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 0, 116: 27, 117: 0, 118: 45, 119: 0, 120: 49, 121: 0, 122: 129, 123: 0, 124: 58, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 41, 132: 41, 133: 0, 134: 0, 135: 61, 136: 0, 137: 0, 138: 38}, '2021-06-01': {0: 0, 1: 390, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 221, 9: 816, 10: 0, 11: 0, 12: 0, 13: 0, 14: 109, 15: 255, 16: 126, 17: 161, 18: 0, 19: 0, 20: 0, 21: 959, 22: 52, 23: 119, 24: 0, 25: 0, 26: 261, 27: 0, 28: 0, 29: 0, 30: 0, 31: 142, 32: 0, 33: 0, 34: 203, 35: 42, 36: 0, 37: 133, 38: 0, 39: 0, 40: 113, 41: 0, 42: 118, 43: 62, 44: 0, 45: 0, 46: 0, 47: 48, 48: 112, 49: 0, 50: 75, 51: 105, 52: 0, 53: 107, 54: 0, 55: 102, 56: 0, 57: 0, 58: 77, 59: 0, 60: 0, 61: 81, 62: 94, 63: 0, 64: 0, 65: 764, 66: 0, 67: 0, 68: 47, 69: 116, 70: 0, 71: 0, 72: 0, 73: 19, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 148, 80: 0, 81: 20, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 46, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 33, 98: 0, 99: 0, 100: 0, 101: 39, 102: 52, 103: 47, 104: 0, 105: 0, 106: 0, 107: 56, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 62, 116: 41, 117: 0, 118: 51, 119: 0, 120: 59, 121: 0, 122: 73, 123: 0, 124: 34, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 17, 132: 42, 133: 0, 134: 0, 135: 74, 136: 0, 137: 0, 138: 58}, '2021-07-01': {0: 0, 1: 349, 2: 0, 3: 0, 4: 11, 5: 0, 6: 0, 7: 0, 8: 
222, 9: 418, 10: 0, 11: 0, 12: 0, 13: 0, 14: 104, 15: 57, 16: 92, 17: 118, 18: 0, 19: 0, 20: 0, 21: 2040, 22: 80, 23: 50, 24: 0, 25: 0, 26: 147, 27: 0, 28: 0, 29: 0, 30: 0, 31: 22, 32: 0, 33: 0, 34: 117, 35: 88, 36: 0, 37: 146, 38: 0, 39: 0, 40: 65, 41: 0, 42: 117, 43: 65, 44: 0, 45: 0, 46: 0, 47: 33, 48: 36, 49: 0, 50: 51, 51: 50, 52: 0, 53: 66, 54: 0, 55: 51, 56: 0, 57: 0, 58: 100, 59: 0, 60: 0, 61: 63, 62: 55, 63: 0, 64: 0, 65: 847, 66: 0, 67: 0, 68: 32, 69: 68, 70: 0, 71: 0, 72: 0, 73: 42, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 72, 80: 0, 81: 27, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 47, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 36, 98: 0, 99: 0, 100: 0, 101: 25, 102: 29, 103: 39, 104: 0, 105: 0, 106: 0, 107: 40, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 37, 116: 41, 117: 0, 118: 29, 119: 0, 120: 54, 121: 0, 122: 75, 123: 0, 124: 41, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 12, 132: 28, 133: 0, 134: 0, 135: 46, 136: 0, 137: 0, 138: 24}, '2021-08-01': {0: 0, 1: 402, 2: 0, 3: 0, 4: 14, 5: 0, 6: 0, 7: 0, 8: 138, 9: 373, 10: 0, 11: 0, 12: 0, 13: 0, 14: 133, 15: 107, 16: 69, 17: 116, 18: 0, 19: 0, 20: 0, 21: 1554, 22: 80, 23: 65, 24: 0, 25: 0, 26: 123, 27: 0, 28: 0, 29: 0, 30: 0, 31: 23, 32: 0, 33: 0, 34: 95, 35: 49, 36: 0, 37: 146, 38: 0, 39: 0, 40: 50, 41: 0, 42: 90, 43: 57, 44: 0, 45: 0, 46: 0, 47: 19, 48: 46, 49: 0, 50: 38, 51: 20, 52: 0, 53: 91, 54: 0, 55: 69, 56: 0, 57: 0, 58: 57, 59: 0, 60: 0, 61: 53, 62: 48, 63: 0, 64: 0, 65: 934, 66: 0, 67: 0, 68: 19, 69: 66, 70: 0, 71: 0, 72: 0, 73: 75, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 86, 80: 0, 81: 33, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 32, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 46, 98: 0, 99: 0, 100: 0, 101: 22, 102: 31, 103: 63, 104: 0, 105: 0, 106: 0, 107: 41, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 42, 116: 42, 117: 0, 118: 30, 119: 0, 120: 32, 121: 0, 122: 70, 123: 0, 124: 40, 125: 0, 126: 0, 127: 0, 
128: 0, 129: 0, 130: 0, 131: 12, 132: 21, 133: 0, 134: 0, 135: 83, 136: 0, 137: 0, 138: 20}, '2021-09-01': {0: 0, 1: 560, 2: 55, 3: 496, 4: 11, 5: 0, 6: 0, 7: 0, 8: 77, 9: 309, 10: 45, 11: 257, 12: 0, 13: 0, 14: 87, 15: 179, 16: 61, 17: 79, 18: 65, 19: 144, 20: 307, 21: 840, 22: 52, 23: 41, 24: 108, 25: 156, 26: 113, 27: 0, 28: 30, 29: 27, 30: 0, 31: 59, 32: 0, 33: 0, 34: 66, 35: 53, 36: 70, 37: 42, 38: 0, 39: 26, 40: 38, 41: 0, 42: 50, 43: 11, 44: 209, 45: 56, 46: 52, 47: 18, 48: 47, 49: 0, 50: 58, 51: 32, 52: 0, 53: 76, 54: 0, 55: 45, 56: 0, 57: 63, 58: 95, 59: 0, 60: 0, 61: 33, 62: 45, 63: 0, 64: 96, 65: 249, 66: 0, 67: 0, 68: 0, 69: 73, 70: 0, 71: 30, 72: 0, 73: 41, 74: 0, 75: 0, 76: 37, 77: 22, 78: 0, 79: 68, 80: 18, 81: 47, 82: 0, 83: 0, 84: 0, 85: 79, 86: 0, 87: 75, 88: 40, 89: 39, 90: 35, 91: 0, 92: 0, 93: 0, 94: 40, 95: 0, 96: 0, 97: 44, 98: 30, 99: 46, 100: 0, 101: 33, 102: 40, 103: 31, 104: 0, 105: 17, 106: 15, 107: 32, 108: 15, 109: 0, 110: 58, 111: 63, 112: 0, 113: 0, 114: 0, 115: 42, 116: 35, 117: 19, 118: 55, 119: 0, 120: 25, 121: 0, 122: 47, 123: 0, 124: 37, 125: 16, 126: 24, 127: 124, 128: 67, 129: 0, 130: 0, 131: 28, 132: 20, 133: 0, 134: 0, 135: 34, 136: 0, 137: 26, 138: 28}, '2021-10-01': {0: 122, 1: 720, 2: 129, 3: 1135, 4: 11, 5: 0, 6: 0, 7: 85, 8: 122, 9: 280, 10: 100, 11: 159, 12: 0, 13: 0, 14: 87, 15: 115, 16: 40, 17: 32, 18: 236, 19: 176, 20: 322, 21: 334, 22: 113, 23: 49, 24: 133, 25: 119, 26: 136, 27: 0, 28: 74, 29: 56, 30: 38, 31: 83, 32: 0, 33: 0, 34: 65, 35: 88, 36: 75, 37: 68, 38: 52, 39: 36, 40: 44, 41: 11, 42: 40, 43: 13, 44: 198, 45: 244, 46: 130, 47: 23, 48: 44, 49: 0, 50: 62, 51: 49, 52: 0, 53: 92, 54: 0, 55: 14, 56: 0, 57: 83, 58: 58, 59: 0, 60: 0, 61: 44, 62: 42, 63: 39, 64: 37, 65: 132, 66: 0, 67: 0, 68: 49, 69: 57, 70: 0, 71: 40, 72: 112, 73: 28, 74: 102, 75: 0, 76: 56, 77: 17, 78: 22, 79: 37, 80: 48, 81: 0, 82: 14, 83: 13, 84: 48, 85: 84, 86: 0, 87: 104, 88: 81, 89: 34, 90: 49, 91: 0, 92: 0, 93: 42, 94: 101, 95: 41, 96: 11, 
97: 74, 98: 35, 99: 45, 100: 73, 101: 19, 102: 38, 103: 26, 104: 0, 105: 26, 106: 26, 107: 43, 108: 93, 109: 0, 110: 74, 111: 70, 112: 35, 113: 25, 114: 0, 115: 55, 116: 28, 117: 0, 118: 58, 119: 0, 120: 26, 121: 0, 122: 13, 123: 0, 124: 50, 125: 16, 126: 39, 127: 74, 128: 42, 129: 29, 130: 0, 131: 24, 132: 26, 133: 0, 134: 0, 135: 125, 136: 0, 137: 37, 138: 20}, '2021-11-01': {0: 1331, 1: 1810, 2: 274, 3: 899, 4: 0, 5: 0, 6: 30, 7: 606, 8: 138, 9: 1735, 10: 209, 11: 468, 12: 0, 13: 0, 14: 327, 15: 1394, 16: 73, 17: 187, 18: 1259, 19: 355, 20: 374, 21: 2079, 22: 500, 23: 168, 24: 305, 25: 80, 26: 256, 27: 0, 28: 340, 29: 143, 30: 380, 31: 273, 32: 79, 33: 0, 34: 143, 35: 137, 36: 200, 37: 336, 38: 166, 39: 235, 40: 97, 41: 202, 42: 75, 43: 130, 44: 650, 45: 675, 46: 326, 47: 46, 48: 105, 49: 0, 50: 195, 51: 135, 52: 93, 53: 229, 54: 0, 55: 93, 56: 0, 57: 188, 58: 89, 59: 46, 60: 123, 61: 101, 62: 89, 63: 64, 64: 208, 65: 325, 66: 0, 67: 0, 68: 211, 69: 90, 70: 0, 71: 111, 72: 218, 73: 42, 74: 139, 75: 16, 76: 94, 77: 148, 78: 45, 79: 92, 80: 100, 81: 16, 82: 31, 83: 123, 84: 87, 85: 142, 86: 0, 87: 444, 88: 123, 89: 105, 90: 63, 91: 0, 92: 16, 93: 149, 94: 240, 95: 114, 96: 99, 97: 128, 98: 128, 99: 104, 100: 196, 101: 32, 102: 41, 103: 55, 104: 0, 105: 67, 106: 97, 107: 56, 108: 40, 109: 14, 110: 194, 111: 290, 112: 151, 113: 154, 114: 11, 115: 105, 116: 54, 117: 30, 118: 148, 119: 0, 120: 71, 121: 0, 122: 39, 123: 0, 124: 118, 125: 207, 126: 58, 127: 131, 128: 93, 129: 30, 130: 0, 131: 90, 132: 43, 133: 0, 134: 0, 135: 40, 136: 0, 137: 58, 138: 29}, '2021-12-01': {0: 1901, 1: 2469, 2: 298, 3: 1760, 4: 14, 5: 0, 6: 573, 7: 1444, 8: 126, 9: 1568, 10: 220, 11: 497, 12: 0, 13: 71, 14: 248, 15: 1670, 16: 77, 17: 93, 18: 910, 19: 362, 20: 698, 21: 1044, 22: 651, 23: 156, 24: 208, 25: 185, 26: 314, 27: 0, 28: 356, 29: 205, 30: 570, 31: 186, 32: 25, 33: 0, 34: 117, 35: 90, 36: 385, 37: 228, 38: 410, 39: 270, 40: 63, 41: 228, 42: 50, 43: 53, 44: 450, 45: 896, 46: 431, 47: 
74, 48: 62, 49: 0, 50: 678, 51: 123, 52: 204, 53: 225, 54: 100, 55: 13, 56: 88, 57: 302, 58: 81, 59: 111, 60: 141, 61: 98, 62: 57, 63: 73, 64: 334, 65: 422, 66: 49, 67: 0, 68: 600, 69: 86, 70: 55, 71: 162, 72: 138, 73: 50, 74: 296, 75: 30, 76: 153, 77: 186, 78: 68, 79: 39, 80: 173, 81: 0, 82: 276, 83: 192, 84: 66, 85: 116, 86: 89, 87: 385, 88: 209, 89: 121, 90: 68, 91: 22, 92: 52, 93: 262, 94: 261, 95: 70, 96: 85, 97: 298, 98: 170, 99: 126, 100: 145, 101: 17, 102: 53, 103: 56, 104: 0, 105: 97, 106: 114, 107: 72, 108: 42, 109: 22, 110: 211, 111: 370, 112: 175, 113: 111, 114: 27, 115: 62, 116: 104, 117: 118, 118: 248, 119: 0, 120: 58, 121: 20, 122: 52, 123: 20, 124: 97, 125: 119, 126: 107, 127: 108, 128: 79, 129: 42, 130: 0, 131: 281, 132: 83, 133: 57, 134: 61, 135: 50, 136: 50, 137: 22, 138: 37}, '2022-01-01': {0: 938, 1: 1501, 2: 377, 3: 1455, 4: 17, 5: 0, 6: 815, 7: 562, 8: 534, 9: 628, 10: 178, 11: 332, 12: 0, 13: 177, 14: 311, 15: 614, 16: 50, 17: 121, 18: 343, 19: 314, 20: 356, 21: 587, 22: 498, 23: 67, 24: 222, 25: 230, 26: 210, 27: 0, 28: 237, 29: 131, 30: 222, 31: 74, 32: 12, 33: 0, 34: 79, 35: 53, 36: 397, 37: 351, 38: 253, 39: 269, 40: 63, 41: 211, 42: 53, 43: 163, 44: 209, 45: 287, 46: 364, 47: 59, 48: 49, 49: 0, 50: 290, 51: 55, 52: 113, 53: 76, 54: 85, 55: 83, 56: 190, 57: 166, 58: 72, 59: 108, 60: 119, 61: 121, 62: 25, 63: 46, 64: 163, 65: 204, 66: 76, 67: 0, 68: 250, 69: 76, 70: 148, 71: 161, 72: 97, 73: 44, 74: 150, 75: 34, 76: 144, 77: 189, 78: 73, 79: 27, 80: 109, 81: 0, 82: 90, 83: 185, 84: 48, 85: 110, 86: 198, 87: 216, 88: 139, 89: 59, 90: 34, 91: 45, 92: 116, 93: 187, 94: 164, 95: 34, 96: 80, 97: 45, 98: 78, 99: 82, 100: 54, 101: 14, 102: 28, 103: 31, 104: 48, 105: 52, 106: 97, 107: 29, 108: 56, 109: 33, 110: 84, 111: 212, 112: 111, 113: 128, 114: 18, 115: 81, 116: 32, 117: 115, 118: 192, 119: 0, 120: 36, 121: 194, 122: 17, 123: 55, 124: 98, 125: 104, 126: 83, 127: 101, 128: 54, 129: 36, 130: 0, 131: 156, 132: 33, 133: 104, 134: 101, 135: 31, 
136: 46, 137: 66, 138: 20}, '2022-02-01': {0: 612, 1: 912, 2: 325, 3: 892, 4: 11, 5: 0, 6: 706, 7: 310, 8: 439, 9: 563, 10: 134, 11: 140, 12: 0, 13: 153, 14: 281, 15: 399, 16: 49, 17: 90, 18: 204, 19: 231, 20: 100, 21: 318, 22: 255, 23: 63, 24: 309, 25: 181, 26: 205, 27: 0, 28: 121, 29: 84, 30: 117, 31: 80, 32: 143, 33: 0, 34: 65, 35: 64, 36: 227, 37: 271, 38: 133, 39: 290, 40: 47, 41: 156, 42: 0, 43: 176, 44: 153, 45: 244, 46: 300, 47: 14, 48: 30, 49: 0, 50: 126, 51: 46, 52: 81, 53: 69, 54: 165, 55: 48, 56: 79, 57: 91, 58: 31, 59: 95, 60: 138, 61: 87, 62: 34, 63: 39, 64: 101, 65: 111, 66: 19, 67: 0, 68: 15, 69: 26, 70: 0, 71: 88, 72: 81, 73: 53, 74: 135, 75: 62, 76: 92, 77: 141, 78: 57, 79: 32, 80: 71, 81: 34, 82: 357, 83: 92, 84: 50, 85: 82, 86: 97, 87: 128, 88: 75, 89: 54, 90: 23, 91: 28, 92: 57, 93: 108, 94: 138, 95: 48, 96: 79, 97: 109, 98: 52, 99: 54, 100: 73, 101: 27, 102: 20, 103: 26, 104: 86, 105: 48, 106: 54, 107: 27, 108: 39, 109: 61, 110: 67, 111: 110, 112: 127, 113: 147, 114: 0, 115: 60, 116: 23, 117: 68, 118: 101, 119: 23, 120: 25, 121: 93, 122: 35, 123: 25, 124: 52, 125: 72, 126: 50, 127: 84, 128: 78, 129: 43, 130: 0, 131: 82, 132: 34, 133: 84, 134: 13, 135: 13, 136: 37, 137: 69, 138: 13}, '2022-03-01': {0: 573, 1: 775, 2: 267, 3: 870, 4: 19, 5: 0, 6: 494, 7: 254, 8: 402, 9: 657, 10: 180, 11: 144, 12: 0, 13: 266, 14: 240, 15: 394, 16: 106, 17: 142, 18: 216, 19: 211, 20: 113, 21: 245, 22: 152, 23: 88, 24: 225, 25: 168, 26: 177, 27: 0, 28: 92, 29: 70, 30: 98, 31: 124, 32: 103, 33: 0, 34: 85, 35: 86, 36: 189, 37: 184, 38: 108, 39: 0, 40: 69, 41: 125, 42: 26, 43: 128, 44: 119, 45: 226, 46: 251, 47: 26, 48: 58, 49: 0, 50: 109, 51: 67, 52: 70, 53: 55, 54: 157, 55: 49, 56: 51, 57: 89, 58: 43, 59: 69, 60: 136, 61: 92, 62: 79, 63: 54, 64: 59, 65: 64, 66: 35, 67: 0, 68: 239, 69: 48, 70: 101, 71: 91, 72: 53, 73: 65, 74: 147, 75: 38, 76: 70, 77: 107, 78: 41, 79: 32, 80: 51, 81: 39, 82: 130, 83: 123, 84: 44, 85: 60, 86: 177, 87: 99, 88: 75, 89: 35, 90: 21, 91: 
25, 92: 77, 93: 88, 94: 86, 95: 88, 96: 52, 97: 45, 98: 42, 99: 52, 100: 121, 101: 28, 102: 22, 103: 26, 104: 104, 105: 39, 106: 48, 107: 45, 108: 42, 109: 35, 110: 74, 111: 101, 112: 101, 113: 120, 114: 22, 115: 58, 116: 23, 117: 53, 118: 70, 119: 45, 120: 30, 121: 69, 122: 44, 123: 37, 124: 33, 125: 49, 126: 49, 127: 58, 128: 55, 129: 33, 130: 0, 131: 58, 132: 30, 133: 42, 134: 43, 135: 23, 136: 31, 137: 83, 138: 22}, '2022-04-01': {0: 356, 1: 595, 2: 231, 3: 444, 4: 0, 5: 0, 6: 220, 7: 145, 8: 185, 9: 394, 10: 140, 11: 112, 12: 0, 13: 104, 14: 139, 15: 236, 16: 102, 17: 121, 18: 77, 19: 174, 20: 108, 21: 133, 22: 105, 23: 53, 24: 195, 25: 114, 26: 155, 27: 11, 28: 88, 29: 40, 30: 102, 31: 91, 32: 142, 33: 0, 34: 66, 35: 36, 36: 90, 37: 114, 38: 64, 39: 262, 40: 46, 41: 87, 42: 47, 43: 87, 44: 64, 45: 93, 46: 114, 47: 15, 48: 95, 49: 0, 50: 85, 51: 40, 52: 30, 53: 51, 54: 81, 55: 38, 56: 66, 57: 52, 58: 43, 59: 59, 60: 121, 61: 53, 62: 44, 63: 22, 64: 59, 65: 64, 66: 47, 67: 0, 68: 194, 69: 26, 70: 59, 71: 37, 72: 47, 73: 51, 74: 146, 75: 36, 76: 43, 77: 120, 78: 37, 79: 16, 80: 52, 81: 22, 82: 151, 83: 51, 84: 35, 85: 52, 86: 71, 87: 32, 88: 39, 89: 20, 90: 25, 91: 25, 92: 48, 93: 44, 94: 35, 95: 40, 96: 30, 97: 41, 98: 24, 99: 45, 100: 44, 101: 17, 102: 15, 103: 19, 104: 39, 105: 32, 106: 45, 107: 35, 108: 21, 109: 16, 110: 34, 111: 44, 112: 46, 113: 29, 114: 20, 115: 51, 116: 17, 117: 45, 118: 52, 119: 31, 120: 29, 121: 34, 122: 21, 123: 16, 124: 26, 125: 39, 126: 22, 127: 45, 128: 48, 129: 20, 130: 0, 131: 35, 132: 18, 133: 39, 134: 22, 135: 30, 136: 71, 137: 15, 138: 11}, '2022-05-01': {0: 383, 1: 326, 2: 108, 3: 397, 4: 0, 5: 0, 6: 110, 7: 83, 8: 142, 9: 240, 10: 137, 11: 70, 12: 0, 13: 142, 14: 110, 15: 203, 16: 111, 17: 265, 18: 52, 19: 109, 20: 57, 21: 85, 22: 73, 23: 202, 24: 102, 25: 50, 26: 178, 27: 42, 28: 55, 29: 26, 30: 53, 31: 173, 32: 76, 33: 0, 34: 207, 35: 87, 36: 29, 37: 79, 38: 27, 39: 102, 40: 115, 41: 33, 42: 102, 43: 65, 44: 42, 45: 47, 
46: 92, 47: 25, 48: 93, 49: 0, 50: 42, 51: 80, 52: 20, 53: 105, 54: 52, 55: 70, 56: 46, 57: 31, 58: 86, 59: 39, 60: 32, 61: 33, 62: 103, 63: 16, 64: 49, 65: 24, 66: 22, 67: 0, 68: 161, 69: 78, 70: 31, 71: 36, 72: 28, 73: 73, 74: 57, 75: 21, 76: 30, 77: 39, 78: 22, 79: 70, 80: 24, 81: 55, 82: 134, 83: 25, 84: 16, 85: 28, 86: 24, 87: 28, 88: 31, 89: 17, 90: 60, 91: 30, 92: 32, 93: 49, 94: 20, 95: 13, 96: 12, 97: 31, 98: 20, 99: 25, 100: 21, 101: 33, 102: 29, 103: 36, 104: 23, 105: 26, 106: 26, 107: 31, 108: 30, 109: 15, 110: 22, 111: 20, 112: 32, 113: 27, 114: 39, 115: 18, 116: 40, 117: 31, 118: 21, 119: 24, 120: 52, 121: 22, 122: 62, 123: 37, 124: 16, 125: 19, 126: 17, 127: 23, 128: 17, 129: 15, 130: 0, 131: 22, 132: 32, 133: 24, 134: 20, 135: 21, 136: 13, 137: 23, 138: 25}, '2022-06-01': {0: 613, 1: 1944, 2: 1826, 3: 494, 4: 0, 5: 244, 6: 928, 7: 798, 8: 219, 9: 1529, 10: 1029, 11: 526, 12: 122, 13: 195, 14: 173, 15: 1261, 16: 87, 17: 243, 18: 1179, 19: 217, 20: 464, 21: 952, 22: 353, 23: 148, 24: 166, 25: 187, 26: 134, 27: 124, 28: 321, 29: 221, 30: 193, 31: 224, 32: 75, 33: 0, 34: 277, 35: 77, 36: 253, 37: 174, 38: 343, 39: 283, 40: 73, 41: 295, 42: 108, 43: 138, 44: 102, 45: 1364, 46: 467, 47: 28, 48: 87, 49: 16, 50: 145, 51: 88, 52: 128, 53: 60, 54: 80, 55: 81, 56: 40, 57: 206, 58: 61, 59: 166, 60: 144, 61: 71, 62: 78, 63: 39, 64: 331, 65: 116, 66: 25, 67: 13, 68: 62, 69: 37, 70: 24, 71: 311, 72: 106, 73: 50, 74: 257, 75: 22, 76: 56, 77: 128, 78: 100, 79: 55, 80: 139, 81: 70, 82: 140, 83: 20, 84: 53, 85: 33, 86: 38, 87: 167, 88: 218, 89: 20, 90: 34, 91: 19, 92: 25, 93: 199, 94: 122, 95: 24, 96: 28, 97: 36, 98: 69, 99: 146, 100: 33, 101: 14, 102: 21, 103: 27, 104: 28, 105: 78, 106: 62, 107: 30, 108: 47, 109: 20, 110: 78, 111: 48, 112: 35, 113: 21, 114: 17, 115: 49, 116: 61, 117: 92, 118: 26, 119: 16, 120: 47, 121: 36, 122: 54, 123: 43, 124: 23, 125: 40, 126: 22, 127: 121, 128: 145, 129: 12, 130: 18, 131: 31, 132: 31, 133: 17, 134: 23, 135: 23, 136: 19, 137: 24, 
138: 24}, '2022-07-01': {0: 349, 1: 283, 2: 163, 3: 318, 4: 67, 5: 328, 6: 121, 7: 96, 8: 205, 9: 219, 10: 89, 11: 60, 12: 153, 13: 68, 14: 135, 15: 181, 16: 53, 17: 94, 18: 65, 19: 96, 20: 67, 21: 57, 22: 67, 23: 59, 24: 134, 25: 94, 26: 78, 27: 142, 28: 33, 29: 29, 30: 45, 31: 64, 32: 65, 33: 76, 34: 81, 35: 55, 36: 44, 37: 83, 38: 15, 39: 46, 40: 84, 41: 45, 42: 56, 43: 54, 44: 50, 45: 48, 46: 90, 47: 17, 48: 56, 49: 27, 50: 66, 51: 37, 52: 34, 53: 63, 54: 58, 55: 27, 56: 45, 57: 74, 58: 51, 59: 61, 60: 80, 61: 45, 62: 65, 63: 34, 64: 27, 65: 30, 66: 18, 67: 35, 68: 47, 69: 31, 70: 24, 71: 40, 72: 18, 73: 30, 74: 44, 75: 26, 76: 31, 77: 32, 78: 29, 79: 29, 80: 45, 81: 14, 82: 54, 83: 31, 84: 37, 85: 24, 86: 32, 87: 20, 88: 40, 89: 32, 90: 22, 91: 17, 92: 30, 93: 29, 94: 20, 95: 52, 96: 34, 97: 25, 98: 26, 99: 28, 100: 72, 101: 17, 102: 15, 103: 22, 104: 28, 105: 24, 106: 28, 107: 19, 108: 25, 109: 25, 110: 38, 111: 19, 112: 27, 113: 26, 114: 15, 115: 22, 116: 28, 117: 24, 118: 33, 119: 13, 120: 57, 121: 40, 122: 22, 123: 14, 124: 18, 125: 23, 126: 20, 127: 38, 128: 20, 129: 14, 130: 36, 131: 24, 132: 18, 133: 39, 134: 14, 135: 40, 136: 16, 137: 21, 138: 13}, '2022-08-01': {0: 857, 1: 500, 2: 362, 3: 334, 4: 308, 5: 296, 6: 289, 7: 266, 8: 244, 9: 223, 10: 206, 11: 192, 12: 180, 13: 169, 14: 160, 15: 159, 16: 140, 17: 134, 18: 134, 19: 128, 20: 127, 21: 126, 22: 123, 23: 116, 24: 112, 25: 111, 26: 108, 27: 102, 28: 99, 29: 94, 30: 94, 31: 89, 32: 88, 33: 88, 34: 87, 35: 85, 36: 83, 37: 79, 38: 78, 39: 77, 40: 77, 41: 77, 42: 76, 43: 75, 44: 75, 45: 74, 46: 72, 47: 65, 48: 65, 49: 65, 50: 64, 51: 64, 52: 64, 53: 62, 54: 62, 55: 61, 56: 61, 57: 61, 58: 60, 59: 60, 60: 58, 61: 55, 62: 54, 63: 54, 64: 54, 65: 54, 66: 53, 67: 53, 68: 52, 69: 50, 70: 49, 71: 49, 72: 49, 73: 48, 74: 48, 75: 48, 76: 47, 77: 47, 78: 46, 79: 44, 80: 44, 81: 43, 82: 43, 83: 43, 84: 42, 85: 42, 86: 41, 87: 41, 88: 41, 89: 40, 90: 39, 91: 39, 92: 39, 93: 39, 94: 38, 95: 37, 96: 37, 97: 36, 
98: 36, 99: 36, 100: 36, 101: 35, 102: 35, 103: 35, 104: 35, 105: 35, 106: 35, 107: 34, 108: 34, 109: 34, 110: 32, 111: 32, 112: 32, 113: 32, 114: 31, 115: 31, 116: 30, 117: 30, 118: 30, 119: 30, 120: 29, 121: 29, 122: 28, 123: 28, 124: 28, 125: 28, 126: 28, 127: 28, 128: 28, 129: 28, 130: 28, 131: 27, 132: 27, 133: 27, 134: 27, 135: 27, 136: 27, 137: 27, 138: 26}}

A: You have to instantiate the models, since they are classes. The code would be:

from statsforecast import StatsForecast
from statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB
from statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters
from statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage
from statsforecast.models import MSTL
from statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta
from statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta

seasonality = 12  # Monthly data

models = [
    ADIDA(),
    CrostonClassic(),
    CrostonSBA(),
    CrostonOptimized(),
    IMAPA(),
    TSB(0.3, 0.2),
    Theta(season_length=seasonality),
    OptimizedTheta(season_length=seasonality),
    DynamicTheta(season_length=seasonality),
    DynamicOptimizedTheta(season_length=seasonality),
    AutoARIMA(season_length=seasonality),
    AutoCES(season_length=seasonality),
    AutoTheta(season_length=seasonality),
    HistoricAverage(),
    Naive(),
    RandomWalkWithDrift(),
    SeasonalNaive(season_length=seasonality),
    SeasonalExponentialSmoothing(season_length=seasonality, alpha=0.2),
]

fcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=-1,
                     fallback_model=SeasonalNaive(season_length=seasonality))

%time forecasts = fcst.forecast(9)
forecasts.reset_index()
forecasts = forecasts.round(0)

Here's a colab link fixing the error:
https://colab.research.google.com/drive/1vwIImCoKzGvePbgFKidauV8sXimAvO48?usp=sharing
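The distinction matters because StatsForecast calls methods on each entry of `models`, and a bare class has no parameters or state to call into. A minimal sketch with a hypothetical `Model` class (for illustration only, not the real statsforecast API) shows why passing the class instead of an instance fails:

```python
class Model:
    """Stand-in for a statsforecast model class (hypothetical, illustration only)."""

    def __init__(self, alpha):
        self.alpha = alpha

    def forecast(self, h):
        # A real model would fit to data and predict; here we just repeat alpha.
        return [self.alpha] * h


# Passing the class itself, as in the failing code, hands the library a type
# with no parameters set:
assert isinstance(Model, type)
assert not hasattr(Model, 'alpha')

# Instantiating first, as in the fix, produces an object that can forecast:
m = Model(alpha=0.3)
assert m.forecast(2) == [0.3, 0.3]
```

The same applies to tuple entries like `(TSB, 0.3, 0.2)` in the question's code: the newer StatsForecast API expects `TSB(0.3, 0.2)` instead.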
Updated StatsForecast Library shows error 'forecasts' is not defined in Python
I was trying to replicate this code for stat forecasting in Python, and I came across an odd error, "name 'forecasts' is not defined", which is quite strange, as I was able to replicate the code without any errors before. I believe this was resolved in the latest update of this library, StatsForecast, but I still run into the same error. Can you please help me out here? The code I am replicating is from this: https://towardsdatascience.com/time-series-forecasting-with-statistical-models-f08dcd1d24d1

A similar question was asked earlier for the same error, and the solution was updated, but this error still comes up after the new solution as well. Attached is the link to that question: Error in Data frame definition while Multiple TS Stat Forecasting in Python

import random
from itertools import product
from IPython.display import display, Markdown
from multiprocessing import cpu_count
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from statsforecast import StatsForecast
from nixtlats.data.datasets.m4 import M4, M4Info
from statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB
from statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters
from statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage
from statsforecast.models import MSTL
from statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta
from statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta

df = pd.read_excel('C:/X/X/X/Data.xlsx', sheet_name='Transpose')
df.rename(columns={'Row Labels': 'Key'}, inplace=True)
df['Key'] = df['Key'].astype(str)
df = pd.melt(df, id_vars='Key', value_vars=list(df.columns[1:]), var_name='ds')
df.columns = df.columns.str.replace('Key', 'unique_id')
df.columns = df.columns.str.replace('value', 'y')
df["ds"] = pd.to_datetime(df["ds"], format='%Y-%m-%d')
df = df[["ds", "unique_id", "y"]]
df['unique_id'] = df['unique_id'].astype('object')
df = df.set_index('unique_id')
df.reset_index()

seasonality = 30  # Monthly data

models = [
    ADIDA,
    CrostonClassic(),
    CrostonSBA(),
    CrostonOptimized(),
    IMAPA,
    (TSB, 0.3, 0.2),
    MSTL,
    Theta,
    OptimizedTheta,
    DynamicTheta,
    DynamicOptimizedTheta,
    AutoARIMA,
    AutoETS,
    AutoCES,
    AutoTheta,
    HistoricAverage,
    Naive,
    RandomWalkWithDrift,
    (SeasonalNaive, seasonality),
    (SeasonalExponentialSmoothing, seasonality, 0.2),
    (SeasonalWindowAverage, seasonality, 2 * seasonality),
    (WindowAverage, 2 * seasonality)
]

fcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=cpu_count())

%time forecasts = fcst.forecast(9)
forecasts.reset_index()
forecasts = forecasts.round(0)
forecasts.to_excel("C:/X/X/X/Forecast_Output.xlsx", sheet_name='Sheet1')

The dataset I am working with is given below:

{'Row Labels': {0: 'XYZ-912750', 1: 'XYZ-461356', 2: 'XYZ-150591', 3: 'XYZ-627885', 4: 'XYZ-582638', 5: 'XYZ-631691', 6: 'XYZ-409952', 7: 'XYZ-245245', 8: 'XYZ-230662', 9: 'XYZ-533388', 10: 'XYZ-248225', 11: 'XYZ-582912', 12: 'XYZ-486079', 13: 'XYZ-867685', 14: 'XYZ-873555', 15: 'XYZ-375397', 16: 'XYZ-428066', 17: 'XYZ-774244', 18: 'XYZ-602796', 19: 'XYZ-267306', 20: 'XYZ-576156', 21: 'XYZ-775994', 22: 'XYZ-226742', 23: 'XYZ-641711', 24: 'XYZ-928543', 25: 'XYZ-217200', 26: 'XYZ-971921', 27: 'XYZ-141388', 28: 'XYZ-848360', 29: 'XYZ-864999', 30: 'XYZ-821384', 31: 'XYZ-516339', 32: 'XYZ-462488', 33: 'XYZ-140964', 34: 'XYZ-225559', 35: 'XYZ-916534', 36: 'XYZ-389683', 37: 'XYZ-247803', 38: 'XYZ-718639', 39: 'XYZ-512944', 40: 'XYZ-727601', 41: 'XYZ-315757', 42: 'XYZ-764867', 43: 'XYZ-918344', 44: 'XYZ-430939', 45: 'XYZ-204784', 46: 'XYZ-415285', 47: 'XYZ-272089', 48: 'XYZ-812045', 49: 'XYZ-889257', 50: 'XYZ-275863', 51: 'XYZ-519930', 52: 'XYZ-102141', 53: 'XYZ-324473', 54: 'XYZ-999148', 55: 'XYZ-514915', 56: 'XYZ-932751', 57: 'XYZ-669878', 58: 'XYZ-233459', 59: 'XYZ-289984',
60: 'XYZ-150061', 61: 'XYZ-355028', 62: 'XYZ-881803', 63: 'XYZ-721426', 64: 'XYZ-522174', 65: 'XYZ-790172', 66: 'XYZ-744677', 67: 'XYZ-617017', 68: 'XYZ-982812', 69: 'XYZ-940695', 70: 'XYZ-119041', 71: 'XYZ-313844', 72: 'XYZ-868117', 73: 'XYZ-791717', 74: 'XYZ-100742', 75: 'XYZ-259687', 76: 'XYZ-688842', 77: 'XYZ-247326', 78: 'XYZ-360939', 79: 'XYZ-185017', 80: 'XYZ-244773', 81: 'XYZ-289058', 82: 'XYZ-477846', 83: 'XYZ-305072', 84: 'XYZ-828236', 85: 'XYZ-668927', 86: 'XYZ-616913', 87: 'XYZ-874876', 88: 'XYZ-371693', 89: 'XYZ-951238', 90: 'XYZ-371675', 91: 'XYZ-736997', 92: 'XYZ-922244', 93: 'XYZ-883225', 94: 'XYZ-267555', 95: 'XYZ-704013', 96: 'XYZ-874917', 97: 'XYZ-567402', 98: 'XYZ-167338', 99: 'XYZ-592671', 100: 'XYZ-130168', 101: 'XYZ-492522', 102: 'XYZ-696211', 103: 'XYZ-310469', 104: 'XYZ-973277', 105: 'XYZ-841356', 106: 'XYZ-389440', 107: 'XYZ-613876', 108: 'XYZ-662850', 109: 'XYZ-800625', 110: 'XYZ-500125', 111: 'XYZ-539949', 112: 'XYZ-576121', 113: 'XYZ-339006', 114: 'XYZ-247314', 115: 'XYZ-129049', 116: 'XYZ-980653', 117: 'XYZ-678520', 118: 'XYZ-584841', 119: 'XYZ-396755', 120: 'XYZ-409502', 121: 'XYZ-824561', 122: 'XYZ-825996', 123: 'XYZ-820540', 124: 'XYZ-264710', 125: 'XYZ-241176', 126: 'XYZ-491386', 127: 'XYZ-914132', 128: 'XYZ-496194', 129: 'XYZ-941615', 130: 'XYZ-765328', 131: 'XYZ-540602', 132: 'XYZ-222660', 133: 'XYZ-324367', 134: 'XYZ-583764', 135: 'XYZ-248478', 136: 'XYZ-379180', 137: 'XYZ-628462', 138: 'XYZ-454262'}, '2021-03-01': {0: 0, 1: 951, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 487, 9: 501, 10: 0, 11: 0, 12: 0, 13: 0, 14: 715, 15: 726, 16: 235, 17: 340, 18: 0, 19: 0, 20: 0, 21: 960, 22: 127, 23: 92, 24: 0, 25: 0, 26: 170, 27: 0, 28: 0, 29: 0, 30: 0, 31: 133, 32: 0, 33: 0, 34: 105, 35: 168, 36: 0, 37: 500, 38: 0, 39: 0, 40: 61, 41: 0, 42: 212, 43: 101, 44: 0, 45: 0, 46: 0, 47: 83, 48: 185, 49: 0, 50: 131, 51: 67, 52: 0, 53: 141, 54: 0, 55: 140, 56: 0, 57: 0, 58: 180, 59: 0, 60: 0, 61: 99, 62: 63, 63: 0, 64: 0, 65: 1590, 66: 0, 67: 0, 68: 
15, 69: 113, 70: 0, 71: 0, 72: 0, 73: 54, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 108, 80: 0, 81: 62, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 29, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 69, 98: 0, 99: 0, 100: 0, 101: 62, 102: 30, 103: 42, 104: 0, 105: 0, 106: 0, 107: 67, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 52, 116: 36, 117: 0, 118: 110, 119: 0, 120: 44, 121: 0, 122: 102, 123: 0, 124: 71, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 77, 132: 56, 133: 0, 134: 0, 135: 103, 136: 0, 137: 0, 138: 53}, '2021-04-01': {0: 0, 1: 553, 2: 0, 3: 0, 4: 18, 5: 0, 6: 0, 7: 0, 8: 313, 9: 1100, 10: 0, 11: 0, 12: 0, 13: 0, 14: 336, 15: 856, 16: 216, 17: 415, 18: 0, 19: 0, 20: 0, 21: 1363, 22: 148, 23: 171, 24: 0, 25: 0, 26: 260, 27: 0, 28: 0, 29: 0, 30: 0, 31: 229, 32: 0, 33: 0, 34: 286, 35: 215, 36: 0, 37: 381, 38: 0, 39: 0, 40: 171, 41: 0, 42: 261, 43: 211, 44: 0, 45: 0, 46: 0, 47: 94, 48: 167, 49: 0, 50: 171, 51: 111, 52: 0, 53: 229, 54: 0, 55: 104, 56: 0, 57: 0, 58: 158, 59: 0, 60: 0, 61: 142, 62: 156, 63: 0, 64: 0, 65: 1152, 66: 0, 67: 0, 68: 19, 69: 160, 70: 0, 71: 0, 72: 0, 73: 50, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 146, 80: 0, 81: 25, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 69, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 49, 98: 0, 99: 0, 100: 0, 101: 22, 102: 46, 103: 48, 104: 0, 105: 0, 106: 0, 107: 60, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 24, 116: 51, 117: 0, 118: 112, 119: 0, 120: 73, 121: 0, 122: 155, 123: 0, 124: 57, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 59, 132: 62, 133: 0, 134: 0, 135: 132, 136: 0, 137: 0, 138: 70}, '2021-05-01': {0: 0, 1: 439, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 119, 9: 735, 10: 0, 11: 0, 12: 0, 13: 0, 14: 183, 15: 70, 16: 79, 17: 244, 18: 0, 19: 0, 20: 0, 21: 2842, 22: 30, 23: 76, 24: 0, 25: 0, 26: 95, 27: 0, 28: 0, 29: 0, 30: 0, 31: 38, 32: 0, 33: 0, 34: 197, 35: 114, 36: 0, 37: 140, 38: 0, 39: 0, 40: 91, 41: 0, 42: 82, 43: 
83, 44: 0, 45: 0, 46: 0, 47: 35, 48: 126, 49: 0, 50: 83, 51: 101, 52: 0, 53: 94, 54: 0, 55: 100, 56: 0, 57: 0, 58: 89, 59: 0, 60: 0, 61: 94, 62: 112, 63: 0, 64: 0, 65: 1903, 66: 0, 67: 0, 68: 61, 69: 91, 70: 0, 71: 0, 72: 0, 73: 30, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 116, 80: 0, 81: 12, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 56, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 0, 98: 0, 99: 0, 100: 0, 101: 20, 102: 42, 103: 35, 104: 0, 105: 0, 106: 0, 107: 59, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 0, 116: 27, 117: 0, 118: 45, 119: 0, 120: 49, 121: 0, 122: 129, 123: 0, 124: 58, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 41, 132: 41, 133: 0, 134: 0, 135: 61, 136: 0, 137: 0, 138: 38}, '2021-06-01': {0: 0, 1: 390, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 221, 9: 816, 10: 0, 11: 0, 12: 0, 13: 0, 14: 109, 15: 255, 16: 126, 17: 161, 18: 0, 19: 0, 20: 0, 21: 959, 22: 52, 23: 119, 24: 0, 25: 0, 26: 261, 27: 0, 28: 0, 29: 0, 30: 0, 31: 142, 32: 0, 33: 0, 34: 203, 35: 42, 36: 0, 37: 133, 38: 0, 39: 0, 40: 113, 41: 0, 42: 118, 43: 62, 44: 0, 45: 0, 46: 0, 47: 48, 48: 112, 49: 0, 50: 75, 51: 105, 52: 0, 53: 107, 54: 0, 55: 102, 56: 0, 57: 0, 58: 77, 59: 0, 60: 0, 61: 81, 62: 94, 63: 0, 64: 0, 65: 764, 66: 0, 67: 0, 68: 47, 69: 116, 70: 0, 71: 0, 72: 0, 73: 19, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 148, 80: 0, 81: 20, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 46, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 33, 98: 0, 99: 0, 100: 0, 101: 39, 102: 52, 103: 47, 104: 0, 105: 0, 106: 0, 107: 56, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 62, 116: 41, 117: 0, 118: 51, 119: 0, 120: 59, 121: 0, 122: 73, 123: 0, 124: 34, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 17, 132: 42, 133: 0, 134: 0, 135: 74, 136: 0, 137: 0, 138: 58}, '2021-07-01': {0: 0, 1: 349, 2: 0, 3: 0, 4: 11, 5: 0, 6: 0, 7: 0, 8: 222, 9: 418, 10: 0, 11: 0, 12: 0, 13: 0, 14: 104, 15: 57, 16: 92, 17: 118, 18: 0, 19: 0, 20: 0, 
21: 2040, 22: 80, 23: 50, 24: 0, 25: 0, 26: 147, 27: 0, 28: 0, 29: 0, 30: 0, 31: 22, 32: 0, 33: 0, 34: 117, 35: 88, 36: 0, 37: 146, 38: 0, 39: 0, 40: 65, 41: 0, 42: 117, 43: 65, 44: 0, 45: 0, 46: 0, 47: 33, 48: 36, 49: 0, 50: 51, 51: 50, 52: 0, 53: 66, 54: 0, 55: 51, 56: 0, 57: 0, 58: 100, 59: 0, 60: 0, 61: 63, 62: 55, 63: 0, 64: 0, 65: 847, 66: 0, 67: 0, 68: 32, 69: 68, 70: 0, 71: 0, 72: 0, 73: 42, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 72, 80: 0, 81: 27, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 47, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 36, 98: 0, 99: 0, 100: 0, 101: 25, 102: 29, 103: 39, 104: 0, 105: 0, 106: 0, 107: 40, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 37, 116: 41, 117: 0, 118: 29, 119: 0, 120: 54, 121: 0, 122: 75, 123: 0, 124: 41, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 12, 132: 28, 133: 0, 134: 0, 135: 46, 136: 0, 137: 0, 138: 24}, '2021-08-01': {0: 0, 1: 402, 2: 0, 3: 0, 4: 14, 5: 0, 6: 0, 7: 0, 8: 138, 9: 373, 10: 0, 11: 0, 12: 0, 13: 0, 14: 133, 15: 107, 16: 69, 17: 116, 18: 0, 19: 0, 20: 0, 21: 1554, 22: 80, 23: 65, 24: 0, 25: 0, 26: 123, 27: 0, 28: 0, 29: 0, 30: 0, 31: 23, 32: 0, 33: 0, 34: 95, 35: 49, 36: 0, 37: 146, 38: 0, 39: 0, 40: 50, 41: 0, 42: 90, 43: 57, 44: 0, 45: 0, 46: 0, 47: 19, 48: 46, 49: 0, 50: 38, 51: 20, 52: 0, 53: 91, 54: 0, 55: 69, 56: 0, 57: 0, 58: 57, 59: 0, 60: 0, 61: 53, 62: 48, 63: 0, 64: 0, 65: 934, 66: 0, 67: 0, 68: 19, 69: 66, 70: 0, 71: 0, 72: 0, 73: 75, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 86, 80: 0, 81: 33, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 32, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 46, 98: 0, 99: 0, 100: 0, 101: 22, 102: 31, 103: 63, 104: 0, 105: 0, 106: 0, 107: 41, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 42, 116: 42, 117: 0, 118: 30, 119: 0, 120: 32, 121: 0, 122: 70, 123: 0, 124: 40, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 12, 132: 21, 133: 0, 134: 0, 135: 83, 136: 0, 137: 0, 138: 20}, 
'2021-09-01': {0: 0, 1: 560, 2: 55, 3: 496, 4: 11, 5: 0, 6: 0, 7: 0, 8: 77, 9: 309, 10: 45, 11: 257, 12: 0, 13: 0, 14: 87, 15: 179, 16: 61, 17: 79, 18: 65, 19: 144, 20: 307, 21: 840, 22: 52, 23: 41, 24: 108, 25: 156, 26: 113, 27: 0, 28: 30, 29: 27, 30: 0, 31: 59, 32: 0, 33: 0, 34: 66, 35: 53, 36: 70, 37: 42, 38: 0, 39: 26, 40: 38, 41: 0, 42: 50, 43: 11, 44: 209, 45: 56, 46: 52, 47: 18, 48: 47, 49: 0, 50: 58, 51: 32, 52: 0, 53: 76, 54: 0, 55: 45, 56: 0, 57: 63, 58: 95, 59: 0, 60: 0, 61: 33, 62: 45, 63: 0, 64: 96, 65: 249, 66: 0, 67: 0, 68: 0, 69: 73, 70: 0, 71: 30, 72: 0, 73: 41, 74: 0, 75: 0, 76: 37, 77: 22, 78: 0, 79: 68, 80: 18, 81: 47, 82: 0, 83: 0, 84: 0, 85: 79, 86: 0, 87: 75, 88: 40, 89: 39, 90: 35, 91: 0, 92: 0, 93: 0, 94: 40, 95: 0, 96: 0, 97: 44, 98: 30, 99: 46, 100: 0, 101: 33, 102: 40, 103: 31, 104: 0, 105: 17, 106: 15, 107: 32, 108: 15, 109: 0, 110: 58, 111: 63, 112: 0, 113: 0, 114: 0, 115: 42, 116: 35, 117: 19, 118: 55, 119: 0, 120: 25, 121: 0, 122: 47, 123: 0, 124: 37, 125: 16, 126: 24, 127: 124, 128: 67, 129: 0, 130: 0, 131: 28, 132: 20, 133: 0, 134: 0, 135: 34, 136: 0, 137: 26, 138: 28}, '2021-10-01': {0: 122, 1: 720, 2: 129, 3: 1135, 4: 11, 5: 0, 6: 0, 7: 85, 8: 122, 9: 280, 10: 100, 11: 159, 12: 0, 13: 0, 14: 87, 15: 115, 16: 40, 17: 32, 18: 236, 19: 176, 20: 322, 21: 334, 22: 113, 23: 49, 24: 133, 25: 119, 26: 136, 27: 0, 28: 74, 29: 56, 30: 38, 31: 83, 32: 0, 33: 0, 34: 65, 35: 88, 36: 75, 37: 68, 38: 52, 39: 36, 40: 44, 41: 11, 42: 40, 43: 13, 44: 198, 45: 244, 46: 130, 47: 23, 48: 44, 49: 0, 50: 62, 51: 49, 52: 0, 53: 92, 54: 0, 55: 14, 56: 0, 57: 83, 58: 58, 59: 0, 60: 0, 61: 44, 62: 42, 63: 39, 64: 37, 65: 132, 66: 0, 67: 0, 68: 49, 69: 57, 70: 0, 71: 40, 72: 112, 73: 28, 74: 102, 75: 0, 76: 56, 77: 17, 78: 22, 79: 37, 80: 48, 81: 0, 82: 14, 83: 13, 84: 48, 85: 84, 86: 0, 87: 104, 88: 81, 89: 34, 90: 49, 91: 0, 92: 0, 93: 42, 94: 101, 95: 41, 96: 11, 97: 74, 98: 35, 99: 45, 100: 73, 101: 19, 102: 38, 103: 26, 104: 0, 105: 26, 106: 26, 107: 
43, 108: 93, 109: 0, 110: 74, 111: 70, 112: 35, 113: 25, 114: 0, 115: 55, 116: 28, 117: 0, 118: 58, 119: 0, 120: 26, 121: 0, 122: 13, 123: 0, 124: 50, 125: 16, 126: 39, 127: 74, 128: 42, 129: 29, 130: 0, 131: 24, 132: 26, 133: 0, 134: 0, 135: 125, 136: 0, 137: 37, 138: 20}, '2021-11-01': {0: 1331, 1: 1810, 2: 274, 3: 899, 4: 0, 5: 0, 6: 30, 7: 606, 8: 138, 9: 1735, 10: 209, 11: 468, 12: 0, 13: 0, 14: 327, 15: 1394, 16: 73, 17: 187, 18: 1259, 19: 355, 20: 374, 21: 2079, 22: 500, 23: 168, 24: 305, 25: 80, 26: 256, 27: 0, 28: 340, 29: 143, 30: 380, 31: 273, 32: 79, 33: 0, 34: 143, 35: 137, 36: 200, 37: 336, 38: 166, 39: 235, 40: 97, 41: 202, 42: 75, 43: 130, 44: 650, 45: 675, 46: 326, 47: 46, 48: 105, 49: 0, 50: 195, 51: 135, 52: 93, 53: 229, 54: 0, 55: 93, 56: 0, 57: 188, 58: 89, 59: 46, 60: 123, 61: 101, 62: 89, 63: 64, 64: 208, 65: 325, 66: 0, 67: 0, 68: 211, 69: 90, 70: 0, 71: 111, 72: 218, 73: 42, 74: 139, 75: 16, 76: 94, 77: 148, 78: 45, 79: 92, 80: 100, 81: 16, 82: 31, 83: 123, 84: 87, 85: 142, 86: 0, 87: 444, 88: 123, 89: 105, 90: 63, 91: 0, 92: 16, 93: 149, 94: 240, 95: 114, 96: 99, 97: 128, 98: 128, 99: 104, 100: 196, 101: 32, 102: 41, 103: 55, 104: 0, 105: 67, 106: 97, 107: 56, 108: 40, 109: 14, 110: 194, 111: 290, 112: 151, 113: 154, 114: 11, 115: 105, 116: 54, 117: 30, 118: 148, 119: 0, 120: 71, 121: 0, 122: 39, 123: 0, 124: 118, 125: 207, 126: 58, 127: 131, 128: 93, 129: 30, 130: 0, 131: 90, 132: 43, 133: 0, 134: 0, 135: 40, 136: 0, 137: 58, 138: 29}, '2021-12-01': {0: 1901, 1: 2469, 2: 298, 3: 1760, 4: 14, 5: 0, 6: 573, 7: 1444, 8: 126, 9: 1568, 10: 220, 11: 497, 12: 0, 13: 71, 14: 248, 15: 1670, 16: 77, 17: 93, 18: 910, 19: 362, 20: 698, 21: 1044, 22: 651, 23: 156, 24: 208, 25: 185, 26: 314, 27: 0, 28: 356, 29: 205, 30: 570, 31: 186, 32: 25, 33: 0, 34: 117, 35: 90, 36: 385, 37: 228, 38: 410, 39: 270, 40: 63, 41: 228, 42: 50, 43: 53, 44: 450, 45: 896, 46: 431, 47: 74, 48: 62, 49: 0, 50: 678, 51: 123, 52: 204, 53: 225, 54: 100, 55: 13, 56: 88, 57: 302, 
58: 81, 59: 111, 60: 141, 61: 98, 62: 57, 63: 73, 64: 334, 65: 422, 66: 49, 67: 0, 68: 600, 69: 86, 70: 55, 71: 162, 72: 138, 73: 50, 74: 296, 75: 30, 76: 153, 77: 186, 78: 68, 79: 39, 80: 173, 81: 0, 82: 276, 83: 192, 84: 66, 85: 116, 86: 89, 87: 385, 88: 209, 89: 121, 90: 68, 91: 22, 92: 52, 93: 262, 94: 261, 95: 70, 96: 85, 97: 298, 98: 170, 99: 126, 100: 145, 101: 17, 102: 53, 103: 56, 104: 0, 105: 97, 106: 114, 107: 72, 108: 42, 109: 22, 110: 211, 111: 370, 112: 175, 113: 111, 114: 27, 115: 62, 116: 104, 117: 118, 118: 248, 119: 0, 120: 58, 121: 20, 122: 52, 123: 20, 124: 97, 125: 119, 126: 107, 127: 108, 128: 79, 129: 42, 130: 0, 131: 281, 132: 83, 133: 57, 134: 61, 135: 50, 136: 50, 137: 22, 138: 37}, '2022-01-01': {0: 938, 1: 1501, 2: 377, 3: 1455, 4: 17, 5: 0, 6: 815, 7: 562, 8: 534, 9: 628, 10: 178, 11: 332, 12: 0, 13: 177, 14: 311, 15: 614, 16: 50, 17: 121, 18: 343, 19: 314, 20: 356, 21: 587, 22: 498, 23: 67, 24: 222, 25: 230, 26: 210, 27: 0, 28: 237, 29: 131, 30: 222, 31: 74, 32: 12, 33: 0, 34: 79, 35: 53, 36: 397, 37: 351, 38: 253, 39: 269, 40: 63, 41: 211, 42: 53, 43: 163, 44: 209, 45: 287, 46: 364, 47: 59, 48: 49, 49: 0, 50: 290, 51: 55, 52: 113, 53: 76, 54: 85, 55: 83, 56: 190, 57: 166, 58: 72, 59: 108, 60: 119, 61: 121, 62: 25, 63: 46, 64: 163, 65: 204, 66: 76, 67: 0, 68: 250, 69: 76, 70: 148, 71: 161, 72: 97, 73: 44, 74: 150, 75: 34, 76: 144, 77: 189, 78: 73, 79: 27, 80: 109, 81: 0, 82: 90, 83: 185, 84: 48, 85: 110, 86: 198, 87: 216, 88: 139, 89: 59, 90: 34, 91: 45, 92: 116, 93: 187, 94: 164, 95: 34, 96: 80, 97: 45, 98: 78, 99: 82, 100: 54, 101: 14, 102: 28, 103: 31, 104: 48, 105: 52, 106: 97, 107: 29, 108: 56, 109: 33, 110: 84, 111: 212, 112: 111, 113: 128, 114: 18, 115: 81, 116: 32, 117: 115, 118: 192, 119: 0, 120: 36, 121: 194, 122: 17, 123: 55, 124: 98, 125: 104, 126: 83, 127: 101, 128: 54, 129: 36, 130: 0, 131: 156, 132: 33, 133: 104, 134: 101, 135: 31, 136: 46, 137: 66, 138: 20}, '2022-02-01': {0: 612, 1: 912, 2: 325, 3: 892, 4: 11, 5: 0, 6: 
706, 7: 310, 8: 439, 9: 563, 10: 134, 11: 140, 12: 0, 13: 153, 14: 281, 15: 399, 16: 49, 17: 90, 18: 204, 19: 231, 20: 100, 21: 318, 22: 255, 23: 63, 24: 309, 25: 181, 26: 205, 27: 0, 28: 121, 29: 84, 30: 117, 31: 80, 32: 143, 33: 0, 34: 65, 35: 64, 36: 227, 37: 271, 38: 133, 39: 290, 40: 47, 41: 156, 42: 0, 43: 176, 44: 153, 45: 244, 46: 300, 47: 14, 48: 30, 49: 0, 50: 126, 51: 46, 52: 81, 53: 69, 54: 165, 55: 48, 56: 79, 57: 91, 58: 31, 59: 95, 60: 138, 61: 87, 62: 34, 63: 39, 64: 101, 65: 111, 66: 19, 67: 0, 68: 15, 69: 26, 70: 0, 71: 88, 72: 81, 73: 53, 74: 135, 75: 62, 76: 92, 77: 141, 78: 57, 79: 32, 80: 71, 81: 34, 82: 357, 83: 92, 84: 50, 85: 82, 86: 97, 87: 128, 88: 75, 89: 54, 90: 23, 91: 28, 92: 57, 93: 108, 94: 138, 95: 48, 96: 79, 97: 109, 98: 52, 99: 54, 100: 73, 101: 27, 102: 20, 103: 26, 104: 86, 105: 48, 106: 54, 107: 27, 108: 39, 109: 61, 110: 67, 111: 110, 112: 127, 113: 147, 114: 0, 115: 60, 116: 23, 117: 68, 118: 101, 119: 23, 120: 25, 121: 93, 122: 35, 123: 25, 124: 52, 125: 72, 126: 50, 127: 84, 128: 78, 129: 43, 130: 0, 131: 82, 132: 34, 133: 84, 134: 13, 135: 13, 136: 37, 137: 69, 138: 13}, '2022-03-01': {0: 573, 1: 775, 2: 267, 3: 870, 4: 19, 5: 0, 6: 494, 7: 254, 8: 402, 9: 657, 10: 180, 11: 144, 12: 0, 13: 266, 14: 240, 15: 394, 16: 106, 17: 142, 18: 216, 19: 211, 20: 113, 21: 245, 22: 152, 23: 88, 24: 225, 25: 168, 26: 177, 27: 0, 28: 92, 29: 70, 30: 98, 31: 124, 32: 103, 33: 0, 34: 85, 35: 86, 36: 189, 37: 184, 38: 108, 39: 0, 40: 69, 41: 125, 42: 26, 43: 128, 44: 119, 45: 226, 46: 251, 47: 26, 48: 58, 49: 0, 50: 109, 51: 67, 52: 70, 53: 55, 54: 157, 55: 49, 56: 51, 57: 89, 58: 43, 59: 69, 60: 136, 61: 92, 62: 79, 63: 54, 64: 59, 65: 64, 66: 35, 67: 0, 68: 239, 69: 48, 70: 101, 71: 91, 72: 53, 73: 65, 74: 147, 75: 38, 76: 70, 77: 107, 78: 41, 79: 32, 80: 51, 81: 39, 82: 130, 83: 123, 84: 44, 85: 60, 86: 177, 87: 99, 88: 75, 89: 35, 90: 21, 91: 25, 92: 77, 93: 88, 94: 86, 95: 88, 96: 52, 97: 45, 98: 42, 99: 52, 100: 121, 101: 28, 102: 
22, 103: 26, 104: 104, 105: 39, 106: 48, 107: 45, 108: 42, 109: 35, 110: 74, 111: 101, 112: 101, 113: 120, 114: 22, 115: 58, 116: 23, 117: 53, 118: 70, 119: 45, 120: 30, 121: 69, 122: 44, 123: 37, 124: 33, 125: 49, 126: 49, 127: 58, 128: 55, 129: 33, 130: 0, 131: 58, 132: 30, 133: 42, 134: 43, 135: 23, 136: 31, 137: 83, 138: 22}, '2022-04-01': {0: 356, 1: 595, 2: 231, 3: 444, 4: 0, 5: 0, 6: 220, 7: 145, 8: 185, 9: 394, 10: 140, 11: 112, 12: 0, 13: 104, 14: 139, 15: 236, 16: 102, 17: 121, 18: 77, 19: 174, 20: 108, 21: 133, 22: 105, 23: 53, 24: 195, 25: 114, 26: 155, 27: 11, 28: 88, 29: 40, 30: 102, 31: 91, 32: 142, 33: 0, 34: 66, 35: 36, 36: 90, 37: 114, 38: 64, 39: 262, 40: 46, 41: 87, 42: 47, 43: 87, 44: 64, 45: 93, 46: 114, 47: 15, 48: 95, 49: 0, 50: 85, 51: 40, 52: 30, 53: 51, 54: 81, 55: 38, 56: 66, 57: 52, 58: 43, 59: 59, 60: 121, 61: 53, 62: 44, 63: 22, 64: 59, 65: 64, 66: 47, 67: 0, 68: 194, 69: 26, 70: 59, 71: 37, 72: 47, 73: 51, 74: 146, 75: 36, 76: 43, 77: 120, 78: 37, 79: 16, 80: 52, 81: 22, 82: 151, 83: 51, 84: 35, 85: 52, 86: 71, 87: 32, 88: 39, 89: 20, 90: 25, 91: 25, 92: 48, 93: 44, 94: 35, 95: 40, 96: 30, 97: 41, 98: 24, 99: 45, 100: 44, 101: 17, 102: 15, 103: 19, 104: 39, 105: 32, 106: 45, 107: 35, 108: 21, 109: 16, 110: 34, 111: 44, 112: 46, 113: 29, 114: 20, 115: 51, 116: 17, 117: 45, 118: 52, 119: 31, 120: 29, 121: 34, 122: 21, 123: 16, 124: 26, 125: 39, 126: 22, 127: 45, 128: 48, 129: 20, 130: 0, 131: 35, 132: 18, 133: 39, 134: 22, 135: 30, 136: 71, 137: 15, 138: 11}, '2022-05-01': {0: 383, 1: 326, 2: 108, 3: 397, 4: 0, 5: 0, 6: 110, 7: 83, 8: 142, 9: 240, 10: 137, 11: 70, 12: 0, 13: 142, 14: 110, 15: 203, 16: 111, 17: 265, 18: 52, 19: 109, 20: 57, 21: 85, 22: 73, 23: 202, 24: 102, 25: 50, 26: 178, 27: 42, 28: 55, 29: 26, 30: 53, 31: 173, 32: 76, 33: 0, 34: 207, 35: 87, 36: 29, 37: 79, 38: 27, 39: 102, 40: 115, 41: 33, 42: 102, 43: 65, 44: 42, 45: 47, 46: 92, 47: 25, 48: 93, 49: 0, 50: 42, 51: 80, 52: 20, 53: 105, 54: 52, 55: 70, 56: 46, 57: 
31, 58: 86, 59: 39, 60: 32, 61: 33, 62: 103, 63: 16, 64: 49, 65: 24, 66: 22, 67: 0, 68: 161, 69: 78, 70: 31, 71: 36, 72: 28, 73: 73, 74: 57, 75: 21, 76: 30, 77: 39, 78: 22, 79: 70, 80: 24, 81: 55, 82: 134, 83: 25, 84: 16, 85: 28, 86: 24, 87: 28, 88: 31, 89: 17, 90: 60, 91: 30, 92: 32, 93: 49, 94: 20, 95: 13, 96: 12, 97: 31, 98: 20, 99: 25, 100: 21, 101: 33, 102: 29, 103: 36, 104: 23, 105: 26, 106: 26, 107: 31, 108: 30, 109: 15, 110: 22, 111: 20, 112: 32, 113: 27, 114: 39, 115: 18, 116: 40, 117: 31, 118: 21, 119: 24, 120: 52, 121: 22, 122: 62, 123: 37, 124: 16, 125: 19, 126: 17, 127: 23, 128: 17, 129: 15, 130: 0, 131: 22, 132: 32, 133: 24, 134: 20, 135: 21, 136: 13, 137: 23, 138: 25}, '2022-06-01': {0: 613, 1: 1944, 2: 1826, 3: 494, 4: 0, 5: 244, 6: 928, 7: 798, 8: 219, 9: 1529, 10: 1029, 11: 526, 12: 122, 13: 195, 14: 173, 15: 1261, 16: 87, 17: 243, 18: 1179, 19: 217, 20: 464, 21: 952, 22: 353, 23: 148, 24: 166, 25: 187, 26: 134, 27: 124, 28: 321, 29: 221, 30: 193, 31: 224, 32: 75, 33: 0, 34: 277, 35: 77, 36: 253, 37: 174, 38: 343, 39: 283, 40: 73, 41: 295, 42: 108, 43: 138, 44: 102, 45: 1364, 46: 467, 47: 28, 48: 87, 49: 16, 50: 145, 51: 88, 52: 128, 53: 60, 54: 80, 55: 81, 56: 40, 57: 206, 58: 61, 59: 166, 60: 144, 61: 71, 62: 78, 63: 39, 64: 331, 65: 116, 66: 25, 67: 13, 68: 62, 69: 37, 70: 24, 71: 311, 72: 106, 73: 50, 74: 257, 75: 22, 76: 56, 77: 128, 78: 100, 79: 55, 80: 139, 81: 70, 82: 140, 83: 20, 84: 53, 85: 33, 86: 38, 87: 167, 88: 218, 89: 20, 90: 34, 91: 19, 92: 25, 93: 199, 94: 122, 95: 24, 96: 28, 97: 36, 98: 69, 99: 146, 100: 33, 101: 14, 102: 21, 103: 27, 104: 28, 105: 78, 106: 62, 107: 30, 108: 47, 109: 20, 110: 78, 111: 48, 112: 35, 113: 21, 114: 17, 115: 49, 116: 61, 117: 92, 118: 26, 119: 16, 120: 47, 121: 36, 122: 54, 123: 43, 124: 23, 125: 40, 126: 22, 127: 121, 128: 145, 129: 12, 130: 18, 131: 31, 132: 31, 133: 17, 134: 23, 135: 23, 136: 19, 137: 24, 138: 24}, '2022-07-01': {0: 349, 1: 283, 2: 163, 3: 318, 4: 67, 5: 328, 6: 121, 7: 96, 8: 
205, 9: 219, 10: 89, 11: 60, 12: 153, 13: 68, 14: 135, 15: 181, 16: 53, 17: 94, 18: 65, 19: 96, 20: 67, 21: 57, 22: 67, 23: 59, 24: 134, 25: 94, 26: 78, 27: 142, 28: 33, 29: 29, 30: 45, 31: 64, 32: 65, 33: 76, 34: 81, 35: 55, 36: 44, 37: 83, 38: 15, 39: 46, 40: 84, 41: 45, 42: 56, 43: 54, 44: 50, 45: 48, 46: 90, 47: 17, 48: 56, 49: 27, 50: 66, 51: 37, 52: 34, 53: 63, 54: 58, 55: 27, 56: 45, 57: 74, 58: 51, 59: 61, 60: 80, 61: 45, 62: 65, 63: 34, 64: 27, 65: 30, 66: 18, 67: 35, 68: 47, 69: 31, 70: 24, 71: 40, 72: 18, 73: 30, 74: 44, 75: 26, 76: 31, 77: 32, 78: 29, 79: 29, 80: 45, 81: 14, 82: 54, 83: 31, 84: 37, 85: 24, 86: 32, 87: 20, 88: 40, 89: 32, 90: 22, 91: 17, 92: 30, 93: 29, 94: 20, 95: 52, 96: 34, 97: 25, 98: 26, 99: 28, 100: 72, 101: 17, 102: 15, 103: 22, 104: 28, 105: 24, 106: 28, 107: 19, 108: 25, 109: 25, 110: 38, 111: 19, 112: 27, 113: 26, 114: 15, 115: 22, 116: 28, 117: 24, 118: 33, 119: 13, 120: 57, 121: 40, 122: 22, 123: 14, 124: 18, 125: 23, 126: 20, 127: 38, 128: 20, 129: 14, 130: 36, 131: 24, 132: 18, 133: 39, 134: 14, 135: 40, 136: 16, 137: 21, 138: 13}, '2022-08-01': {0: 857, 1: 500, 2: 362, 3: 334, 4: 308, 5: 296, 6: 289, 7: 266, 8: 244, 9: 223, 10: 206, 11: 192, 12: 180, 13: 169, 14: 160, 15: 159, 16: 140, 17: 134, 18: 134, 19: 128, 20: 127, 21: 126, 22: 123, 23: 116, 24: 112, 25: 111, 26: 108, 27: 102, 28: 99, 29: 94, 30: 94, 31: 89, 32: 88, 33: 88, 34: 87, 35: 85, 36: 83, 37: 79, 38: 78, 39: 77, 40: 77, 41: 77, 42: 76, 43: 75, 44: 75, 45: 74, 46: 72, 47: 65, 48: 65, 49: 65, 50: 64, 51: 64, 52: 64, 53: 62, 54: 62, 55: 61, 56: 61, 57: 61, 58: 60, 59: 60, 60: 58, 61: 55, 62: 54, 63: 54, 64: 54, 65: 54, 66: 53, 67: 53, 68: 52, 69: 50, 70: 49, 71: 49, 72: 49, 73: 48, 74: 48, 75: 48, 76: 47, 77: 47, 78: 46, 79: 44, 80: 44, 81: 43, 82: 43, 83: 43, 84: 42, 85: 42, 86: 41, 87: 41, 88: 41, 89: 40, 90: 39, 91: 39, 92: 39, 93: 39, 94: 38, 95: 37, 96: 37, 97: 36, 98: 36, 99: 36, 100: 36, 101: 35, 102: 35, 103: 35, 104: 35, 105: 35, 106: 35, 107: 34, 108: 
34, 109: 34, 110: 32, 111: 32, 112: 32, 113: 32, 114: 31, 115: 31, 116: 30, 117: 30, 118: 30, 119: 30, 120: 29, 121: 29, 122: 28, 123: 28, 124: 28, 125: 28, 126: 28, 127: 28, 128: 28, 129: 28, 130: 28, 131: 27, 132: 27, 133: 27, 134: 27, 135: 27, 136: 27, 137: 27, 138: 26}}
[ "You have to instantiate the models since they are classes.\nThe code would be,\nfrom statsforecast import StatsForecast\n\nfrom statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB\nfrom statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters\nfrom statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage\nfrom statsforecast.models import MSTL\nfrom statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta\nfrom statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta\n\nseasonality = 12 #Monthly data\n\nmodels = [\n ADIDA(),\n CrostonClassic(),\n CrostonSBA(),\n CrostonOptimized(),\n IMAPA(),\n TSB(0.3,0.2),\n Theta(season_length=seasonality),\n OptimizedTheta(season_length=seasonality),\n DynamicTheta(season_length=seasonality),\n DynamicOptimizedTheta(season_length=seasonality),\n AutoARIMA(season_length=seasonality),\n AutoCES(season_length=seasonality),\n AutoTheta(season_length=seasonality),\n HistoricAverage(),\n Naive(),\n RandomWalkWithDrift(),\n SeasonalNaive(season_length=seasonality),\n SeasonalExponentialSmoothing(season_length=seasonality, alpha=0.2),\n]\n\nfcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=-1, \n fallback_model=SeasonalNaive(season_length=seasonality))\n%time forecasts = fcst.forecast(9)\nforecasts.reset_index()\n\nforecasts = forecasts.round(0)\n\nHere's a colab link fixing the error: https://colab.research.google.com/drive/1vwIImCoKzGvePbgFKidauV8sXimAvO48?usp=sharing\n" ]
[ 0 ]
[]
[]
[ "forecasting", "python", "python_3.x", "time_series" ]
stackoverflow_0074657616_forecasting_python_python_3.x_time_series.txt
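The statsforecast fix above comes down to one basic Python rule: the models list must hold instances (created by calling each class, e.g. ADIDA()), not the class objects themselves. A minimal sketch of that distinction — the ADIDA below is a dummy stand-in, not the real statsforecast.models.ADIDA:

```python
# Dummy stand-in for statsforecast.models.ADIDA -- only here to show
# the class-vs-instance distinction, not the real forecasting logic.
class ADIDA:
    def __init__(self):
        self.alias = "ADIDA"

    def forecast(self, y, h):
        # placeholder: repeat the last observation h times
        return [y[-1]] * h

models_wrong = [ADIDA]    # the class object itself: __init__ never ran
models_right = [ADIDA()]  # calling the class produces a configured model

assert not isinstance(models_wrong[0], ADIDA)
assert isinstance(models_right[0], ADIDA)
assert models_right[0].forecast([10, 12, 9], h=2) == [9, 9]
```

Passing the bare class (models_wrong) is what triggered the original error; instantiating (models_right) is exactly the change the answer's code makes.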
Q: strange behavior of matplotlib in a jupyter ipython notebook I am trying to create plots in a qt widget but am having problems import matplotlib.pyplot as plt import numpy as np then %matplotlib qt if I do the next two commands in the same cell plt.figure() plt.plot(np.arange(11)) all goes well. If I separate the last two commands into two cells, I only get a qt window and no plot. Same thing if I use the object-oriented style of plotting. I am running Python 3.10.5, matplotlib 3.5.3, ipykernel 6.15.1, ipython 8.4.0 and qtconsole 5.3.1 (I am having similar problems getting an active qtconsole from inside a notebook, probably related to the %matplotlib qt problem). I do not have this problem in an ipython console or a qtconsole (called from the command line), but these do not have cells. On another computer, I am running Python 3.6 and do not have this problem. Thanks for your interest A: It seems the problem went away. I run Opensuse linux with the tumbleweed distribution that is continually being updated. Tried an old notebook and the problem had disappeared
strange behavior of matplotlib in a jupyter ipython notebook
I am trying to create plots in a qt widget but am having problems import matplotlib.pyplot as plt import numpy as np then %matplotlib qt if I do the next two commands in the same cell plt.figure() plt.plot(np.arange(11)) all goes well. If I separate the last two commands into two cells, I only get a qt window and no plot. Same thing if I use the object-oriented style of plotting. I am running Python 3.10.5, matplotlib 3.5.3, ipykernel 6.15.1, ipython 8.4.0 and qtconsole 5.3.1 (I am having similar problems getting an active qtconsole from inside a notebook, probably related to the %matplotlib qt problem). I do not have this problem in an ipython console or a qtconsole (called from the command line), but these do not have cells. On another computer, I am running Python 3.6 and do not have this problem. Thanks for your interest
[ "It seems the problem went away. I run Opensuse linux with the tumbleweed distribution that is continually being updated. Tried an old notebook and the problem had disappeared\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "matplotlib" ]
stackoverflow_0073139926_jupyter_notebook_matplotlib.txt
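Whether or not the backend glitch returns, the two-cell pattern in the question is more robust with explicit handles: drawing through a saved Axes object does not depend on which figure the backend currently considers active. A small sketch — it uses the headless Agg backend so it runs outside a notebook, whereas the question used the qt backend via %matplotlib qt:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for illustration; the question used qt
import matplotlib.pyplot as plt
import numpy as np

# "Cell 1": create the figure and keep explicit handles to it
fig, ax = plt.subplots()

# "Cell 2": draw on the saved Axes instead of relying on the current figure
ax.plot(np.arange(11))

# the line ended up on the figure we created, regardless of cell boundaries
assert len(ax.lines) == 1
```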
Q: Network Error while using axios with fastapi (python) running on http://127.0.0.1:8000 I'm trying to integrate an API with my react native app, but when I make the request, I always receive a Network error: [Unhandled promise rejection: TypeError: Network request failed] at node_modules\whatwg-fetch\dist\fetch.umd.js:541:17 in setTimeout$argument_0 at node_modules\react-native\Libraries\Core\Timers\JSTimers.js:123:14 in _callTimer at node_modules\react-native\Libraries\Core\Timers\JSTimers.js:379:16 in callTimers at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:414:4 in __callFunction at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:113:6 in __guard$argument_0 at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:365:10 in __guard at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:112:4 in callFunctionReturnFlushedQueue at [native code]:null in callFunctionReturnFlushedQueue My axios is configured like the following: import axios from "axios"; const baseUrl = 'http://localhost:8000/' export const api = axios.create({baseURL: baseUrl}); Also tried with 'http://10.0.2.2:8000/' and 'http://127.0.0.1:8000', but got the same error. Could someone help me here? Thank you in advance!! A: By looking at your baseUrl, I guess you are trying to access the locally hosted API. So, in order to make it work on an emulator/wired device, you need to do the following steps: Open a command prompt, type the following command: ipconfig and note down the IPv4 Address. Now, paste that same IPv4 address as the base URL in the axios file. E.g. http://YOUR-IPv4-Address-Here:YOUR-API-PORT and I hope this would resolve this 'Network request failed' issue. A: I had the same issue as you. The problem here is that you're not setting CORS for your front end app.
Your main server file (in my case main.py) should have a CORS setup like this: origins = ["http://localhost:3000"] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) In my case, my front end was running on http://localhost:3000. This should solve the issue.
Network Error while using axios with fastapi (python) running on http://127.0.0.1:8000
I'm trying to integrate an API with my react native app, but when I make the request, I always receive a Network error: [Unhandled promise rejection: TypeError: Network request failed] at node_modules\whatwg-fetch\dist\fetch.umd.js:541:17 in setTimeout$argument_0 at node_modules\react-native\Libraries\Core\Timers\JSTimers.js:123:14 in _callTimer at node_modules\react-native\Libraries\Core\Timers\JSTimers.js:379:16 in callTimers at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:414:4 in __callFunction at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:113:6 in __guard$argument_0 at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:365:10 in __guard at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:112:4 in callFunctionReturnFlushedQueue at [native code]:null in callFunctionReturnFlushedQueue My axios is configured like the following: import axios from "axios"; const baseUrl = 'http://localhost:8000/' export const api = axios.create({baseURL: baseUrl}); Also tried with 'http://10.0.2.2:8000/' and 'http://127.0.0.1:8000', but got the same error. Could someone help me here? Thank you in advance!!
[ "By looking at your baseUrl, I guess you are trying to access the local hosted API. So, in order to make it work on an emulator/wired device; you need to do the following steps:\nOpen commmand prompt, type the following command:\nipconfig\nand note down the IPv4 Address. Now, paste that same IPv4 address as the base URL in axios file. E.g.\nhttp://YOUR-IPv4-Address-Here:YOUR-API-PORT\nand I hope, this would resolve this 'Network request failed' issue.\n", "I had the same issue as you.\nThe problem here is that you're not setting cors for your front end app.\nYour main server file (my case main.py), should have a cors setup like this:\norigins = [\"http://localhost:3000\"]\napp.add_middleware(\n CORSMiddleware,\n allow_origins=origins,\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n)\n\nIn my case, my front end was running on http://localhost:3000.\nThis should solve the issue.\n" ]
[ 0, 0 ]
[]
[]
[ "axios", "fastapi", "react_native", "uvicorn" ]
stackoverflow_0070585182_axios_fastapi_react_native_uvicorn.txt
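The CORS fix in the second answer works because the middleware makes the server send Access-Control-Allow-* response headers for the allowed origin; without them the browser blocks the response. A dependency-free sketch of those headers using only Python's standard library — this is not FastAPI's CORSMiddleware, just an illustration of the same headers it would emit for an allowed origin:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "http://localhost:3000"  # assumption: the front end's origin

class CORSHandler(BaseHTTPRequestHandler):
    def _send_cors_headers(self):
        # The headers a CORS middleware adds for an allowed origin.
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Credentials", "true")

    def do_OPTIONS(self):
        # The browser's preflight request.
        self.send_response(204)
        self._send_cors_headers()
        self.send_header("Access-Control-Allow-Methods", "*")
        self.send_header("Access-Control-Allow-Headers", "*")
        self.end_headers()

    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self._send_cors_headers()
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), CORSHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    cors_origin = resp.headers["Access-Control-Allow-Origin"]
server.shutdown()
```

If cors_origin comes back empty on a real server, the browser-side "Network Error" is usually this missing-header case rather than a broken route.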
Q: Firestore: Multiple conditional where clauses For example I have dynamic filter for my list of books where I can set specific color, authors and categories. This filter can set multiple colors at once and multiple categories. Book > Red, Blue > Adventure, Detective. How can I add "where" conditionally? firebase .firestore() .collection("book") .where("category", "==", ) .where("color", "==", ) .where("author", "==", ) .orderBy("date") .get() .then(querySnapshot => {... A: As you can see in the API docs, the collection() method returns a CollectionReference. CollectionReference extends Query, and Query objects are immutable. Query.where() and Query.orderBy() return new Query objects that add operations on top of the original Query (which remains unmodified). You will have to write code to remember these new Query objects so you can continue to chain calls with them. So, you can rewrite your code like this: var query = firebase.firestore().collection("book") query = query.where(...) query = query.where(...) query = query.where(...) query = query.orderBy(...) query.get().then(...) Now you can put in conditionals to figure out which filters you want to apply at each stage. Just reassign query with each newly added filter. if (some_condition) { query = query.where(...) 
} A: Firebase Version 9 The docs do not cover this but here is how to add conditional where clauses to a query import { collection, query, where } from 'firebase/firestore' const queryConstraints = [] if (group != null) queryConstraints.push(where('group', '==', group)) if (pro != null) queryConstraints.push(where('pro', '==', pro)) const q = query(collection(db, 'videos'), ...queryConstraints) The source of this answer is a bit of intuitive guesswork and help from my best friend J-E^S^-U-S A: With Firebase Version 9 (Jan, 2022 Update): You can filter data with multiple where clauses: import { query, collection, where, getDocs } from "firebase/firestore"; const q = query( collection(db, "products"), where("category", "==", "Computer"), where("types", "array-contains", ['Laptop', 'Lenovo', 'Intel']), where("price", "<=", 1000), ); const docsSnap = await getDocs(q); docsSnap.forEach((doc) => { console.log(doc.data()); }); A: In addition to @Doug Stevenson answer. When you have more than one where it is necessary to make it more dynamic as in my case. function readDocuments(collection, options = {}) { let {where, orderBy, limit} = options; let query = firebase.firestore().collection(collection); if (where) { if (where[0] instanceof Array) { // It's an array of array for (let w of where) { query = query.where(...w); } } else { query = query.where(...where); } } if (orderBy) { query = query.orderBy(...orderBy); } if (limit) { query = query.limit(limit); } return query .get() .then() .catch() } // Usage // Multiple where let options = {where: [["category", "==", "someCategory"], ["color", "==", "red"], ["author", "==", "Sam"]], orderBy: ["date", "desc"]}; //OR // A single where let options = {where: ["category", "==", "someCategory"]}; let documents = readDocuments("books", options); A: Note that a multiple WHERE clause is inherently an AND operation. 
A: If you're using angular fire, you can just use reduce like so: const students = [studentID, studentID2,...]; this.afs.collection('classes', (ref: any) => students.reduce( (r: any, student: any) => r.where(`students.${student}`, '==', true) , ref) ).valueChanges({ idField: 'id' }); This is an example of multiple tags... You could easily change this for any non-angular framework. For OR queries (which can't be done with multiple where clauses), see here. A: For example, there's an array look like this const conditionList = [ { key: 'anyField', operator: '==', value: 'any value', }, { key: 'anyField', operator: '>', value: 'any value', }, { key: 'anyField', operator: '<', value: 'any value', }, { key: 'anyField', operator: '==', value: 'any value', }, { key: 'anyField', operator: '==', value: 'any value', }, ] Then you can just put the collection which one you want to set query's conditions into this funcion. function* multipleWhere( collection, conditions = [{ field: '[doc].[field name]', operator: '==', value: '[any value]' }], ) { const pop = conditions.pop() if (pop) { yield* multipleWhere( collection.where(pop.key, pop.operator, pop.value), conditions, ) } yield collection } You will get the collection set query's conditions. A: async yourFunction(){ const Ref0 = firebase.firestore().collection("your_collection").doc(doc.id) const Ref1 = appointmentsRef.where('val1', '==',condition1).get(); const Ref2 = appointmentsRef.where("val2", "!=", condition2).get() const [snapshot_val1, snapshot_val2] = await Promise.all([Ref1, Ref2]); const val1_Array = snapshot_val1.docs; const val2_Array = snapshot_val2.docs; const globale_val_Array = val1_Array .concat(val2_Array ); return globale_val_Array ; } /*Call you function*/ this.checkCurrentAppointment().then(docSnapshot=> { docSnapshot.forEach(doc=> { console.log("Your data with multiple code query:", doc.data()); }); }); A: As CollectionRef does not have query method in firebase web version 9, I modified @abk's answer. 
async getQueryResult(path, options = {}) { /* Example options = { where: [ ["isPublic", "==", true], ["isDeleted", "==", false] ], orderBy: [ ["likes"], ["title", "desc"] ], limit: 30 } */ try { let { where, orderBy, limit } = options; let collectionRef = collection(<firestore>, path); let queryConstraints = []; if (where) { where = where.map((w) => firestore.where(...w)); queryConstraints = [...queryConstraints, ...where]; } if (orderBy) { orderBy = orderBy.map((o) => firestore.orderBy(...o)); queryConstraints = [...queryConstraints, ...orderBy]; } if (limit) { limit = firestore.limit(limit); queryConstraints = [...queryConstraints, limit]; } const query = firestore.query(collectionRef, ...queryConstraints); const querySnapshot = await firestore.getDocs(query); const docList = querySnapshot.docs.map((doc) => { const data = doc.data(); return { id: doc.id, ...data, }; }); return docList; } catch (error) { console.log(error); } } A: Simple function where you can specify the path and an array of filters that you can pass and get you documents, hope it helps. async function filterDoc(path, filters) { if (!path) return []; //define the collection path let q = db.collection(path); //check if there are any filters and add them to the query if (filters.length > 0) { filters.forEach((filter) => { q = q.where(filter.field, filter.operator, filter.value); }); } //get the documents const snapshot = await q.get(); //loop through the documents const data = snapshot.docs.map((doc) => doc.data()); //return the data return data; } //call the function const data = await filterDoc( "categories_collection", [ { field: "status", operator: "==", value: "active", }, { field: "parent_id", operator: "==", value: "kSKpUc3xnKjtpyx8cMJC", }, ] );
Firestore: Multiple conditional where clauses
For example I have dynamic filter for my list of books where I can set specific color, authors and categories. This filter can set multiple colors at once and multiple categories. Book > Red, Blue > Adventure, Detective. How can I add "where" conditionally? firebase .firestore() .collection("book") .where("category", "==", ) .where("color", "==", ) .where("author", "==", ) .orderBy("date") .get() .then(querySnapshot => {...
[ "As you can see in the API docs, the collection() method returns a CollectionReference. CollectionReference extends Query, and Query objects are immutable. Query.where() and Query.orderBy() return new Query objects that add operations on top of the original Query (which remains unmodified). You will have to write code to remember these new Query objects so you can continue to chain calls with them. So, you can rewrite your code like this:\nvar query = firebase.firestore().collection(\"book\")\nquery = query.where(...)\nquery = query.where(...)\nquery = query.where(...)\nquery = query.orderBy(...)\nquery.get().then(...)\n\nNow you can put in conditionals to figure out which filters you want to apply at each stage. Just reassign query with each newly added filter.\nif (some_condition) {\n query = query.where(...)\n}\n\n", "Firebase Version 9\nThe docs do not cover this but here is how to add conditional where clauses to a query\nimport { collection, query, where } from 'firebase/firestore'\n\nconst queryConstraints = []\nif (group != null) queryConstraints.push(where('group', '==', group))\nif (pro != null) queryConstraints.push(where('pro', '==', pro))\nconst q = query(collection(db, 'videos'), ...queryConstraints)\n\nThe source of this answer is a bit of intuitive guesswork and help from my best friend J-E^S^-U-S\n", "With Firebase Version 9 (Jan, 2022 Update):\nYou can filter data with multiple where clauses:\nimport { query, collection, where, getDocs } from \"firebase/firestore\";\n\nconst q = query(\n collection(db, \"products\"),\n where(\"category\", \"==\", \"Computer\"),\n where(\"types\", \"array-contains\", ['Laptop', 'Lenovo', 'Intel']),\n where(\"price\", \"<=\", 1000),\n);\n\nconst docsSnap = await getDocs(q);\n \ndocsSnap.forEach((doc) => {\n console.log(doc.data());\n});\n\n", "In addition to @Doug Stevenson answer. When you have more than one where it is necessary to make it more dynamic as in my case. 
\nfunction readDocuments(collection, options = {}) {\n let {where, orderBy, limit} = options;\n let query = firebase.firestore().collection(collection);\n\n if (where) {\n if (where[0] instanceof Array) {\n // It's an array of array\n for (let w of where) {\n query = query.where(...w);\n }\n } else {\n query = query.where(...where);\n }\n\n }\n\n if (orderBy) {\n query = query.orderBy(...orderBy);\n }\n\n if (limit) {\n query = query.limit(limit);\n }\n\n return query\n .get()\n .then()\n .catch()\n }\n\n// Usage\n// Multiple where\nlet options = {where: [[\"category\", \"==\", \"someCategory\"], [\"color\", \"==\", \"red\"], [\"author\", \"==\", \"Sam\"]], orderBy: [\"date\", \"desc\"]};\n\n//OR\n// A single where\nlet options = {where: [\"category\", \"==\", \"someCategory\"]};\n\nlet documents = readDocuments(\"books\", options);\n\n", "Note that a multiple WHERE clause is inherently an AND operation.\n", "If you're using angular fire, you can just use reduce like so:\nconst students = [studentID, studentID2,...];\n\nthis.afs.collection('classes',\n (ref: any) => students.reduce(\n (r: any, student: any) => r.where(`students.${student}`, '==', true)\n , ref)\n).valueChanges({ idField: 'id' });\n\nThis is an example of multiple tags...\nYou could easily change this for any non-angular framework.\nFor OR queries (which can't be done with multiple where clauses), see here.\n", "For example, there's an array look like this\nconst conditionList = [\n {\n key: 'anyField',\n operator: '==',\n value: 'any value',\n },\n {\n key: 'anyField',\n operator: '>',\n value: 'any value',\n },\n {\n key: 'anyField',\n operator: '<',\n value: 'any value',\n },\n {\n key: 'anyField',\n operator: '==',\n value: 'any value',\n },\n {\n key: 'anyField',\n operator: '==',\n value: 'any value',\n },\n]\n\nThen you can just put the collection which one you want to set query's conditions into this funcion.\nfunction* multipleWhere(\n collection,\n conditions = [{ field: '[doc].[field 
name]', operator: '==', value: '[any value]' }],\n) {\n const pop = conditions.pop()\n if (pop) {\n yield* multipleWhere(\n collection.where(pop.key, pop.operator, pop.value),\n conditions,\n )\n }\n yield collection\n}\n\nYou will get the collection set query's conditions.\n", "async yourFunction(){\n const Ref0 = firebase.firestore().collection(\"your_collection\").doc(doc.id)\n\n const Ref1 = appointmentsRef.where('val1', '==',condition1).get();\n const Ref2 = appointmentsRef.where(\"val2\", \"!=\", condition2).get()\n\n const [snapshot_val1, snapshot_val2] = await Promise.all([Ref1, Ref2]);\n\n \n const val1_Array = snapshot_val1.docs;\n const val2_Array = snapshot_val2.docs;\n\n const globale_val_Array = val1_Array .concat(val2_Array );\n\n return globale_val_Array ;\n }\n\n\n\n/*Call you function*/\nthis.checkCurrentAppointment().then(docSnapshot=> {\n docSnapshot.forEach(doc=> {\n console.log(\"Your data with multiple code query:\", doc.data());\n });\n });\n\n", "As CollectionRef does not have query method in firebase web version 9,\nI modified @abk's answer.\nasync getQueryResult(path, options = {}) {\n /* Example\n options = {\n where: [\n [\"isPublic\", \"==\", true],\n [\"isDeleted\", \"==\", false]\n ],\n orderBy: [\n [\"likes\"],\n [\"title\", \"desc\"]\n ],\n limit: 30\n }\n */\n\n try {\n let { where, orderBy, limit } = options;\n\n let collectionRef = collection(<firestore>, path);\n let queryConstraints = [];\n\n if (where) {\n where = where.map((w) => firestore.where(...w));\n queryConstraints = [...queryConstraints, ...where];\n }\n\n if (orderBy) {\n orderBy = orderBy.map((o) => firestore.orderBy(...o));\n queryConstraints = [...queryConstraints, ...orderBy];\n }\n\n if (limit) {\n limit = firestore.limit(limit);\n queryConstraints = [...queryConstraints, limit];\n }\n\n const query = firestore.query(collectionRef, ...queryConstraints);\n const querySnapshot = await firestore.getDocs(query);\n const docList = querySnapshot.docs.map((doc) => {\n 
const data = doc.data();\n return {\n id: doc.id,\n ...data,\n };\n });\n return docList;\n } catch (error) {\n console.log(error);\n }\n }\n\n", "Simple function where you can specify the path and an array of filters that you can pass and get you documents, hope it helps.\nasync function filterDoc(path, filters) {\n if (!path) return [];\n\n //define the collection path\n let q = db.collection(path);\n\n //check if there are any filters and add them to the query\n if (filters.length > 0) {\n filters.forEach((filter) => {\n q = q.where(filter.field, filter.operator, filter.value);\n });\n }\n\n //get the documents\n const snapshot = await q.get();\n\n //loop through the documents\n const data = snapshot.docs.map((doc) => doc.data());\n\n //return the data\n return data;\n}\n\n//call the function\nconst data = await filterDoc(\n \"categories_collection\",\n [\n {\n field: \"status\",\n operator: \"==\",\n value: \"active\",\n },\n {\n field: \"parent_id\",\n operator: \"==\",\n value: \"kSKpUc3xnKjtpyx8cMJC\",\n },\n ]\n);\n\n" ]
[ 142, 22, 14, 8, 3, 2, 1, 1, 0, 0 ]
[]
[]
[ "firebase", "google_cloud_firestore", "google_cloud_platform", "javascript", "node.js" ]
stackoverflow_0048036975_firebase_google_cloud_firestore_google_cloud_platform_javascript_node.js.txt
Q: MYSQL row sequence with SUM() and ORDER BY I am trying to order my rows by the users' total points. SUM and ORDER BY work correctly, but I also want to add sequence numbers to the rows. When I try to use @row_number I get some numbers, but the sequence is incorrect. The correct num column order should be 1,2,3,4 because I order by total_point, the sum of each user's points. How can I get the correct sequence for the num column? SELECT users.user_id, users.user_hash, (@row_number:=@row_number + 1) AS num, sum(total_point) as total_point FROM (SELECT @row_number:=0) AS t,user_stats LEFT JOIN users on users.user_id = user_stats.stats_user_id WHERE create_date BETWEEN "2020-04-01 00:00:00" AND "2020-04-30 23:59:59" GROUP BY stats_user_id ORDER BY total_point DESC v: mysql 5.7 A: You must sort the totals in a subquery first, and then number the already-sorted rows: SELECT user_id, user_hash, users.user_nick, (@row_number:=@row_number + 1) AS num, total_point FROM (SELECT users.user_id, users.user_hash, users.user_nick, SUM(total_point) AS total_point FROM user_stats LEFT JOIN users ON users.user_id = user_stats.stats_user_id WHERE create_date BETWEEN '2020-04-01 00:00:00' AND '2020-04-30 23:59:59' GROUP BY stats_user_id ORDER BY total_point DESC) t1, (SELECT @row_number:=0) AS t ORDER BY num ASC; A: To generate a sequence of rows in MySQL with a SUM column and order the result by that column, you can use the ROW_NUMBER() window function and a subquery. Here is an example of how to do this: SELECT * FROM ( SELECT ROW_NUMBER() OVER (ORDER BY SUM(value) ASC) AS seq, SUM(value) AS total FROM my_table GROUP BY group_id ) AS t ORDER BY seq ASC; This query will generate a sequence of rows for each group in my_table, with the seq column containing the row number and the total column containing the sum of the value column for that group. The rows will be ordered by the total column in ascending order. The ROW_NUMBER() function generates a sequential integer value for each row in the result set.
The OVER clause specifies that the rows should be ordered by the SUM(value) column in ascending order. The GROUP BY clause groups the rows by the group_id column, and the SUM() function calculates the sum of the value column for each group. The outer SELECT statement wraps the inner query in a subquery so that the computed seq column can be referenced in the final ORDER BY. Note that ROW_NUMBER() and the OVER clause require MySQL 8.0 or later; on MySQL 5.7, as in the question, use the user-variable approach from the first answer.
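The window-function pattern described above can be exercised end-to-end with Python's bundled sqlite3 module (SQLite has supported ROW_NUMBER() and OVER since 3.25). The table and data below are hypothetical stand-ins for user_stats, and the ordering is flipped to DESC to match the question's total_point DESC:

```python
import sqlite3

# Hypothetical data standing in for user_stats: (group_id, value).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (group_id INTEGER, value INTEGER);
    INSERT INTO my_table VALUES (1, 10), (1, 5), (2, 40), (3, 2);
""")

# Number the rows only after grouping and ordering by the summed value,
# mirroring the subquery-then-number structure of the answers above.
rows = conn.execute("""
    SELECT seq, group_id, total
    FROM (
        SELECT ROW_NUMBER() OVER (ORDER BY SUM(value) DESC) AS seq,
               group_id,
               SUM(value) AS total
        FROM my_table
        GROUP BY group_id
    )
    ORDER BY seq
""").fetchall()

print(rows)  # [(1, 2, 40), (2, 1, 15), (3, 3, 2)]
```

Running this numbers the groups 1, 2, 3 in order of descending totals, which is exactly the num column the question asks for.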
MYSQL row sequence with SUM() and ORDER BY
I am trying to order my rows by the users' total points. SUM and ORDER BY work correctly, but I also want to add sequence numbers to the rows. When I try to use @row_number I get some numbers, but the sequence is incorrect. The correct num column order should be 1,2,3,4 because I order by total_point, the sum of each user's points. How can I get the correct sequence for the num column? SELECT users.user_id, users.user_hash, (@row_number:=@row_number + 1) AS num, sum(total_point) as total_point FROM (SELECT @row_number:=0) AS t,user_stats LEFT JOIN users on users.user_id = user_stats.stats_user_id WHERE create_date BETWEEN "2020-04-01 00:00:00" AND "2020-04-30 23:59:59" GROUP BY stats_user_id ORDER BY total_point DESC v: mysql 5.7
[ "You must use the with total sorted rows and give them a number\nSELECT \n user_id,\n user_hash,\n users.user_nick,\n (@row_number:=@row_number + 1) AS num,\n total_point\nFROM\n (SELECT \n users.user_id,\n users.user_hash,\n users.user_nick,\n SUM(total_point) AS total_point\n FROM\n user_stats\n LEFT JOIN users ON users.user_id = user_stats.stats_user_id\n WHERE\n create_date BETWEEN '2020-04-01 00:00:00' AND '2020-04-30 23:59:59'\n GROUP BY stats_user_id\n ORDER BY total_point DESC) t1,\n (SELECT @row_number:=0) AS t\nORDER BY num ASC;\n\n", "To generate a sequence of rows in MySQL with a SUM column and order the result by that column, you can use the ROW_NUMBER() function and a subquery. Here is an example of how to do this:\nSELECT *\nFROM (\n SELECT\n ROW_NUMBER() OVER (ORDER BY SUM(value) ASC) AS seq,\n SUM(value) AS total\n FROM my_table\n GROUP BY group_id\n) AS t\nORDER BY seq ASC;\n\nThis query will generate a sequence of rows for each group in my_table, with the seq column containing the row number and the total column containing the sum of the value column for that group. The rows will be ordered by the total column in ascending order.\nThe ROW_NUMBER() function generates a sequential integer value for each row in the result set. The OVER clause specifies that the rows should be ordered by the SUM(value) column in ascending order. The GROUP BY clause groups the rows by the group_id column, and the SUM() function calculates the sum of the value column for each group.\nThe outer SELECT statement wraps the inner query in a subquery\n" ]
[ 2, 0 ]
[]
[]
[ "mysql", "sequence", "sql_order_by" ]
stackoverflow_0061305442_mysql_sequence_sql_order_by.txt
Q: Nuxt js 3 scroll behavior after page load completed router.options.js file: export default { scrollBehavior(to, from, savedPosition) { return { top: 0, behavior: 'smooth', } }, } I'm using useAsyncData in the page I'm navigating to. So there's a delay between page navigations, because the router waits for the data fetch. The problem is that the page is scrolled immediately instead of waiting for the new page's render to start. So I'm on the old page, with the scrollbar going to the top before the new page appears. A: To fix the issue where the page is scrolled immediately when navigating to a new page with a delay due to the use of the "useAsyncData" hook, you can add a check to your scroll behavior function to see if the new page has already finished rendering before scrolling to the top. Here is an example of how you can modify your scroll behavior function to achieve this: export default { scrollBehavior(to, from, savedPosition) { return new Promise((resolve, reject) => { // Check if the new page has finished rendering. if (to.meta.isReady) { // If the new page has finished rendering, scroll to the top. resolve({ top: 0, behavior: 'smooth', }); } else { // If the new page has not finished rendering, wait for it to finish. to.meta.waitForReady(() => { // When the new page has finished rendering, scroll to the top. resolve({ top: 0, behavior: 'smooth', }); }); } }); }, } This code adds a check to see if the new page has finished rendering, and if it has, it scrolls to the top immediately. If the new page has not finished rendering, it waits for the page to finish rendering and then scrolls to the top.
Nuxt js 3 scroll behavior after page load completed
router.options.js file: export default { scrollBehavior(to, from, savedPosition) { return { top: 0, behavior: 'smooth', } }, } I'm using useAsyncData in the page I'm navigating to. So there's a delay between page navigations, because the router waits for the data fetch. The problem is that the page is scrolled immediately instead of waiting for the new page's render to start. So I'm on the old page, with the scrollbar going to the top before the new page appears.
[ "To fix the issue where the page is scrolled immediately when navigating to a new page with a delay due to the use of the \"useAsyncData\" hook, you can add a check to your scroll behavior function to see if the new page has already finished rendering before scrolling to the top.\nHere is an example of how you can modify your scroll behavior function to achieve this:\nexport default {\n scrollBehavior(to, from, savedPosition) {\n return new Promise((resolve, reject) => {\n // Check if the new page has finished rendering.\n if (to.meta.isReady) {\n // If the new page has finished rendering, scroll to the top.\n resolve({\n top: 0,\n behavior: 'smooth',\n });\n } else {\n // If the new page has not finished rendering, wait for it to finish.\n to.meta.waitForReady(() => {\n // When the new page has finished rendering, scroll to the top.\n resolve({\n top: 0,\n behavior: 'smooth',\n });\n });\n }\n });\n },\n}\n\nThis code adds a check to see if the new page has finished rendering, and if it has, it scrolls to the top immediately. If the new page has not finished rendering, it waits for the page to finish rendering and then scrolls to the top.\n" ]
[ 0 ]
[]
[]
[ "nuxt.js", "nuxtjs3" ]
stackoverflow_0074659559_nuxt.js_nuxtjs3.txt
Q: How does AddMicrosoftIdentityWebApi know how to verify the Bearer token? We have an SPA that uses MSAL to grab an access token, an id token and a refresh token and caches the tokens in local storage for use later. Behind that we have a Web API running dotnet core 6 and I have configured the authentication in the startup Program.cs like so: builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd")); Then I have an appsettings.json file that contains the AzureAD config section. "AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "example.com", "TenantId": "guid", "ClientId": "guid", "Scopes": "access_as_user" } This seems to work fine. The [Authorize] attribute protects the controllers by requiring a token. My question is, without a client secret, how can I trust the access token coming from the SPA? Is there some magic going on here in the AddMicrosoftIdentityWebApi method that verifies the token? I had a quick look in the source but didn't find anything. A: Disclaimer: not an expert in the topic, answering based on my own experience using Teams id tokens. This may not apply to your use case where you're getting actual access tokens client-side Your backend, when configured with Microsoft.Identity.Web, needs to reach out to Microsoft Identity platform (Azure AD) in order to be able to authenticate either the user or the app itself. That's done using a client secret or a client certificate. But the tricky thing here is when does that happen and whether it happens automatically or not. This is my experience using Teams tokens: If you disconnect the server from the Internet, AuthenticationMiddleware will make your requests fail immediately, If you try to call the API with a token generated from another tenant, the request won't get through due to a mismatch in the audiences. 
So there's certainly some protection level when not providing that client secret/certificate, but I can't tell you with confidence up to which point. However: If you don't provide the client secret and try to make use of, let's say, ITokenAcquisition.GetAuthenticationResultForUserAsync() to authenticate on behalf of the user, you will get an exception like this: MSAL.NetCore.4.44.0.0.MsalClientException: ErrorCode: Client_Credentials_Required_In_Confidential_Client_Application Microsoft.Identity.Client.MsalClientException: One client credential type required either: ClientSecret, Certificate, ClientAssertion or AppTokenProvider must be defined when creating a Confidential Client.Only specify one.See https://aka.ms/msal-net-client-credentials. Same for authenticating on behalf of the app with ITokenAcquisition.GetAuthenticationResultForAppAsync(). Once again, my use case seems to be slightly different than yours since I only get a useless-by-itself Teams id token client-side, so user authentication server-side is required and that's when the Teams token <--> Actually useful tokens exchanges happen with Microsoft.Identity.Web's help.
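To make the trust model in the answer concrete: bearer-token middleware such as Microsoft.Identity.Web validates the token's signature against the tenant's published public signing keys (fetched from the OpenID Connect discovery document), then checks claims such as audience and expiry; no client secret is involved in that path. The self-contained sketch below illustrates those same checks. One loud assumption: it signs with a shared HMAC key purely for brevity, whereas Azure AD uses an RSA key pair, so make_token here merely stands in for the identity provider, and the header "alg" check is omitted:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_token(claims: dict, key: bytes) -> str:
    # Stand-in for the identity provider. Azure AD signs with an RSA
    # private key; a shared HMAC key keeps this sketch dependency-free.
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def validate_token(token: str, key: bytes, audience: str) -> dict:
    # The same checks JwtBearer middleware performs: signature
    # (against the issuer's published keys), audience, and expiry.
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("aud") != audience:
        raise ValueError("token issued for a different audience")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

key = b"tenant-signing-key"  # hypothetical key material
token = make_token({"aud": "my-client-id", "exp": time.time() + 3600}, key)
print(validate_token(token, key, "my-client-id")["aud"])  # my-client-id
```

Because verification only needs the issuer's public key material plus claim checks, the API can reject forged or mis-addressed tokens without ever holding a secret shared with the SPA.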
How does AddMicrosoftIdentityWebApi know how to verify the Bearer token?
We have an SPA that uses MSAL to grab an access token, an id token and a refresh token and caches the tokens in local storage for use later. Behind that we have a Web API running dotnet core 6 and I have configured the authentication in the startup Program.cs like so: builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd")); Then I have an appsettings.json file that contains the AzureAD config section. "AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "example.com", "TenantId": "guid", "ClientId": "guid", "Scopes": "access_as_user" } This seems to work fine. The [Authorize] attribute protects the controllers by requiring a token. My question is, without a client secret, how can I trust the access token coming from the SPA? Is there some magic going on here in the AddMicrosoftIdentityWebApi method that verifies the token? I had a quick look in the source but didn't find anything.
[ "Disclaimer: not an expert in the topic, answering based on my own experience using Teams id tokens. This may not apply to your use case where you're getting actual access tokens client-side\nYour backend, when configured with Microsoft.Identity.Web, needs to reach out to Microsoft Identity platform (Azure AD) in order to be able to authenticate either the user or the app itself. That's done using a client secret or a client certificate. But the tricky thing here is when does that happen and whether it happens automatically or not.\nThis is my experience using Teams tokens:\n\nIf you disconnect the server from the Internet, AuthenticationMiddleware will make your requests fail immediately,\n\nIf you try to call the API with a token generated from another tenant, the request won't get through due to a mismatch in the audiences.\n\n\nSo there's certainly some protection level when not providing that client secret/certificate, but I can't tell you with confidence up to which point.\nHowever:\n\nIf you don't provide the client secret and try to make use of, let's say, ITokenAcquisition.GetAuthenticationResultForUserAsync() to authenticate on behalf of the user, you will get an exception like this:\n\nMSAL.NetCore.4.44.0.0.MsalClientException:\n ErrorCode: Client_Credentials_Required_In_Confidential_Client_Application\nMicrosoft.Identity.Client.MsalClientException: One client credential type required either: ClientSecret, Certificate, ClientAssertion or AppTokenProvider must be defined when creating a Confidential Client.Only specify one.See https://aka.ms/msal-net-client-credentials.\n\n\nSame for authenticating on behalf of the app with ITokenAcquisition.GetAuthenticationResultForAppAsync().\n\nOnce again, my use case seems to be slightly different than yours since I only get a useless-by-itself Teams id token client-side, so user authentication server-side is required and that's when the Teams token <--> Actually useful tokens exchanges happen with 
Microsoft.Identity.Web's help.\n" ]
[ 1 ]
[]
[]
[ ".net_core", "asp.net_core_webapi", "msal" ]
stackoverflow_0074658061_.net_core_asp.net_core_webapi_msal.txt
Q: How do I enable RBAC access for a group to view Function App "Log Stream" in Azure? How do I enable RBAC access for a group to view Function App "Log Stream" in Azure? Basically, I have an AD group with an assignee ID. I don't know how to find the "scope id" for the "Log Stream". My users have contributor access to the Function App itself, but the "Log Stream" is not permitted. I configured Terraform like so to enable the "Application Insights Contributor" scope for my users, but they still cannot see the "Log Stream". So, I am just trying to identify which of the RBAC roles encompasses "Log Stream" viewing. Does anyone know? resource "azurerm_role_assignment" "azurerm_ai_contributor" { count = var.environment != "prod" ? 1 : 0 principal_id = local.dev_team_object_id role_definition_name = "Application Insights Component Contributor" scope = azurerm_application_insights.i.id } I'm guessing: is there a way to give "Reader" permission to all of "Monitoring" within my function app? A: Provide the Log Analytics Reader role to the desired group, scoped to the function app.
How do I enable RBAC access for a group to view Function App "Log Stream" in Azure?
How do I enable RBAC access for a group to view Function App "Log Stream" in Azure? Basically, I have an AD group with an assignee ID. I don't know how to find the "scope id" for the "Log Stream". My users have contributor access to the Function App itself, but the "Log Stream" is not permitted. I configured Terraform like so to enable the "Application Insights Contributor" scope for my users, but they still cannot see the "Log Stream". So, I am just trying to identify which of the RBAC roles encompasses "Log Stream" viewing. Does anyone know? resource "azurerm_role_assignment" "azurerm_ai_contributor" { count = var.environment != "prod" ? 1 : 0 principal_id = local.dev_team_object_id role_definition_name = "Application Insights Component Contributor" scope = azurerm_application_insights.i.id } I'm guessing: is there a way to give "Reader" permission to all of "Monitoring" within my function app?
[ "Provide Log Analytics Reader role to the desired group scoped to the function app.\n" ]
[ 0 ]
[]
[]
[ "azure_active_directory", "azure_rbac" ]
stackoverflow_0074406168_azure_active_directory_azure_rbac.txt
Q: Python monkeypatch.setattr() with pytest fixture at module scope First of all, the relevant portion of my project directory looks like: └── my_package ├── my_subpackage │ ├── my_module.py | └── other_module.py └── tests └── my_subpackage └── unit_test.py I am writing some tests in unit_test.py that require mocking of an external resource at the module level. I would like to use a pytest fixture with module level scope and pytest monkeypatch to acomplish this. Here is a snippet of what I have tried in unit_test.py: import unittest.mock as mock import pytest from my_package.my_subpackage.my_module import MyClass @pytest.fixture(scope='function') def external_access(monkeypatch): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeypatch.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something) def test_get_something(external_access): instance = MyClass() instance.get_something() assert instance.data == 'Mock was used.' Everything works just fine. But when I try to change line 8 from @pytest.fixture(scope='function') to @pytest.fixture(scope='module'), I get the following error. ScopeMismatch: You tried to access the 'function' scoped fixture 'monkeypatch' with a 'module' scoped request object, involved factories my_package\tests\unit_test.py:7: def external_access(monkeypatch) ..\..\Anaconda3\envs\py37\lib\site-packages\_pytest\monkeypatch.py:20: def monkeypatch() Does anyone know how to monkeypatch with module level scope? In case anyone wants to know, this is what the two modules look like as well. 
my_module.py from my_package.my_subpackage.other_module import ExternalAccess class MyClass(object): def __init__(self): self.external_access = ExternalAccess() self.data = None def get_something(self): self.data = self.external_access.get_something() other_module.py class ExternalAccess(object): def get_something(self): return 'Call to external resource.' A: I found this issue which guided the way. I needed to make a few changes to the solution for module level scope. unit_test.py now looks like this: import unittest.mock as mock import pytest from my_package.my_subpackage.my_module import MyClass @pytest.fixture(scope='module') def monkeymodule(): from _pytest.monkeypatch import MonkeyPatch mpatch = MonkeyPatch() yield mpatch mpatch.undo() @pytest.fixture(scope='module') def external_access(monkeymodule): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeymodule.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something) def test_get_something(external_access): instance = MyClass() instance.get_something() assert instance.data == 'Mock was used.' A: This has gotten simpler as of pytest 6.2, thanks to the pytest.MonkeyPatch class and context-manager (https://docs.pytest.org/en/6.2.x/reference.html#pytest.MonkeyPatch). Building off Rich's answer, the monkeymodule fixture can now be written as follows: @pytest.fixture(scope='module') def monkeymodule(): with pytest.MonkeyPatch.context() as mp: yield mp @pytest.fixture(scope='function') def external_access(monkeymodule): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeymodule.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something)
Python monkeypatch.setattr() with pytest fixture at module scope
First of all, the relevant portion of my project directory looks like: └── my_package ├── my_subpackage │ ├── my_module.py | └── other_module.py └── tests └── my_subpackage └── unit_test.py I am writing some tests in unit_test.py that require mocking of an external resource at the module level. I would like to use a pytest fixture with module level scope and pytest monkeypatch to acomplish this. Here is a snippet of what I have tried in unit_test.py: import unittest.mock as mock import pytest from my_package.my_subpackage.my_module import MyClass @pytest.fixture(scope='function') def external_access(monkeypatch): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeypatch.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something) def test_get_something(external_access): instance = MyClass() instance.get_something() assert instance.data == 'Mock was used.' Everything works just fine. But when I try to change line 8 from @pytest.fixture(scope='function') to @pytest.fixture(scope='module'), I get the following error. ScopeMismatch: You tried to access the 'function' scoped fixture 'monkeypatch' with a 'module' scoped request object, involved factories my_package\tests\unit_test.py:7: def external_access(monkeypatch) ..\..\Anaconda3\envs\py37\lib\site-packages\_pytest\monkeypatch.py:20: def monkeypatch() Does anyone know how to monkeypatch with module level scope? In case anyone wants to know, this is what the two modules look like as well. my_module.py from my_package.my_subpackage.other_module import ExternalAccess class MyClass(object): def __init__(self): self.external_access = ExternalAccess() self.data = None def get_something(self): self.data = self.external_access.get_something() other_module.py class ExternalAccess(object): def get_something(self): return 'Call to external resource.'
[ "I found this issue which guided the way. I needed to make a few changes to the solution for module level scope. unit_test.py now looks like this:\nimport unittest.mock as mock\n\nimport pytest\n\nfrom my_package.my_subpackage.my_module import MyClass\n\n\[email protected](scope='module')\ndef monkeymodule():\n from _pytest.monkeypatch import MonkeyPatch\n mpatch = MonkeyPatch()\n yield mpatch\n mpatch.undo()\n\[email protected](scope='module')\ndef external_access(monkeymodule):\n external_access = mock.MagicMock()\n external_access.get_something = mock.MagicMock(\n return_value='Mock was used.')\n monkeymodule.setattr(\n 'my_package.my_subpackage.my_module.ExternalAccess.get_something',\n external_access.get_something)\n\n\ndef test_get_something(external_access):\n instance = MyClass()\n instance.get_something()\n assert instance.data == 'Mock was used.'\n\n", "This has gotten simpler as of pytest 6.2, thanks to the pytest.MonkeyPatch class and context-manager (https://docs.pytest.org/en/6.2.x/reference.html#pytest.MonkeyPatch). Building off Rich's answer, the monkeymodule fixture can now be written as follows:\[email protected](scope='module')\ndef monkeymodule():\n with pytest.MonkeyPatch.context() as mp:\n yield mp\n\[email protected](scope='function')\ndef external_access(monkeymodule):\n external_access = mock.MagicMock()\n external_access.get_something = mock.MagicMock(\n return_value='Mock was used.')\n monkeymodule.setattr(\n 'my_package.my_subpackage.my_module.ExternalAccess.get_something',\n external_access.get_something)\n\n" ]
[ 17, 0 ]
[]
[]
[ "fixtures", "pytest", "python", "scope" ]
stackoverflow_0053963822_fixtures_pytest_python_scope.txt
Q: PHP:Internal Server Error using WAMP (at startup) I have a problem with PHP. It displays the error below. It works fine on one computer, but when I tried to run it on another computer it displays the error below. I think I have a problem with my WampServer 2.0 configuration. Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. A: It's due to mod_rewrite. WAMP has this option commented out by default. You need to go to the WAMP icon in the notification area, Apache, httpd.conf, search for mod_rewrite, and uncomment it. After that RESTART WAMP AND YOU ARE DONE! A: This is because the mod_rewrite module is disabled in your Apache configuration: If your website is hosted on a local machine, you can simply open httpd.conf in Apache's conf folder and search for "mod_rewrite". If the mod_rewrite line has a # in front of it, just remove the #, save the file, and restart Apache. If your website is hosted on a shared hosting server, you may not have privileges to make changes in Apache's configuration, so you have to ask your hosting administrator to enable the mod_rewrite module for you. It is already enabled by almost every shared hosting provider. A: This is a mod_rewrite problem; just enable mod_rewrite in your Apache server or ask your website hosting company to enable it for you. This will make your site work. A: Although everyone says the same, I want to add an answer based on some information I collected from this site. This kind of error happens if the Apache mod_rewrite feature of your server is not enabled. By default, the WAMP server does not load the Apache mod_rewrite module; it is disabled. You have to enable this option to load the module.
There is a line about this in the WAMP Apache configuration file, but it is commented out by default. To enable the mod_rewrite feature, follow these instructions: Look at the WAMP icon in the notification area on the system tray (near the clock in Windows OS). Click on the WAMP icon. Go to Apache > httpd.conf. The configuration file will be opened in Notepad or your default text editor. Search for the text "mod_rewrite"; you will find the line "#LoadModule rewrite_module modules/mod_rewrite.so". Uncomment this line by removing the hash "#" from the beginning of the line. Save the file. Restart WAMP Server by clicking "Restart All Services" from the WAMP menu. YOU ARE DONE! A: I had this problem but the rewrite module was already enabled. There was a Header command in the shop's .htaccess file that was causing it. Enabling the headers_module in Apache fixed it. Thought this might help someone. A: For anyone finding this question when mod_rewrite is ALREADY enabled... Check that WAMP hasn't put an unwanted .htaccess file in the root of the C:\ drive. This happened to me, I think after experimenting with different settings in the httpd.conf file. After I'd reverted my httpd.conf settings I started seeing the Internal Server Error. Deleting the .htaccess file fixed the problem. A: Check the WampServer log files; this issue is related to something in your Apache configuration. A: Definitely do what hunzaboy suggests first. Then, if you're still getting the error: I found that one of the directives in my httpd-vhosts.conf file was causing the same error to persist even after taking his advice. I commented out the line "Require all granted" (which worked using a newer version of Apache) and everything was fine. A: This line must be uncommented: LoadModule rewrite_module modules/mod_rewrite.so A: If you find mod_rewrite already uncommented and you still get the same error, go and check whether the headers_module is enabled. For that, you can follow the steps below: Left-click on the WAMP icon in the notification area.
Hover over Apache. Hover over Apache Modules. Click on headers_module. Now wait for the WAMP server to restart and check if it works. A: I had the same issue, but the problem wasn't on all my websites, only on one. So just make sure you don't have the following code in your .htaccess file: RewriteEngine On RewriteCond %{HTTPS} !=on RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301,NE] Header always set Content-Security-Policy "upgrade-insecure-requests;"
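All of the mod_rewrite fixes above come down to the same one-character change: removing the leading `#` from the LoadModule line in httpd.conf. As a rough, language-neutral illustration (not taken from any answer above, and the config path on a real WAMP install is your own), this small Python sketch performs that edit on config text:

```python
# Sketch: uncomment the mod_rewrite LoadModule line in Apache config text.
# On a real WAMP install the file would be something like
# C:\wamp\bin\apache\<version>\conf\httpd.conf (path is an assumption).

def enable_mod_rewrite(conf_text: str) -> str:
    """Return conf_text with '#LoadModule rewrite_module ...' uncommented."""
    out_lines = []
    for line in conf_text.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("#") and "LoadModule rewrite_module" in stripped:
            # Drop the leading '#' (and any whitespace right after it).
            line = stripped.lstrip("#").lstrip()
        out_lines.append(line)
    return "\n".join(out_lines)

if __name__ == "__main__":
    sample = "#LoadModule rewrite_module modules/mod_rewrite.so"
    print(enable_mod_rewrite(sample))
```

After a change like this, Apache still has to be restarted ("Restart All Services" in the WAMP menu) for the module to load.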
PHP: Internal Server Error using WAMP (at startup)
I have a problem with PHP. It shows the error below. The site works well on one computer, but when I tried to run it on another computer it shows this error. I think there is a problem with my WampServer 2.0 configuration. Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
[ "Its due to mod_rewrite. WAMP had commented this option by default.\nYou need to go to WAMP icon in notification area, Apache, httpd.config\nsearch for mod_rewrite, and uncomment it.\nAfter that RESTART WAMP AND YOU ARE DONE! \n", "This is because mod_rewrite module is disabled in your Apache's configuration:\n\nIf your website is hosted at local machine you can simply open httpd.conf in Apaches's conf folder and search for \"mod_rewrite\". If mod_rewrite line is having a # in front of it, just remove this #, save file, and restart Apache.\nIf your website is hosted at shared hosting server then you may not have privileges to make changes in Apache's configuration. So you have to ask your hosting administrator to enable mod_rewrite module for you. This is already enabled by almost every shared hosting provider.\n\n", "This is mod_rewrite problem, just enable mod_rewrite in your apache server or tell your website hosting company to enable it for you. This will make your site work.\n", "Although everyone says the same, I want to add the answer based on some information I collected from this site.\nThis kind of error is happened if apache mod_rewrite feature of your server is not enabled. By default WAMP server does not load this Apache Mod Rewrite module. It is disabled by default. You have to enable a this option to load this module. There is line in WAMP Apache configuration file about this but that is commented by default. To enable this mod_rewrite feature follow this instruction.\n\nLook at Wamp icon in notification area on system tray (near Clock in\nWindows OS) \nClick on the WAMP icon\nGo to Apache>httpd.config \nA Configuration file will be opened in “Note Pad” or your default text\neditor. \nSearch the text “mod_rewrite” , you will find a line\n“#LoadModule rewrite_module modules/mod_rewrite.so“ \nUncomment this line by removing the Hash “#” from the beginning of the line. 
\nSave the file Restart WAMP Server by clicking “Restart All Services” from\nWAMP menu. YOU ARE DONE!\n\n", "I had this problem but the rewrite module was already enabled. There is a header command in the shops .htaccess file that was causing it. Enabling the headers_module in apache fixed it. Thought this might help someone.\n", "For anyone finding this question, and mod_rewrite is ALREADY enabled...\nCheck that WAMP hasn't put an unwanted .htaccess file in the root of the C:\\ drive. \nThis happened to me, I think after experimenting with different settings in the httpd.conf file.\nAfter I'd reverted my httpd.conf settings I started seeing the Internal Server Error. Deleting the .htaccess file fixed the problem.\n", "Check the Wampserver log files, this issue is related to something in your Apache configuration.\n", "Definitely do what hunzaboy suggests first. Then if you're still getting the error, I found that one of my directives in my httpd-vhosts.conf file was causing the same error to persist even after taking his advice. I commented the line: \"Require all granted\" (which worked using a newer version of Apache) and everything was fine.\n", "this must be commented in:\nLoadModule rewrite_module modules/mod_rewrite.so\n", "If you find mod_rewrite already uncommented and still you get the same error. Go and check if the headers_module is enabled or not. For that you can follow below steps.\n\nLeft click on wamp icon in the notification area.\nHover to Apache\nHover to Apache Modules\nClick on headers_module\n\nNow wait for wamp server to restart and check If it works.\n", "I had same issue, but the problem wasn't on all my websites, it was only on 1.\nSo just make sure you don't have following code in your htaccess file:\nRewriteEngine On\nRewriteCond %{HTTPS} !=on\nRewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301,NE]\nHeader always set Content-Security-Policy \"upgrade-insecure-requests;\"\n" ]
[ 40, 6, 4, 3, 3, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "php", "wampserver" ]
stackoverflow_0003540210_php_wampserver.txt
Q: Should I include LifecycleOwner in ViewModel? LifecycleOwner is currently needed in order for me to create an observer. I have code which creates an Observer in the ViewModel, so I attach the LifecycleOwner when retrieving the ViewModel in my Fragment. According to Google's documentation: Caution: A ViewModel must never reference a view, Lifecycle, or any class that may hold a reference to the activity context. Did I break that warning, and if I did, how do you recommend I restructure the creation of an observer for returning data? I only made an observer, so I'm wondering if it's still valid, since Google's documentation also says: ViewModel objects can contain LifecycleObservers, such as LiveData objects. MainFragment private lateinit var model: MainViewModel /** * Observer for our ViewModel IpAddress LiveData value. * @see Observer.onChanged * */ private val ipObserver = Observer<String> { textIp.text = it hideProgressBar() } override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) model = ViewModelProviders.of(this).get(MainViewModel::class.java) model.attach(this) } override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? = inflater?.inflate(R.layout.fragment_main, container, false) override fun onViewCreated(view: View?, savedInstanceState: Bundle?) 
{ super.onViewCreated(view, savedInstanceState) buttonRetrieveIp.setOnClickListener { showProgressBar() model.fetchMyIp().observe(this, ipObserver) //Here we attach our ipObserver } } override fun showProgressBar() { textIp.visibility = View.GONE progressBar.visibility = View.VISIBLE } override fun hideProgressBar() { progressBar.visibility = View.GONE textIp.visibility = View.VISIBLE } MainViewModel private var ipAddress = MutableLiveData<String>() private lateinit var owner: LifecycleOwner fun attach(fragment: MainFragment) { owner = fragment } /** * For more information regarding Fuel Request using Fuel Routing and Live Data Response. * @see <a href="https://github.com/kittinunf/Fuel#routing-support">Fuel Routing Support</a> * @see <a href="https://github.com/kittinunf/Fuel#livedata-support">Fuel LiveData Support</a> * */ fun fetchMyIp(): LiveData<String> { Fuel.request(IpAddressApi.MyIp()) .liveDataResponse() .observe(owner, Observer { if (it?.first?.statusCode == 200) {//If you want you can add a status code checker here. it.second.success { ipAddress.value = Ip.toIp(String(it))?.ip } } }) return ipAddress } Update 1: Improved ViewModel thanks to @pskink's suggestion to use Transformations. private lateinit var ipAddress:LiveData<String> /** * Improved ViewModel since January 23, 2018, credits to <a href="https://stackoverflow.com/users/2252830/pskink">pskink</a> * * For more information regarding Fuel Request using Fuel Routing and Live Data Response. * @see <a href="https://github.com/kittinunf/Fuel#routing-support">Fuel Routing Support</a> * @see <a href="https://github.com/kittinunf/Fuel#livedata-support">Fuel LiveData Support</a> * */ fun fetchMyIp(): LiveData<String> { ipAddress = Transformations.map(Fuel.request(IpAddressApi.MyIp()).liveDataResponse(), { var ip:String? = "" it.second.success { ip = Ip.toIp(String(it))?.ip } ip }) return ipAddress } A: No. 
If you wish to observe changes of some LiveData inside your ViewModel you can use observeForever(), which doesn't require a LifecycleOwner. Remember to remove this observer in the ViewModel's onCleared() callback: val observer = Observer<Int> { value -> // Do something with "value" } ... liveData.observeForever(observer) ... override fun onCleared() { liveData.removeObserver(observer) super.onCleared() } Very good reference with examples of observing LiveData. A: Assumptions: Fuel refers to your ViewModel Fuel.request(IpAddressApi.MyIp()) is a method in your ViewModel IpAddressApi.MyIp() does not have a reference to your LifecycleOwner. If all are true, then you are not violating it. So long as you are not passing a LifecycleOwner reference to the ViewModel you are safe! LifecycleOwner - relates to an Activity or Fragment as it owns the various Android lifecycle callbacks, e.g. onCreate, onPause, onDestroy, etc. A: In Kotlin this can be something like: val mObserver = Observer<List<QueueTabData>> { items -> // do something with items } A: Should I include LifecycleOwner in ViewModel? Ans: No The purpose of a ViewModel is to hold UI data so that it survives configuration changes. And the reason for the following Caution: A ViewModel must never reference a view, Lifecycle, or any class that may hold a reference to the activity context. is because the ViewModel survives configuration changes whereas activities don't. They are destroyed and re-created on configuration change. So, if you have any activity context references in a ViewModel they would refer to the previous activity that got destroyed. This leads to a memory leak, and hence it is not recommended. Furthermore, if you have repositories that act as your data source, avoid using LiveData for that purpose, as mentioned here in the paragraph just above the code block. This is because LiveData is handled on the main thread, which may lead to UI freezes. 
We should use Kotlin flows for such purposes.
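The observe-then-clean-up pattern from the first answer is not Android-specific. Here is a rough, language-agnostic sketch of the same idea, written in Python; every name in it (SimpleLiveData, on_cleared, and so on) is invented for illustration and is not an Android API:

```python
# Minimal sketch of a LiveData-style "observeForever" plus cleanup.
# Class and method names are invented stand-ins, not Android APIs.

class SimpleLiveData:
    def __init__(self):
        self._observers = []
        self._value = None

    def observe_forever(self, observer):
        # No lifecycle owner involved -- the caller is responsible
        # for removing the observer, just as with observeForever().
        self._observers.append(observer)

    def remove_observer(self, observer):
        self._observers.remove(observer)

    def set_value(self, value):
        self._value = value
        for observer in list(self._observers):
            observer(value)


class SimpleViewModel:
    def __init__(self, live_data):
        self.live_data = live_data
        self.last_seen = None
        self._observer = self._on_changed
        live_data.observe_forever(self._observer)

    def _on_changed(self, value):
        self.last_seen = value

    def on_cleared(self):
        # Mirror of ViewModel.onCleared(): stop observing to avoid leaks.
        self.live_data.remove_observer(self._observer)
```

The key point the sketch shows: because no lifecycle owner ever touches the view-model, cleanup has to happen explicitly in the clear hook, which is exactly what the accepted answer's onCleared() override does.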
Should I include LifecycleOwner in ViewModel?
LifecycleOwner is currently needed in order for me to create an observer. I have code which creates an Observer in the ViewModel so I attach the LifecycleOwner when retrieving the ViewModel in my Fragment. According to Google's documentation. Caution: A ViewModel must never reference a view, Lifecycle, or any class that may hold a reference to the activity context. Did I break that warning and If I did, what way do you recommend me to move my creation of an observer for data return? I only made an observer so I'm wondering if it's still valid. Since also in Google's documentation it also said. ViewModel objects can contain LifecycleObservers, such as LiveData objects. MainFragment private lateinit var model: MainViewModel /** * Observer for our ViewModel IpAddress LiveData value. * @see Observer.onChanged * */ private val ipObserver = Observer<String> { textIp.text = it hideProgressBar() } override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) model = ViewModelProviders.of(this).get(MainViewModel::class.java) model.attach(this) } override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? = inflater?.inflate(R.layout.fragment_main, container, false) override fun onViewCreated(view: View?, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState) buttonRetrieveIp.setOnClickListener { showProgressBar() model.fetchMyIp().observe(this, ipObserver) //Here we attach our ipObserver } } override fun showProgressBar() { textIp.visibility = View.GONE progressBar.visibility = View.VISIBLE } override fun hideProgressBar() { progressBar.visibility = View.GONE textIp.visibility = View.VISIBLE } MainViewModel private var ipAddress = MutableLiveData<String>() private lateinit var owner: LifecycleOwner fun attach(fragment: MainFragment) { owner = fragment } /** * For more information regarding Fuel Request using Fuel Routing and Live Data Response. 
* @see <a href="https://github.com/kittinunf/Fuel#routing-support">Fuel Routing Support</a> * @see <a href="https://github.com/kittinunf/Fuel#livedata-support">Fuel LiveData Support</a> * */ fun fetchMyIp(): LiveData<String> { Fuel.request(IpAddressApi.MyIp()) .liveDataResponse() .observe(owner, Observer { if (it?.first?.statusCode == 200) {//If you want you can add a status code checker here. it.second.success { ipAddress.value = Ip.toIp(String(it))?.ip } } }) return ipAddress } Update 1: Improved ViewModel thanks to @pskink suggestion for using Transformations. private lateinit var ipAddress:LiveData<String> /** * Improved ViewModel since January 23, 2018 credits to <a href="https://stackoverflow.com/users/2252830/pskink">pskink</a> <a href=" * * For more information regarding Fuel Request using Fuel Routing and Live Data Response. * @see <a href="https://github.com/kittinunf/Fuel#routing-support">Fuel Routing Support</a> * @see <a href="https://github.com/kittinunf/Fuel#livedata-support">Fuel LiveData Support</a> * */ fun fetchMyIp(): LiveData<String> { ipAddress = Transformations.map(Fuel.request(IpAddressApi.MyIp()).liveDataResponse(), { var ip:String? = "" it.second.success { ip = Ip.toIp(String(it))?.ip } ip }) return ipAddress }
[ "No. If you wish to observe changes of some LiveData inside your ViewModel you can use observeForever() which doesn't require LifecycleOwner.\nRemember to remove this observer on ViewModel's onCleared() event:\nval observer = new Observer() {\n override public void onChanged(Integer integer) {\n //Do something with \"integer\"\n }\n}\n\n...\nliveData.observeForever(observer);\n\n...\noverride fun onCleared() {\n liveData.removeObserver(observer) \n super.onCleared()\n}\n\nVery good reference with examples of observe LiveData.\n", "Assumptions:\n\nFuel refers to your ViewModel\nFuel.request(IpAddressApi.MyIp()) is a method in your ViewModel\nIpAddressApi.MyIp() does not have a reference to your LifecycleOwner, \n\nIf all are true,then you are not violating it. So long as you are not passing a LifecycleOwner reference to the ViewModel you are safe!\nLifecycleOwner - relates to an Activity or Fragment as it owns the various Android Lifecycles e.g onCreate, onPause, onDestroy etc \n", "in Kotlin this can be something like:\nval mObserver = Observer<List<QueueTabData>> { myString->\n// do something with myString\n}\n\n", "\nShould I include LifecycleOwner in ViewModel?\nAns: No\n\nThe purpose of viewmodel is to hold UI data, so that it survives across configuration changes.\nAnd the reason for the following\n\nCaution: A ViewModel must never reference a view, Lifecycle, or any class that may hold a reference to the activity context.\n\nIs because the viewmodel survives configuration changes whereas activities don't. They are destroyed and re-created on configuration change. So, if you have any activity context references in viewmodel they would refer to previous activity that got destroyed.\nSo this leads to memory leak. 
And hence it is not recommended.\nFurthermore,\nIf you have repositories that act as your data source we should avoid using LiveData for such purposes as mentioned here in the paragraph just above the code block.\nThis is because LiveData are handled on MainThread that may lead to UI freeze.\nWe should use kotlin flows for such purposes.\n" ]
[ 65, 2, 0, 0 ]
[]
[]
[ "android", "android_viewmodel", "mvvm" ]
stackoverflow_0048396092_android_android_viewmodel_mvvm.txt
Q: Random number generator generating the same number using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Discord.Interactions; namespace StorytimeBot.v2.Modules { public class Commands : InteractionModuleBase<SocketInteractionContext> { [SlashCommand("roll", "Roll A D20!")] public async Task Roll(int xD20, int bonus) { string diceRolls = ""; for (int i = 1; i <= xD20; i++) { Random r = new Random(); int dc = r.Next(1, 21); diceRolls += $"Roll {i}: {dc} + {bonus} = {dc + bonus}\n"; } await RespondAsync(diceRolls); } This is the snippet of the command that is not doing what is expected. The goal is to put all the strings made in the loop into one variable then print it in discord. However, the variable "dc" does not keep the individual random iterations, but instead all the numbers come out the same. I find this odd and have no clue why it doesn't work when the variable "i" increments properly. Mind that no errors or warnings pop up in the editor. I have tried multiple solutions like using arrays, using ReplyAsync at the end, and lists, but dc still only keeps one value. There is a way for the code to work by putting ReplyAsync in every iteration of the loop and changing the += to =, but that prints multiple messages into discord for each iteration. Making it slow, so I would like to see if there is anything I can do to avoid having to use the slow solution. A: If I am understanding your question correctly, the problem is that the random number generator gives the same numbers every time, right? If so, it's best to take Random r = new Random(); out of the for loop and place it before the loop, so a single instance is created once and reused. On .NET Framework the parameterless Random() constructor seeds from the system clock, so instances created in quick succession inside a tight loop all start from the same seed and therefore produce identical values.
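The same-seed effect is easy to reproduce with any PRNG. As an illustration in Python (explicit seeds stand in for "constructed at the same clock tick", which is what happens to new Random() in a tight loop on .NET Framework):

```python
import random

# Two generators "created at the same instant" (same seed) produce
# the same first roll -- the bug in the question's loop.
same_seed_rolls = [random.Random(1234).randint(1, 20) for _ in range(5)]

# One generator created once and reused walks through its sequence
# instead of restarting it -- the fix in the answer.
rng = random.Random(1234)
reused_rolls = [rng.randint(1, 20) for _ in range(5)]

print(same_seed_rolls)  # five identical values
print(reused_rolls)     # values drawn from one ongoing sequence
```

In newer .NET versions a shared, thread-safe instance (rather than one per call) is the usual way to avoid this class of bug entirely.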
Random number generator generating the same number
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Discord.Interactions; namespace StorytimeBot.v2.Modules { public class Commands : InteractionModuleBase<SocketInteractionContext> { [SlashCommand("roll", "Roll A D20!")] public async Task Roll(int xD20, int bonus) { string diceRolls = ""; for (int i = 1; i <= xD20; i++) { Random r = new Random(); int dc = r.Next(1, 21); diceRolls += $"Roll {i}: {dc} + {bonus} = {dc + bonus}\n"; } await RespondAsync(diceRolls); } This is the snippet of the command that is not doing what is expected. The goal is to put all the strings made in the loop into one variable then print it in discord. However, the variable "dc" does not keep the individual random iterations, but instead all the numbers come out the same. I find this odd and have no clue why it doesn't work when the variable "i" increments properly. Mind that no errors or warnings pop up in the editor. I have tried multiple solutions like using arrays, using ReplyAsync at the end, and lists, but dc still only keeps one value. There is a way for the code to work by putting ReplyAsync in every iteration of the loop and changing the += to =, but that prints multiple messages into discord for each iteration. Making it slow, so I would like to see if there is anything I can do to avoid having to use the slow solution.
[ "if i am understanding your question correctly the problem is that the random number generator gives the same numbers every time right? If so i think it's best to take the Random r = new Random(); out of the for loop and place it before it.\n" ]
[ 2 ]
[]
[]
[ "c#" ]
stackoverflow_0074659683_c#.txt
Q: Pushing registers to memory stack When we are pushing a register to the memory stack, what does it enable us to do? Does it simply help us perform operations which don't fit in the AL and AH registers? I had to write a program for a computer with the 8086 processor, and I had to find the equivalent time in hours of 35600 seconds, so the AL and AH registers were too small to perform the division 35600/3600 A: There's quite a few things you can do with push: Storing a value so that you can use it later. On the 8086, you can only bit shift or rotate by either 1 or the value in the CL register. This can become a problem if you're using CX as a loop counter but you're trying to bit-shift during your loop. So you can fix it with this: foo: push cx mov al,[si] mov cl,2 shl al,cl mov [di],al pop cx loop foo Reverse the order of letters in a string. .data myString db "Hello",0 .code mov si,offset myString mov ax,0 mov cx,0 cld stringReverse: lodsb ;read the letter (and inc si to the next letter) cmp al,0 ;is it zero jz nowReverse ;if so, reverse the string push ax ;push the letter onto the stack inc cx ;increment our counter jmp stringReverse nowReverse: mov di,offset myString ;reload a pointer to the first letter in myString (stosb writes through ES:DI) loop_nowReverse: jcxz done ;once we've written all the letters, stop. pop ax ;get the letters back in reverse order stosb ;overwrite the old string dec cx jmp loop_nowReverse done:
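As a side note on the arithmetic in the question: 35600 exceeds an 8-bit register's range (0..255, so it can't live in AL or AH alone) but fits comfortably in the 16-bit AX register (0..65535), and the conversion works out to 9 hours, 53 minutes, 20 seconds. A quick check of that arithmetic, sketched in Python rather than 8086 assembly:

```python
# Check of the question's arithmetic: convert 35600 seconds to h/m/s.
total_seconds = 35600

hours, remainder = divmod(total_seconds, 3600)  # 3600 seconds per hour
minutes, seconds = divmod(remainder, 60)        # 60 seconds per minute

print(hours, minutes, seconds)

# 35600 exceeds an 8-bit register's range (0..255) but fits in 16 bits
# (0..65535), so on the 8086 it belongs in AX rather than AL or AH.
assert total_seconds <= 0xFFFF
```

On the 8086 the same division would use the 16-bit form of DIV, which divides the 32-bit value in DX:AX and leaves the quotient in AX and the remainder in DX.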
Pushing registers to memory stack
When we are pushing a register to the memory stack, what does it enable us to do? Does it simply help us perform operations which don't fit in the AL and AH registers? I had to write a program for a computer with the 8086 processor, and I had to find the equivalent time in hours of 35600 seconds, so the AL and AH registers were too small to perform the division 35600/3600
[ "There's quite a few things you can do with push:\n\nStoring a value so that you can use it later.\n\nOn the 8086, you can only bit shift or rotate by either 1 or the value in the CL register. This can become a problem if you're using CX as a loop counter but you're trying to bit-shift during your loop. So you can fix it with this:\nfoo:\npush cx\n mov al,[si]\n mov cl,2\n shl al,cl\n mov [di],al\npop cx\nloop foo\n\n\nReverse the order of letters in a string.\n\n.data\nmyString db \"Hello\",0\n.code\n\nmov si,offset myString\nmov ax,0\nmov cx,0\ncld\n\nstringReverse:\nlodsb ;read the letter (and inc si to the next letter)\ncmp al,0 ;is it zero\njz nowReverse ;if so, reverse the string\npush ax ;push the letter onto the stack\ninc cx ;increment our counter\njmp stringReverse\n\nnowReverse:\nmov si,offset myString ;reload the pointer to the first letter in myString\n\nloop_nowReverse:\njcxz done ;once we've written all the letters, stop.\npop ax ;get the letters back in reverse order\nstosb ;overwrite the old string\ndec cx \njmp loop_nowReverse\ndone:\n\n" ]
[ 0 ]
[]
[]
[ "assembly", "emu8086", "memory", "stack" ]
stackoverflow_0074646252_assembly_emu8086_memory_stack.txt
Q: Programmatically simulating Alt + Enter key press is not working Here is my code: keybd_event(VK_MENU, 0, 0, 0); keybd_event(VK_RETURN, 0, 0, 0); Sleep(200); keybd_event(VK_MENU, 0, KEYEVENTF_KEYUP, 0); keybd_event(VK_RETURN, 0, KEYEVENTF_KEYUP, 0); The first line would press Alt The second line would press Enter ↵ (or Return ↵), The fourth line would release Alt, The fifth line would release Enter ↵ (or Return ↵). A: You are not setting the KEYEVENTF_EXTENDEDKEY flag to keep the keys pressed down. Change your code to: keybd_event(VK_MENU, 0, KEYEVENTF_EXTENDEDKEY, 0); keybd_event(VK_RETURN, 0, KEYEVENTF_EXTENDEDKEY, 0); Sleep(200); keybd_event(VK_MENU, 0, KEYEVENTF_KEYUP, 0); keybd_event(VK_RETURN, 0, KEYEVENTF_KEYUP, 0); Also, you really don't need the sleep in the middle if you are just sending an Alt + Enter. You can see all of the key codes here at the MSDN page. Alt = VK_MENU Left Alt = VK_LMENU Right Alt Gr = VK_RMENU A: Scan codes get injected at a much lower level in the input stack, whereas VKs are pretty high in the stack. Some applications, such as games, exclusively listen for keyboard input at the lower levels and will miss any injected VKs. If you are trying to simulate a ╘ getting pressed with ALT+221, I suspect Windows is doing the translation somewhere in the middle. – selbie Mar 12, 2018 at 0:51
Programmatically simulating Alt + Enter key press is not working
Here is my code: keybd_event(VK_MENU, 0, 0, 0); keybd_event(VK_RETURN, 0, 0, 0); Sleep(200); keybd_event(VK_MENU, 0, KEYEVENTF_KEYUP, 0); keybd_event(VK_RETURN, 0, KEYEVENTF_KEYUP, 0); The first line would press Alt The second line would press Enter ↵ (or Return ↵), The fourth line would release Alt, The fifth line would release Enter ↵ (or Return ↵).
[ "You are not setting the KEYEVENTF_EXTENDEDKEY flag to keep the keys pressed down. Change your code to:\nkeybd_event(VK_MENU, 0, KEYEVENTF_EXTENDEDKEY, 0);\nkeybd_event(VK_RETURN, 0, KEYEVENTF_EXTENDEDKEY, 0);\nSleep(200);\nkeybd_event(VK_MENU, 0, KEYEVENTF_KEYUP, 0);\nkeybd_event(VK_RETURN, 0, KEYEVENTF_KEYUP, 0);\n\nAlso you really don't need the sleep in the middle if you are just sending a Alt + Enter\nYou can see all of the keycodes here at the MSDN page. \n\nAlt = VK_MENU\nLeft Alt = VK_LMENU\nRight Alt Gr = VK_RMENU\n\n", "scan codes get injected at a much lower level in the input stack. Whereas VK's are pretty high in the stack. Some applications, such as games, exclusively listen for keyboard input at the lower levels and will miss any injected VKs. If you are trying to simulate a ╘ getting pressed with ALT+221, I suspect Windows is doing the translation somewhere in the middle. –\nselbie\nMar 12, 2018 at 0:51\n" ]
[ 1, 0 ]
[]
[]
[ "c++", "console", "keyboard", "windows" ]
stackoverflow_0030917945_c++_console_keyboard_windows.txt
Q: Restore a draggable element's original position when it's dropped in the wrong place (HTML + JavaScript) I'm trying to use code that someone else shared in another post, modified a little: http://jsfiddle.net/fsy37kv4/ I would like a circle that is dropped in the wrong position to be restored to its original position. My issue is that when I drag the circle from its initial position I want it to disappear from there, so that it is clear it is moving, but if I release it in the wrong place it should return to its initial position. HTML code <div id="bracelet"> <div id="div1" ondrop="drop(event)" ondragover="allowDrop(event)"></div> <div id="div1" ondrop="drop(event)" ondragover="allowDrop(event)"></div> <div id="div1" ondrop="drop(event)" ondragover="allowDrop(event)"></div> </div> <br/> <br/> <br/> <br/> <br/> <div id="div2" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag1" src="http://upload.wikimedia.org/wikipedia/commons/4/4f/Button-Red.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div3" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag2" src="http://upload.wikimedia.org/wikipedia/commons/a/a8/Button-Blue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div4" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag3" src="http://upload.wikimedia.org/wikipedia/commons/c/ca/Button-Lightblue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div5" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag4" src="http://upload.wikimedia.org/wikipedia/commons/b/ba/Button-Purple.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div6" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> 
<img id="drag5" src="http://upload.wikimedia.org/wikipedia/commons/6/68/Button-Orange.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div7" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag6" src="http://upload.wikimedia.org/wikipedia/commons/d/dc/Button-Green.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div8" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag7" src="http://upload.wikimedia.org/wikipedia/commons/4/4f/Button-Red.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div9" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag8" src="http://upload.wikimedia.org/wikipedia/commons/a/a8/Button-Blue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div10" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag9" src="http://upload.wikimedia.org/wikipedia/commons/c/ca/Button-Lightblue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div11" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag10" src="http://upload.wikimedia.org/wikipedia/commons/b/ba/Button-Purple.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div12" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag11" src="http://upload.wikimedia.org/wikipedia/commons/6/68/Button-Orange.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div13" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag12" src="http://upload.wikimedia.org/wikipedia/commons/d/dc/Button-Green.svg" 
ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> JS CODE function allowDrop(ev) { ev.preventDefault(); } function drag(ev) { var el = ev.target; var parent = el.getAttribute("data-parent"); if(!parent){ el.setAttribute("data-parent", el.parentNode.id); } ev.dataTransfer.setData("Text", el.id); setTimeout(() => { ev.target.classList.add('hide'); // Hide the element while dragging; the .hide class must be defined in the page's CSS }, 0); } function drop(ev) { ev.preventDefault(); var data = ev.dataTransfer.getData("Text"); ev.target.appendChild(document.getElementById(data)); ev.target.classList.remove('drag-over'); // get the draggable element const id = ev.dataTransfer.getData('text/plain'); const draggable = document.getElementById(id); // add it to the drop target ev.target.appendChild(draggable); // display the draggable element draggable.classList.remove('hide'); } function dragEnd(ev){ if(ev.dataTransfer.dropEffect == "none"){ var parent = document.getElementById(ev.target.getAttribute("data-parent")); parent.appendChild(ev.target); } } A: const blocks = document.querySelectorAll('.block'), boxes = document.querySelectorAll('.box'); let dragElem = null; blocks.forEach(block =>{ block.draggable = true; block.addEventListener('dragstart', startDragBlock); block.addEventListener('dragend', endDragBlock); }); function startDragBlock() { dragElem = this; setTimeout (()=> { this.classList.add('hide'); }, 0); } function endDragBlock(){ dragElem = null; this.classList.remove('hide'); //alert(this.parentNode.id); } boxes.forEach(box => { box.addEventListener('dragover', dragBoxOver); box.addEventListener('dragenter', dragBoxEnter); box.addEventListener('dragleave', dragBoxLeave); box.addEventListener('drop', dropInBox); }) function dragBoxOver(evt){ evt.preventDefault(); this.classList.add('hover'); } function dragBoxEnter(evt){ evt.preventDefault(); this.classList.add('hover'); if (this.id == "4") { 
this.style.backgroundColor = 'red'; } } function dragBoxLeave(){ this.style.backgroundColor = ''; this.classList.remove('hover'); } function dropInBox(){ this.append(dragElem); this.classList.remove('hover'); } *{box-sizing: border-box;} body { font-family: sans-serif; background-color: #fff; } .wrapper { width: 96%; margin: 50px auto; display: flex; flex-wrap: wrap; align-items: flex-start; gap: 15px; } div { background-color: #fff; border: 2px solid #058bb4; min-width: 130px; min-height: 130px; padding: 15px; margin: 15px 0; text-align: center; } .block { cursor: grabbing; background-color: #10b6e9; } .hide { display: none; } .hover { background-color: #94e5fa; position: relative; } .hover::before { content: "Time to let it go"; display: block; position: absolute; background-color: #000; color: #fff; border-radius: 6px; padding: 5px 10px; top: -40px; left: 50%; transform: translate(-50%); min-width: 150px; }html, body { height: 100%; width: 100%; } <div class="wrapper"> <div class='block'>Draggable1</div> <div class='box' id="1"></div> <div class='block'>Draggable2</div> <div class='box' id="2"></div> <div class='block'>Draggable3</div> <div class='box' id="3"></div> <div class='block'>Draggable1</div> <div class='box' id="4"></div> <div class='block'>Draggable2</div> <div class='box' id="5"></div> <div class='block'>Draggable3</div> <div class='box' id="6"></div> </div>
Restore original position draggable element when it's dropped in the wrong place HTML + Javascript
Im tryinig to use this code which another spoke of him in another post but that modified a little. http://jsfiddle.net/fsy37kv4/ I would like when some circle was dropped in a wrong position, the circle restore to the original position. My issue is when I drag the circle from its initial position I want it to disappear from it so that it is noticed that it is moving, but if I release it in the wrong place it returns to its initial position HTML code <div id="bracelet"> <div id="div1" ondrop="drop(event)" ondragover="allowDrop(event)"></div> <div id="div1" ondrop="drop(event)" ondragover="allowDrop(event)"></div> <div id="div1" ondrop="drop(event)" ondragover="allowDrop(event)"></div> </div> <br/> <br/> <br/> <br/> <br/> <div id="div2" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag1" src="http://upload.wikimedia.org/wikipedia/commons/4/4f/Button-Red.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div3" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag2" src="http://upload.wikimedia.org/wikipedia/commons/a/a8/Button-Blue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div4" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag3" src="http://upload.wikimedia.org/wikipedia/commons/c/ca/Button-Lightblue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div5" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag4" src="http://upload.wikimedia.org/wikipedia/commons/b/ba/Button-Purple.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div6" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag5" src="http://upload.wikimedia.org/wikipedia/commons/6/68/Button-Orange.svg" 
ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div7" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag6" src="http://upload.wikimedia.org/wikipedia/commons/d/dc/Button-Green.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div8" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag7" src="http://upload.wikimedia.org/wikipedia/commons/4/4f/Button-Red.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div9" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag8" src="http://upload.wikimedia.org/wikipedia/commons/a/a8/Button-Blue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div10" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag9" src="http://upload.wikimedia.org/wikipedia/commons/c/ca/Button-Lightblue.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div11" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag10" src="http://upload.wikimedia.org/wikipedia/commons/b/ba/Button-Purple.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div12" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag11" src="http://upload.wikimedia.org/wikipedia/commons/6/68/Button-Orange.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" height="50"> </div> <div id="div13" class="drop" ondrop="drop(event)" ondragover="allowDrop(event)"> <img id="drag12" src="http://upload.wikimedia.org/wikipedia/commons/d/dc/Button-Green.svg" ondragend="dragEnd(event)" draggable="true" ondragstart="drag(event)" width="50" 
height="50"> </div> JS CODE function allowDrop(ev) { ev.preventDefault(); } function drag(ev) { var el = ev.target; var parent = el.getAttribute("data-parent"); if(!parent){ el.setAttribute("data-parent", el.parentNode.id); } ev.dataTransfer.setData("Text", el.id); setTimeout(() => { ev.target.classList.add('hide'); //Ocultamos el elemento al arrastrar , hay que definir la clase .hide en el css de la pagina }, 0); } function drop(ev) { ev.preventDefault(); var data = ev.dataTransfer.getData("Text"); ev.target.appendChild(document.getElementById(data)); ev.target.classList.remove('drag-over'); // get the draggable element const id = ev.dataTransfer.getData('text/plain'); const draggable = document.getElementById(id); // add it to the drop target ev.target.appendChild(draggable); // display the draggable element draggable.classList.remove('hide'); } function dragEnd(ev){ if(ev.dataTransfer.dropEffect == "none"){ var parent = document.getElementById(ev.target.getAttribute("data-parent")); parent.appendChild(ev.target); } }
[ "\n\nconst blockes = document.querySelectorAll('.block'), \n boxes = document.querySelectorAll('.box');\nlet dragElem = null;\nblockes.forEach(block =>{ \n block.draggable = true;\n block.addEventListener('dragstart', startDragBlock);\n block.addEventListener('dragend', endDragBlock);\n});\n\nfunction startDragBlock() {\n dragElem = this;\n setTimeout (()=> {\n this.classList.add('hide');\n }, 0);\n}\nfunction endDragBlock(){\n dragElem = null;\n this.classList.remove('hide');\n //alert(this.parentNode.id);\n}\n\nboxes.forEach(box => {\n box.addEventListener('dragover', dragBoxOver); \n box.addEventListener('dragenter', dragBoxEnter); \n box.addEventListener('dragleave', dragBoxLeave); \n box.addEventListener('drop', dropInBox); \n})\n\nfunction dragBoxOver(evt){\n evt.preventDefault();\n this.classList.add('hover');\n}\nfunction dragBoxEnter(evt){\n evt.preventDefault();\n this.classList.add('hover');\n if (this.id == \"4\") {\n this.style.backgroundColor = 'red';\n }\n}\nfunction dragBoxLeave(){\n this.style.backgroundColor = '';\n this.classList.remove('hover');\n}\nfunction dropInBox(){\n this.append(dragElem);\n this.classList.remove('hover');\n}\n*{box-sizing: border-box;}\nbody {\n font-family: sans-serif;\n background-color: #fff;\n}\n.wrapper {\n width: 96%;\n margin: 50px auto;\n display: flex;\n flex-wrap: wrap;\n align-items: flex-start;\n gap: 15px;\n}\ndiv {\n background-color: #fff;\n border: 2px solid #058bb4;\n min-width: 130px;\n min-height: 130px;\n padding: 15px;\n margin: 15px 0;\n text-align: center;\n}\n.block {\n cursor: grabbing;\n background-color: #10b6e9;\n}\n\n.hide {\n display: none;\n}\n.hover {\n background-color: #94e5fa;\n position: relative;\n}\n.hover::before {\n content: \"Пора уже отпускать\";\n display: block;\n position: absolute;\n background-color: #000;\n color: #fff;\n border-radius: 6px;\n padding: 5px 10px;\n top: -40px;\n left: 50%;\n transform: translate(-50%);\n min-width: 150px;\n}html, body {\n height: 100%;\n 
width: 100%;\n}\n<div class=\"wrapper\">\n <div class='block'>Draggable1</div>\n <div class='box' id=\"1\"></div>\n <div class='block'>Draggable2</div>\n <div class='box' id=\"2\"></div>\n <div class='block'>Draggable3</div>\n <div class='box' id=\"3\"></div>\n <div class='block'>Draggable1</div>\n <div class='box' id=\"4\"></div>\n <div class='block'>Draggable2</div>\n <div class='box' id=\"5\"></div>\n <div class='block'>Draggable3</div>\n <div class='box' id=\"6\"></div>\n</div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css", "drag_and_drop", "draggable", "html", "javascript" ]
stackoverflow_0074545558_css_drag_and_drop_draggable_html_javascript.txt
Q: Can I use input type=time to specify a period I have an HTML page with a time input that I use to set a period of time, say one hour, or one hour and thirty minutes <input type="time" value="01:30"> This works okay, but depending on the locale, the display value may be 01:30 AM rather than just 01:30. This is good for specifying a certain moment of the day, but is meaningless for a period. I could probably use a text input, or two numeric inputs for hour and minute, but the time input is very convenient here. Is there a way to specify a display format, or get rid of the AM/PM indicator? A: The HTML date and time inputs use the browser's local language/locale settings. You can easily test this behaviour by toggling which language is on top of the language list in your browser. The AM/PM indicator is caused by the language set in the browser; for instance, in my Norwegian-language browser there is a 24-hour time picker. You need to use a more advanced date picker to get control of this, or build something of your own using, for instance, multiple fields to represent year, month, day, hour, minutes, etc. More information can be found here: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/date
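Following the answer's suggestion to build something of your own: if the value really is a duration rather than a time of day, a plain text input validated by hand sidesteps the locale entirely. A minimal sketch (the function name and the accepted pattern are my own choices, not part of the question):

```javascript
// Parse "H:MM" / "HH:MM" as a duration in total minutes; return null if
// the text is not a valid duration. No locale is involved, so there is
// never an AM/PM indicator.
function parseDuration(text) {
  const match = /^(\d{1,2}):([0-5]\d)$/.exec(text);
  if (match === null) return null;
  return Number(match[1]) * 60 + Number(match[2]);
}
```

`parseDuration("01:30")` yields 90 minutes, while malformed values such as `"01:70"` are rejected with `null`.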
Can I use input type=time to specify a period
I have an HTML page with a time input that I use to set a period of time, say one hour, or one hour and thirty minutes <input type="time" value="01:30"> This works okay, but depending on the locale, the display value may be 01:30 AM rather than just 01:30. This is good for specifying a certain moment of the day, but is meaningless for a period. I could probably use a text input, or two numeric inputs for hour and minute, but the time input is very convenient here. Is there a way to specify a display format, or get rid of the AM/PM indicator?
[ "The HTML input date and time will use the browsers local language/locale settings. You can easily test this behaviour by toggling which language you have on top of your language list in your browser. The AM/PM indicator is caused by the language set in browser. For instance in my norwegian language browser there is a 24 hour time picker.\nYou need to use more advanced date-pickers to get control of this or build something on your own using for instance multiple fields to represent year, month, day, hour, minutes etc.\nMore information found here: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/date\n" ]
[ 0 ]
[]
[]
[ "format", "html", "input" ]
stackoverflow_0074659597_format_html_input.txt
Q: MPI_ANY_SOURCE in non blocking receive (MPI_Irecv) I use LD_PRELOAD to override the MPI_Irecv function with my own function to do some debugging of the MPI_Irecv function. Here is the code of my wrapper function, "myMPI_Irecv.c": int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request * request) { int rang_Irecv; MPI_Comm_rank(comm, &rang_Irecv); printf(" Calling MPI_Irecv, je suis processeur=%d, source=%d, buffer=%p\n", rang_Irecv,source,buf); return PMPI_Irecv(buf, count, datatype, source, tag, comm, request); } After running my MPI application (I am using MPICH), I found that there are some calls with MPI_ANY_SOURCE, because I saw SOURCE=-2 when I printed the source. My question is: how can I know which source (sender) matched the nonblocking receive MPI_Irecv? Thank you in advance. Best regards, A: I suspect (you didn't include enough code) that you are reading the source of the receive call. Instead you have to look at the MPI_Wait call for that receive. It outputs a status object, and you can investigate mystatus.MPI_SOURCE to see where the message actually came from.
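To make the answer concrete: when a receive is posted with MPI_ANY_SOURCE (the -2 printed above is MPICH's value for that constant), the actual sender is only known once the request completes. A sketch of the pattern — a standalone program rather than the LD_PRELOAD wrapper, and it needs an MPI toolchain (mpicc/mpiexec) to build and run:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int buf;
        MPI_Request request;
        MPI_Status status;
        /* source is MPI_ANY_SOURCE, so the sender is unknown at post time */
        MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &request);
        /* MPI_Wait fills in the status; only now is the sender known */
        MPI_Wait(&request, &status);
        printf("received %d from rank %d\n", buf, status.MPI_SOURCE);
    } else {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```

A debugging wrapper that wants to log the real sender therefore has to intercept the completion calls (MPI_Wait, MPI_Test, and their variants) as well, not just MPI_Irecv.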
MPI_ANY_SOURCE in non blocking receive (MPI_Irecv)
I use LD_PRELOAD to override the MPI_Irecv function with my own function to do some debugging of the MPI_Irecv function. Here, my wrapper function "myMPI_Irecv.c" code: int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request * request) { int rang_Irecv; MPI_Comm_rank(comm, &rang_Irecv); printf(" Calling MPI_Irecv, je suis processeur=%d, source=%d, buffer=%p\n", rang_Irecv,source,buf); return PMPI_Irecv(buf, count, datatype, source, tag, comm, request); } After running my MPI application (I am using MPICH): I found that there are some calls of MPI_ANY_SOURCE, because I found SOURCE=-2 when I print the source. My question, is how to know which source (sender) for the nonblocking receive MPI_Irecv ? Thank you in advance. Best regards,
[ "I suspect (you didn't include enough code) that you are reading the source of the receive call. Instead you have to look at the MPI_Wait call for that receive. It outputs a status object, and you can investigate mystatus.MPI_SOURCE to see where the message actually came from.\n" ]
[ 1 ]
[]
[]
[ "mpi" ]
stackoverflow_0074658454_mpi.txt
Q: How to handle the asynchronous call in Firebase while working with Android I am having a hard time to solve a problem. So basically I am trying to migrate my database from Room in Android to Firebase. I have been able to store my values in Firebase following a similar structure I was trying to save in Room Database. Now the main issue which I am facing is while retrieving the values from Firebase. Being more specifically I am working with nested recycler views so I have a bit complex structure. I will explain it below. So the data works like, there are floors and for each floor there are rooms and for each room there are machines. So it goes in that hierarchy. When I was working with local database I created a function which handles this functionality in my ViewModel : This is how it looks : fun load() { //Observing all floors getAllFloors.observeForever(Observer { viewModelScope.launch(Dispatchers.Main) { /** Converting list of floors to a distinct and sorted floor list * Input -> [0,0,1,2,3,4,2,4,1,3], Output -> [0,1,2,3,4] */ val distinctFloorNames = it.distinct().sorted() val floorsList = mutableListOf<FloorsDataClass>() val devicesList = mutableListOf<String>() //Loop over distinct floors for getting each floor for (floorName in distinctFloorNames) { //At each floor prepare a list of rooms val rooms = repository.getAllRooms(floorName) //Getting distinct (in case rooms gets repeated -> only during testing) and sorted rooms val distinctRoomNames = rooms.distinct().sorted() Timber.d("Floor: $floorName, Rooms: $distinctFloorNames") val roomsList = mutableListOf<RoomsDataClass>() //Loop over rooms in the floor for (roomName in distinctRoomNames) { //In each room prepare a list of devices val devicesName = repository.getAllDevices(roomName) val distinctDeviceName = devicesName.distinct().sorted() //Transform the list of string to list of DeviceClassObject val deviceData = mutableListOf<DevicesDataClass>() //For each device get the attached machine for (device in 
distinctDeviceName) { //Get the machine associated with the device val machine = repository.getMachine(device) Timber.d("Machine: $machine") //Attach the device and machine to the [DevicesDataClass Object] deviceData.add(DevicesDataClass(device, machine)) /**Attach the room name and the devices list to the *[RoomDataClass Object] **/ roomsList.add(RoomsDataClass(roomName, deviceData)) //Saving devices in a list for managing devicesList.add(device) } } /**Add the room list to the floor object and add the floor to the floor list **/ floorsList.add(FloorsDataClass(floorName, roomsList)) } //Sending the list as livedata to be further observed - from add details for device - manage devices fragment devicesLiveData.postValue(devicesList) /** Post the complete value of floorList in the floorListLiveData which will be * observed from the [ControlPanelFragment] */ floorListLiveData.postValue(floorsList) Timber.d("$floorsList") } }) } Now to display the data I just observed this floorsList and then pass it to my nested adapters which shows data accordingly. I am trying to fetch data from Firebase in a similar way. I have reached to a point where I am even able to fetch my floors and rooms for each floor but the problem comes while fetching the machines. Basically I am using two ValueEventListener in my project. I am using the value coming from one of the listeners to fill my data. But as reading data from Firebase is asynchronous my data field shows empty because I try to use that data before it comes from the database. That's like the main issue. 
Firebase structure Code for reading values from Firebase private fun readRoomsAndFloorFromFirebase(): List<FloorsDataClass> { val roomsDataClass: MutableList<RoomsDataClass> = mutableListOf() val devicesDataClass: MutableList<DevicesDataClass> = mutableListOf() val floorsDataClass: MutableList<FloorsDataClass> = mutableListOf() val listener = object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { var floors: FloorsDataClass // Log.d(TAG, "Data: ${snapshot}") for (i in snapshot.children) { Log.i(TAG, "Data: $i") // floor = "${i.key}" for (j in i.children) { Log.i(TAG, "Value: ${j.key}") // roomsList.add("${j.key}") val listener = object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { // Log.w(TAG, "Listener: ${snapshot.child("Device ID").value}") val device = snapshot.child("Device ID").value.toString() val machine = snapshot.child("Machine").value.toString() devicesDataClass.add(DevicesDataClass(device, machine)) } override fun onCancelled(error: DatabaseError) {} } //Getting the list of devices and saving it with particular room roomsDataClass.add(RoomsDataClass("${j.key}", devicesDataClass)) realtime.child("USERS").child(auth.uid!!).child( "ADDED DEVICES" ).child("${i.key}").child("${j.key}") .addValueEventListener(listener) } //Storing the particular floor with room data class values floors = FloorsDataClass("${i.key}", roomsDataClass) floorsDataClass.add(floors) } Log.e(TAG, "List 1: $floorsDataClass") } override fun onCancelled(error: DatabaseError) {} } realtime.child("USERS").child(auth.uid!!).child("ADDED DEVICES") .addValueEventListener(listener) Log.e(TAG, "List: $floorsDataClass") return floorsDataClass } Data Classes : data class FloorsDataClass(val floor: String, val rooms: List<RoomsDataClass>) data class RoomsDataClass(val room:String, val devices: List<DevicesDataClass>) data class DevicesDataClass(val device: String, val machine: String?) 
P.S. - I want to read data from that Firebase structure such that I have one object whose first element is the first floor; inside it, it can store rooms, and each room can in turn store its devices. Once the room loop is done, I want to go ahead and save that with the floor. If more code or screenshots are required to understand the question, please comment. A: I'd recommend you use Kotlin Flow or Android LiveData to set up observers that execute some work when the data is ready. val floors: MutableLiveData<FloorsDataClass> = MutableLiveData() private fun startReadingRoomsAndFloorFromFirebase() { val listener = object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { floors.value = // your parsed data // or if you prefer: floors.postValue(/* your parsed data */) } override fun onCancelled(error: DatabaseError) {} } } And then in the Activity/Fragment you observe that LiveData and set up the RecyclerView in the callback. I encourage you to review this tutorial on LiveData!
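The underlying rule, independent of LiveData: readRoomsAndFloorFromFirebase() must not return the list, because the return statement runs before onDataChange fires — the result has to be handed out from inside the callback. A Firebase-free Kotlin sketch of that shape (FakeRef is an illustrative stand-in for a DatabaseReference, not a real Firebase type):

```kotlin
// Stand-in for an asynchronous Firebase reference (illustrative only).
// Real Firebase invokes the listener later, on its own thread; here it is
// invoked immediately so the sketch is self-contained.
class FakeRef(private val snapshot: List<String>) {
    fun addValueEventListener(onDataChange: (List<String>) -> Unit) {
        onDataChange(snapshot)
    }
}

// Instead of returning a list (which would be empty when real Firebase is
// still loading), accept a callback and deliver the parsed result through it.
fun readFloors(ref: FakeRef, onResult: (List<String>) -> Unit) {
    ref.addValueEventListener { children ->
        val floors = children.map { key -> "Floor $key" }
        onResult(floors) // the complete result leaves the function only here
    }
}
```

For the nested floor → room → device case, keep a count of outstanding inner listeners and invoke the callback (or post into the LiveData from the answer) only when the last one has fired.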
How to handle the asynchronous call in Firebase while working with Android
I am having a hard time to solve a problem. So basically I am trying to migrate my database from Room in Android to Firebase. I have been able to store my values in Firebase following a similar structure I was trying to save in Room Database. Now the main issue which I am facing is while retrieving the values from Firebase. Being more specifically I am working with nested recycler views so I have a bit complex structure. I will explain it below. So the data works like, there are floors and for each floor there are rooms and for each room there are machines. So it goes in that hierarchy. When I was working with local database I created a function which handles this functionality in my ViewModel : This is how it looks : fun load() { //Observing all floors getAllFloors.observeForever(Observer { viewModelScope.launch(Dispatchers.Main) { /** Converting list of floors to a distinct and sorted floor list * Input -> [0,0,1,2,3,4,2,4,1,3], Output -> [0,1,2,3,4] */ val distinctFloorNames = it.distinct().sorted() val floorsList = mutableListOf<FloorsDataClass>() val devicesList = mutableListOf<String>() //Loop over distinct floors for getting each floor for (floorName in distinctFloorNames) { //At each floor prepare a list of rooms val rooms = repository.getAllRooms(floorName) //Getting distinct (in case rooms gets repeated -> only during testing) and sorted rooms val distinctRoomNames = rooms.distinct().sorted() Timber.d("Floor: $floorName, Rooms: $distinctFloorNames") val roomsList = mutableListOf<RoomsDataClass>() //Loop over rooms in the floor for (roomName in distinctRoomNames) { //In each room prepare a list of devices val devicesName = repository.getAllDevices(roomName) val distinctDeviceName = devicesName.distinct().sorted() //Transform the list of string to list of DeviceClassObject val deviceData = mutableListOf<DevicesDataClass>() //For each device get the attached machine for (device in distinctDeviceName) { //Get the machine associated with the device val machine 
= repository.getMachine(device) Timber.d("Machine: $machine") //Attach the device and machine to the [DevicesDataClass Object] deviceData.add(DevicesDataClass(device, machine)) /**Attach the room name and the devices list to the *[RoomDataClass Object] **/ roomsList.add(RoomsDataClass(roomName, deviceData)) //Saving devices in a list for managing devicesList.add(device) } } /**Add the room list to the floor object and add the floor to the floor list **/ floorsList.add(FloorsDataClass(floorName, roomsList)) } //Sending the list as livedata to be further observed - from add details for device - manage devices fragment devicesLiveData.postValue(devicesList) /** Post the complete value of floorList in the floorListLiveData which will be * observed from the [ControlPanelFragment] */ floorListLiveData.postValue(floorsList) Timber.d("$floorsList") } }) } Now to display the data I just observed this floorsList and then pass it to my nested adapters which shows data accordingly. I am trying to fetch data from Firebase in a similar way. I have reached to a point where I am even able to fetch my floors and rooms for each floor but the problem comes while fetching the machines. Basically I am using two ValueEventListener in my project. I am using the value coming from one of the listeners to fill my data. But as reading data from Firebase is asynchronous my data field shows empty because I try to use that data before it comes from the database. That's like the main issue. 
Firebase structure Code for reading values from Firebase private fun readRoomsAndFloorFromFirebase(): List<FloorsDataClass> { val roomsDataClass: MutableList<RoomsDataClass> = mutableListOf() val devicesDataClass: MutableList<DevicesDataClass> = mutableListOf() val floorsDataClass: MutableList<FloorsDataClass> = mutableListOf() val listener = object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { var floors: FloorsDataClass // Log.d(TAG, "Data: ${snapshot}") for (i in snapshot.children) { Log.i(TAG, "Data: $i") // floor = "${i.key}" for (j in i.children) { Log.i(TAG, "Value: ${j.key}") // roomsList.add("${j.key}") val listener = object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { // Log.w(TAG, "Listener: ${snapshot.child("Device ID").value}") val device = snapshot.child("Device ID").value.toString() val machine = snapshot.child("Machine").value.toString() devicesDataClass.add(DevicesDataClass(device, machine)) } override fun onCancelled(error: DatabaseError) {} } //Getting the list of devices and saving it with particular room roomsDataClass.add(RoomsDataClass("${j.key}", devicesDataClass)) realtime.child("USERS").child(auth.uid!!).child( "ADDED DEVICES" ).child("${i.key}").child("${j.key}") .addValueEventListener(listener) } //Storing the particular floor with room data class values floors = FloorsDataClass("${i.key}", roomsDataClass) floorsDataClass.add(floors) } Log.e(TAG, "List 1: $floorsDataClass") } override fun onCancelled(error: DatabaseError) {} } realtime.child("USERS").child(auth.uid!!).child("ADDED DEVICES") .addValueEventListener(listener) Log.e(TAG, "List: $floorsDataClass") return floorsDataClass } Data Classes : data class FloorsDataClass(val floor: String, val rooms: List<RoomsDataClass>) data class RoomsDataClass(val room:String, val devices: List<DevicesDataClass>) data class DevicesDataClass(val device: String, val machine: String?) 
P.S - I want to read data from that firebase structure such that I have one object which contains the first element as the first floor then inside it, it can store rooms and then further it can store devices for that room. Once the room loop is done, I want to go ahead and save that with the floor. If more code or ss required to understand the question please comment.
[ "I'd recommend you to use Kotlin Flow or Android LiveData to setup observers to execute some work when data is ready.\nval floors: MutableLiveData<FloorsDataClass> = MutableLiveData()\n\nprivate fun startReadingRoomsAndFloorFromFirebase() {\n\n val listener = object : ValueEventListener {\n override fun onDataChange(snapshot: DataSnapshot) {\n floors.value = // your parsed data\n // or if you prefer:\n floors.postValue(/* your parsed data */)\n }\n\n override fun onCancelled(error: DatabaseError) {}\n }\n}\n\nAnd then in the Activity/Fragment you observe that LiveData and setup the RecyclerView in the callback. I encourage you to review this tutorial on LiveData!\n" ]
[ 0 ]
[]
[]
[ "android", "android_room", "firebase", "firebase_realtime_database", "kotlin" ]
stackoverflow_0074657940_android_android_room_firebase_firebase_realtime_database_kotlin.txt
Q: How can I continue to prompt for input, if my REGEX is not matched? I'm trying to get this loop to continue. So far, when input is not matched to my REGEX, "input not valid" gets displayed but loop won't continue. What am I missing here? Apreciate your help! import java.util.Scanner; import java.util.regex.Matcher; import java.util.regex.Pattern; public class Main { public static void main(String[] args) { String input; //some variables Pattern pattern = Pattern.compile(REGEX); Scanner scn = new Scanner(System.in); boolean found = false; do { System.out.println("ask user for input"); input = scn.next(); Matcher matcher = pattern.matcher(input); try { matcher.find(); //some Code found = true; scn.close(); } catch (IllegalStateException e) { System.out.println("input not valid."); //stuck here scn.next(); continue; } } while (!found); // some more Code } } A: There are many problems with your code: IllegalStateException is not raised by the "not-match", but by the Scanner class, so why catch it? you are not doing anything with the result of matcher.find(), I think you want found = matcher.find() in case the input is not valid, you execute twice scn.next(); Moreover: you can simplify the initialization boolean found = false; with boolean found; continue; at the end of a loop is not necessary Fixed code: boolean found; do { System.out.println("ask user for input"); input = scn.next(); found = pattern.matcher(input).find(); if (!found) { System.out.println("input not valid."); } } while (!found); scn.close(); A: It seems to me like scn.next() after your "input not valid" line isn't doing anything, but it's going to wait for the user to input a string. That's why it looks like the loop isn't continuing: it's waiting for you to input a string because of that line. However, when you do input something, that input will just be thrown away. Removing that line seems like it will do the trick. 
A: Look at your loop condition: the do/while repeats only while !found is true. Inside the try block you set found = true unconditionally, so after the first iteration !found becomes false and the while stops. found must stay false until the input actually matches, so only set it to true on a successful match. A: Let's try keeping things simple. import java.util.Scanner; import java.util.regex.Pattern; public class Main { private static final Pattern pattern = Pattern.compile("^[a-zA-Z0-9 ]+$"); public static void main(String[] args) { Scanner scn = new Scanner(System.in); boolean valid; String value; do { System.out.println("ask user for input"); value = scn.next(); valid = pattern.matcher(value).matches(); if (!valid) System.out.println("input not valid."); } while (!valid); System.out.printf("Valid input is %s", value); } } The result is: ask user for input 123-abc input not valid. ask user for input qwerty58 Valid input is qwerty58 Process finished with exit code 0
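To see the fixed logic without the Scanner in the way, here is a self-contained variant (the class name, method name, and example pattern are mine) that separates "does the input match?" from the prompting loop — the point all the answers converge on is that the loop flag must come from the match result, never be set unconditionally:

```java
import java.util.regex.Pattern;

public class InputValidator {
    // Example pattern: one or more letters or digits, nothing else.
    private static final Pattern VALID = Pattern.compile("^[a-zA-Z0-9]+$");

    // The loop flag should be computed from the match result.
    public static boolean isValid(String input) {
        return input != null && VALID.matcher(input).matches();
    }

    public static void main(String[] args) {
        String[] attempts = { "123-abc", "qwerty58" };
        for (String attempt : attempts) {
            System.out.println(attempt + (isValid(attempt) ? " is valid" : " is not valid"));
        }
    }
}
```

With a method like this, the prompting loop reduces to `do { … } while (!isValid(input));` and no try/catch is needed at all.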
How can I continue to prompt for input, if my REGEX is not matched?
I'm trying to get this loop to continue. So far, when input is not matched to my REGEX, "input not valid" gets displayed but loop won't continue. What am I missing here? Apreciate your help! import java.util.Scanner; import java.util.regex.Matcher; import java.util.regex.Pattern; public class Main { public static void main(String[] args) { String input; //some variables Pattern pattern = Pattern.compile(REGEX); Scanner scn = new Scanner(System.in); boolean found = false; do { System.out.println("ask user for input"); input = scn.next(); Matcher matcher = pattern.matcher(input); try { matcher.find(); //some Code found = true; scn.close(); } catch (IllegalStateException e) { System.out.println("input not valid."); //stuck here scn.next(); continue; } } while (!found); // some more Code } }
[ "There are many problems with your code:\n\nIllegalStateException is not raised by the \"not-match\", but by the Scanner class, so why catch it?\nyou are not doing anything with the result of matcher.find(), I think you want found = matcher.find()\nin case the input is not valid, you execute twice scn.next();\n\nMoreover:\n\nyou can simplify the initialization boolean found = false; with boolean found;\ncontinue; at the end of a loop is not necessary\n\nFixed code:\n boolean found;\n do {\n System.out.println(\"ask user for input\");\n input = scn.next();\n found = pattern.matcher(input).find();\n if (!found) {\n System.out.println(\"input not valid.\");\n }\n } while (!found);\n scn.close();\n\n", "It seems to me like scn.next() after your \"input not valid\" line isn't doing anything, but it's going to wait for the user to input a string. That's why it looks like the loop isn't continuing: it's waiting for you to input a string because of that line. However, when you do input something, that input will just be thrown away. Removing that line seems like it will do the trick.\n", "Look at your loop the condition of your loop state is !found right once the do finished the while test !found a true is need for continue but you defined found as true so !found= false so the while stop. 
you should get found to return false till the loop is over\n", "Let's try keeping things simple.\nimport java.util.Scanner;\nimport java.util.regex.Pattern;\n\npublic class Main {\n private static final Pattern pattern = Pattern.compile(\"^[a-zA-Z0-9 ]+$\");\n\n public static void main(String[] args) {\n Scanner scn = new Scanner(System.in);\n boolean valid;\n String value;\n do {\n System.out.println(\"ask user for input\");\n value = scn.next();\n valid = pattern.matcher(value).matches();\n if (!valid) System.out.println(\"input not valid.\");\n } while (!valid);\n\n System.out.printf(\"Valid input is %s\", value);\n }\n}\n\nThe result is:\nask user for input\n123-abc\ninput not valid.\nask user for input\nqwerty58\nValid input is qwerty58\nProcess finished with exit code 0\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "exception", "java", "regex", "try_catch" ]
stackoverflow_0074659502_exception_java_regex_try_catch.txt
Q: How to have global database connection across the application in .net core with mongodb? I am new to .net core. I have to build rest apis with MongoDB. I am wondering how can i have a global database connection and global configuration to get any collection of the database. What i found on the internet is that i can have database name, connection string, collection name in the appsettings.json and create a class like this public class MongoDBSettings { public string ConnectionURI { get; set; } = null!; public string DatabaseName { get; set; } = null!; public string CollectionName { get; set; } = null!; } After this i can configure this database connection in the Program.cs builder.Services.Configure<MongoDBSettings(builder.Configuration.GetSection("MongoDB")); builder.Services.AddSingleton<MongoDBService>(); Now for querying on the collection i should do like this in the respective service file let say OrderService.cs private readonly IMongoCollection<Playlist> _playlistCollection; public OrderService(IOptions<MongoDBSettings> mongoDBSettings) { MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI); IMongoDatabase database = client.GetDatabase(mongoDBSettings.Value.DatabaseName); _playlistCollection = database.GetCollection<Playlist>(mongoDBSettings.Value.CollectionName); } public async Task<List<Playlist>> GetAsync() { // my code here } I understand all above mentioned approach. But the problem with this approach is that there is no global connection in the application. Every time i have to query on any collection then each entity service like orderService.cs will create a new database connection like i mentioned above like this MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI); So this is inefficient. 
And the second problem with this approach is that to get each database collection's instance I have to write these 3 lines of code in every service .cs file, like this MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI); IMongoDatabase database = client.GetDatabase(mongoDBSettings.Value.DatabaseName); _playlistCollection = database.GetCollection<Playlist>(mongoDBSettings.Value.CollectionName); So how can I overcome both issues and solve these 2 problems? How to have a global database connection that will be generic and can be used from everywhere? How to have generic logic to get an instance of any database collection? A: You need a dbcontext class that will store all of your tables (dbsets) so you can access them anywhere in the application. I'll use this repo as a basic example. You have the context class and its interface, which are later injected via DI into every controller or class constructor you need. (Check line 75 in Startup.cs.) Once that is done you can use your context interface anywhere, for example in a repository class. As for your db configuration settings, I don't see why you might need them all over the application. Maybe you only need them at Startup.cs, but in case you do, check out the options monitor object. It comes in handy when you need configurations from appSettings.json in your objects anywhere in the application. Just make sure you're not exposing any app secrets publicly.
How to have global database connection across the application in .net core with mongodb?
I am new to .NET Core. I have to build REST APIs with MongoDB. I am wondering how I can have a global database connection and a global configuration to get any collection of the database. What I found on the internet is that I can have the database name, connection string, and collection name in appsettings.json and create a class like this public class MongoDBSettings { public string ConnectionURI { get; set; } = null!; public string DatabaseName { get; set; } = null!; public string CollectionName { get; set; } = null!; } After this I can configure this database connection in Program.cs builder.Services.Configure<MongoDBSettings>(builder.Configuration.GetSection("MongoDB")); builder.Services.AddSingleton<MongoDBService>(); Now for querying the collection I should do it like this in the respective service file, let's say OrderService.cs private readonly IMongoCollection<Playlist> _playlistCollection; public OrderService(IOptions<MongoDBSettings> mongoDBSettings) { MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI); IMongoDatabase database = client.GetDatabase(mongoDBSettings.Value.DatabaseName); _playlistCollection = database.GetCollection<Playlist>(mongoDBSettings.Value.CollectionName); } public async Task<List<Playlist>> GetAsync() { // my code here } I understand all of the above-mentioned approach. But the problem with this approach is that there is no global connection in the application. Every time I have to query any collection, each entity service like OrderService.cs will create a new database connection, as I mentioned above, like this MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI); So this is inefficient. 
And the second problem with this approach is that to get each database collection's instance I have to write these 3 lines of code in every service .cs file, like this MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI); IMongoDatabase database = client.GetDatabase(mongoDBSettings.Value.DatabaseName); _playlistCollection = database.GetCollection<Playlist>(mongoDBSettings.Value.CollectionName); So how can I overcome both issues and solve these 2 problems? How to have a global database connection that will be generic and can be used from everywhere? How to have generic logic to get an instance of any database collection?
[ "You need a dbcontext class that will store all of your tables (dbsets) so you can access them anywhere on the application. I'll use this repo as a basic example.\nYou have the context class and it's interface which are later on injected via DI on to every controller or class constructor you need. (Check line 75 on Startup.cs).\nOnce that is done you might use your context interface anywhere. For example, in a repository class. As for your db configuration settings I don't see why you might need to have that all over the application. Maybe you only needed at Startup.cs but in case you do check out the options monitor object. It comes handy when you need configurations from appSettings.json into your objects anywhere on the application. Just make sure you're not exposing any app secrets publicly.\n" ]
[ 0 ]
[]
[]
[ ".net_core", "c#", "mongodb" ]
stackoverflow_0074659319_.net_core_c#_mongodb.txt
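The fix both the question and answer above circle around is: create the expensive client once and hand out per-collection views from one shared place. Here is a language-neutral Python sketch of that shape (all names hypothetical; in the .NET version you would register the real MongoClient as a DI singleton instead):

```python
class FakeMongoClient:
    """Stand-in for a real driver client; counts how often it is constructed."""
    instances = 0

    def __init__(self, uri):
        FakeMongoClient.instances += 1
        self.uri = uri

class MongoContext:
    """One shared client; services ask it for collections instead of building their own."""

    def __init__(self, uri, database):
        self._client = FakeMongoClient(uri)  # created exactly once for the whole app
        self._database = database

    def collection(self, name):
        # A real driver would do client.get_database(db).get_collection(name) here.
        return (self._client, self._database, name)

# Every service shares the same context (the DI singleton in the .NET version).
context = MongoContext("mongodb://localhost:27017", "shop")
orders = context.collection("orders")
playlists = context.collection("playlists")
```

Both services get their collection through the one context, so only a single client is ever constructed, which is the behavior the question asks for.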
Q: ControllerBase.File missing from ASP.NET project I am working on an old ASP.NET WebApi project with .Net Framework 4.6.1 and MVC3. The problem is that I don't have the File return type, not the one from System.IO but the one from the controller. In ASP.NET Core, for example, this File is located in the ControllerBase class. So, since my project only knows about System.IO.File, when writing the following code I get the following error: code: return File(memory, mimeType, document.Name); error: Provides static methods for the creation, copying, deletion, moving, and opening of a single file, and aids in the creation of a FileStream objects. Non-invocable member 'File' cannot be used like a method. Method, delegate or event is expected. So is there any way to get this 'File' in my project? Thank you guys! A: You're mixing up your .NET APIs: the File() method is from .NET Core, and you're using .NET Framework, so that's why File isn't available. For .NET Framework you need to use HttpResponseMessage in your controller, something like below: byte[] msarray = memory; HttpResponseMessage httpResponseMessage = new HttpResponseMessage(); httpResponseMessage.Content = new ByteArrayContent(msarray); httpResponseMessage.Content.Headers.Add("x-filename", document.Name); httpResponseMessage.Content.Headers.ContentType = new MediaTypeHeaderValue(mimeType); httpResponseMessage.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment"); httpResponseMessage.Content.Headers.ContentDisposition.FileName = document.Name; httpResponseMessage.StatusCode = HttpStatusCode.OK; return httpResponseMessage;
ControllerBase.File missing from ASP.NET project
I am working on an old ASP.NET WebApi with .Net Framework 4.6.1 and MVC3 project. The problem is that I don't have the File return type, not the one from System.IO but the one from the controller. In ASP.NET Core for example, this File is located in ControllerBase class. So, since my project only knows about System.IO.File, when writing the following code I get the following error: code: return File(memory, mimeType, document.Name); error: Provides static methods for the creation, copying, deletion, moving, and opening of a single file, and aids in the creation of a FileStream objects. Non-invocable member 'File' cannot be used like a method. Method, delegate or event is expected. So is there any way to get this 'File' in my project? Thank you guys!
[ "You're mixing up your .net API's the File() method is from .net Core, you're using .net framework, so that's why File isn't available. For .net framework you need to use the HttpResponseRessage for .net framework in your controller, something like below:\n byte[] msarray = memory;\n\n HttpResponseMessage httpResponseMessage = new HttpResponseMessage();\n httpResponseMessage.Content = new ByteArrayContent(msarray);\n httpResponseMessage.Content.Headers.Add(\"x-filename\", document.Name);\n httpResponseMessage.Content.Headers.ContentType = new MediaTypeHeaderValue(mimeType);\n httpResponseMessage.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue(\"attachment\");\n httpResponseMessage.Content.Headers.ContentDisposition.FileName = filename;\n httpResponseMessage.StatusCode = HttpStatusCode.OK;\n return httpResponseMessage;\n\n" ]
[ 0 ]
[]
[]
[ "asp.net", "c#" ]
stackoverflow_0063134246_asp.net_c#.txt
Q: Builder pattern - child instance cannot work with methods of the parent abstract class I am creating DTO structure with Builder pattern. Because of existence of many requests I created parent request AbstractRequest to create concrete requests - e.g. ConcreteRequest in this example. Base Buildable interface defines contract to all Requests. public interface Buildable<T> { T build(); void validate(); } Parent request AbstractRequest to create concrete ConcreteRequest that holds parameters used by all descendants (for brevity globalValue only in this example). public abstract class AbstractRequest { private final String globalValue; public AbstractRequest(BuilderImpl builder) { this.globalValue = builder.global; } public interface Builder<T> extends Buildable<T> { Builder<T> globalValue(String globalValue); } public abstract static class BuilderImpl<T> implements Builder<T> { private String global; @Override public Builder<T> globalValue(String globalValue) { this.global = globalValue; return this; } } } Concrete request that has one private parameter localValue: public final class ConcreteRequest extends AbstractRequest { private final String localValue; public ConcreteRequest(BuilderImpl builder) { super(builder); this.localValue = builder.localValue; } public String getLocalValue() { return localValue; } public static Builder builder(){ return new BuilderImpl(); } public interface Builder extends AbstractRequest.Builder<ConcreteRequest> { Builder localValue(String localValue); } public static final class BuilderImpl extends AbstractRequest.BuilderImpl<ConcreteRequest> implements Builder { private String localValue; @Override public ConcreteRequest build() { this.validate(); return new ConcreteRequest(this); } @Override public void validate() { // do validation } @Override public Builder localValue(String localValue) { this.localValue = localValue; return this; } } } Q: Why is not ConcreteRequest#getLocalValue accessible while ConcreteRequest#build is available? 
A: I modified my code and it seems it works. public interface Buildable<T> { T build(); void validate(); } Parent class: public abstract class AbstractRequest { private final String globalValue; public AbstractRequest(BuilderImpl builder) { this.globalValue = builder.global; } public String getGlobalValue() { return globalValue; } public interface Builder<B extends Builder, C extends AbstractRequest> extends Buildable<C> { B globalValue(String globalValue); } public abstract static class BuilderImpl<B extends Builder, C extends AbstractRequest> implements Builder<B, C> { private String global; @Override public B globalValue(String globalValue) { this.global = globalValue; return (B) this; } @Override public void validate() { if (global == null) { throw new IllegalArgumentException("Global must not be null"); } } } } and public final class ConcreteRequest extends AbstractRequest { private final String localValue; public ConcreteRequest(BuilderImpl builder) { super(builder); this.localValue = builder.localValue; } public static Builder builder() { return new BuilderImpl(); } public String getLocalValue() { return localValue; } public interface Builder extends AbstractRequest.Builder<Builder, ConcreteRequest> { Builder localValue(String localValue); } public static final class BuilderImpl extends AbstractRequest.BuilderImpl<Builder, ConcreteRequest> implements Builder { private String localValue; @Override public Builder localValue(String localValue) { this.localValue = localValue; return this; } @Override public ConcreteRequest build() { this.validate(); return new ConcreteRequest(this); } @Override public void validate() { super.validate(); if (localValue == null) { throw new IllegalArgumentException("Local must not be null"); } } } } And now I can see all methods:
Builder pattern - child instance cannot work with methods of the parent abstract class
I am creating DTO structure with Builder pattern. Because of existence of many requests I created parent request AbstractRequest to create concrete requests - e.g. ConcreteRequest in this example. Base Buildable interface defines contract to all Requests. public interface Buildable<T> { T build(); void validate(); } Parent request AbstractRequest to create concrete ConcreteRequest that holds parameters used by all descendants (for brevity globalValue only in this example). public abstract class AbstractRequest { private final String globalValue; public AbstractRequest(BuilderImpl builder) { this.globalValue = builder.global; } public interface Builder<T> extends Buildable<T> { Builder<T> globalValue(String globalValue); } public abstract static class BuilderImpl<T> implements Builder<T> { private String global; @Override public Builder<T> globalValue(String globalValue) { this.global = globalValue; return this; } } } Concrete request that has one private parameter localValue: public final class ConcreteRequest extends AbstractRequest { private final String localValue; public ConcreteRequest(BuilderImpl builder) { super(builder); this.localValue = builder.localValue; } public String getLocalValue() { return localValue; } public static Builder builder(){ return new BuilderImpl(); } public interface Builder extends AbstractRequest.Builder<ConcreteRequest> { Builder localValue(String localValue); } public static final class BuilderImpl extends AbstractRequest.BuilderImpl<ConcreteRequest> implements Builder { private String localValue; @Override public ConcreteRequest build() { this.validate(); return new ConcreteRequest(this); } @Override public void validate() { // do validation } @Override public Builder localValue(String localValue) { this.localValue = localValue; return this; } } } Q: Why is not ConcreteRequest#getLocalValue accessible while ConcreteRequest#build is available?
[ "I modified my code and it seems it works.\npublic interface Buildable<T> {\n\n T build();\n\n void validate();\n\n}\n\nParent class:\npublic abstract class AbstractRequest {\n\n private final String globalValue;\n\n public AbstractRequest(BuilderImpl builder) {\n this.globalValue = builder.global;\n }\n\n public String getGlobalValue() {\n return globalValue;\n }\n\n public interface Builder<B extends Builder, C extends AbstractRequest> extends Buildable<C> {\n\n B globalValue(String globalValue);\n\n }\n\n public abstract static class BuilderImpl<B extends Builder, C extends AbstractRequest> implements Builder<B, C> {\n\n private String global;\n\n @Override\n public B globalValue(String globalValue) {\n this.global = globalValue;\n return (B) this;\n }\n\n @Override\n public void validate() {\n if (global == null) {\n throw new IllegalArgumentException(\"Global must not be null\");\n }\n }\n\n }\n\n}\n\nand\npublic final class ConcreteRequest extends AbstractRequest {\n\n private final String localValue;\n\n public ConcreteRequest(BuilderImpl builder) {\n super(builder);\n this.localValue = builder.localValue;\n }\n\n public static Builder builder() {\n return new BuilderImpl();\n }\n\n public String getLocalValue() {\n return localValue;\n }\n\n public interface Builder extends AbstractRequest.Builder<Builder, ConcreteRequest> {\n\n Builder localValue(String localValue);\n\n }\n\n public static final class BuilderImpl extends AbstractRequest.BuilderImpl<Builder, ConcreteRequest> implements Builder {\n\n private String localValue;\n\n @Override\n public Builder localValue(String localValue) {\n this.localValue = localValue;\n return this;\n }\n\n @Override\n public ConcreteRequest build() {\n this.validate();\n return new ConcreteRequest(this);\n }\n\n @Override\n public void validate() {\n super.validate();\n if (localValue == null) {\n throw new IllegalArgumentException(\"Local must not be null\");\n }\n }\n\n }\n\n}\n\nAnd now I can see all methods:\n\n" ]
[ 0 ]
[]
[]
[ "builder_pattern", "java", "oop" ]
stackoverflow_0074658801_builder_pattern_java_oop.txt
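The fix in the answer above is the classic self-referential generic: the base builder is parameterised over the concrete builder type (B extends Builder) so that inherited setters return B instead of the base type. The chaining itself is not the hard part; in a dynamically typed sketch (Python here, illustration only, names invented) the same pattern works with no generics at all, which shows the original problem is purely one of static return types:

```python
class BaseBuilder:
    """Base builder whose fluent setter returns the concrete builder instance."""

    def __init__(self):
        self.global_value = None

    def with_global(self, value):
        self.global_value = value
        return self  # at runtime this is the subclass, so chaining keeps its methods

class ConcreteBuilder(BaseBuilder):
    def __init__(self):
        super().__init__()
        self.local_value = None

    def with_local(self, value):
        self.local_value = value
        return self

    def build(self):
        return {"global": self.global_value, "local": self.local_value}

request = ConcreteBuilder().with_global("G").with_local("L").build()
```

In Java the compiler only sees the declared return type of with_global, which is why the B-typed generic (or a covariant override) is needed to keep the concrete builder's methods visible after a chained call.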
Q: Consolidate each row of dataframe returning a dataframe into output dataframe I am looking for help in a scenario where I have a Scala dataframe PARENT. I need to loop through each record in the PARENT dataframe, query the records from a database based on a filter using the ID value of the parent (the output of this step is a dataframe), and append a few attributes from the parent to the queried dataframe. Ex: ParentDF id parentname 1 X 2 Y Queried Dataframe for id 1 id queryid name 1 23 lobo 1 45 sobo 1 56 aobo Queried Dataframe for id 2 id queryid name 2 53 lama 2 67 dama 2 56 pama Final output required : id parentname queryid name 1 X 23 lobo 1 X 45 sobo 1 X 56 aobo 2 Y 53 lama 2 Y 67 dama 2 Y 56 pama Update1: I tried using foreachPartition with foreach internally to loop through each record and got the below error. error: Unable to find encoder for type org.apache.spark.sql.DataFrame. An implicit Encoder[org.apache.spark.sql.DataFrame] is needed to store org.apache.spark.sql.DataFrame instances in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases. falttenedData.map(row=>{ I need to do this with scalability please. Any help is really appreciated. 
Some example code would be something like this: val df = Seq((1, "X"), (2, "Y")).toDF("id", "parentname") df.show +---+----------+ | id|parentname| +---+----------+ | 1| X| | 2| Y| +---+----------+ val df2 = Seq((1, 23, "lobo"), (1, 45, "sobo"), (1, 56, "aobo"), (2, 53, "lama"), (2, 67, "dama"), (2, 56, "pama")).toDF("id", "queryid", "name") df2.show +---+-------+----+ | id|queryid|name| +---+-------+----+ | 1| 23|lobo| | 1| 45|sobo| | 1| 56|aobo| | 2| 53|lama| | 2| 67|dama| | 2| 56|pama| +---+-------+----+ val output=df.join(df2, Seq("id")) output.show +---+----------+-------+----+ | id|parentname|queryid|name| +---+----------+-------+----+ | 1| X| 23|lobo| | 1| X| 45|sobo| | 1| X| 56|aobo| | 2| Y| 53|lama| | 2| Y| 67|dama| | 2| Y| 56|pama| +---+----------+-------+----+ Hope this helps! :) A: import org.apache.spark.sql.functions._ // Load the parent dataframe val parentDF = spark.read.parquet("/path/to/parent.parquet") // Loop through each row in the parent dataframe parentDF.foreach { row => val parentID = row.getAs[Long]("id") // Query the records from the database using the parent ID val query = s"SELECT * FROM my_table WHERE parent_id = $parentID" val childDF = spark.sql(query) // Append the attributes from the parent row to the child dataframe val childWithParentDF = childDF.withColumn("parent_name", lit(row.getAs[String]("name"))) .withColumn("parent_email", lit(row.getAs[String]("email"))) // Append the child dataframe to the output dataframe val outputDF = outputDF.union(childWithParentDF) } In this example, the parentDF dataframe is loaded from a Parquet file. Then, a foreach loop is used to iterate over each
Consolidate each row of dataframe returning a dataframe into output dataframe
I am looking for help in a scenario where I have a Scala dataframe PARENT. I need to loop through each record in the PARENT dataframe, query the records from a database based on a filter using the ID value of the parent (the output of this step is a dataframe), and append a few attributes from the parent to the queried dataframe. Ex: ParentDF id parentname 1 X 2 Y Queried Dataframe for id 1 id queryid name 1 23 lobo 1 45 sobo 1 56 aobo Queried Dataframe for id 2 id queryid name 2 53 lama 2 67 dama 2 56 pama Final output required : id parentname queryid name 1 X 23 lobo 1 X 45 sobo 1 X 56 aobo 2 Y 53 lama 2 Y 67 dama 2 Y 56 pama Update1: I tried using foreachPartition with foreach internally to loop through each record and got the below error. error: Unable to find encoder for type org.apache.spark.sql.DataFrame. An implicit Encoder[org.apache.spark.sql.DataFrame] is needed to store org.apache.spark.sql.DataFrame instances in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases. falttenedData.map(row=>{ I need to do this with scalability please. Any help is really appreciated.
[ "The solution is pretty straightforward, you just need to join your parentDF and your other one.\nparentDF.join(\n otherDF,\n Seq(\"id\"),\n \"left\"\n)\n\nAs you're caring about scalability, In case your \"otherDF\" is quite small (it has less than 10K rows for example with 2-3 cols), you should consider using broadcast join : parentDF.join(broadcast(otherDF), Seq(\"id\"), \"left).\n", "You can use the .join method on a dataframe for this one.\nSome example code would be something like this:\nval df = Seq((1, \"X\"), (2, \"Y\")).toDF(\"id\", \"parentname\")\ndf.show\n+---+----------+ \n| id|parentname| \n+---+----------+ \n| 1| X| \n| 2| Y| \n+---+----------+\n\nval df2 = Seq((1, 23, \"lobo\"), (1, 45, \"sobo\"), (1, 56, \"aobo\"), (2, 53, \"lama\"), (2, 67, \"dama\"), (2, 56, \"pama\")).toDF(\"id\", \"queryid\", \"name\")\ndf2.show\n+---+-------+----+ \n| id|queryid|name| \n+---+-------+----+ \n| 1| 23|lobo| \n| 1| 45|sobo| \n| 1| 56|aobo| \n| 2| 53|lama| \n| 2| 67|dama| \n| 2| 56|pama| \n+---+-------+----+\n\nval output=df.join(df2, Seq(\"id\"))\noutput.show\n+---+----------+-------+----+ \n| id|parentname|queryid|name| \n+---+----------+-------+----+ \n| 1| X| 23|lobo| \n| 1| X| 45|sobo| \n| 1| X| 56|aobo| \n| 2| Y| 53|lama| \n| 2| Y| 67|dama| \n| 2| Y| 56|pama| \n+---+----------+-------+----+\n\nHope this helps! 
:)\n", "import org.apache.spark.sql.functions._\n\n// Load the parent dataframe\nval parentDF = spark.read.parquet(\"/path/to/parent.parquet\")\n\n// Loop through each row in the parent dataframe\nparentDF.foreach { row =>\n val parentID = row.getAs[Long](\"id\")\n\n // Query the records from the database using the parent ID\n val query = s\"SELECT * FROM my_table WHERE parent_id = $parentID\"\n val childDF = spark.sql(query)\n\n // Append the attributes from the parent row to the child dataframe\n val childWithParentDF = childDF.withColumn(\"parent_name\", lit(row.getAs[String](\"name\")))\n .withColumn(\"parent_email\", lit(row.getAs[String](\"email\")))\n\n // Append the child dataframe to the output dataframe\n val outputDF = outputDF.union(childWithParentDF)\n}\n\nIn this example, the parentDF dataframe is loaded from a Parquet file. Then, a foreach loop is used to iterate over each\n" ]
[ 2, 1, 0 ]
[]
[]
[ "apache_spark", "dataframe", "scala" ]
stackoverflow_0074526051_apache_spark_dataframe_scala.txt
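The join the answers above recommend is plain relational semantics. As a sanity check on the expected output, here is the same id-join written as a small pure-Python sketch (illustration only; in Spark you would keep df.join(df2, Seq("id")) and let the engine parallelise it):

```python
parent = [(1, "X"), (2, "Y")]
child = [(1, 23, "lobo"), (1, 45, "sobo"), (1, 56, "aobo"),
         (2, 53, "lama"), (2, 67, "dama"), (2, 56, "pama")]

# Build a lookup on the join key, then emit one output row per child row.
parent_name = {pid: name for pid, name in parent}
output = [(cid, parent_name[cid], qid, name) for cid, qid, name in child]
```

This reproduces exactly the "Final output required" table from the question, which is why no per-row loop (and no nested DataFrame creation) is needed.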
Q: Regex match 10 characters after second pattern I would like to match 10 characters after the second pattern: My String: www.mysite.de/ep/3423141549/ep/B104RHWZZZ?something What I want to be matched: B104RHWZZZ What the regex currently matches: B104RHWZZZ?something Currently, my Regex looks like this: (?<=\/ep\/)(?:(?!\/ep\/).)*$. Could someone help me to change the regex so that it only matches 10 characters after the second "/ep/" ("B104RHWZZZ")? A: It depends on which characters you allow to match. If you want to allow 10 non-whitespace characters not being / or ?, then you could use: (?<=\/ep\/)[^\/?\s]{10}(?=[^\/\s]*$) Explanation (?<=\/ep\/) Assert /ep/ directly to the left [^\/?\s]{10} Match 10 times any non-whitespace character except for / and ? (?=[^\/\s]*$) Assert no more occurrences of / to the right Regex demo Or matching 1+ chars other than / ? & instead of exactly 10: (?<=\/ep\/)[^\/?&\s]+(?=[^\/\s]*$) Regex demo A: This would match the string as matching group 1: ep\/\w+\/ep\/(\w+) https://regex101.com/r/9tUjxG/1 While lookarounds can make this expression more sophisticated so that you won't require matching groups, it makes (in my experience) the expression hard to read, understand and maintain/extend. That's why I would always keep regexes as simple as possible.
Regex match 10 characters after second pattern
I would like to match 10 characters after the second pattern: My String: www.mysite.de/ep/3423141549/ep/B104RHWZZZ?something What I want to be matched: B104RHWZZZ What the regex currently matches: B104RHWZZZ?something Currently, my Regex looks like this: (?<=\/ep\/)(?:(?!\/ep\/).)*$. Could someone help me to change the regex that it only matches 10 characters after the second "/ep/" ("B104RHWZZZ")?
[ "It depends on which characters you allow to match. If you want to allow 10 non whitspace characters characters not being / or ? then you could use;\n(?<=\\/ep\\/)[^\\/?\\s]{10}(?=[^\\/\\s]*$)\n\nExplanation\n\n(?<=\\/ep\\/) Assert /ep/ directly to the left\n[^\\/?\\s]{10} Match 10 times any non whitespace character except for / and ?\n(?=[^\\/\\s]*$) Assert no more occurrence of / to the right\n\nRegex demo\nOr matching 1+ chars other than / ? & instead of exactly 10:\n(?<=\\/ep\\/)[^\\/?&\\s]+(?=[^\\/\\s]*$)\n\nRegex demo\n", "This would match the string as matching group 1:\nep\\/\\w+\\/ep\\/(\\w+)\nhttps://regex101.com/r/9tUjxG/1\nWhile lookarounds can make this expression more sophisticated so that you won't require matching groups, it makes (in my experiences) the expression hard to read, understand and maintain/extend.\nThat's why I would always keep regexes as simple as possible.\n" ]
[ 1, 1 ]
[]
[]
[ "regex" ]
stackoverflow_0074659344_regex.txt
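Both patterns from the answers above can be checked quickly against the sample URL. A small Python verification sketch (same patterns, just without the escaped forward slashes Python does not need):

```python
import re

url = "www.mysite.de/ep/3423141549/ep/B104RHWZZZ?something"

# Lookbehind/lookahead version from the first answer: the lookahead rejects
# the first /ep/ group because a later "/" still follows it.
strict = re.search(r"(?<=/ep/)[^/?\s]{10}(?=[^/\s]*$)", url)

# Capture-group version from the second answer.
grouped = re.search(r"ep/\w+/ep/(\w+)", url)
```

Both approaches extract the same B104RHWZZZ token and stop before the `?` query separator.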
Q: Bridge table for many-to-many relationship I'm confused about the structure of the bridge table. The star schema book structures the bridge table with a group_key. Example: if I have fact_orders ( order_sk , order_nk, amount, group_key) dim_sales_person ( sales_person_SK, sales_person_nk, name) the bridge_table ==> orders_salesperson_bridge_table : ( group_key, sales_person_SK) From any other source (Google, YouTube, ...), the definition of the bridge table is: a junction table in a database, also referred to as a Bridge table or Associative Table, bridges the tables together by referencing the primary keys of each data table. Structure: fact_orders ( order_sk , order_nk, amount) dim_sales_person ( sales_person_SK, sales_person_nk, name) bridge table --> orders_salesperson_bridge_table : ( Order_id, sales_person_SK) When should I choose each technique, and why? Thanks for any help. A: The SALES_GROUP table is there to allow you to allocate the value of a sale proportionately to multiple sales people. So in your example the sales amount of 1000 would be allocated as 750/250 between the 2 sales people
Bridge table for many-to-many relationship
I'm confused about the structure of the bridge table. The star schema book structures the bridge table with a group_key. Example: if I have fact_orders ( order_sk , order_nk, amount, group_key) dim_sales_person ( sales_person_SK, sales_person_nk, name) the bridge_table ==> orders_salesperson_bridge_table : ( group_key, sales_person_SK) From any other source (Google, YouTube, ...), the definition of the bridge table is: a junction table in a database, also referred to as a Bridge table or Associative Table, bridges the tables together by referencing the primary keys of each data table. Structure: fact_orders ( order_sk , order_nk, amount) dim_sales_person ( sales_person_SK, sales_person_nk, name) bridge table --> orders_salesperson_bridge_table : ( Order_id, sales_person_SK) When should I choose each technique, and why? Thanks for any help.
[ "The SALES_GROUP table is there to allow you to allocate the value of a sale proportionately to multiple sales people. So in your example the sales amount of 1000 would be allocated as 750/250 between the 2 sales people\n" ]
[ 0 ]
[]
[]
[ "kimball", "many_to_many", "powerbi", "star_schema_datawarehouse" ]
stackoverflow_0074618952_kimball_many_to_many_powerbi_star_schema_datawarehouse.txt
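The answer's 750/250 split comes from carrying a weighting factor on each bridge row: the fact's amount is multiplied by the weight for each sales person in the group, with the weights in a group summing to 1. A small Python sketch of the group_key variant (column names taken from the question; the weights are an assumed addition, as the answer implies but does not show them):

```python
fact_orders = [
    # (order_sk, amount, group_key)
    (101, 1000.0, "G1"),
]

# Bridge rows: one per (group, sales person), with an allocation weight per group.
sales_group_bridge = [
    ("G1", "SP1", 0.75),
    ("G1", "SP2", 0.25),
]

# Join the fact to the bridge on group_key and allocate the amount by weight.
allocated = [
    (order_sk, person, amount * weight)
    for order_sk, amount, group in fact_orders
    for g, person, weight in sales_group_bridge
    if g == group
]
```

The plain junction-table variant (Order_id, sales_person_SK) answers "who was involved" but cannot split the amount; the group_key-plus-weight variant is the one to choose when the fact's measure must be allocated across the group.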
Q: Encode accented characters from http request to email body Good day. I have a Java Tomcat application that allows filling in a form with a textarea field. The submit goes to a servlet where the request fields are read and an e-mail to send is composed. In order to send the e-mail I use JavaMail. The textarea field is inserted via the setText method of the MimeMessage class. When I receive the e-mail in my client the accented characters are visible as ? (question mark). "Avrei bisogno di più informazione sull'immobile ed in particolare sulla sua qualità e ricevere delle foto." "Avrei bisogno di pi? informazione sull'immobile ed in particolare sulla sua qualit? e ricevere delle foto." Consider that the JSP page is UTF-8 encoded and also the servlet processRequest starts with request.setCharacterEncoding("UTF-8"); In fact, if I put the value in a MySQL table I have no problems. How can I resolve the problem? Best regards. Stefano Errani A: Try this: MimeMessage msg = new MimeMessage(your session); msg.setContent(your content, "text/html; charset=utf-8"); I hope it will help!
Encode accented characters from http request to email body
Good day. I have a Java Tomcat application that allows filling in a form with a textarea field. The submit goes to a servlet where the request fields are read and an e-mail to send is composed. In order to send the e-mail I use JavaMail. The textarea field is inserted via the setText method of the MimeMessage class. When I receive the e-mail in my client the accented characters are visible as ? (question mark). "Avrei bisogno di più informazione sull'immobile ed in particolare sulla sua qualità e ricevere delle foto." "Avrei bisogno di pi? informazione sull'immobile ed in particolare sulla sua qualit? e ricevere delle foto." Consider that the JSP page is UTF-8 encoded and also the servlet processRequest starts with request.setCharacterEncoding("UTF-8"); In fact, if I put the value in a MySQL table I have no problems. How can I resolve the problem? Best regards. Stefano Errani
[ "Try that:\nMimeMessage msg = new MimeMessage(your session);\nmsg.setContent(your content, \"text/html; charset=utf-8\");\n\nI hope, it will help!\n" ]
[ 0 ]
[]
[]
[ "character_encoding", "jakarta_mail", "servlets" ]
stackoverflow_0074659705_character_encoding_jakarta_mail_servlets.txt
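The '?' substitution above is the classic symptom of accented characters being forced through a charset that cannot represent them; declaring utf-8 on the MIME part (as the answer shows) avoids the lossy step. The effect can be reproduced in a couple of lines (Python here purely to demonstrate the encoding behaviour, not JavaMail itself):

```python
text = "più qualità"

# Forcing the text through a 7-bit charset replaces what it cannot represent:
lossy = text.encode("ascii", errors="replace").decode("ascii")

# Declaring a capable charset keeps the accents through the round trip:
safe = text.encode("utf-8").decode("utf-8")
```

The lossy path turns every accented letter into '?', which is exactly what the question's mail client displayed.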
Q: ACE editor auto-completion duplicating prefix I'm trying to add auto-completions for mermaid diagrams to my editor: const mermaids = Object.entries({ "mermaid graph": `graph LR\n x --> y`, }).map(([name, autocompletion]) => ({ caption: name, meta: name, value: "``mermaid\n" + autocompletion + "\n```" })); aceeditor.setOptions({ enableBasicAutocompletion: [{ getCompletions: (editor, session, pos, prefix, callback) => { callback(null, [ ...mermaids ]) } }], enableSnippets: false, enableLiveAutocompletion: true }); In the resulting editor, if the user types "graph" or "mermaid" and hits enter to auto-complete, it works as expected. (With the exception of less-than desirable cursor position after the completion.) If the user types "```" and hits enter, the autocompletion occurs after the originally typed "```". E.g., ``````mermaid graph LR x --> y \``` <-- just escaped here for SO's sake Is there an efficient way to correct this? If not, what event can I use to determine when an auto-completion has actually occurred and search for duplicate markers? Is there a better way to do this in general? A: I think the broader issue here is that your completion item more closely resembles a snippet. Using snippets would also give you better control over where the cursor goes after insertion. Answering your second question, Autocomplete.insertMatch is the function responsible for inserting the chosen completion item. You could hook it, or perhaps use the mysteriously undocumented .completer field on the completion items? It's been there for 9 years now, perhaps it is not an accident. 
const FakeCompleter = { insertMatch: (editor, data) => { // stolen from default insertMatch, needed to erase text that triggered our completion: if (editor.completer.completions.filterText) { var ranges = editor.selection.getAllRanges() for (var i = 0, range; range = ranges[i]; i++) { range.start.column -= editor.completer.completions.filterText.length editor.session.remove(range) } } // if there are ` symbols immediately before cursor, omit those from completion: let text = (data.value || data) const lead = editor.selection.lead const prefix = editor.session.getLine(lead.row).substring(0, lead.column) const mt = /.*?(`{1,3})$/.exec(prefix) if (mt && text.startsWith(mt[1])) text = text.substring(mt[1].length) // and finally call the regular insertion: editor.execCommand("insertstring", text) } } const mermaids = Object.entries({ "mermaid graph": `graph LR\n x --> y`, }).map(([name, autocompletion]) => ({ caption: name, meta: name, value: "``mermaid\n" + autocompletion + "\n```", completer: FakeCompleter }));
ACE editor auto-completion duplicating prefix
I'm trying to add auto-completions for mermaid diagrams to my editor: const mermaids = Object.entries({ "mermaid graph": `graph LR\n x --> y`, }).map(([name, autocompletion]) => ({ caption: name, meta: name, value: "``mermaid\n" + autocompletion + "\n```" })); aceeditor.setOptions({ enableBasicAutocompletion: [{ getCompletions: (editor, session, pos, prefix, callback) => { callback(null, [ ...mermaids ]) } }], enableSnippets: false, enableLiveAutocompletion: true }); In the resulting editor, if the user types "graph" or "mermaid" and hits enter to auto-complete, it works as expected. (With the exception of less-than desirable cursor position after the completion.) If the user types "```" and hits enter, the autocompletion occurs after the originally typed "```". E.g., ``````mermaid graph LR x --> y \``` <-- just escaped here for SO's sake Is there an efficient way to correct this? If not, what event can I use to determine when an auto-completion has actually occurred and search for duplicate markers? Is there a better way to do this in general?
[ "I think the broader issue here is that your completion item more closely resembles a snippet. Using snippets would also give you better control over where the cursor goes after insertion.\nAnswering your second question, Autocomplete.insertMatch is the function responsible for inserting the chosen completion item. You could hook it, or perhaps use the mysteriously undocumented .completer field on the completion items? It's been there for 9 years now, perhaps it is not an accident.\nconst FakeCompleter = {\n insertMatch: (editor, data) => {\n // stolen from default insertMatch, needed to erase text that triggered our completion:\n if (editor.completer.completions.filterText) {\n var ranges = editor.selection.getAllRanges()\n for (var i = 0, range; range = ranges[i]; i++) {\n range.start.column -= editor.completer.completions.filterText.length\n editor.session.remove(range)\n }\n }\n \n // if there are ` symbols immediately before cursor, omit those from completion:\n let text = (data.value || data)\n const lead = editor.selection.lead\n const prefix = editor.session.getLine(lead.row).substring(0, lead.column)\n const mt = /.*?(`{1,3})$/.exec(prefix)\n if (mt && text.startsWith(mt[1])) text = text.substring(mt[1].length)\n \n // and finally call the regular insertion:\n editor.execCommand(\"insertstring\", text)\n }\n}\nconst mermaids = Object.entries({\n \"mermaid graph\": `graph LR\\n x --> y`,\n}).map(([name, autocompletion]) => ({\n caption: name,\n meta: name,\n value: \"``mermaid\\n\" + autocompletion + \"\\n```\",\n completer: FakeCompleter\n}));\n\n\n" ]
[ 0 ]
[]
[]
[ "ace_editor" ]
stackoverflow_0074384161_ace_editor.txt
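The `insertMatch` hook in the answer above strips a run of backticks the user already typed before inserting the completion. The core of that regex logic, transposed to Python as an illustration (the function name and inputs are hypothetical, not part of the ACE API):

```python
import re

def dedupe_backticks(line_before_cursor, completion):
    # Mirror of /.*?(`{1,3})$/ from the answer: if up to three backticks sit
    # immediately before the cursor and the completion starts with that same
    # run, drop the duplicated run from the inserted text.
    m = re.search(r"(`{1,3})$", line_before_cursor)
    if m and completion.startswith(m.group(1)):
        return completion[len(m.group(1)):]
    return completion

dedupe_backticks("```", "```mermaid\ngraph LR")    # "mermaid\ngraph LR"
dedupe_backticks("graph", "```mermaid\ngraph LR")  # unchanged
```

The same guard generalizes to any trigger characters that can also appear at the start of a completion value, not just backticks.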
Q: calculate n workdays or last workday from a given date, using calendar table, with SQL for Impala/Oracle I need to perform calculations using a calendar table, whose specifications I present below: Add or Subtract N workdays, excluding weekends and holydays. Get the last workday from previous month, excluding weekends and holydays, from a given date. Columns explanation: ref_date : days of the year - (the date we need to calc...) civil_util : '0' -> holydays and weekends --- '1' are workdays target_util : '0' -> weekends --- '1' are workdays ano : correspondent year. prev_wkday : previous ref_date, using Lag() function next_wkday : next ref_date, using Lead() function. SQL that generates the example below: select *, LAG (to_date(ref_date),1) OVER (ORDER BY to_date(ref_date)) AS prev_wkday, Lead (to_date(ref_date),1) OVER (ORDER BY to_date(ref_date)) AS next_wkday from cd_estruturais.calendario_datas where ano = 2022 and ref_date between '2022-11-30' and date_add('2022-11-30',5) --and civil_util = 1 --limit 1 I need to answer both questions 1) and 2), using SQL Impala/Oracle. Regarding question 1), to get the next (1) workday from '2022-11-30', we could add in the above query, the both criteria (civil_util = 1 with limit 1), because civil_util = 1 selects only workdays, excluding weekends and holydays. The answer is '2022-12-02'. I need the most efficient sql to calc the (n) workdays after and before '2022-11-30'. Regarding question 2), to get the last workday from previous month from '2022-11-30', we must get the ref_date '2022-10-31'. this was the last workday of previous month. Can anyone help please? 
A: Try with calendar(ref_date, civil_util, target_util) as ( select to_date('2022-11-30','yyyy-mm-dd'), 1, 1 from dual union all select to_date('2022-12-01','yyyy-mm-dd'), 0, 1 from dual union all select to_date('2022-12-02','yyyy-mm-dd'), 1, 1 from dual union all select to_date('2022-12-03','yyyy-mm-dd'), 0, 0 from dual union all select to_date('2022-12-04','yyyy-mm-dd'), 0, 0 from dual union all select to_date('2022-12-05','yyyy-mm-dd'), 1, 1 from dual -- union all ), newcalendar(ref_date, civil_util, target_util, workday, workday_rnk, last_wd_month) as ( select ref_date, civil_util, target_util, workday, workday_rnk, last_value(last_wd_month) ignore nulls over(partition by trunc(ref_date,'MM')) as last_wd_month from ( select ref_date, civil_util, target_util, workday, /*nvl2(workday,workday_rnk,null) as*/ workday_rnk, max(nvl2(workday,ref_date,null)) over(partition by trunc(ref_date,'MM'), workday) as last_wd_month from ( select c.*, sum(workday) over(order by ref_date) as workday_rnk from ( select c.*, case when civil_util+target_util>0 then 1 end as workday from calendar c ) c ) ) ) select * from newcalendar order by ref_date ; For the point 1, you just add/substract from workday_rnk, and the ref_date of the corresponding row is your answer. Note that this version with the "nvl2(workday,workday_rnk,null)" commented out works also if the day is an off day, if you don't want that just remove the comment. For the point 2, all rows have "last_wd_month" set, so it's just a matter of where clause on it (with trunc(last_wd_month,'MM') = target). REF_DATE CIVIL_UTIL TARGET_UTIL WORKDAY WORKDAY_RNK LAST_WD_ -------- ---------- ----------- ---------- ----------- -------- 30/11/22 1 1 1 1 30/11/22 01/12/22 0 1 1 2 05/12/22 02/12/22 1 1 1 3 05/12/22 03/12/22 0 0 3 05/12/22 04/12/22 0 0 3 05/12/22 05/12/22 1 1 1 4 05/12/22
calculate n workdays or last workday from a given date, using calendar table, with SQL for Impala/Oracle
I need to perform calculations using a calendar table, whose specifications I present below: Add or Subtract N workdays, excluding weekends and holydays. Get the last workday from previous month, excluding weekends and holydays, from a given date. Columns explanation: ref_date : days of the year - (the date we need to calc...) civil_util : '0' -> holydays and weekends --- '1' are workdays target_util : '0' -> weekends --- '1' are workdays ano : correspondent year. prev_wkday : previous ref_date, using Lag() function next_wkday : next ref_date, using Lead() function. SQL that generates the example below: select *, LAG (to_date(ref_date),1) OVER (ORDER BY to_date(ref_date)) AS prev_wkday, Lead (to_date(ref_date),1) OVER (ORDER BY to_date(ref_date)) AS next_wkday from cd_estruturais.calendario_datas where ano = 2022 and ref_date between '2022-11-30' and date_add('2022-11-30',5) --and civil_util = 1 --limit 1 I need to answer both questions 1) and 2), using SQL Impala/Oracle. Regarding question 1), to get the next (1) workday from '2022-11-30', we could add in the above query, the both criteria (civil_util = 1 with limit 1), because civil_util = 1 selects only workdays, excluding weekends and holydays. The answer is '2022-12-02'. I need the most efficient sql to calc the (n) workdays after and before '2022-11-30'. Regarding question 2), to get the last workday from previous month from '2022-11-30', we must get the ref_date '2022-10-31'. this was the last workday of previous month. Can anyone help please?
[ "Try\nwith calendar(ref_date, civil_util, target_util) as (\n select to_date('2022-11-30','yyyy-mm-dd'), 1, 1 from dual union all\n select to_date('2022-12-01','yyyy-mm-dd'), 0, 1 from dual union all \n select to_date('2022-12-02','yyyy-mm-dd'), 1, 1 from dual union all \n select to_date('2022-12-03','yyyy-mm-dd'), 0, 0 from dual union all \n select to_date('2022-12-04','yyyy-mm-dd'), 0, 0 from dual union all \n select to_date('2022-12-05','yyyy-mm-dd'), 1, 1 from dual -- union all \n),\nnewcalendar(ref_date, civil_util, target_util, workday, workday_rnk, last_wd_month) as ( \n select ref_date, civil_util, target_util, workday, workday_rnk, \n last_value(last_wd_month) ignore nulls over(partition by trunc(ref_date,'MM')) as last_wd_month\n from (\n select ref_date, civil_util, target_util, workday, /*nvl2(workday,workday_rnk,null) as*/ workday_rnk,\n max(nvl2(workday,ref_date,null)) over(partition by trunc(ref_date,'MM'), workday) as last_wd_month\n from (\n select c.*, sum(workday) over(order by ref_date) as workday_rnk\n from (\n select c.*, case when civil_util+target_util>0 then 1 end as workday \n from calendar c\n ) c\n )\n )\n)\nselect * from newcalendar\norder by ref_date\n;\n\nFor the point 1, you just add/substract from workday_rnk, and the ref_date of the corresponding row is your answer.\nNote that this version with the \"nvl2(workday,workday_rnk,null)\" commented out works also if the day is an off day, if you don't want that just remove the comment.\nFor the point 2, all rows have \"last_wd_month\" set, so it's just a matter of where clause on it (with trunc(last_wd_month,'MM') = target).\nREF_DATE CIVIL_UTIL TARGET_UTIL WORKDAY WORKDAY_RNK LAST_WD_\n-------- ---------- ----------- ---------- ----------- --------\n30/11/22 1 1 1 1 30/11/22\n01/12/22 0 1 1 2 05/12/22\n02/12/22 1 1 1 3 05/12/22\n03/12/22 0 0 3 05/12/22\n04/12/22 0 0 3 05/12/22\n05/12/22 1 1 1 4 05/12/22\n\n" ]
[ 0 ]
[]
[]
[ "impala", "oracle", "sql" ]
stackoverflow_0074658334_impala_oracle_sql.txt
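The `workday_rnk` running sum in the answer above is what turns "add N workdays" into a simple rank lookup instead of a row-by-row walk. A small Python sketch of the same idea, using the answer's sample calendar (dates copied from its output table; the `workday` flag stands in for the answer's `civil_util + target_util > 0` rule):

```python
from datetime import date

# (ref_date, workday) rows mirroring the answer's sample output.
calendar = [
    (date(2022, 11, 30), True),
    (date(2022, 12, 1),  True),
    (date(2022, 12, 2),  True),
    (date(2022, 12, 3),  False),
    (date(2022, 12, 4),  False),
    (date(2022, 12, 5),  True),
]

def add_workdays(start, n):
    # Running count of workdays = workday_rnk; moving n workdays forward or
    # back is just finding the workday row whose rank is rank(start) + n.
    rank, ranks = 0, {}
    for day, is_workday in calendar:
        rank += is_workday
        ranks[day] = rank
    target = ranks[start] + n
    for day, is_workday in calendar:
        if is_workday and ranks[day] == target:
            return day
    return None

add_workdays(date(2022, 11, 30), 3)  # date(2022, 12, 5), skipping the weekend
```

Negative `n` walks backwards with the same lookup, which is why the SQL version only needs to add or subtract from `workday_rnk`.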
Q: Expected an assignment or function call and instead saw an expression no-unused-expressions - React Js Crud I want to activate the delete button in the table using its id. It's a crud app, I wrote this code for "Edit" and "Add" operations, and I wrote this code for deleting a row(product). But I am getting an error: "Expected an assignment or function call and instead saw an expression no-unused-expressions". Where do I have a mistake? Have I written the onClick event correctly? const Products = () => { const[products, setProducts] = useState([]); useEffect(() => { loadProducts(); },[]); const loadProducts = async () => { const result = await axios.get("http://localhost:3001/products"); setProducts(result.data); }; const deleteUSer = async id => { await axios.delete(`http://localhost:3001/products/${id}`); loadProducts(); } <Link className='btn btn-otline-primary m-2' to={`/product/edit/${product.id}`}>Edit</Link> <Link className='btn btn-danger m-2' onClick={()=>{deleteUSer}}>Delete</Link> A: You are miss-calling your deleteUser It should either be called explicitly like: <Link className='btn btn-danger m-2' onClick={()=>{deleteUSer()}}>Delete</Link> Or implicitly like: <Link className='btn btn-danger m-2' onClick={deleteUSer}>Delete</Link>
Expected an assignment or function call and instead saw an expression no-unused-expressions - React Js Crud
I want to activate the delete button in the table using its id. It's a crud app, I wrote this code for "Edit" and "Add" operations, and I wrote this code for deleting a row(product). But I am getting an error: "Expected an assignment or function call and instead saw an expression no-unused-expressions". Where do I have a mistake? Have I written the onClick event correctly? const Products = () => { const[products, setProducts] = useState([]); useEffect(() => { loadProducts(); },[]); const loadProducts = async () => { const result = await axios.get("http://localhost:3001/products"); setProducts(result.data); }; const deleteUSer = async id => { await axios.delete(`http://localhost:3001/products/${id}`); loadProducts(); } <Link className='btn btn-otline-primary m-2' to={`/product/edit/${product.id}`}>Edit</Link> <Link className='btn btn-danger m-2' onClick={()=>{deleteUSer}}>Delete</Link>
[ "You are miss-calling your deleteUser\nIt should either be called explicitly like:\n<Link className='btn btn-danger m-2' onClick={()=>{deleteUSer()}}>Delete</Link>\nOr implicitly like:\n<Link className='btn btn-danger m-2' onClick={deleteUSer}>Delete</Link>\n" ]
[ 0 ]
[]
[]
[ "crud", "delete_row", "reactjs" ]
stackoverflow_0074658268_crud_delete_row_reactjs.txt
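The bug in the entry above is language-independent: the arrow body `{deleteUSer}` evaluates the function object without ever calling it. The same mistake transposed to Python for illustration (the handler names are mine):

```python
calls = []

def delete_user(user_id):
    calls.append(user_id)

# Names the function but never calls it -- the handler fires and nothing
# happens, just like onClick={() => {deleteUSer}} in the question.
broken_handler = lambda: delete_user

# Defers the call until invoked, like onClick={() => deleteUSer(id)}.
working_handler = lambda: delete_user(42)

broken_handler()
working_handler()
print(calls)  # [42]
```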
Q: Powershell - Efficient way to pull event viewer logs Currently it takes 15-20 seconds to pull specific event viewer logs, is there a more efficient way to accomplish the same end result? I need the last 5 minutes' worth of application logs for Instance ID 21. Start-Transcript -Path C:\Windows\Blah\Data\Logs\Temp\StatusErrors.TXT -Append -Force -ErrorAction SilentlyContinue Get-EventLog -LogName application -After (Get-Date).AddMinutes(-5) -InstanceID 21 -Message "*device*" | Select-Object -ExpandProperty message Stop-Transcript A: I am not getting into the logic of it because already it is yielding results. Get-Eventlog is kinda obsolete. Use Get-WinEvent where you can use advanced XPath and XML filters and the log will use its indexes to return targeted events very quickly. A sample below: $filter = @{ LogName = 'application' ID = 21 StartTime = (Get-Date).AddMinutes(-5) } #$Computer = "Hostname" ## In case you are running it remotely Get-WinEvent -FilterHashTable $filter #-ComputerName $Computer (Commented out since it is when you run remotely) Hope it helps.
Powershell - Efficient way to pull event viewer logs
Currently it takes 15-20 seconds to pull specific event viewer logs, is there a more efficient way to accomplish the same end result? I need the last 5 minutes' worth of application logs for Instance ID 21. Start-Transcript -Path C:\Windows\Blah\Data\Logs\Temp\StatusErrors.TXT -Append -Force -ErrorAction SilentlyContinue Get-EventLog -LogName application -After (Get-Date).AddMinutes(-5) -InstanceID 21 -Message "*device*" | Select-Object -ExpandProperty message Stop-Transcript
[ "I am not getting into the logic of it because already it is yielding results. Get-Eventlog is kinda obsolete. Use Get-WinEvent where you can use advanced XPath and XML filters and the log will use its indexes to return targeted events very quickly.\nA sample below:\n$filter = @{\n LogName = 'application'\n ID = 21\n StartTime = (Get-Date).AddMinutes(-5) \n}\n#$Computer = \"Hostname\" ## In case you are running it remotely\nGet-WinEvent -FilterHashTable $filter #-ComputerName $Computer (Commented out since it is when you run remotely)\n\nHope it helps.\n" ]
[ 2 ]
[]
[]
[ "event_viewer", "performance", "powershell" ]
stackoverflow_0074659675_event_viewer_performance_powershell.txt
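What makes the `Get-WinEvent` filter hashtable fast is that the log name, event ID, and start time are pushed down to the log's index instead of being applied after every event has been materialized. The shape of that pre-filter, sketched in Python (the event tuples are made-up stand-ins for log records, not a Windows API):

```python
from datetime import datetime, timedelta

events = [
    (datetime(2022, 12, 2, 10, 0),  21, "device added"),
    (datetime(2022, 12, 2, 10, 58),  7, "other event"),
    (datetime(2022, 12, 2, 10, 59), 21, "device removed"),
]

def recent_events(events, event_id, minutes, now):
    # Keep only events with the wanted ID in the last `minutes` minutes --
    # the equivalent of the ID/StartTime keys in the filter hashtable.
    cutoff = now - timedelta(minutes=minutes)
    return [msg for ts, eid, msg in events if ts >= cutoff and eid == event_id]

recent_events(events, 21, 5, datetime(2022, 12, 2, 11, 0))  # ['device removed']
```

The message wildcard (`-Message "*device*"` in the question) cannot be indexed, so if it is still needed it is best applied with `Where-Object` after the indexed filter has narrowed the set.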
Q: How to succinctly convert an iterator over &str to a collection of String I am new to Rust, and it seems very awkward to use sequences of functional transformations on strings, because they often return &str. For example, here is an implementation in which I try to read lines of two words separated by a space, and store them into a container of tuples: use itertools::Itertools; fn main() { let s = std::io::stdin() .lines() .map(|l| l.unwrap()) .map(|l| { l.split(" ") .collect_tuple() .map(|(a, b)| (a.to_string(), b.to_string())) .unwrap() }) .collect::<Vec<_>>(); println!("{:?}", s); } https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=7f6d370457cc3254195565f69047018c Because split returns an iterator to &str objects, whose scope is the lambda used for the map, the only way I saw to return them was to manually convert them back to strings. This seems really awkward. Is there a better way to implement such a program? A: Rust is explicit about allocation. The Strings returned by the lines() iterator don't persist beyond the iterator chain, so you can't just store references into them. Therefore, logically, there needs to be a to_string() (or to_owned, or String::from) somewhere. But putting it after the tuple creation is a bit awkward, because it requires you to call the function twice. You can turn the result of the split() into owned objects instead. This should work: .map(|l| { l.split(" ") .map(String::from) .collect_tuple() .unwrap() }) .collect::<Vec<(_,_)>>(); Note that now you have to be explicit about the tuple type, though.
How to succinctly convert an iterator over &str to a collection of String
I am new to Rust, and it seems very awkward to use sequences of functional transformations on strings, because they often return &str. For example, here is an implementation in which I try to read lines of two words separated by a space, and store them into a container of tuples: use itertools::Itertools; fn main() { let s = std::io::stdin() .lines() .map(|l| l.unwrap()) .map(|l| { l.split(" ") .collect_tuple() .map(|(a, b)| (a.to_string(), b.to_string())) .unwrap() }) .collect::<Vec<_>>(); println!("{:?}", s); } https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=7f6d370457cc3254195565f69047018c Because split returns an iterator to &str objects, whose scope is the lambda used for the map, the only way I saw to return them was to manually convert them back to strings. This seems really awkward. Is there a better way to implement such a program?
[ "Rust is explicit about allocation. The Strings returned by the lines() iterator don't persist beyond the iterator chain, so you can't just store references into them. Therefore, logically, there needs to be a to_string() (or to_owned, or String::from) somewhere.\nBut putting it after the tuple creation is a bit awkward, because it requires you to call the function twice. You can turn the result of the split() into owned objects instead. This should work:\n .map(|l| {\n l.split(\" \")\n .map(String::from)\n .collect_tuple()\n .unwrap()\n })\n .collect::<Vec<(_,_)>>();\n\nNote that now you have to be explicit about the tuple type, though.\n" ]
[ 3 ]
[]
[]
[ "collections", "iterator", "rust" ]
stackoverflow_0074659568_collections_iterator_rust.txt
Q: IAM policy to specific S3 bucket not effective without ListAllMyBuckets I've specified the IAM access policy for one specific S3 bucket that's working fine with ListAllMyBuckets action. However I don't want to list all buckets to the user. If I remove LisAllBuckets action then I get the error, Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 37A0TA0JGKQA56FJ; S3 Extended Request ID: yWLJEG4RSqGKXjphkcvfOUTCqPe6Qtq/aZUKek1LJ error when trying to access using access key id & Secret access key thru my application. It looks this policy should work as per AWS guidelines https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/ - but its not working as expected. Can you pls help me to resolve this issue? Thanks. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::ohdart-dev-assessments" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject" ], "Resource": [ "arn:aws:s3:::ohdart-dev-assessments/*" ] } ] } A: TL;DR: This isn't supported by AWS. I'm trying to set up the same scenario, both for least-priveleged access as well as for providing the simplest ease of use. According to the AWS knowedlege centre (as of 2022-07-22) if you do not want to allow the s3:ListAllMyBuckets action the recommended alternative is to allow the s3:ListBucket action (possibly providing a Condition so as to limit the paths accessible): { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET" ], "Condition": { "StringLike": { "s3:prefix": [ "folder1/folder2/*" ] } } } and then to: Provide the user with a direct console link to the S3 bucket or folder. The direct console link to an S3 bucket looks like this: https://s3.console.aws.amazon.com/s3/buckets/DOC-EXAMPLE-BUCKET/folder1/folder2/
IAM policy to specific S3 bucket not effective without ListAllMyBuckets
I've specified the IAM access policy for one specific S3 bucket that's working fine with ListAllMyBuckets action. However I don't want to list all buckets to the user. If I remove LisAllBuckets action then I get the error, Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 37A0TA0JGKQA56FJ; S3 Extended Request ID: yWLJEG4RSqGKXjphkcvfOUTCqPe6Qtq/aZUKek1LJ error when trying to access using access key id & Secret access key thru my application. It looks this policy should work as per AWS guidelines https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/ - but its not working as expected. Can you pls help me to resolve this issue? Thanks. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::ohdart-dev-assessments" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject" ], "Resource": [ "arn:aws:s3:::ohdart-dev-assessments/*" ] } ] }
[ "TL;DR: This isn't supported by AWS.\n\nI'm trying to set up the same scenario, both for least-priveleged access as well as for providing the simplest ease of use.\nAccording to the AWS knowedlege centre (as of 2022-07-22) if you do not want to allow the s3:ListAllMyBuckets action the recommended alternative is to allow the s3:ListBucket action (possibly providing a Condition so as to limit the paths accessible):\n{\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:ListBucket\"\n ],\n \"Resource\": [\n \"arn:aws:s3:::DOC-EXAMPLE-BUCKET\"\n ],\n \"Condition\": {\n \"StringLike\": {\n \"s3:prefix\": [\n \"folder1/folder2/*\"\n ]\n }\n }\n}\n\nand then to:\n\nProvide the user with a direct console link to the S3 bucket or folder. The direct console link to an S3 bucket looks like this:\n\nhttps://s3.console.aws.amazon.com/s3/buckets/DOC-EXAMPLE-BUCKET/folder1/folder2/\n\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services" ]
stackoverflow_0068577726_amazon_s3_amazon_web_services.txt
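One way to reason about the policy above: each S3 request is checked against the statements independently, so removing `s3:ListAllMyBuckets` only breaks callers that actually issue a bucket-enumeration (ListBuckets) request, such as the console's bucket list or SDK code that lists buckets before accessing one. A deliberately simplified evaluator in Python to illustrate the per-statement matching; it ignores `Deny`, `Condition` blocks, `NotAction`, and the rest of IAM's real evaluation logic:

```python
import fnmatch

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": ["arn:aws:s3:::ohdart-dev-assessments"]},
        {"Effect": "Allow", "Action": ["s3:PutObject", "s3:GetObject"],
         "Resource": ["arn:aws:s3:::ohdart-dev-assessments/*"]},
    ]
}

def allows(policy, action, resource):
    # True if any Allow statement matches both the action and the resource.
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        if any(fnmatch.fnmatch(action, a) for a in stmt["Action"]) and \
           any(fnmatch.fnmatch(resource, r) for r in stmt["Resource"]):
            return True
    return False

allows(policy, "s3:GetObject", "arn:aws:s3:::ohdart-dev-assessments/key")  # True
allows(policy, "s3:ListAllMyBuckets", "*")  # False -- hence the 403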
Q: Save jsonb in postgres database I have a table in Postgres with a jsonb column. I'm using Entity Framework to upsert data on this table, but I'm always getting the error Input string was not in a correct format because of the jsonb column. This is an example of a query I generate: INSERT INTO example_table (id, name, details) VALUES (1, 'john','{\r\n \"age\": \"17\"\r\n}') ON CONFLICT (name) DO NOTHING This is the command I'm trying to execute: _context.ExecuteSqlRaw("INSERT INTO example_table (id, name, details) VALUES (1, 'john','{\r\n \"age\": \"17\"\r\n}') ON CONFLICT (name) DO NOTHING"); If I remove the json the query is executed perfectly. What am I doing wrong? A: Json and Jsonb doesn't support \n, \r symbols. Use this: INSERT INTO example_table (id, name, details) VALUES (1, 'john','{"age":"17"}') ON CONFLICT (name) DO NOTHING
Save jsonb in postgres database
I have a table in Postgres with a jsonb column. I'm using Entity Framework to upsert data on this table, but I'm always getting the error Input string was not in a correct format because of the jsonb column. This is an example of a query I generate: INSERT INTO example_table (id, name, details) VALUES (1, 'john','{\r\n \"age\": \"17\"\r\n}') ON CONFLICT (name) DO NOTHING This is the command I'm trying to execute: _context.ExecuteSqlRaw("INSERT INTO example_table (id, name, details) VALUES (1, 'john','{\r\n \"age\": \"17\"\r\n}') ON CONFLICT (name) DO NOTHING"); If I remove the json the query is executed perfectly. What am I doing wrong?
[ "Json and Jsonb doesn't support \\n, \\r symbols. Use this:\nINSERT INTO example_table (id, name, details) \nVALUES \n(1, 'john','{\"age\":\"17\"}') \nON CONFLICT (name) DO NOTHING \n\n" ]
[ 0 ]
[]
[]
[ "c#", "ef_core_6.0", "postgresql" ]
stackoverflow_0074659335_c#_ef_core_6.0_postgresql.txt
Q: How do I implement recursion to this python program def recurse( aList ): matches = [ match for match in action if "A" in match ] uses = " ".join(matches) return f"Answer: { aList.index( uses )" This is the non recursive method. I just couldn't figure out how to implement recursion in regards of lists. Output should be Answer: n uses. Can anybody help. A: Recursion is a bad fit for this problem in Python, because lists aren' really recursive data structures. But you could write the following: def recurse(aList): if not aList: return 0 return ("A" in aList[0]) + recurse(aList[1:]) Nothing in an empty list, by definition, contains "A". Otherwise, determined if "A" is in the first element of the list, and add 1 or 0 as appropriate (remember, bools are ints in Python) to the number of matches in the rest of the list. The recursive function should only deal with the count itself; let the caller of the recursive function put the count into a string: print("Answer: {recurse(aList)} uses"}
How do I implement recursion to this python program
def recurse( aList ): matches = [ match for match in action if "A" in match ] uses = " ".join(matches) return f"Answer: { aList.index( uses )" This is the non recursive method. I just couldn't figure out how to implement recursion in regards of lists. Output should be Answer: n uses. Can anybody help.
[ "Recursion is a bad fit for this problem in Python, because lists aren' really recursive data structures. But you could write the following:\ndef recurse(aList):\n if not aList:\n return 0\n return (\"A\" in aList[0]) + recurse(aList[1:])\n\nNothing in an empty list, by definition, contains \"A\". Otherwise, determined if \"A\" is in the first element of the list, and add 1 or 0 as appropriate (remember, bools are ints in Python) to the number of matches in the rest of the list.\nThe recursive function should only deal with the count itself; let the caller of the recursive function put the count into a string:\nprint(\"Answer: {recurse(aList)} uses\"}\n\n" ]
[ 1 ]
[]
[]
[ "list", "python", "python_3.x", "recursion" ]
stackoverflow_0074659546_list_python_python_3.x_recursion.txt
Q: tapgesturerecognizer not working in flutter webview I tried using GestureRecognizer in flutter webview to zoom out on Double pinch.But it is not working. If you have any solution then please help me import 'dart:async'; import 'package:flutter/material.dart'; import 'package:webview_flutter/webview_flutter.dart'; import 'package:flutter/foundation.dart'; import 'package:flutter/gestures.dart'; void main()=>runApp(WebView()); class WebView extends StatefulWidget { @override _WebViewState createState() => _WebViewState(); } class _WebViewState extends State<WebView> { final Completer<WebViewController> _controller = Completer<WebViewController>(); @override Widget build(BuildContext context) { return Scaffold( body: Builder(builder: (BuildContext context) { return WebView( key: UniqueKey(), gestureRecognizers: Set()..add(Factory<TapGestureRecognizer>(() => TapGestureRecognizer(),)),//ZOOM WEBVIEW initialUrl: 'https://stackoverflow.com/questions/', javascriptMode: JavascriptMode.unrestricted, onWebViewCreated: (WebViewController webViewController) { _controller.complete(webViewController); }, ); }), ); } } This was the important part of code If you need full code then please comment. A: I run at the same problem and was able to make TapGestureRecognizer work by wrapping WebView into GestureDetector with onTap() inside of it. Except that I don't know why does it work. @override Widget build(BuildContext context) { return Scaffold( body: GestureDetector( onTap: () { print("This one doesn't print"); }, child: Container( child: Stack( children: <Widget>[ WebView( initialUrl: 'example.com', javascriptMode: JavascriptMode.unrestricted, onWebViewCreated: (WebViewController webViewController) { _controller.complete(webViewController); }, gestureRecognizers: Set() ..add( Factory<TapGestureRecognizer>(() => TapGestureRecognizer() ..onTapDown = (tap) { print("This one prints"); })), ), ], ), ), ), ); } Wondering if anyone had similar issue and made it work properly. 
A: Until there is not a proper way to do it, I have done it like below for my case Container( child: Stack( children: <Widget>[ WebView( initialUrl: 'example.com', javascriptMode: JavascriptMode.unrestricted, onWebViewCreated: (WebViewController webViewController) { _controller.complete(webViewController); }, gestureRecognizers: Set() ..add( Factory<TapGestureRecognizer>(() => TapGestureRecognizer() ..onTapDown = (tap) { print("This one prints"); })), ), GestureDetector( onTap: () { print("ontap"); }, child: Container( color: Colors.transparent, width: double.infinity, height: 50, ), ) ], ), ), A: To solve this problem on iOS, simply add the following line of code to your info.plist file. <key>NSAppTransportSecurity</key> <dict> <key>NSAllowsArbitraryLoads</key><true/> </dict> Hope this solves the problem for you.
tapgesturerecognizer not working in flutter webview
I tried using GestureRecognizer in flutter webview to zoom out on Double pinch.But it is not working. If you have any solution then please help me import 'dart:async'; import 'package:flutter/material.dart'; import 'package:webview_flutter/webview_flutter.dart'; import 'package:flutter/foundation.dart'; import 'package:flutter/gestures.dart'; void main()=>runApp(WebView()); class WebView extends StatefulWidget { @override _WebViewState createState() => _WebViewState(); } class _WebViewState extends State<WebView> { final Completer<WebViewController> _controller = Completer<WebViewController>(); @override Widget build(BuildContext context) { return Scaffold( body: Builder(builder: (BuildContext context) { return WebView( key: UniqueKey(), gestureRecognizers: Set()..add(Factory<TapGestureRecognizer>(() => TapGestureRecognizer(),)),//ZOOM WEBVIEW initialUrl: 'https://stackoverflow.com/questions/', javascriptMode: JavascriptMode.unrestricted, onWebViewCreated: (WebViewController webViewController) { _controller.complete(webViewController); }, ); }), ); } } This was the important part of code If you need full code then please comment.
[ "I run at the same problem and was able to make TapGestureRecognizer work by wrapping WebView into GestureDetector with onTap() inside of it. Except that I don't know why does it work.\n@override\n Widget build(BuildContext context) {\nreturn Scaffold(\n body: GestureDetector(\n onTap: () {\n print(\"This one doesn't print\");\n },\n child: Container(\n child: Stack(\n children: <Widget>[\n WebView(\n initialUrl:\n 'example.com',\n javascriptMode: JavascriptMode.unrestricted,\n onWebViewCreated: (WebViewController webViewController) {\n _controller.complete(webViewController);\n },\n gestureRecognizers: Set()\n ..add(\n Factory<TapGestureRecognizer>(() => TapGestureRecognizer()\n ..onTapDown = (tap) {\n print(\"This one prints\");\n })),\n ),\n ],\n ),\n ),\n ),\n);\n}\n\nWondering if anyone had similar issue and made it work properly.\n", "Until there is not a proper way to do it, I have done it like below for my case\nContainer(\n child: Stack(\n children: <Widget>[\n WebView(\n initialUrl:\n 'example.com',\n javascriptMode: JavascriptMode.unrestricted,\n onWebViewCreated: (WebViewController webViewController) {\n _controller.complete(webViewController);\n },\n gestureRecognizers: Set()\n ..add(\n Factory<TapGestureRecognizer>(() => TapGestureRecognizer()\n ..onTapDown = (tap) {\n print(\"This one prints\");\n })),\n ),\n GestureDetector(\n onTap: () {\n print(\"ontap\");\n },\n child: Container(\n color: Colors.transparent,\n width: double.infinity,\n height: 50,\n ),\n )\n ],\n ),\n ),\n\n", "To solve this problem on iOS, simply add the following line of code to your info.plist file.\n<key>NSAppTransportSecurity</key>\n <dict>\n <key>NSAllowsArbitraryLoads</key><true/>\n</dict>\n\nHope this solves the problem for you.\n" ]
[ 5, 1, 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0058811375_dart_flutter.txt
Q: Google Sheets and RStudio Is there a way to collect data from a Google Sheet and make a daily graph from it? Thank you very much. I am looking for a function that lets me connect Google Sheets with RStudio. A: The googlesheets4 package lets you access your Google Sheets data
Google Sheets and RStudio
Is there a way to collect data from a Google Sheet and make a daily graph from it? Thank you very much. I am looking for a function that lets me connect Google Sheets with RStudio.
[ "The googlesheets4 package lets you access your Google Sheets data\n" ]
[ 1 ]
[]
[]
[ "google_sheets", "r" ]
stackoverflow_0074659711_google_sheets_r.txt
Q: Looker will not accept double single quotes for passed condition When passing a filter from an Explore like in the below example, for whatever reason we cannot get it to provide double single quotes in the query condition. The query inside the view has the condition that pulls the submitted value: As you can see below, WHERE {% condition bill_id %} bill_id {% endcondition %} is used to pass the variable. The resulting query looks like: This would be fine in a normal query, but we have to use OPENQUERY() here due to a compatibility issue with SQL Server and the linked server we are pulling info from. Because we use OPENQUERY, we require double quotes to pass variables in OPENQUERY's query string. Essentially we need the resulting query in the view to look like this: But no matter what we try to do to add the extra single quotes, for some reason it appears that Looker is removing them and only using single quotes, like this: So the question comes down to this: Does anyone know how to pass a variable to the query in a view from an Explore and format it so that it uses double single quotes instead of single single quotes? We have tried a few things to format this condition to include double single quotes. Since Looker uses Liquid HTML, we have tried to concatenate with | and we have tried to use append: as well. What can we do to take this: WHERE {% condition bill_id %} bill_id {% endcondition %} Resulting in this: WHERE (bill_id = 'value') To instead be this: WHERE (bill_id = ''value'') A: If you only need to support equality comparisons, you should be able to do this with a Liquid parameter, instead of a templated filter. Docs. view my_view { derived_table: { sql: select * from openquery(DBXA, ' select * from asdf_chg_audt where asdf_bill_id = ''{% parameter filtered_bill_id %}'' ' ;; } parameter: filtered_bill_id { type: unquoted } }
Looker will not accept double single quotes for passed condition
When passing a filter from an Explore like in the below example, for whatever reason we cannot get it to provide double single quotes in the query condition. The query inside the view has the condition that pulls the submitted value: As you can see below, WHERE {% condition bill_id %} bill_id {% endcondition %} is used to pass the variable. The resulting query looks like: This would be fine in a normal query, but we have to use OPENQUERY() here due to a compatibility issue with SQL Server and the linked server we are pulling info from. Because we use OPENQUERY, we require double quotes to pass variables in OPENQUERY's query string. Essentially we need the resulting query in the view to look like this: But no matter what we try to do to add the extra single quotes, for some reason it appears that Looker is removing them and only using single quotes, like this: So the question comes down to this: Does anyone know how to pass a variable to the query in a view from an Explore and format it so that it uses double single quotes instead of single single quotes? We have tried a few things to format this condition to include double single quotes. Since Looker uses Liquid HTML, we have tried to concatenate with | and we have tried to use append: as well. What can we do to take this: WHERE {% condition bill_id %} bill_id {% endcondition %} Resulting in this: WHERE (bill_id = 'value') To instead be this: WHERE (bill_id = ''value'')
[ "If you only need to support equality comparisons, you should be able to do this with a liquid parameter, instead of a templated filter. Docs.\nview my_view {\n derived_table: {\n sql:\n select * from openquery(DBXA, '\n select *\n from asdf_chg_audt\n where asdf_bill_id = ''{% parameter filtered_bill_id %}''\n '\n ;;\n }\n parameter: filtered_bill_id {\n type: unquoted\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "liquid", "liquid_html", "looker", "lookml" ]
stackoverflow_0074481483_liquid_liquid_html_looker_lookml.txt
Q: How can I display descriptive statistics next to each other by using stargazer? My dataset has a dummy variable which divides the data set into two groups. I would like to display the descriptive statistics for both next to each other, like: example using stargazer. Is this possible? For example, if there is the mtcars data set and the variable $am divides the dataset into two groups, how can I display the one group on the left side and the other group on the other side? Thank you! I was able to display the two statistics below each other (I had to make two separate datasets for each group), but never next to each other. treated <- mtcars[mtcars$am == 1,] control <- mtcars[mtcars$am == 0,] stargazer(treated, control, keep=c("mpg", "cyl", "disp", "hp"), header=FALSE, title="Descriptive statistics", digits=1, type="text") Descriptive statistics below each other A: Someone should point out if I'm mistaken, but I don't believe that stargazer will allow for the kind of nested tables you are looking for. However, there are other packages like modelsummary, gtsummary, and flextable that can produce tables similar to stargazer. I have included examples below using select mtcars variables summarized by am. Personally, I prefer gtsummary due to its flexibility. 
library(tidyverse) data(mtcars) ### modelsummary # not great since it treats `cyl` as a continuous variable # https://vincentarelbundock.github.io/modelsummary/articles/datasummary.html library(modelsummary) datasummary_balance(~am, data = mtcars, dinm = FALSE) ### gtsummary # based on example 3 from here # https://www.danieldsjoberg.com/gtsummary/reference/add_stat_label.html library(gtsummary) mtcars %>% select(am, mpg, cyl, disp, hp) %>% tbl_summary( by = am, missing = "no", type = list(mpg ~ 'continuous2', cyl ~ 'categorical', disp ~ 'continuous2', hp ~ 'continuous2'), statistic = all_continuous2() ~ c("{mean} ({sd})", "{median}") ) %>% add_stat_label(label = c(mpg, disp, hp) ~ c("Mean (SD)", "Median")) %>% modify_footnote(everything() ~ NA) ### flextable # this function only works on continuous vars, so I removed `cyl` # https://davidgohel.github.io/flextable/reference/continuous_summary.html library(flextable) mtcars %>% select(am, mpg, cyl, disp, hp) %>% continuous_summary( by = "am", hide_grouplabel = FALSE, digits = 3 ) A: You can use the modelsummary package and its datasummary function, which offers a formula-based language to describe the specific table you need. (Disclaimer: I am the maintainer.) In addition to the super flexible datasummary function, there are many other functions to summarize data in easier ways. See in particular the datasummary_balance() function here: https://vincentarelbundock.github.io/modelsummary/articles/datasummary.html library(modelsummary) dat <- mtcars[, c("mpg", "cyl", "disp", "hp", "am")] datasummary( All(dat) ~ Factor(am) * (N + Mean + SD + Min + Max), data = dat)
How can I display descriptive statistics next to each other by using stargazer?
My dataset has a dummy variable which divides the data set into two groups. I would like to display the descriptive statistics for both next to each other, like: example using stargazer. Is this possible? For example, if there is the mtcars data set and the variable $am divides the dataset into two groups, how can I display the one group on the left side and the other group on the other side? Thank you! I was able to display the two statistics below each other (I had to make two separate datasets for each group), but never next to each other. treated <- mtcars[mtcars$am == 1,] control <- mtcars[mtcars$am == 0,] stargazer(treated, control, keep=c("mpg", "cyl", "disp", "hp"), header=FALSE, title="Descriptive statistics", digits=1, type="text") Descriptive statistics below each other
[ "Someone should point out if I'm mistaken, but I don't believe that stargazer will allow for the kind of nested tables you are looking for. However, there are other packages like modelsummary, gtsummary, and flextable that can produce tables similar to stargazer. I have included examples below using select mtcars variables summarized by am. Personally, I prefer gtsummary due to its flexibility.\nlibrary(tidyverse)\ndata(mtcars)\n\n### modelsummary\n# not great since it treats `cyl` as a continuous variable\n# https://vincentarelbundock.github.io/modelsummary/articles/datasummary.html\nlibrary(modelsummary)\ndatasummary_balance(~am, data = mtcars, dinm = FALSE)\n\n### gtsummary\n# based on example 3 from here\n# https://www.danieldsjoberg.com/gtsummary/reference/add_stat_label.html\nlibrary(gtsummary)\nmtcars %>%\n select(am, mpg, cyl, disp, hp) %>%\n tbl_summary(\n by = am, \n missing = \"no\",\n type = list(mpg ~ 'continuous2',\n cyl ~ 'categorical',\n disp ~ 'continuous2',\n hp ~ 'continuous2'),\n statistic = all_continuous2() ~ c(\"{mean} ({sd})\", \"{median}\")\n ) %>%\n add_stat_label(label = c(mpg, disp, hp) ~ c(\"Mean (SD)\", \"Median\")) %>%\n modify_footnote(everything() ~ NA)\n\n### flextable\n# this function only works on continuous vars, so I removed `cyl`\n# https://davidgohel.github.io/flextable/reference/continuous_summary.html\nlibrary(flextable)\nmtcars %>%\n select(am, mpg, cyl, disp, hp) %>%\n continuous_summary(\n by = \"am\",\n hide_grouplabel = FALSE,\n digits = 3\n )\n\n", "You can use the modelsummary package and its datasummary function, which offers a formula-based language to describe the specific table you need. (Disclaimer: I am the maintainer.)\nIn addition to the super flexible datasummary function, there are many other functions to summarize data in easier ways. 
See in particular the datasummary_balance() function here:\nhttps://vincentarelbundock.github.io/modelsummary/articles/datasummary.html\nlibrary(modelsummary)\ndat <- mtcars[, c(\"mpg\", \"cyl\", \"disp\", \"hp\", \"am\")]\ndatasummary(\n All(dat) ~ Factor(am) * (N + Mean + SD + Min + Max),\n data = dat)\n\n\n" ]
[ 2, 0 ]
[]
[]
[ "r", "stargazer" ]
stackoverflow_0074604121_r_stargazer.txt
Q: How to get the current webview object in Android? I am using Datadog to track user activity in my app. Now I need to instrument webviews. After initializing Datadog's SDK, its documentation says that I have to call the following code snippet: DatadogEventBridge.setup(webView) that is, I have to call the static method setup and pass to it a WebView object. But the problem is: my application has many objects like this (many webviews). Do I have to put this code in every class that has a WebView attribute? Or is it possible to somehow use a callback function which is called whenever a webview is instantiated, then in this callback I'd call DatadogEventBridge.setup(webView)? I tried using lifecycle callbacks and then receiving an Activity for every "onResume" method in order to check whether this activity has a webview. But it went wrong. A: I'm not really familiar with the Datadog SDK but you could try creating your own WebView by extending the standard one - and then replace all the other WebViews with it. Here's how it'll look in practice: class TrackableWebView : WebView { constructor(context: Context) : super(context) constructor(context: Context, attrs: AttributeSet?) : super(context, attrs) constructor(context: Context, attrs: AttributeSet?, defStyleAttr: Int) : super(context, attrs, defStyleAttr) init { DatadogEventBridge.setup(this) } } I think we can assume that the SDK will be initialized much earlier than the WebViews, so there should be no problem with calling DatadogEventBridge.setup(this) before the SDK is initialized. Then if you use the XML layouts you just replace the standard WebView with your custom one: <com.example.TrackableWebView android:layout_width="match_parent" android:layout_height="match_parent"/> A: You can use WebViewClient to intercept when a WebView is created and then call DatadogEventBridge.setup(webView) from there.
You can create a class that extends WebViewClient and then add it to each of your WebViews using the WebView.setWebViewClient(WebViewClient wvc) method. public class DatadogWebViewClient extends WebViewClient { @Override public void onPageFinished(WebView view, String url) { super.onPageFinished(view, url); DatadogEventBridge.setup(view); } } A: To get the current webview object in Android, you can use the getWebView() method on the current Activity object. This method returns the WebView object that is currently being displayed by the Activity. Here is an example of how to use this method to call the DatadogEventBridge.setup() method: import android.app.Activity; import android.webkit.WebView; import com.datadog.android.DatadogEventBridge; public class MyActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Get the current webview WebView webView = getWebView(); // Set up the Datadog event bridge for the webview DatadogEventBridge.setup(webView); } } In this example, the onCreate() method is overridden to call the getWebView() method and pass the returned WebView object to the DatadogEventBridge.setup() method. This will set up the event bridge for the current webview. If you have multiple webviews in your application, you will need to call the getWebView() method and the DatadogEventBridge.setup() method in each Activity that contains a webview.
How to get the current webview object in Android?
I am using Datadog to track user activity in my app. Now I need to instrument webviews. After initializing Datadog's SDK, its documentation says that I have to call the following code snippet: DatadogEventBridge.setup(webView) that is, I have to call the static method setup and pass to it a WebView object. But the problem is: my application has many objects like this (many webviews). Do I have to put this code in every class that has a WebView attribute? Or is it possible to somehow use a callback function which is called whenever a webview is instantiated, then in this callback I'd call DatadogEventBridge.setup(webView)? I tried using lifecycle callbacks and then receiving an Activity for every "onResume" method in order to check whether this activity has a webview. But it went wrong.
[ "I'm not really familiar with the Datadog Sdk but you could try creating your own WebView by extending the standard one - and then replace all the other WebViews with it. Here's how it'll look like in practice:\nclass TrackableWebView : WebView {\n\n constructor(context: Context) : super(context)\n\n constructor(context: Context, attrs: AttributeSet?) : super(context, attrs)\n\n constructor(context: Context, attrs: AttributeSet?, defStyleAttr: Int)\n : super(context, attrs, defStyleAttr)\n\n init {\n DatadogEventBridge.setup(this)\n }\n}\n\nI think we can assume that the Sdk will be initialized much earlier than the WebViews, so there should be no problem with calling the DatadogEventBridge.setup(this) before the Sdk is initialized.\nThen if you use the XML layouts you just replace standard WebView with your custom one:\n<com.example.TrackableWebView\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"/>\n\n", "You can use WebViewClient to intercept when a WebView is created and then call the DatadogEventBridge.setup(webView) from there.\nYou can create a class that extends WebViewClient and then add it to each of your WebViews using the WebView.setWebViewClient(WebViewClient wvc) method.\npublic class DatadogWebViewClient extends WebViewClient {\n\n @Override\n public void onPageFinished(WebView view, String url) {\n super.onPageFinished(view, URL);\n DatadogEventBridge.setup(view);\n }\n}\n\n", "To get the current webview object in Android, you can use the getWebView() method on the current Activity object. 
This method returns the WebView object that is currently being displayed by the Activity.\nHere is an example of how to use this method to call the DatadogEventBridge.setup() method:\nimport android.app.Activity;\nimport android.webkit.WebView;\nimport com.datadog.android.DatadogEventBridge;\n\npublic class MyActivity extends Activity {\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n\n // Get the current webview\n WebView webView = getWebView();\n\n // Set up the Datadog event bridge for the webview\n DatadogEventBridge.setup(webView);\n }\n}\n\nIn this example, the onCreate() method is overridden to call the getWebView() method and pass the returned WebView object to the DatadogEventBridge.setup() method. This will set up the event bridge for the current webview.\nIf you have multiple webviews in your application, you will need to call the getWebView() method and the DatadogEventBridge.setup() method in each Activity that contains a webview\n" ]
[ 1, 1, 1 ]
[]
[]
[ "android", "android_instrumentation", "android_webview", "kotlin" ]
stackoverflow_0074527241_android_android_instrumentation_android_webview_kotlin.txt
Q: Display Results from CSV based on users selection from HTML table in PHP I have a basic website that pulls the data from a .csv file and displays only the first column, I am having trouble in creating a way that will let you select one of those values in the first column, to then open a new table of only that entire row's contents. the table displayed looks like this: Title Book1 Book2 .... Book n maybe a dropdown or something that will let you select Book1 and when you click a button will open a new html/php page with the remaining column entries for the Book1 row, as such that table looks like: Title Author Year Book1 Author1 2020 Here is the code for the Book Displaying table: <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="./css/styles.css"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Book Submission</title></ </head> <body style="font-family:Verdana;color:#aaaaaa;"> <div style="background-color:#e5e5e5;padding:15px;text-align:center;"> <h1>View Publications</h1> </div> <div style="overflow:auto"> <div class="menu"> <a href="./index.php">HOME</a> <a href="./CV.php">CV</a> <a href="./forecast.php">Weather</a> <?php session_start(); if(!isset($_SESSION['loggedin']))echo "<a href='./session.php'>Log in</a>"; else {echo "<a href='./session.php'>Admin Panel</a>";} ?> </div> <div class="main"> <h2>All Publications on File</h2> <?php $row = 1; if (($handle = fopen("publications.csv", "r")) !== FALSE) { echo '<table border="1">'; while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) { $num = count($data); if ($row == 1) { echo '<thead><tr>'; }else{ echo '<tr>'; } $c=0; if(empty($data[$c])) { $value = "&nbsp;"; }else{ $value = $data[$c]; } if ($row == 1) { echo '<th>'.$value.'</th>'; }else{ echo '<td>'.$value.'</td>'; } if ($row == 1) { echo '</tr></thead><tbody>'; }else{ echo '</tr>'; } $row++; } echo '</tbody></table>'; fclose($handle); } ?> </div> <div class="right"> <h2>Users </h2> <?php 
if(isset($_SESSION['loggedin']))echo "<p> You Are Logged in as an admin</p>"; else {echo "<p> No Admins Logged in!!</p>";} ?> </div> </div> <div style="background-color:#e5e5e5;text-align:center;padding:10px;margin-top:7px;">#STUDENT NUMBER</div> </body> </html> I understand that by modifiying the $c in the <php> tag, into a for loop will traverse each column to display a full table, my issue is filtering where row = book chosen by user A: Given the constraints that we talked about in the comments, here's one way that I would generally do this. First, for demo purposes I've removed the CSV file and I'm instead working with an in-memory equivalent, however the fgetcsv logic is 100% the same. Second, I broke things into functions. This isn't needed, and now that it has been done you can decompose them back to inline code, but I think this makes it more readable. There's a function to show the table, which calls another function to show the rows, which calls another function to show the individual cells. Third, I decided to pass the book title in the query string to identify the thing that the person selected. If a book title is duplicated, this will cause both entries to be shown when selected which may or may not be correct. The CSV row index could be used instead of this as discussed in the comments, too. I think (and hope) that the comments in the code make sense. 
function showCell($cell, $isHeader, $selectedTitle) { echo '<td>'; if ($selectedTitle || $isHeader) { echo htmlspecialchars($cell); } else { echo sprintf('<a href="?title=%1$s">%1$s</a>', htmlspecialchars($cell)); } echo '</td>', PHP_EOL; } function showRow($cellsToShow, $isHeader, $selectedTitle) { if ($isHeader) { echo '<thead>', PHP_EOL; } echo '<tr>', PHP_EOL; foreach ($cellsToShow as $cell) { showCell($cell, $isHeader, $selectedTitle); } echo '</tr>', PHP_EOL; if ($isHeader) { echo '</thead>', PHP_EOL; } } function showTable($handle, $selectedTitle) { $idx = 0; echo '<table>', PHP_EOL; while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) { // If we're on the first row which holds titles, always show. // Also, if the user selected a title, and it matches this row's title, show it. if(0 !== $idx && $selectedTitle && $row[0] !== $selectedTitle){ continue; } // cellsToShow is always an array. If we're showing based on a selection, it is everything. // Otherwise, it will be an array of just the first cell. This last seems weird, but it // allows the showRow method to not concern itself with higher level logic. $cellsToShow = $selectedTitle ? $row : [$row[0]]; showRow($cellsToShow, $idx++ === 0, $selectedTitle); } echo '</table>', PHP_EOL; } // Mock a CSV file in memory for demo purposes $handle = fopen('php://memory', 'rb+'); fwrite($handle, "Title,Author,Year\nBook 1,Alice,2020\nBook 2,Bob,2019\nBook 3,Charlie,1980"); rewind($handle); // Grab the title from the query string, default to null if not found $selectedTitle = $_GET['title'] ?? null; showTable($handle, $selectedTitle); fclose($handle); A partial online demo is here (https://3v4l.org/33ose), however that doesn't include clickable links. I'll also note that although I'm partially encoding things, better care should probably be taken, specifically for things in the URL.
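Stripped of the PHP specifics, the answer's rule is: with no selection, emit only each row's first cell; with a selection, emit the full row whose first column equals it. For illustration, the same rule sketched in Python (the in-memory CSV text is a made-up stand-in for publications.csv, not data from the question):

```python
import csv
import io

# A stand-in for publications.csv (title, author, year).
CSV_TEXT = "Title,Author,Year\nBook 1,Alice,2020\nBook 2,Bob,2019\n"

def rows_to_show(handle, selected_title=None):
    """First column only when nothing is selected; the full matching row otherwise."""
    reader = csv.reader(handle)
    header = next(reader)
    out = [header if selected_title else header[:1]]
    for row in reader:
        if selected_title is None:
            out.append(row[:1])          # listing view: titles only
        elif row[0] == selected_title:
            out.append(row)              # detail view: the whole selected row
    return out

print(rows_to_show(io.StringIO(CSV_TEXT)))             # listing view
print(rows_to_show(io.StringIO(CSV_TEXT), "Book 2"))   # detail view
```

The PHP version in the answer does exactly this inside its fgetcsv loop, with the selection arriving via the query string.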
Display Results from CSV based on user's selection from HTML table in PHP
I have a basic website that pulls the data from a .csv file and displays only the first column, I am having trouble in creating a way that will let you select one of those values in the first column, to then open a new table of only that entire row's contents. the table displayed looks like this: Title Book1 Book2 .... Book n maybe a dropdown or something that will let you select Book1 and when you click a button will open a new html/php page with the remaining column entries for the Book1 row, as such that table looks like: Title Author Year Book1 Author1 2020 Here is the code for the Book Displaying table: <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="./css/styles.css"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Book Submission</title></ </head> <body style="font-family:Verdana;color:#aaaaaa;"> <div style="background-color:#e5e5e5;padding:15px;text-align:center;"> <h1>View Publications</h1> </div> <div style="overflow:auto"> <div class="menu"> <a href="./index.php">HOME</a> <a href="./CV.php">CV</a> <a href="./forecast.php">Weather</a> <?php session_start(); if(!isset($_SESSION['loggedin']))echo "<a href='./session.php'>Log in</a>"; else {echo "<a href='./session.php'>Admin Panel</a>";} ?> </div> <div class="main"> <h2>All Publications on File</h2> <?php $row = 1; if (($handle = fopen("publications.csv", "r")) !== FALSE) { echo '<table border="1">'; while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) { $num = count($data); if ($row == 1) { echo '<thead><tr>'; }else{ echo '<tr>'; } $c=0; if(empty($data[$c])) { $value = "&nbsp;"; }else{ $value = $data[$c]; } if ($row == 1) { echo '<th>'.$value.'</th>'; }else{ echo '<td>'.$value.'</td>'; } if ($row == 1) { echo '</tr></thead><tbody>'; }else{ echo '</tr>'; } $row++; } echo '</tbody></table>'; fclose($handle); } ?> </div> <div class="right"> <h2>Users </h2> <?php if(isset($_SESSION['loggedin']))echo "<p> You Are Logged in as an admin</p>"; else {echo "<p> No Admins 
Logged in!!</p>";} ?> </div> </div> <div style="background-color:#e5e5e5;text-align:center;padding:10px;margin-top:7px;">#STUDENT NUMBER</div> </body> </html> I understand that by modifiying the $c in the <php> tag, into a for loop will traverse each column to display a full table, my issue is filtering where row = book chosen by user
[ "Given the constraints that we talked about in the comments, here's one way that I would generally do this.\nFirst, for demo purposes I've removed the CSV file and I'm instead working with an in-memory equivalent, however the fgetcsv logic is 100% the same.\nSecond, I broke things into functions. This isn't needed, and now that it has been done you can decompose them back to inline code, but I think this makes it more readable. There's a function to show the table, which calls another function to show the rows, which calls another function to show the individual cells.\nThird, I decided to pass the book title in the query string to identify the thing that the person selected. If a book title is duplicated, this will cause both entries to be shown when selected which may or may not be correct. The CSV row index could be used instead of this as discussed in the comments, too.\nI think (and hope) that the comments in the code make sense.\nfunction showCell($cell, $isHeader, $selectedTitle)\n{\n echo '<td>';\n if ($selectedTitle || $isHeader) {\n echo htmlspecialchars($cell);\n } else {\n echo sprintf('<a href=\"?title=%1$s\">%1$s</a>', htmlspecialchars($cell));\n }\n echo '</td>', PHP_EOL;\n}\n\nfunction showRow($cellsToShow, $isHeader, $selectedTitle)\n{\n if ($isHeader) {\n echo '<thead>', PHP_EOL;\n }\n\n echo '<tr>', PHP_EOL;\n foreach ($cellsToShow as $cell) {\n showCell($cell, $isHeader, $selectedTitle);\n }\n echo '</tr>', PHP_EOL;\n\n if ($isHeader) {\n echo '</thead>', PHP_EOL;\n }\n}\n\nfunction showTable($handle, $selectedTitle)\n{\n $idx = 0;\n echo '<table>', PHP_EOL;\n while (($row = fgetcsv($handle, 1000, \",\")) !== FALSE) {\n // If we're on the first row which holds titles, always show.\n // Also, if the user selected a title, and it matches this row's title, show it.\n if(0 !== $idx && $selectedTitle && $row[0] !== $selectedTitle){\n continue;\n }\n // cellsToShow is always an array. 
If we're showing based on a selection, it is everything.\n // Otherwise, it will be an array of just the first cell. This last seems weird, but it\n // allows the showRow method to not concern itself with higher level logic.\n $cellsToShow = $selectedTitle ? $row : [$row[0]];\n showRow($cellsToShow, $idx++ === 0, $selectedTitle);\n }\n echo '</table>', PHP_EOL;\n}\n\n// Mock a CSV file in memory for demo purposes\n$handle = fopen('php://memory', 'rb+');\nfwrite($handle, \"Title,Author,Year\\nBook 1,Alice,2020\\nBook 2,Bob,2019\\nBook 3,Charlie,1980\");\nrewind($handle);\n\n// Grab the title from the query string, default to null if not found\n$selectedTitle = $_GET['title'] ?? null;\n\nshowTable($handle, $selectedTitle);\n\nfclose($handle);\n\nA partial online demo is here (https://3v4l.org/33ose), however that doesn't include clickable links.\nI'll also note that although I'm partially encoding things, better care should probably be taken, specifically for things in the URL.\n" ]
[ 1 ]
[]
[]
[ "csv", "html", "php" ]
stackoverflow_0074647336_csv_html_php.txt
Q: Can't properly organize self method in a class TypeError: create_bool(): incompatible function arguments. The following argument types are supported: An error is returned when I try to make a class. When I try it as shown here https://github.com/google/mediapipe/blob/master/docs/solutions/face_mesh.md#python-solution-api, everything is perfect. There is some problem with the self method, but I could not understand where exactly. import cv2 import mediapipe as mp import time class FaceMeshDetector: def __init__(self, static_mode=False, maxFaces=2, minDetectionCon=0.5, minTrackCon=0.5): self.static_mode = static_mode self.maxFaces = maxFaces self.minDetectionCon = minDetectionCon self.minTrackCon = minTrackCon self.mpDraw = mp.solutions.drawing_utils self.mpFaceMesh = mp.solutions.face_mesh self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.minDetectionCon, self.minTrackCon) self.drawSpec = self.mpDraw.DrawingSpec(thickness=1, circle_radius=1) def findFaceMesh(self, img, draw=True): self.imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) self.results = self.faceMesh.process(self.imgRGB) faces = [] if self.results.multi_face_landmarks: for faceLms in self.results.multi_face_landmarks: if draw: self.mpDraw.draw_landmarks(img, faceLms, self.mpFaceMesh.FACEMESH_CONTOURS, self.drawSpec, self.drawSpec) face = [] for id, lm in enumerate(faceLms.landmark): # print(lm) ih, iw, ic = img.shape x, y = int(lm.x * iw), int(lm.y * ih) # cv2.putText(img, str(id), (x, y), cv2.FONT_HERSHEY_PLAIN, 0.7, (0, 255, 0), 1) # print(id, x, y) face.append([x, y]) faces.append(face) return img, faces def main(): cap = cv2.VideoCapture(0) pTime = 0 detector = FaceMeshDetector() while True: success, img = cap.read() img, faces = detector.findFaceMesh(img) if len(faces) != 0: print(faces[0]) cTime = time.time() fps = 1 / (cTime - pTime) pTime = cTime cv2.putText(img, f'FPS: {int(fps)}', (20, 70), cv2.FONT_HERSHEY_PLAIN, 3, (0, 255, 0), 3) cv2.imshow("Image", img) cv2.waitKey(1) if __name__
== '__main__': main() Full traceback Traceback (most recent call last): File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 59, in <module> main() File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 44, in main detector = FaceMeshDetector() File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 16, in __init__ self.minTrackCon) File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solutions\face_mesh.py", line 107, in __init__ outputs=['multi_face_landmarks']) File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 291, in __init__ for name, data in (side_inputs or {}).items() File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 291, in <genexpr> for name, data in (side_inputs or {}).items() File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 592, in make_packet return getattr(packet_creator, 'create_' + packet_data_type.value)(data) TypeError: create_bool(): incompatible function arguments. The following argument types are supported: 1. (arg0: bool) -> mediapipe.python._framework_bindings.packet.Packet Invoked with: 0.5 A: You have a parameter in the wrong place. Use named parameters or add a value for "refine_landmarks".
See the signature of FaceMesh: def __init__(self, static_image_mode=False, max_num_faces=1, refine_landmarks=False, min_detection_confidence=0.5, min_tracking_confidence=0.5): Or add the missing parameter: Change self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.minDetectionCon, self.minTrackCon) to self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, False, self.minDetectionCon, self.minTrackCon) A: Finally done and working: def __init__(self, static_image_mode=False, max_num_faces=2, refine_landmarks=False, min_detection_confidence=0.5, min_tracking_confidence=0.5): self.static_mode = static_image_mode self.maxFaces = max_num_faces self.refine_landmarks = refine_landmarks self.minDetectionCon = min_detection_confidence self.minTrackCon = min_tracking_confidence self.mpDraw = mp.solutions.drawing_utils self.mpFaceMesh = mp.solutions.face_mesh self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.refine_landmarks, self.minDetectionCon, self.minTrackCon) self.drawSpec = self.mpDraw.DrawingSpec(thickness=1, circle_radius=1)
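The failure generalizes beyond mediapipe: when a new default parameter (here refine_landmarks) is inserted in the middle of a signature, positional calls written for the old signature silently shift every later argument one slot over. A small self-contained Python sketch of that pitfall (Mesh is a hypothetical stand-in mirroring FaceMesh's parameter order, not mediapipe itself):

```python
class Mesh:
    # Stand-in mirroring FaceMesh's parameter order: a boolean default
    # (refine_landmarks) sits between the face count and the confidences.
    def __init__(self, static_image_mode=False, max_num_faces=1,
                 refine_landmarks=False, min_detection_confidence=0.5,
                 min_tracking_confidence=0.5):
        self.refine_landmarks = refine_landmarks
        self.min_detection_confidence = min_detection_confidence
        self.min_tracking_confidence = min_tracking_confidence

# Old-style positional call: 0.5 lands in refine_landmarks, the bool slot --
# exactly the "create_bool(): ... Invoked with: 0.5" situation above.
wrong = Mesh(False, 2, 0.5, 0.5)

# Keyword arguments are immune to parameters being inserted in the middle.
right = Mesh(False, 2, min_detection_confidence=0.5, min_tracking_confidence=0.5)

print(wrong.refine_landmarks)   # the float that should have been a confidence
print(right.refine_landmarks)
```

Passing the confidences by keyword, as in the accepted answer, sidesteps the problem entirely.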
Q: Highcharts Gauge: Exporting Dynamic Labels - dynamic series values export but not dynamic text

I've created a Highcharts temperature gauge that dynamically updates the daily maximum, daily minimum and current temperatures. Every 5 minutes, a CSV is generated on my network drive based on current conditions measured at this station. This CSV has one row and four columns of data in this order, left to right: timestamp, daily max, daily min and current temps. I have three series coded for each temperature measurement, and I use jQuery to get my CSV data values, construct an array, then pass those values to their proper Highcharts series every three seconds. This code works great for that: the needles change without refreshing the page every five minutes when new CSV values are generated, and they display properly on exported images. I also have a label positioned atop the gauge that updates and displays the timestamp from my CSV on the webpage using the same jQuery array.

The issue I'm attempting to solve is getting my dynamic timestamp label to display on exported images. The timestamp displays with the gauge on the webpage, and it updates perfectly fine without refreshing the page; however, the timestamp does not display on exported images. What I need is to have the dynamic timestamp display on the exported images of this gauge. Please let me know if you've encountered this before and/or have any suggestions on how to fix this issue.

Here is a sample of my code. Please note that this code in its current state causes the text 'Timestamp 2' to display on the exported image where I want the most current timestamp to display.

<script defer type="text/javascript">
Highcharts.chart('container', {
  chart: {
    type: 'gauge',
    name: 'Temp',
    plotBackgroundColor: null,
    plotBackgroundImage: null,
    margin: [50, 50, 50, 50],
    plotBorderWidth: 0,
    plotShadow: false,
    height: 500,
    events: {
      load: function() {
        this.renderer.image('file location of this image',
          (((this.chartWidth / 2) - (this.plotHeight / 2)) + ((0.1062495 - (this.plotHeight * 0.0000245825)) * this.plotHeight)), //! x-coordinate
          (((this.chartHeight / 2) - (this.plotHeight / 2)) + ((0.1062495 - (this.plotHeight * 0.0000245825)) * this.plotHeight)), //! y-coordinate
          (this.plotHeight - ((0.212499 - (this.plotHeight * 0.000049165)) * this.plotHeight)), //! width
          (this.plotHeight - ((0.212499 - (this.plotHeight * 0.000049165)) * this.plotHeight))) //! height
          .attr({}).css({}).add();
        this.renderer.text('text goes here', ((this.chartWidth - this.plotWidth) / 2),
          this.chartHeight - ((this.chartHeight - this.plotHeight) / 2) + 20).attr({}).css({ color: '#0000aa' }).add();
        this.renderer.text('text goes here', ((this.chartWidth - this.plotWidth) / 2),
          this.chartHeight - (this.chartHeight - this.plotHeight) / 2).attr({}).css({}).add();
        this.myLabel = this.renderer.text(['Timestamp'], ((this.chartWidth - this.plotWidth) / 2),
          ((this.chartHeight - this.plotHeight) / 2) + 20).attr({}).css({}).add();
      },
    }
  },
  title: {
    text: 'Temperature'
  },
  pane: {
    startAngle: -150,
    endAngle: 150,
    background: {
      backgroundColor: 'transparent',
      borderColor: 'transparent',
    },
  },
  // the value axis
  yAxis: {
    min: -70,
    max: 120,
    minorTickInterval: 'auto',
    minorTickWidth: 1,
    minorTickLength: 10,
    minorTickPosition: 'inside',
    minorTickColor: '#666',
    tickPixelInterval: 30,
    tickWidth: 2,
    tickPositions: [-70, -60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120],
    tickPosition: 'inside',
    tickLength: 10,
    tickColor: '#666',
    labels: {
      step: 1,
      rotation: 'auto'
    },
    title: {
      text: 'Deg F'
    },
    plotBands: [{
      from: -70,
      to: -40,
      color: '#FFFFFF' // white
    }, {
      from: -40,
      to: 0,
      color: '#f633ff' // magenta
    }, {
      from: 0,
      to: 32,
      color: '#0D0DDF' // blue
    }, {
      from: 32,
      to: 80,
      color: '#55BF3B' // green
    }, {
      from: 70,
      to: 100,
      color: '#DDDF0D' // yellow
    }, {
      from: 100,
      to: 120,
      color: '#DF5353' // red
    }]
  },
  exporting: {
    allowHTML: true,
    sourceWidth: 1000,
    sourceHeight: 1000,
    chartOptions: {
      chart: {
        events: {
          load: function() {
            this.renderer.image('file location of this image',
              (((this.chartWidth / 2) - (this.plotHeight / 2)) + ((0.1062495 - (this.plotHeight * 0.0000245825)) * this.plotHeight)), //! x-coordinate
              (((this.chartHeight / 2) - (this.plotHeight / 2)) + ((0.1062495 - (this.plotHeight * 0.0000245825)) * this.plotHeight)), //! y-coordinate
              (this.plotHeight - ((0.212499 - (this.plotHeight * 0.000049165)) * this.plotHeight)), //! width
              (this.plotHeight - ((0.212499 - (this.plotHeight * 0.000049165)) * this.plotHeight))) //! height
              .attr({}).css({}).add();
            this.renderer.text('text goes here', ((this.chartWidth - this.plotWidth) / 2),
              this.chartHeight - ((this.chartHeight - this.plotHeight) / 2) + 20).attr({}).css({ color: '#0000aa' }).add();
            this.renderer.text('text goes here', ((this.chartWidth - this.plotWidth) / 2),
              this.chartHeight - (this.chartHeight - this.plotHeight) / 2).attr({}).css({}).add();
            this.myLabel = this.renderer.text(['Timestamp 2'], ((this.chartWidth - this.plotWidth) / 2),
              ((this.chartHeight - this.plotHeight) / 2) + 20).attr({}).css({}).add();
          }
        }
      }
    }
  },
  series: [{
    type: 'gauge',
    name: 'Current Temp',
    color: 'black',
    data: [0],
    dial: {
      backgroundColor: 'black',
      borderWidth: 0,
      baseWidth: 3,
      topWidth: 1,
      rearLength: '0%'
    },
    tooltip: {
      valueSuffix: ' Deg F'
    }
  }, {
    type: 'gauge',
    name: 'Daily Max Temp',
    color: 'red',
    data: [0],
    dial: {
      backgroundColor: 'red',
      borderWidth: 0,
      baseWidth: 1,
      topWidth: 1,
      rearLength: '0%'
    },
    tooltip: {
      valueSuffix: ' Deg F'
    }
  }, {
    type: 'gauge',
    name: 'Daily Min Temp',
    color: 'blue',
    data: [0],
    dial: {
      backgroundColor: 'blue',
      borderWidth: 0,
      baseWidth: 1,
      topWidth: 1,
      rearLength: '0%'
    },
    tooltip: {
      valueSuffix: ' Deg F'
    }
  }]
}, function(chart) {
  if (!chart.renderer.forExport) {
    setInterval(function() {
      var pointcurrent = chart.series[0].points[0];
      var pointmax = chart.series[1].points[0];
      var pointmin = chart.series[2].points[0];
      jQuery.get('file location of my CSV', function(data) {
        const dataarray = data.split(",");
        pointcurrent.update(parseFloat(dataarray[4]));
        pointmax.update(parseFloat(dataarray[1]));
        pointmin.update(parseFloat(dataarray[2]));
        chart.myLabel.attr({ text: dataarray[0] });
      });
    }, 3000);
  }
});
</script>
</div>
</body>
</html>

I was expecting the dynamic timestamp label to display on exported images of the gauge. The correct timestamp displays just fine on the gauge in a web browser; however, that same timestamp does not display on exported images. Dynamic series data updates and displays just fine with this code.

A: For this configuration, you can export the actual timestamp during export to images. Is that the solution you are looking for?

document.getElementById('export').addEventListener('click', function() {
  timeStamp = chart.time.dateFormat('%Y-%m-%d %H:%M:%S', Date.now());
  chart.exportChartLocal({
    type: 'application/pdf',
    width: 600,
  }, {
    title: {
      text: 'It works'
    },
  });
})

You can also define time as a separate instance:
https://api.highcharts.com/class-reference/Highcharts.Time

events: {
  load: function() {
    const time = new Highcharts.Time();
    const timeStamp = time.dateFormat('%Y-%m-%d %H:%M:%S', Date.now());

    this.myLabel = this.renderer.text(timeStamp, ((this.chartWidth - this.plotWidth) / 2), ((this.chartHeight - this.plotHeight) / 2) + 20).attr({}).css({}).add();
  },
}

Demo: https://jsfiddle.net/BlackLabel/mbocn91h/
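The reason 'Timestamp 2' shows up in exports is that the `exporting.chartOptions` load handler hard-codes its label text, while the live chart's label is updated later via `chart.myLabel.attr()`. One way around this, shown here as a sketch of the idea only (Highcharts and jQuery are stubbed out, and the CSV row below is made up), is to keep the latest timestamp in a shared variable that both load handlers read at the moment they run:

```javascript
// Keep the most recent CSV timestamp in one shared variable, and have BOTH
// load handlers (the live chart's and the export chartOptions') read it,
// instead of hard-coding 'Timestamp 2' in the export copy.
let latestTimestamp = 'no data yet';

// Stand-in for the jQuery.get callback that parses the one-row CSV.
function onCsvRow(csvRow) {
  latestTimestamp = csvRow.split(',')[0];
}

// Stand-in for the chart.events.load handler inside exporting.chartOptions:
// in the real chart this would call this.renderer.text(latestTimestamp, ...).
function exportLoadHandler() {
  return latestTimestamp;
}

onCsvRow('2023-01-02 10:05,55.1,31.7,48.2');  // invented sample row
const exportedLabel = exportLoadHandler();
```

Because the export copy of the chart is built fresh at export time, its load handler fires then, and reading the shared variable at that moment picks up whatever the last CSV poll delivered.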
Q: Is there a way to use a using-declaration inside a requires-expression

I want to test whether a type can be passed to some function, but I'd like to use ADL on the function lookup and include a function from a certain namespace. Consider this code:

#include <utility>
#include <vector>

template<class T>
concept Swappable = requires(T& a, T& b) {
    swap(a, b);
};

static_assert(Swappable<std::vector<int>>); // #1
static_assert(Swappable<int>);              // #2

#1 succeeds: it finds std::swap because std is an associated namespace of std::vector<int>. But #2 fails; a built-in type has no associated namespace. How would I write something like:

template<class T>
concept Swappable = requires(T& a, T& b) {
    using std::swap; // illegal
    swap(a, b);
};

AFAIK, you're not allowed to use a using-declaration inside a requires-expression.

(NOTE: Although there is a perfectly fine standard C++ concept for this, std::swappable, this example uses swap for exposition only. I'm not particularly looking to test whether something is actually swappable; I'm just trying to find a way to implement such a concept where a customization function has a default implementation in a known namespace, but might have overloads in an associated namespace.)

EDIT: As a workaround, I can implement the concept in a separate namespace where the names are pulled in. Not too happy about it, but it works.

namespace detail {
    using std::swap;

    template<class T>
    concept Swappable = requires(T& a, T& b) {
        swap(a, b);
    };
}

// and then either use it
using detail::Swappable;
// or redefine it
template<class T>
concept Swappable = detail::Swappable<T>;

A: You can put it inside a lambda:

template<class T>
concept Swappable = []{
    using std::swap;
    return requires(T& a, T& b) { swap(a, b); };
}();

A: Avoid using old using-based idioms. Instead, use the customization point equivalents like ranges::swap. That is, you should not require users to use using-based idioms in their code. Provide a customization point object that does what it needs to. The operator() overloads/templates can be constrained to create the effect of the using idiom without requiring the user to actually invoke using. ranges::swap is a good example of how this gets done.
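A minimal sketch of what such a customization point object can look like (the namespace `mylib` and the type `swap_fn` are invented for this example; std::ranges::swap is considerably more careful, e.g. about constraints, arrays, and poison pills):

```cpp
#include <utility>
#include <vector>

namespace mylib {
namespace detail {
// Pull std::swap into scope ONCE, here, so the unqualified swap(a, b) below
// considers both std::swap (for built-ins) and any ADL-found user overload.
using std::swap;

struct swap_fn {
    template <class T>
    void operator()(T& a, T& b) const { swap(a, b); }
};
}  // namespace detail

// The customization point object: users just call mylib::swap(a, b) and never
// need to write the using-declaration two-step themselves.
constexpr detail::swap_fn swap{};
}  // namespace mylib

// Small self-check: works for both a built-in type and a std::vector.
bool demo() {
    int a = 1, b = 2;
    mylib::swap(a, b);

    std::vector<int> v{1}, w{2, 2};
    mylib::swap(v, w);

    return a == 2 && b == 1 && v.size() == 2 && w.size() == 1;
}
```

The concept from the question can then simply be written in terms of `mylib::swap(a, b)`, with no using-declaration inside the requires-expression at all.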
Q: ImportError with tkinter

So I found a tutorial about working with GUIs in Python's tkinter, then tried to learn it from W3Schools. I copied the sample code:

from tkinter import *
from tkinter .ttk import *

root = Tk()
label = Label(root, text="Hello world Tkinket GUI Example ")
label.pack()
root.mainloop()

So I googled how to install tkinter on Ubuntu. I used:

$ sudo apt-get install python-tk python3-tk tk-dev
$ sudo apt-get install python-tk
$ pip install tk

It seemed to succeed, but I was wrong... I get this error.

Ubuntu 22.04.1 LTS

A: Generally what you want is

import tkinter as tk    # 'as tk' isn't required, but it's common practice
from tkinter import ttk # though you aren't using any ttk widgets at the moment...

I know star imports have a certain appeal, but they can lead to namespace pollution, which is a huge headache! For example, let's say I've done the following:

from tkinter import *
from tkinter.ttk import *

root = Tk()
label = Label(root, text='Hello!')

Is Label a tkinter widget or a ttk widget? Conversely...

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
label = ttk.Label(root)

Here, it's clear that label is a ttk widget. Now everything is namespaced appropriately, and everyone's happy!

A: I believe you only need to use: from tkinter import *
I'd say get rid of the: from tkinter .ttk import *

A: I think you can just remove the line: from tkinter .ttk import *
I don't think you need that line to run this code.
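The shadowing problem the first answer describes is easy to demonstrate without opening a window. `math` and `cmath` are used below purely as stand-ins for `tkinter` and `tkinter.ttk` (chosen because they need no display): both module pairs export a clashing name (`sqrt` here, `Label` there), and the second star import silently wins:

```python
from math import *   # brings in math.sqrt, which raises ValueError on negatives
from cmath import *  # silently REPLACES sqrt with cmath.sqrt

# Which sqrt did we get? Whichever module was star-imported LAST -- exactly
# the ambiguity the answer describes for tkinter.Label vs ttk.Label.
result = sqrt(-1)  # cmath.sqrt: returns a complex number instead of raising
```

Swap the order of the two imports and the same call raises `ValueError` instead, which is precisely why explicit `import tkinter as tk` / `from tkinter import ttk` is the safer style.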
Q: React, update state with .map() and spread operator in nested array of objects

I'm still a newbie in React and I've been struggling for some time with the topic mentioned above. I've got a state which looks like this:

const [questions, setQuestions] = React.useState([])

React.useEffect(() => {
    fetch('https://the-trivia-api.com/api/questions?limit=5&difficulty=medium')
        .then((response) => response.json())
        .then((data) => {
            let questionsData = data.map((item) => {
                return {
                    id: nanoid(),
                    questionText: item.question,
                    answerOptions: [
                        { id: nanoid(), answerText: item.correctAnswer, isCorrect: true, selected: false },
                        { id: nanoid(), answerText: item.incorrectAnswers[0], isCorrect: false, selected: false },
                        { id: nanoid(), answerText: item.incorrectAnswers[1], isCorrect: false, selected: false },
                        { id: nanoid(), answerText: item.incorrectAnswers[2], isCorrect: false, selected: false },
                    ].sort(() => Math.random() - 0.5),
                };
            })
            setQuestions(questionsData)
        })
}, [])

It's a state that returns me a quiz question and 4 "randomized" buttons. What I'm trying to do is update the state to flip one of the answerOptions (that is, buttons) from selected: false to selected: true. I'm sure it's doable with .map and the spread operator, but I'm really lost with the syntax. selectAnswer is triggered by an onChange from the radio buttons in the child component. I have access to both the question id and each of the answerOptions ids, so accessing those is not a problem. I just can't figure out how to return the state. Below is an example of my failed attempt to do that.

function selectAnswer(answerId, questionId) {
    setQuestions(prevData => prevData.map((item) => {
        return item.answerOptions.map((answerItem) => {
            if (answerId === answerItem.id) {
                return { ...item, [answerOptions[answerItem].selected]: !answerOptions[answerItem].selected }
            } else return { ...item }
        })
    }))
}

Thank you for your time in advance

A: function selectAnswer(answerId, questionId) {
  setQuestions((prevData) =>
    prevData.map((item) => ({
      ...item,
      answerOptions: item.answerOptions.map((answerItem) => ({
        ...answerItem,
        selected: answerId === answerItem.id,
      })),
    }))
  );
}

A: In order to update the state of your component to set the selected property of one of the answerOptions to true, you can use the map method to iterate over the existing questions in the state, find the answerOption with the matching id, and update its selected property. You can use the spread operator (...) to create a new object for each question that includes the updated answerOption. Here is an example of how you could implement this in your selectAnswer function:

function selectAnswer(answerId, questionId) {
  setQuestions(prevData =>
    prevData.map(question => {
      // Create a new array of answerOptions for this question,
      // with the selected answerOption updated to have its selected property set to true.
      const answerOptions = question.answerOptions.map(answerOption => {
        if (answerOption.id === answerId) {
          return { ...answerOption, selected: true };
        } else {
          return answerOption;
        }
      });

      // Return a new object for this question with the updated answerOptions array.
      return { ...question, answerOptions };
    })
  );
}

In this example, the selectAnswer function uses the map method to iterate over the existing questions in the state. For each question, it creates a new array of answerOptions, with the answerOption that has the matching id updated to have its selected property set to true. It then returns a new object for the question with the updated answerOptions array.

A: Not directly related to the initial question, but I recommend you read this part of the React documentation on how to fetch data using useEffect.
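Stripped of React, the updater in the first answer is just a pure function from one questions array to a new one. The sketch below (with made-up ids, and `selectAnswer` taking the array explicitly instead of receiving `prevData` from `setQuestions`) shows both the update and the fact that the previous state is left untouched, which is what React's immutability rules require:

```javascript
// Framework-free version of the updater: returns a NEW array in which only
// the answer option with the matching id has selected === true.
function selectAnswer(questions, answerId) {
  return questions.map(question => ({
    ...question,
    answerOptions: question.answerOptions.map(option => ({
      ...option,
      selected: option.id === answerId,
    })),
  }));
}

// Invented minimal state shape for demonstration.
const before = [{
  id: 'q1',
  answerOptions: [
    { id: 'a1', selected: false },
    { id: 'a2', selected: false },
  ],
}];

const after = selectAnswer(before, 'a2');
```

Because every level is rebuilt with the spread operator, `after` is a fresh object tree while `before` keeps its original values, so React can detect the change by reference comparison.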
Q: How to avoid "setState never used" when using useContext hook I know that if you have something like this... const [state, setState] = useState('some state'); and you are getting the warning for setState is never used, you should simply not use useState, and instead use a normal variable. My question is, what if this is useContext instead of useState as shown below? I have this structure for other components but some components do not need to setState for this context. How can I handle this warning? Is eslint my only option here? const [state, setState] = useContext(myContext); I have another component that uses the setState but not the state. How would I handle this? A: Simply don't destructure it if you are not going to use it: const [state] = useContext(myContext); Your useContext (and useState) is returning an array. Just destructure the things which are in these positions: const useState = () => ["hi", "mom"]; const [position_0, position_1] = useState(); console.log(position_0) // "hi" console.log(position_1) // "mom" If you'd like to skip a variable, you could use a throwaway variable like: const [, position_1] = useState(); More detail in this post. If you are in control of the context, however, my personal preferred option would be to return an object instead of an array, like so: const context = () => { ...your context here... return { state, setState, ...even more... } } This way, you could simply destructure only the things you need: const { state } = useContext(context); // just state const { setState } = useContext(context); // just setState const { state, setState } = useContext(context); // both A: If you are getting the warning that setState is never used when using the useContext hook, you can simply not destructure setState from the hook and only use the state variable. This will avoid the warning and allow you to access the context state without having to set it.
Here is an example of how you could use this approach to avoid the warning: const myContext = React.createContext(); function MyComponent() { const state = useContext(myContext); return ( <div> {/* Use the context state in your component. */} <p>{state.someValue}</p> </div> ); } In this example, MyComponent only uses the state variable from the useContext hook, so setState is not destructured and the warning is avoided. This allows MyComponent to access the context state without having to set it. Alternatively, you can use the eslint linting tool to suppress the warning by adding a comment that tells eslint to ignore the unused setState variable. This can be useful if you want to use the useContext hook in a way that uses setState
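A runnable sketch of the destructuring options from both answers, with a plain array standing in for the hook's return value (no React involved; all names are illustrative):

```javascript
// Stand-in for what useContext(myContext) would return.
const fakeContext = ["some state", function setState() {}];

// Take only the first position: no unused binding, no warning.
const [state] = fakeContext;

// Skip a position with an empty slot to grab only the setter.
const [, setState] = fakeContext;

// Keep both but silence the linter for the unused one:
// eslint-disable-next-line no-unused-vars
const [stateAgain, setterAgain] = fakeContext;

console.log(state); // "some state"
console.log(typeof setState); // "function"
```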
How to avoid "setState never used" when using useContext hook
I know that if you have something like this... const [state, setState] = useState('some state'); and you are getting the warning for setState is never used, you should simply not use useState, and instead use a normal variable. My question is, what if this is useContext instead of useState as shown below? I have this structure for other components but some components do not need to setState for this context. How can I handle this warning? Is eslint my only option here? const [state, setState] = useContext(myContext); I have another component that uses the setState but not the state. How would I handle this?
[ "Simply don't destructure it if you are not going to use it:\nconst [state] = useContext(myContext);\n\nYour useContext (and useState) is returning an array. Just destructure the things which are in these positions:\nconst useState = () => [\"hi\", \"mom\"];\n\nconst [position_0, position_1] = useState();\n\n\nconsole.log(postition_0) // \"hi\"\nconsole.log(postition_1) // \"mom\"\n\n\nIf you like to skip a variable, you could use a throwaway variable like:\nconst [, position_1] = useState();\n\nMore detail in this post.\nIf you are in control of the context, however, my personal preferred option would be to return an object instead of an array, like so:\nconst context = () => {\n ...your context here...\n\n return {\n state,\n setState,\n ...even more...\n }\n)\n\nThis way, you could simply destructure only the things you need:\nconst { state } = useContext(context); // just state\nconst { setState } = useContext(context); // just setState\nconst { state, setState } = useContext(context); // both\n\n", "If you are getting the warning that setState is never used when using the useContext hook, you can simply not destructure setState from the hook and only use the state variable. This will avoid the warning and allow you to access the context state without having to set it.\nHere is an example of how you could use this approach to avoid the warning:\nconst myContext = React.createContext();\n\nfunction MyComponent() {\n const state = useContext(myContext);\n\n return (\n <div>\n {/* Use the context state in your component. */}\n <p>{state.someValue}</p>\n </div>\n );\n}\n\nIn this example, MyComponent only uses the state variable from the useContext hook, so setState is not destructured and the warning is avoided. This allows MyComponent to access the context state without having to set it.\nAlternatively, you can use the eslint linting tool to suppress the warning by adding a comment that tells eslint to ignore the unused setState variable. 
This can be useful if you want to use the useContext hook in a way that uses setState\n" ]
[ 2, 0 ]
[]
[]
[ "react_context", "react_hooks", "reactjs", "state" ]
stackoverflow_0074659794_react_context_react_hooks_reactjs_state.txt
Q: Execute AWS ECS run-task with network configuration overrides I'm running a task in AWS ECS using the CLI command run-task. I'm successfully running a task as follows: aws ecs run-task --cluster ${stackName}-cluster \ --task-definition ${stackName}-${tag} \ --launch-type="FARGATE" \ --network-configuration '{ "awsvpcConfiguration": { "assignPublicIp":"DISABLED", "securityGroups": ["sg-......"], "subnets": ["subnet-.....","subnet-.....","subnet-......"]}}' \ --count 1 \ --profile ${profile} \ --overrides file://overrides.json The way I understand it, if you're using FARGATE you must have NetworkMode: awsvpc in your TaskDefinition, and you need to specify the awsvpcConfiguration every time you run a task. This is all fine. However, to make the above invocation tidier, is there a way to pass the --networkConfiguration above as an override? The documentation says you can pass environment variables, but it's not clear if this includes network. I would be very grateful to anybody who could shed some light on this. A: No, you can't do that. Here's the full list of things you can specify in ECS Task Overrides. Network configuration is not in that list. The documentation says you can pass environment variables, but it's not clear if this includes network. The network configuration is not an environment variable. If you just want to be able to simplify the command line by passing in more arguments from a file, you can use the --cli-input-json or --cli-input-yaml arguments.
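To illustrate the `--cli-input-json` suggestion: `aws ecs run-task --generate-cli-skeleton` prints a full input template, and the whole network block can then live in one file. Everything below uses placeholder IDs, not values from the question:

```shell
# To start from a full template (valid aws CLI flag):
#   aws ecs run-task --generate-cli-skeleton > run-task-input.json

# Write the run-task input (placeholder cluster/subnet/sg values).
cat > run-task-input.json <<'EOF'
{
  "cluster": "my-cluster",
  "taskDefinition": "my-task:1",
  "launchType": "FARGATE",
  "count": 1,
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "assignPublicIp": "DISABLED",
      "securityGroups": ["sg-0123456789abcdef0"],
      "subnets": ["subnet-0123456789abcdef0"]
    }
  }
}
EOF

# The invocation then shrinks to a single flag (shown, not executed here):
echo "aws ecs run-task --cli-input-json file://run-task-input.json --profile myprofile"
```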
Execute AWS ECS run-task with network configuration overrides
I'm running a task in AWS ECS using the CLI command run-task. I'm successfully running a task as follows: aws ecs run-task --cluster ${stackName}-cluster \ --task-definition ${stackName}-${tag} \ --launch-type="FARGATE" \ --network-configuration '{ "awsvpcConfiguration": { "assignPublicIp":"DISABLED", "securityGroups": ["sg-......"], "subnets": ["subnet-.....","subnet-.....","subnet-......"]}}' \ --count 1 \ --profile ${profile} \ --overrides file://overrides.json The way I understand it, if you're using FARGATE you must have NetworkMode: awsvpc in your TaskDefinition, and you need to specify the awsvpcConfiguration every time you run a task. This is all fine. However, to make the above invocation tidier, is there a way to pass the --networkConfiguration above as an override? The documentation says you can pass environment variables, but it's not clear if this includes network. I would be very grateful to anybody who could shed some light on this.
[ "No you can't do that. Here's the full list of things you can specify in ECS Task Overrides. Network configuration is not in that list.\n\nThe documentation says you can pass environment variables, but it's not clear if this includes network.\n\nThe network configuration is not an environment variable.\n\nIf you just want to be able to simplify the the command line by passing in more arguments from a file, you can use the --cli-input-json\nor --cli-input-yaml arguments.\n" ]
[ 0 ]
[]
[]
[ "amazon_ecs", "amazon_vpc", "aws_fargate" ]
stackoverflow_0074659671_amazon_ecs_amazon_vpc_aws_fargate.txt
Q: webview_flutter doesn't recognize taps on iOS I have integrated the WebView widget from webview_flutter into my flutter application. The problem I am facing is that WebView works perfectly on Android, but on iOS WebView doesn't recognize button taps. Widget build(BuildContext context) { var mediaQuery = MediaQuery.of(context); return SingleChildScrollView( child: SizedBox( height: mediaQuery.size.height - 60 - mediaQuery.padding.top, child: WebView( initialUrl: widget.url, javascriptMode: JavascriptMode.unrestricted, javascriptChannels: { _extractDataJSChannel(context), }, onWebViewCreated: (WebViewController webViewController) { _controller = webViewController; }, onPageFinished: (_) async { // TODO }, ), ), ); } I am using flutter 1.27.0-8.0.pre and webview_flutter: ^1.0.7. I have also tried to use the latest package version and 2.2.3 version of flutter and the issue persists. Any feedback is highly appreciated! A: To solve this problem on iOS, simply add the following lines to your Info.plist file, which can be found under the runner folder of your iOS project for your flutter app. <key>NSAppTransportSecurity</key> <dict> <key>NSAllowsArbitraryLoads</key><true/> </dict> Hope this solves the problem for you. For some reason the webview plugin blocks all taps until I added the lines above to my plist.
webview_flutter doesn't recognize taps on iOS
I have integrated the WebView widget from webview_flutter into my flutter application. The problem I am facing is that WebView works perfectly on Android, but on iOS WebView doesn't recognize button taps. Widget build(BuildContext context) { var mediaQuery = MediaQuery.of(context); return SingleChildScrollView( child: SizedBox( height: mediaQuery.size.height - 60 - mediaQuery.padding.top, child: WebView( initialUrl: widget.url, javascriptMode: JavascriptMode.unrestricted, javascriptChannels: { _extractDataJSChannel(context), }, onWebViewCreated: (WebViewController webViewController) { _controller = webViewController; }, onPageFinished: (_) async { // TODO }, ), ), ); } I am using flutter 1.27.0-8.0.pre and webview_flutter: ^1.0.7. I have also tried to use the latest package version and 2.2.3 version of flutter and the issue persists. Any feedback is highly appreciated!
[ "To solve this problem on iOS, simply add the following line of code to your info.plist file, which can be found under the runner folder of your iOS project for your flutter app.\n<key>NSAppTransportSecurity</key>\n <dict>\n <key>NSAllowsArbitraryLoads</key><true/>\n</dict>\n\nHope this solves the problem for you. for some reason the webview plugin blocks all taps until I added the line above to my plist.\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "flutter_dependencies", "webview", "webview_flutter" ]
stackoverflow_0069372742_dart_flutter_flutter_dependencies_webview_webview_flutter.txt
Q: QML: Bubble event to parent How can a component inform its parents when a certain action happens? I mean something like event bubbling in JavaScript. https://www.w3docs.com/learn-javascript/bubbling-and-capturing.html For example some elements in a dialog can send an "Ok" or "Cancel" action. The parent item does not know all the child items in advance. I would add something like: Widget { signal cancel signal ok ... } ParentItem { id: myParentItem onCancel { ... } onOk { ... } Widget { id: first } Widget { id: second } // no connection section needed. Auto-connect signals by name. } } Edit: Note: adding a separate Widget and then a connection is a bit impractical. Someone can forget to add one or the other; moreover, when deleting and renaming, one can remove only one part or rename one part incorrectly. Calling parent.functionName is impractical too because then such a Widget can be used only in parents having functionName. A: One idea is to search through all the children and check their type. If they are the Widget type, then connect their signals to some ParentItem function. I haven't tested this code, but something like it should work. ParentItem { id: myParentItem function doCancel() { ... } function doOk() { ... } Component.onCompleted: { for (var i = 0; i < children.length; i++) { if (children[i] instanceof Widget) { children[i].onOk.connect(doOk); children[i].onCancel.connect(doCancel); } } } Widget { id: first } Widget { id: second } }
QML: Bubble event to parent
How can a component inform its parents when a certain action happens? I mean something like event bubbling in JavaScript. https://www.w3docs.com/learn-javascript/bubbling-and-capturing.html For example some elements in a dialog can send an "Ok" or "Cancel" action. The parent item does not know all the child items in advance. I would add something like: Widget { signal cancel signal ok ... } ParentItem { id: myParentItem onCancel { ... } onOk { ... } Widget { id: first } Widget { id: second } // no connection section needed. Auto-connect signals by name. } } Edit: Note: adding a separate Widget and then a connection is a bit impractical. Someone can forget to add one or the other; moreover, when deleting and renaming, one can remove only one part or rename one part incorrectly. Calling parent.functionName is impractical too because then such a Widget can be used only in parents having functionName.
[ "One idea is to search through all the children and check their type. If they are the Widget type, then connect their signals to some ParentItem function. I haven't tested this code, but something like it should work.\nParentItem {\n id: myParentItem\n\n function doCancel() { ... }\n function doOk() { ... }\n\n Component.onCompleted: {\n for (var i = 0; i < children.length; i++) {\n if (children[i] instanceOf Widget) {\n children[i].onOk.connect(doOk);\n children[i].onCancel.connect(doCancel);\n }\n }\n }\n\n Widget {\n id: first\n }\n Widget {\n id: second\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "qml", "qt" ]
stackoverflow_0074657340_qml_qt.txt
Q: How do I solve 'DATE_TRUNC' is not a recognized built-in function name in sql SELECT DBO.DATE_TRUNC('day',occurred_at) AS day, channel, COUNT(*) as events FROM web_events GROUP BY 1,2 ORDER BY 3 DESC; I have tried adding dbo before the function but I get Cannot find either column "DBO" or the user-defined function or aggregate "DBO.DATE_TRUNC", or the name is ambiguous. All I want is for my date column to be truncated A: From the comments, you are using SQL Server 2019. DATE_TRUNC() isn't available to you as it is new in SQL Server 2022. If you just need the day number, use DATEPART() or cast to a date like this: --General example. SELECT DATEPART(day, GETDATE()) as [day]; -- Or, Just cast to a date. SELECT CAST(GETDATE() as date); --Or specifically for the query you posted: SELECT [date] , channel , COUNT(*) as events FROM ( SELECT * , DATEPART(day, occurred_at) as [day] , CAST(occurred_at as date) as [date] FROM web_events ) as we GROUP BY [date], [channel]
How do I solve 'DATE_TRUNC' is not a recognized built-in function name in sql
SELECT DBO.DATE_TRUNC('day',occurred_at) AS day, channel, COUNT(*) as events FROM web_events GROUP BY 1,2 ORDER BY 3 DESC; I have tried adding dbo before the function but I get Cannot find either column "DBO" or the user-defined function or aggregate "DBO.DATE_TRUNC", or the name is ambiguous. All I want is for my date column to be truncated
[ "From the comments, you are using SQL Server 2019. DATE_TRUNC() isn't available to you as it is new in SQL Server 2022. If you just need the day number, use DATEPART() or cast to a date like this:\n--General example.\nSELECT DATEPART(day, GETDATE()) as [day];\n-- Or, Just cast to a date.\nSELECT CAST(GETDATE() as date);\n\n--Or specifically for the query you posted:\nSELECT \n [date]\n , channel\n , COUNT(*) as events\nFROM (\n SELECT *\n , DATEPART(day, occurred_at) as [day]\n , CAST(occurred_at as date) as [date]\n FROM web_events\n ) as we\nGROUP BY [date], [channel]\n\n" ]
[ 1 ]
[]
[]
[ "sql", "sql_server" ]
stackoverflow_0074639260_sql_sql_server.txt
Q: Concatenating lists with recursion f# I have an exercise that asks me to make a recursive function that uses @ to create a list of [1;2;3..n]. Unfortunately I cannot get this method to work. let lstInt: int list = [] let rec oneToN (n:int) : int list = let addList = [n] match n with 0 -> lstInt |_ -> lstInt@addList oneToN (n-1) I have tried making my list mutable, but that doesn't seem to actually matter nor make much sense as you can still add and remove elements from lists in f# even though it is not mutable. I have also tried removing space between @ but that shouldn't matter either. Edit: I should clarify, the issue is the lstInt@addList, which gives me the error: "The result of this expression has type 'int list' and is implicitly ignored. Consider using 'ignore' to discard this value explicitly, e.g. 'expr |> ignore', or 'let' to bind the result to a name, e.g. 'let result = expr" A: That warning is not the issue, but it points you to the issue: you're creating a new list which is a concatenation of an empty list and [n], but then you're doing nothing with that new list. It's just dropped on the floor. After that, you proceed to call oneToN (n-1) recursively and return its result. At the end of recursion, the very last call to oneToN will ultimately return an empty list, and that will be the return value of every previous iteration, since every iteration (except the last one) returns whatever the next iteration returns. What you need to do is call oneToN (n-1), which will give you a list of numbers from 1 to n-1, and then append [n] to that list. And the result of that appending would be your return value: after all, if you take a list of numbers from 1 to n-1 and attach n to the end of it, you'll get a list of numbers from 1 to n. let rec oneToN (n:int) : int list = let addList = [n] match n with 0 -> lstInt |_ -> (oneToN (n-1)) @ addList
Concatenating lists with recursion f#
I have an exercise that asks me to make a recursive function that uses @ to create a list of [1;2;3..n]. Unfortunately I cannot get this method to work. let lstInt: int list = [] let rec oneToN (n:int) : int list = let addList = [n] match n with 0 -> lstInt |_ -> lstInt@addList oneToN (n-1) I have tried making my list mutable, but that doesn't seem to actually matter nor make much sense as you can still add and remove elements from lists in f# even though it is not mutable. I have also tried removing space between @ but that shouldn't matter either. Edit: I should clarify, the issue is the lstInt@addList, which gives me the error: "The result of this expression has type 'int list' and is implicitly ignored. Consider using 'ignore' to discard this value explicitly, e.g. 'expr |> ignore', or 'let' to bind the result to a name, e.g. 'let result = expr"
[ "That warning is not the issue, but it points you to the issue: you're creating a new list which is a concatenation of an empty list and [n], but then you're doing nothing with that new list. It's just dropped on the floor.\nAfter that, you proceed to call oneToN (n-1) recursively and return its result. At the end of recursion, the very last call to oneToN will ultimately return an empty list, and that will be the return value of every previous iteration, since every iteration (except the last one) returns whatever the next iteration returns.\nWhat you need to do is call oneToN (n-1), which will give you a list of numbers from 1 to n-1, and then append [n] to that list. And the result of that appending would be your return value: after all, if you take a list of numbers from 1 to n-1 and attach n to the end of it, you'll get a list of numbers from 1 to n.\nlet rec oneToN (n:int) : int list =\n let addList = [n]\n match n with\n 0 -> \n lstInt\n |_ ->\n (oneToN (n-1)) @ addList\n\n" ]
[ 1 ]
[]
[]
[ "f#", "f#_interactive" ]
stackoverflow_0074659308_f#_f#_interactive.txt
Q: Could not load file or assembly 'Microsoft.IdentityModel I have developed a SharePoint provider hosted (MVC) app and hosted the web application in IIS on an Azure VM (WIN SERVER 2012). When we try to use the application through an app, it throws the below error. Server Error in '/' Application. Could not load file or assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' could not be loaded. WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. I have already installed .NET 3.5 and 4.5 on the server but am still having the same error. Can anyone help me? A: You need Windows Identity Foundation either installed on your server or in your project.
A: It happens because of broken permissions on the file Microsoft.IdentityModel.dll. The solution is: add the following 3 entries to the %plesk_dir%\etc\DiskSecurity\DiskSecurity.xml file: <Entry AccounType="1" Account="Psaadm" Path="{ProgramFilesX86}" SubPath="Reference Assemblies" AceFlags="ThisFolderSubfoldersAndFiles" AccessMask="NoAccess" EntryFlags="0" /> <Entry AccounType="1" Account="Psacln" Path="{ProgramFilesX64}" SubPath="Reference Assemblies" AceFlags="ThisFolderSubfoldersAndFiles" AccessMask="NoAccess" EntryFlags="0" /> <Entry AccounType="1" Account="Psaadm" Path="{ProgramFilesX64}" SubPath="Reference Assemblies" AceFlags="ThisFolderSubfoldersAndFiles" AccessMask="NoAccess" EntryFlags="0" /> below the following entry: <!-- Program Files\\Reference Assemblies --> <Entry AccounType="1" Account="Psacln" Path="{ProgramFilesX86}" SubPath="Reference Assemblies" AceFlags="ThisFolderSubfoldersAndFiles" AccessMask="NoAccess" EntryFlags="0" /> A: Thanks to the people who responded to me. I could resolve the error. It was a missing assembly in my project. A: Go to Search – look for Windows features in Settings. Check the Windows Identity Foundation 3.5 option. Restart the machine and we are done.
Could not load file or assembly 'Microsoft.IdentityModel
I have developed a SharePoint provider hosted (MVC) app and hosted the web application in IIS on an Azure VM (WIN SERVER 2012). When we try to use the application through an app, it throws the below error. Server Error in '/' Application. Could not load file or assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' could not be loaded. WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. I have already installed .NET 3.5 and 4.5 on the server but am still having the same error. Can anyone help me?
[ "You need Windows Identity Foundation either installed on your server or in your project.\n", "It happens because of Broken permissions on file Microsoft.IdentityModel.dll\nSolution is;\nAdd the following 3 entries to %plesk_dir%\\etc\\DiskSecurity\\DiskSecurity.xml file: \n<Entry AccounType=\"1\" Account=\"Psaadm\" Path=\"{ProgramFilesX86}\" \nSubPath=\"Reference Assemblies\" AceFlags=\"ThisFolderSubfoldersAndFiles\" \nAccessMask=\"NoAccess\" EntryFlags=\"0\" /> \n<Entry AccounType=\"1\" Account=\"Psacln\" Path=\"{ProgramFilesX64}\" \nSubPath=\"Reference Assemblies\" AceFlags=\"ThisFolderSubfoldersAndFiles\" \nAccessMask=\"NoAccess\" EntryFlags=\"0\" /> \n<Entry AccounType=\"1\" Account=\"Psaadm\" Path=\"{ProgramFilesX64}\" \nSubPath=\"Reference Assemblies\" AceFlags=\"ThisFolderSubfoldersAndFiles\" \nAccessMask=\"NoAccess\" EntryFlags=\"0\" />\n\nbelow the following entry:\n<!-- Program Files\\\\Reference Assemblies --> \n<Entry AccounType=\"1\" Account=\"Psacln\" Path=\"{ProgramFilesX86}\" \nSubPath=\"Reference Assemblies\" AceFlags=\"ThisFolderSubfoldersAndFiles\" \nAccessMask=\"NoAccess\" EntryFlags=\"0\" />\n\n", "Thanks for people who response to me. I could resolve the error. It was the missing assembly in my project.\n", "Go to Search – look for Windows feature in Settings\n\nCheck the Windows Identity Foundation 3.5 option.\n\nRestart the machine and we are done.\n" ]
[ 6, 2, 1, 0 ]
[]
[]
[ ".net_assembly", "asp.net_mvc", "sharepoint_2013" ]
stackoverflow_0041937546_.net_assembly_asp.net_mvc_sharepoint_2013.txt
Q: Utility class in Spring application - should I use static methods or not? Let's say I have a utility class DateUtil (see below). To use this method a caller method uses DateUtils.getDateAsString(aDate). Would it be better to remove the static modifier and make DateUtil a Spring bean (see DateUtilsBean) and inject it into calling classes, or just leave it as is? One disadvantage I can see with using static is issues around mocking, see How to mock with static methods? public class DateUtils { public static String getDateAsString(Date date) { String retValue = ""; // do something here using date parameter return retValue; } } Spring Bean version @Component public class DateUtilsBean { public String getDateAsString(Date date) { String retValue = ""; // do something here using date parameter return retValue; } } A: I don't think so. A DateUtils class sounds like a pure utility class that doesn't have any side effects but just processes input parameters. That kind of functionality may as well remain in a static method. I don't think it's very likely that you'll want to mock date helper methods. A: I agree with Sean Patrick Floyd. This is my criterion: if the methods of the class operate only on the parameters they receive, with no external dependencies (database, file system, user config, other objects/beans, etc.), then I would do it with static methods, usually in a final class with a private constructor. Otherwise, I would implement it using a Spring bean. So, in the case that you raise, according to this criterion, I would write a class with static methods. Regards. A: It would be better to declare it as a Spring bean because the life cycle of it is then managed by Spring, and you can eventually inject dependencies, pool the object, as well as test it in a proper way, not to mention that you could use it as a regular object and pass it as a parameter, redefine the method in subclasses... etc. In short, yes, it would be a better design in most cases.
Nevertheless, in a case as simple as the one exposed, it doesn't make a great difference. A: There is a good answer in this comment: Should I use static at all in spring singleton beans. A static variable has one instance per JVM. A singleton exists per Spring context. If your application has only one context, they do the same job. But do not mix them together. In some old projects, I see some not-pure utility classes using some Spring bean via ApplicationContext. It is impossible to write a test case for it.
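A minimal sketch of the static-utility option favored above (final class, private constructor, static method); the date format chosen is illustrative, not from the question:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// A pure, side-effect-free helper: a static method in a final,
// non-instantiable class; no Spring bean required.
final class DateUtils {

    private DateUtils() {
        // Prevent instantiation: this class only hosts static helpers.
    }

    static String getDateAsString(Date date) {
        // Illustrative format; SimpleDateFormat is not thread-safe,
        // so a fresh instance is created per call.
        return new SimpleDateFormat("yyyy-MM-dd").format(date);
    }
}
```

The Spring-bean variant earns its keep only once the class needs injected collaborators or has to be mocked in tests.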
Utility class in Spring application - should I use static methods or not?
Let's say I have a utility class DateUtil (see below). To use this method a caller method uses DateUtils.getDateAsString(aDate). Would it be better to remove the static modifier and make DateUtil a spring bean (see DateUtilsBean) and inject it into calling classes or just leave it as is? One disadvantage I can see with using static is issues around mocking, see How to mock with static methods? public class DateUtils { public static String getDateAsString(Date date) { String retValue = "" // do something here using date parameter return retValue; } } Spring Bean version @Component public class DateUtilsBean { public String getDateAsString(Date date) { String retValue = "" // do something here using date parameter return retValue; } }
[ "I don't think so. A DateUtils class sounds like a pure utility class that doesn't have any side effects but just processes input parameters. That kind of functionality may as well remain in a static method. I don't think it's very likely that you'll want to mock date helper methods.\n", "I agree with Sean Patrick Floyd.\nThis is my criterion: if the methods of the class do things only over the parameters they receive, with no external dependencies (database, file system, user config, other objects/beans, etc.), then I would do it with static methods, usually in a final class with a private constructor.\nOtherwise, I would implement it using a Spring bean.\nSo, in the case that you raise, according to this criterion, I would write a class with static methods.\nRegards.\n", "It would be better to declare it as a Spring bean because the life cycle of it is then managed by Spring, and you can eventually inject dependencies, pool the object, as well as test it in a proper way, not to talk that you could use it as a regular object and pass it as parameter, redefine the method in subclasses... etc.\nIn short, yes it would be a better design in most cases. Nevertheless, in a case as simple as the exposed, it doesn't do a great difference.\n", "A good answer in this comment Should I use static at all in spring singleton beans\nstatic variable has one instance in one JVM. Singleton exists per spring context. If your application has only one context, they do the same job.\nBut do not mix them together. In some old projects, I see some not-pure utility class using some spring bean via ApplicationContext. It is impossible to write test case for it.\n" ]
[ 50, 30, 18, 0 ]
[]
[]
[ "java", "methods", "spring", "static" ]
stackoverflow_0007270681_java_methods_spring_static.txt
Q: How many arithmetic operations should it take to calculate trig functions? I'm trying to assess the expected performance of calculating trigonometry functions as a function of the required precision. Obviously the wall clock time depends on the speed of the underlying arithmetic, so factoring that out by just counting the number of operations: Using state-of-the-art algorithms, how many arithmetic operations (add, subtract, multiply, divide) should it take to calculate sin(x), as a function of the number of bits (or decimal digits) of precision required in the output? A: ... to assess the expected performance of calculating trigonometry functions as a function of the required precision. Look at the first omitted term in the Taylor series for sine at x = π/4 as the order of the error. Details: sin(x) usually has these phases: Handling special cases: NaN, infinities. Argument reduction to the primary range, say [-π/4...+π/4]. Really good reduction is hard, as π is irrational, and it involves code that accounts for 50% of sin() time, much of it spent emulating the needed extended precision. (Research K.C. Ng's "ARGUMENT REDUCTION FOR HUGE ARGUMENTS: Good to the Last Bit") Low-quality reduction involves much less: /, truncate, -, *. Calculation over a limited range. This is what many only consider. If done with a Taylor series and needing 53 bits, then about 10-11 terms are needed: Taylor series sine. Yet quality code often uses a pair of crafted polynomials, each of about 4-5 terms, to form the quotient p(x)/q(x). Of course dedicated hardware support in any of these steps greatly increases performance. Note: code for sin() is often paired with cos() code, as extensive use of trig identities simplifies the calculation. I'd expect a software solution for sin() to cost on the order of 25x a common *. This is a rough estimate. To achieve a very low error rate in the ULP, code typically uses a tad more. sine_crap() could get by with only a few terms. 
So when assessing time performance, there is a trade-off with correctness. How good a sin() do you want? A: assess the expected performance of calculating trigonometry functions as a function of the required precision Using the Taylor series as a predictor of the number of ops, worst case x = π/4 (45°), with the error in the calculation on the order of the last term of the series: For 32-bit float, order 6 float ops are needed. For 64-bit double, order 9 float ops are needed. So if time scales by the square of the FP width, double is predicted to take 9/6*2*2, or 6 times as long.
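The order-6 and order-9 figures above can be reproduced by finding the first Taylor term at the worst case x = π/4 that drops below the target precision, since the truncation error is on the order of that term. A quick sketch (pure term counting; real implementations using crafted polynomial pairs, as noted in the first answer, arrive at different op counts):

```python
import math

def first_term_below(x: float, tol: float) -> int:
    """1-based ordinal of the first sine Taylor term smaller than tol."""
    term, k, ordinal = x, 0, 1          # first term: x^1 / 1!
    while term >= tol:
        k += 1
        ordinal += 1
        term *= x * x / ((2 * k) * (2 * k + 1))  # next odd-power term
    return ordinal

x = math.pi / 4
print(first_term_below(x, 2.0 ** -24))  # 6  (32-bit float significand)
print(first_term_below(x, 2.0 ** -53))  # 9  (64-bit double significand)
```

Matching the answer: about 6 terms for float precision and about 9 for double at x = π/4.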
How many arithmetic operations should it take to calculate trig functions?
I'm trying to assess the expected performance of calculating trigonometry functions as a function of the required precision. Obviously the wall clock time depends on the speed of the underlying arithmetic, so factoring that out by just counting number of operations: Using state-of-the-art algorithms, how many arithmetic operations (add, subtract, multiply, divide) should it take to calculate sin(x), as a function of the number of bits (or decimal digits) of precision required in the output?
[ "\n... to assess the expected performance of calculating trigonometry functions as a function of the required precision.\n\nLook as the first omitted term in the Taylor series sine for x = π/4 as the order of error.\n\nDetails: sin(x) usually has these phases:\n\nHandling special cases: NaN, infinities.\n\nArgument reduction to the primary range to say [-π/4...+π/4]. Real good reduction is hard as π is irrational and so involves code that reaches 50% of sin() time. Much time used to emulate the needed extended precision. (Research K.C. Ng's \"ARGUMENT REDUCTION FOR HUGE ARGUMENTS: Good to the Last Bit\")\nLow quality reduction involves much less:/, truncate, -, *.\n\nCalculation over a limited range. This is what many only consider. If done with a Taylor's series and needing 53 bits, then about 10-11 terms are needed: Taylor series sine. Yet quality code often uses a pair of crafted polynomials, each of about 4-5 terms, to form the quotient p(x)/q(x).\n\n\n\nOf course dedicated hardware support in any of these steps greatly increases performance.\n\nNote: code for sin() is often paired with cos() code as extensive use of trig identities simplify the calculation.\n\n\nI'd expect a software solution for sin() to cost on the order of 25x a common *. This is a rough estimate.\nTo achieve a very low error rate in the ULP, code typically uses a tad more. sine_crap() could get by with only a few terms. So when assessing time performance, there is a trade-off with correctness. 
How good a sin() do you want?\n\n", "\nassess the expected performance of calculating trigonometry functions as a function of the required precision\n\nUsing the Taylors series as a predictor of the number of ops, worst case x = π/4 (45°) and the error in the calculation on the order of the last term of the series:\n\n\nFor 32-bit float, order 6 float ops needed.\nFor 64-bit double, order 9 float ops needed.\nSo if time scales by the square of the FP width, double predicted to take 9/6*2*2 or 6 times as long.\n" ]
[ 3, 0 ]
[ "We can calculate any trigonometric function using a simple right angled triangle or using the McLaurin\\Taylor Series. So it really depends on which one you choose to implement. If you only pass an angle as an argument, and wish to calculate the sin of that particular angle, it would take about 4 to 6 steps to calculate the sin using an unit circle.\n" ]
[ -2 ]
[ "floating_accuracy", "floating_point", "math", "precision", "trigonometry" ]
stackoverflow_0074649886_floating_accuracy_floating_point_math_precision_trigonometry.txt
Q: How can I create and trigger a simple onclick JS function in Rails 7 How can I get an onclick function to trigger in a Rails 7 view? I added a simple function: // app/javascript/application.js function myTest(myVar){ alert(myVar); }; myTest("test1"); The alert for "test1" does trigger on page load. However, I want to trigger this in an HTML onclick attribute within my view: // my view <a href="#" onclick="myTest('test2');">test</a> However, the browser console says that the myTest function is not defined. I do see that the function ends up in my application.js file that the browser is loading. // compiled application.js from asset pipeline (() => { // ... all of the other rails JS here function myTest(myVar) { alert(myVar); } myTest("test1"); })(); Shouldn't I be able to trigger this directly from the console also? This also gives me the error that the function does not exist. myTest("test3"); edit: I think all of the code in Rails is being put into an IIFE which is purposely not loaded in the global scope. A: You could assign a function to the window object using something like window.myTest = function myTest(myVar) { alert(myVar); }; This should make it available in the global scope. A: Add id to your element: <a href="#" id="test-link">test</a> On your javascript file: document.querySelector("#test-link").addEventListener("click", function(e){ alert("test") })
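The edit at the end of the question is correct: the bundler wraps application.js in an IIFE, so a function declared inside it never reaches the global scope, which is why both the inline onclick and the console lookup fail; the window.myTest assignment in the first answer works because it explicitly publishes the name. A rough Python analogue of that scoping behavior (hypothetical names, purely illustrative):

```python
# The bundled application.js wraps everything in a function scope (the IIFE).
# Python analogue: names defined inside a function stay invisible outside it
# unless explicitly published to the global namespace.
def bundle():
    def my_test(my_var):                 # stays private to the wrapper scope
        return f"alert: {my_var}"

    # Analogue of `window.myTest = myTest` -- an explicit export:
    globals()["my_test_exported"] = my_test

bundle()

print("my_test" in globals())      # False: the inner name never escaped
print(my_test_exported("test2"))   # alert: test2 -- the exported name works
```

The second answer sidesteps the scope problem entirely by attaching the handler with addEventListener from inside the same bundle.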
How can I create and trigger a simple onclick JS function in Rails 7
How can I get an onclick function to trigger in a Rails 7 view? I added a simple function: // app/javascript/application.js function myTest(myVar){ alert(myVar); }; myTest("test1"); The alert for "test1" does trigger on page load. However, I want to trigger this in an HTML onclick attribute within my view: // my view <a href="#" onclick="myTest('test2');">test</a> However, the browser console says that the myTest function is not defined. I do see that the function ends up in my application.js file that the browser is loading. // compiled application.js from asset pipeline (() => { // ... all of the other rails JS here function myTest(myVar) { alert(myVar); } myTest("test1"); })(); Shouldn't I be able to trigger this directly from the console also? This also gives me the error that the function does not exist. myTest("test3"); edit: I think all of the code in Rails is being put into an IIFE which is purposely not loaded in the global scope.
[ "You could assign a function to the window object using something like\nwindow.myTest = function myTest(myVar) {\n alert(myVar);\n};\n\nThis should make it available in the global scope.\n", "Add id to your element:\n<a href=\"#\" id=\"test-link\">test</a>\n\nOn your javascript file:\ndocument.querySelector(\"#test-link\").addEventListener(\"click\", function(e){\n alert(\"test\")\n})\n\n" ]
[ 0, 0 ]
[]
[]
[ "iife", "javascript", "ruby_on_rails" ]
stackoverflow_0074607408_iife_javascript_ruby_on_rails.txt
Q: How to call a function every hour? I am trying to update information from a weather service on my page. The info should be updated every hour on the hour. How exactly do I go about calling a function on the hour every hour? I kind of had an idea but I'm not sure of how to actually refine it so it works... What I had in mind was something like creating an if statement, such as: (pseudo code) //get the mins of the current time var mins = datetime.mins(); if(mins == "00"){ function(); } A: You want to check out setInterval: https://developer.mozilla.org/en-US/docs/Web/API/Window.setInterval It's a little hard to tell what you're trying to call with your code, but it would be something in the form of: function callEveryHour() { setInterval(yourFunction, 1000 * 60 * 60); } If you want it every hour, try something like: var nextDate = new Date(); if (nextDate.getMinutes() === 0) { // You can check for seconds here too callEveryHour() } else { nextDate.setHours(nextDate.getHours() + 1); nextDate.setMinutes(0); nextDate.setSeconds(0);// I wouldn't do milliseconds too ;) var difference = nextDate - new Date(); setTimeout(callEveryHour, difference); } Now, this implementation checks the time once, sets the delay (or calls the function immediately), and then relies on setInterval to keep track after that. An alternative approach may be to poll the time every x many seconds/minutes, and fire it when .getMinutes() == 0 instead (similar to the first part of the if-statement), which may sacrifice (marginal) performance for (marginal) accuracy. Depending on your exact needs, I would play around with both solutions. 
A: Here is what should work (JSFiddle): function tick() { //get the mins of the current time var mins = new Date().getMinutes(); if (mins == "00") { alert('Do stuff'); } console.log('Tick ' + mins); } setInterval(tick, 1000); A: What you probably want is something like this: var now = new Date(); var delay = 60 * 60 * 1000; // 1 hour in msec var start = delay - (now.getMinutes() * 60 + now.getSeconds()) * 1000 - now.getMilliseconds(); setTimeout(function doSomething() { // do the operation // ... your code here... // schedule the next tick setTimeout(doSomething, delay); }, start); So basically the first time the user gets access, you need to know the delay in milliseconds to the next "hour". So, if the user accesses the page at 8:54 (with 56 seconds and 123 milliseconds), you have to schedule the first execution after around 5 minutes: after the first one is done, you can call it every "hour" (60 * 60 * 1000). A: EDIT: Oops, I didn't see the " o' clock" things, so I edit my answer : var last_execution = new Date().getTime(); function doSomething(force){ var now = new Date(); if (force || now.getMinutes() == 0) { last_execution = now.getTime(); // something // ... } setTimeout(function(){ doSomething(false); }, 1000); } // force the first time doSomething(true); A: // ... call your func now let intervalId; let timeoutId = setTimeout(() => { // ... call your func on end of current hour intervalId = setInterval(() => { // ... call your func at the end of each following hour }, 3600000); }, ((60 - moment().minutes()) * 60 * 1000) - (moment().second() * 1000)); A: Repeat at specific minute past the hour This counter is a little bit more versatile; it allows a task to be performed repeatedly, always at the same minute past the hour (e.g. 37 minutes past the hour), and this with up to millisecond precision. The precision of this timer is derived from its recursion. 
This prevents time lag over long periods. The % sign refers to the modulo operator. function minuteCount(minutesAfterHour) { const now = new Date(); const hours = now.getHours(); const minutes = now.getMinutes(); const seconds = now.getSeconds(); const milliseconds = now.getMilliseconds(); waitUntilNextMinute = setTimeout(() => minuteCount(minutesAfterHour), 60000 - seconds * 1000 - milliseconds); if(minutes % 60 === minutesAfterHour) { doSomethingHourly(); } } minuteCount(37); Finally, timers are best kept away from the main thread. They are best run from within a web worker, as explained here. This works perfectly with unfocused tabs in desktop browsers. However, dedicated web workers on Chrome for Android are put to sleep about 5 minutes after moving the main client to the background. A: Here is my pair of setIntervalWithDelay and clearIntervalWithDelay that one can use like this: let descriptor = setIntervalWithDelay(callback, 60 * 60 * 1000, nextHourDelay) And when you are done with it: clearIntervalWithDelay(descriptor) Here is my implementation of the functions: const setIntervalWithDelay = (callback, interval, delay = 0) => { let descriptor = {} descriptor.timeoutId = setTimeout(() => { if(!descriptor.timeoutId){ return } descriptor.timeoutId = null callback() descriptor.intervalId = setInterval(callback, interval) }, delay) return descriptor } export const clearIntervalWithDelay = (descriptor) => { if(!isObject(descriptor) || (!descriptor.timeoutId && !descriptor.intervalId)){ console.warn("clearIntervalWithDelay: Incorrect descriptor. Please pass an object returned by setIntervalWithDelay. 
Skipping this call.") return } if(descriptor.timeoutId){ clearTimeout(descriptor.timeoutId) descriptor.timeoutId = null console.log("clearIntervalWithDelay: stopped during delay.") } if(descriptor.intervalId){ clearInterval(descriptor.intervalId) descriptor.intervalId = null console.log("clearIntervalWithDelay: stopped during interval repeat.") } } One example of using dayjs to get the delay for the next hour: let nextHour = dayjs().second(0).millisecond(0).add(1, "hour") let nextHourDelay = nextHour.diff(dayjs())
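All of the answers above share the same first step: compute the delay from now to the top of the next hour, fire once after that delay, then repeat every 60 * 60 * 1000 ms. A sketch of just that delay computation, written in Python's datetime for clarity:

```python
from datetime import datetime, timedelta

def ms_until_next_hour(now: datetime) -> int:
    """Milliseconds from `now` to the next HH:00:00.000."""
    next_hour = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return round((next_hour - now).total_seconds() * 1000)

# The 8:54:56.123 example from the third answer: the first tick lands at 9:00,
# about five minutes away, then the fixed hourly interval takes over.
now = datetime(2022, 1, 1, 8, 54, 56, 123000)
print(ms_until_next_hour(now))  # 303877
```

Exactly on the hour the function returns a full 3,600,000 ms, i.e. it always schedules for the *next* hour boundary.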
How to call a function every hour?
I am trying to update information from a weather service on my page. The info should be updated every hour on the hour. How exactly do I go about calling a function on the hour every hour? I kind of had an idea but I'm not sure of how to actually refine it so it works... What I had in mind was something like creating an if statement, such as: (pseudo code) //get the mins of the current time var mins = datetime.mins(); if(mins == "00"){ function(); }
[ "You want to check out setInterval: https://developer.mozilla.org/en-US/docs/Web/API/Window.setInterval\nIt's a little hard to tell what you're trying to call with your code, but it would be something in the form of:\nfunction callEveryHour() {\n setInterval(yourFunction, 1000 * 60 * 60);\n}\n\nIf you want it every hour, try something like:\nvar nextDate = new Date();\nif (nextDate.getMinutes() === 0) { // You can check for seconds here too\n callEveryHour()\n} else {\n nextDate.setHours(nextDate.getHours() + 1);\n nextDate.setMinutes(0);\n nextDate.setSeconds(0);// I wouldn't do milliseconds too ;)\n\n var difference = nextDate - new Date();\n setTimeout(callEveryHour, difference);\n}\n\nNow, this implementation checks the time once, sets the delay (or calls the function immediately), and then relies on setInterval to keep track after that. An alternative approach may be to poll the time every x many seconds/minutes, and fire it .getMinutes() == 0 instead (similar to the first part of the if-statement), which may sacrifice (marginal) performance for (marginal) accuracy. Depending on your exact needs, I would play around with both solutions.\n", "Here is what should work (JSFiddle):\n\n\nfunction tick() {\r\n //get the mins of the current time\r\n var mins = new Date().getMinutes();\r\n if (mins == \"00\") {\r\n alert('Do stuff');\r\n }\r\n console.log('Tick ' + mins);\r\n}\r\n\r\nsetInterval(tick, 1000);\n\n\n\n", "What you probably want is something like that:\nvar now = new Date();\nvar delay = 60 * 60 * 1000; // 1 hour in msec\nvar start = delay - (now.getMinutes() * 60 + now.getSeconds()) * 1000 + now.getMilliseconds();\n\nsetTimeout(function doSomething() {\n // do the operation\n // ... your code here...\n\n // schedule the next tick\n setTimeout(doSomething, delay);\n}, start);\n\nSo basically the first time the user get the access, you need to know what is the delay in millisecond to the next \"hour\". 
So, if the user access to the page at 8:54 (with 56 seconds and 123 milliseconds), you have to schedule the first execution after around 3 minutes: after the first one is done, you can call it every \"hour\" (60 * 60 * 1000).\n", "EDIT: Oops, I didn't see the \" o' clock\" things, so I edit my answer :\nvar last_execution = new Date().getTime();\nfunction doSomething(force){\n var current_time = new Date().getTime();\n if (force || (current_time.getMinutes() == 0)\n {\n last_execution = current_time;\n // something\n // ...\n }\n setTimeout(doSomething(false), 1000);\n}\n// force the first time\ndoSomething(true); \n\n", "// ... call your func now\nlet intervalId;\nlet timeoutId = setTimeout(() => {\n // ... call your func on end of current hour\n intervalId = setInterval(() => {\n // ... call your func on end of each next hours\n }, 3600000);\n}, ((60 − moment().minutes()) × 60 × 1000) - (moment().second() * 1000));\n\n", "Repeat at specific minute past the hour\nThis counter is a little bit more versatile; it allows to perform a task repeatedly always at the same minute past the hour (e.g. 37 minutes past the hour), and this with up to millisecond precision.\nThe precision of this timer is derived from its recursion.\nAt every recursion, the millisecond time to the next minute gets recalculated. This prevents time lag over long periods.\nThe % sign refers to the modulo operator.\nfunction minuteCount(minutesAfterHour) {\n\n const now = new Date();\n const hours = now.getHours();\n const minutes = now.getMinutes();\n const seconds = now.getSeconds();\n const milliseconds = now.getMilliseconds();\n\n waitUntilNextMinute = setTimeout(minuteCount, 60000 - seconds * 1000 - milliseconds);\n\n if(minutes % 60 === minutesAfterHour) {\n doSomethingHourly();\n }\n\n}\n\nminuteCount(37);\n\nFinally, timers are best kept away from the main thread. 
They are best run from within a web worker, as explained here.\nThis works perfectly with unfocused tabs in desktop browsers.\nHowever, dedicated web workers on Chrome for Android are put to sleep about 5 minutes after moving the main client to the background.\n", "Here is my pair of setIntervalWithDelay and clearIntervalWithDelay that one can use like this:\nlet descriptor = setIntervalWithDelay(callback, 60 * 60 * 1000, nextHourDelay)\n\nAnd when you are done with it:\nclearIntervalWithDelay(descriptor)\n\nHere is my implementation of the functions:\nconst setIntervalWithDelay = (callback, interval, delay = 0) => {\n let descriptor = {}\n descriptor.timeoutId = setTimeout(() => {\n if(!descriptor.timeoutId){\n return\n }\n descriptor.timeoutId = null\n callback()\n descriptor.intervalId = setInterval(callback, interval)\n }, delay)\n return descriptor\n}\n\nexport const clearIntervalWithDelay = (descriptor) => {\n if(!isObject(descriptor) || (!descriptor.timeoutId && !descriptor.intervalId)){\n console.warn(\"clearIntervalWithDelay: Incorrect descriptor. Please pass an object returned by setIntervalWithDelay. Skipping this call.\")\n return\n }\n if(descriptor.timeoutId){\n clearTimeout(descriptor.timeoutId)\n descriptor.timeoutId = null\n console.log(\"clearIntervalWithDelay: stopped during delay.\")\n }\n if(descriptor.intervalId){\n clearInterval(descriptor.intervalId)\n descriptor.intervalId = null\n console.log(\"clearIntervalWithDelay: stopped during interval repeat.\")\n }\n}\n\nOne example of using dayjs to get the delay for the next hour:\nlet nextHour = dayjs().second(0).millisecond(0).add(1, \"hour\")\nlet nextHourDelay = nextHour.diff(dayjs())\n\n" ]
[ 30, 14, 10, 1, 0, 0, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0019847412_javascript.txt
Q: R preprocessing error: Error in predict.BoxCoxTrans(bc[[i]], x[, i]) : newdata should be a numeric vector xtestdata is an R data frame. It contains solely numeric data. There is no missing data. ncol(xtestdata) # 45 nrow(xtestdata) # 325 # str(xtestdata) all numeric I seek to preprocess the data. This code for preprocessing of data yields an error: testtrans <- preProcess(xtestdata, method = c("BoxCox", "center", "scale")) The error is: Error in predict.BoxCoxTrans(bc[[i]], x[, i]) : newdata should be a numeric vector Thank you for any help you can offer. A: The data set named xtestdata was a tibble. When I got it out of tibble mode, all was well with the preProcess command. To "un-tibble" a data set, do this: xtestdata <- as.data.frame(xtestdata)
R preprocessing error: Error in predict.BoxCoxTrans(bc[[i]], x[, i]) : newdata should be a numeric vector
xtestdata is an R data frame. It contains solely numeric data. There is no missing data. ncol(xtestdata) # 45 nrow(xtestdata) # 325 # str(xtestdata) all numeric I seek to preprocess the data. This code for preprocessing of data yields an error: testtrans <- preProcess(xtestdata, method = c("BoxCox", "center", "scale")) The error is: Error in predict.BoxCoxTrans(bc[[i]], x[, i]) : newdata should be a numeric vector Thank you for any help you can offer.
[ "The data set named xtestdata was a tibble.\nWhen I got it out of tibble mode, all was well with the preProcess command.\nTo \"un-tibble\" a data set, do this:\nxtestdata <- as.data.frame(xtestdata)\n" ]
[ 0 ]
[]
[]
[ "data_preprocessing", "machine_learning", "transformation" ]
stackoverflow_0074648482_data_preprocessing_machine_learning_transformation.txt
Q: Index Enumeration doesn't seem to be operating properly. Where am I messing up? I'm trying to figure out how to enumerate an index properly into specified cells on an Excel spreadsheet using Python. Following a tutorial video, I thought I had it figured out, but it doesn't seem to be pulling each index value and passing it to each individual cell as intended. Instead, it's taking only the first entry and applying it to all specified cells and ignoring the second and third entry. Can someone help me understand where I'm messing up on this? Thank you kindly. Code: from openpyxl import Workbook, load_workbook from openpyxl.utils import get_column_letter wb = load_workbook('PythonNetwork.xlsx') ws = wb['Systems'] print(ws) # Shows each designated cell as well as its cell value. for row in ws['A2':'A4']: for cell in row: print(cell, cell.value) new_data = ["192.168.1.4", "192.168.1.5", "192.168.1.6"] # Enters new data from created index. for row in ws['A2':'A4']: for index, cell in enumerate(row): cell.value = new_data[index] # Shows each designated cell value for comparison to previously printed information. for row in ws['A2':'A4']: for cell in row: print(cell.value) Output: <Worksheet "Systems"> <Cell 'Systems'.A2> 192.168.1.1 <Cell 'Systems'.A3> 192.168.1.2 <Cell 'Systems'.A4> 192.168.1.3 192.168.1.4 192.168.1.4 192.168.1.4 I tried changing the values in the index from having quotes to simple integers without quotes to see if it made any difference. It does not. For example, I replaced each IP address in the index with 10, 20, etc as shown below: new_data = [10, 20, 30] The output was the same result as each cell reported back as 10 10 10 instead of 10 20 30. A: Unfortunately, this manner of accessing a range of cells is always a bit clumsy. If you look at what accessing the range returns: >>> ws['A2':'A4'] ((<Cell 'Systems'.A2>,), (<Cell 'Systems'.A3>,), (<Cell 'Systems'.A4>,)) it's a tuple of tuples, where each inner tuple is a single cell. 
So, in your for loop, what you're calling a row is a tuple of a single cell. It's not a row exactly, but your code to print those cells works anyway. When you try to change the values, though, the enumeration index is always 0, since each tuple has a single cell, so you're always assigning the value of new_data[0]. Instead, you can do something like this: for index, cell in enumerate(ws['A2':'A4']): cell[0].value = new_data[index] Each cell is actually a tuple of a cell, so you have to reference the 0th element.
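The behavior is easy to reproduce without a workbook: a single-column range comes back as a tuple of 1-tuples, so enumerating an inner 1-tuple always yields index 0. A minimal stand-in (a hypothetical Cell class in place of real openpyxl cells):

```python
class Cell:
    """Minimal stand-in for an openpyxl cell."""
    def __init__(self, value=None):
        self.value = value

# ws['A2':'A4'] on a single column returns ((A2,), (A3,), (A4,))
rng = ((Cell("192.168.1.1"),), (Cell("192.168.1.2"),), (Cell("192.168.1.3"),))
new_data = ["192.168.1.4", "192.168.1.5", "192.168.1.6"]

# Buggy pattern from the question: the inner loop enumerates a 1-tuple,
# so `index` is always 0 and every cell gets new_data[0].

# Fixed pattern from the answer: enumerate the OUTER tuple instead.
for index, row in enumerate(rng):
    row[0].value = new_data[index]

print([row[0].value for row in rng])
# ['192.168.1.4', '192.168.1.5', '192.168.1.6']
```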
Index Enumeration doesn't seem to be operating properly. Where am I messing up?
I'm trying to figure out how to enumerate an index properly into specified cells on an Excel spreadsheet using Python. Following a tutorial video, I thought I had it figured out, but it doesn't seem to be pulling each index value and parsing it to each individual cell as intended. Instead, it's taking only the first entry and applying it to all specified cells and ignoring the second and third entry. Can someone help me understand where I'm messing up on this? Thank you kindly. Code: from openpyxl import Workbook, load_workbook from openpyxl.utils import get_column_letter wb = load_workbook('PythonNetwork.xlsx') ws = wb['Systems'] print(ws) # Shows each designated cell as well as its cell value. for row in ws['A2':'A4']: for cell in row: print(cell, cell.value) new_data = ["192.168.1.4", "192.168.1.5", "192.168.1.6"] # Enters new data from created index. for row in ws['A2':'A4']: for index, cell in enumerate(row): cell.value = new_data[index] # Shows each designated cell value for comparison to previously printed information. for row in ws['A2':'A4']: for cell in row: print(cell.value) Output: <Worksheet "Systems"> <Cell 'Systems'.A2> 192.168.1.1 <Cell 'Systems'.A3> 192.168.1.2 <Cell 'Systems'.A4> 192.168.1.3 192.168.1.4 192.168.1.4 192.168.1.4 I tried changing the values in the index from having quotes to simple integers without quotes to see if it made any difference. It does not. For example I replaced each IP address in the index with 10, 20, etc as shown below: new_data = [10, 20, 30] The output was the same result as each cell reported back as 10 10 10 instead of 10 20 30.
[ "Unfortunately, this manner of accessing a range of cells is always a bit clumsy. If you look at what accessing the range returns:\n>>> ws['A2':'A4']\n((<Cell 'Systems'.A2>,), (<Cell 'Systems'.A3>,), (<Cell 'Systems'.A4>,))\n\nit's a tuple of tuples, where each inner tuple is a single cell. So, in your for loop, what you're calling a row is a tuple of a single cell. It's not a row exactly, but your code to print those cells works anyway.\nWhen you try to change the values, though, the enumeration index is always 0, since each tuple has a single cell, so you're always assigning the value of new_data[0].\nInstead, you can do something like this:\nfor index, cell in enumerate(ws['A2':'A4']):\n cell[0].value = new_data[index]\n\nEach cell is actually a tuple of a cell, so you have to reference the 0th element.\n" ]
[ 0 ]
[]
[]
[ "enumeration", "excel", "indexing", "openpyxl", "python" ]
stackoverflow_0074659526_enumeration_excel_indexing_openpyxl_python.txt
Q: Chart.js script does not update I'm using Chart.js to make the chart appear on the website, but it doesn't update the information from the API. Maybe I have imported it incorrectly. I use the same script imports in index.html and in the js file of the script itself. I don't understand what could be the problem here. https://codepen.io/jade1411/pen/LYrXPrX function removeElementsByClass(className) { const elements = document.getElementsByClassName(className); while(elements.length > 0) { elements[0].parentNode.removeChild(elements[0]); } } function draw_graph(graph_data, header) { var accepted = [], rejected = [], timestamps = []; if(typeof header === 'undefined') header = ''; //console.log(graph_data); for(var timestamp in graph_data) { // console.log("Data for " + timestamp + " a " + graph_data[timestamp]['a'] + " r " + graph_data[timestamp]['r']); accepted.push(Math.round(graph_data[timestamp]['a'])); // accepted_hashes // rejected.push(Math.round(graph_data[timestamp]['r'])); // rejected_hashes timestamps.push(timestamp); } // console.log("Data for " + timestamp + " a " + graph_data[timestamp]['a'] + " r " + graph_data[timestamp]['r']); Chart.defaults.global.defaultFontFamily = 'Verdana'; Chart.defaults.global.defaultFontStyle = 'normal'; Chart.defaults.global.defaultFontColor = '#0B6FAB'; Chart.defaults.global.defaultFontSize = 11; Chart.defaults.global.legend = false; removeElementsByClass('chartjs-hidden-iframe'); // console.log(Canvas); new Chart(document.getElementById("global-speed-chart"), { type:'line', data: { labels: timestamps, datasets: [{ // label: "accepted", data: accepted, borderColor: '#0B6FAB', backgroundColor: '#CBFFFF', borderWidth: 2, fill: false, pointRadius: 0 } // ,{ //// label: "accepted", // data: rejected, // borderColor: "#3e95cd", // backgroundColor: "#000000", // borderWidth: 2, // fill: false // } ] }, options: { title: { display: true, text: header, // fontStyle: 'bold', // fontColor: '#0B6FAB', fontSize: 16 }, legend: { labels: { 
fontColor: '#C0C0C0' } }, scales: { yAxes: [{ position: 'right', gridLines: { drawBorder: false //, // display:false }, //offset: true, // display: false, ticks: { beginAtZero: true, // autoSkip: true, // autoSkipPadding: 100, maxTicksLimit: 3, fontStyle: 'bold', fontColor: '#C0C0C0', fontSize: 9, userCallback: function(label, index, labels) { if(label !== 0) { return label; } }, // labelOffset: 50 // max: 2000, // min: 0 // stepSize: 100 // callback: function(value, index, values) {return '$' + value;} }, scaleLabel: { // display: false, // labelString: '1k = 1000' }, pointLabels: { // display: false } }], xAxes: [{ // display: false, gridLines: { drawBorder: false, display:false // lineWidth: 0, // zeroLineWidth: 0 }, ticks: { beginAtZero: true, autoSkip: true, // autoSkipPadding: 0, maxTicksLimit: 6, // padding: 100, fontColor: '#C0C0C0', fontSize: 9, maxRotation: 0, minRotation: 0 } }] } }}); }; A: draw_graph is not defined since the function definition is outside of the script tag in your HTML. If you copy and paste the code in your JS from codepen, and paste it into the script tag of your HTML, you should see it working.
Chart.js script does not update
Im using Chart.js to to make the chart appear on the website, it doesn't update the information from the api. Maybe I have imported it incorrectly. I use the same script imports in index.html and in the js file of the script itself I don't understand what could be the problem here. https://codepen.io/jade1411/pen/LYrXPrX function removeElementsByClass(className) { const elements = document.getElementsByClassName(className); while(elements.length > 0) { elements[0].parentNode.removeChild(elements[0]); } } function draw_graph(graph_data, header) { var accepted = [], rejected = [], timestamps = []; if(typeof header === 'undefined') header = ''; //console.log(graph_data); for(var timestamp in graph_data) { // console.log("Data for " + timestamp + " a " + graph_data[timestamp]['a'] + " r " + graph_data[timestamp]['r']); accepted.push(Math.round(graph_data[timestamp]['a'])); // accepted_hashes // rejected.push(Math.round(graph_data[timestamp]['r'])); // rejected_hashes timestamps.push(timestamp); } // console.log("Data for " + timestamp + " a " + graph_data[timestamp]['a'] + " r " + graph_data[timestamp]['r']); Chart.defaults.global.defaultFontFamily = 'Verdana'; Chart.defaults.global.defaultFontStyle = 'normal'; Chart.defaults.global.defaultFontColor = '#0B6FAB'; Chart.defaults.global.defaultFontSize = 11; Chart.defaults.global.legend = false; removeElementsByClass('chartjs-hidden-iframe'); // console.log(Canvas); new Chart(document.getElementById("global-speed-chart"), { type:'line', data: { labels: timestamps, datasets: [{ // label: "accepted", data: accepted, borderColor: '#0B6FAB', backgroundColor: '#CBFFFF', borderWidth: 2, fill: false, pointRadius: 0 } // ,{ //// label: "accepted", // data: rejected, // borderColor: "#3e95cd", // backgroundColor: "#000000", // borderWidth: 2, // fill: false // } ] }, options: { title: { display: true, text: header, // fontStyle: 'bold', // fontColor: '#0B6FAB', fontSize: 16 }, legend: { labels: { fontColor: '#C0C0C0' } }, scales: 
{ yAxes: [{ position: 'right', gridLines: { drawBorder: false //, // display:false }, //offset: true, // display: false, ticks: { beginAtZero: true, // autoSkip: true, // autoSkipPadding: 100, maxTicksLimit: 3, fontStyle: 'bold', fontColor: '#C0C0C0', fontSize: 9, userCallback: function(label, index, labels) { if(label !== 0) { return label; } }, // labelOffset: 50 // max: 2000, // min: 0 // stepSize: 100 // callback: function(value, index, values) {return '$' + value;} }, scaleLabel: { // display: false, // labelString: '1k = 1000' }, pointLabels: { // display: false } }], xAxes: [{ // display: false, gridLines: { drawBorder: false, display:false // lineWidth: 0, // zeroLineWidth: 0 }, ticks: { beginAtZero: true, autoSkip: true, // autoSkipPadding: 0, maxTicksLimit: 6, // padding: 100, fontColor: '#C0C0C0', fontSize: 9, maxRotation: 0, minRotation: 0 } }] } }}); };
[ "draw_graph is not defined since the function definition is outside of the script tag in your HTML. If you copy and paste the code in your JS from codepen, and paste it into the script tag of your HTML, you should see it working.\n" ]
[ 0 ]
[]
[]
[ "charts", "html", "javascript" ]
stackoverflow_0074659722_charts_html_javascript.txt
Q: I am trying to take the input of dimensions of the matrix from the user which takes only even numbers as dimensions I am trying to take the input of dimensions of the matrix from the user which takes only even numbers as dimensions and if an odd number is entered it throws an exception. But my try-catch block is not working as expected as even when I enter even numbers it still shows an exception. this is my code import java.util.Scanner; class Invalidmatrixexception extends Exception { void exception() { System.out.println("Exception Occured"); System.out.println("Please enter even numbers"); } } public class matrixop { public static void main(String[] args) { try { Scanner sc = new Scanner(System.in); System.out.println("Enter the dimensions of matrix"); int m = sc.nextInt(); int n = sc.nextInt(); if (m / 2 != 0 || n / 2 != 0) { throw new Invalidmatrixexception(); } else { int arr[][] = new int[m][n]; System.out.println("Enter the elements of the matrix: "); for (int i = 0; i < m; i++) { for (int j = 0; j < n; j++) { arr[i][j] = sc.nextInt(); } } } // System.out.println(a); } catch (Invalidmatrixexception e) { System.out.println("Exception Occured"); } } } A: You want the remainder of the division, not the result of the division if (m / 2 != 0 || n / 2 != 0) { throw new Invalidmatrixexception(); } must be if (m % 2 != 0 || n % 2 != 0) { throw new Invalidmatrixexception(); }
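The fix comes down to the difference between / (integer division, where m / 2 != 0 is true for any m >= 2) and % (remainder, where m % 2 != 0 is true exactly when m is odd). A minimal, self-contained sketch of the corrected parity check; ParityCheck and bothEven are hypothetical names for illustration:

```java
public class ParityCheck {

    // True only when both dimensions are even: % gives the remainder,
    // which is 0 for even numbers and 1 for odd positive numbers.
    static boolean bothEven(int m, int n) {
        return m % 2 == 0 && n % 2 == 0;
    }

    public static void main(String[] args) {
        System.out.println(bothEven(4, 6)); // true
        System.out.println(bothEven(4, 5)); // false -> would throw Invalidmatrixexception
        System.out.println(bothEven(3, 6)); // false -> would throw Invalidmatrixexception
    }
}
```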
I am trying to take the input of dimensions of the matrix from the user which takes only even numbers as dimensions
I am trying to take the input of dimensions of the matrix from the user which takes only even numbers as dimensions and if an odd number is entered it throws an exception. But my try-catch block is not working as expected as even when I enter even numbers it still shows an exception. this is my code import java.util.Scanner; class Invalidmatrixexception extends Exception { void exception() { System.out.println("Exception Occured"); System.out.println("Please enter even numbers"); } } public class matrixop { public static void main(String[] args) { try { Scanner sc = new Scanner(System.in); System.out.println("Enter the dimensions of matrix"); int m = sc.nextInt(); int n = sc.nextInt(); if (m / 2 != 0 || n / 2 != 0) { throw new Invalidmatrixexception(); } else { int arr[][] = new int[m][n]; System.out.println("Enter the elements of the matrix: "); for (int i = 0; i < m; i++) { for (int j = 0; j < n; j++) { arr[i][j] = sc.nextInt(); } } } // System.out.println(a); } catch (Invalidmatrixexception e) { System.out.println("Exception Occured"); } } }
[ "You want the remainder of the division, not the result of the division\nif (m / 2 != 0 || n / 2 != 0) {\n throw new Invalidmatrixexception();\n}\n\nmust be\nif (m % 2 != 0 || n % 2 != 0) {\n throw new Invalidmatrixexception();\n}\n\n" ]
[ 0 ]
[]
[]
[ "java" ]
stackoverflow_0074659746_java.txt
Q: How can I troubleshoot an error: lib/graphql has no exported mutation - for a mutation I have defined and which appears in graphql.tsx I'm trying to figure out what I need to do in order to have lib/graphql recognise the mutations I have made. I have an issue.tsx (which is a form). It imports: import { IssueInput, useUpdateIssueMutation, useAllIssuesQuery, useCreateIssueMutation, useDeleteIssueMutation, Issue as IssueGQLType, } from "lib/graphql" Other than IssueInput and Issue, I'm getting errors in my terminal that say these queries and mutations are not exported members. However when I try to load the issue page in local host, I get an error that says: error - GraphQLError [Object]: Syntax Error: Expected Name, found . It points to the line where Issue is imported. I made all of these queries and mutations in my resolver as follows: import { Arg, Mutation, Query, Resolver } from "type-graphql" import { Issue } from "./issue.model" import { IssueService } from "./issue.service" import { IssueInput } from "./inputs/create.input" import { Inject, Service } from "typedi" import { UseAuth } from "../shared/middleware/UseAuth" import { Role } from "@generated" @Service() @Resolver(() => Issue) export default class IssueResolver { @Inject(() => IssueService) issueService: IssueService @Query(() => [Issue]) async allIssues() { return await this.issueService.getAllIssues() } @Query(() => [Issue]) async futureRiskIssues() { return await this.issueService.getFutureRiskIssues() } @Query(() => Issue) async issue(@Arg("id") id: string) { return await this.issueService.getIssue(id) } @UseAuth([Role.ADMIN]) @Mutation(() => Issue) async createIssue(@Arg("data") data: IssueInput) { return await this.issueService.createIssue(data) } @UseAuth([Role.ADMIN]) @Mutation(() => Issue) async deleteIssue(@Arg("id") id: string) { return await this.issueService.deleteIssue(id) } @UseAuth([Role.ADMIN]) @Mutation(() => Issue) async updateIssue(@Arg("id") id: string, @Arg("data") data: 
IssueInput) { return await this.issueService.updateIssue(id, data) } } I can also see from my graphql.tsx file, that these functions are recognised as follows: export type Mutation = { __typename?: 'Mutation'; createIssue: Issue; createUser: User; deleteIssue: Issue; destroyAccount: Scalars['Boolean']; forgotPassword: Scalars['Boolean']; getBulkSignedS3UrlForPut?: Maybe<Array<SignedResponse>>; getSignedS3UrlForPut?: Maybe<SignedResponse>; login: AuthResponse; register: AuthResponse; resetPassword: Scalars['Boolean']; updateIssue: Issue; updateMe: User; }; export type MutationCreateUserArgs = { data: UserCreateInput; }; export type MutationDeleteIssueArgs = { id: Scalars['String']; }; export type MutationUpdateIssueArgs = { data: IssueInput; id: Scalars['String']; }; I have run the codegen several times and can't think of anything else to try to force these mutations and queries to be recognised. Can anyone see a way to trouble shoot this? My codegen.yml has: schema: http://localhost:5555/graphql documents: - "src/components/**/*.{ts,tsx}" - "src/lib/**/*.{ts,tsx}" - "src/pages/**/*.{ts,tsx}" overwrite: true generates: src/lib/graphql.tsx: config: withMutationFn: false addDocBlocks: false scalars: DateTime: string plugins: - add: content: "/* eslint-disable */" - typescript - typescript-operations - typescript-react-apollo When I look at the mutations available on the authentication objects (that are provided with the [boilerplate app][1] that I am trying to use), I can see that there are mutations and queries that are differently represented in the lib/graphql file. 
I just can't figure out how to force the ones I write to be included in this way: export function useLoginMutation(baseOptions?: Apollo.MutationHookOptions<LoginMutation, LoginMutationVariables>) { const options = {...defaultOptions, ...baseOptions} return Apollo.useMutation<LoginMutation, LoginMutationVariables>(LoginDocument, options); } Instead, I get all of these things, but none of them look like the above and I can't figure out which one to import into my front end form so that I can make an entry in the database. None of them look like the queries or mutations I defined in my resolver export type IssueInput = { description: Scalars['String']; issueGroup: Scalars['String']; title: Scalars['String']; }; export type IssueListRelationFilter = { every?: InputMaybe<IssueWhereInput>; none?: InputMaybe<IssueWhereInput>; some?: InputMaybe<IssueWhereInput>; }; export type IssueRelationFilter = { is?: InputMaybe<IssueWhereInput>; isNot?: InputMaybe<IssueWhereInput>; }; export type IssueWhereInput = { AND?: InputMaybe<Array<IssueWhereInput>>; NOT?: InputMaybe<Array<IssueWhereInput>>; OR?: InputMaybe<Array<IssueWhereInput>>; createdAt?: InputMaybe<DateTimeFilter>; description?: InputMaybe<StringFilter>; id?: InputMaybe<UuidFilter>; issueGroup?: InputMaybe<IssueGroupRelationFilter>; issueGroupId?: InputMaybe<UuidFilter>; subscribers?: InputMaybe<UserIssueListRelationFilter>; title?: InputMaybe<StringFilter>; updatedAt?: InputMaybe<DateTimeFilter>; }; export type IssueWhereUniqueInput = { id?: InputMaybe<Scalars['String']>; }; I do have this record in my graphql.tsx file: export type Mutation = { __typename?: 'Mutation'; createIssue: Issue; createIssueGroup: IssueGroup; createUser: User; deleteIssue: Issue; deleteIssueGroup: IssueGroup; destroyAccount: Scalars['Boolean']; forgotPassword: Scalars['Boolean']; getBulkSignedS3UrlForPut?: Maybe<Array<SignedResponse>>; getSignedS3UrlForPut?: Maybe<SignedResponse>; login: AuthResponse; register: AuthResponse; resetPassword: 
Scalars['Boolean']; updateIssue: Issue; updateIssueGroup: IssueGroup; updateMe: User; }; but I can't say: createIssueMutation as an import in my issue.tsx where I'm trying to make a form to use to post to the database. [1]: https://github.com/NoQuarterTeam/boilerplate In the issue form, I get an error that says: "resource": "/.../src/pages/issue.tsx", "owner": "typescript", "code": "2305", "severity": 8, "message": "Module '"lib/graphql"' has no exported member 'useCreateIssueMutation'.", "source": "ts", "startLineNumber": 7, "startColumn": 27, "endLineNumber": 7, "endColumn": 54 }] and the same thing for the query A: check your codegen.yml overwrite: true schema: "http://localhost:4000/graphql" documents: "src/graphql/**/*.graphql" generates: src/generated/graphql.tsx: plugins: - "typescript" - "typescript-operations" - "typescript-react-apollo" ./graphql.schema.json: plugins: - "introspection" or try something like @Resolver(Issue) A: It seems like you are not generating the hooks that you are trying to import. You can update your codegen.yml file to add the generated hooks: schema: http://localhost:5555/graphql documents: - "src/components/**/*.{ts,tsx}" - "src/lib/**/*.{ts,tsx}" - "src/pages/**/*.{ts,tsx}" overwrite: true generates: src/lib/graphql.tsx: config: withMutationFn: false addDocBlocks: false scalars: DateTime: string withHooks: true # <--------------------- this line plugins: - add: content: "/* eslint-disable */" - typescript - typescript-operations - typescript-react-apollo A: If you are receiving the error "lib/graphql has no exported mutation" for a mutation that you have defined in your GraphQL schema, there are a few possible causes: The mutation is not correctly defined in your schema. Make sure that the mutation is defined using the mutation keyword and that it is properly named and formatted. The mutation is not being imported correctly in your code. 
Make sure that you are importing the mutation using the correct syntax and that it is being imported from the correct file. The mutation is not being added to the exports object in your lib/graphql.ts file. This file defines the list of mutations, queries, and subscriptions that are available to your code. Make sure that the mutation is being added to the exports object in this file. There is a type error in your code. This error can sometimes occur if the mutation is being called with the wrong type of arguments. Make sure that the arguments passed to the mutation are of the correct type and that they match the schema definition. To troubleshoot this error, you can try the following steps: Double-check the schema definition for the mutation and make sure that it is properly defined and formatted. Check the import statement for the mutation in your code and make sure that it is correct and is being imported from the correct file. Check the lib/graphql.ts file and make sure that the mutation is being added to the exports object. Check your code for type errors and make sure that the arguments passed to the mutation are of the correct type and match the schema definition. If you are still unable to resolve the error, you can try to isolate the problem by creating a minimal reproduction of the error using a smaller codebase. This can help you identify the cause of the error and find a solution more easily
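One point worth stressing: the typescript-react-apollo plugin only generates use...Mutation/use...Query hooks for operation documents it finds under the documents: globs — the schema alone yields types like Mutation and IssueInput, which matches what the question describes. A hedged sketch of such a document (the file name and selected fields are assumptions; adjust them to the actual Issue type):

```graphql
# e.g. src/lib/graphql/issue.graphql, or the same text inside a gql`` tag
# in one of the files matched by the documents: globs in codegen.yml
mutation CreateIssue($data: IssueInput!) {
  createIssue(data: $data) {
    id
    title
    description
  }
}

query AllIssues {
  allIssues {
    id
    title
  }
}
```

After re-running codegen with a document like this in scope, useCreateIssueMutation and useAllIssuesQuery should appear as exports of lib/graphql.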
How can I troubleshoot an error: lib/graphql has no exported mutation - for a mutation I have defined and which appears in graphql.tsx
I'm trying to figure out what I need to do in order to have lib/graphql recognise the mutations I have made. I have an issue.tsx (which is a form). It imports: import { IssueInput, useUpdateIssueMutation, useAllIssuesQuery, useCreateIssueMutation, useDeleteIssueMutation, Issue as IssueGQLType, } from "lib/graphql" Other than IssueInput and Issue, I'm getting errors in my terminal that say these queries and mutations are not exported members. However when I try to load the issue page in local host, I get an error that says: error - GraphQLError [Object]: Syntax Error: Expected Name, found . It points to the line where Issue is imported. I made all of these queries and mutations in my resolver as follows: import { Arg, Mutation, Query, Resolver } from "type-graphql" import { Issue } from "./issue.model" import { IssueService } from "./issue.service" import { IssueInput } from "./inputs/create.input" import { Inject, Service } from "typedi" import { UseAuth } from "../shared/middleware/UseAuth" import { Role } from "@generated" @Service() @Resolver(() => Issue) export default class IssueResolver { @Inject(() => IssueService) issueService: IssueService @Query(() => [Issue]) async allIssues() { return await this.issueService.getAllIssues() } @Query(() => [Issue]) async futureRiskIssues() { return await this.issueService.getFutureRiskIssues() } @Query(() => Issue) async issue(@Arg("id") id: string) { return await this.issueService.getIssue(id) } @UseAuth([Role.ADMIN]) @Mutation(() => Issue) async createIssue(@Arg("data") data: IssueInput) { return await this.issueService.createIssue(data) } @UseAuth([Role.ADMIN]) @Mutation(() => Issue) async deleteIssue(@Arg("id") id: string) { return await this.issueService.deleteIssue(id) } @UseAuth([Role.ADMIN]) @Mutation(() => Issue) async updateIssue(@Arg("id") id: string, @Arg("data") data: IssueInput) { return await this.issueService.updateIssue(id, data) } } I can also see from my graphql.tsx file, that these functions are 
recognised as follows: export type Mutation = { __typename?: 'Mutation'; createIssue: Issue; createUser: User; deleteIssue: Issue; destroyAccount: Scalars['Boolean']; forgotPassword: Scalars['Boolean']; getBulkSignedS3UrlForPut?: Maybe<Array<SignedResponse>>; getSignedS3UrlForPut?: Maybe<SignedResponse>; login: AuthResponse; register: AuthResponse; resetPassword: Scalars['Boolean']; updateIssue: Issue; updateMe: User; }; export type MutationCreateUserArgs = { data: UserCreateInput; }; export type MutationDeleteIssueArgs = { id: Scalars['String']; }; export type MutationUpdateIssueArgs = { data: IssueInput; id: Scalars['String']; }; I have run the codegen several times and can't think of anything else to try to force these mutations and queries to be recognised. Can anyone see a way to trouble shoot this? My codegen.yml has: schema: http://localhost:5555/graphql documents: - "src/components/**/*.{ts,tsx}" - "src/lib/**/*.{ts,tsx}" - "src/pages/**/*.{ts,tsx}" overwrite: true generates: src/lib/graphql.tsx: config: withMutationFn: false addDocBlocks: false scalars: DateTime: string plugins: - add: content: "/* eslint-disable */" - typescript - typescript-operations - typescript-react-apollo When I look at the mutations available on the authentication objects (that are provided with the [boilerplate app][1] that I am trying to use), I can see that there are mutations and queries that are differently represented in the lib/graphql file. I just can't figure out how to force the ones I write to be included in this way: export function useLoginMutation(baseOptions?: Apollo.MutationHookOptions<LoginMutation, LoginMutationVariables>) { const options = {...defaultOptions, ...baseOptions} return Apollo.useMutation<LoginMutation, LoginMutationVariables>(LoginDocument, options); } Instead, I get all of these things, but none of them look like the above and I can't figure out which one to import into my front end form so that I can make an entry in the database. 
None of them look like the queries or mutations I defined in my resolver export type IssueInput = { description: Scalars['String']; issueGroup: Scalars['String']; title: Scalars['String']; }; export type IssueListRelationFilter = { every?: InputMaybe<IssueWhereInput>; none?: InputMaybe<IssueWhereInput>; some?: InputMaybe<IssueWhereInput>; }; export type IssueRelationFilter = { is?: InputMaybe<IssueWhereInput>; isNot?: InputMaybe<IssueWhereInput>; }; export type IssueWhereInput = { AND?: InputMaybe<Array<IssueWhereInput>>; NOT?: InputMaybe<Array<IssueWhereInput>>; OR?: InputMaybe<Array<IssueWhereInput>>; createdAt?: InputMaybe<DateTimeFilter>; description?: InputMaybe<StringFilter>; id?: InputMaybe<UuidFilter>; issueGroup?: InputMaybe<IssueGroupRelationFilter>; issueGroupId?: InputMaybe<UuidFilter>; subscribers?: InputMaybe<UserIssueListRelationFilter>; title?: InputMaybe<StringFilter>; updatedAt?: InputMaybe<DateTimeFilter>; }; export type IssueWhereUniqueInput = { id?: InputMaybe<Scalars['String']>; }; I do have this record in my graphql.tsx file: export type Mutation = { __typename?: 'Mutation'; createIssue: Issue; createIssueGroup: IssueGroup; createUser: User; deleteIssue: Issue; deleteIssueGroup: IssueGroup; destroyAccount: Scalars['Boolean']; forgotPassword: Scalars['Boolean']; getBulkSignedS3UrlForPut?: Maybe<Array<SignedResponse>>; getSignedS3UrlForPut?: Maybe<SignedResponse>; login: AuthResponse; register: AuthResponse; resetPassword: Scalars['Boolean']; updateIssue: Issue; updateIssueGroup: IssueGroup; updateMe: User; }; but I can't say: createIssueMutation as an import in my issue.tsx where I'm trying to make a form to use to post to the database. 
[1]: https://github.com/NoQuarterTeam/boilerplate In the issue form, I get an error that says: "resource": "/.../src/pages/issue.tsx", "owner": "typescript", "code": "2305", "severity": 8, "message": "Module '"lib/graphql"' has no exported member 'useCreateIssueMutation'.", "source": "ts", "startLineNumber": 7, "startColumn": 27, "endLineNumber": 7, "endColumn": 54 }] and the same thing for the query
[ "check your codegen.yml\noverwrite: true\nschema: \"http://localhost:4000/graphql\"\ndocuments: \"src/graphql/**/*.graphql\"\ngenerates:\n src/generated/graphql.tsx:\n plugins:\n - \"typescript\"\n - \"typescript-operations\"\n - \"typescript-react-apollo\"\n ./graphql.schema.json:\n plugins:\n - \"introspection\"\n\n\nor try something like @Resolver(Issue)\n", "It seems like you are not generating the hooks that you are trying to import.\nYou can update your codegen.yml file to add the generated hooks:\nschema: http://localhost:5555/graphql\ndocuments:\n - \"src/components/**/*.{ts,tsx}\"\n - \"src/lib/**/*.{ts,tsx}\"\n - \"src/pages/**/*.{ts,tsx}\"\noverwrite: true\ngenerates:\n src/lib/graphql.tsx:\n config:\n withMutationFn: false\n addDocBlocks: false\n scalars:\n DateTime: string\n withHooks: true # <--------------------- this line\n plugins:\n - add:\n content: \"/* eslint-disable */\"\n - typescript\n - typescript-operations\n - typescript-react-apollo\n\n", "If you are receiving the error \"lib/graphql has no exported mutation\" for a mutation that you have defined in your GraphQL schema, there are a few possible causes:\nThe mutation is not correctly defined in your schema. Make sure that the mutation is defined using the mutation keyword and that it is properly named and formatted.\nThe mutation is not being imported correctly in your code. Make sure that you are importing the mutation using the correct syntax and that it is being imported from the correct file.\nThe mutation is not being added to the exports object in your lib/graphql.ts file. This file defines the list of mutations, queries, and subscriptions that are available to your code. Make sure that the mutation is being added to the exports object in this file.\nThere is a type error in your code. This error can sometimes occur if the mutation is being called with the wrong type of arguments. 
Make sure that the arguments passed to the mutation are of the correct type and that they match the schema definition.\nTo troubleshoot this error, you can try the following steps:\nDouble-check the schema definition for the mutation and make sure that it is properly defined and formatted.\nCheck the import statement for the mutation in your code and make sure that it is correct and is being imported from the correct file.\nCheck the lib/graphql.ts file and make sure that the mutation is being added to the exports object.\nCheck your code for type errors and make sure that the arguments passed to the mutation are of the correct type and match the schema definition.\nIf you are still unable to resolve the error, you can try to isolate the problem by creating a minimal reproduction of the error using a smaller codebase. This can help you identify the cause of the error and find a solution more easily\n" ]
[ 1, 0, 0 ]
[]
[]
[ "graphql", "typegraphql" ]
stackoverflow_0074438362_graphql_typegraphql.txt
Q: Ignore a line of Javascript if the element isn't present on the page and skip to the next line? Working on a django web app and running into an issue with my javascript. The web application is multiple different html pages, so the elements my js code is searching for are present on some pages but not others. If the second line is not present on the current page, the script stops running and the final function will not work. I have a "plan" page where you can add additional tags to your plan and then a separate page to filter results. If I'm on the plan page then the "#filterBtn" element is not present so my createNewTagField function doesn't work. If I switch the two lines of code, the opposite happens. I can't get them both to work since the elements javascript is searching for are on two different pages and not present at the same time. These are the lines causing problems. document.addEventListener('DOMContentLoaded', function() { document.querySelector('#mobile-menu').onclick = toggleMobileMenu; document.querySelector('#filterBtn').onclick = toggleFiltersMenu; document.querySelector('#addTag').onclick = createNewTagField; }); I've rearranged the lines of code and it just fixes it for one page while still having the problem on the other page. I'm thinking it needs to be something like if null then continue to the next line, but haven't been able to find the right code from my searching. A: If you do not have more than a couple of tags/lines, I would use the try-catch statement to ignore a line of code if the element isn't present on the page: try { // querySelector returns null if the element is not found; using that null (e.g. setting .onclick on it) is what actually throws const element = document.querySelector('.my-element'); // Do something with the element... } catch (error) { // If an error is thrown, ignore it and continue with the next line console.log(error); } // This line of code will be executed even if the element is not found console.log('Element not found. 
Trying again') Do this for each of the scenarios. A: You can check for truthiness in JavaScript. https://developer.mozilla.org/en-US/docs/Glossary/Truthy Because the absence of an element in the DOM will result in null, this works pretty well, and it's highly readable. document.addEventListener('DOMContentLoaded', function() { const mobileMenu = document.querySelector('#mobile-menu'); const filterBtn = document.querySelector('#filterBtn'); const addTag = document.querySelector('#addTag'); if (mobileMenu) mobileMenu.onclick = toggleMobileMenu; if (filterBtn) filterBtn.onclick = toggleFiltersMenu; if (addTag) addTag.onclick = createNewTagField; });
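A third option, closest to the "if null then continue" idea in the question, is optional chaining (?.), which short-circuits when querySelector returns null. The sketch below drives the bindings from a small array and uses a tiny stand-in for document so it can run outside a browser; bindAll, fakeDoc, and noop are hypothetical names for the demo.

```javascript
// Attach each handler only when its element exists; a missing element is
// simply skipped instead of throwing and aborting the rest of the script.
function bindAll(doc, bindings) {
  for (const [selector, handler] of bindings) {
    const el = doc.querySelector(selector);
    if (el) el.onclick = handler;
    // In a browser you could write it in one line instead:
    // doc.querySelector(selector)?.addEventListener("click", handler);
  }
}

// Tiny stand-in for document: only #mobile-menu "exists" on this page.
const fakeDoc = {
  elements: { "#mobile-menu": {} },
  querySelector(sel) { return this.elements[sel] ?? null; }
};
const noop = () => {};
bindAll(fakeDoc, [["#mobile-menu", noop], ["#filterBtn", noop], ["#addTag", noop]]);
console.log(fakeDoc.elements["#mobile-menu"].onclick === noop); // true
```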
Ignore a line of Javascript if the element isn't present on the page and skip to the next line?
Working on a django web app and running into an issue with my javascript. The web application is multiple different html pages, so the elements my js code is searching for are present on some pages but not others. If the second line is not present on the current page, the script stops running and the final function will not work. I have a "plan" page where you can add additional tags to your plan and then a separate page to filter results. If I'm on the plan page then the "#filterBtn" element is not present so my createNewTagField function doesn't work. If I switch the two lines of code, the opposite happens. I can't get them both to work since the elements javascript is searching for are on two different pages and not present at the same time. These are the lines causing problems. document.addEventListener('DOMContentLoaded', function() { document.querySelector('#mobile-menu').onclick = toggleMobileMenu; document.querySelector('#filterBtn').onclick = toggleFiltersMenu; document.querySelector('#addTag').onclick = createNewTagField; }); I've rearranged the lines of code and it just fixes it for one page while still having the problem on the other page. I'm thinking it needs to be something like if null then continue to the next line, but haven't been able to find the right code from my searching.
[ "If you do not have more than a couple tags/lines it's I would use the try-catch statement to ignore a line of code if the element isn't present on the page:\ntry {\n// This line of code may throw an error if the element is not found\nconst element = document.querySelector('.my-element');\n// Do something with the element...\n} catch (error) {\n// If an error is thrown, ignore it and continue with the next line\nconsole.log(error); // Output: Error: The element is not found\n}\n\n// This line of code will be executed even if the element is not found\nconsole.log('Element not found. Trying again')\n\nDo this for each of the scenarios.\n", "You can check for truthiness in JavaScript. https://developer.mozilla.org/en-US/docs/Glossary/Truthy\nBecause the absence of an element in the DOM will result in null, this works pretty well, and it's highly readable.\ndocument.addEventListener('DOMContentLoaded', function() {\n \n const mobileMenu = document.querySelector('#mobile-menu');\n const filterBtn = document.querySelector('#filterBtn');\n const addTag = document.querySelector('#addTag');\n\n if (mobileMenu) mobileMenu.onclick = toggleMobileMenu;\n if (filterBtn) mobileMenu.onclick = toggleFiltersMenu;\n if (addTag) mobileMenu.onclick = createNewTagField;\n \n});\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "javascript" ]
stackoverflow_0074659373_django_javascript.txt
Q: How to increase the PasswordField text when seen on a mobile device? This JavaFX control doesn't seem to have a GluonFX equivalent for the mobile device environment. On the desktop, the circle char that masks user input looks fine. But on the Android the same char is as small as tapping paper with the tip of an ink pen. I've tried .password-field .text { -fx-font-size: 15; } and passwordField.setFont(new Font(15)); but neither seem to change it. Is this just how it is for now? No way to make those black circles look bigger on the phone to the User? A: This doesn't work: .password-field .text { -fx-font-size: 15; } because the inner Text node has its font property bound to the TextField/PasswordField control, and when the latter is set, it overrules it. However, as you mentioned, this doesn't work either: passwordField.setFont(new Font(15)); If you use Gluon Mobile, the glisten.css stylesheet applies the Roboto font, and the password field gets Roboto, regular, 13.3pt. You can verify this by adding a listener: passwordField.fontProperty().addListener((obs, ov, nv) -> System.out.println("Font: " + nv)); which results in something like: Font: Font[name=System Regular, family=System, style=Regular, size=15.0] Font: Font[name=System Regular, family=System, style=Regular, size=13.300000190734863] Font: Font[name=Roboto, family=Roboto, style=Regular, size=13.300000190734863] So it ends up using the usual "small" font. But this works: .password-field { -fx-font-size: 15pt; } as it is applied after the Roboto font is set (your css stylesheet is applied at the end). Font: Font[name=System Regular, family=System, style=Regular, size=15.0] ... Font: Font[name=Roboto, family=Roboto, style=Regular, size=19.999999523162843] (Note that on Android there is some font scaling applied as well). If the bullet symbol is still too small, you can use a bigger font size. See https://www.htmlsymbols.xyz/unicode/U+2022: Only with 28+ you will get a more visible symbol. 
The only caveat of using large font sizes is that the caret gets too tall.
How to increase the PasswordField text when seen on a mobile device?
This JavaFX control doesn't seem to have a GluonFX equivalent for the mobile device environment. On the desktop, the circle char that masks user input looks fine. But on the Android the same char is as small as tapping paper with the tip of an ink pen. I've tried .password-field .text { -fx-font-size: 15; } and passwordField.setFont(new Font(15)); but neither seem to change it. Is this just how it is for now? No way to make those black circles look bigger on the phone to the User?
[ "This doesn't work:\n.password-field .text {\n -fx-font-size: 15;\n}\n\nbecause the inner Text node has its font property bound to the TextField/PasswordField control, and when the latter is set, it overrules it.\nHowever, as you mentioned, this doesn't work either:\npasswordField.setFont(new Font(15));\n\nIf you use Gluon Mobile, the glisten.css stylesheet applies the Roboto font, and the password field gets Roboto, regular, 13.3pt.\nYou can verify this by adding a listener:\npassworldField.fontProperty().addListener((obs, ov, nv) -> System.out.println(\"Font: \" + nv));\n\nwhich results in something like:\nFont: Font[name=System Regular, family=System, style=Regular, size=15.0]\nFont: Font[name=System Regular, family=System, style=Regular, size=13.300000190734863]\nFont: Font[name=Roboto, family=Roboto, style=Regular, size=13.300000190734863]\n\nSo it ends up using the usual \"small\" font.\nBut this works:\n.password-field {\n -fx-font-size: 15pt;\n}\n\nas it is applied after the Roboto font is set (your css stylesheet is applied at the end).\nFont: Font[name=System Regular, family=System, style=Regular, size=15.0]\n...\nFont: Font[name=Roboto, family=Roboto, style=Regular, size=19.999999523162843]\n\n(Note that on Android there is some font scaling applied as well).\nIf the bullet symbol is still too small, you can use a bigger font size.\nSee https://www.htmlsymbols.xyz/unicode/U+2022): Only with 28+ you will get a more visible symbol.\nThe only caveat of using large font sizes is that the caret gets too tall.\n" ]
[ 2 ]
[]
[]
[ "android", "gluon_mobile", "javafx" ]
stackoverflow_0074648279_android_gluon_mobile_javafx.txt
Q: How to avoid Swift warning "Instance method 'willChangeValue' is unavailable from asynchronous contexts; " How can I call "willChangeValue" when using swift Task/await without the following warning showing up? Instance method 'willChangeValue' is unavailable from asynchronous contexts; Only notify of changes to a key in a synchronous context. Notifying changes across suspension points has undefined behavior.; this is an error in Swift 6 @objc dynamic var localFilesTitle: String { get { return "\(localTitle)(\(localFiles.count))" } set { } } override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. Task { await initialise() self.isInitialised = true let local = await self.getLocalFiles() DebugLog("Found \(local.count) local files") for file in local.filter({!$0.isDirectory}) { DebugLog(" \(file.name),\(file.size),\(file.modifiedDate)") } self.willChangeValue(forKey: "localFilesTitle") self.localFiles.append(contentsOf: local.filter({!$0.isDirectory})) self.didChangeValue(forKey: "localFilesTitle") // let remote = await self.getRemoteFiles() // // self.awsFiles = remote } } A: I avoid patterns where I have to call willChangeValue and didChangeValue manually. The dynamic stored properties can do this for us. So, there are a few additional approaches, in addition to those already discussed: I would forego the computed property, make it a simple stored property, and dynamic will take care of all the necessary KVO notifications for me. 
class ViewController: NSViewController { // or UIViewController, as appropriate @objc dynamic var localFilesTitle: String = "" var localTitle: String = "" var localFiles: [FileWrapper] = [] { didSet { localFilesTitle = "\(localTitle) (\(localFiles.count))" } } override func viewDidLoad() { super.viewDidLoad() Task { localTitle = "Foo" let localDirectories = await self.getLocalFiles() .filter { $0.isDirectory } localFiles.append(contentsOf: localDirectories) } } } The other approach is to make localFiles a dynamic property, as well, and use keyPathsForValuesAffectingValue to tell the KVO system that the localFilesTitle is affected automatically by localFiles: class ViewController: NSViewController { @objc dynamic var localFilesTitle: String { "\(localTitle) (\(localFiles.count))" } var localTitle: String = "" @objc dynamic var localFiles: [FileWrapper] = [] override func viewDidLoad() { // same as above ... } override class func keyPathsForValuesAffectingValue(forKey key: String) -> Set<String> { guard key == #keyPath(localFilesTitle) else { return super.keyPathsForValuesAffectingValue(forKey: key) } return [#keyPath(localFiles)] } } A: That code should be called on the main actor, so you should wrap it like this: await MainActor.run { self.willChangeValue(forKey: "localFilesTitle") self.localFiles.append(contentsOf: local.filter({!$0.isDirectory})) self.didChangeValue(forKey: "localFilesTitle") } A: The proposed solution by jrturton works, but if you want to avoid having even more nested blocks you can also delegate to a MainActor async method like this: Updated with feedback from jrturton. override func viewDidLoad() { super.viewDidLoad() Task { /* async code here */ await myMethod() } } @MainActor private func myMethod() { willChangeValue(forKey: "localFilesTitle") localFiles.append(contentsOf: local.filter({!$0.isDirectory})) didChangeValue(forKey: "localFilesTitle") }
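The first answer's didSet approach boils down to one idea: keep localFilesTitle as a value derived from localFiles and recompute it on every mutation, instead of notifying KVO by hand across a suspension point. That idea can be sketched language-neutrally in Python (all names here are invented for illustration; this is not the Swift/KVO API):

```python
# Minimal sketch, not Swift: a "title" derived from the list and
# recomputed on demand, so no manual willChangeValue/didChangeValue
# bookkeeping is needed. FileListModel and its members are illustrative.
class FileListModel:
    def __init__(self, local_title):
        self.local_title = local_title
        self._local_files = []

    def extend(self, new_files):
        # Single mutation choke point: plays the role of Swift's didSet,
        # so every observer sees a freshly derived title afterwards.
        self._local_files.extend(new_files)

    @property
    def local_files_title(self):
        # Derived value, analogous to the @objc dynamic computed title.
        return f"{self.local_title} ({len(self._local_files)})"

model = FileListModel("Local")
model.extend(["a.txt", "b.txt"])
print(model.local_files_title)  # -> Local (2)
```

In the Swift version above, the same recomputation is wired up either through didSet on the stored array or through keyPathsForValuesAffectingValue.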
How to avoid Swift warning "Instance method 'willChangeValue' is unavailable from asynchronous contexts; "
How can I call "willChangeValue" when using swift Task/await without the following warning showing up? Instance method 'willChangeValue' is unavailable from asynchronous contexts; Only notify of changes to a key in a synchronous context. Notifying changes across suspension points has undefined behavior.; this is an error in Swift 6 @objc dynamic var localFilesTitle: String { get { return "\(localTitle)(\(localFiles.count))" } set { } } override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. Task { await initialise() self.isInitialised = true let local = await self.getLocalFiles() DebugLog("Found \(local.count) local files") for file in local.filter({!$0.isDirectory}) { DebugLog(" \(file.name),\(file.size),\(file.modifiedDate)") } self.willChangeValue(forKey: "localFilesTitle") self.localFiles.append(contentsOf: local.filter({!$0.isDirectory})) self.didChangeValue(forKey: "localFilesTitle") // let remote = await self.getRemoteFiles() // // self.awsFiles = remote } }
[ "I avoid patterns where I have to call willChangeValue and didChangeValue manually. The dynamic stored properties can do this for us.\nSo, there are a few additional approaches, in addition to those already discussed:\n\nI would forego the computed property, make it a simple stored property, and dynamic will take care of all the necessary KVO notifications for me.\nclass ViewController: NSViewController { // or UIViewController, as appropriate\n\n @objc dynamic var localFilesTitle: String = \"\"\n\n var localTitle: String = \"\"\n\n var localFiles: [FileWrapper] = [] {\n didSet {\n localFilesTitle = \"\\(localTitle) (\\(localFiles.count))\"\n }\n }\n\n override func viewDidLoad() {\n super.viewDidLoad()\n\n Task {\n localTitle = \"Foo\"\n let localDirectories = await self.getLocalFiles()\n .filter { $0.isDirectory }\n\n localFiles.append(contentsOf: localDirectories)\n }\n }\n}\n\n\nThe other approach is to make localFiles a dynamic property, as well, and use keyPathsForValuesAffectingValue to tell the KVO system that the localFilesTitle is affected automatically by localFiles:\nclass ViewController: NSViewController {\n\n @objc dynamic var localFilesTitle: String { \"\\(localTitle) (\\(localFiles.count))\" }\n\n var localTitle: String = \"\"\n\n @objc dynamic var localFiles: [FileWrapper] = []\n\n override func viewDidLoad() { \n // same as above ...\n }\n\n override class func keyPathsForValuesAffectingValue(forKey key: String) -> Set<String> {\n guard key == #keyPath(localFilesTitle) else {\n return super.keyPathsForValuesAffectingValue(forKey: key)\n }\n\n return [#keyPath(localFiles)]\n }\n}\n\n\n\n", "That code should be called on the main actor, so you should wrap it like this:\nawait MainActor.run { \n self.willChangeValue(forKey: \"localFilesTitle\")\n self.localFiles.append(contentsOf: local.filter({!$0.isDirectory}))\n self.didChangeValue(forKey: \"localFilesTitle\") \n}\n\n", "The proposed solution by jrturton works, but if you want to avoid having 
even more nested blocks you can also delegate to a MainActor async method like this:\nUpdated with feedback from jrturton.\noverride func viewDidLoad() {\n super.viewDidLoad()\n Task {\n /*\n async code here\n */\n\n await myMethod()\n } \n}\n\n@MainActor private func myMethod() {\n willChangeValue(forKey: \"localFilesTitle\")\n localFiles.append(contentsOf: local.filter({!$0.isDirectory}))\n didChangeValue(forKey: \"localFilesTitle\")\n}\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "async_await", "swift", "task" ]
stackoverflow_0074651103_async_await_swift_task.txt
Q: How do I store variable of user input into specific address using ARM assembly The user inputs a value, which is read using scanf and stored into a variable. I want to put that variable into a specific address that we have defined. I tried using [var name] dcd [value] but it does not work. This is my code /* -- printf01.s */ .data address dcd 0x2000000 /* First message */ .balign 4 ask_number: .asciz "Enter a number to store: " .balign 4 ask_address: .asciz "Enter an addresss:" /* Second message */ .balign 4 read_number: .asciz "Number from user input is %d\n" .balign 4 read_address: .asciz "Number in address %d is %d\n" /* Format pattern for scanf */ .balign 4 scan_pattern: .asciz "%d" /* Where scanf will store the number read */ .balign 4 number: .word 0 .balign 4 return: .word 0 .text .global main main: ldr r1, =return str lr, [r1] @ save return address into return ldr r0, =ask_number @ r0 <- &message1 bl printf @ call to printf ldr r0, =scan_pattern @ r0 <- &scan_pattern ldr r1, =number @ r1 <- &number_read bl scanf @ call to scanf ldr r0, =read_number ldr r1, =number ldr r1, [r1] bl printf ldr r0, =read_address @ r0 <- &message2 ldr r1, =number @ r1 <- &number_read ldr r2, [r1] @ r1 <- *r1 bl printf @ call to printf ldr lr, =return ldr lr, [lr] @ load return address bx lr /* External */ .global printf .global scanf Any help is appreciated A: Here's how you do it (assuming the value you want to store at 0x2000000 is in r1) mov r0,0x2000000 str r1,[r0]
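The answer's two instructions ("mov r0,0x2000000" to load the address, then "str r1,[r0]" to store the word there) can be mirrored conceptually in Python, with a bytearray standing in for memory. This is only an illustration of "store a 32-bit word at a fixed address"; the base address reuse and little-endian layout are assumptions for the sketch, not a real device memory map:

```python
import struct

# Conceptual mirror of "mov r0,0x2000000; str r1,[r0]": write a 32-bit
# word at a fixed address. The bytearray is a stand-in for RAM.
MEMORY_BASE = 0x2000000
memory = bytearray(16)  # pretend RAM window starting at MEMORY_BASE

def store_word(address, value):
    offset = address - MEMORY_BASE
    memory[offset:offset + 4] = struct.pack("<I", value)   # like str r1,[r0]

def load_word(address):
    offset = address - MEMORY_BASE
    return struct.unpack("<I", memory[offset:offset + 4])[0]  # like ldr r1,[r0]

store_word(MEMORY_BASE, 42)
print(load_word(MEMORY_BASE))  # -> 42
```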
How do I store variable of user input into specific address using ARM assembly
The user inputs a value, which is read using scanf and stored into a variable. I want to put that variable into a specific address that we have defined. I tried using [var name] dcd [value] but it does not work. This is my code /* -- printf01.s */ .data address dcd 0x2000000 /* First message */ .balign 4 ask_number: .asciz "Enter a number to store: " .balign 4 ask_address: .asciz "Enter an addresss:" /* Second message */ .balign 4 read_number: .asciz "Number from user input is %d\n" .balign 4 read_address: .asciz "Number in address %d is %d\n" /* Format pattern for scanf */ .balign 4 scan_pattern: .asciz "%d" /* Where scanf will store the number read */ .balign 4 number: .word 0 .balign 4 return: .word 0 .text .global main main: ldr r1, =return str lr, [r1] @ save return address into return ldr r0, =ask_number @ r0 <- &message1 bl printf @ call to printf ldr r0, =scan_pattern @ r0 <- &scan_pattern ldr r1, =number @ r1 <- &number_read bl scanf @ call to scanf ldr r0, =read_number ldr r1, =number ldr r1, [r1] bl printf ldr r0, =read_address @ r0 <- &message2 ldr r1, =number @ r1 <- &number_read ldr r2, [r1] @ r1 <- *r1 bl printf @ call to printf ldr lr, =return ldr lr, [lr] @ load return address bx lr /* External */ .global printf .global scanf Any help is appreciated
[ "Here's how you do it (assuming the value you want to store at 0x2000000 is in r1)\nmov r0,0x2000000\nstr r1,[r0]\n\n" ]
[ 0 ]
[]
[]
[ "arm" ]
stackoverflow_0074285650_arm.txt
Q: Java Formatting input by user Create a program that will make the user input the MONTH, DAY and YEAR as integers, then use a switch statement and concatenation to display it in this format: Sample output:       Enter a Month : 1       Enter a Day : 13        Enter a Year : 2022       January 13 , 2022 Note: If the user inputs an invalid month or day then the program will display that it is invalid. The months of the calendar are 12 and the days go up to 31. This is what my teacher asked us to do, but I am not yet knowledgeable in this language. Can someone help me, please?
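The assignment asks for Java's switch plus concatenation; the validate-then-look-up logic can be sketched first in Python, where a tuple lookup stands in for the switch (in Java, each MONTHS entry would become one case label). The function name is invented for illustration:

```python
# Sketch of the assignment's logic. In Java, the MONTHS lookup would be
# a switch (month) with cases 1..12, and "Invalid date" would be the
# default branch.
MONTHS = ("January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December")

def format_date(month, day, year):
    # Validate as the prompt requires: 12 months, days up to 31.
    if not (1 <= month <= 12) or not (1 <= day <= 31):
        return "Invalid date"
    # Concatenation, matching the "January 13 , 2022" sample output.
    return MONTHS[month - 1] + " " + str(day) + " , " + str(year)

print(format_date(1, 13, 2022))   # -> January 13 , 2022
print(format_date(13, 1, 2022))   # -> Invalid date
```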
Java Formatting input by user
Create a program that will make the user input the MONTH, DAY and YEAR as integers, then use a switch statement and concatenation to display it in this format: Sample output:       Enter a Month : 1       Enter a Day : 13        Enter a Year : 2022       January 13 , 2022 Note: If the user inputs an invalid month or day then the program will display that it is invalid. The months of the calendar are 12 and the days go up to 31. This is what my teacher asked us to do, but I am not yet knowledgeable in this language. Can someone help me, please?
[]
[]
[ "The following code demonstrates the use of various format specifiers.\nint i = 55/22;\n// decimal integer\nSystem.out.printf (\"55/22 = %d %n\", i);\n// Pad with zeros\ndouble q = 1.0/2.0;\nSystem.out.printf (\"1.0/2.0 = %09.3f %n\", q);\n// Scientific notation\nq = 5000.0/3.0;\nSystem.out.printf (\"5000/3.0 = %7.2e %n\", q);\n// Negative infinity\nq = -10.0/0.0;\nSystem.out.printf (\"-10.0/0.0 = %7.2e %n\", q);\n// Multiple arguments\n//Pi value, E–base of natural logarithm\nSystem.out.printf (\"pi = %5.3f, e = %5.4f %n\", Math.PI,Math.E);\n\nOutput:\n55/22 = 2\n21.0/55.0 = 00000.500\n5000/3.0 = 1.67e+03\n-10.0/0.0 = -Infinity\npi = 3.142, e = 2.7183 \n\n" ]
[ -2 ]
[ "java" ]
stackoverflow_0074659816_java.txt
Q: If textfield is empty set border to red, if we enter text, remove red border In Swift, I have added border color red to text fields. I want to implement that if we enter any text in it, the textfield border color should be removed, and if again there is no text in it, the border color should again be set to red. Without a click of any button. (programmatically only) A: You can do this with the following simple code. Add this line in your viewDidLoad or view configure method: textField.addTarget(self, action: #selector(ViewController.textFieldDidChange(_:)), for: .editingChanged) Change your text field border color or remove the border as you need: @objc func textFieldDidChange(_ textField: UITextField) { if (textField.text!.isEmpty) { // change your textfield border color } else { // remove text field border or change color } } A: This should help: if self.yourTextField.text.isEmpty { self.yourTextField.layer.borderColor = UIColor.red.cgColor self.yourTextField.layer.borderWidth = 1 } else { self.yourTextField.layer.borderWidth = 0 } A: Xcode 14 / Swift 5.7: Set up a binding variable and use the .onChange modifier with an .overlay modifier containing a shape (i.e. RoundedRectangle) to change the "border" color. NOTE: you will want to use the .textFieldStyle(PlainTextFieldStyle()) modifier too. @State private var isTitleMissing: Bool = true @State private var inputTitle: String = "" TextField("Insert Title", text: $inputTitle) .textFieldStyle(PlainTextFieldStyle()) .onChange(of: inputTitle) { _ in isTitleMissing = inputTitle.isEmpty } .overlay(RoundedRectangle(cornerRadius: 16).stroke(isTitleMissing ? Color.red : Color.gray))
If textfield is empty set border to red, if we enter text, remove red border
In Swift, I have added border color red to text fields. I want to implement that if we enter any text in it, the textfield border color should be removed, and if again there is no text in it, the border color should again be set to red. Without a click of any button. (programmatically only)
[ "you can do this with following simple code.\nadd this below line in you viewdidload or view configure method\ntextField.addTarget(self, action: #selector(ViewController.textFieldDidChange(_:)), for: .editingChanged)\n\nchange your text field border color or remove the border as you need\n@objc func textFieldDidChange(_ textField: UITextField) {\n if (textField.text!.isEmpty)\n {\n // change your textfield border color\n }\n else\n {\n // remove text field border or change color\n }\n }\n\n", "This should help:\nif self.yourTextField.text.isEmpty\n{\n self.yourTextField.layer.borderColor = UIColor.red.cgColor\n self.yourTextField.layer.borderWidth = 1\n}\nelse\n{\n self.yourTextField.layer.borderWidth = 0\n}\n\n", "Xcode 14 / Swift 5.7:\nSetup a binding variable and use the .onChange modifier with an .overlay modifier containing a shape (i.e. RoundedRectangle) to change the \"border\" color. NOTE: you will want to use the .textFieldStyle(PlainTextField()) modifier too.\n@State private var isTitleMissing: Bool = true\n@State private var inputTitle: String = \"\"\n \nTextField(\"Insert Title\", text: $inputTitle)\n .textFieldStyle(PlainTtextFieldStyle())\n .onChange(of: inputTitle) { _ in\n isTitleMissing = inputTitle.isEmpty\n }\n .overlay(RoundedRectangle(cornerRadius: 16).stroke(isTitleMissing ? Color.red : Color.gray))\n\n\n" ]
[ 5, 0, 0 ]
[]
[]
[ "border", "ios", "ipad", "iphone", "swift" ]
stackoverflow_0069913604_border_ios_ipad_iphone_swift.txt
Q: How to save XGBoost/LightGBM model to PostgreSQL database in Python for subsequent inference in Java? I'm restricted to a PostgreSQL as 'model storage' for the models themselves or respective components (coefficients, ..). Obviously, PostgreSQL is far from being a fully-fledged model storage, so I can't rule out that I have to implement the whole model training process in Java [...]. I couldn't find a solution that involves a PostgreSQL database as intermediate storage for the models. Writing files directly to the disk/other storages isn't really an option for me. I considered calling Python code from within the Java application but I don't know whether this would be an efficient solution for subsequent inference tasks and beyond [...]. Are there ways to serialize PMML or other formats that can be loaded via Java implementations of the algorithms? Or ways to use the model definitions/parameters directly for reproducing the model [...]? A: Using PostgreSQL as dummy model storage: Train a model in Python. Establish PostgreSQL connection, dump your model in Pickle data format to the "models" table. Obviously, the data type of the main column should be BYTEA (PostgreSQL's binary type; it has no BLOB type). Anytime you want to use the model for some application, unpickle it from the "models" table. The "models" table may have extra columns for storing the model in alternative data formats such as PMML. Assuming you've used correct Python-to-PMML conversion tools, you can assume that the Pickle representation and the PMML representation of the same model will be functionally identical (ie. making the same prediction when given the same input). Using PMML in Java/JVM applications is easy.
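The pickle round trip at the heart of this answer can be sketched as below. A real setup would INSERT the bytes into a binary column and SELECT them back through a driver such as psycopg2; that part is assumed away here, with a dict standing in for the "models" table and TinyModel as an invented stand-in for a trained XGBoost/LightGBM booster:

```python
import pickle

# Sketch of the "dummy model storage" flow; the dict replaces the real
# PostgreSQL "models" table, and TinyModel replaces a trained booster.
class TinyModel:
    def __init__(self, coef):
        self.coef = coef
    def predict(self, x):
        return self.coef * x

models_table = {}  # pretend table: model name -> serialized payload

# Step 2 of the answer: dump the model in Pickle format into the table.
blob = pickle.dumps(TinyModel(coef=3))
models_table["scorer"] = blob

# Step 3: whenever the model is needed, unpickle it from the table.
restored = pickle.loads(models_table["scorer"])
print(restored.predict(7))  # -> 21
```

For the Java side, the answer's extra PMML column is the piece that matters: pickle blobs are only readable back in Python, while the PMML copy of the same model can be evaluated from the JVM.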
How to save XGBoost/LightGBM model to PostgreSQL database in Python for subsequent inference in Java?
I'm restricted to a PostgreSQL as 'model storage' for the models itself or respective components (coefficients, ..). Obviously, PostgreSQL is far from being a fully-fledged model storage, so I can't rule out that I have to implement the whole model training process in Java [...]. I couldn't find a solution that involves a PostgreSQL database as intermediate storage for the models. Writing files directly to the disk/other storages isn't really an option for me. I considered calling Python code from within the Java application but I don't know whether this would be an efficient solution for subsequent inference tasks and beyond [...]. Are there ways to serialize PMML or other formats that can be loaded via Java implementations of the algorithms? Or ways to use the model definitions/parameters directly for reproducing the model [...]?
[ "Using PostgreSQL as dummy model storage:\n\nTrain a model in Python.\nEstablish PostgreSQL connection, dump your model in Pickle data format to the \"models\" table. Obviously, the data type of the main column should be BLOB.\nAnytime you want to use the model for some application, unpickle it from the \"models\" table.\n\nThe \"models\" table may have extra columns for storing the model in alternative data formats such as PMML. Assuming you've used correct Python-to-PMML conversion tools, you can assume that the Pickle representation and the PMML representation of the same model will be functionally identical (ie. making the same prediction when given the same input). Using PMML in Java/JVM applications is easy.\n" ]
[ 0 ]
[]
[]
[ "java", "lightgbm", "machine_learning", "python", "xgboost" ]
stackoverflow_0074656521_java_lightgbm_machine_learning_python_xgboost.txt
Q: String interpolation not replacing with actual value in Jenkins Groovy I am trying to replace value of a Yaml file with yq and global variable declared in my jenkins.groovy script. The Code does not throw any error but it is not replacing the values and Yaml remains unchanged. stage('Clone abc repository and prepare abc-installer') { agent { label LABEL_CICD } steps { sshagent(credentials:['jenkins-deploy-password']) { sh 'whoami; PWD=$(pwd); SCRIPT_DIR=${PWD}/installer; \ yq -i \'.abc-server.image.tag = $IMAGE_TAG\' ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \ cat ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \ yq -i \'.spec.source.targetRevision = $ARGO_TARGETREVISION\' ${SCRIPT_DIR}/manifests/abc/abc.yaml; \ cat ${SCRIPT_DIR}/manifests/abc/abc.yaml; \ } } } A: You should be using double quotes for String interpolation to work. sh """ whoami; PWD=$(pwd); SCRIPT_DIR=${PWD}/installer; \ yq -i \'.abc-server.image.tag = $IMAGE_TAG\' ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \ cat ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \ yq -i \'.spec.source.targetRevision = $ARGO_TARGETREVISION\' ${SCRIPT_DIR}/manifests/abc/abc.yaml; \ cat ${SCRIPT_DIR}/manifests/abc/abc.yaml; """
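The quoting rule behind this answer (Groovy single-quoted strings never interpolate; double-quoted GStrings do) can be made concrete with Python's string.Template, which distinguishes the raw template from the substituted result. IMAGE_TAG and the "1.2.3" value are illustrative stand-ins:

```python
from string import Template

# Illustration only: Template's raw text vs. substituted text mirrors
# Groovy's single-quoted (literal) vs. double-quoted (interpolated)
# strings passed to the sh step.
script = Template("yq -i '.abc-server.image.tag = $IMAGE_TAG' values.yaml")

literal = script.template                           # what single quotes send to sh
substituted = script.substitute(IMAGE_TAG="1.2.3")  # what double quotes send

print(literal)      # $IMAGE_TAG reaches the shell as literal text
print(substituted)  # the tag value is baked into the command
```

One caveat the sketch cannot show: inside a Groovy double-quoted block, shell syntax such as ${PWD} is also interpolated by Groovy first, so variables meant for the shell typically need escaping (\${PWD}).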
String interpolation not replacing with actual value in Jenkins Groovy
I am trying to replace value of a Yaml file with yq and global variable declared in my jenkins.groovy script. The Code does not throw any error but it is not replacing the values and Yaml remains unchanged. stage('Clone abc repository and prepare abc-installer') { agent { label LABEL_CICD } steps { sshagent(credentials:['jenkins-deploy-password']) { sh 'whoami; PWD=$(pwd); SCRIPT_DIR=${PWD}/installer; \ yq -i \'.abc-server.image.tag = $IMAGE_TAG\' ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \ cat ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \ yq -i \'.spec.source.targetRevision = $ARGO_TARGETREVISION\' ${SCRIPT_DIR}/manifests/abc/abc.yaml; \ cat ${SCRIPT_DIR}/manifests/abc/abc.yaml; \ } } }
[ "You should be using double quotes for String interpolation to work.\nsh \"\"\"\nwhoami; PWD=$(pwd); SCRIPT_DIR=${PWD}/installer; \\\n \n yq -i \\'.abc-server.image.tag = $IMAGE_TAG\\' ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \\\n cat ${SCRIPT_DIR}/manifests/abc/values-single.yaml; \\\n yq -i \\'.spec.source.targetRevision = $ARGO_TARGETREVISION\\' ${SCRIPT_DIR}/manifests/abc/abc.yaml; \\\n cat ${SCRIPT_DIR}/manifests/abc/abc.yaml;\n\"\"\"\n\n" ]
[ 0 ]
[]
[]
[ "groovy", "jenkins" ]
stackoverflow_0074658745_groovy_jenkins.txt
Q: Command PhaseScriptExecution failed with a nonzero exit code while trying to add Flutter to iOS app I am trying to add flutter to the existing app. So before doing on the production app I tried with brand new Xcode 10 Single View Application. I followed the tutorial here on https://github.com/flutter/flutter/wiki/Add-Flutter-to-existing-apps and got stuck after adding the run script in the build phase of my target. It's giving the error: Error: iphoneos/AFEiOS.build export TEMP_ROOT=/Users/dhavalkansara/Library/Developer/Xcode/DerivedData/AFEiOS-gctxucyuhlhesnfkbuxfswkozboo/Build/Intermediates.noindex export TOOLCHAINS=com.apple.dt.toolchain.XcodeDefault export TOOLCHAIN_DIR=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain export TREAT_MISSING_BASELINES_AS_TEST_FAILURES=NO export TeamIdentifierPrefix=RQ9BPQCP49. export UID=501 export UNLOCALIZED_RESOURCES_FOLDER_PATH=AFEiOS.app export UNSTRIPPED_PRODUCT=NO export USER=dhavalkansara export USER_APPS_DIR=/Users/dhavalkansara/Applications export USER_LIBRARY_DIR=/Users/dhavalkansara/Library export USE_DYNAMIC_NO_PIC=YES export USE_HEADERMAP=YES export USE_HEADER_SYMLINKS=NO export VALIDATE_PRODUCT=NO export VALID_ARCHS="arm64 arm64e armv7 armv7s" export VERBOSE_PBXCP=NO export VERSIONPLIST_PATH=AFEiOS.app/version.plist export VERSION_INFO_BUILDER=dhavalkansara export VERSION_INFO_FILE=AFEiOS_vers.c export VERSION_INFO_STRING=""@(#)PROGRAM:AFEiOS PROJECT:AFEiOS-"" export WRAPPER_EXTENSION=app export WRAPPER_NAME=AFEiOS.app export WRAPPER_SUFFIX=.app export WRAP_ASSET_PACKS_IN_SEPARATE_DIRECTORIES=NO export XCODE_APP_SUPPORT_DIR=/Applications/Xcode.app/Contents/Developer/Library/Xcode export XCODE_PRODUCT_BUILD_VERSION=10E1001 export XCODE_VERSION_ACTUAL=1020 export XCODE_VERSION_MAJOR=1000 export XCODE_VERSION_MINOR=1020 export XPCSERVICES_FOLDER_PATH=AFEiOS.app/XPCServices export YACC=yacc export arch=undefined_arch export variant=normal /bin/sh -c 
/Users/dhavalkansara/Library/Developer/Xcode/DerivedData/AFEiOS-gctxucyuhlhesnfkbuxfswkozboo/Build/Intermediates.noindex/AFEiOS.build/Debug-iphoneos/AFEiOS.build/Script-19DAA30A22C0FB0100A039E2.sh The path lib/main.dart does not exist The path does not exist Command PhaseScriptExecution failed with a nonzero exit code I have tried the things below. My pod file: # Uncomment the next line to define a global platform for your project platform :ios, '10.0' target 'AFEiOS' do # Comment the next line if you don't want to use dynamic frameworks use_frameworks! # Pods for AFEiOS target 'AFEiOSTests' do inherit! :search_paths # Pods for testing end target 'AFEiOSUITests' do inherit! :search_paths # Pods for testing end flutter_application_path = '/Users/dhavalkansara/FlutterToNative/AFE_flutter/' eval(File.read(File.join(flutter_application_path, '.ios', 'Flutter', 'podhelper.rb')), binding) end Already added FLUTTER_ROOT in my project. Please help me out with this issue. A: This worked for me. Thanks Alexander Lozano; I uploaded an image for Xcode Version 12.0.1 A: This works for me: Xcode 11.3.1 Xcode -> targets -> runner in build phases: run script and thin binary, select: run script only when installing A: I was having the issue for 2 days. My failure was slightly different: Failed to package /path/to/my/app. Command PhaseScriptExecution failed with a nonzero exit code note: Using new build system note: Planning note: Build preparation complete note: Building targets in parallel I was on Flutter 2.5.1, I tried all of the suggestions found here (amongst others I googled). I wish I knew the answer or understood why, but a simple `flutter upgrade` worked for me (I ended up with 2.5.3) A: This worked for me: 1) Add this to your podfile: flutter_application_path = '/Users/dhavalkansara/FlutterToNative/AFE_flutter/' eval(File.read(File.join(flutter_application_path, '.ios', 'Flutter', 'podhelper.rb')), binding) install_all_flutter_pods(flutter_application_path) PS. 
You may also need to do that for each target, i.e target 'AFEiOSTests' do inherit! :search_paths install_all_flutter_pods(flutter_application_path) # Pods for testing end target 'AFEiOSUITests' do inherit! :search_paths install_all_flutter_pods(flutter_application_path) # Pods for testing end 2) Run pod install A: By doing flutter clean. Xcode builds clean.(Key: command+shift+k). xcode ->targets->runner in build phases: run script and thin binary, unselect: run script only when installing then run flutter run in the terminal of the project resolve the issue for me. A: For beginners: (tested on XCODE 12.0.1) open Xcode--> Open a project or file --> go to the flutter app path/ios directory--> open --> Runner and follow attached image steps A: Fix for me was force quitting Xcode. There recently was update, after which this issue appeared. After normal quitting and opening, there was error visible. But after force quitting and opening, Xcode showed dialog box, that some of its dependencies are missing and they will be downloaded. After that build was working for me again. A: Easy way to fix this: Go to Menu -> System Preferences -> Security & Privacy. Choose the Privacy Tab. Select the Full Disk Access item in the left list. Select Xcode in the right list. Source: https://github.com/flutter/flutter/issues/76302 A: If you dig deep and try and find out what is causing this error - it will help you to go to the right direction. For me the reason for this error was " "/packages/flutter_tools/bin/xcode_backend.sh: No such file or directory" Now in order to find xcode_backend.sh the FLUTTER_ROOT variable needs to be defined. It is defined in a file called Flutter/Generated.xcconfig, which is generated when you run flutter pub get or flutter build. 
Go to Project -> Runner -> Info -> Configurations Debug Runner -> Generated (under Runner) Runner -> Pods-Runner.debug Release Runner -> Generated (under Runner) Runner -> Pods-Runner.release Profile Runner -> Generated (under Runner) Runner -> Pods-Runner.profile At the end all 3 will have 2 configurations set. Best of luck :)
Other answers may suggest marking the "For install builds only" option in Xcode - that actually broke things for me; the build started complaining about not finding Flutter.h. So yeah, restarting things may solve your problem. Perhaps throw in a flutter upgrade as well. A: This worked out strangely for me. There recently was a new update, after which this issue appeared. After normal quitting and opening, the error was still visible. Force quitting Xcode and rebuilding the app worked for me. A: Building it from the CLI fixed it for me flutter build ipa A: For anyone facing this problem when building or archiving for the first time in Xcode, here is how I resolved it: clean the flutter project with flutter clean build the project with flutter build ios. This will sign the app for the App Store build/archive the app in Xcode A: I tried many things; for me the app worked after updating Flutter with flutter upgrade. I hope it works for you all too. By the way, I updated my Mac and Xcode as well :) A: TL;DR Try updating Flutter -> flutter upgrade I had this error a few times and always tried flutter clean, flutter pub get...etc but it didn't work for me. I also tried restarting the laptop and deleting recently added assets (as some comments suggest) but none of it worked. Then I remembered how I fixed it last time by updating Flutter, and it has just worked for me again, so definitely worth a try A: This happens because some packages are not updated in the iOS folder: delete the Pods folder delete Podfile.lock in the Podfile change the version ( platform :ios, '11.0') to '11' or higher flutter pub upgrade --major-versions (in the project terminal) pod install (in the ios terminal) flutter run A: open a terminal > go to your project folder flutter clean flutter pub get cd ios pod install //back to project folder cd .. flutter run or Build in your Xcode This worked for me in Xcode 14.1 A: Make sure configuration is set correctly to Debug, mine was set to "none"
Command PhaseScriptExecution failed with a nonzero exit code while trying to add Flutter to iOS app
I am trying to add flutter to the existing app. So before doing on the production app I tried with brand new Xcode 10 Single View Application. I followed the tutorial here on https://github.com/flutter/flutter/wiki/Add-Flutter-to-existing-apps and got stuck after adding the run script in the build phase of my target. It's giving the error: Error: iphoneos/AFEiOS.build export TEMP_ROOT=/Users/dhavalkansara/Library/Developer/Xcode/DerivedData/AFEiOS-gctxucyuhlhesnfkbuxfswkozboo/Build/Intermediates.noindex export TOOLCHAINS=com.apple.dt.toolchain.XcodeDefault export TOOLCHAIN_DIR=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain export TREAT_MISSING_BASELINES_AS_TEST_FAILURES=NO export TeamIdentifierPrefix=RQ9BPQCP49. export UID=501 export UNLOCALIZED_RESOURCES_FOLDER_PATH=AFEiOS.app export UNSTRIPPED_PRODUCT=NO export USER=dhavalkansara export USER_APPS_DIR=/Users/dhavalkansara/Applications export USER_LIBRARY_DIR=/Users/dhavalkansara/Library export USE_DYNAMIC_NO_PIC=YES export USE_HEADERMAP=YES export USE_HEADER_SYMLINKS=NO export VALIDATE_PRODUCT=NO export VALID_ARCHS="arm64 arm64e armv7 armv7s" export VERBOSE_PBXCP=NO export VERSIONPLIST_PATH=AFEiOS.app/version.plist export VERSION_INFO_BUILDER=dhavalkansara export VERSION_INFO_FILE=AFEiOS_vers.c export VERSION_INFO_STRING=""@(#)PROGRAM:AFEiOS PROJECT:AFEiOS-"" export WRAPPER_EXTENSION=app export WRAPPER_NAME=AFEiOS.app export WRAPPER_SUFFIX=.app export WRAP_ASSET_PACKS_IN_SEPARATE_DIRECTORIES=NO export XCODE_APP_SUPPORT_DIR=/Applications/Xcode.app/Contents/Developer/Library/Xcode export XCODE_PRODUCT_BUILD_VERSION=10E1001 export XCODE_VERSION_ACTUAL=1020 export XCODE_VERSION_MAJOR=1000 export XCODE_VERSION_MINOR=1020 export XPCSERVICES_FOLDER_PATH=AFEiOS.app/XPCServices export YACC=yacc export arch=undefined_arch export variant=normal /bin/sh -c 
/Users/dhavalkansara/Library/Developer/Xcode/DerivedData/AFEiOS-gctxucyuhlhesnfkbuxfswkozboo/Build/Intermediates.noindex/AFEiOS.build/Debug-iphoneos/AFEiOS.build/Script-19DAA30A22C0FB0100A039E2.sh The path lib/main.dart does not exist The path does not exist Command PhaseScriptExecution failed with a nonzero exit code I have tried below things. My pod file: # Uncomment the next line to define a global platform for your project platform :ios, '10.0' target 'AFEiOS' do # Comment the next line if you don't want to use dynamic frameworks use_frameworks! # Pods for AFEiOS target 'AFEiOSTests' do inherit! :search_paths # Pods for testing end target 'AFEiOSUITests' do inherit! :search_paths # Pods for testing end flutter_application_path = '/Users/dhavalkansara/FlutterToNative/AFE_flutter/' eval(File.read(File.join(flutter_application_path, '.ios', 'Flutter', 'podhelper.rb')), binding) end Already added FLUTTER_ROOT in my project. Please help me out with this issue.
[ "This work for me Thanks Alexander Lozano, I upload image for Xcode Version 12.0.1\n\n", "this works for me: xcode 11.3.1\nxcode ->targets->runner\n\nin build phases: run script and thin binary, select: run script only when installing\n", "I was having the issue for 2 days.\nMy failure was slightly different:\nFailed to package /path/to/my/app.\nCommand PhaseScriptExecution failed with a nonzero exit code\nnote: Using new build system\nnote: Planning\nnote: Build preparation complete\nnote: Building targets in parallel\n\nI was on Flutter 2.5.1, I tried all of the suggestions found here (amongst others I googled). I wish I knew the answer or understood why, but a simple\n`flutter upgrade` \n\nWorked for me (I ended up with 2.5.3)\n", "This worked for me: \n1) Add this to your podfile: \nflutter_application_path = '/Users/dhavalkansara/FlutterToNative/AFE_flutter/'\n eval(File.read(File.join(flutter_application_path, '.ios', 'Flutter', 'podhelper.rb')), binding)\n\n install_all_flutter_pods(flutter_application_path)\n\nPS. You may also need to do that for each target, i.e \ntarget 'AFEiOSTests' do\n inherit! :search_paths\n install_all_flutter_pods(flutter_application_path)\n # Pods for testing\n end\n\n target 'AFEiOSUITests' do\n inherit! :search_paths\n install_all_flutter_pods(flutter_application_path)\n # Pods for testing\n end\n\n2) Run pod install\n", "By doing flutter clean. Xcode builds clean.(Key: command+shift+k).\nxcode ->targets->runner\nin build phases: run script and thin binary, unselect: run script only when installing\nthen run flutter run in the terminal of the project resolve the issue for me.\n", "For beginners: (tested on XCODE 12.0.1)\nopen Xcode--> Open a project or file --> go to the flutter app path/ios directory--> open --> Runner and follow attached image steps\n\n", "Fix for me was force quitting Xcode. There recently was update, after which this issue appeared. After normal quitting and opening, there was error visible. 
But after force quitting and opening, Xcode showed dialog box, that some of its dependencies are missing and they will be downloaded. After that build was working for me again.\n", "Easy way to fix this:\n\nGo to Menu -> System Preferences -> Security & Privacy.\nChoose the Privacy Tab.\nSelect the Full Disk Access item in the left list.\nSelect Xcode in the right list.\n\nSource: https://github.com/flutter/flutter/issues/76302\n", "If you dig deep and try and find out what is causing this error - it will help you to go to the right direction. For me the reason for this error was \"\n\"/packages/flutter_tools/bin/xcode_backend.sh: No such file or directory\"\nNow in order to find xcode_backend.sh the FLUTTER_ROOT variable needs to be defined. It is defined in a file called Flutter/Generated.xcconfig, which is generated when you run flutter pub get or flutter build.\nGoto Project -> Runner -> Info -> Configurations\nDebug Runner -> Generated\n(under Runner) Runner -> Pods-Runner.debug\nRelease Runner -> Generated\n(under Runner) Runner -> Pods-Runner.release\nProfile Runner -> Generated\n(under Runner) Runner -> Pods-Runner.profile\nAt the end all the 3 will have 2 configurations set.\nBest of luck :)\n", "I tried everything from flutter clean to pod deintegrate to restarting to changing to \"for install builds only\". None solved my problem. This happened every time I made large changes to an auto-generated API. Deleting the pushing everything, deleting the folder, and re-cloning worked for me, but it was of course none ideal.\nInstead of taking a solution wholesale here, you should run flutter run --verbose and trace the source of the error. In my case, I traced Member not found: 'FirebaseAppPlatform.verifyExtends'. to which this solution worked.\nWith a widely attributable error like this, always assess the risks of even more widely attributable solutions. 
Some of the solutions are quite dangerous.\n", "How I resolved this issue:\nstep 1: open up your flutter iOS directory in Xcode and try to build, it should give a more detailed overview of where the error is in your code.\nPhaseScriptExecution failed with a nonzero exit code an error returned by Xcode build engine - you probably ignored warnings not obvious syntax error in your flutter code, could be a scope or imports declaration.\nEnsure your build succeeds in Xcode before closing simulator.\n", "Try to run sudo rm -rf /private/var/folders/* in Terminal. This resolves my problem.\nRefer to: https://apple.stackexchange.com/a/176374/448505\n", "Somewhat related to other questions, such as this. For me this started happening after some update in my Mac. The solution included a flutter clean + flutter pub get and restarting both the device (iPhone) and computer (Mac) - restarting the Mac made all the difference. Other answers may suggest to mark the \"For install builds only\" option on Xcode - that actually broke things for me, build started complaining about not finding Flutter.h. So yeah, restarting things may solve your problem. Perhaps throw in a flutter upgrade as well.\n", "This worked out strange for me.\nThere recently was an new update, after which this issue appeared. After normal quitting and opening, error was visible.\nForce quit the xcode and rebuilding the app worked for me.\n", "Building it from the CLI fixed it for me\nflutter build ipa\n", "Anyone facing problem when building or archiving for 1st time on xcode\nHow I resolved this issue:\n\nclean flutter project with flutter clean\nbuild project with flutter build ios. 
This will sign app to the appstore\nbuild/archive app on xcode\n\n", "I trying many things for me apps works after update flutter with flutter upgrade\nI hope works for you all also by the way ı update may mac and xcode also :)\n", "TL;DR Try updating Flutter -> flutter upgrade\nI had this error a few times and always try the flutter clean, flutter pub get...etc but it doesn't work for me. I also try restarting laptop, deleting recently added assets (as some comments suggest) but none of it works.\nThen I remember how I fixed it last time by updating Flutter, and it has just worked for me again so definitely worth a try\n", "this happen because some packages not updated\nin Ios folder\n\ndelete pods folder\ndelete podfile.lock\nin podfile change version ( platform :ios, '11.0') '11' or higher\nflutter pub upgrade --major-versions (in project terminal)\npod install (in ios terminal)\nflutter run\n\n", "open a terminal > go to your project folder\nflutter clean\n\nflutter pub get\n\ncd ios\n\npod install\n\n//back to project folder\ncd .. \n\nflutter run\n\nor Build in your Xcode\n\nThis worked for me in Xcode 14.1\n", "Make sure configuration is set correctly to Debug, mine was set to \"none\"\n\n" ]
[ 73, 28, 6, 3, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "In my case i solved this issue by update xcode, so check the xcode if there is new update.\n" ]
[ -1 ]
[ "dart", "flutter", "ios", "swift", "xcode" ]
stackoverflow_0057919267_dart_flutter_ios_swift_xcode.txt
Q: MariaDB error message error when connecting to a database: SQLSTATE[42000]: Syntax error or access violation: 1064 I am doing a project where I have a registration form that I am trying to send to a MariaDB database to store the registration form values. I have an example from my professor and I have copied it for the most part, just changing some of the values I need to bind. The two following files are connected through a php file, that I don't think is necessary to see. However when the user submits the correct input I run into an error message(line 4 is marked below): SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' at line 4 I attached an image of my webpage after submitting and getting an error. This first "connectionInfo.php" file contains variable for the database name and password: <?php session_start(); $serverName = "localhost"; $dbname = "project"; $dbuserName = "root"; $dbpassword = ""; ?> this next file, "insertValidData.php" contains the main code to access and send information to the database: <?php if($isValid){ try{ $conn = new PDO ("mysql:host=$serverName;dbname=$dbname", $dbuserName, $dbpassword);// line 4 $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $sql = $conn->prepare("INSERT INTO registration (userName, password, firstName, lastName, address1, address2, city, state, zipCode, phone, email, gender, maritalStatus, dateOfBirth) VALUES (:userName, :password, :firstName, :lastName, :address1, :address2, :city, :state, :zipCode,:phone, :email, :gender, :maritalStatus, :dateOfBirth"); $sql->bindParam(':userName', $userName); $sql->bindParam(':password', $password); $sql->bindParam(':firstName', $firstName); $sql->bindParam(':lastName', $lastName); $sql->bindParam(':address1', $address); $sql->bindParam(':address2', $address2); $sql->bindParam(':city', $city); 
$sql->bindParam(':state', $state); $sql->bindParam(':zipCode', $zipcode); $sql->bindParam(':phone', $phoneNumber); $sql->bindParam(':email', $email); $sql->bindParam(':gender', $gender); $sql->bindParam(':maritalStatus', $maritalStat); $sql->bindParam(':dateOfBirth', $birthday); $sql->execute(); $last_id = $conn->lastInsertId(); $_SESSION["last_id"] = "last_id"; header("Location: confirmation.php"); } catch(PDOException $e){ echo "Connection Falied: " . $e->getMessage(); } finally{ $conn = null; } } ?> pic of the MariaDB database: enter image description here I have also checked to make sure that the variables in connectionInfo.php are the same as in insertValidData.php, and if I echo them out it seems that they are. I am really not sure what to do; I have even asked my teacher and they said it looks fine, so I thought I would try asking here. A: The problem must be something minor. You forgot the closing parenthesis after :dateOfBirth in the VALUES clause. Wrong use: $sql = $conn->prepare("INSERT INTO registration (userName, password, firstName, lastName, address1, address2, city, state, zipCode, phone, email, gender, maritalStatus, dateOfBirth) VALUES (:userName, :password, :firstName, :lastName, :address1, :address2, :city, :state, :zipCode,:phone, :email, :gender, :maritalStatus, :dateOfBirth"); Try this instead: $sql = $conn->prepare("INSERT INTO registration (userName, password, firstName, lastName, address1, address2, city, state, zipCode, phone, email, gender, maritalStatus, dateOfBirth) VALUES (:userName, :password, :firstName, :lastName, :address1, :address2, :city, :state, :zipCode,:phone, :email, :gender, :maritalStatus, :dateOfBirth)");
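To see why that one missing ) is enough to trigger the 1064 syntax error, here is a minimal self-contained sketch. It uses Python's sqlite3 module instead of PHP/PDO, and a cut-down two-column table (both assumptions, chosen only so the example runs without a MariaDB server); the principle is the same: the statement is rejected when it is prepared/executed, regardless of the bound values.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registration (userName TEXT, email TEXT)")
params = {"userName": "alice", "email": "alice@example.com"}

# Missing the ")" that closes the VALUES list, like the statement in the question:
broken = "INSERT INTO registration (userName, email) VALUES (:userName, :email"
# Same statement with the closing parenthesis:
fixed = "INSERT INTO registration (userName, email) VALUES (:userName, :email)"

try:
    conn.execute(broken, params)
    outcome = "no error"
except sqlite3.OperationalError:
    outcome = "syntax error"  # MariaDB reports the equivalent problem as error 1064

conn.execute(fixed, params)  # the closing parenthesis makes the statement valid
row_count = conn.execute("SELECT COUNT(*) FROM registration").fetchone()[0]
print(outcome, row_count)  # syntax error 1
```

Note that the syntax error is raised for the broken statement even though the parameters themselves are fine, which is why MariaDB's message points at the end of the statement ("near ''").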
MariaDB error message error when connecting to a database: SQLSTATE[42000]: Syntax error or access violation: 1064
I am doing a project where I have a registration form that I am trying to send to a MariaDB database to store the registration form values. I have an example from my professor and I have copied it for the most part, just changing some of the values I need to bind. The two following files are connected through a php file, that I don't think is necessary to see. However when the user submits the correct input I run into an error message(line 4 is marked below): SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' at line 4 I attached an image of my webpage after submitting and getting an error. This first "connectionInfo.php" file contains variable for the database name and password: <?php session_start(); $serverName = "localhost"; $dbname = "project"; $dbuserName = "root"; $dbpassword = ""; ?> this next file, "insertValidData.php" contains the main code to access and send information to the database: <?php if($isValid){ try{ $conn = new PDO ("mysql:host=$serverName;dbname=$dbname", $dbuserName, $dbpassword);// line 4 $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $sql = $conn->prepare("INSERT INTO registration (userName, password, firstName, lastName, address1, address2, city, state, zipCode, phone, email, gender, maritalStatus, dateOfBirth) VALUES (:userName, :password, :firstName, :lastName, :address1, :address2, :city, :state, :zipCode,:phone, :email, :gender, :maritalStatus, :dateOfBirth"); $sql->bindParam(':userName', $userName); $sql->bindParam(':password', $password); $sql->bindParam(':firstName', $firstName); $sql->bindParam(':lastName', $lastName); $sql->bindParam(':address1', $address); $sql->bindParam(':address2', $address2); $sql->bindParam(':city', $city); $sql->bindParam(':state', $state); $sql->bindParam(':zipCode', $zipcode); $sql->bindParam(':phone', $phoneNumber); $sql->bindParam(':email', 
$email); $sql->bindParam(':gender', $gender); $sql->bindParam(':maritalStatus', $maritalStat); $sql->bindParam(':dateOfBirth', $birthday); $sql->execute(); $last_id = $conn->lastInsertId(); $_SESSION["last_id"] = "last_id"; header("Location: confirmation.php"); } catch(PDOException $e){ echo "Connection Falied: " . $e->getMessage(); } finally{ $conn = null; } } ?> pic of the MariaDB database: enter image description here I have also check to make sure that the variables in connectionInfo.php are the same as in insertValidData.php and if I echo them out it seems that they are. I am really not sure what to do, I have even asked my teacher and they said it looks fine so I thought I would try asking here.
[ "The problem must be something minor. You may have forgotten to add parentheses. Wrong use\n$sql = $conn->prepare(\"INSERT INTO registration (userName, password, firstName, lastName, address1, address2, city, state, zipCode, phone, email, gender, maritalStatus, dateOfBirth)\n VALUES (:userName, :password, :firstName, :lastName, :address1, :address2, :city, :state, :zipCode,:phone, :email, :gender, :maritalStatus, :dateOfBirth\");\n\nWould you try this?\n$sql = $conn->prepare(\"INSERT INTO registration (userName, password, firstName, lastName, address1, address2, city, state, zipCode, phone, email, gender, maritalStatus, dateOfBirth)\n VALUES (:userName, :password, :firstName, :lastName, :address1, :address2, :city, :state, :zipCode,:phone, :email, :gender, :maritalStatus, :dateOfBirth)\");\n\n" ]
[ 0 ]
[]
[]
[ "database", "mariadb", "pdo", "php" ]
stackoverflow_0074659472_database_mariadb_pdo_php.txt
Q: What does question mark followed by colon does on this RegEx? Can anybody explain to me what is the meaning of a question mark followed by a colon in a regular expression? I've looked on the official documentation site and I can't find anything related to it. I know the ? is used after a token to indicate it is optional, but I just can't seem to find out what this does. The code goes as this \b(?:(?:https?|ftp)://|www.) My list of URLs is: www.google.com www.facebook.com www.youtube.com www.themeforest.net www.enter.co www.icefilms.info www.wikipedia.org www.rojadirecta.me http:// If I remove the first ?: and second ?: of the expression it works the same, selecting all the http, https:// and www. matches; my expression ends up like this \b((https?|ftp)://|www.) So I'm not getting what the difference is. I read somewhere that it had to do with the delimiters, but I'm already using /, so what is the need of these ?: A: In the regular expression \b(?:(?:https?|ftp)://|www.), the question mark followed by the colon (?:) marks a non-capturing group. This means the group is used only for grouping and alternation; it does not create a separate capture group that you can refer to later. In this regular expression, the inner non-capturing group (?:https?|ftp) is used to match "http", "https" or "ftp". The outer non-capturing group (?:(?:https?|ftp)://|www.) is used to match either the protocol prefix of a URL (e.g. the "http://" in "http://www.example.com") or just the "www." part of the URL (e.g. in "www.example.com"). That is also why removing the ?: "works the same": capturing and non-capturing groups match exactly the same text; they differ only in whether the matched text is stored as a numbered sub-group. Non-capturing groups are useful when you want to group multiple patterns together but do not want to capture the matched text as a separate group. This can make the regular expression more concise and easier to read. For more information on non-capturing groups and how to use them in regular expressions, you can refer to the documentation for your chosen regex engine.
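To make the difference concrete, here is a small sketch comparing the two variants of the pattern from the question. It is written with Python's re module rather than PHP's preg functions (an assumption made only so the example is self-contained; the (?:) syntax is identical in both engines), and the dot in www. is escaped, since an unescaped . matches any character.

```python
import re

url = "https://www.example.com"

# Same alternation, once with capturing groups and once with non-capturing (?: ) groups.
capturing = re.match(r"\b((https?|ftp)://|www\.)", url)
non_capturing = re.match(r"\b(?:(?:https?|ftp)://|www\.)", url)

# Both versions match exactly the same text...
print(capturing.group(0))      # https://
print(non_capturing.group(0))  # https://

# ...but only the capturing version stores numbered sub-groups.
print(capturing.groups())      # ('https://', 'https')
print(non_capturing.groups())  # ()
```

So swapping ( ) for (?: ) never changes which strings match; it only changes whether the engine records the sub-matches, which is why the question's two expressions select the same URLs.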
What does question mark followed by colon does on this RegEx?
Can anybody explain to me what is the meaning of question mark followed by colon in a regular expression? I've looked in the official documentation site and I can't find anything related to it. I know the ? is used after a token to indicate option but I just can't seem to find out what does this do. The code goes as this \b(?:(?:https?|ftp)://|www.) My list of url's is: www.google.com www.facebook.com www.youtube.com www.themeforest.net www.enter.co www.icefilms.info www.wikipedia.org www.rojadirecta.me http:// If I remove the first ?: and second ?: of the expression it works the same, selecting all the http, https:// and www. matches my expression ends up like this \b((https?|ftp)://|www.) So I'm not getting what is the difference, read somewhere that it had to do with the delimiters but I'm already using /, so what is the need of these ?:
[ "In the regular expression \\b(?:(?:https?|ftp)://|www.), the question mark followed by the colon (?:) is a non-capturing group. This means that the group will not be captured and can be accessed as a whole.\nIn this regular expression, the non-capturing group (?:https?|ftp) is used to match either \"http\" or \"https\" or \"ftp\". The non-capturing group (?:https?|ftp)://|www. is used to match either the full URL with the protocol (e.g. \"http://www.example.com\") or just the \"www.\" part of the URL (e.g. \"www.example.com\").\nNon-capturing groups are useful when you want to group multiple patterns together but do not want to capture the matched text as a separate group. This can make the regular expression more concise and easier to read.\nFor more information on non-capturing groups and how to use them in regular expressions, you can refer to the documentation for your chosen regex engine.\n" ]
[ 0 ]
[]
[]
[ "php", "regex" ]
stackoverflow_0074659806_php_regex.txt