content
stringlengths
86
88.9k
title
stringlengths
0
150
question
stringlengths
1
35.8k
answers
list
answers_scores
list
non_answers
list
non_answers_scores
list
tags
list
name
stringlengths
30
130
Q: Cannot find docker-compose.yml I wanted to edit the docker compose file of a project. After using docker inspect I found this: com.docker.compose.project.config_files /data/compose/15/docker-compose.yml But the directory /data/* does not exist and therefore I cannot find the compose file to edit. Where can I find the docker-compose.yml file? I tried to access the /data directory but this doesn’t exist A: I solved the problem by restoring the docker-compose.yml file from a previous Portainer backup and starting the stack via SSH and not in Portainer
Cannot find docker-compose.yml
I wanted to edit the docker compose file of a project. After using docker inspect I found this: com.docker.compose.project.config_files /data/compose/15/docker-compose.yml But the directory /data/* does not exist and therefore I cannot find the compose file to edit. Where can I find the docker-compose.yml file? I tried to access the /data directory but this doesn’t exist
[ "I solved the problem by restoring the docker-compose.yml file from a previous Portainer backup and starting the stack via SSH and not in Portainer\n" ]
[ 0 ]
[]
[]
[ "docker", "docker_compose" ]
stackoverflow_0074639735_docker_docker_compose.txt
Q: Importing express router from es6 module this is my first nodeJS project and im getting stuck already. In the package.json "type" = "module". I keep getting an error "ERR_MODULE_NOT_FOUND". I tried default and named export but both didnt work. productRoutes.js import express from 'express'; const router = express.Router(); router.get('/', (req, res) => { res.status(200).json({ msg: 'get products' }); }); export default router; server.js import 'dotenv/config'; import express from 'express'; import mongoose from 'mongoose'; import routes from './api/v1/routes/productRoutes'; // express app const app = express(); // middleware app.use(express.json()); app.use(express.urlencoded({ extended: true })); app.use((req, res, next) => { console.log(req.path, req.method); next(); }); // routes app.use('/api/products', routes); // connect to db mongoose .connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true, }) .then(() => { console.log('connected to database'); // listen to port app.listen(process.env.PORT, () => { console.log('listening for requests on port', process.env.PORT); }); }) .catch((err) => { console.log(err); }); A: If you don't provide extension of a js file .js the node compiler doesn't understand it, unless you use some other transpilers like babel. so change import routes from './api/v1/routes/productRoutes'; to import routes from './api/v1/routes/productRoutes.js';
Importing express router from es6 module
this is my first nodeJS project and im getting stuck already. In the package.json "type" = "module". I keep getting an error "ERR_MODULE_NOT_FOUND". I tried default and named export but both didnt work. productRoutes.js import express from 'express'; const router = express.Router(); router.get('/', (req, res) => { res.status(200).json({ msg: 'get products' }); }); export default router; server.js import 'dotenv/config'; import express from 'express'; import mongoose from 'mongoose'; import routes from './api/v1/routes/productRoutes'; // express app const app = express(); // middleware app.use(express.json()); app.use(express.urlencoded({ extended: true })); app.use((req, res, next) => { console.log(req.path, req.method); next(); }); // routes app.use('/api/products', routes); // connect to db mongoose .connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true, }) .then(() => { console.log('connected to database'); // listen to port app.listen(process.env.PORT, () => { console.log('listening for requests on port', process.env.PORT); }); }) .catch((err) => { console.log(err); });
[ "If you don't provide extension of a js file .js the node compiler doesn't understand it, unless you use some other transpilers like babel.\nso change\nimport routes from './api/v1/routes/productRoutes';\nto\nimport routes from './api/v1/routes/productRoutes.js';\n" ]
[ 1 ]
[]
[]
[ "javascript", "node.js" ]
stackoverflow_0074657169_javascript_node.js.txt
Q: How to see the readiness status of each container in a pod I have a pod with a normal python container and istio running as a side car. During a redeployment, I see that one container start fine, the other does not. NAME READY STATUS my-pod-name-xyz 1/2 Running The logs of both containers appear to be fine. In the description of the pod, I see the following event: Warning Unhealthy 97s kubelet Readiness probe failed: Get "http://172.29.148.23:15021/healthz/ready": dial tcp 172.29.148.23:15021: connect: connection refused How do I find out which container is failing? A: The ReadinessProbe for each container should be different, so you could see which is failing. Also use kubectl get pod <pod> -o yaml and search for containerStatuses. There you can see the status of each container.
How to see the readiness status of each container in a pod
I have a pod with a normal python container and istio running as a side car. During a redeployment, I see that one container start fine, the other does not. NAME READY STATUS my-pod-name-xyz 1/2 Running The logs of both containers appear to be fine. In the description of the pod, I see the following event: Warning Unhealthy 97s kubelet Readiness probe failed: Get "http://172.29.148.23:15021/healthz/ready": dial tcp 172.29.148.23:15021: connect: connection refused How do I find out which container is failing?
[ "The ReadinessProbe for each container should be different, so you could see which is failing.\nAlso use\nkubectl get pod <pod> -o yaml\n\nand search for containerStatuses. There you can see the status of each container.\n" ]
[ 3 ]
[]
[]
[ "kubectl", "kubernetes" ]
stackoverflow_0074656202_kubectl_kubernetes.txt
Q: Zsh detects insecure completion-dependent directories I get the following error messages when I open my terminal, Hyper: [oh-my-zsh] Insecure completion-dependent directories detected: drwxrwxr-x 7 dwaynethe2nd admin 224 Apr 25 15:00 /usr/local/share/zsh drwxrwxr-x 4 dwaynethe2nd admin 128 Apr 25 14:53 /usr/local/share/zsh/site-functions A: This is an issue with ZSH, your shell, not Hyper, your terminal. I actually had the same issue earlier today. There are some solutions in this issue on Github, and I will quote some of them here but I recommend you follow the link and read the comments there. The first solution is to change the ownership of the problematic directories. I will not recommend this without knowing more about your environment, but for most people this will fix the issue: chmod 755 /usr/local/share/zsh chmod 755 /usr/local/share/zsh/site-functions The second solution is to set ZSH_DISABLE_COMPFIX=true (or "true" in quotes) in your .zshrc file, to tell ZSH to not check for insecure directories. The third solution, and the solution that fixed the issue for me, is to initialise compinit with the -u flag. This will use all the directories found by compaudit without checking them for security issues. To do this, you will have to change your .zshrc file or wherever you are configuring autocomplete. A: On my mac, what helped was running brew doctor The program told me the exact commands to run! A: What solved it for me (on OSX) was to change the ownership on all the directories that are called out after you see [Insecure completion-dependent directories detected:]. Important: You have to change permissions on the target files if they are symlinked. For me the complete list was: sudo chown -R $(whoami) /usr/local/share/zsh sudo chown -R $(whoami) /usr/local/share/zsh/site-functions sudo chown -R $(whoami) /usr/local/Homebrew/completions/zsh/_brew sudo chown -R $(whoami) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/completions/zsh/_brew_services sudo chown -R $(whoami) /usr/local/Cellar/gh/2.4.0/share/zsh/site-functions/_gh sudo chown -R $(whoami) /usr/local/Cellar/tldr/1.4.2/share/zsh/site-functions/_tldr A: Reinstalling brew solved this problem for me. Uninstall brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)" Install brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" A: Executing two commands solved my issue. sudo chown -R $(whoami) /usr/local/share/zsh /usr/local/share/zsh/site-functions chmod u+w /usr/local/share/zsh/usr/local/share/zsh/site-functions A: For me, the site-functions were symlinked to Homebrew directories. /usr/local/share/zsh/site-functions/_brew -> ../../../Homebrew/completions/zsh/_brew So the only thing that would work is to issue the command against those directories directly. sudo chown -R $USER /opt/homebrew/completions/zsh/_brew Or for cellars sudo chown -R $USER /opt/homebrew/Cellar/ripgrep/13.0.0/share/zsh/site-functions/_rg I'm not sure whether this will pop up again when running brew update. A: In my case following works. sudo chown -R $(whoami) /usr/local/share/zsh sudo chown -R $(whoami) /usr/local/share/zsh/site-functions sudo chown -R $(whoami) /usr/local/Homebrew/completions/zsh/_brew sudo chown -R $(whoami) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/completions/zsh/_brew_services A: To fix this, I used the ZSH_DISABLE_COMPFIX option. The key to using this one is that it has to be placed before/above the “source $ZSH/oh-my-zsh.sh” line: # ZSH_CUSTOM=/path/to/new-custom-folder # Which plugins would you like to load? # Standard plugins can be found in $ZSH/plugins/ # Custom plugins may be added to $ZSH_CUSTOM/plugins/ # Example format: plugins=(rails git textmate ruby lighthouse) # Add wisely, as too many plugins slow down shell startup. plugins=(git) #Fix errors ZSH_DISABLE_COMPFIX="true" source $ZSH/oh-my-zsh.sh # User configuration
Zsh detects insecure completion-dependent directories
I get the following error messages when I open my terminal, Hyper: [oh-my-zsh] Insecure completion-dependent directories detected: drwxrwxr-x 7 dwaynethe2nd admin 224 Apr 25 15:00 /usr/local/share/zsh drwxrwxr-x 4 dwaynethe2nd admin 128 Apr 25 14:53 /usr/local/share/zsh/site-functions
[ "This is an issue with ZSH, your shell, not Hyper, your terminal. I actually had the same issue earlier today. There are some solutions in this issue on Github, and I will quote some of them here but I recommend you follow the link and read the comments there.\nThe first solution is to change the ownership of the problematic directories.\nI will not recommend this without knowing more about your environment, but for most people this will fix the issue:\nchmod 755 /usr/local/share/zsh\nchmod 755 /usr/local/share/zsh/site-functions\n\nThe second solution is to set ZSH_DISABLE_COMPFIX=true (or \"true\" in quotes) in your .zshrc file, to tell ZSH to not check for insecure directories.\nThe third solution, and the solution that fixed the issue for me, is to initialise compinit with the -u flag. This will use all the directories found by compaudit without checking them for security issues. To do this, you will have to change your .zshrc file or wherever you are configuring autocomplete.\n", "On my mac, what helped was running brew doctor\nThe program told me the exact commands to run!\n", "What solved it for me (on OSX) was to change the ownership on all the directories that are called out after you see [Insecure completion-dependent directories detected:].\nImportant: You have to change permissions on the target files if they are symlinked. For me the complete list was:\nsudo chown -R $(whoami) /usr/local/share/zsh \nsudo chown -R $(whoami) /usr/local/share/zsh/site-functions\nsudo chown -R $(whoami) /usr/local/Homebrew/completions/zsh/_brew\nsudo chown -R $(whoami) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/completions/zsh/_brew_services\nsudo chown -R $(whoami) /usr/local/Cellar/gh/2.4.0/share/zsh/site-functions/_gh\nsudo chown -R $(whoami) /usr/local/Cellar/tldr/1.4.2/share/zsh/site-functions/_tldr\n\n", "Reinstalling brew solved this problem for me.\nUninstall brew:\n/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)\"\n\nInstall brew:\n/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\n\n", "Executing two commands solved my issue.\n\nsudo chown -R $(whoami) /usr/local/share/zsh /usr/local/share/zsh/site-functions\nchmod u+w /usr/local/share/zsh/usr/local/share/zsh/site-functions\n\n", "For me, the site-functions were symlinked to Homebrew directories.\n/usr/local/share/zsh/site-functions/_brew -> ../../../Homebrew/completions/zsh/_brew\nSo the only thing that would work is to issue the command against those directories directly.\nsudo chown -R $USER /opt/homebrew/completions/zsh/_brew \n\nOr for cellars\nsudo chown -R $USER /opt/homebrew/Cellar/ripgrep/13.0.0/share/zsh/site-functions/_rg\n\nI'm not sure whether this will pop up again when running brew update.\n", "In my case following works.\nsudo chown -R $(whoami) /usr/local/share/zsh \nsudo chown -R $(whoami) /usr/local/share/zsh/site-functions\nsudo chown -R $(whoami) /usr/local/Homebrew/completions/zsh/_brew\nsudo chown -R $(whoami) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/completions/zsh/_brew_services\n\n", "To fix this, I used the ZSH_DISABLE_COMPFIX option. The key to using this one is that it has to be placed before/above the “source $ZSH/oh-my-zsh.sh” line:\n# ZSH_CUSTOM=/path/to/new-custom-folder\n\n# Which plugins would you like to load?\n# Standard plugins can be found in $ZSH/plugins/\n# Custom plugins may be added to $ZSH_CUSTOM/plugins/\n# Example format: plugins=(rails git textmate ruby lighthouse)\n# Add wisely, as too many plugins slow down shell startup.\nplugins=(git)\n\n#Fix errors\nZSH_DISABLE_COMPFIX=\"true\"\n\nsource $ZSH/oh-my-zsh.sh\n\n# User configuration\n\n" ]
[ 139, 10, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "hyperterminal", "macos", "oh_my_zsh", "zsh" ]
stackoverflow_0061433167_hyperterminal_macos_oh_my_zsh_zsh.txt
Q: Relay Command doesn't execute I've decided to use the mvvm architecture in my project and tried converting my code to it. But it looks like I'm misunderstanding something, as the properties I'd like to set in an relay command are either not changed or the changing has no effect. Either way this behavior is not desired. My code is very simple and looks like this: MainViewModel.cs public partial class MainViewModel : ObservableRecipient { [ObservableProperty] private string testText; [RelayCommand] private void Toggle() { testText = "pressed"; } } Mainview.xaml <Page x:Class="Calendarium.Views.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:viewModels="using:Calendarium.ViewModels" d:DataContext="{d:DesignInstance Type=viewModels:MainViewModel}"> <Grid> <TextBlock Text="{Binding TestText}"> <Button Command="{Binding ToggleCommand}"> </Grid> </Page> A: You seem to have set the design time DataContext only. You also need to set the actual DataContext that is used at runtime: <Page x:Class="Calendarium.Views.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:viewModels="using:Calendarium.ViewModels" d:DataContext="{d:DesignInstance Type=viewModels:MainViewModel}"> <Page.DataContext> <viewModels:MainViewModel /> </Page.DataContext> <Grid> ... </Grid> </Page> Besides, you should set the value of the generated property in your Toggle method: [RelayCommand] private void Toggle() { TestText = "pressed"; }
Relay Command doesn't execute
I've decided to use the mvvm architecture in my project and tried converting my code to it. But it looks like I'm misunderstanding something, as the properties I'd like to set in an relay command are either not changed or the changing has no effect. Either way this behavior is not desired. My code is very simple and looks like this: MainViewModel.cs public partial class MainViewModel : ObservableRecipient { [ObservableProperty] private string testText; [RelayCommand] private void Toggle() { testText = "pressed"; } } Mainview.xaml <Page x:Class="Calendarium.Views.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:viewModels="using:Calendarium.ViewModels" d:DataContext="{d:DesignInstance Type=viewModels:MainViewModel}"> <Grid> <TextBlock Text="{Binding TestText}"> <Button Command="{Binding ToggleCommand}"> </Grid> </Page>
[ "You seem to have set the design time DataContext only.\nYou also need to set the actual DataContext that is used at runtime:\n<Page\n x:Class=\"Calendarium.Views.MainPage\"\n xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"\n xmlns:x=\"http://schemas.microsoft.com/winfx/2006/xaml\"\n xmlns:d=\"http://schemas.microsoft.com/expression/blend/2008\"\n xmlns:mc=\"http://schemas.openxmlformats.org/markup-compatibility/2006\"\n mc:Ignorable=\"d\"\n xmlns:viewModels=\"using:Calendarium.ViewModels\"\n d:DataContext=\"{d:DesignInstance Type=viewModels:MainViewModel}\">\n <Page.DataContext>\n <viewModels:MainViewModel />\n </Page.DataContext>\n <Grid>\n ...\n </Grid>\n</Page>\n\nBesides, you should set the value of the generated property in your Toggle method:\n[RelayCommand]\nprivate void Toggle()\n{\n TestText = \"pressed\";\n}\n\n" ]
[ 1 ]
[]
[]
[ "data_binding", "windows_community_toolkit", "winui_3" ]
stackoverflow_0074656996_data_binding_windows_community_toolkit_winui_3.txt
Q: How to change the php version in Lando 8.1.2 This is my first question. I'm a beginner. I'm working in a wordpress project with sage. When I try to run the template, there is a mistake: The composer version is ok, but Lando version is php7.4 and I need the php8.1 version. I've tried to change the php version in lando.yml and then using the command lando rebuild but it doesn't work. I'm also working with manjaro. Any help will be welcome. Thanks! A: You could start with lando init $ lando init ? From where should we get your app's codebase? remote git repo or archive ? Please enter the URL of the git repo or tar archive containing your application code https://wordpress.org/latest.tar.gz ? What recipe do you want to use? wordpress ? Where is your webroot relative to the init destination? wordpress ? What do you want to call this app? my-wordpress-app Then you get a .lando.yml where you need to specify the php version for the appserver. # .lando.yml name: my-wordpress-app recipe: wordpress config: webroot: wordpress services: appserver: type: php:8.1 #add these lines You also might need to destroy and rebuild after making that php-change: lando destroy -y && lando start Sources: https://docs.lando.dev/php https://docs.lando.dev/wordpress/config.html#choosing-a-php-version
How to change the php version in Lando 8.1.2
This is my first question. I'm a beginner. I'm working in a wordpress project with sage. When I try to run the template, there is a mistake: The composer version is ok, but Lando version is php7.4 and I need the php8.1 version. I've tried to change the php version in lando.yml and then using the command lando rebuild but it doesn't work. I'm also working with manjaro. Any help will be welcome. Thanks!
[ "You could start with lando init\n$ lando init\n? From where should we get your app's codebase? remote git repo or archive\n? Please enter the URL of the git repo or tar archive containing your application code https://wordpress.org/latest.tar.gz\n? What recipe do you want to use? wordpress\n? Where is your webroot relative to the init destination? wordpress\n? What do you want to call this app? my-wordpress-app\n\nThen you get a .lando.yml where you need to specify the php version for the appserver.\n# .lando.yml\nname: my-wordpress-app\nrecipe: wordpress\nconfig:\n webroot: wordpress\nservices:\n appserver:\n type: php:8.1 #add these lines\n\nYou also might need to destroy and rebuild after making that php-change:\nlando destroy -y && lando start \n\nSources:\n\nhttps://docs.lando.dev/php\nhttps://docs.lando.dev/wordpress/config.html#choosing-a-php-version\n\n" ]
[ 0 ]
[]
[]
[ "composer_php", "lando", "php" ]
stackoverflow_0074389892_composer_php_lando_php.txt
Q: Xcode Build Fails on Archive, but the Build does not Fail on Run with ResearchKit What I am trying to achieve is to archive the app and still have a working consent form on the app. Although this piece of code is deprecated and says to use ORKInstructionStep I haven't found anywhere how to use ORKInstructionStep instead of ORKVisualConsentStep. Whenever I try to Archive the app I keep getting this problem. Still, I also can't remove the ORKVisualConsentStep because I haven't been able to find anywhere how to add the consentDocument otherwise. I've looked at so many other tutorials and all of them use the ORKVisualConsentStep and haven't seen anyone that has had this problem show how to still keep the consent form working. I can get the app running on a phone but I can't get it to archive so not entirely sure what is going on. Any insight would be beneficial thank you A: If you don't know how to added all frameworks, libraries and Embedded contents then you can find shown below what I shared and you can remove it. Please check after your local files remove it. A: I managed to fix the problem by using cocoapods to install the ResearchKit framework instead of dragging ResearchKit into my project. It now archives however I had to comment out this line of code instructionStep.imageContentMode = .scaleAspectFit for my survey tasks.
Xcode Build Fails on Archive, but the Build does not Fail on Run with ResearchKit
What I am trying to achieve is to archive the app and still have a working consent form on the app. Although this piece of code is deprecated and says to use ORKInstructionStep I haven't found anywhere how to use ORKInstructionStep instead of ORKVisualConsentStep. Whenever I try to Archive the app I keep getting this problem. Still, I also can't remove the ORKVisualConsentStep because I haven't been able to find anywhere how to add the consentDocument otherwise. I've looked at so many other tutorials and all of them use the ORKVisualConsentStep and haven't seen anyone that has had this problem show how to still keep the consent form working. I can get the app running on a phone but I can't get it to archive so not entirely sure what is going on. Any insight would be beneficial thank you
[ "If you don't know how to added all frameworks, libraries and Embedded contents then you can find shown below what I shared and you can remove it. Please check after your local files remove it.\n\n", "I managed to fix the problem by using cocoapods to install the ResearchKit framework instead of dragging ResearchKit into my project. It now archives however I had to comment out this line of code instructionStep.imageContentMode = .scaleAspectFit for my survey tasks.\n" ]
[ 1, 0 ]
[]
[]
[ "gdprconsentform", "researchkit", "swift" ]
stackoverflow_0074317214_gdprconsentform_researchkit_swift.txt
Q: How to calculate time interval for different data frame fragments? I need to calculate how long the behavioral state tracks last for each ID. Note that for each ID the states are repeated, and then another state category begins. How do I calculate the time interval of each of these state tracks? dput(States_time) structure(list(lon = c(-28.3595, -28.2916, -28.2814, -28.2811, -28.2941, -28.2987, -28.3056, -28.3009, -28.2965, -28.298, -28.2827, -28.2747, -28.275, -28.28, -28.2778, -28.2989, -28.3993, -28.5896, -28.6515, -28.6625, -28.6526, -28.6297, -28.6011, -28.5733, -28.5489, -28.5236, -28.5112, -28.4849, -28.4602, -28.4421, -28.4144, -28.3996, -28.2903, -27.9601, -27.7619, -27.6135, -27.4749, -27.3197, -27.1767, -27.018, -26.8653, -26.7084, -26.5583, -26.4027, -26.2577, -26.1213, -26.0116, -25.9065, -25.8206, -25.776, -25.7385, -25.6924, -25.6358, -25.5908, -25.5518, -25.5243, -25.4925, -25.4853, -25.4661, -25.4356, -25.4173, -25.395, -25.3735, -25.3381, -25.301, -25.2703, -25.239, -25.1921, -25.1499, -25.0827, -25.0187, -24.9652, -24.9143, -24.8738, -24.8267, -24.7986, -24.7854, -24.7649, -24.7566, -24.739, -24.7048, -24.6733, -24.6437, -24.6048, -24.567, -24.5339, -24.4894, -24.4492, -24.4036, -24.3487, -24.2806, -24.2065, -24.1409, -24.0692, -24.0127, -24.0053, -24.2019, -24.0767, -24.0857, -24.1316, -24.2088, -24.2969, -24.3794, -24.4611, -24.5548, -24.635, -24.7181, -24.8224, -24.9065, -24.9912, -25.0784, -25.1429, -25.1876, -25.169, -25.1462, -25.1207, -25.0841, -25.0688, -25.0547, -25.0419, -25.0361, -25.03, -25.0209, -25.0247, -25.0133, -24.9938, -24.9714, -24.9025, -24.7087, -34.1, -34.0755, -34.2068, -34.1481, -34.1995, -34.298, -34.2026, -34.2008, -34.0994, -34.1009, -34.1302, -34.1662, -34.2025, -34.2, -34.3016, -34.5, -34.4988, -34.5454, -34.5284, -34.6043, -34.5097, -34.6, -34.6999, -34.7861, -34.7044, -34.7994, -34.8637, -34.8, -34.7999, -34.8693, -35.0269, -35.0142, -35.0679, -35.0177, -35.2996, -35.2996, -35.2021, -35.1966, -35.0775, -35.2, -35.2, -35.3, -35.7, -35.9001, -36.0981, -35.9, -35.995, -35.9979, -36.0999, -36.3085, -36.6099, -36.5084, -36.292, -36.1978, -36.189, -36.088, -35.8531, -35.8039, -35.7001, -35.6, -35.6058, -35.6993, -35.5269, -35.5111, -35.584, -35.4749, -35.2995, -35.3, -35.3011, -35.3161, -35.4912, -35.4005, -35.3032, -35.3646, -35.4001, -35.3999, -35.3221, -35.4987, -35.3992, -35.4657, -35.5036, -35.5953, -35.452, -35.3996, -35.1007, -35.0992, -35, -34.9153, -34.8997, -34.9, -34.6695, -34.5, -34.4998, -34.4936, -34.4881, -34.5, -34.5088, -34.4909, -34.4215, -34.4, -34.4221, -34.3077, -34.2982, -34.201, -34.2329, -34.1874, -34.2, -34.1586, -34.0904, -33.9012, -33.9001, -33.7038, -33.7, -33.6981, -33.6, -33.554, -33.4017, -33.3853, -33.2722, -33.2, -33.1011, -33.1258, -33.3458, -33.1016, -32.6021, -32.5, -32.3, -32.2, -32.0999, -31.9997, -31.9731, -32, -31.9696, -31.9999, -31.8999, -31.7995, -31.7, -31.6, -31.6052, -31.6, -31.4005, -31.2285, -30.914, -30.7997, -30.7609, -30.8, -30.7, -30.7982, -30.6994, -30.8006, -30.6992, -30.6999, -30.7685, -30.6001, -30.5999, -30.3, -30.2995, -30.3537, -30.2966, -30.3997, -30.3024, -30.1991, -29.8679, -29.4235, -29.2194, -29, -28.8005, -28.8, -28.8, -28.8, -28.7344, -28.7317, -28.7, -28.5802, -28.4976, -28.3992, -28.2977, -28.2457, -28.1993, -28.2, -28.0999, -28.2, -28.1015, -28.1, -28.0994, -28.0735, -28.0135, -28.0711, -28.0002, -28.0995, -28.0001, -28, -27.9999, -27.9095, -27.9803, -27.9992, -27.8006, -27.9, -27.9993, -27.9026, -27.6922, -27.7993, -27.7999, -27.8, -27.8456, -27.8, -27.8, -28.002, -28.1069, -27.9001, -27.8999, -27.8192, -27.7465, -27.7, -27.6793, -27.6142, -27.6106, -27.6587, -27.7007, -27.9773, -27.9611, -27.8003, -27.8009, -27.8006, -27.8481, -27.7925, -27.8742, -28.1019, -28.301, -28.2001, -28.4997, -28.5999, -28.7916, -28.6931, -28.5978, -28.7253, -28.697, -28.7, -28.7, -28.8035, -29.0898, -28.7286, -29.032, -29.1002, -29.2061, -29.3832, -29.536, -29.5018, -29.5996, -29.5307, -29.5311, -29.5617, -29.7, -29.7017), lat = c(-51.06006, -51.28517, -51.40259, -51.48351, -51.52873, -51.56416, -51.59571, -51.63744, -51.67779, -51.72939, -51.78254, -51.84945, -51.94071, -52.02408, -52.10333, -52.1629, -52.16154, -52.17246, -52.21171, -52.28003, -52.35472, -52.44014, -52.52132, -52.60039, -52.66922, -52.73575, -52.78971, -52.85371, -52.9003, -52.93157, -52.96226, -52.99958, -53.09385, -53.24145, -53.29013, -53.28483, -53.26885, -53.24375, -53.21312, -53.19024, -53.16994, -53.13826, -53.09963, -53.06583, -53.01386, -52.96535, -52.93325, -52.92167, -52.96504, -53.0329, -53.10186, -53.16542, -53.2373, -53.30987, -53.38074, -53.43605, -53.48412, -53.5254, -53.55489, -53.60127, -53.62973, -53.67887, -53.73908, -53.80004, -53.88948, -53.96345, -54.02259, -54.07842, -54.12353, -54.19933, -54.26942, -54.33925, -54.40561, -54.47321, -54.54209, -54.60073, -54.66368, -54.69002, -54.68796, -54.68729, -54.66922, -54.66249, -54.66347, -54.67504, -54.67826, -54.66793, -54.66168, -54.64167, -54.62624, -54.61335, -54.6161, -54.62989, -54.65052, -54.68577, -54.72183, -54.78754, -54.91899, -54.72204, -54.69924, -54.72291, -54.75192, -54.78754, -54.83085, -54.88303, -54.9501, -55.01851, -55.08152, -55.13521, -55.18038, -55.20593, -55.2436, -55.24347, -55.17465, -55.1805, -55.21629, -55.27335, -55.33738, -55.39316, -55.4478, -55.49499, -55.52758, -55.57528, -55.61018, -55.64505, -55.68691, -55.71773, -55.74487, -55.72205, -55.60666, -51.13792, -51.17052, -51.3451, -51.4629, -51.47005, -51.42874, -51.4272, -51.68723, -51.81362, -51.75005, -51.76939, -51.76007, -51.75012, -51.73492, -51.73968, -51.98032, -52.05917, -52.02466, -52.04233, -52.03264, -52.02026, -52.03165, -52.0607, -52.24141, -52.09744, -52.07395, -52.08017, -52.15966, -52.18366, -52.2111, -52.24547, -52.27173, -52.36574, -52.40889, -52.6732, -53.00803, -53.16961, -53.37446, -53.70476, -53.80153, -53.77921, -53.71024, -53.75746, -53.75865, -53.73004, -53.64385, -53.62506, -53.67841, -53.60954, -53.62173, -53.65541, -53.80536, -53.78368, -53.81837, -53.84945, -53.84896, -53.85606, -53.883, -53.8963, -53.86775, -53.9305, -53.91549, -53.92121, -53.9488, -53.89267, -53.90644, -54.09036, -53.95329, -53.95036, -53.94051, -53.87723, -53.86579, -53.82513, -53.69677, -53.42409, -53.40176, -53.38414, -53.33449, -53.43198, -53.37085, -53.29052, -53.34959, -53.15344, -53.23876, -53.31791, -53.34568, -53.37738, -53.40082, -53.39937, -53.52188, -53.76661, -54.02559, -54.18722, -54.21053, -54.20398, -54.2249, -54.2416, -54.18884, -54.20042, -54.18864, -54.19856, -54.29201, -54.31143, -54.33484, -54.25884, -54.25269, -54.26423, -54.26734, -54.2685, -54.22733, -54.24499, -54.24012, -54.25105, -54.27297, -54.30706, -54.31001, -54.37102, -54.37937, -54.44155, -54.48134, -54.53592, -54.59169, -54.45268, -54.55661, -54.69559, -54.76132, -54.83257, -54.86716, -54.86818, -54.89168, -54.87052, -54.87898, -54.75036, -54.87729, -54.87221, -54.82877, -54.83236, -54.81084, -54.80782, -54.7802, -54.78806, -54.80395, -54.79016, -54.79585, -54.84057, -54.86429, -54.86168, -54.89269, -54.82421, -54.94506, -55.06045, -54.99855, -55.08724, -54.95141, -54.98314, -54.91695, -54.9283, -54.91034, -54.69406, -54.5147, -54.50343, -54.52586, -54.60164, -54.57242, -54.62384, -54.73504, -54.71507, -54.80268, -54.79076, -54.85857, -55.09139, -55.22931, -55.48602, -55.40168, -55.3541, -55.40575, -55.3628, -55.37155, -55.377, -55.35164, -55.35345, -55.33444, -55.32451, -55.30861, -55.31914, -55.30992, -55.29195, -55.2648, -55.24832, -55.24193, -55.22511, -55.18921, -55.19439, -55.21504, -55.22035, -55.22813, -55.18929, -55.22937, -55.22085, -55.26273, -55.21107, -55.24041, -55.21117, -55.22153, -55.25639, -55.21879, -55.1953, -55.01422, -54.89181, -54.9697, -54.96593, -55.23091, -55.3906, -55.2657, -55.29315, -55.33206, -55.34862, -55.37101, -55.35072, -55.46937, -55.43232, -55.4311, -55.47295, -55.47624, -55.52129, -55.51233, -55.54909, -55.6934, -55.79863, -55.73806, -56.03811, -56.08909, -56.29905, -56.3, -56.35134, -56.25699, -56.20255, -56.24877, -56.28026, -56.27907, -56.2477, -56.31938, -56.56267, -56.88127, -56.99994, -57.01007, -57.06402, -57.15684, -57.14518, -57.1525, -57.19671, -57.23882, -57.21254, -57.20548), ID = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L), levels = c("7617.05", "7618.05", "10946.05", "20162.03", "20687.03", "21791.03", "21792.03", "21800.03", "21809.03", "21810.03", "24640.03", "24641.05", "24642.03", "26712.05", "27258.05", "27259.03", "27259.05", "27259.06", "27261.03", "27261.05", "27261.07", "33000.05", "33000.06", "33001.05", "33001.06", "37229.05", "37229.06", "37230.06", "37231.05", "37231.07", "37234.05", "37234.06", "37236.06", "37282.06", "37286.07", "37288.06", "37288.07", "42521.06", "42521.07", "42525.07", "50682.06", "50682.07", "50686.07", "50687.07", "60004.07", "60007.07", "81122.09", "81123.09", "81124.09", "81125.09", "81126.09", "84480.12", "84484.17", "84484.18", "84485.17", "84485.18", "84497.1", "87624.1", "87631.1", "87632.12", "87635.17", "87640.18", "87759.08", "87760.08", "87761.08", "87762.08", "87763.08", "87764.08", "87765.08", "87766.08", "87767.08", "87768.08", "87768.11", "87769.11", "87770.08", "87771.09", "87773.08", "87773.09", "87773.1", "87773.11", "87774.08", "87774.09", "87774.11", "87775.08", "87775.12", "87776.08", "87776.11", "87776.17", "87777.08", "87777.1", "87777.17", "87778.08", "87778.1", "87780.17", "87781.1", "87783.09", "87783.11", "88719.09", "88720.09", "88724.1", "88726.1", "88727.09", "96380.1", "102211.1", "111868.11", "111868.16", "111868.18", "111869.11", "111869.17", "111870.17", "111870.18", "111871.12", "112694.12", "112696.17", "112696.18", "112702.12", "112706.18", "112712.12", "112714.12", "112717.12", "112719.18", "112728.17", "120937.17", "120938.16", "120938.18", "120942.17", "120942.18", "120943.17", "120947.12", "120947.17", "121189.12", "121191.17", "121191.18", "121192.12", "121193.12", "121195.12", "121196.12", "121203.17", "121206.17", "123226.17", "171994.17", "171994.18", "171995.18", "171997.17", "172000.17", "172001.17", "172002.17", "172003.17", "172004.17", "172008.18", "194591.19", "194593.19", "194601.19", "194603.19"), class = "factor"), sex = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), levels = c("F", "I", "M"), class = "factor"), timestamp_adjusted = structure(c(1133496000, 1133517600, 1133539200, 1133560800, 1133582400, 1133604000, 1133625600, 1133647200, 1133668800, 1133690400, 1133712000, 1133733600, 1133755200, 1133776800, 1133798400, 1133820000, 1133841600, 1133863200, 1133884800, 1133906400, 1133928000, 1133949600, 1133971200, 1133992800, 1134014400, 1134036000, 1134057600, 1134079200, 1134100800, 1134122400, 1134144000, 1134165600, 1134187200, 1134208800, 1134230400, 1134252000, 1134273600, 1134295200, 1134316800, 1134338400, 1134360000, 1134381600, 1134403200, 1134424800, 1134446400, 1134468000, 1134489600, 1134511200, 1134532800, 1134554400, 1134576000, 1134597600, 1134619200, 1134640800, 1134662400, 1134684000, 1134705600, 1134727200, 1134748800, 1134770400, 1134792000, 1134813600, 1134835200, 1134856800, 1134878400, 1134900000, 1134921600, 1134943200, 1134964800, 1134986400, 1135008000, 1135029600, 1135051200, 1135072800, 1135094400, 1135116000, 1135137600, 1135159200, 1135180800, 1135202400, 1135224000, 1135245600, 1135267200, 1135288800, 1135310400, 1135332000, 1135353600, 1135375200, 1135396800, 1135418400, 1135440000, 1135461600, 1135483200, 1135504800, 1135526400, 1135548000, 1135569600, 1135591200, 1135612800, 1135634400, 1135656000, 1135677600, 1135699200, 1135720800, 1135742400, 1135764000, 1135785600, 1135807200, 1135828800, 1135850400, 1135872000, 1135893600, 1135915200, 1135936800, 1135958400, 1135980000, 1136001600, 1136023200, 1136044800, 1136066400, 1136088000, 1136109600, 1136131200, 1136152800, 1136174400, 1136196000, 1136217600, 1136239200, 1136260800, 1512511200, 1512532800, 1512554400, 1512576000, 1512597600, 1512619200, 1512640800, 1512662400, 1512684000, 1512705600, 1512727200, 1512748800, 1512770400, 1512792000, 1512813600, 1512835200, 1512856800, 1512878400, 1512900000, 1512921600, 1512943200, 1512964800, 1512986400, 1513008000, 1513029600, 1513051200, 1513072800, 1513094400, 1513116000, 1513137600, 1513159200, 1513180800, 1513202400, 1513224000, 1513245600, 1513267200, 1513288800, 1513310400, 1513332000, 1513353600, 1513375200, 1513396800, 1513418400, 1513440000, 1513461600, 1513483200, 1513504800, 1513526400, 1513548000, 1513569600, 1513591200, 1513612800, 1513634400, 1513656000, 1513677600, 1513699200, 1513720800, 1513742400, 1513764000, 1513785600, 1513807200, 1513828800, 1513850400, 1513872000, 1513893600, 1513915200, 1513936800, 1513958400, 1513980000, 1514001600, 1514023200, 1514044800, 1514066400, 1514088000, 1514109600, 1514131200, 1514152800, 1514174400, 1514196000, 1514217600, 1514239200, 1514260800, 1514282400, 1514304000, 1514325600, 1514347200, 1514368800, 1514390400, 1514412000, 1514433600, 1514455200, 1514476800, 1514498400, 1514520000, 1514541600, 1514563200, 1514584800, 1514606400, 1514628000, 1514649600, 1514671200, 1514692800, 1514714400, 1514736000, 1514757600, 1514779200, 1514800800, 1514822400, 1514844000, 1514865600, 1514887200, 1514908800, 1514930400, 1514952000, 1514973600, 1514995200, 1515016800, 1515038400, 1515060000, 1515081600, 1515103200, 1515124800, 1515146400, 1515168000, 1515189600, 1515211200, 1515232800, 1515254400, 1515276000, 1515297600, 1515319200, 1515340800, 1515362400, 1515384000, 1515405600, 1515427200, 1515448800, 1515470400, 1515492000, 1515513600, 1515535200, 1515556800, 1515578400, 1515600000, 1515621600, 1515643200, 1515664800, 1515686400, 1515708000, 1515729600, 1515751200, 1515772800, 1515794400, 1515816000, 1515837600, 1515859200, 1515880800, 1515902400, 1515924000, 1515945600, 1515967200, 1515988800, 1516010400, 1516032000, 1516053600, 1516075200, 1516096800, 1516118400, 1516140000, 1516161600, 1516183200, 1516204800, 1516226400, 1516248000, 1516269600, 1516291200, 1516312800, 1516334400, 1516356000, 1516377600, 1516399200, 1516420800, 1516442400, 1516464000, 1516485600, 1516507200, 1516528800, 1516550400, 1516572000, 1516593600, 1516615200, 1516636800, 1516658400, 1516680000, 1516701600, 1516723200, 1516744800, 1516766400, 1516788000, 1516809600, 1516831200, 1516852800, 1516874400, 1516896000, 1516917600, 1516939200, 1516960800, 1516982400, 1517004000, 1517025600, 1517047200, 1517068800, 1517090400, 1517112000, 1517133600, 1517155200, 1517176800, 1517198400, 1517220000, 1517241600, 1517263200, 1517284800, 1517306400, 1517328000, 1517349600, 1517371200, 1517392800, 1517414400, 1517436000, 1517457600, 1517479200, 1517500800, 1517522400, 1517544000, 1517565600, 1517587200, 1517608800, 1517630400, 1517652000, 1517673600, 1517695200, 1517716800, 1517738400, 1517760000, 1517781600, 1517803200, 1517824800, 1517846400, 1517868000, 1517889600, 1517911200, 1517932800, 1517954400, 1517976000), class = c("POSIXct", "POSIXt"), tzone = "UTC"), States = c("IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "ARS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "ARS", "ARS", "IND", "IND", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "ARS", "IND", "IND", "IND", "IND", "IND", "TRANS", "IND", "IND", "ARS", "ARS", "ARS", "IND", "ARS", "ARS", "ARS", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "ARS", "ARS", "ARS", "ARS", "ARS", "ARS", "ARS", "ARS", "IND", "IND", "IND", "IND", "TRANS", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "IND", "IND", "IND", "ARS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "ARS", "ARS", "IND", "IND", "ARS", "ARS", "ARS", "ARS", "IND", "IND", "IND", "IND", "ARS", "ARS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND")), row.names = c("317", "318", "319", "320", "321", "322", "323", "324", "325", "326", "327", "328", "329", "330", "331", "332", "333", "334", "335", "336", "337", "338", "339", "340", "341", "342", "343", "344", "345", "346", "347", "348", "349", "350", "351", "352", "353", "354", "355", "356", "357", "358", "359", "360", "361", "362", "363", "364", "365", "366", "367", "368", "369", "370", "371", "372", "373", "374", "375", "376", "377", "378", "379", "380", "381", "382", "383", "384", "385", "386", "387", "388", "389", "390", "391", "392", "393", "394", "395", "396", "397", "398", "399", "400", "401", "402", "403", "404", "405", "406", "407", "408", "409", "410", "411", "412", "413", "414", "415", "416", "417", "418", "419", "420", "421", "422", "423", "424", "425", "426", "427", "428", "429", "430", "431", "432", "433", "434", "435", "436", "437", "438", "439", "440", "441", "442", "443", "444", "445", "1103", "1104", "1105", "1106", "1107", "1108", "1109", "1110", "1111", "1112", "1113", "1114", "1115", "1116", "1117", "1118", "1119", "1120", "1121", "1122", "1123", "1124", "1125", "1126", "1127", "1128", "1129", "1130", "1131", "1132", "1133", "1134", "1135", "1136", "1137", "1138", "1139", "1140", "1141", "1142", "1143", "1144", "1145", "1146", "1147", "1148", "1149", "1150", "1151", "1152", "1153", "1154", "1155", "1156", "1157", "1158", "1159", "1160", "1161", "1162", "1163", "1164", "1165", "1166", "1167", "1168", "1169", "1170", "1171", "1172", "1173", "1174", "1175", "1176", "1177", "1178", "1179", "1180", "1181", "1182", "1183", "1184", "1185", "1186", "1187", "1188", "1189", "1190", "1191", "1192", "1193", "1194", "1195", "1196", "1197", "1198", "1199", "1200", "1201", "1202", "1203", "1204", "1205", "1206", "1207", "1208", "1209", "1210", "1211", "1212", "1213", "1214", "1215", "1216", "1217", "1218", "1219", "1220", "1221", "1222", "1223", "1224", "1225", "1226", "1227", "1228", "1229", "1230", "1231", "1232", "1233", "1234", "1235", "1236", "1237", "1238", "1239", "1240", "1241", "1242", "1243", "1244", "1245", "1246", "1247", "1248", "1249", "1250", "1251", "1252", "1253", "1254", "1255", "1256", "1257", "1258", "1259", "1260", "1261", "1262", "1263", "1264", "1265", "1266", "1267", "1268", "1269", "1270", "1271", "1272", "1273", "1274", "1275", "1276", "1277", "1278", "1279", "1280", "1281", "1282", "1283", "1284", "1285", "1286", "1287", "1288", "1289", "1290", "1291", "1292", "1293", "1294", "1295", "1296", "1297", "1298", "1299", "1300", "1301", "1302", "1303", "1304", "1305", "1306", "1307", "1308", "1309", "1310", "1311", "1312", "1313", "1314", "1315", "1316", "1317", "1318", "1319", "1320", "1321", "1322", "1323", "1324", "1325", "1326", "1327", "1328", "1329", "1330", "1331", "1332", "1333", "1334", "1335", "1336", "1337", "1338", "1339", "1340", "1341", "1342", "1343", "1344", "1345", "1346", "1347", "1348", "1349", "1350", "1351", "1352", "1353", "1354", "1355", "1356"), class = "data.frame") I tried to calculate it as follows, however it separates each behavioral state as first and last location for each ID. duration_STATES <- AA_jub %>% group_by(id,sex,States) %>% slice(c(1, n())) %>% ungroup() duration_STATES2 <- duration_STATES %>% dplyr::group_by(States) %>% mutate(dftime = difftime(timestamp_adjusted, lag(timestamp_adjusted), units = "days")) I need to calculate the time interval of each track of behavioral state by ID. Which would result in something like this: ID sex States Duration 10946.05 F IND_1 14h 10946.05 F ARS_1 20h 10946.05 F IND_2 5h ... To be able to calculate how long each individual spent in each behavioral state. A: You can use rleid from data.table to "group" by changes in States - this will allow you to have time differences for multiple States for a given individual. See if this gives you what you are looking for. library(tidyverse) library(data.table) States_time %>% group_by(ID, grp = rleid(States), States) %>% summarise(dftime = difftime(last(timestamp_adjusted), first(timestamp_adjusted), units = "days")) Output ID grp States dftime <fct> <int> <chr> <drtn> 1 10946.05 1 IND 7.75 days 2 10946.05 2 TRANS 0.25 days 3 10946.05 3 IND 23.50 days 4 111870.17 3 IND 5.50 days 5 111870.17 4 ARS 0.25 days 6 111870.17 5 IND 1.50 days 7 111870.17 6 TRANS 2.75 days 8 111870.17 7 ARS 0.25 days 9 111870.17 8 IND 0.25 days 10 111870.17 9 TRANS 0.25 days # … with 34 more rows
How to calculate time interval for different data frame fragments?
I need to calculate how long the behavioral state tracks last for each ID. Note that for each ID the states are repeated, and then another state category begins. How do I calculate the time interval of each of these state tracks? dput(States_time) structure(list(lon = c(-28.3595, -28.2916, -28.2814, -28.2811, -28.2941, -28.2987, -28.3056, -28.3009, -28.2965, -28.298, -28.2827, -28.2747, -28.275, -28.28, -28.2778, -28.2989, -28.3993, -28.5896, -28.6515, -28.6625, -28.6526, -28.6297, -28.6011, -28.5733, -28.5489, -28.5236, -28.5112, -28.4849, -28.4602, -28.4421, -28.4144, -28.3996, -28.2903, -27.9601, -27.7619, -27.6135, -27.4749, -27.3197, -27.1767, -27.018, -26.8653, -26.7084, -26.5583, -26.4027, -26.2577, -26.1213, -26.0116, -25.9065, -25.8206, -25.776, -25.7385, -25.6924, -25.6358, -25.5908, -25.5518, -25.5243, -25.4925, -25.4853, -25.4661, -25.4356, -25.4173, -25.395, -25.3735, -25.3381, -25.301, -25.2703, -25.239, -25.1921, -25.1499, -25.0827, -25.0187, -24.9652, -24.9143, -24.8738, -24.8267, -24.7986, -24.7854, -24.7649, -24.7566, -24.739, -24.7048, -24.6733, -24.6437, -24.6048, -24.567, -24.5339, -24.4894, -24.4492, -24.4036, -24.3487, -24.2806, -24.2065, -24.1409, -24.0692, -24.0127, -24.0053, -24.2019, -24.0767, -24.0857, -24.1316, -24.2088, -24.2969, -24.3794, -24.4611, -24.5548, -24.635, -24.7181, -24.8224, -24.9065, -24.9912, -25.0784, -25.1429, -25.1876, -25.169, -25.1462, -25.1207, -25.0841, -25.0688, -25.0547, -25.0419, -25.0361, -25.03, -25.0209, -25.0247, -25.0133, -24.9938, -24.9714, -24.9025, -24.7087, -34.1, -34.0755, -34.2068, -34.1481, -34.1995, -34.298, -34.2026, -34.2008, -34.0994, -34.1009, -34.1302, -34.1662, -34.2025, -34.2, -34.3016, -34.5, -34.4988, -34.5454, -34.5284, -34.6043, -34.5097, -34.6, -34.6999, -34.7861, -34.7044, -34.7994, -34.8637, -34.8, -34.7999, -34.8693, -35.0269, -35.0142, -35.0679, -35.0177, -35.2996, -35.2996, -35.2021, -35.1966, -35.0775, -35.2, -35.2, -35.3, -35.7, -35.9001, -36.0981, -35.9, -35.995, -35.9979, -36.0999, -36.3085, -36.6099, -36.5084, -36.292, -36.1978, -36.189, -36.088, -35.8531, -35.8039, -35.7001, -35.6, -35.6058, -35.6993, -35.5269, -35.5111, -35.584, -35.4749, -35.2995, -35.3, -35.3011, -35.3161, -35.4912, -35.4005, -35.3032, -35.3646, -35.4001, -35.3999, -35.3221, -35.4987, -35.3992, -35.4657, -35.5036, -35.5953, -35.452, -35.3996, -35.1007, -35.0992, -35, -34.9153, -34.8997, -34.9, -34.6695, -34.5, -34.4998, -34.4936, -34.4881, -34.5, -34.5088, -34.4909, -34.4215, -34.4, -34.4221, -34.3077, -34.2982, -34.201, -34.2329, -34.1874, -34.2, -34.1586, -34.0904, -33.9012, -33.9001, -33.7038, -33.7, -33.6981, -33.6, -33.554, -33.4017, -33.3853, -33.2722, -33.2, -33.1011, -33.1258, -33.3458, -33.1016, -32.6021, -32.5, -32.3, -32.2, -32.0999, -31.9997, -31.9731, -32, -31.9696, -31.9999, -31.8999, -31.7995, -31.7, -31.6, -31.6052, -31.6, -31.4005, -31.2285, -30.914, -30.7997, -30.7609, -30.8, -30.7, -30.7982, -30.6994, -30.8006, -30.6992, -30.6999, -30.7685, -30.6001, -30.5999, -30.3, -30.2995, -30.3537, -30.2966, -30.3997, -30.3024, -30.1991, -29.8679, -29.4235, -29.2194, -29, -28.8005, -28.8, -28.8, -28.8, -28.7344, -28.7317, -28.7, -28.5802, -28.4976, -28.3992, -28.2977, -28.2457, -28.1993, -28.2, -28.0999, -28.2, -28.1015, -28.1, -28.0994, -28.0735, -28.0135, -28.0711, -28.0002, -28.0995, -28.0001, -28, -27.9999, -27.9095, -27.9803, -27.9992, -27.8006, -27.9, -27.9993, -27.9026, -27.6922, -27.7993, -27.7999, -27.8, -27.8456, -27.8, -27.8, -28.002, -28.1069, -27.9001, -27.8999, -27.8192, -27.7465, -27.7, -27.6793, -27.6142, -27.6106, -27.6587, -27.7007, -27.9773, -27.9611, -27.8003, -27.8009, -27.8006, -27.8481, -27.7925, -27.8742, -28.1019, -28.301, -28.2001, -28.4997, -28.5999, -28.7916, -28.6931, -28.5978, -28.7253, -28.697, -28.7, -28.7, -28.8035, -29.0898, -28.7286, -29.032, -29.1002, -29.2061, -29.3832, -29.536, -29.5018, -29.5996, -29.5307, -29.5311, -29.5617, -29.7, -29.7017), lat = c(-51.06006, -51.28517, -51.40259, -51.48351, -51.52873, -51.56416, -51.59571, -51.63744, -51.67779, -51.72939, -51.78254, -51.84945, -51.94071, -52.02408, -52.10333, -52.1629, -52.16154, -52.17246, -52.21171, -52.28003, -52.35472, -52.44014, -52.52132, -52.60039, -52.66922, -52.73575, -52.78971, -52.85371, -52.9003, -52.93157, -52.96226, -52.99958, -53.09385, -53.24145, -53.29013, -53.28483, -53.26885, -53.24375, -53.21312, -53.19024, -53.16994, -53.13826, -53.09963, -53.06583, -53.01386, -52.96535, -52.93325, -52.92167, -52.96504, -53.0329, -53.10186, -53.16542, -53.2373, -53.30987, -53.38074, -53.43605, -53.48412, -53.5254, -53.55489, -53.60127, -53.62973, -53.67887, -53.73908, -53.80004, -53.88948, -53.96345, -54.02259, -54.07842, -54.12353, -54.19933, -54.26942, -54.33925, -54.40561, -54.47321, -54.54209, -54.60073, -54.66368, -54.69002, -54.68796, -54.68729, -54.66922, -54.66249, -54.66347, -54.67504, -54.67826, -54.66793, -54.66168, -54.64167, -54.62624, -54.61335, -54.6161, -54.62989, -54.65052, -54.68577, -54.72183, -54.78754, -54.91899, -54.72204, -54.69924, -54.72291, -54.75192, -54.78754, -54.83085, -54.88303, -54.9501, -55.01851, -55.08152, -55.13521, -55.18038, -55.20593, -55.2436, -55.24347, -55.17465, -55.1805, -55.21629, -55.27335, -55.33738, -55.39316, -55.4478, -55.49499, -55.52758, -55.57528, -55.61018, -55.64505, -55.68691, -55.71773, -55.74487, -55.72205, -55.60666, -51.13792, -51.17052, -51.3451, -51.4629, -51.47005, -51.42874, -51.4272, -51.68723, -51.81362, -51.75005, -51.76939, -51.76007, -51.75012, -51.73492, -51.73968, -51.98032, -52.05917, -52.02466, -52.04233, -52.03264, -52.02026, -52.03165, -52.0607, -52.24141, -52.09744, -52.07395, -52.08017, -52.15966, -52.18366, -52.2111, -52.24547, -52.27173, -52.36574, -52.40889, -52.6732, -53.00803, -53.16961, -53.37446, -53.70476, -53.80153, -53.77921, -53.71024, -53.75746, -53.75865, -53.73004, -53.64385, -53.62506, -53.67841, -53.60954, -53.62173, -53.65541, -53.80536, -53.78368, -53.81837, -53.84945, -53.84896, -53.85606, -53.883, -53.8963, -53.86775, -53.9305, -53.91549, -53.92121, -53.9488, -53.89267, -53.90644, -54.09036, -53.95329, -53.95036, -53.94051, -53.87723, -53.86579, -53.82513, -53.69677, -53.42409, -53.40176, -53.38414, -53.33449, -53.43198, -53.37085, -53.29052, -53.34959, -53.15344, -53.23876, -53.31791, -53.34568, -53.37738, -53.40082, -53.39937, -53.52188, -53.76661, -54.02559, -54.18722, -54.21053, -54.20398, -54.2249, -54.2416, -54.18884, -54.20042, -54.18864, -54.19856, -54.29201, -54.31143, -54.33484, -54.25884, -54.25269, -54.26423, -54.26734, -54.2685, -54.22733, -54.24499, -54.24012, -54.25105, -54.27297, -54.30706, -54.31001, -54.37102, -54.37937, -54.44155, -54.48134, -54.53592, -54.59169, -54.45268, -54.55661, -54.69559, -54.76132, -54.83257, -54.86716, -54.86818, -54.89168, -54.87052, -54.87898, -54.75036, -54.87729, -54.87221, -54.82877, -54.83236, -54.81084, -54.80782, -54.7802, -54.78806, -54.80395, -54.79016, -54.79585, -54.84057, -54.86429, -54.86168, -54.89269, -54.82421, -54.94506, -55.06045, -54.99855, -55.08724, -54.95141, -54.98314, -54.91695, -54.9283, -54.91034, -54.69406, -54.5147, -54.50343, -54.52586, -54.60164, -54.57242, -54.62384, -54.73504, -54.71507, -54.80268, -54.79076, -54.85857, -55.09139, -55.22931, -55.48602, -55.40168, -55.3541, -55.40575, -55.3628, -55.37155, -55.377, -55.35164, -55.35345, -55.33444, -55.32451, -55.30861, -55.31914, -55.30992, -55.29195, -55.2648, -55.24832, -55.24193, -55.22511, -55.18921, -55.19439, -55.21504, -55.22035, -55.22813, -55.18929, -55.22937, -55.22085, -55.26273, -55.21107, -55.24041, -55.21117, -55.22153, -55.25639, -55.21879, -55.1953, -55.01422, -54.89181, -54.9697, -54.96593, -55.23091, -55.3906, -55.2657, -55.29315, -55.33206, -55.34862, -55.37101, -55.35072, -55.46937, -55.43232, -55.4311, -55.47295, -55.47624, -55.52129, -55.51233, -55.54909, -55.6934, -55.79863, -55.73806, -56.03811, -56.08909, -56.29905, -56.3, -56.35134, -56.25699, -56.20255, -56.24877, -56.28026, -56.27907, -56.2477, -56.31938, -56.56267, -56.88127, -56.99994, -57.01007, -57.06402, -57.15684, -57.14518, -57.1525, -57.19671, -57.23882, -57.21254, -57.20548), ID = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L, 110L), levels = c("7617.05", "7618.05", "10946.05", "20162.03", "20687.03", "21791.03", "21792.03", "21800.03", "21809.03", "21810.03", "24640.03", "24641.05", "24642.03", "26712.05", "27258.05", "27259.03", "27259.05", "27259.06", "27261.03", "27261.05", "27261.07", "33000.05", "33000.06", "33001.05", "33001.06", "37229.05", "37229.06", "37230.06", "37231.05", "37231.07", "37234.05", "37234.06", "37236.06", "37282.06", "37286.07", "37288.06", "37288.07", "42521.06", "42521.07", "42525.07", "50682.06", "50682.07", "50686.07", "50687.07", "60004.07", "60007.07", "81122.09", "81123.09", "81124.09", "81125.09", "81126.09", "84480.12", "84484.17", "84484.18", "84485.17", "84485.18", "84497.1", "87624.1", "87631.1", "87632.12", "87635.17", "87640.18", "87759.08", "87760.08", "87761.08", "87762.08", "87763.08", "87764.08", "87765.08", "87766.08", "87767.08", "87768.08", "87768.11", "87769.11", "87770.08", "87771.09", "87773.08", "87773.09", "87773.1", "87773.11", "87774.08", "87774.09", "87774.11", "87775.08", "87775.12", "87776.08", "87776.11", "87776.17", "87777.08", "87777.1", "87777.17", "87778.08", "87778.1", "87780.17", "87781.1", "87783.09", "87783.11", "88719.09", "88720.09", "88724.1", "88726.1", "88727.09", "96380.1", "102211.1", "111868.11", "111868.16", "111868.18", "111869.11", "111869.17", "111870.17", "111870.18", "111871.12", "112694.12", "112696.17", "112696.18", "112702.12", "112706.18", "112712.12", "112714.12", "112717.12", "112719.18", "112728.17", "120937.17", "120938.16", "120938.18", "120942.17", "120942.18", "120943.17", "120947.12", "120947.17", "121189.12", "121191.17", "121191.18", "121192.12", "121193.12", "121195.12", "121196.12", "121203.17", "121206.17", "123226.17", "171994.17", "171994.18", "171995.18", "171997.17", "172000.17", "172001.17", "172002.17", "172003.17", "172004.17", "172008.18", "194591.19", "194593.19", "194601.19", "194603.19"), class = "factor"), sex = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), levels = c("F", "I", "M"), class = "factor"), timestamp_adjusted = structure(c(1133496000, 1133517600, 1133539200, 1133560800, 1133582400, 1133604000, 1133625600, 1133647200, 1133668800, 1133690400, 1133712000, 1133733600, 1133755200, 1133776800, 1133798400, 1133820000, 1133841600, 1133863200, 1133884800, 1133906400, 1133928000, 1133949600, 1133971200, 1133992800, 1134014400, 1134036000, 1134057600, 1134079200, 1134100800, 1134122400, 1134144000, 1134165600, 1134187200, 1134208800, 1134230400, 1134252000, 1134273600, 1134295200, 1134316800, 1134338400, 1134360000, 1134381600, 1134403200, 1134424800, 1134446400, 1134468000, 1134489600, 1134511200, 1134532800, 1134554400, 1134576000, 1134597600, 1134619200, 1134640800, 1134662400, 1134684000, 1134705600, 1134727200, 1134748800, 1134770400, 1134792000, 1134813600, 1134835200, 1134856800, 1134878400, 1134900000, 1134921600, 1134943200, 1134964800, 1134986400, 1135008000, 1135029600, 1135051200, 1135072800, 1135094400, 1135116000, 1135137600, 1135159200, 1135180800, 1135202400, 1135224000, 1135245600, 1135267200, 1135288800, 1135310400, 1135332000, 1135353600, 1135375200, 1135396800, 1135418400, 1135440000, 1135461600, 1135483200, 1135504800, 1135526400, 1135548000, 1135569600, 1135591200, 1135612800, 1135634400, 1135656000, 1135677600, 1135699200, 1135720800, 1135742400, 1135764000, 1135785600, 1135807200, 1135828800, 1135850400, 1135872000, 1135893600, 1135915200, 1135936800, 1135958400, 1135980000, 1136001600, 1136023200, 1136044800, 1136066400, 1136088000, 1136109600, 1136131200, 1136152800, 1136174400, 1136196000, 1136217600, 1136239200, 1136260800, 1512511200, 1512532800, 1512554400, 1512576000, 1512597600, 1512619200, 1512640800, 1512662400, 1512684000, 1512705600, 1512727200, 1512748800, 1512770400, 1512792000, 1512813600, 1512835200, 1512856800, 1512878400, 1512900000, 1512921600, 1512943200, 1512964800, 1512986400, 1513008000, 1513029600, 1513051200, 1513072800, 1513094400, 1513116000, 1513137600, 1513159200, 1513180800, 1513202400, 1513224000, 1513245600, 1513267200, 1513288800, 1513310400, 1513332000, 1513353600, 1513375200, 1513396800, 1513418400, 1513440000, 1513461600, 1513483200, 1513504800, 1513526400, 1513548000, 1513569600, 1513591200, 1513612800, 1513634400, 1513656000, 1513677600, 1513699200, 1513720800, 1513742400, 1513764000, 1513785600, 1513807200, 1513828800, 1513850400, 1513872000, 1513893600, 1513915200, 1513936800, 1513958400, 1513980000, 1514001600, 1514023200, 1514044800, 1514066400, 1514088000, 1514109600, 1514131200, 1514152800, 1514174400, 1514196000, 1514217600, 1514239200, 1514260800, 1514282400, 1514304000, 1514325600, 1514347200, 1514368800, 1514390400, 1514412000, 1514433600, 1514455200, 1514476800, 1514498400, 1514520000, 1514541600, 1514563200, 1514584800, 1514606400, 1514628000, 1514649600, 1514671200, 1514692800, 1514714400, 1514736000, 1514757600, 1514779200, 1514800800, 1514822400, 1514844000, 1514865600, 1514887200, 1514908800, 1514930400, 1514952000, 1514973600, 1514995200, 1515016800, 1515038400, 1515060000, 1515081600, 1515103200, 1515124800, 1515146400, 1515168000, 1515189600, 1515211200, 1515232800, 1515254400, 1515276000, 1515297600, 1515319200, 1515340800, 1515362400, 1515384000, 1515405600, 1515427200, 1515448800, 1515470400, 1515492000, 1515513600, 1515535200, 1515556800, 1515578400, 1515600000, 1515621600, 1515643200, 1515664800, 1515686400, 1515708000, 1515729600, 1515751200, 1515772800, 1515794400, 1515816000, 1515837600, 1515859200, 1515880800, 1515902400, 1515924000, 1515945600, 1515967200, 1515988800, 1516010400, 1516032000, 1516053600, 1516075200, 1516096800, 1516118400, 1516140000, 1516161600, 1516183200, 1516204800, 1516226400, 1516248000, 1516269600, 1516291200, 1516312800, 1516334400, 1516356000, 1516377600, 1516399200, 1516420800, 1516442400, 1516464000, 1516485600, 1516507200, 1516528800, 1516550400, 1516572000, 1516593600, 1516615200, 1516636800, 1516658400, 1516680000, 1516701600, 1516723200, 1516744800, 1516766400, 1516788000, 1516809600, 1516831200, 1516852800, 1516874400, 1516896000, 1516917600, 1516939200, 1516960800, 1516982400, 1517004000, 1517025600, 1517047200, 1517068800, 1517090400, 1517112000, 1517133600, 1517155200, 1517176800, 1517198400, 1517220000, 1517241600, 1517263200, 1517284800, 1517306400, 1517328000, 1517349600, 1517371200, 1517392800, 1517414400, 1517436000, 1517457600, 1517479200, 1517500800, 1517522400, 1517544000, 1517565600, 1517587200, 1517608800, 1517630400, 1517652000, 1517673600, 1517695200, 1517716800, 1517738400, 1517760000, 1517781600, 1517803200, 1517824800, 1517846400, 1517868000, 1517889600, 1517911200, 1517932800, 1517954400, 1517976000), class = c("POSIXct", "POSIXt"), tzone = "UTC"), States = c("IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "ARS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "ARS", "ARS", "IND", "IND", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "ARS", "IND", "IND", "IND", "IND", "IND", "TRANS", "IND", "IND", "ARS", "ARS", "ARS", "IND", "ARS", "ARS", "ARS", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "ARS", "ARS", "ARS", "ARS", "ARS", "ARS", "ARS", "ARS", "IND", "IND", "IND", "IND", "TRANS", "IND", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "ARS", "IND", "IND", "IND", "ARS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "TRANS", "TRANS", "ARS", "ARS", "IND", "IND", "ARS", "ARS", "ARS", "ARS", "IND", "IND", "IND", "IND", "ARS", "ARS", "TRANS", "TRANS", "TRANS", "TRANS", "IND", "IND", "IND", "IND", "IND", "IND", "IND", "IND")), row.names = c("317", "318", "319", "320", "321", "322", "323", "324", "325", "326", "327", "328", "329", "330", "331", "332", "333", "334", "335", "336", "337", "338", "339", "340", "341", "342", "343", "344", "345", "346", "347", "348", "349", "350", "351", "352", "353", "354", "355", "356", "357", "358", "359", "360", "361", "362", "363", "364", "365", "366", "367", "368", "369", "370", "371", "372", "373", "374", "375", "376", "377", "378", "379", "380", "381", "382", "383", "384", "385", "386", "387", "388", "389", "390", "391", "392", "393", "394", "395", "396", "397", "398", "399", "400", "401", "402", "403", "404", "405", "406", "407", "408", "409", "410", "411", "412", "413", "414", "415", "416", "417", "418", "419", "420", "421", "422", "423", "424", "425", "426", "427", "428", "429", "430", "431", "432", "433", "434", "435", "436", "437", "438", "439", "440", "441", "442", "443", "444", "445", "1103", "1104", "1105", "1106", "1107", "1108", "1109", "1110", "1111", "1112", "1113", "1114", "1115", "1116", "1117", "1118", "1119", "1120", "1121", "1122", "1123", "1124", "1125", "1126", "1127", "1128", "1129", "1130", "1131", "1132", "1133", "1134", "1135", "1136", "1137", "1138", "1139", "1140", "1141", "1142", "1143", "1144", "1145", "1146", "1147", "1148", "1149", "1150", "1151", "1152", "1153", "1154", "1155", "1156", "1157", "1158", "1159", "1160", "1161", "1162", "1163", "1164", "1165", "1166", "1167", "1168", "1169", "1170", "1171", "1172", "1173", "1174", "1175", "1176", "1177", "1178", "1179", "1180", "1181", "1182", "1183", "1184", "1185", "1186", "1187", "1188", "1189", "1190", "1191", "1192", "1193", "1194", "1195", "1196", "1197", "1198", "1199", "1200", "1201", "1202", "1203", "1204", "1205", "1206", "1207", "1208", "1209", "1210", "1211", "1212", "1213", "1214", "1215", "1216", "1217", "1218", "1219", "1220", "1221", "1222", "1223", "1224", "1225", "1226", "1227", "1228", "1229", "1230", "1231", "1232", "1233", "1234", "1235", "1236", "1237", "1238", "1239", "1240", "1241", "1242", "1243", "1244", "1245", "1246", "1247", "1248", "1249", "1250", "1251", "1252", "1253", "1254", "1255", "1256", "1257", "1258", "1259", "1260", "1261", "1262", "1263", "1264", "1265", "1266", "1267", "1268", "1269", "1270", "1271", "1272", "1273", "1274", "1275", "1276", "1277", "1278", "1279", "1280", "1281", "1282", "1283", "1284", "1285", "1286", "1287", "1288", "1289", "1290", "1291", "1292", "1293", "1294", "1295", "1296", "1297", "1298", "1299", "1300", "1301", "1302", "1303", "1304", "1305", "1306", "1307", "1308", "1309", "1310", "1311", "1312", "1313", "1314", "1315", "1316", "1317", "1318", "1319", "1320", "1321", "1322", "1323", "1324", "1325", "1326", "1327", "1328", "1329", "1330", "1331", "1332", "1333", "1334", "1335", "1336", "1337", "1338", "1339", "1340", "1341", "1342", "1343", "1344", "1345", "1346", "1347", "1348", "1349", "1350", "1351", "1352", "1353", "1354", "1355", "1356"), class = "data.frame") I tried to calculate it as follows, however it separates each behavioral state as first and last location for each ID. duration_STATES <- AA_jub %>% group_by(id,sex,States) %>% slice(c(1, n())) %>% ungroup() duration_STATES2 <- duration_STATES %>% dplyr::group_by(States) %>% mutate(dftime = difftime(timestamp_adjusted, lag(timestamp_adjusted), units = "days")) I need to calculate the time interval of each track of behavioral state by ID. Which would result in something like this: ID sex States Duration 10946.05 F IND_1 14h 10946.05 F ARS_1 20h 10946.05 F IND_2 5h ... To be able to calculate how long each individual spent in each behavioral state.
[ "You can use rleid from data.table to \"group\" by changes in States - this will allow you to have time differences for multiple States for a given individual. See if this gives you what you are looking for.\nlibrary(tidyverse)\nlibrary(data.table)\n\nStates_time %>%\n group_by(ID, grp = rleid(States), States) %>%\n summarise(dftime = difftime(last(timestamp_adjusted), first(timestamp_adjusted), units = \"days\"))\n\nOutput\n ID grp States dftime \n <fct> <int> <chr> <drtn> \n 1 10946.05 1 IND 7.75 days\n 2 10946.05 2 TRANS 0.25 days\n 3 10946.05 3 IND 23.50 days\n 4 111870.17 3 IND 5.50 days\n 5 111870.17 4 ARS 0.25 days\n 6 111870.17 5 IND 1.50 days\n 7 111870.17 6 TRANS 2.75 days\n 8 111870.17 7 ARS 0.25 days\n 9 111870.17 8 IND 0.25 days\n10 111870.17 9 TRANS 0.25 days\n# … with 34 more rows\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "dplyr", "package", "r", "time" ]
stackoverflow_0074644409_dataframe_dplyr_package_r_time.txt
Q: SYSIBM.SQLSTATISTICS and SYSIBM.SQLPRIMARYKEYS using most of CPU in DB2 on Windows I have a fairly busy DB2 on Windows server - 9.7, fix pack 11. About 60% of the CPU time used by all queries in the package cache is being used by the following two statements: CALL SYSIBM.SQLSTATISTICS(?,?,?,?,?,?) CALL SYSIBM.SQLPRIMARYKEYS(?,?,?,?) I'm fairly decent with physical tuning and have spent a lot of time on SQL tuning on this system as well. The applications are all custom, and educating developers is something I also spend time on. I get the impression that these two stored procedures are something that perhaps ODBC calls? Reading their descriptions, they also seem like things that are completely unnecessary to do the work being done. The application doesn't need to know the primary key of a table to be able to query it! Is there anything I can tell my developers to do that will either eliminate/reduce the execution of these or cache the information so that they're not executing against the database millions of times and eating up so much CPU? Or alternately anything I can do at the database level to reduce their impact? A: 6.5 years later, and I have the answer to my own question. This is a side effect of using an ORM. Part of what it does is to discover the database schema. Rails also has a similar workload. In Rails, you can avoid this by using the schema cache. This becomes particularly important at scale. Not sure if there are equivalencies for other ORMs, but I hope so!
SYSIBM.SQLSTATISTICS and SYSIBM.SQLPRIMARYKEYS using most of CPU in DB2 on Windows
I have a fairly busy DB2 on Windows server - 9.7, fix pack 11. About 60% of the CPU time used by all queries in the package cache is being used by the following two statements: CALL SYSIBM.SQLSTATISTICS(?,?,?,?,?,?) CALL SYSIBM.SQLPRIMARYKEYS(?,?,?,?) I'm fairly decent with physical tuning and have spent a lot of time on SQL tuning on this system as well. The applications are all custom, and educating developers is something I also spend time on. I get the impression that these two stored procedures are something that perhaps ODBC calls? Reading their descriptions, they also seem like things that are completely unnecessary to do the work being done. The application doesn't need to know the primary key of a table to be able to query it! Is there anything I can tell my developers to do that will either eliminate/reduce the execution of these or cache the information so that they're not executing against the database millions of times and eating up so much CPU? Or alternately anything I can do at the database level to reduce their impact?
[ "6.5 years later, and I have the answer to my own question. This is a side effect of using an ORM. Part of what it does is to discover the database schema. Rails also has a similar workload. In Rails, you can avoid this by using the schema cache. This becomes particularly important at scale. Not sure if there are equivalencies for other ORMs, but I hope so!\n" ]
[ 0 ]
[]
[]
[ "db2", "db2_luw", "odbc" ]
stackoverflow_0038058577_db2_db2_luw_odbc.txt
Q: Symfony UX chart does not show chart in easyadmin I have a problem with displaying a chart in easyadmin. It does show up in the HTML as shown below. <canvas data-controller="symfony--ux-chartjs--chart" data-symfony--ux-chartjs--chart-view-value="{&quot;type&quot;:&quot;line&quot;,&quot;data&quot;:{&quot;labels&quot;:[&quot;January&quot;,&quot;February&quot;,&quot;March&quot;,&quot;April&quot;,&quot;May&quot;,&quot;June&quot;,&quot;July&quot;],&quot;datasets&quot;:[{&quot;label&quot;:&quot;My First dataset&quot;,&quot;backgroundColor&quot;:&quot;rgb(0, 0, 0)&quot;,&quot;borderColor&quot;:&quot;rgb(0, 0, 0)&quot;,&quot;data&quot;:[0,10,5,2,20,30,45]}]},&quot;options&quot;:{&quot;scales&quot;:{&quot;y&quot;:{&quot;suggestedMin&quot;:0,&quot;suggestedMax&quot;:100}}}}"></canvas> My controller looks like this: dashboardController.php <?php namespace App\Controller\Admin; use App\Entity\Product; use App\Entity\ProductCategory; use App\Entity\ProductImage; use EasyCorp\Bundle\EasyAdminBundle\Config\Assets; use EasyCorp\Bundle\EasyAdminBundle\Config\Dashboard; use EasyCorp\Bundle\EasyAdminBundle\Config\MenuItem; use EasyCorp\Bundle\EasyAdminBundle\Controller\AbstractDashboardController; use Symfony\Component\HttpFoundation\Response; use Symfony\Component\Routing\Annotation\Route; use Symfony\UX\Chartjs\Builder\ChartBuilderInterface; use Symfony\UX\Chartjs\Model\Chart; class DashboardController extends AbstractDashboardController { public function __construct( private ChartBuilderInterface $chartBuilder,) { } public function configureAssets(): Assets { return parent::configureAssets() ->addWebpackEncoreEntry('app'); } #[Route('/admin', name: 'admin')] public function index(): Response { return $this->render('admin/dashboard.html.twig', [ 'chart' => $this->createChart()]); } private function createChart(): Chart { $chart = $this->chartBuilder->createChart(Chart::TYPE_LINE); $chart->setData([ 'labels' => ['January', 'February', 'March', 'April', 'May', 'June', 'July'], 'datasets' => [ [ 'label' => 'My First dataset', 'backgroundColor' => 'rgb(0, 0, 0)', 'borderColor' => 'rgb(0, 0, 0)', 'data' => [0, 10, 5, 2, 20, 30, 45], ], ], ]); $chart->setOptions([ 'scales' => [ 'y' => [ 'suggestedMin' => 0, 'suggestedMax' => 100, ], ], ]); return $chart; } public function configureDashboard(): Dashboard { return Dashboard::new() ->setTitle('Dashboard'); } public function configureMenuItems(): iterable { yield MenuItem::linkToDashboard('Dashboard', 'fa fa-home'); yield MenuItem::section('Products'); yield MenuItem::linkToCrud('Product', 'fas fa-box', Product::class); yield MenuItem::linkToCrud('Product images', 'fas fa-images', ProductImage::class); yield MenuItem::linkToCrud('Product category', 'fas fa-boxes-stacked', ProductCategory::class); } } The twig file does look this dashboard.html.twig {% extends '@!EasyAdmin/page/content.html.twig' %} {% block page_title %} Dashboard {% endblock %} {% block main %} <div class="row"> <div class="col-12"> {{ render_chart(chart) }} </div> </div> {% endblock %} My webpack.config.js is this const Encore = require('@symfony/webpack-encore'); // Manually configure the runtime environment if not already configured yet by the "encore" command. // It's useful when you use tools that rely on webpack.config.js file. if (!Encore.isRuntimeEnvironmentConfigured()) { Encore.configureRuntimeEnvironment(process.env.NODE_ENV || 'dev'); } Encore // directory where compiled assets will be stored .setOutputPath('public/build/') // public path used by the web server to access the output path .setPublicPath('/build') // only needed for CDN's or subdirectory deploy //.setManifestKeyPrefix('build/') /* * ENTRY CONFIG * * Each entry will result in one JavaScript file (e.g. app.js) * and one CSS file (e.g. app.css) if your JavaScript imports CSS. */ .addEntry('app', './assets/app.js') // enables the Symfony UX Stimulus bridge (used in assets/bootstrap.js) .enableStimulusBridge('./assets/controllers.json') // When enabled, Webpack "splits" your files into smaller pieces for greater optimization. .splitEntryChunks() // will require an extra script tag for runtime.js // but, you probably want this, unless you're building a single-page app .enableSingleRuntimeChunk() /* * FEATURE CONFIG * * Enable & configure other features below. For a full * list of features, see: * https://symfony.com/doc/current/frontend.html#adding-more-features */ .cleanupOutputBeforeBuild() .enableBuildNotifications() .enableSourceMaps(!Encore.isProduction()) // enables hashed filenames (e.g. app.abc123.css) .enableVersioning(Encore.isProduction()) // configure Babel // .configureBabel((config) => { // config.plugins.push('@babel/a-babel-plugin'); // }) // enables and configure @babel/preset-env polyfills .configureBabelPresetEnv((config) => { config.useBuiltIns = 'usage'; config.corejs = '3.23'; }) // enables Sass/SCSS support //.enableSassLoader() // uncomment if you use TypeScript //.enableTypeScriptLoader() // uncomment if you use React //.enableReactPreset() // uncomment to get integrity="..." attributes on your script & link tags // requires WebpackEncoreBundle 1.4 or higher //.enableIntegrityHashes(Encore.isProduction()) // uncomment if you're having problems with a jQuery plugin //.autoProvidejQuery() ; module.exports = Encore.getWebpackConfig(); I got an error in the console since I added the following code: public function configureAssets(): Assets { return parent::configureAssets() ->addWebpackEncoreEntry('app'); } I added that part of code because I thought this would fix the problem of not showing the graph. Before I added the code there where no errors in the console but after I added the code these errors came up. Can someone tell me what I am doing wrong. I use Symfony 6.1 and easyadmin 4 A: The problem was that it could not access the files so I edited the entrypoints.json to the correct paths and this fixed it.
Symfony UX chart does not show chart in easyadmin
I have a problem with displaying a chart in easyadmin. It does show up in the HTML as shown below. <canvas data-controller="symfony--ux-chartjs--chart" data-symfony--ux-chartjs--chart-view-value="{&quot;type&quot;:&quot;line&quot;,&quot;data&quot;:{&quot;labels&quot;:[&quot;January&quot;,&quot;February&quot;,&quot;March&quot;,&quot;April&quot;,&quot;May&quot;,&quot;June&quot;,&quot;July&quot;],&quot;datasets&quot;:[{&quot;label&quot;:&quot;My First dataset&quot;,&quot;backgroundColor&quot;:&quot;rgb(0, 0, 0)&quot;,&quot;borderColor&quot;:&quot;rgb(0, 0, 0)&quot;,&quot;data&quot;:[0,10,5,2,20,30,45]}]},&quot;options&quot;:{&quot;scales&quot;:{&quot;y&quot;:{&quot;suggestedMin&quot;:0,&quot;suggestedMax&quot;:100}}}}"></canvas> My controller looks like this: dashboardController.php <?php namespace App\Controller\Admin; use App\Entity\Product; use App\Entity\ProductCategory; use App\Entity\ProductImage; use EasyCorp\Bundle\EasyAdminBundle\Config\Assets; use EasyCorp\Bundle\EasyAdminBundle\Config\Dashboard; use EasyCorp\Bundle\EasyAdminBundle\Config\MenuItem; use EasyCorp\Bundle\EasyAdminBundle\Controller\AbstractDashboardController; use Symfony\Component\HttpFoundation\Response; use Symfony\Component\Routing\Annotation\Route; use Symfony\UX\Chartjs\Builder\ChartBuilderInterface; use Symfony\UX\Chartjs\Model\Chart; class DashboardController extends AbstractDashboardController { public function __construct( private ChartBuilderInterface $chartBuilder,) { } public function configureAssets(): Assets { return parent::configureAssets() ->addWebpackEncoreEntry('app'); } #[Route('/admin', name: 'admin')] public function index(): Response { return $this->render('admin/dashboard.html.twig', [ 'chart' => $this->createChart()]); } private function createChart(): Chart { $chart = $this->chartBuilder->createChart(Chart::TYPE_LINE); $chart->setData([ 'labels' => ['January', 'February', 'March', 'April', 'May', 'June', 'July'], 'datasets' => [ [ 'label' => 'My First dataset', 'backgroundColor' => 'rgb(0, 0, 0)', 'borderColor' => 'rgb(0, 0, 0)', 'data' => [0, 10, 5, 2, 20, 30, 45], ], ], ]); $chart->setOptions([ 'scales' => [ 'y' => [ 'suggestedMin' => 0, 'suggestedMax' => 100, ], ], ]); return $chart; } public function configureDashboard(): Dashboard { return Dashboard::new() ->setTitle('Dashboard'); } public function configureMenuItems(): iterable { yield MenuItem::linkToDashboard('Dashboard', 'fa fa-home'); yield MenuItem::section('Products'); yield MenuItem::linkToCrud('Product', 'fas fa-box', Product::class); yield MenuItem::linkToCrud('Product images', 'fas fa-images', ProductImage::class); yield MenuItem::linkToCrud('Product category', 'fas fa-boxes-stacked', ProductCategory::class); } } The twig file does look this dashboard.html.twig {% extends '@!EasyAdmin/page/content.html.twig' %} {% block page_title %} Dashboard {% endblock %} {% block main %} <div class="row"> <div class="col-12"> {{ render_chart(chart) }} </div> </div> {% endblock %} My webpack.config.js is this const Encore = require('@symfony/webpack-encore'); // Manually configure the runtime environment if not already configured yet by the "encore" command. // It's useful when you use tools that rely on webpack.config.js file. if (!Encore.isRuntimeEnvironmentConfigured()) { Encore.configureRuntimeEnvironment(process.env.NODE_ENV || 'dev'); } Encore // directory where compiled assets will be stored .setOutputPath('public/build/') // public path used by the web server to access the output path .setPublicPath('/build') // only needed for CDN's or subdirectory deploy //.setManifestKeyPrefix('build/') /* * ENTRY CONFIG * * Each entry will result in one JavaScript file (e.g. app.js) * and one CSS file (e.g. app.css) if your JavaScript imports CSS. */ .addEntry('app', './assets/app.js') // enables the Symfony UX Stimulus bridge (used in assets/bootstrap.js) .enableStimulusBridge('./assets/controllers.json') // When enabled, Webpack "splits" your files into smaller pieces for greater optimization. .splitEntryChunks() // will require an extra script tag for runtime.js // but, you probably want this, unless you're building a single-page app .enableSingleRuntimeChunk() /* * FEATURE CONFIG * * Enable & configure other features below. For a full * list of features, see: * https://symfony.com/doc/current/frontend.html#adding-more-features */ .cleanupOutputBeforeBuild() .enableBuildNotifications() .enableSourceMaps(!Encore.isProduction()) // enables hashed filenames (e.g. app.abc123.css) .enableVersioning(Encore.isProduction()) // configure Babel // .configureBabel((config) => { // config.plugins.push('@babel/a-babel-plugin'); // }) // enables and configure @babel/preset-env polyfills .configureBabelPresetEnv((config) => { config.useBuiltIns = 'usage'; config.corejs = '3.23'; }) // enables Sass/SCSS support //.enableSassLoader() // uncomment if you use TypeScript //.enableTypeScriptLoader() // uncomment if you use React //.enableReactPreset() // uncomment to get integrity="..." attributes on your script & link tags // requires WebpackEncoreBundle 1.4 or higher //.enableIntegrityHashes(Encore.isProduction()) // uncomment if you're having problems with a jQuery plugin //.autoProvidejQuery() ; module.exports = Encore.getWebpackConfig(); I got an error in the console since I added the following code: public function configureAssets(): Assets { return parent::configureAssets() ->addWebpackEncoreEntry('app'); } I added that part of code because I thought this would fix the problem of not showing the graph. Before I added the code there where no errors in the console but after I added the code these errors came up. Can someone tell me what I am doing wrong. I use Symfony 6.1 and easyadmin 4
[ "The problem was that it could not access the files so I edited the entrypoints.json to the correct paths and this fixed it.\n" ]
[ 0 ]
[]
[]
[ "chart.js", "easyadmin", "php", "symfony", "symfonyux" ]
stackoverflow_0074654023_chart.js_easyadmin_php_symfony_symfonyux.txt
Q: unable to resolve dependency tree - ERESOLVE I am trying to run an old project on my new system but while doing npm install, this is what I am getting.. I've tried using same Node and NPM versions as per my old system, but nothing worked for me.. here is my package.json "dependencies": { "@angular-material-components/datetime-picker": "6.0.3", "@angular-material-components/moment-adapter": "6.0.0", "@angular/animations": "12.2.16", "@angular/cdk": "12.2.13", "@angular/common": "12.2.16", "@angular/compiler": "12.2.16", "@angular/core": "12.2.16", "@angular/forms": "12.2.16", "@angular/material": "12.2.13", "@angular/material-moment-adapter": "12.2.13", "@angular/platform-browser": "12.2.16", "@angular/platform-browser-dynamic": "12.2.16", "@angular/router": "12.2.16", "@fullcalendar/angular": "5.11.0", "@fullcalendar/core": "5.11.0", "@fullcalendar/daygrid": "5.11.0", "@fullcalendar/interaction": "5.11.0", "@fullcalendar/timegrid": "5.11.0", "@microsoft/signalr": "^6.0.6", "@ng-bootstrap/ng-bootstrap": "^10.0.0", "@ng-idle/core": "11.1.0", "@ng-idle/keepalive": "11.0.3", "@ng-select/ng-select": "7.4.0", "@ngx-translate/core": "12.1.2", "@ngx-translate/http-loader": "5.0.0", "angular-email-editor": "^0.9.0", "bootstrap": "^4.6.1", "browserslist": "^4.20.2", "chart.js": "^3.5.1", "command": "0.0.5", "core-js": "3.8.3", "cronstrue": "^2.2.0", "file-saver-es": "^2.0.5", "jquery": "^3.6.0", "moment": "^2.29.3", "ng-pick-datetime": "^7.0.0", "ng2-ckeditor": "1.3.6", "ngx-doc-viewer": "^2.1.2", "ngx-perfect-scrollbar": "^10.1.1", "ngx-toastr": "^13.2.1", "ngx-ui-loader": "^11.0.0", "primeicons": "^5.0.0", "primeng": "^12.2.2", "quill": "^1.3.7", "rxjs": "^6.6.7", "validator": "^13.7.0", "zone.js": "~0.11.4" }, "devDependencies": { "@angular-devkit/build-angular": "~12.2.17", "@angular/cli": "^12.2.17", "@angular/compiler-cli": "12.2.16", "@angular/language-service": "12.2.16", "@types/ckeditor": "^4.9.10", "@types/file-saver-es": "2.0.1", "@types/jquery": "^3.5.14", "@types/node": "~17.0.23", "@types/validator": "^13.7.3", "codelyzer": "^5.2.2", "ini": "^1.3.7", "jasmine-core": "~3.5.0", "jasmine-spec-reporter": "~5.0.0", "karma": "~6.3.17", "karma-chrome-launcher": "~3.1.0", "karma-coverage-istanbul-reporter": "~3.0.2", "karma-jasmine": "~4.0.0", "karma-jasmine-html-reporter": "^1.5.0", "tslint": "~6.1.3", "typescript": "4.3.5" }, "optionalDependencies": { "protractor": "^7.0.0", "ts-node": "~8.4.1", "tslint": "~6.1.3" } I've tried resolving dependencies using following command npm install --legacy-peer-deps And also this.. npm install --save --legacy-peer-deps but ended up getting more dependencies errors, Also tried to clear cache and did a fresh install npm cache clean --force npm install but nothing is working. stuck on this error since yesterday, and it is getting on my nerves now.. Any kind of help will be much much appreciated. PS: I know these kind of questions have already been asked here, but nothing worked for me.. tried each and every solution that worked for someone, but bad luck for me as of now.. A: I would like to help you, but this error most likely has to do with your node version and npm version not being compatible. I think you should check again and maybe try a few different node versions. You could try that with NVM: https://github.com/nvm-sh/nvm
unable to resolve dependency tree - ERESOLVE
I am trying to run an old project on my new system but while doing npm install, this is what I am getting.. I've tried using same Node and NPM versions as per my old system, but nothing worked for me.. here is my package.json "dependencies": { "@angular-material-components/datetime-picker": "6.0.3", "@angular-material-components/moment-adapter": "6.0.0", "@angular/animations": "12.2.16", "@angular/cdk": "12.2.13", "@angular/common": "12.2.16", "@angular/compiler": "12.2.16", "@angular/core": "12.2.16", "@angular/forms": "12.2.16", "@angular/material": "12.2.13", "@angular/material-moment-adapter": "12.2.13", "@angular/platform-browser": "12.2.16", "@angular/platform-browser-dynamic": "12.2.16", "@angular/router": "12.2.16", "@fullcalendar/angular": "5.11.0", "@fullcalendar/core": "5.11.0", "@fullcalendar/daygrid": "5.11.0", "@fullcalendar/interaction": "5.11.0", "@fullcalendar/timegrid": "5.11.0", "@microsoft/signalr": "^6.0.6", "@ng-bootstrap/ng-bootstrap": "^10.0.0", "@ng-idle/core": "11.1.0", "@ng-idle/keepalive": "11.0.3", "@ng-select/ng-select": "7.4.0", "@ngx-translate/core": "12.1.2", "@ngx-translate/http-loader": "5.0.0", "angular-email-editor": "^0.9.0", "bootstrap": "^4.6.1", "browserslist": "^4.20.2", "chart.js": "^3.5.1", "command": "0.0.5", "core-js": "3.8.3", "cronstrue": "^2.2.0", "file-saver-es": "^2.0.5", "jquery": "^3.6.0", "moment": "^2.29.3", "ng-pick-datetime": "^7.0.0", "ng2-ckeditor": "1.3.6", "ngx-doc-viewer": "^2.1.2", "ngx-perfect-scrollbar": "^10.1.1", "ngx-toastr": "^13.2.1", "ngx-ui-loader": "^11.0.0", "primeicons": "^5.0.0", "primeng": "^12.2.2", "quill": "^1.3.7", "rxjs": "^6.6.7", "validator": "^13.7.0", "zone.js": "~0.11.4" }, "devDependencies": { "@angular-devkit/build-angular": "~12.2.17", "@angular/cli": "^12.2.17", "@angular/compiler-cli": "12.2.16", "@angular/language-service": "12.2.16", "@types/ckeditor": "^4.9.10", "@types/file-saver-es": "2.0.1", "@types/jquery": "^3.5.14", "@types/node": "~17.0.23", "@types/validator": "^13.7.3", "codelyzer": "^5.2.2", "ini": "^1.3.7", "jasmine-core": "~3.5.0", "jasmine-spec-reporter": "~5.0.0", "karma": "~6.3.17", "karma-chrome-launcher": "~3.1.0", "karma-coverage-istanbul-reporter": "~3.0.2", "karma-jasmine": "~4.0.0", "karma-jasmine-html-reporter": "^1.5.0", "tslint": "~6.1.3", "typescript": "4.3.5" }, "optionalDependencies": { "protractor": "^7.0.0", "ts-node": "~8.4.1", "tslint": "~6.1.3" } I've tried resolving dependencies using following command npm install --legacy-peer-deps And also this.. npm install --save --legacy-peer-deps but ended up getting more dependencies errors, Also tried to clear cache and did a fresh install npm cache clean --force npm install but nothing is working. stuck on this error since yesterday, and it is getting on my nerves now.. Any kind of help will be much much appreciated. PS: I know these kind of questions have already been asked here, but nothing worked for me.. tried each and every solution that worked for someone, but bad luck for me as of now..
[ "I would like to help you, but this error most likely has to do with your node version and npm version not being compatible. I think you should check again and maybe try a few different node versions. You could try that with NVM: https://github.com/nvm-sh/nvm\n" ]
[ 0 ]
[]
[]
[ "angular", "dependencies", "npm", "typescript" ]
stackoverflow_0074642316_angular_dependencies_npm_typescript.txt
Q: Proper way to use react-hook-form Controller with Material-UI Autocomplete I am trying to use a custom Material-UI Autocomplete component and connect it to react-hook-form. TLDR: Need to use MUI Autocomplete with react-hook-form Controller without defaultValue My custom Autocomplete component takes an object with the structure {_id:'', name: ''} it displays the name and returns the _id when an option is selected. The Autocomplete works just fine. <Autocomplete options={options} getOptionLabel={option => option.name} getOptionSelected={(option, value) => option._id === value._id} onChange={(event, newValue, reason) => { handler(name, reason === 'clear' ? null : newValue._id); }} renderInput={params => <TextField {...params} {...inputProps} />} /> In order to make it work with react-hook-form I've set the setValues to be the handler for onChange in the Autocomplete and manually register the component in an useEffect as follows useEffect(() => { register({ name: "country1" }); },[]); This works fine but I would like to not have the useEffect hook and just make use of the register somehow directly. Next I tried to use the Controller component from react-hook-form to proper register the field in the form and not to use the useEffect hook <Controller name="country2" as={ <Autocomplete options={options} getOptionLabel={option => option.name} getOptionSelected={(option, value) => option._id === value._id} onChange={(event, newValue, reason) => reason === "clear" ? null : newValue._id } renderInput={params => ( <TextField {...params} label="Country" /> )} /> } control={control} /> I've changed the onChange in the Autocomplete component to return the value directly but it doesn't seem to work. Using inputRef={register} on the <TextField/> would not cut it for me because I want to save the _id and not the name HERE is a working sandbox with the two cases. The first with useEffect and setValue in the Autocomplete that works. The second my attempt in using Controller component Any help is appreciated. LE After the comment from Bill with the working sandbox of MUI Autocomplete, I Managed to get a functional result <Controller name="country" as={ <Autocomplete options={options} getOptionLabel={option => option.name} getOptionSelected={(option, value) => option._id === value._id} renderInput={params => <TextField {...params} label="Country" />} /> } onChange={([, { _id }]) => _id} control={control} /> The only problem is that I get an MUI Error in the console Material-UI: A component is changing the uncontrolled value state of Autocomplete to be controlled. I've tried to set an defaultValue for it but it still behaves like that. Also I would not want to set a default value from the options array due to the fact that these fields in the form are not required. The updated sandbox HERE Any help is still very much appreciated A: The accepted answer (probably) works for the bugged version of Autocomplete. I think the bug was fixed some time after that, so that the solution can be slightly simplified. This is very useful reference/codesandbox when working with react-hook-form and material-ui: https://codesandbox.io/s/react-hook-form-controller-601-j2df5? From the above link, I modified the Autocomplete example: import TextField from '@material-ui/core/TextField'; import Autocomplete from '@material-ui/lab/Autocomplete'; const ControlledAutocomplete = ({ options = [], renderInput, getOptionLabel, onChange: ignored, control, defaultValue, name, renderOption }) => { return ( <Controller render={({ onChange, ...props }) => ( <Autocomplete options={options} getOptionLabel={getOptionLabel} renderOption={renderOption} renderInput={renderInput} onChange={(e, data) => onChange(data)} {...props} /> )} onChange={([, data]) => data} defaultValue={defaultValue} name={name} control={control} /> ); } With the usage: <ControlledAutocomplete control={control} name="inputName" options={[{ name: 'test' }]} getOptionLabel={(option) => `Option: ${option.name}`} renderInput={(params) => <TextField {...params} label="My label" margin="normal" />} defaultValue={null} /> control is from the return value of useForm(} Note that I'm passing null as defaultValue as in my case this input is not required. If you'll leave defaultValue you might get some errors from material-ui library. UPDATE: Per Steve question in the comments, this is how I'm rendering the input, so that it checks for errors: renderInput={(params) => ( <TextField {...params} label="Field Label" margin="normal" error={errors[fieldName]} /> )} Where errors is an object from react-hook-form's formMethods: const { control, watch, errors, handleSubmit } = formMethods A: So, I fixed this. But it revealed what I believe to be an error in Autocomplete. First... specifically to your issue, you can eliminate the MUI Error by adding a defaultValue to the <Controller>. But that was only the beginning of another round or problems. The problem is that functions for getOptionLabel, getOptionSelected, and onChange are sometimes passed the value (i.e. the _id in this case) and sometimes passed the option structure - as you would expect. Here's the code I finally came up with: import React from "react"; import { useForm, Controller } from "react-hook-form"; import { TextField } from "@material-ui/core"; import { Autocomplete } from "@material-ui/lab"; import { Button } from "@material-ui/core"; export default function FormTwo({ options }) { const { register, handleSubmit, control } = useForm(); const getOpObj = option => { if (!option._id) option = options.find(op => op._id === option); return option; }; return ( <form onSubmit={handleSubmit(data => console.log(data))}> <Controller name="country" as={ <Autocomplete options={options} getOptionLabel={option => getOpObj(option).name} getOptionSelected={(option, value) => { return option._id === getOpObj(value)._id; }} renderInput={params => <TextField {...params} label="Country" />} /> } onChange={([, obj]) => getOpObj(obj)._id} control={control} defaultValue={options[0]} /> <Button type="submit">Submit</Button> </form> ); } A: import { Button } from "@material-ui/core"; import Autocomplete from "@material-ui/core/Autocomplete"; import { red } from "@material-ui/core/colors"; import Container from "@material-ui/core/Container"; import CssBaseline from "@material-ui/core/CssBaseline"; import { makeStyles } from "@material-ui/core/styles"; import TextField from "@material-ui/core/TextField"; import AdapterDateFns from "@material-ui/lab/AdapterDateFns"; import LocalizationProvider from "@material-ui/lab/LocalizationProvider"; import React, { useEffect, useState } from "react"; import { Controller, useForm } from "react-hook-form"; export default function App() { const [itemList, setItemList] = useState([]); // const classes = useStyles(); const { control, handleSubmit, setValue, formState: { errors } } = useForm({ mode: "onChange", defaultValues: { item: null } }); const onSubmit = (formInputs) => { console.log("formInputs", formInputs); }; useEffect(() => { setItemList([ { id: 1, name: "item1" }, { id: 2, name: "item2" } ]); setValue("item", { id: 3, name: "item3" }); }, [setValue]); return ( <LocalizationProvider dateAdapter={AdapterDateFns}> <Container component="main" maxWidth="xs"> <CssBaseline /> <form onSubmit={handleSubmit(onSubmit)} noValidate> <Controller control={control} name="item" rules={{ required: true }} render={({ field: { onChange, value } }) => ( <Autocomplete onChange={(event, item) => { onChange(item); }} value={value} options={itemList} getOptionLabel={(item) => (item.name ? item.name : "")} getOptionSelected={(option, value) => value === undefined || value === "" || option.id === value.id } renderInput={(params) => ( <TextField {...params} label="items" margin="normal" variant="outlined" error={!!errors.item} helperText={errors.item && "item required"} required /> )} /> )} /> <button onClick={() => { setValue("item", { id: 1, name: "item1" }); }} > setValue </button> <Button type="submit" fullWidth size="large" variant="contained" color="primary" // className={classes.submit} > submit </Button> </form> </Container> </LocalizationProvider> ); } A: I do not know why the above answers did not work for me, here is the simplest code that worked for me, I used render function of Controller with onChange to change the value according to the selected one. <Controller control={control} name="type" rules={{ required: 'Veuillez choisir une réponse', }} render={({ field: { onChange, value } }) => ( <Autocomplete freeSolo options={['field', 'select', 'multiple', 'date']} onChange={(event, values) => onChange(values)} value={value} renderInput={(params) => ( <TextField {...params} label="type" variant="outlined" onChange={onChange} /> )} /> )} A: Thanks to all the other answers, as of April 15 2022, I was able to figure out how to get this working and render the label in the TextField component: const ControlledAutocomplete = ({ options, name, control, defaultValue, error, rules, helperText, }) => ( <Controller name={name} control={control} defaultValue={defaultValue} rules={rules} render={({ field }) => ( <Autocomplete disablePortal options={options} getOptionLabel={(option) => option?.label ?? options.find(({ code }) => code === option)?.label ?? '' } {...field} renderInput={(params) => ( <TextField {...params} error={Boolean(error)} helperText={helperText} /> )} onChange={(_event, data) => field.onChange(data?.code ?? '')} /> )} /> ); ControlledAutocomplete.propTypes = { options: PropTypes.arrayOf({ label: PropTypes.string, code: PropTypes.string, }), name: PropTypes.string, control: PropTypes.func, defaultValue: PropTypes.string, error: PropTypes.object, rules: PropTypes.object, helperText: PropTypes.string, }; In my case, options is an array of {code: 'US', label: 'United States'} objects. The biggest difference is the getOptionLabel, which I guess needs to account for if both when you have the list open (and option is an object) and when the option is rendered in the TextField (when option is a string) as well as when nothing is selected. A: I have made it work pretty well including multiple tags selector as follow bellow. It will work fine with mui5 and react-hook-form 7 import { useForm, Controller } from 'react-hook-form'; import Autocomplete from '@mui/material/Autocomplete'; //setup your form and control <Controller control={control} name="yourFiledSubmitName" rules={{ required: 'required field', }} render={({ field: { onChange } }) => ( <Autocomplete multiple options={yourDataArray} getOptionLabel={(option) => option.label} onChange={(event, item) => { onChange(item); }} renderInput={(params) => ( <TextField {...params} label="Your label" placeholder="Your placeholder" /> )} )} />
Proper way to use react-hook-form Controller with Material-UI Autocomplete
I am trying to use a custom Material-UI Autocomplete component and connect it to react-hook-form. TLDR: Need to use MUI Autocomplete with react-hook-form Controller without defaultValue My custom Autocomplete component takes an object with the structure {_id:'', name: ''} it displays the name and returns the _id when an option is selected. The Autocomplete works just fine. <Autocomplete options={options} getOptionLabel={option => option.name} getOptionSelected={(option, value) => option._id === value._id} onChange={(event, newValue, reason) => { handler(name, reason === 'clear' ? null : newValue._id); }} renderInput={params => <TextField {...params} {...inputProps} />} /> In order to make it work with react-hook-form I've set the setValues to be the handler for onChange in the Autocomplete and manually register the component in an useEffect as follows useEffect(() => { register({ name: "country1" }); },[]); This works fine but I would like to not have the useEffect hook and just make use of the register somehow directly. Next I tried to use the Controller component from react-hook-form to proper register the field in the form and not to use the useEffect hook <Controller name="country2" as={ <Autocomplete options={options} getOptionLabel={option => option.name} getOptionSelected={(option, value) => option._id === value._id} onChange={(event, newValue, reason) => reason === "clear" ? null : newValue._id } renderInput={params => ( <TextField {...params} label="Country" /> )} /> } control={control} /> I've changed the onChange in the Autocomplete component to return the value directly but it doesn't seem to work. Using inputRef={register} on the <TextField/> would not cut it for me because I want to save the _id and not the name HERE is a working sandbox with the two cases. The first with useEffect and setValue in the Autocomplete that works. The second my attempt in using Controller component Any help is appreciated. LE After the comment from Bill with the working sandbox of MUI Autocomplete, I Managed to get a functional result <Controller name="country" as={ <Autocomplete options={options} getOptionLabel={option => option.name} getOptionSelected={(option, value) => option._id === value._id} renderInput={params => <TextField {...params} label="Country" />} /> } onChange={([, { _id }]) => _id} control={control} /> The only problem is that I get an MUI Error in the console Material-UI: A component is changing the uncontrolled value state of Autocomplete to be controlled. I've tried to set an defaultValue for it but it still behaves like that. Also I would not want to set a default value from the options array due to the fact that these fields in the form are not required. The updated sandbox HERE Any help is still very much appreciated
[ "The accepted answer (probably) works for the bugged version of Autocomplete. I think the bug was fixed some time after that, so that the solution can be slightly simplified.\nThis is very useful reference/codesandbox when working with react-hook-form and material-ui: https://codesandbox.io/s/react-hook-form-controller-601-j2df5?\nFrom the above link, I modified the Autocomplete example:\nimport TextField from '@material-ui/core/TextField';\nimport Autocomplete from '@material-ui/lab/Autocomplete';\n\n\nconst ControlledAutocomplete = ({ options = [], renderInput, getOptionLabel, onChange: ignored, control, defaultValue, name, renderOption }) => {\n return (\n <Controller\n render={({ onChange, ...props }) => (\n <Autocomplete\n options={options}\n getOptionLabel={getOptionLabel}\n renderOption={renderOption}\n renderInput={renderInput}\n onChange={(e, data) => onChange(data)}\n {...props}\n />\n )}\n onChange={([, data]) => data}\n defaultValue={defaultValue}\n name={name}\n control={control}\n />\n );\n}\n\nWith the usage:\n<ControlledAutocomplete\n control={control}\n name=\"inputName\"\n options={[{ name: 'test' }]}\n getOptionLabel={(option) => `Option: ${option.name}`}\n renderInput={(params) => <TextField {...params} label=\"My label\" margin=\"normal\" />}\n defaultValue={null}\n/>\n\ncontrol is from the return value of useForm(}\nNote that I'm passing null as defaultValue as in my case this input is not required. If you'll leave defaultValue you might get some errors from material-ui library.\nUPDATE:\nPer Steve question in the comments, this is how I'm rendering the input, so that it checks for errors:\nrenderInput={(params) => (\n <TextField\n {...params}\n label=\"Field Label\"\n margin=\"normal\"\n error={errors[fieldName]}\n />\n )}\n\nWhere errors is an object from react-hook-form's formMethods:\nconst { control, watch, errors, handleSubmit } = formMethods\n\n", "So, I fixed this. But it revealed what I believe to be an error in Autocomplete.\nFirst... specifically to your issue, you can eliminate the MUI Error by adding a defaultValue to the <Controller>. But that was only the beginning of another round or problems.\nThe problem is that functions for getOptionLabel, getOptionSelected, and onChange are sometimes passed the value (i.e. the _id in this case) and sometimes passed the option structure - as you would expect.\nHere's the code I finally came up with:\nimport React from \"react\";\nimport { useForm, Controller } from \"react-hook-form\";\nimport { TextField } from \"@material-ui/core\";\nimport { Autocomplete } from \"@material-ui/lab\";\nimport { Button } from \"@material-ui/core\";\nexport default function FormTwo({ options }) {\n const { register, handleSubmit, control } = useForm();\n\n const getOpObj = option => {\n if (!option._id) option = options.find(op => op._id === option);\n return option;\n };\n\n return (\n <form onSubmit={handleSubmit(data => console.log(data))}>\n <Controller\n name=\"country\"\n as={\n <Autocomplete\n options={options}\n getOptionLabel={option => getOpObj(option).name}\n getOptionSelected={(option, value) => {\n return option._id === getOpObj(value)._id;\n }}\n renderInput={params => <TextField {...params} label=\"Country\" />}\n />\n }\n onChange={([, obj]) => getOpObj(obj)._id}\n control={control}\n defaultValue={options[0]}\n />\n <Button type=\"submit\">Submit</Button>\n </form>\n );\n}\n\n", "import { Button } from \"@material-ui/core\";\nimport Autocomplete from \"@material-ui/core/Autocomplete\";\nimport { red } from \"@material-ui/core/colors\";\nimport Container from \"@material-ui/core/Container\";\nimport CssBaseline from \"@material-ui/core/CssBaseline\";\nimport { makeStyles } from \"@material-ui/core/styles\";\nimport TextField from \"@material-ui/core/TextField\";\nimport AdapterDateFns from \"@material-ui/lab/AdapterDateFns\";\nimport LocalizationProvider from \"@material-ui/lab/LocalizationProvider\";\nimport React, { useEffect, useState } from \"react\";\nimport { Controller, useForm } from \"react-hook-form\";\n\n\nexport default function App() {\n const [itemList, setItemList] = useState([]);\n // const classes = useStyles();\n\n const {\n control,\n handleSubmit,\n setValue,\n formState: { errors }\n } = useForm({\n mode: \"onChange\",\n defaultValues: { item: null }\n });\n\n const onSubmit = (formInputs) => {\n console.log(\"formInputs\", formInputs);\n };\n\n useEffect(() => {\n setItemList([\n { id: 1, name: \"item1\" },\n { id: 2, name: \"item2\" }\n ]);\n setValue(\"item\", { id: 3, name: \"item3\" });\n }, [setValue]);\n\n return (\n <LocalizationProvider dateAdapter={AdapterDateFns}>\n <Container component=\"main\" maxWidth=\"xs\">\n <CssBaseline />\n\n <form onSubmit={handleSubmit(onSubmit)} noValidate>\n <Controller\n control={control}\n name=\"item\"\n rules={{ required: true }}\n render={({ field: { onChange, value } }) => (\n <Autocomplete\n onChange={(event, item) => {\n onChange(item);\n }}\n value={value}\n options={itemList}\n getOptionLabel={(item) => (item.name ? item.name : \"\")}\n getOptionSelected={(option, value) =>\n value === undefined || value === \"\" || option.id === value.id\n }\n renderInput={(params) => (\n <TextField\n {...params}\n label=\"items\"\n margin=\"normal\"\n variant=\"outlined\"\n error={!!errors.item}\n helperText={errors.item && \"item required\"}\n required\n />\n )}\n />\n )}\n />\n\n <button\n onClick={() => {\n setValue(\"item\", { id: 1, name: \"item1\" });\n }}\n >\n setValue\n </button>\n\n <Button\n type=\"submit\"\n fullWidth\n size=\"large\"\n variant=\"contained\"\n color=\"primary\"\n // className={classes.submit}\n >\n submit\n </Button>\n </form>\n </Container>\n </LocalizationProvider>\n );\n}\n\n", "I do not know why the above answers did not work for me, here is the simplest code that worked for me, I used render function of Controller with onChange to change the value according to the selected one.\n<Controller\ncontrol={control}\nname=\"type\"\nrules={{\n required: 'Veuillez choisir une réponse',\n}}\nrender={({ field: { onChange, value } }) => (\n <Autocomplete\n freeSolo\n options={['field', 'select', 'multiple', 'date']}\n onChange={(event, values) => onChange(values)}\n value={value}\n renderInput={(params) => (\n <TextField\n {...params}\n label=\"type\"\n variant=\"outlined\"\n onChange={onChange}\n />\n )}\n />\n)}\n\n", "Thanks to all the other answers, as of April 15 2022, I was able to figure out how to get this working and render the label in the TextField component:\nconst ControlledAutocomplete = ({\n options,\n name,\n control,\n defaultValue,\n error,\n rules,\n helperText,\n}) => (\n <Controller\n name={name}\n control={control}\n defaultValue={defaultValue}\n rules={rules}\n render={({ field }) => (\n <Autocomplete\n disablePortal\n options={options}\n getOptionLabel={(option) =>\n option?.label ??\n options.find(({ code }) => code === option)?.label ??\n ''\n }\n {...field}\n renderInput={(params) => (\n <TextField\n {...params}\n error={Boolean(error)}\n helperText={helperText}\n />\n )}\n onChange={(_event, data) => field.onChange(data?.code ?? '')}\n />\n )}\n />\n);\n\nControlledAutocomplete.propTypes = {\n options: PropTypes.arrayOf({\n label: PropTypes.string,\n code: PropTypes.string,\n }),\n name: PropTypes.string,\n control: PropTypes.func,\n defaultValue: PropTypes.string,\n error: PropTypes.object,\n rules: PropTypes.object,\n helperText: PropTypes.string,\n};\n\nIn my case, options is an array of {code: 'US', label: 'United States'} objects. The biggest difference is the getOptionLabel, which I guess needs to account for if both when you have the list open (and option is an object) and when the option is rendered in the TextField (when option is a string) as well as when nothing is selected.\n", "I have made it work pretty well including multiple tags selector as follow bellow. It will work fine with mui5 and react-hook-form 7\nimport { useForm, Controller } from 'react-hook-form';\nimport Autocomplete from '@mui/material/Autocomplete';\n\n//setup your form and control\n\n<Controller\n control={control}\n name=\"yourFiledSubmitName\"\n rules={{\n required: 'required field',\n }}\n render={({ field: { onChange } }) => (\n <Autocomplete\n multiple\n options={yourDataArray}\n getOptionLabel={(option) => option.label}\n onChange={(event, item) => {\n onChange(item);\n }}\n renderInput={(params) => (\n <TextField {...params} label=\"Your label\" placeholder=\"Your placeholder\"\n />\n )}\n )}\n/>\n\n" ]
[ 23, 14, 8, 3, 2, 0 ]
[ "Instead of using controller, with the help of register, setValue of useForm and value, onChange of Autocomplete we can achieve the same result.\nconst [selectedCaste, setSelectedCaste] = useState([]);\nconst {register, errors, setValue} = useForm();\n\nuseEffect(() => {\n register(\"caste\");\n}, [register]);\n\nreturn (\n <Autocomplete\n multiple\n options={casteList}\n disableCloseOnSelect\n value={selectedCaste}\n onChange={(_, values) => {\n setSelectedCaste([...values]);\n setValue(\"caste\", [...values]);\n }}\n getOptionLabel={(option) => option}\n renderOption={(option, { selected }) => (\n <React.Fragment>\n <Checkbox\n icon={icon}\n checkedIcon={checkedIcon}\n style={{ marginRight: 8 }}\n checked={selected}\n />\n {option}\n </React.Fragment>\n )}\n style={{ width: \"100%\" }}\n renderInput={(params) => (\n <TextField\n {...params}\n id=\"caste\"\n error={!!errors.caste}\n helperText={errors.caste?.message}\n variant=\"outlined\"\n label=\"Select caste\"\n placeholder=\"Caste\"\n />\n )}\n />\n);\n\n" ]
[ -4 ]
[ "material_ui", "react_hook_form", "reactjs" ]
stackoverflow_0061655199_material_ui_react_hook_form_reactjs.txt
Q: Restrict/deny the allowed locations for resources I am looking to assign the resource policy that to limit the allowed locations where the resources can be deployed, so that I can be use only the particular resources for my work and the cost will be low. I found This but this is like manually restricted I need it in the script way. I searched in the network but didn't find any related doc. Can anyone help on this, thanks in advance. A: I have followed the below configuration to deny the allowed locations for resources Go-To Portal → and search for Policy and policy definition I have filled the appropriate fields and i have used the below script to deny allocated locations { "properties": { "displayName": "Allowed resource types", "policyType": "BuiltIn", "mode": "Indexed", "description": "This policy enables you to specify the resource types that your organization can deploy. Only resource types that support 'tags' and 'location' will be affected by this policy. To restrict all resources please duplicate this policy and change the 'mode' to 'All'.", "metadata": { "version": "1.0.0", "category": "General" }, "parameters": { "listOfResourceTypesAllowed": { "type": "Array", "metadata": { "description": "The list of resource types that can be deployed.", "displayName": "Allowed resource types", "strongType": "resourceTypes" } } }, "policyRule": { "if": { "not": { "field": "type", "in": "[parameters('listOfResourceTypesAllowed')]" } }, "then": { "effect": "deny" } } I have assigned the policy and when I check in the assignments I am able to see When I check to create resource group with non allowed locations I am not able to create
Restrict/deny the allowed locations for resources
I am looking to assign the resource policy that to limit the allowed locations where the resources can be deployed, so that I can be use only the particular resources for my work and the cost will be low. I found This but this is like manually restricted I need it in the script way. I searched in the network but didn't find any related doc. Can anyone help on this, thanks in advance.
[ "I have followed the below configuration to deny the allowed locations for resources\nGo-To Portal → and search for Policy and policy definition\n\n\nI have filled the appropriate fields and i have used the below script to deny allocated locations\n\n { \n\"properties\": { \n\"displayName\": \"Allowed resource types\", \n\"policyType\": \"BuiltIn\", \n\"mode\": \"Indexed\", \n\"description\": \"This policy enables you to specify the resource types that your organization can deploy. Only resource types that support 'tags' and 'location' will be affected by this policy. To restrict all resources please duplicate this policy and change the 'mode' to 'All'.\", \n\"metadata\": { \n\"version\": \"1.0.0\", \n\"category\": \"General\" \n}, \n\"parameters\": { \n\"listOfResourceTypesAllowed\": { \n\"type\": \"Array\", \n\"metadata\": { \n\"description\": \"The list of resource types that can be deployed.\", \n\"displayName\": \"Allowed resource types\", \n\"strongType\": \"resourceTypes\" \n} \n} \n}, \n\"policyRule\": {\n\"if\": {\n\"not\": {\n\"field\": \"type\",\n\"in\": \"[parameters('listOfResourceTypesAllowed')]\"\n}\n},\n\"then\": {\n\"effect\": \"deny\"\n}\n}\n\n\nI have assigned the policy and when I check in the assignments I am able to see\n\n\n\nWhen I check to create resource group with non allowed locations I am not able to create\n\n\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_cloud_services", "azure_resource_group", "policy" ]
stackoverflow_0074608047_azure_azure_cloud_services_azure_resource_group_policy.txt
Q: Symfony - Use underscore or dot instead of brackets with FormType in URL query (get method) I search to use underscore or dot instead of brackets with FormType in URL query with GET method. Current I've actually this URL when I submit my form: /path?fieldName%5BsubFieldName%5D=toto Or this (decoded): /path?fieldName[subFieldName]=toto Expected I'd like to have this: /path?fieldName_subFieldName=toto Or this: /path?fieldName.subFieldName=toto Tried I tried a few options in the form configuration but I don't know what they correspond to, and I don't know where to google (what terms to use). A: It's the normal behaviour of Symfony Form Component. To avoid this, you should use named form builder, like this : public function controllerMethod(FormFactory $formFactory) { $form = $formFactory->createNamed( '', // Your form name MyFormType::class, // Your form type [], // data [] // form options ); // Do something } For the button, you have to remove it from your Form class (i mean, the SubmitType one), and add it manually tou your html (with classical <button></button> inside of your form rendering)
Symfony - Use underscore or dot instead of brackets with FormType in URL query (get method)
I search to use underscore or dot instead of brackets with FormType in URL query with GET method. Current I've actually this URL when I submit my form: /path?fieldName%5BsubFieldName%5D=toto Or this (decoded): /path?fieldName[subFieldName]=toto Expected I'd like to have this: /path?fieldName_subFieldName=toto Or this: /path?fieldName.subFieldName=toto Tried I tried a few options in the form configuration but I don't know what they correspond to, and I don't know where to google (what terms to use).
[ "It's the normal behaviour of Symfony Form Component.\nTo avoid this, you should use named form builder, like this :\npublic function controllerMethod(FormFactory $formFactory) {\n $form = $formFactory->createNamed(\n '', // Your form name\n MyFormType::class, // Your form type\n [], // data\n [] // form options\n );\n\n // Do something\n}\n\nFor the button, you have to remove it from your Form class (i mean, the SubmitType one), and add it manually tou your html (with classical <button></button> inside of your form rendering)\n" ]
[ 0 ]
[]
[]
[ "symfony", "symfony_4.3" ]
stackoverflow_0074653469_symfony_symfony_4.3.txt
Q: Graphics.DrawString doesn't draw last SPACE chars? The string is "Hello World " with 10 SPACE chars in the end, but Graphics.DrawString in Right Alignment omits all of SPACE chars, it just draws "Hello World" only. protected override void OnPaint(PaintEventArgs e) { Rectangle rct = new Rectangle(20, 100, 200, 20); e.Graphics.DrawRectangle(new Pen(Color.Lime), rct); e.Graphics.DrawString("Hello World ", Font, new SolidBrush(SystemColors.ControlText), rct, new StringFormat() { Alignment = StringAlignment.Far}); base.OnPaint(e); } A: To include the Trailing Spaces when drawing strings with Gdi+ Graphics.DrawString method, pass a StringFormat to a proper overload and add or append (|=) the StringFormatFlags.MeasureTrailingSpaces value to the StringFormat.FormatFlags property. protected override void OnPaint(PaintEventArgs e) { base.OnPaint(e); Rectangle rct = new Rectangle(20, 100, 200, 20); string s = "Hello World "; using (var sf = new StringFormat(StringFormat.GenericTypographic)) { sf.Alignment = StringAlignment.Far; sf.LineAlignment = StringAlignment.Center; sf.FormatFlags = StringFormatFlags.MeasureTrailingSpaces; e.Graphics.TextRenderingHint = System.Drawing.Text.TextRenderingHint.ClearTypeGridFit; e.Graphics.DrawString(s, Font, SystemBrushes.ControlText, rct, sf); } e.Graphics.DrawRectangle(Pens.Lime, rct); } Consider using the Gdi TextRenderer class to draw strings over controls unless you encounter problems like drawing on transparent backgrounds. The previous code could have been written like: protected override void OnPaint(PaintEventArgs e) { base.OnPaint(e); Rectangle rct = new Rectangle(20, 100, 200, 20); string s = "Hello World "; TextRenderer.DrawText(e.Graphics, s, Font, rct, SystemColors.ControlText, TextFormatFlags.Right | TextFormatFlags.VerticalCenter); e.Graphics.DrawRectangle(Pens.Lime, rct); }
Graphics.DrawString doesn't draw last SPACE chars?
The string is "Hello World " with 10 SPACE chars in the end, but Graphics.DrawString in Right Alignment omits all of SPACE chars, it just draws "Hello World" only. protected override void OnPaint(PaintEventArgs e) { Rectangle rct = new Rectangle(20, 100, 200, 20); e.Graphics.DrawRectangle(new Pen(Color.Lime), rct); e.Graphics.DrawString("Hello World ", Font, new SolidBrush(SystemColors.ControlText), rct, new StringFormat() { Alignment = StringAlignment.Far}); base.OnPaint(e); }
[ "To include the Trailing Spaces when drawing strings with Gdi+ Graphics.DrawString method, pass a StringFormat to a proper overload and add or append (|=) the StringFormatFlags.MeasureTrailingSpaces value to the StringFormat.FormatFlags property.\nprotected override void OnPaint(PaintEventArgs e)\n{\n base.OnPaint(e);\n\n Rectangle rct = new Rectangle(20, 100, 200, 20);\n string s = \"Hello World \";\n\n using (var sf = new StringFormat(StringFormat.GenericTypographic))\n {\n sf.Alignment = StringAlignment.Far;\n sf.LineAlignment = StringAlignment.Center;\n sf.FormatFlags = StringFormatFlags.MeasureTrailingSpaces;\n\n e.Graphics.TextRenderingHint = System.Drawing.Text.TextRenderingHint.ClearTypeGridFit;\n e.Graphics.DrawString(s, Font, SystemBrushes.ControlText, rct, sf);\n }\n\n e.Graphics.DrawRectangle(Pens.Lime, rct);\n}\n\nConsider using the Gdi TextRenderer class to draw strings over controls unless you encounter problems like drawing on transparent backgrounds.\nThe previous code could have been written like:\nprotected override void OnPaint(PaintEventArgs e)\n{\n base.OnPaint(e);\n\n Rectangle rct = new Rectangle(20, 100, 200, 20);\n string s = \"Hello World \";\n\n TextRenderer.DrawText(e.Graphics, s, Font, rct, SystemColors.ControlText,\n TextFormatFlags.Right |\n TextFormatFlags.VerticalCenter);\n\n e.Graphics.DrawRectangle(Pens.Lime, rct);\n}\n\n" ]
[ 1 ]
[]
[]
[ "c#", "drawstring" ]
stackoverflow_0074653444_c#_drawstring.txt
Q: Why does FFmpeg encode by default? By default the following unpresuming FFmpeg command: ffmpeg -i "input.mp4" "output.mkv" ...will lossily encode a file unless it has the -c copy flag added, which will then pass the video through without any encoding. I remember not realising this as a beginner to FFmpeg years ago and being surprised when I found out, and ever since then it's something I've wondered about but not got around to asking. The main justification for this behaviour that comes to mind for me is that encoding is a much more common operation, and it might be annoying to have to pass an extra -encode flag for most uses. Was this ever one of the reasons cited for this design decision? Has the issue ever even been discussed in the FFmpeg mailing lists, or has it remained unquestioned since being written during the days of Fabrice Bellard?
Why does FFmpeg encode by default?
By default the following unpresuming FFmpeg command: ffmpeg -i "input.mp4" "output.mkv" ...will lossily encode a file unless it has the -c copy flag added, which will then pass the video through without any encoding. I remember not realising this as a beginner to FFmpeg years ago and being surprised when I found out, and ever since then it's something I've wondered about but not got around to asking. The main justification for this behaviour that comes to mind for me is that encoding is a much more common operation, and it might be annoying to have to pass an extra -encode flag for most uses. Was this ever one of the reasons cited for this design decision? Has the issue ever even been discussed in the FFmpeg mailing lists, or has it remained unquestioned since being written during the days of Fabrice Bellard?
[]
[]
[ "The -c copy string uses copy value to copy the video, audio, and subtitle bitstream from the input file to the output file without re-encoding them. -c copy means set all codec operations to copy as exactly as they are, from source to the newly generated file. When you don't use the -c copy string, FFmpeg would start re-encoding each stream in the input using default encoder, thus according to your settings, the output file might become lossy/lossless.\n" ]
[ -1 ]
[ "design_decisions", "ffmpeg", "open_source", "video_editing", "video_encoding" ]
stackoverflow_0074646432_design_decisions_ffmpeg_open_source_video_editing_video_encoding.txt
Q: How to create correctly a generic model mapper based on the code I wrote? I am trying to create a generic model mapper using ModelMapper. This is what I got till now, it has only one method, that converts to the type given as second parameter @Component public class Mapper { private final ModelMapper modelMapper; @Autowired public Mapper(ModelMapper modelMapper) { this.modelMapper = modelMapper; } public Object convertToType(Object object, Class<?> type) { Object convertedObject = modelMapper.map(object, type); return convertedObject; } } And this is how I use it: DepartmentDTO departmentDTO = (DepartmentDTO) modelMapper.convertToType(department.get(), DepartmentDTO.class);, here I convert from a department entity to it's DTO class And here I make the opposite, from DTO to entity. Department department = (Department) modelMapper.convertToType(departmentDTO, Department.class); EDIT How can I improve my code? Is it something wrong about the method I use? A: If you want to avoid casting, use a generic method. public class Mapper { private final ModelMapper modelMapper; //ctor public <R> R convertToType(Object source, Class<R> resultClass) { return modelMapper.map(source, resultClass); } } Additionally you can change method parameters names to something more descriptive of their functions - source and resultClass are just some possibilities.
How to create correctly a generic model mapper based on the code I wrote?
I am trying to create a generic model mapper using ModelMapper. This is what I got till now, it has only one method, that converts to the type given as second parameter @Component public class Mapper { private final ModelMapper modelMapper; @Autowired public Mapper(ModelMapper modelMapper) { this.modelMapper = modelMapper; } public Object convertToType(Object object, Class<?> type) { Object convertedObject = modelMapper.map(object, type); return convertedObject; } } And this is how I use it: DepartmentDTO departmentDTO = (DepartmentDTO) modelMapper.convertToType(department.get(), DepartmentDTO.class);, here I convert from a department entity to it's DTO class And here I make the opposite, from DTO to entity. Department department = (Department) modelMapper.convertToType(departmentDTO, Department.class); EDIT How can I improve my code? Is it something wrong about the method I use?
[ "If you want to avoid casting, use a generic method.\npublic class Mapper {\n\n private final ModelMapper modelMapper;\n \n //ctor\n\n public <R> R convertToType(Object source, Class<R> resultClass) {\n return modelMapper.map(source, resultClass);\n }\n}\n\nAdditionally you can change method parameters names to something more descriptive of their functions - source and resultClass are just some possibilities.\n" ]
[ 1 ]
[]
[]
[ "java", "spring", "spring_boot" ]
stackoverflow_0074656783_java_spring_spring_boot.txt
Q: Performing arithmetic calculations on all possible digit combinations in a list I create data in a format like this: initial_data = [ "518-2", '533-3', '534-0', '000-3', '000-4'] I need to perform several operations (add, sub, div, mult, factorial, power_to, root) on the part before the hyphen to see if there's an equation which equals the part after the hyphen. Like so: #5182 -5 - 1 + 8 = 2 or 5*(-1) - 1 + 8 = 2 #000-3 number, solution, number_of_solutions 000-3,(0! + 0!) + 0! = 3,2 or 000-4,,0 or 533-3,5 - (3! / 3) = 3,5 Every digit in the part before the hyphen can have an opposite sign, so I found this: def inverter(data): inverted_data = [-x for x in data] res = list(product(*zip(data, inverted_data))) return res I'm supposed to create a CSV file like in the example above but I haven't gotten to that part yet and that seems like the easiest part. What I have are several disparate parts that I can't connect in a sensible way: import numpy as np from itertools import product from math import factorial def plus(a, b): return a + b def minus(a, b): return a - b def mult(a, b): return a * b def div(a, b): if b!=0: if a%b==0: return a//b return np.nan def the_factorial(a, b): try: return factorial(int(a)) except ValueError: return np.nan def power_to(a:int, b:int)->int: try: return int(a**b) except ValueError: return np.nan def root(a:int, b:int)->int: try: return int(b**(1 / a)) except (TypeError, ZeroDivisionError, ValueError): return np.nan def combinations(nums, funcs): """Both arguments are lists""" t = [] for i in range(len(nums)-1): t.append(nums) t.append(funcs) t.append(nums) return list(itertools.product(*t)) def solve(instance): instance = list(instance) for i in range(len(instance)//2): b = instance.pop() func = instance.pop() a = instance.pop() instance.append(func(a, b)) return instance[0] def main(): try: # a = [1, 3 ,4] a = [int(-5), int(-1), int(8)] func = [plus, minus, mult, div, the_factorial, power_to, root] combs = combinations(a, func) solutions = [solve(i) for i in combs] for i, j in zip(combs, solutions): print(i, j) except ValueError: #If there's too many combinations return np.nan I'm having trouble transforming the data from the initial_data to inverter to main which currently only works on one example and returns an ugly readout with a function object in the middle. Thanks in advance. A: I think this will help you a lot (tweaks are on you) but it doesn't write in a CSV, I leave that for you to try, just take into account that there are thousands of possible combinations and in some cases, the results are really huge (see comments in main()). I've added missing types in function declarations for clarity and successful linting (compatible with older Python versions). Also, I think that the function combinations() is not needed so I removed it. In my proposed code, the function solve() is the one doing the magic :) Said all that, here's the full code: import numpy as np from itertools import product from math import factorial from typing import Union, Callable, Tuple, List, Set def plus(a: int, b: int) -> int: return a + b def minus(a: int, b: int) -> int: return a - b def mult(a: int, b: int) -> int: return a * b def div(a: int, b: int) -> Union[int, float]: try: retval = int(a / b) except (ValueError, ZeroDivisionError): retval = np.nan return retval def the_factorial(a: int) -> Union[int, float]: try: return factorial(int(a)) except ValueError: return np.nan except OverflowError: return np.inf def power_to(a: int, b: int) -> Union[int, float]: try: return int(a ** b) except (ValueError, ZeroDivisionError): return np.nan def root(a: int, b: int) -> Union[int, float]: try: return int(b ** (1 / a)) except (TypeError, ZeroDivisionError, ValueError): return np.nan def solve(values: Tuple[int, int, int], ops: List[Callable]) -> list[Tuple[str, int]]: # Iterate over available functions. combs = list() for f in FACTORS: # Get values to operate with. x, y, z = values sx, sy, sz = x, y, z a, b, c = f # Calculate the factorial for the values (if applicable). if a == 1: sx = f"{x}!" x = the_factorial(x) if b == 1: sy = f"{y}!" y = the_factorial(y) if c == 1: sz = f"{z}!" z = the_factorial(z) for ext_op in ops: # External operation. for int_op in ops: # Internal operation. # Create equations by grouping the first 2 elements, e.g.: ((x + y) * z). eq_str = f"{ext_op.__name__}({int_op.__name__}({sx}, {sy}), {sz})" eq_val = ext_op(int_op(x, y), z) combs.append((eq_str, eq_val)) # Create equations by grouping the last 2 elements, e.g.: (x + (y * z)). eq_str = f"{ext_op.__name__}({sx}, {int_op.__name__}({sy}, {sz}))" eq_val = ext_op(x, int_op(y, z)) combs.append((eq_str, eq_val)) return combs def inverter(data: List[int]) -> List[Tuple[int, int, int]]: inverted_data = [-x for x in data] res = list(product(*zip(data, inverted_data))) return res # Data to process. INITIAL_DATA: List[str] = [ "518-2", '533-3', # '534-0', # '000-3', # '000-4' ] # Available functions. FUNCTIONS: List[Callable] = [ # the_factorial() removed, see solve(). plus, minus, mult, div, power_to, root ] # Get posible combinations to apply the factor operation. FACTORS: Set[Tuple] = set(product([1, 0, 0], repeat=3)) def main(): cases = 0 # Count all possible cases (for each input value). data = list() # List with all final data to be dumped in CSV. print("number, solution, number_of_solutions") # Iterate over all initial data. for eq in INITIAL_DATA: # Get values before and after the hyphen. nums, res = eq.split('-') res = int(res) # Get combinations with inverted values. combs = inverter([int(n) for n in list(nums)]) # Iterate over combinations and generate a list with their many possible solutions. sol_cnt = 0 # Number of solutions (for each input value). solutions = list() # List with all final data to be dumped in CSV. for i in [solve(i, FUNCTIONS) for i in combs]: for j in i: str_repr, value = j # Some values exceed the 4300 digits, hence the 'try-catch'. # The function 'sys.set_int_max_str_digits()' may be used instead to increase the str() capabilites. try: str(value) except ValueError: value = np.inf if value == res: sol_cnt += 1 solutions.append((eq, str_repr, value)) cases += 1 # Iterate over all data gathered, and add number of solutions. for i in range(len(solutions)): eq, str_repr, value = solutions[i] solutions[i] += (sol_cnt,) print(f"{eq}, {str_repr} = {value}, {sol_cnt}") data.extend(solutions) # Print all the solutions for this input. print(f"\nThese are the {sol_cnt} solutions for input {eq}:") solutions = [s for s in solutions if (type(s[2]) is int and s[2] == res)] for i in range(len(solutions)): print(f" {i:4}. {solutions[i][1]}") print() print(f"\nTotal cases: {cases}") And for the output, note that solutions are printed/formatted using the name of your functions, not mathematical operators. This is just an excerpt of the output generated for the first value in initial_data using factorials in the 1st and 3rd digits: number, solution, number_of_solutions 518-2, plus(plus(5!, 1), 8!) = 40441, 12 518-2, plus(5!, plus(1, 8!)) = 40441, 12 518-2, plus(minus(5!, 1), 8!) = 40439, 12 518-2, plus(5!, minus(1, 8!)) = -40199, 12 518-2, plus(mult(5!, 1), 8!) = 40440, 12 518-2, plus(5!, mult(1, 8!)) = 40440, 12 518-2, plus(div(5!, 1), 8!) = 40440, 12 518-2, plus(5!, div(1, 8!)) = 120, 12 518-2, plus(power_to(5!, 1), 8!) = 40440, 12 518-2, plus(5!, power_to(1, 8!)) = 121, 12 518-2, plus(root(5!, 1), 8!) = 40321, 12 518-2, plus(5!, root(1, 8!)) = 40440, 12 ... These are the 12 solutions for input 518-2: 0. plus(minus(-5, 1!), 8) 1. minus(-5, minus(1!, 8)) 2. plus(minus(-5, 1), 8) 3. minus(-5, minus(1, 8)) 4. minus(-5, plus(1!, -8)) 5. minus(minus(-5, 1!), -8) 6. minus(-5, plus(1, -8)) 7. minus(minus(-5, 1), -8) 8. plus(plus(-5, -1), 8) 9. plus(-5, plus(-1, 8)) 10. plus(-5, minus(-1, -8)) 11. minus(plus(-5, -1), -8) Total cases: 4608 Note that 4608 cases were processed just for the first value in initial_data, so I recommend you to try with this one first and then add the rest, as for some cases it could take a lot of processing time. Also, I noticed that you are truncating the values in div() and root() so bear it in mind. You will see lots of nan and inf in the full output because there are huge values and conditions like div/0, so it's expected.
Performing arithmetic calculations on all possible digit combinations in a list
I create data in a format like this: initial_data = [ "518-2", '533-3', '534-0', '000-3', '000-4'] I need to perform several operations (add, sub, div, mult, factorial, power_to, root) on the part before the hyphen to see if there's an equation which equals the part after the hyphen. Like so: #5182 -5 - 1 + 8 = 2 or 5*(-1) - 1 + 8 = 2 #000-3 number, solution, number_of_solutions 000-3,(0! + 0!) + 0! = 3,2 or 000-4,,0 or 533-3,5 - (3! / 3) = 3,5 Every digit in the part before the hyphen can have an opposite sign, so I found this: def inverter(data): inverted_data = [-x for x in data] res = list(product(*zip(data, inverted_data))) return res I'm supposed to create a CSV file like in the example above but I haven't gotten to that part yet and that seems like the easiest part. What I have are several disparate parts that I can't connect in a sensible way: import numpy as np from itertools import product from math import factorial def plus(a, b): return a + b def minus(a, b): return a - b def mult(a, b): return a * b def div(a, b): if b!=0: if a%b==0: return a//b return np.nan def the_factorial(a, b): try: return factorial(int(a)) except ValueError: return np.nan def power_to(a:int, b:int)->int: try: return int(a**b) except ValueError: return np.nan def root(a:int, b:int)->int: try: return int(b**(1 / a)) except (TypeError, ZeroDivisionError, ValueError): return np.nan def combinations(nums, funcs): """Both arguments are lists""" t = [] for i in range(len(nums)-1): t.append(nums) t.append(funcs) t.append(nums) return list(itertools.product(*t)) def solve(instance): instance = list(instance) for i in range(len(instance)//2): b = instance.pop() func = instance.pop() a = instance.pop() instance.append(func(a, b)) return instance[0] def main(): try: # a = [1, 3 ,4] a = [int(-5), int(-1), int(8)] func = [plus, minus, mult, div, the_factorial, power_to, root] combs = combinations(a, func) solutions = [solve(i) for i in combs] for i, j in zip(combs, solutions): print(i, j) except ValueError: #If there's too many combinations return np.nan I'm having trouble transforming the data from the initial_data to inverter to main which currently only works on one example and returns an ugly readout with a function object in the middle. Thanks in advance.
[ "I think this will help you a lot (tweaks are on you) but it doesn't write in a CSV, I leave that for you to try, just take into account that there are thousands of possible combinations and in some cases, the results are really huge (see comments in main()).\nI've added missing types in function declarations for clarity and successful linting (compatible with older Python versions).\nAlso, I think that the function combinations() is not needed so I removed it.\nIn my proposed code, the function solve() is the one doing the magic :)\nSaid all that, here's the full code:\nimport numpy as np\nfrom itertools import product\nfrom math import factorial\nfrom typing import Union, Callable, Tuple, List, Set\n\n\ndef plus(a: int, b: int) -> int:\n return a + b\n\n\ndef minus(a: int, b: int) -> int:\n return a - b\n\n\ndef mult(a: int, b: int) -> int:\n return a * b\n\n\ndef div(a: int, b: int) -> Union[int, float]:\n try:\n retval = int(a / b)\n except (ValueError, ZeroDivisionError):\n retval = np.nan\n return retval\n\n\ndef the_factorial(a: int) -> Union[int, float]:\n try:\n return factorial(int(a))\n except ValueError:\n return np.nan\n except OverflowError:\n return np.inf\n\n\ndef power_to(a: int, b: int) -> Union[int, float]:\n try:\n return int(a ** b)\n except (ValueError, ZeroDivisionError):\n return np.nan\n\n\ndef root(a: int, b: int) -> Union[int, float]:\n try:\n return int(b ** (1 / a))\n except (TypeError, ZeroDivisionError, ValueError):\n return np.nan\n\n\ndef solve(values: Tuple[int, int, int], ops: List[Callable]) -> list[Tuple[str, int]]:\n # Iterate over available functions.\n combs = list()\n for f in FACTORS:\n # Get values to operate with.\n x, y, z = values\n sx, sy, sz = x, y, z\n a, b, c = f\n # Calculate the factorial for the values (if applicable).\n if a == 1:\n sx = f\"{x}!\"\n x = the_factorial(x)\n if b == 1:\n sy = f\"{y}!\"\n y = the_factorial(y)\n if c == 1:\n sz = f\"{z}!\"\n z = the_factorial(z)\n for ext_op in ops: # External operation.\n for int_op in ops: # Internal operation.\n # Create equations by grouping the first 2 elements, e.g.: ((x + y) * z).\n eq_str = f\"{ext_op.__name__}({int_op.__name__}({sx}, {sy}), {sz})\"\n eq_val = ext_op(int_op(x, y), z)\n combs.append((eq_str, eq_val))\n # Create equations by grouping the last 2 elements, e.g.: (x + (y * z)).\n eq_str = f\"{ext_op.__name__}({sx}, {int_op.__name__}({sy}, {sz}))\"\n eq_val = ext_op(x, int_op(y, z))\n combs.append((eq_str, eq_val))\n return combs\n\n\ndef inverter(data: List[int]) -> List[Tuple[int, int, int]]:\n inverted_data = [-x for x in data]\n res = list(product(*zip(data, inverted_data)))\n return res\n\n\n# Data to process.\nINITIAL_DATA: List[str] = [\n \"518-2\",\n '533-3',\n # '534-0',\n # '000-3',\n # '000-4'\n]\n# Available functions.\nFUNCTIONS: List[Callable] = [ # the_factorial() removed, see solve().\n plus,\n minus,\n mult,\n div,\n power_to,\n root\n]\n# Get posible combinations to apply the factor operation.\nFACTORS: Set[Tuple] = set(product([1, 0, 0], repeat=3))\n\n\ndef main():\n cases = 0 # Count all possible cases (for each input value).\n data = list() # List with all final data to be dumped in CSV.\n print(\"number, solution, number_of_solutions\")\n # Iterate over all initial data.\n for eq in INITIAL_DATA:\n # Get values before and after the hyphen.\n nums, res = eq.split('-')\n res = int(res)\n # Get combinations with inverted values.\n combs = inverter([int(n) for n in list(nums)])\n # Iterate over combinations and generate a list with their many possible solutions.\n sol_cnt = 0 # Number of solutions (for each input value).\n solutions = list() # List with all final data to be dumped in CSV.\n for i in [solve(i, FUNCTIONS) for i in combs]:\n for j in i:\n str_repr, value = j\n # Some values exceed the 4300 digits, hence the 'try-catch'.\n # The function 'sys.set_int_max_str_digits()' may be used instead to increase the str() capabilites.\n try:\n str(value)\n except ValueError:\n value = np.inf\n if value == res:\n sol_cnt += 1\n solutions.append((eq, str_repr, value))\n cases += 1\n # Iterate over all data gathered, and add number of solutions.\n for i in range(len(solutions)):\n eq, str_repr, value = solutions[i]\n solutions[i] += (sol_cnt,)\n print(f\"{eq}, {str_repr} = {value}, {sol_cnt}\")\n data.extend(solutions)\n # Print all the solutions for this input.\n print(f\"\\nThese are the {sol_cnt} solutions for input {eq}:\")\n solutions = [s for s in solutions if (type(s[2]) is int and s[2] == res)]\n for i in range(len(solutions)):\n print(f\" {i:4}. {solutions[i][1]}\")\n print()\n print(f\"\\nTotal cases: {cases}\")\n\nAnd for the output, note that solutions are printed/formatted using the name of your functions, not mathematical operators. This is just an excerpt of the output generated for the first value in initial_data using factorials in the 1st and 3rd digits:\nnumber, solution, number_of_solutions\n518-2, plus(plus(5!, 1), 8!) = 40441, 12\n518-2, plus(5!, plus(1, 8!)) = 40441, 12 \n518-2, plus(minus(5!, 1), 8!) = 40439, 12 \n518-2, plus(5!, minus(1, 8!)) = -40199, 12 \n518-2, plus(mult(5!, 1), 8!) = 40440, 12 \n518-2, plus(5!, mult(1, 8!)) = 40440, 12 \n518-2, plus(div(5!, 1), 8!) = 40440, 12 \n518-2, plus(5!, div(1, 8!)) = 120, 12\n518-2, plus(power_to(5!, 1), 8!) = 40440, 12 \n518-2, plus(5!, power_to(1, 8!)) = 121, 12 \n518-2, plus(root(5!, 1), 8!) = 40321, 12 \n518-2, plus(5!, root(1, 8!)) = 40440, 12\n\n...\n\nThese are the 12 solutions for input 518-2:\n 0. plus(minus(-5, 1!), 8)\n 1. minus(-5, minus(1!, 8))\n 2. plus(minus(-5, 1), 8)\n 3. minus(-5, minus(1, 8))\n 4. minus(-5, plus(1!, -8))\n 5. minus(minus(-5, 1!), -8)\n 6. minus(-5, plus(1, -8))\n 7. minus(minus(-5, 1), -8)\n 8. plus(plus(-5, -1), 8)\n 9. plus(-5, plus(-1, 8))\n 10. plus(-5, minus(-1, -8))\n 11. minus(plus(-5, -1), -8)\n\nTotal cases: 4608\n\nNote that 4608 cases were processed just for the first value in initial_data, so I recommend you to try with this one first and then add the rest, as for some cases it could take a lot of processing time.\nAlso, I noticed that you are truncating the values in div() and root() so bear it in mind. You will see lots of nan and inf in the full output because there are huge values and conditions like div/0, so it's expected.\n" ]
[ 0 ]
[]
[]
[ "functools", "numpy", "python" ]
stackoverflow_0074648564_functools_numpy_python.txt
Q: Set a different media type for 401 error code using Swashbuckle Is it possible to configure Swashbuckle to produce a different media type for a specific HTTP error code 401 in Swagger UI? I want the media type for the 400 error code to be application/JSON, but for 401, it is text/HTML. Is there a way to achieve this? Annotations above controller, [Authorize] [ApiController] [ApiVersion("1.0")] [Route("[controller]")] [Route("v{version:apiVersion}/[controller]")] [Produces("application/json")] public class MyController : ControllerBase { } Annotations above my method, [ProducesResponseType(StatusCodes.Status201Created)] [ProducesResponseType(StatusCodes.Status400BadRequest)] [ProducesResponseType(StatusCodes.Status401Unauthorized)] [ProducesResponseType(StatusCodes.Status500InternalServerError)] [HttpPost] public async Task<ActionResult<MyModel>> PostAsync(MyRequest myRequest) { } A: Have you tried SwaggerResponse attribute, it derives from ProducesResponseType and has been enhanced to support adding the content media type: [SwaggerResponse(StatusCodes.Status201Created, "Successfully created something", typeof(MyModel), "application/json")] [SwaggerResponse(StatusCodes.Status401Unauthorized, "Unauthorized", typeof(string), "text/html")] [SwaggerResponse(StatusCodes.Status400BadRequest, "Invalid request", typeof(string), "text/html")] [SwaggerResponse(StatusCodes.Status500InternalServerError, "Internal Server Error", typeof(string), "text/html")] It is part of the nuget package: Swashbuckle.AspNetCore.Annotations
Set a different media type for 401 error code using Swashbuckle
Is it possible to configure Swashbuckle to produce a different media type for a specific HTTP error code 401 in Swagger UI? I want the media type for the 400 error code to be application/JSON, but for 401, it is text/HTML. Is there a way to achieve this? Annotations above controller, [Authorize] [ApiController] [ApiVersion("1.0")] [Route("[controller]")] [Route("v{version:apiVersion}/[controller]")] [Produces("application/json")] public class MyController : ControllerBase { } Annotations above my method, [ProducesResponseType(StatusCodes.Status201Created)] [ProducesResponseType(StatusCodes.Status400BadRequest)] [ProducesResponseType(StatusCodes.Status401Unauthorized)] [ProducesResponseType(StatusCodes.Status500InternalServerError)] [HttpPost] public async Task<ActionResult<MyModel>> PostAsync(MyRequest myRequest) { }
[ "Have you tried SwaggerResponse attribute, it derives from ProducesResponseType and has been enhanced to support adding the content media type:\n [SwaggerResponse(StatusCodes.Status201Created, \"Successfully created something\", typeof(MyModel), \"application/json\")]\n [SwaggerResponse(StatusCodes.Status401Unauthorized, \"Unauthorized\", typeof(string), \"text/html\")]\n [SwaggerResponse(StatusCodes.Status400BadRequest, \"Invalid request\", typeof(string), \"text/html\")]\n [SwaggerResponse(StatusCodes.Status500InternalServerError, \"Internal Server Error\", typeof(string), \"text/html\")]\n\nIt is part of the nuget package: Swashbuckle.AspNetCore.Annotations\n" ]
[ 0 ]
[]
[]
[ "openapi", "swagger", "swagger_ui", "swashbuckle.aspnetcore" ]
stackoverflow_0074309109_openapi_swagger_swagger_ui_swashbuckle.aspnetcore.txt
Q: How to change the palette-legend in seaborn pairplot I've just learned that I can change axis-label font-size using sns.set_context. Is there an analogous way to change the content and size of the text in the 'palette-legend' on the right? I'd like to enlarge the text and relabel the '0' and '1', which were used for matrix manipulation, back to descriptive text. A: You can use set_title() and set_text() to set the names of the legend title & labels. Similarly, use plt.setp() to change the font to the size you need it to be... an example is shown below. penguins = sns.load_dataset("penguins") g=sns.pairplot(penguins, hue="species") g._legend.set_title("New Title") ## Change text of Title new_labels = ['Label 1', 'Label 2', 'Label 3'] for t, l in zip(g._legend.texts, new_labels): t.set_text(l) ## Change text of labels plt.setp(g._legend.get_title(), fontsize=30) ## Set the Title font to 30 plt.setp(g._legend.get_texts(), fontsize=20) ## Set the label font to 20 plt.show()
How to change the palette-legend in seaborn pairplot
I've just learned that I can change axis-label font-size using sns.set_context. Is there an analogous way to change the content and size of the text in the 'palette-legend' on the right? I'd like to enlarge the text and relabel the '0' and '1', which were used for matrix manipulation, back to descriptive text.
[ "You can use set_title() and set_text() to set the names of the legend title & labels. Similarly, use plt.setp() to change the font to the size you need it to be... an example is shown below.\npenguins = sns.load_dataset(\"penguins\")\ng=sns.pairplot(penguins, hue=\"species\")\n\ng._legend.set_title(\"New Title\") ## Change text of Title\nnew_labels = ['Label 1', 'Label 2', 'Label 3']\nfor t, l in zip(g._legend.texts, new_labels):\n t.set_text(l) ## Change text of labels\n \nplt.setp(g._legend.get_title(), fontsize=30) ## Set the Title font to 30\nplt.setp(g._legend.get_texts(), fontsize=20) ## Set the label font to 20\n\nplt.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python", "seaborn" ]
stackoverflow_0074656704_matplotlib_python_seaborn.txt
Q: MS Graph API Blocking Credentials on one call, but not another While expanding our WPF Apps emailing functions to include larger attachments, we went from using the MS GRAPH API endpoint me/sendMail to send emails: https://graph.microsoft.com/v1.0/me/sendMail to using the me/messages endpoint to create a draft so that we could create an upload session to that draft so that we could upload larger attachments (pdf reports) https://graph.microsoft.com/v1.0/me/messages We are acquiring tokens via MSAL for both. However, when using the second method, we receive the following response: "ErrorAccessDenied" "Access is denied. Check credentials and try again." Our expectation was that those two endpoints wouldn't have different credentialing requirements. Our organization's AzureAD accounts are federated delegate, so the only flow we can use is interactive Authorization Code -- so we are calling into MSAL to get the AzureAD token for both endpoints. A: The endpoint for creating a draft message POST /me/messages requires Mail.ReadWrite permission. While endpoints for sending mail POST /me/messages/{id}/send POST /me/sendMail require Mail.Send. Adding Mail.ReadWrite permission should resolve the error.
MS Graph API Blocking Credentials on one call, but not another
While expanding our WPF Apps emailing functions to include larger attachments, we went from using the MS GRAPH API endpoint me/sendMail to send emails: https://graph.microsoft.com/v1.0/me/sendMail to using the me/messages endpoint to create a draft so that we could create an upload session to that draft so that we could upload larger attachments (pdf reports) https://graph.microsoft.com/v1.0/me/messages We are acquiring tokens via MSAL for both. However, when using the second method, we receive the following response: "ErrorAccessDenied" "Access is denied. Check credentials and try again." Our expectation was that those two endpoints wouldn't have different credentialing requirements. Our organization's AzureAD accounts are federated delegate, so the only flow we can use is interactive Authorization Code -- so we are calling into MSAL to get the AzureAD token for both endpoints.
[ "The endpoint for creating a draft message\nPOST /me/messages\n\nrequires Mail.ReadWrite permission. While endpoints for sending mail\nPOST /me/messages/{id}/send\nPOST /me/sendMail\n\nrequire Mail.Send.\nAdding Mail.ReadWrite permission should resolve the error.\n" ]
[ 1 ]
[]
[]
[ "azure_ad_verifiable_credentials", "c#", "microsoft_graph_api", "msal" ]
stackoverflow_0074657304_azure_ad_verifiable_credentials_c#_microsoft_graph_api_msal.txt
Q: Code giving NameError: name 'x' is not defined I am new to Python, and I am trying to make a numerical analysis model of differential equations. import sympy as sympy def picard_solver(y_0, x_0, rhs_expression, iteration_count:int = 5): x, phi = sympy.symbols("x phi") phi = x_0 for i in range(iteration_count + 1): phi = y_0 + sympy.integrate(rhs_expression(x, phi), (x, x_0, x)) return phi import numpy import plotly.graph_objects as go y_set = [picard_solver(1, 0, lambda x, y: x * y, i) for i in range(1, 6)] x_grid = numpy.linspace(-2, 2, 1000) y_picard = list() for y in y_set: y_picard.append(numpy.array([float(y.evalf(subs={x: x_i})) for x_i in x_grid])) y_exact = numpy.exp((x_grid) * (x_grid) / 2) fig = go.Figure() for i, y_order in enumerate(y_picard): fig.add_trace(go.Scatter(x=x_grid, y=y_order, name=f"Picard Order {i + 1}")) # fig.add_trace(go.Scatter(x=x_grid, y=y_picard, name="Picard Solution")) fig.add_trace(go.Scatter(x=x_grid, y=y_exact, name="Exact Solution")) fig.show() fig.write_html("picard_vs_exact.html") But when I try to run it, I get NameError: name 'x' is not defined error, can someone help me? I want a graph to be shown. A: I think that you need to pass a string in the y.evalf(subs={'x': x_i}) part of your code.
Code giving NameError: name 'x' is not defined
I am new to Python, and I am trying to make a numerical analysis model of differential equations. import sympy as sympy def picard_solver(y_0, x_0, rhs_expression, iteration_count:int = 5): x, phi = sympy.symbols("x phi") phi = x_0 for i in range(iteration_count + 1): phi = y_0 + sympy.integrate(rhs_expression(x, phi), (x, x_0, x)) return phi import numpy import plotly.graph_objects as go y_set = [picard_solver(1, 0, lambda x, y: x * y, i) for i in range(1, 6)] x_grid = numpy.linspace(-2, 2, 1000) y_picard = list() for y in y_set: y_picard.append(numpy.array([float(y.evalf(subs={x: x_i})) for x_i in x_grid])) y_exact = numpy.exp((x_grid) * (x_grid) / 2) fig = go.Figure() for i, y_order in enumerate(y_picard): fig.add_trace(go.Scatter(x=x_grid, y=y_order, name=f"Picard Order {i + 1}")) # fig.add_trace(go.Scatter(x=x_grid, y=y_picard, name="Picard Solution")) fig.add_trace(go.Scatter(x=x_grid, y=y_exact, name="Exact Solution")) fig.show() fig.write_html("picard_vs_exact.html") But when I try to run it, I get NameError: name 'x' is not defined error, can someone help me? I want a graph to be shown.
[ "I think that you need to pass a string in the y.evalf(subs={'x': x_i}) part of your code.\n" ]
[ 1 ]
[]
[]
[ "nameerror", "python" ]
stackoverflow_0074656877_nameerror_python.txt
Q: How to import a JSON column in Laravel Excel I'm trying to import a file that contains json data in some columns and this data needs to be imported into JSONB fields in PostgreSQL. Json data example: {"phone":"6365615298", "website":"http://www.happychinafood.com"} However, when the file gets imported, the data imported appears as follows in the database: "{\""phone\"":\""6365615298\"", \""website\"":\""http://www.happychinafood.com\""}" I need the data imported EXACTLY as how the example is provided. Is there any way to achieve this? The package I'm using is maatwebsite/excel A: Found the solution, basically, the following things need to be considered: The field containing JSON data needs to be decoded The model should cast these fields as an array Laravel models convert data to arrays to insertion, but if the fields in DB are JSON fields the model will automatically serialize the values Reference: https://laravel.com/docs/8.x/eloquent-mutators#array-and-json-casting Example of my code: public function model(array $row) { $this->affectedRows++; $parameters = []; $i = 0; foreach ($this->columns as $column => $value) { if ($value) { $parameters[$column] = $value; } elseif ($this->isJson($row[$i])) { $parameters[$column] = json_decode($row[$i], true, 64, JSON_THROW_ON_ERROR); } else { $parameters[$column] = $row[$i]; } $i++; } return new $this->modelClass($parameters); } private function isJson($string): bool { json_decode($string); return json_last_error() === JSON_ERROR_NONE; }
How to import a JSON column in Laravel Excel
I'm trying to import a file that contains json data in some columns and this data needs to be imported into JSONB fields in PostgreSQL. Json data example: {"phone":"6365615298", "website":"http://www.happychinafood.com"} However, when the file gets imported, the data imported appears as follows in the database: "{\""phone\"":\""6365615298\"", \""website\"":\""http://www.happychinafood.com\""}" I need the data imported EXACTLY as how the example is provided. Is there any way to achieve this? The package I'm using is maatwebsite/excel
[ "Found the solution, basically, the following things need to be considered:\nThe field containing JSON data needs to be decoded\nThe model should cast these fields as an array\nLaravel models convert data to arrays to insertion, but if the fields in DB are JSON fields the model will automatically serialize the values\nReference: https://laravel.com/docs/8.x/eloquent-mutators#array-and-json-casting\nExample of my code:\npublic function model(array $row)\n{\n $this->affectedRows++;\n $parameters = [];\n $i = 0;\n foreach ($this->columns as $column => $value) {\n if ($value) {\n $parameters[$column] = $value;\n } elseif ($this->isJson($row[$i])) {\n $parameters[$column] = json_decode($row[$i], true, 64, JSON_THROW_ON_ERROR);\n } else {\n $parameters[$column] = $row[$i];\n }\n $i++;\n }\n return new $this->modelClass($parameters);\n}\n\nprivate function isJson($string): bool\n{\n json_decode($string);\n return json_last_error() === JSON_ERROR_NONE;\n}\n\n" ]
[ 0 ]
[]
[]
[ "json", "laravel", "laravel_excel", "postgresql" ]
stackoverflow_0074647450_json_laravel_laravel_excel_postgresql.txt
Q: How to get Unix time in C++, similar to Python's "time.time()" When measuring elapsed time in Python, I use the following method. import time startTime = time.time() nowTime = time.time() - startTime I think this code gets UNIX time in seconds. time.time() returns a Float Value such as this. >>> import time >>> time.time() 1541317313.336098 How can I use the same measurement technique in C++ as in Python? I intend to use C++ in a WIndows 64-bit limited environment. A: You're friend is std::chrono::steady_clock – and note: not std::chrono::system_clock as it doesn't guarantee the clock advancing monotonically (consider e.g. DST changing!). Then you can do: auto startTime = std::chrono::steady_clock::now(); // work of which the duration is to be measured auto duration = std::chrono::steady_clock::now() - startTime; The difference is a std::chrono::duration object, which you now can retrieve the relevant information from in the desired granularity, e.g. as ms: auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(duration).count(); Instead of casting, which simply cuts off the sub-units just like an integral cast cuts away the fractional part of a floating point value, rounding is possible, too. Side note: If std::chrono::steady_clock happens to lack sufficient precision there's still std::chrono::high_resolution_clock, though it is not guaranteed that it is actually more precise than the former – and worse, might even be implemented in terms of std::system_clock (see notes there), so its use is not recommended. A: double now = std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now().time_since_epoch()).count(); Go here for details. www.epochconverter.com
How to get Unix time in C++, similar to Python's "time.time()"
When measuring elapsed time in Python, I use the following method. import time startTime = time.time() nowTime = time.time() - startTime I think this code gets UNIX time in seconds. time.time() returns a Float Value such as this. >>> import time >>> time.time() 1541317313.336098 How can I use the same measurement technique in C++ as in Python? I intend to use C++ in a WIndows 64-bit limited environment.
[ "You're friend is std::chrono::steady_clock – and note: not std::chrono::system_clock as it doesn't guarantee the clock advancing monotonically (consider e.g. DST changing!).\nThen you can do:\nauto startTime = std::chrono::steady_clock::now();\n\n// work of which the duration is to be measured\n\nauto duration = std::chrono::steady_clock::now() - startTime;\n\nThe difference is a std::chrono::duration object, which you now can retrieve the relevant information from in the desired granularity, e.g. as ms:\n auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(duration).count();\n\nInstead of casting, which simply cuts off the sub-units just like an integral cast cuts away the fractional part of a floating point value, rounding is possible, too.\nSide note: If std::chrono::steady_clock happens to lack sufficient precision there's still std::chrono::high_resolution_clock, though it is not guaranteed that it is actually more precise than the former – and worse, might even be implemented in terms of std::system_clock (see notes there), so its use is not recommended.\n", "double now = std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now().time_since_epoch()).count();\n\nGo here for details.\nwww.epochconverter.com\n" ]
[ 2, 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074657182_c++.txt
Q: AmCharts v4 - How do i change the color of this tooltop? In AmCharts v4, is it possible to change the color of the tooltip pointed by the arrow? I just want to change the color of the tooltip and not the guide line. I am using the JSON approach to build the chart is that's relevant. Thanks! A: I've finally found a solution for this. "chartCursor": { "enabled": true, "animationDuration": 0, "color":"#000000", "cursorColor": "#DDEAFB" }, The key is the cursorColor property inside chartCursor. Also, the color property will let you change the font color inside the tooltip.
AmCharts v4 - How do i change the color of this tooltop?
In AmCharts v4, is it possible to change the color of the tooltip pointed by the arrow? I just want to change the color of the tooltip and not the guide line. I am using the JSON approach to build the chart is that's relevant. Thanks!
[ "I've finally found a solution for this.\n \"chartCursor\": {\n \"enabled\": true,\n \"animationDuration\": 0,\n \"color\":\"#000000\",\n \"cursorColor\": \"#DDEAFB\" \n },\n\nThe key is the cursorColor property inside chartCursor. Also, the color property will let you change the font color inside the tooltip.\n" ]
[ 0 ]
[]
[]
[ "amcharts" ]
stackoverflow_0074632636_amcharts.txt
Q: Angular currency pipe no decimal value I am trying to use the currency pipe in angular to display a whole number price, I don't need it to add .00 to my number, the thing is, its not formatting it according to my instructions. here is my HTML: <h5 class="price"><span>{{billingInfo.amount | currency:billingInfo.currencyCode:'1.0-0'}}</span> {{billingInfo.period}}</h5> here is my ts: ngOnInit() { this.billingInfo = {amount: 100, currencyCode: 'USD', period: 'Hour'}; } and here is the output: $100.00 Hour things I tried to do: 1.use the decimal number pipe(no good, the currency pipe turns it into a string) 2.add number formater(:1.0-0) to my currencyPipe but it seems to be ignored what am I missing? A: To remove .00 from the currency pipe you can use this pattern. See the digitsInfo section on CurrencyPipe for more information. {{ amount | currency : 'USD' : 'symbol' : '1.0-0' }} If you don't need the decimal you can use the number pipe. ${{amount | number}} A: Best I can tell you're just missing the display parameter which is supposed to be the 2nd parameter to the currency pipe, so if you change it to the following it should work: <h5 class="price"><span>{{billingInfo.amount | currency:billingInfo.currencyCode:'symbol':'1.0-0'}}</span> {{billingInfo.period}}</h5> A: Here is my solution: amount = 5; //or 5.00; {{ amount | currency: 'USD':true:'2.0' }} Output would be: 5, if the amount is set to 5.99 then the output would be 5.99. You can lean more from angular CurrencyPipe here https://angular.io/api/common/CurrencyPipe A: In Angular, to format a currency, use the currency pipe on a number as shown here. <p>{{amount | currency:'USD':true:'1.2-2'}}</p> The first parameter, 'USD', of the pipe is an ISO currency code (e.g. ‘USD’,’EUR’, etc.) The second parameter, true, is an optional boolean to specify whether or not you want to render the currency symbol (‘$’, ‘€’); default is false The third parameter,'1.2-2', also optional, specifies how to format the number, using the same formatting rules as apply to the number pipe. A: I was reading through this topic and didn't saw the "perfect" answer if you ask me. The following workaround i used to show the decimals if provided, but strip them if they are unwanted. {{ value | currency: 'EUR':'symbol': (value % 1 == 0) ? '1.0-0': '1.2-2' }} So as you can see the value % 1 == 0 is a mod functionality that checks if the value is dividable by a whole number. If thats not the case it will return false (15.00 % 1 == 0) -> true (15.50 % 1 == 0) -> false A: reference https://angular.io/api/common/DecimalPipe <h5 class="price"><span>{{billingInfo.amount | currency:billingInfo.currencyCode:true:'1.0-0'}}</span> {{billingInfo.period}}</h5> A: For those looking to perform this currency formatting without decimals inside your TypeScript, you can use the CurrencyPipe's transform method like so: import { CurrencyPipe } from '@angular/common'; constructor(private readonly currencyPipe: CurrencyPipe) ngOnInit() { const billingAmount = 100; this.billingInfo = this.currencyPipe.transform(billingAmount, 'USD', 'symbol', '1.0-0'); }
Angular currency pipe no decimal value
I am trying to use the currency pipe in angular to display a whole number price, I don't need it to add .00 to my number, the thing is, its not formatting it according to my instructions. here is my HTML: <h5 class="price"><span>{{billingInfo.amount | currency:billingInfo.currencyCode:'1.0-0'}}</span> {{billingInfo.period}}</h5> here is my ts: ngOnInit() { this.billingInfo = {amount: 100, currencyCode: 'USD', period: 'Hour'}; } and here is the output: $100.00 Hour things I tried to do: 1.use the decimal number pipe(no good, the currency pipe turns it into a string) 2.add number formater(:1.0-0) to my currencyPipe but it seems to be ignored what am I missing?
[ "To remove .00 from the currency pipe you can use this pattern. See the digitsInfo section on CurrencyPipe for more information.\n{{ amount | currency : 'USD' : 'symbol' : '1.0-0' }} \n\nIf you don't need the decimal you can use the number pipe.\n${{amount | number}}\n\n", "Best I can tell you're just missing the display parameter which is supposed to be the 2nd parameter to the currency pipe, so if you change it to the following it should work:\n<h5 class=\"price\"><span>{{billingInfo.amount | currency:billingInfo.currencyCode:'symbol':'1.0-0'}}</span> {{billingInfo.period}}</h5>\n\n", "Here is my solution:\namount = 5; //or 5.00;\n{{ amount | currency: 'USD':true:'2.0' }}\n\nOutput would be: 5, \n\nif the amount is set to 5.99 then the output would be 5.99.\nYou can lean more from angular CurrencyPipe here https://angular.io/api/common/CurrencyPipe\n", "In Angular, to format a currency, use the currency pipe on a number as shown here.\n<p>{{amount | currency:'USD':true:'1.2-2'}}</p>\n\n\nThe first parameter, 'USD', of the pipe is an ISO currency code (e.g.\n‘USD’,’EUR’, etc.)\nThe second parameter, true, is an optional boolean to specify whether\nor not you want to render the currency symbol (‘$’, ‘€’); default is\nfalse\nThe third parameter,'1.2-2', also optional, specifies how to format\nthe number, using the same formatting rules as apply to the number\npipe.\n\n", "I was reading through this topic and didn't saw the \"perfect\" answer if you ask me. The following workaround i used to show the decimals if provided, but strip them if they are unwanted.\n{{ value | currency: 'EUR':'symbol': (value % 1 == 0) ? '1.0-0': '1.2-2' }}\n\nSo as you can see the value % 1 == 0 is a mod functionality that checks if the value is dividable by a whole number. If thats not the case it will return false\n(15.00 % 1 == 0) -> true\n(15.50 % 1 == 0) -> false\n\n", "reference https://angular.io/api/common/DecimalPipe\n <h5 class=\"price\"><span>{{billingInfo.amount | \n currency:billingInfo.currencyCode:true:'1.0-0'}}</span> {{billingInfo.period}}</h5>\n\n", "For those looking to perform this currency formatting without decimals inside your TypeScript, you can use the CurrencyPipe's transform method like so:\nimport { CurrencyPipe } from '@angular/common';\n\nconstructor(private readonly currencyPipe: CurrencyPipe)\n\nngOnInit() {\n const billingAmount = 100;\n\n this.billingInfo = this.currencyPipe.transform(billingAmount, 'USD', 'symbol', '1.0-0');\n}\n\n\n" ]
[ 83, 6, 3, 2, 2, 0, 0 ]
[]
[]
[ "angular", "angular5" ]
stackoverflow_0049403895_angular_angular5.txt
Q: Java, get days between two dates In java, I want to get the number of days between two dates, excluding those two dates. For example: If first date = 11 November 2011 and the second date = 13 November 2011 then it should be 1. This is the code I am using but doesn't work (secondDate and firstDate are Calendar objects): long diff=secondDate.getTimeInMillis()-firstDate.getTimeInMillis(); float day_count=(float)diff / (24 * 60 * 60 * 1000); daysCount.setText((int)day_count+""); I even tried rounding the results but that didn't help. How do I get the number of days between dates in java excluding the days themselves? A: I've just tested on SDK 8 (Android 2.2) the following code snippet: Calendar date1 = Calendar.getInstance(); Calendar date2 = Calendar.getInstance(); date1.clear(); date1.set( datePicker1.getYear(), datePicker1.getMonth(), datePicker1.getDayOfMonth()); date2.clear(); date2.set( datePicker2.getYear(), datePicker2.getMonth(), datePicker2.getDayOfMonth()); long diff = date2.getTimeInMillis() - date1.getTimeInMillis(); float dayCount = (float) diff / (24 * 60 * 60 * 1000); textView.setText(Long.toString(diff) + " " + (int) dayCount); it works perfectly and in both cases (Nov 10,2011 - Nov 8,2011) and (Nov 13,2011 - Nov 11,2011) gives dayCount = 2.0 A: Get Days between java.util.Dates, ignoring daylight savings time Quick and dirty hack: public int get_days_between_dates(Date date1, Date date2) { //if date2 is more in the future than date1 then the result will be negative //if date1 is more in the future than date2 then the result will be positive. return (int)((date2.getTime() - date1.getTime()) / (1000*60*60*24l)); } This function will work 99.99% of the time, except when it surprises you later on in the edge cases during leap-seconds, daylight savings, timezone changes leap years and the like. If you are OK with the calculation being off by 1 (or 2) hours once in a while, this will suffice. Get Days between Dates taking into account leapseconds, daylight savings, timezones, etc If you are asking this question you need to slap yourself. What does it mean for two dates to be at least 1 day apart? It's very confusing. What if one Date is midnight in one timezone, and the other date is 1AM in another timezone? Depending on how you interpret it, the answer is both 1 and 0. You think you can just force the dates you pass into the above function as Universal time format; that will fix some of your problems. But then you just relocate the problem into how you convert your local time to a universal time. The logical conversion from your timezone to universal time may not be what is intuitive. In some cases you will get a day difference when the dates passed in are obviously two days apart. And you think you can deal with that? There are some simplistic calendar systems in the world which are constantly changing depending on the harvest season and installed political rulers. If you want to convert their time to UTC, java.util.Date is going to fail you at the worst moment. If you need to calculate the days between dates and it is critical that everything come out right, you need to get an external library called Joda Time: (They have taken care of all the details for you, so you can stay blissfully unaware of them): http://joda-time.sourceforge.net/index.html A: java.time The java.time API, released with Java-8 in March 2014, supplanted the error-prone legacy date-time API. Since then, using this modern date-time API has been strongly recommended. Solution using modern date-time API Using Calendar#toInstant, convert your java.util.Calendar instances into java.time.Instant and then into java.time.ZonedDateTime instances and then use ChronoUnit.DAYS.between to get the number of days between them. Demo: import java.time.ZoneId; import java.time.ZonedDateTime; import java.time.temporal.ChronoUnit; import java.util.Calendar; public class Main { public static void main(String[] args) { // Sample start and end dates as java.util.Date Calendar startCal = Calendar.getInstance(); startCal.set(2011, 10, 11); // 11 November 2011 Calendar endCal = Calendar.getInstance(); endCal.set(2011, 10, 13); // 13 November 2011 // Convert the java.util.Calendar into java.time.ZonedDateTime // Replace ZoneId.systemDefault() with the applicable ZoneId ZonedDateTime startDateTime = startCal.toInstant().atZone(ZoneId.systemDefault()); ZonedDateTime endDateTime = endCal.toInstant().atZone(ZoneId.systemDefault()); // The end date is excluded by default. Subtract 1 to exclude the start date long days = ChronoUnit.DAYS.between(startDateTime, endDateTime) - 1; System.out.println(days); } } Output: 1 Learn more about the modern Date-Time API from Trail: Date Time. A: I have two suggestions: Make sure your float day_count is calculated correctly float day_count = ((float)diff) / (24f * 60f * 60f * 1000f); If it's rounding error, try using floor method daysCount.setText("" + (int)Math.floor(day_count)); A: Don't use floats for integer calculations. Are you sure your dates are days? The precision of the Date type is milliseconds. So the first thing you need to do is round the date to something which doesn't have hours. Example: It's just one hour from 23:30 2011-11-01 to 00:30 2011-11-02 but the two dates are on different days. A: If you are only going to be dealing with dates between the years 1900 and 2100, there is a simple calculation which will give you the number of days since 1900: public static int daysSince1900(Date date) { Calendar c = new GregorianCalendar(); c.setTime(date); int year = c.get(Calendar.YEAR); if (year < 1900 || year > 2099) { throw new IllegalArgumentException("daysSince1900 - Date must be between 1900 and 2099"); } year -= 1900; int month = c.get(Calendar.MONTH) + 1; int days = c.get(Calendar.DAY_OF_MONTH); if (month < 3) { month += 12; year--; } int yearDays = (int) (year * 365.25); int monthDays = (int) ((month + 1) * 30.61); return (yearDays + monthDays + days - 63); } Thus, to get the difference in days between two dates, you calculate their days since 1900 and calc the difference. Our daysBetween method looks like this: public static Integer getDaysBetween(Date date1, Date date2) { if (date1 == null || date2 == null) { return null; } int days1 = daysSince1900(date1); int days2 = daysSince1900(date2); if (days1 < days2) { return days2 - days1; } else { return days1 - days2; } } In your case you would need to subtract an extra day (if the days are not equal). And don't ask me where this calculation came from because we've used it since the early '90s.
Java, get days between two dates
In java, I want to get the number of days between two dates, excluding those two dates. For example: If first date = 11 November 2011 and the second date = 13 November 2011 then it should be 1. This is the code I am using but doesn't work (secondDate and firstDate are Calendar objects): long diff=secondDate.getTimeInMillis()-firstDate.getTimeInMillis(); float day_count=(float)diff / (24 * 60 * 60 * 1000); daysCount.setText((int)day_count+""); I even tried rounding the results but that didn't help. How do I get the number of days between dates in java excluding the days themselves?
[ "I've just tested on SDK 8 (Android 2.2) the following code snippet:\nCalendar date1 = Calendar.getInstance();\nCalendar date2 = Calendar.getInstance();\n\ndate1.clear();\ndate1.set(\n datePicker1.getYear(),\n datePicker1.getMonth(),\n datePicker1.getDayOfMonth());\ndate2.clear();\ndate2.set(\n datePicker2.getYear(),\n datePicker2.getMonth(),\n datePicker2.getDayOfMonth());\n\nlong diff = date2.getTimeInMillis() - date1.getTimeInMillis();\n\nfloat dayCount = (float) diff / (24 * 60 * 60 * 1000);\n\ntextView.setText(Long.toString(diff) + \" \" + (int) dayCount);\n\nit works perfectly and in both cases (Nov 10,2011 - Nov 8,2011) and (Nov 13,2011 - Nov 11,2011) gives dayCount = 2.0\n", "Get Days between java.util.Dates, ignoring daylight savings time\nQuick and dirty hack:\npublic int get_days_between_dates(Date date1, Date date2)\n{ \n //if date2 is more in the future than date1 then the result will be negative\n //if date1 is more in the future than date2 then the result will be positive.\n\n return (int)((date2.getTime() - date1.getTime()) / (1000*60*60*24l));\n}\n\nThis function will work 99.99% of the time, except when it surprises you later on in the edge cases during leap-seconds, daylight savings, timezone changes leap years and the like. If you are OK with the calculation being off by 1 (or 2) hours once in a while, this will suffice.\nGet Days between Dates taking into account leapseconds, daylight savings, timezones, etc\nIf you are asking this question you need to slap yourself. What does it mean for two dates to be at least 1 day apart? It's very confusing. What if one Date is midnight in one timezone, and the other date is 1AM in another timezone? Depending on how you interpret it, the answer is both 1 and 0.\nYou think you can just force the dates you pass into the above function as Universal time format; that will fix some of your problems. But then you just relocate the problem into how you convert your local time to a universal time. The logical conversion from your timezone to universal time may not be what is intuitive. In some cases you will get a day difference when the dates passed in are obviously two days apart.\nAnd you think you can deal with that? There are some simplistic calendar systems in the world which are constantly changing depending on the harvest season and installed political rulers. If you want to convert their time to UTC, java.util.Date is going to fail you at the worst moment.\nIf you need to calculate the days between dates and it is critical that everything come out right, you need to get an external library called Joda Time: (They have taken care of all the details for you, so you can stay blissfully unaware of them): http://joda-time.sourceforge.net/index.html\n", "java.time\nThe java.time API, released with Java-8 in March 2014, supplanted the error-prone legacy date-time API. Since then, using this modern date-time API has been strongly recommended.\nSolution using modern date-time API\nUsing Calendar#toInstant, convert your java.util.Calendar instances into java.time.Instant and then into java.time.ZonedDateTime instances and then use ChronoUnit.DAYS.between to get the number of days between them.\nDemo:\nimport java.time.ZoneId;\nimport java.time.ZonedDateTime;\nimport java.time.temporal.ChronoUnit;\nimport java.util.Calendar;\n\npublic class Main {\n\n public static void main(String[] args) {\n // Sample start and end dates as java.util.Date\n Calendar startCal = Calendar.getInstance();\n startCal.set(2011, 10, 11); // 11 November 2011\n\n Calendar endCal = Calendar.getInstance();\n endCal.set(2011, 10, 13); // 13 November 2011\n\n // Convert the java.util.Calendar into java.time.ZonedDateTime\n // Replace ZoneId.systemDefault() with the applicable ZoneId\n ZonedDateTime startDateTime = startCal.toInstant().atZone(ZoneId.systemDefault());\n ZonedDateTime endDateTime = endCal.toInstant().atZone(ZoneId.systemDefault());\n\n // The end date is excluded by default. Subtract 1 to exclude the start date\n long days = ChronoUnit.DAYS.between(startDateTime, endDateTime) - 1;\n System.out.println(days);\n }\n}\n\nOutput:\n1\n\nLearn more about the modern Date-Time API from Trail: Date Time.\n", "I have two suggestions:\n\nMake sure your float day_count is calculated correctly\nfloat day_count = ((float)diff) / (24f * 60f * 60f * 1000f);\nIf it's rounding error, try using floor method\ndaysCount.setText(\"\" + (int)Math.floor(day_count)); \n\n", "\nDon't use floats for integer calculations.\nAre you sure your dates are days? The precision of the Date type is milliseconds. So the first thing you need to do is round the date to something which doesn't have hours. Example: It's just one hour from 23:30 2011-11-01 to 00:30 2011-11-02 but the two dates are on different days.\n\n", "If you are only going to be dealing with dates between the years 1900 and 2100, there is a simple calculation which will give you the number of days since 1900:\npublic static int daysSince1900(Date date) {\n Calendar c = new GregorianCalendar();\n c.setTime(date);\n\n int year = c.get(Calendar.YEAR);\n if (year < 1900 || year > 2099) {\n throw new IllegalArgumentException(\"daysSince1900 - Date must be between 1900 and 2099\");\n }\n year -= 1900;\n int month = c.get(Calendar.MONTH) + 1;\n int days = c.get(Calendar.DAY_OF_MONTH);\n\n if (month < 3) {\n month += 12;\n year--;\n }\n int yearDays = (int) (year * 365.25);\n int monthDays = (int) ((month + 1) * 30.61);\n\n return (yearDays + monthDays + days - 63);\n}\n\nThus, to get the difference in days between two dates, you calculate their days since 1900 and calc the difference. Our daysBetween method looks like this:\npublic static Integer getDaysBetween(Date date1, Date date2) {\n if (date1 == null || date2 == null) {\n return null;\n }\n\n int days1 = daysSince1900(date1);\n int days2 = daysSince1900(date2);\n\n if (days1 < days2) {\n return days2 - days1;\n } else {\n return days1 - days2;\n }\n}\n\nIn your case you would need to subtract an extra day (if the days are not equal).\nAnd don't ask me where this calculation came from because we've used it since the early '90s.\n" ]
[ 11, 7, 1, 0, 0, 0 ]
[]
[]
[ "date", "java" ]
stackoverflow_0007976989_date_java.txt
Q: Python, reduce list of string doesn't work with newline? I try to combine a list of string to string using reduce function but it doesn't work. I prefer to use reduce function anyway how do I fix this? >> reduce(lambda x, y: x + y + "\n", ["dog", "cat"]) # this doesn't work # dogcat >> "\n".join(["dog", "cat"]) # this works # dog # cat A: The purpose of join, is to put the element between each reduce(lambda x, y: x + "\n" + y, ["dog", "cat"]) A: ###################### METHOD 1 ###################### strings = ["This", "is", "a", "list", "of", "strings"] # join the strings using lambda joined = lambda strings: "\n".join(strings) print(joined(strings)) ###################### METHOD 2 ###################### mylist = ["a", "b", "c", "d", "e"] # use list comprehension to join the list of strings mystring = "\n".join([str(x) for x in mylist]) print(mystring) strings = ["This", "is", "a", "list", "of", "strings"] ###################### METHOD 3 ###################### import functools list_of_strings = ["a", "b", "c"] print(functools.reduce(lambda x, y: x + "\n" + y, list_of_strings))
Python, reduce list of string doesn't work with newline?
I try to combine a list of string to string using reduce function but it doesn't work. I prefer to use reduce function anyway how do I fix this? >> reduce(lambda x, y: x + y + "\n", ["dog", "cat"]) # this doesn't work # dogcat >> "\n".join(["dog", "cat"]) # this works # dog # cat
[ "The purpose of join, is to put the element between each\nreduce(lambda x, y: x + \"\\n\" + y, [\"dog\", \"cat\"])\n\n", "###################### METHOD 1 ######################\n\nstrings = [\"This\", \"is\", \"a\", \"list\", \"of\", \"strings\"]\n\n# join the strings using lambda\njoined = lambda strings: \"\\n\".join(strings)\nprint(joined(strings))\n\n###################### METHOD 2 ######################\n\nmylist = [\"a\", \"b\", \"c\", \"d\", \"e\"]\n\n# use list comprehension to join the list of strings\nmystring = \"\\n\".join([str(x) for x in mylist])\nprint(mystring)\n\nstrings = [\"This\", \"is\", \"a\", \"list\", \"of\", \"strings\"]\n\n###################### METHOD 3 ######################\nimport functools\nlist_of_strings = [\"a\", \"b\", \"c\"]\nprint(functools.reduce(lambda x, y: x + \"\\n\" + y, list_of_strings))\n\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074657363_python.txt
Q: Split hours & minutes from AWSDateTime I'm using a .split() function to return the time from an AWSDateTime type. I'm looking to set a variable using the split string, I need the time to only show hours & minutes & not seconds. AWSDateTime from DB - startDateTime = '2022-11-1111:11:11.482Z' Variable to be set in HH/MM - this.startTime = this.edit.onHold[0].startDateTime.split('T')[1]; I'm currently getting the value returned of - 11:11:11.482Z but what I'm looking for is 11:11 Is there a way using the .split() method to be more specific and remove the seconds. I need this as I'm splitting the variable to be used as a 'time' type on the HTML and currently this doesn't work. A: With split, there you go '2022-11-11T11:11:11.482Z'.split("T")[1].split(":").slice(0, 2).join(":") However I feel regex are usually more efficient '2022-11-11T11:11:11.482Z'.match(/\d{1,2}:\d{1,2}/)[0]
Split hours & minutes from AWSDateTime
I'm using a .split() function to return the time from an AWSDateTime type. I'm looking to set a variable using the split string, I need the time to only show hours & minutes & not seconds. AWSDateTime from DB - startDateTime = '2022-11-1111:11:11.482Z' Variable to be set in HH/MM - this.startTime = this.edit.onHold[0].startDateTime.split('T')[1]; I'm currently getting the value returned of - 11:11:11.482Z but what I'm looking for is 11:11 Is there a way using the .split() method to be more specific and remove the seconds. I need this as I'm splitting the variable to be used as a 'time' type on the HTML and currently this doesn't work.
[ "With split, there you go\n'2022-11-11T11:11:11.482Z'.split(\"T\")[1].split(\":\").slice(0, 2).join(\":\")\n\nHowever I feel regex are usually more efficient\n'2022-11-11T11:11:11.482Z'.match(/\\d{1,2}:\\d{1,2}/)[0]\n\n" ]
[ 1 ]
[]
[]
[ "html", "javascript", "types", "typescript" ]
stackoverflow_0074657333_html_javascript_types_typescript.txt
Q: pandas to_html: add attributes to table tag I'm using the pandas to_html() method to build a table for my website. I want to add some attributes to the <table> tag; however I'm not sure how to do this. my_table = Markup(df.to_html(classes="table")) Which produces: <table border="1" class="dataframe table"> I want to produce the following: <table border="1" class="dataframe table" attribute="value" attribute2="value2"> A: This can be achieved simply by manipulating the rendered html with a simple regular expression: import re df = pd.DataFrame(1, index=[1, 2], columns=list('AB')) html = df.to_html(classes="table") html = re.sub( r'<table([^>]*)>', r'<table\1 attribute="value" attribute2="value2">', html ) print(html.split('\n')[0]) <table border="1" class="dataframe table" attribute="value" attribute2="value2"> A: For a pandas-only solution, you can use a Styler object: df = pd.DataFrame(1, index=[1, 2], columns=list('AB')) styled = df.style.set_table_attributes('foo="foo" bar="bar"') print(styled.to_html()) Output: <style type="text/css"> </style> <table id="T_7d7d8" foo="foo" bar="bar"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_7d7d8_level0_col0" class="col_heading level0 col0" >A</th> <th id="T_7d7d8_level0_col1" class="col_heading level0 col1" >B</th> </tr> </thead> <tbody> <tr> <th id="T_7d7d8_level0_row0" class="row_heading level0 row0" >1</th> <td id="T_7d7d8_row0_col0" class="data row0 col0" >1</td> <td id="T_7d7d8_row0_col1" class="data row0 col1" >1</td> </tr> <tr> <th id="T_7d7d8_level0_row1" class="row_heading level0 row1" >2</th> <td id="T_7d7d8_row1_col0" class="data row1 col0" >1</td> <td id="T_7d7d8_row1_col1" class="data row1 col1" >1</td> </tr> </tbody> </table>
pandas to_html: add attributes to table tag
I'm using the pandas to_html() method to build a table for my website. I want to add some attributes to the <table> tag; however I'm not sure how to do this. my_table = Markup(df.to_html(classes="table")) Which produces: <table border="1" class="dataframe table"> I want to produce the following: <table border="1" class="dataframe table" attribute="value" attribute2="value2">
[ "This can be achieved simply by manipulating the rendered html with a simple regular expression:\nimport re\n\ndf = pd.DataFrame(1, index=[1, 2], columns=list('AB'))\n\nhtml = df.to_html(classes=\"table\")\nhtml = re.sub(\n r'<table([^>]*)>',\n r'<table\\1 attribute=\"value\" attribute2=\"value2\">',\n html\n)\n\nprint(html.split('\\n')[0])\n\n<table border=\"1\" class=\"dataframe table\" attribute=\"value\" attribute2=\"value2\">\n\n", "For a pandas-only solution, you can use a Styler object:\ndf = pd.DataFrame(1, index=[1, 2], columns=list('AB'))\nstyled = df.style.set_table_attributes('foo=\"foo\" bar=\"bar\"')\nprint(styled.to_html())\n\nOutput:\n<style type=\"text/css\">\n</style>\n<table id=\"T_7d7d8\" foo=\"foo\" bar=\"bar\">\n <thead>\n <tr>\n <th class=\"blank level0\" >&nbsp;</th>\n <th id=\"T_7d7d8_level0_col0\" class=\"col_heading level0 col0\" >A</th>\n <th id=\"T_7d7d8_level0_col1\" class=\"col_heading level0 col1\" >B</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th id=\"T_7d7d8_level0_row0\" class=\"row_heading level0 row0\" >1</th>\n <td id=\"T_7d7d8_row0_col0\" class=\"data row0 col0\" >1</td>\n <td id=\"T_7d7d8_row0_col1\" class=\"data row0 col1\" >1</td>\n </tr>\n <tr>\n <th id=\"T_7d7d8_level0_row1\" class=\"row_heading level0 row1\" >2</th>\n <td id=\"T_7d7d8_row1_col0\" class=\"data row1 col0\" >1</td>\n <td id=\"T_7d7d8_row1_col1\" class=\"data row1 col1\" >1</td>\n </tr>\n </tbody>\n</table>\n\n" ]
[ 6, 0 ]
[]
[]
[ "html", "pandas", "python" ]
stackoverflow_0043312995_html_pandas_python.txt
Q: Using a paramorphism inside of an apomorphism I'm trying to use paramorphisms and apomorhisms (in haskell): -- Fixed point of a Functor newtype Fix f = In (f (Fix f)) deriving instance (Eq (f (Fix f))) => Eq (Fix f) deriving instance (Ord (f (Fix f))) => Ord (Fix f) deriving instance (Show (f (Fix f))) => Show (Fix f) out :: Fix f -> f (Fix f) out (In f) = f type RAlgebra f a = f (Fix f, a) -> a para :: (Functor f) => RAlgebra f a -> Fix f -> a para rAlg = rAlg . fmap fanout . out where fanout t = (t, para rAlg t) -- Apomorphism type RCoalgebra f a = a -> f (Either (Fix f) a) apo :: Functor f => RCoalgebra f a -> a -> Fix f apo rCoalg = In . fmap fanin . rCoalg where fanin = either id (apo rCoalg) to define the following recursive function: fun concat3 (v,E,r) = add(r,v) | concat3 (v,l,E) = add(l,v) | concat3 (v, l as T(v1,n1,l1,r1), r as T(v2,n2,l2,r2)) = if weight*n1 < n2 then T’(v2,concat3(v,l,l2),r2) else if weight*n2 < n1 then T’(v1,l1,concat3(v,r1,r)) else N(v,l,r) It takes two binary trees and an element that is greater than the values in the left tree and less than the values in the right tree and combines them into one binary tree :: value -> tree1 -> tree2 -> tree3 I have defined the add function (which inserts an element into a binary tree) as a paramorphism like so: add :: Ord a => a -> RAlgebra (ATreeF a) (ATreeF' a) add elem EmptyATreeF = In (NodeATreeF elem 1 (In EmptyATreeF) (In EmptyATreeF)) add elem (NodeATreeF cur _ (prevLeft, left) (prevRight, right)) | elem < cur = bATreeConstruct cur left prevRight | elem > cur = bATreeConstruct cur prevLeft right | otherwise = nATreeConstruct cur prevLeft prevRight When I try to write concat3 as an apomorphism: concat3 :: Ord a => a -> RCoalgebra (ATreeF a) (ATreeF' a, ATreeF' a) concat3 elem (In EmptyATreeF, In (NodeATreeF cur2 size2 left2 right2)) = out para (insertATreeFSetPAlg elem) (In (NodeATreeF cur2 size2 (Left left2) (Left right2))) ... Because the next level of the apomorphism has not been evaluated yet, I get a type error from the compiler. Couldn't match type: Fix (ATreeF a) with: Either (Fix (ATreeF a)) (ATreeF' a, ATreeF' a) Expected: ATreeF a (Either (Fix (ATreeF a)) (ATreeF' a, ATreeF' a)) Actual: ATreeF a (Fix (ATreeF a)) Is there another approach I can take? A: Some missing context to explain the solution is that this is from an implementation of weight-balanced trees, specifically Adams's variant (which happens to be the data structure behind Data.Set and Data.Map.) A problem when writing concat3 as a coalgebra is that it is not corecursive, strictly speaking, because the recursive calls of concat3 are under a smart constructor T', i.e., a function (which does some non-trivial rebalancing). A solution is to introduce an intermediate representation which delays the evaluation of that smart constructor. -- | Tree with delayed rebalancing operations T', or Id when no rebalancing is needed data TreeF1 a x = E1 | T' a x x | Id (Tree a) deriving Functor So we can write a coalgebra of TreeF1: concatAlg :: Ord a => a -> RCoalgebra (TreeF1 a) (Tree a, Tree a) concatAlg v (In E, r) = Id (add r v) concatAlg v (l, In E) = Id (add l v) concatAlg v (l@(In (T v1 n1 l1 r1)), r@(In (T v2 n2 l2 r2))) = if balance * n1 < n2 then T' v2 (Right (l, l2)) (Left (In (Id r2))) else if balance * n2 < n1 then T' v1 (Left (In (Id l1))) (Right (r1, r)) else Id (_N v1 l r) {- Reference implementation for comparison: fun concat3 (v,E,r) = add(r,v) | concat3 (v,l,E) = add(l,v) | concat3 (v, l as T(v1,n1,l1,r1), r as T(v2,n2,l2,r2)) = if weight*n1 < n2 then T’(v2,concat3(v,l,l2),r2) else if weight*n2 < n1 then T’(v1,l1,concat3(v,r1,r)) else N(v,l,r) -} And we can convert a Fix (TreeF1 a) to Fix (Tree a) via a catamorphism, finally executing those delayed applications of rebalancing T'. _T :: a -> Tree a -> Tree a -> Tree a _T = error "todo: rebalance" type Algebra f a = f a -> a -- do the rebalancing on T' v l r nodes rebalanceAlg :: Algebra (TreeF1 a) (Tree a) rebalanceAlg E1 = In E rebalanceAlg (T' v l r) = _T v l r rebalanceAlg (Id t) = t So concat3 is a composition of cata and apo using the above algebras: concat3 :: Ord a => a -> Tree a -> Tree a -> Tree a concat3 v l r = (cata rebalanceAlg . apo (concatAlg v)) (l, r) You can fuse cata and apo so that, after some elementary compiler optimizations, the intermediate tree does not get allocated: -- fusion of (cata _ . apo _) cataApo :: Functor f => Algebra f b -> RCoalgebra f a -> a -> b cataApo alg coalg = go where go x = alg (either (cata alg) go <$> coalg x) concat3' :: Ord a => a -> Tree a -> Tree a -> Tree a concat3' v l r = cataApo rebalanceAlg (concatAlg v) (l, r) Full gist: https://gist.github.com/Lysxia/281010fbe40eac9be0b135d4733c3d5a
Using a paramorphism inside of an apomorphism
I'm trying to use paramorphisms and apomorhisms (in haskell): -- Fixed point of a Functor newtype Fix f = In (f (Fix f)) deriving instance (Eq (f (Fix f))) => Eq (Fix f) deriving instance (Ord (f (Fix f))) => Ord (Fix f) deriving instance (Show (f (Fix f))) => Show (Fix f) out :: Fix f -> f (Fix f) out (In f) = f type RAlgebra f a = f (Fix f, a) -> a para :: (Functor f) => RAlgebra f a -> Fix f -> a para rAlg = rAlg . fmap fanout . out where fanout t = (t, para rAlg t) -- Apomorphism type RCoalgebra f a = a -> f (Either (Fix f) a) apo :: Functor f => RCoalgebra f a -> a -> Fix f apo rCoalg = In . fmap fanin . rCoalg where fanin = either id (apo rCoalg) to define the following recursive function: fun concat3 (v,E,r) = add(r,v) | concat3 (v,l,E) = add(l,v) | concat3 (v, l as T(v1,n1,l1,r1), r as T(v2,n2,l2,r2)) = if weight*n1 < n2 then T’(v2,concat3(v,l,l2),r2) else if weight*n2 < n1 then T’(v1,l1,concat3(v,r1,r)) else N(v,l,r) It takes two binary trees and an element that is greater than the values in the left tree and less than the values in the right tree and combines them into one binary tree :: value -> tree1 -> tree2 -> tree3 I have defined the add function (which inserts an element into a binary tree) as a paramorphism like so: add :: Ord a => a -> RAlgebra (ATreeF a) (ATreeF' a) add elem EmptyATreeF = In (NodeATreeF elem 1 (In EmptyATreeF) (In EmptyATreeF)) add elem (NodeATreeF cur _ (prevLeft, left) (prevRight, right)) | elem < cur = bATreeConstruct cur left prevRight | elem > cur = bATreeConstruct cur prevLeft right | otherwise = nATreeConstruct cur prevLeft prevRight When I try to write concat3 as an apomorphism: concat3 :: Ord a => a -> RCoalgebra (ATreeF a) (ATreeF' a, ATreeF' a) concat3 elem (In EmptyATreeF, In (NodeATreeF cur2 size2 left2 right2)) = out para (insertATreeFSetPAlg elem) (In (NodeATreeF cur2 size2 (Left left2) (Left right2))) ... Because the next level of the apomorphism has not been evaluated yet, I get a type error from the compiler. Couldn't match type: Fix (ATreeF a) with: Either (Fix (ATreeF a)) (ATreeF' a, ATreeF' a) Expected: ATreeF a (Either (Fix (ATreeF a)) (ATreeF' a, ATreeF' a)) Actual: ATreeF a (Fix (ATreeF a)) Is there another approach I can take?
[ "Some missing context to explain the solution is that this is from an implementation of weight-balanced trees, specifically Adams's variant (which happens to be the data structure behind Data.Set and Data.Map.)\nA problem when writing concat3 as a coalgebra is that it is not corecursive, strictly speaking, because the recursive calls of concat3 are under a smart constructor T', i.e., a function (which does some non-trivial rebalancing).\nA solution is to introduce an intermediate representation which delays the evaluation of that smart constructor.\n-- | Tree with delayed rebalancing operations T', or Id when no rebalancing is needed\ndata TreeF1 a x = E1 | T' a x x | Id (Tree a)\n deriving Functor\n\nSo we can write a coalgebra of TreeF1:\nconcatAlg :: Ord a => a -> RCoalgebra (TreeF1 a) (Tree a, Tree a)\nconcatAlg v (In E, r) = Id (add r v)\nconcatAlg v (l, In E) = Id (add l v)\nconcatAlg v (l@(In (T v1 n1 l1 r1)), r@(In (T v2 n2 l2 r2))) =\n if balance * n1 < n2 then T' v2 (Right (l, l2)) (Left (In (Id r2)))\n else if balance * n2 < n1 then T' v1 (Left (In (Id l1))) (Right (r1, r))\n else Id (_N v1 l r)\n\n{- Reference implementation for comparison:\nfun concat3 (v,E,r) = add(r,v)\n | concat3 (v,l,E) = add(l,v)\n | concat3 (v, l as T(v1,n1,l1,r1), r as T(v2,n2,l2,r2)) =\n if weight*n1 < n2 then T’(v2,concat3(v,l,l2),r2)\n else if weight*n2 < n1 then T’(v1,l1,concat3(v,r1,r))\n else N(v,l,r)\n-}\n\nAnd we can convert a Fix (TreeF1 a) to Fix (Tree a) via a catamorphism, finally executing those delayed applications of rebalancing T'.\n_T :: a -> Tree a -> Tree a -> Tree a\n_T = error \"todo: rebalance\"\n\ntype Algebra f a = f a -> a\n\n-- do the rebalancing on T' v l r nodes\nrebalanceAlg :: Algebra (TreeF1 a) (Tree a)\nrebalanceAlg E1 = In E\nrebalanceAlg (T' v l r) = _T v l r\nrebalanceAlg (Id t) = t\n\nSo concat3 is a composition of cata and apo using the above algebras:\nconcat3 :: Ord a => a -> Tree a -> Tree a -> Tree a\nconcat3 v l r = (cata rebalanceAlg . apo (concatAlg v)) (l, r)\n\nYou can fuse cata and apo so that, after some elementary compiler optimizations, the intermediate tree does not get allocated:\n-- fusion of (cata _ . apo _)\ncataApo :: Functor f => Algebra f b -> RCoalgebra f a -> a -> b\ncataApo alg coalg = go\n where\n go x = alg (either (cata alg) go <$> coalg x)\n\nconcat3' :: Ord a => a -> Tree a -> Tree a -> Tree a\nconcat3' v l r = cataApo rebalanceAlg (concatAlg v) (l, r)\n\nFull gist: https://gist.github.com/Lysxia/281010fbe40eac9be0b135d4733c3d5a\n" ]
[ 0 ]
[]
[]
[ "haskell", "recursion_schemes" ]
stackoverflow_0074651343_haskell_recursion_schemes.txt
Q: How to write to Blob Storage in Azure SQL Server using TSql? I'm creating a stored procedure which gets executed when a CSV is uploaded to Blob Storage. This file is then processed using TSQL and wish to write the result to a file I have been able to read a file and process it using DATA_SOURCE, database scoped credential and external data source. I'm however stuck on writing the output back to a different blob container. How would I do this? A: If it was me, I'd use Azure Data Factory, you can create a pipeline that's activated when a file is added to a blob, have it import that file, run an SP and export the results to a blob. That maybe an Azure function that is activated on changes to a blob container.
How to write to Blob Storage in Azure SQL Server using TSql?
I'm creating a stored procedure which gets executed when a CSV is uploaded to Blob Storage. This file is then processed using TSQL and wish to write the result to a file I have been able to read a file and process it using DATA_SOURCE, database scoped credential and external data source. I'm however stuck on writing the output back to a different blob container. How would I do this?
[ "If it was me, I'd use Azure Data Factory, you can create a pipeline that's activated when a file is added to a blob, have it import that file, run an SP and export the results to a blob.\nThat maybe an Azure function that is activated on changes to a blob container.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_blob_storage", "sql" ]
stackoverflow_0074655764_azure_azure_blob_storage_sql.txt
Q: How can I expose my postgresSQL pods with LoadBalancer service? I setup 1 master node and 2 worker nodes on bare matel server. I deploy my postgressSQL with 3 replica sets. This is my deployment file. apiVersion: apps/v1 kind: Deployment metadata: name: postgres spec: selector: matchLabels: app: postgres replicas: 3 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:latest imagePullPolicy: "IfNotPresent" envFrom: - configMapRef: name: postgres-config volumeMounts: - mountPath: /var/lib/postgresql/data name: postgredb volumes: - name: postgredb persistentVolumeClaim: claimName: postgres-pv-claim --- kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume labels: type: local app: postgres spec: storageClassName: standard capacity: storage: 15Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/data" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim labels: app: postgres spec: storageClassName: standard accessModes: - ReadWriteMany resources: requests: storage: 15Gi --- apiVersion: v1 kind: ConfigMap metadata: name: postgres-config labels: app: postgres data: POSTGRES_DB: postgresdb POSTGRES_USER: postgres POSTGRES_PASSWORD: root --- I also follow the MetalLB https://metallb.universe.tf/installation/ and set up layer2 load balancer. which is running fine and I can even expose nginx pod with this service. As you can see here. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h28m nginx LoadBalancer 10.107.29.158 153.10.19.35 80:30703/TCP 162m These are my running pods NAME READY STATUS RESTARTS AGE nginx-76d6c9b8c-lrljz 1/1 Running 0 3h29m postgres-7dff8d6d74-8mlnt 1/1 Running 0 136m postgres-7dff8d6d74-9zxsk 1/1 Running 0 136m postgres-7dff8d6d74-xzkkx 1/1 Running 0 136m What Issue Do I face? When I try to expose the postgresSQL pods with load balancer I am not able to connect. Server is not reachable. I try to expose as follow kubectl expose deploy postgres --port 30432 --type LoadBalancer I also try to create a yaml file for this service and still not successful. kind: Service apiVersion: v1 metadata: name: postgres-svc labels: app: postgres spec: type: LoadBalancer ports: - port: 5432 targetPort: 30432 type: LoadBalancer selector: metallb-service: postgres What Do I expect? I want to expose my Pods to external network with this load balancer service so all the new data should be updated in all 3 replica set. Can you please help me to fix my service.yaml file? I will be very thanks full A: You don't specify a port in your postres container. With kubectl expose you should specify a targetPort: kubectl expose deploy postgres --port 30432 --target-port 5432 --type LoadBalancer In your YAML you have to switch ports: kind: Service apiVersion: v1 metadata: name: postgres-svc labels: app: postgres spec: type: LoadBalancer ports: - port: 30432 targetPort: 5432 type: LoadBalancer selector: app: postgres Here also the selector was wrong. It has to match labels on the pod.
How can I expose my postgresSQL pods with LoadBalancer service?
I setup 1 master node and 2 worker nodes on bare matel server. I deploy my postgressSQL with 3 replica sets. This is my deployment file. apiVersion: apps/v1 kind: Deployment metadata: name: postgres spec: selector: matchLabels: app: postgres replicas: 3 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:latest imagePullPolicy: "IfNotPresent" envFrom: - configMapRef: name: postgres-config volumeMounts: - mountPath: /var/lib/postgresql/data name: postgredb volumes: - name: postgredb persistentVolumeClaim: claimName: postgres-pv-claim --- kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume labels: type: local app: postgres spec: storageClassName: standard capacity: storage: 15Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/data" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim labels: app: postgres spec: storageClassName: standard accessModes: - ReadWriteMany resources: requests: storage: 15Gi --- apiVersion: v1 kind: ConfigMap metadata: name: postgres-config labels: app: postgres data: POSTGRES_DB: postgresdb POSTGRES_USER: postgres POSTGRES_PASSWORD: root --- I also follow the MetalLB https://metallb.universe.tf/installation/ and set up layer2 load balancer. which is running fine and I can even expose nginx pod with this service. As you can see here. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h28m nginx LoadBalancer 10.107.29.158 153.10.19.35 80:30703/TCP 162m These are my running pods NAME READY STATUS RESTARTS AGE nginx-76d6c9b8c-lrljz 1/1 Running 0 3h29m postgres-7dff8d6d74-8mlnt 1/1 Running 0 136m postgres-7dff8d6d74-9zxsk 1/1 Running 0 136m postgres-7dff8d6d74-xzkkx 1/1 Running 0 136m What Issue Do I face? When I try to expose the postgresSQL pods with load balancer I am not able to connect. Server is not reachable. I try to expose as follow kubectl expose deploy postgres --port 30432 --type LoadBalancer I also try to create a yaml file for this service and still not successful. kind: Service apiVersion: v1 metadata: name: postgres-svc labels: app: postgres spec: type: LoadBalancer ports: - port: 5432 targetPort: 30432 type: LoadBalancer selector: metallb-service: postgres What Do I expect? I want to expose my Pods to external network with this load balancer service so all the new data should be updated in all 3 replica set. Can you please help me to fix my service.yaml file? I will be very thanks full
[ "You don't specify a port in your postres container.\nWith kubectl expose you should specify a targetPort:\nkubectl expose deploy postgres --port 30432 --target-port 5432 --type LoadBalancer\n\nIn your YAML you have to switch ports:\nkind: Service\napiVersion: v1\nmetadata:\n name: postgres-svc\n labels:\n app: postgres\nspec:\n type: LoadBalancer\n ports:\n - port: 30432\n targetPort: 5432\n type: LoadBalancer\n selector:\n app: postgres\n\nHere also the selector was wrong. It has to match labels on the pod.\n" ]
[ 0 ]
[]
[]
[ "kubectl", "kubernetes", "postgresql" ]
stackoverflow_0074657146_kubectl_kubernetes_postgresql.txt
Q: Why does changing the kernel_initializer lead to NaN loss? I am running an advantage actor-critic (A2C) reinforcement learning model, but when I change the kernel_initializer, it gives me an error where my state has value. Moreover, it works only when kernel_initializer=tf.zeros_initializer(). I have changed the model to this code, and I'm facing a different problem: repeating the same action. However, when I changed the kernel_initializer to tf.zeros_initializer(), it started to choose different actions. state =[-103.91446672 -109. 7.93509779 0. 0. 1. ] The model class Actor: """The actor class""" def __init__(self, sess, num_actions, observation_shape, config): self._sess = sess self._state = tf.placeholder(dtype=tf.float32, shape=observation_shape, name='state') self._action = tf.placeholder(dtype=tf.int32, name='action') self._target = tf.placeholder(dtype=tf.float32, name='target') self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) self._output_layer = tf.layers.dense(inputs=self._hidden_layer, units=num_actions, kernel_initializer=tf.zeros_initializer()) self._action_probs = tf.squeeze(tf.nn.softmax(self._output_layer)) self._picked_action_prob = tf.gather(self._action_probs, self._action) self._loss = -tf.log(self._picked_action_prob) * self._target self._optimizer = tf.train.AdamOptimizer(learning_rate=config.learning_rate) self._train_op = self._optimizer.minimize(self._loss) def predict(self, s): return self._sess.run(self._action_probs, {self._state: s}) def update(self, s, a, target): self._sess.run(self._train_op, {self._state: s, self._action: a, self._target: target}) class Critic: """The critic class""" def __init__(self, sess, observation_shape, config): self._sess = sess self._config = config self._name = config.critic_name self._observation_shape = observation_shape self._build_model() def _build_model(self): with tf.variable_scope(self._name): self._state = tf.placeholder(dtype=tf.float32, shape=self._observation_shape, name='state') self._target = tf.placeholder(dtype=tf.float32, name='target') self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) self._out = tf.layers.dense(inputs=self._hidden_layer, units=1, kernel_initializer=tf.zeros_initializer()) self._value_estimate = tf.squeeze(self._out) self._loss = tf.squared_difference(self._out, self._target) self._optimizer = tf.train.AdamOptimizer(learning_rate=self._config.learning_rate) self._update_step = self._optimizer.minimize(self._loss) def predict(self, s): return self._sess.run(self._value_estimate, feed_dict={self._state: s}) def update(self, s, target): self._sess.run(self._update_step, feed_dict={self._state: s, self._target: target}) The problem is that I need the learning process to be improved. So, I thought if I changed the kernel_initializer, it might improve, but it gave me this error message. action = np.random.choice(np.arange(lenaction), p=action_prob) File "mtrand.pyx", line 935, in numpy.random.mtrand.RandomState.choice ValueError: probabilities contain NaN Any Idea what causing this? A: Using a kernel_initializer of tf.zeros_initializer() for your dense layers in the actor and critic networks can lead to the issue you are experiencing, where the loss becomes NaN and the model repeats the same action. This is because using a kernel_initializer of tf.zeros_initializer() initializes all of the weights in the dense layers to zeros, which can prevent the network from learning. In general, it is better to use a different kernel_initializer for your dense layers, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). These initializers initialize the weights with random values, which allows the network to learn and produce more diverse outputs. To fix the issue with your model, you can try changing the kernel_initializer for your dense layers to a different value, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). This should allow your network to learn and avoid the issue where the loss becomes NaN and the model repeats the same action. You can also try using a different optimizer, such as RMSProp or Adagrad, which may be better suited for this problem. Additionally, you can try adjusting the learning rate and other hyperparameters of the model to see if that improves its performance. If the tf.zeros_initializer initializer is the only initializer that works for your network, but the performance is not good, there are several steps you can take to improve the performance of your network. First, you can try adjusting the parameters of the tf.zeros_initializer initializer to fine-tune the starting weights for your network. The tf.zeros_initializer initializer does not have any parameters, so you will need to use a different initializer and adjust its parameters to control the starting weights for your network. For example, you can try using the tf.random_normal_initializer initializer, which will provide random starting weights for the network. You can adjust the mean and stddev parameters to control the distribution of the starting weights, and experiment with different values to see which provides the best performance for your network. Alternatively, you can try adjusting other hyperparameters, such as the learning rate or the optimizer, to improve the performance of your network. For example, you can try using a different optimizer, such as the Adam optimizer or the RMSprop optimizer, to see if it provides better performance for your network. You can also try modifying the state, action, and reward definitions for your network to see if a different representation improves the performance of your network. For example, you can try using a different state representation, such as a different set of features or a different scaling or normalization method, to see if it improves the performance of your network. Finally, you can try using more data or more complex network architectures to improve the performance of your network. For example, you can try using a larger dataset, or a deeper or wider network, to see if it provides better performance for your network. For more information, see the TensorFlow documentation on training and evaluating neural networks. https://www.tensorflow.org/guide/keras/train_and_evaluate
Why does changing the kernel_initializer lead to NaN loss?
I am running an advantage actor-critic (A2C) reinforcement learning model, but when I change the kernel_initializer, it gives me an error where my state has value. Moreover, it works only when kernel_initializer=tf.zeros_initializer(). I have changed the model to this code, and I'm facing a different problem: repeating the same action. However, when I changed the kernel_initializer to tf.zeros_initializer(), it started to choose different actions. state =[-103.91446672 -109. 7.93509779 0. 0. 1. ] The model class Actor: """The actor class""" def __init__(self, sess, num_actions, observation_shape, config): self._sess = sess self._state = tf.placeholder(dtype=tf.float32, shape=observation_shape, name='state') self._action = tf.placeholder(dtype=tf.int32, name='action') self._target = tf.placeholder(dtype=tf.float32, name='target') self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) self._output_layer = tf.layers.dense(inputs=self._hidden_layer, units=num_actions, kernel_initializer=tf.zeros_initializer()) self._action_probs = tf.squeeze(tf.nn.softmax(self._output_layer)) self._picked_action_prob = tf.gather(self._action_probs, self._action) self._loss = -tf.log(self._picked_action_prob) * self._target self._optimizer = tf.train.AdamOptimizer(learning_rate=config.learning_rate) self._train_op = self._optimizer.minimize(self._loss) def predict(self, s): return self._sess.run(self._action_probs, {self._state: s}) def update(self, s, a, target): self._sess.run(self._train_op, {self._state: s, self._action: a, self._target: target}) class Critic: """The critic class""" def __init__(self, sess, observation_shape, config): self._sess = sess self._config = config self._name = config.critic_name self._observation_shape = observation_shape self._build_model() def _build_model(self): with tf.variable_scope(self._name): self._state = tf.placeholder(dtype=tf.float32, shape=self._observation_shape, name='state') self._target = tf.placeholder(dtype=tf.float32, name='target') self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) self._out = tf.layers.dense(inputs=self._hidden_layer, units=1, kernel_initializer=tf.zeros_initializer()) self._value_estimate = tf.squeeze(self._out) self._loss = tf.squared_difference(self._out, self._target) self._optimizer = tf.train.AdamOptimizer(learning_rate=self._config.learning_rate) self._update_step = self._optimizer.minimize(self._loss) def predict(self, s): return self._sess.run(self._value_estimate, feed_dict={self._state: s}) def update(self, s, target): self._sess.run(self._update_step, feed_dict={self._state: s, self._target: target}) The problem is that I need the learning process to be improved. So, I thought if I changed the kernel_initializer, it might improve, but it gave me this error message. action = np.random.choice(np.arange(lenaction), p=action_prob) File "mtrand.pyx", line 935, in numpy.random.mtrand.RandomState.choice ValueError: probabilities contain NaN Any Idea what causing this?
[ "Using a kernel_initializer of tf.zeros_initializer() for your dense layers in the actor and critic networks can lead to the issue you are experiencing, where the loss becomes NaN and the model repeats the same action. This is because using a kernel_initializer of tf.zeros_initializer() initializes all of the weights in the dense layers to zeros, which can prevent the network from learning.\nIn general, it is better to use a different kernel_initializer for your dense layers, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). These initializers initialize the weights with random values, which allows the network to learn and produce more diverse outputs.\nTo fix the issue with your model, you can try changing the kernel_initializer for your dense layers to a different value, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). This should allow your network to learn and avoid the issue where the loss becomes NaN and the model repeats the same action.\nYou can also try using a different optimizer, such as RMSProp or Adagrad, which may be better suited for this problem. Additionally, you can try adjusting the learning rate and other hyperparameters of the model to see if that improves its performance.\nIf the tf.zeros_initializer initializer is the only initializer that works for your network, but the performance is not good, there are several steps you can take to improve the performance of your network.\nFirst, you can try adjusting the parameters of the tf.zeros_initializer initializer to fine-tune the starting weights for your network. The tf.zeros_initializer initializer does not have any parameters, so you will need to use a different initializer and adjust its parameters to control the starting weights for your network.\nFor example, you can try using the tf.random_normal_initializer initializer, which will provide random starting weights for the network. You can adjust the mean and stddev parameters to control the distribution of the starting weights, and experiment with different values to see which provides the best performance for your network.\nAlternatively, you can try adjusting other hyperparameters, such as the learning rate or the optimizer, to improve the performance of your network. For example, you can try using a different optimizer, such as the Adam optimizer or the RMSprop optimizer, to see if it provides better performance for your network.\nYou can also try modifying the state, action, and reward definitions for your network to see if a different representation improves the performance of your network. For example, you can try using a different state representation, such as a different set of features or a different scaling or normalization method, to see if it improves the performance of your network.\nFinally, you can try using more data or more complex network architectures to improve the performance of your network. For example, you can try using a larger dataset, or a deeper or wider network, to see if it provides better performance for your network. For more information, see the TensorFlow documentation on training and evaluating neural networks. https://www.tensorflow.org/guide/keras/train_and_evaluate\n" ]
[ 0 ]
[]
[]
[ "actor_critics", "python", "random", "tensorflow" ]
stackoverflow_0074612124_actor_critics_python_random_tensorflow.txt
Q: How to change Postgres connection at run time using Adonis.js? I have a stream replication db running and I would like to change the connection to it when the main db fail. I have set up the connection for both dbs on config/database.ts and tried to use the connection menager event as in adonis docs : Doc example: import Database from "@ioc:Adonis/Lucid/Database"; Database.manager.on('db:connection:error', (connection) => { console.log(self === connection) // true }) I tried to use it on start/events.ts but it doesn't recognize the 'on' method. Property 'on' does not exist on type 'ConnectionManagerContract' Any idea on how to make this work? Thanks. A: It looks like you are trying to use the on method of the ConnectionManagerContract interface in the AdonisJS web framework. In TypeScript, you cannot call methods on an interface directly. Instead, you can create an instance of the class that implements the interface and call the method on that instance. In the case of the ConnectionManagerContract interface, the implementing class is the ConnectionManager class. Here is an example of how you can use the on method of the ConnectionManager class to listen for db:connection:error events: import Database from "@ioc:Adonis/Lucid/Database"; const manager = new Database.manager.ConnectionManager(); manager.on('db:connection:error', (connection) => { console.log(connection); }); You can then use the manager instance to access other methods of the ConnectionManager class, such as create or acquire to create or acquire connections to your databases. I hope this helps! Let me know if you have any other questions.
How to change Postgres connection at run time using Adonis.js?
I have a stream replication db running and I would like to change the connection to it when the main db fail. I have set up the connection for both dbs on config/database.ts and tried to use the connection menager event as in adonis docs : Doc example: import Database from "@ioc:Adonis/Lucid/Database"; Database.manager.on('db:connection:error', (connection) => { console.log(self === connection) // true }) I tried to use it on start/events.ts but it doesn't recognize the 'on' method. Property 'on' does not exist on type 'ConnectionManagerContract' Any idea on how to make this work? Thanks.
[ "It looks like you are trying to use the on method of the ConnectionManagerContract interface in the AdonisJS web framework.\nIn TypeScript, you cannot call methods on an interface directly. Instead, you can create an instance of the class that implements the interface and call the method on that instance. In the case of the ConnectionManagerContract interface, the implementing class is the ConnectionManager class.\nHere is an example of how you can use the on method of the ConnectionManager class to listen for db:connection:error events:\nimport Database from \"@ioc:Adonis/Lucid/Database\";\n\nconst manager = new Database.manager.ConnectionManager();\n\nmanager.on('db:connection:error', (connection) => {\n console.log(connection);\n});\n\nYou can then use the manager instance to access other methods of the ConnectionManager class, such as create or acquire to create or acquire connections to your databases.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "adonis.js", "javascript", "lucid", "postgresql" ]
stackoverflow_0074656914_adonis.js_javascript_lucid_postgresql.txt
Q: Azure DevOps YAML stage template I have two environments, preprod and prod and they are pretty much the same. So I created an yaml file, InfurstructureTemplate.yaml parameters: xxxx jobs: - job: provision_job I want to use this template for my two environments, here is what in mind: stages: - stage: PreProd Environment - template: InfurstructureTemplate.yaml - parameters: xxxx - stage: Prod Environment - template: InfurstructureTemplate.yaml - parameters: xxxx Is this the right way to use yaml template? When I googled this, seems template is on Stages level and you can't put it on stage. A: Yes, you can do it very similar, put the template in the jobs: stages: - stage: PreProd_Environment displayName: PreProd Environment jobs: - template: InfurstructureTemplate.yaml parameters: projectDir: xxx - stage: Prod_Environment displayName: Prod Environment dependsOn: [ PreProd_Environment ] jobs: - template: InfurstructureTemplate.yaml parameters: projectDir: xxx A: You may try this too as described here stages: - template: InfurstructureTemplate.yaml parameters: xxxx - template: InfurstructureTemplate.yaml parameters: xxxx
Azure DevOps YAML stage template
I have two environments, preprod and prod and they are pretty much the same. So I created an yaml file, InfurstructureTemplate.yaml parameters: xxxx jobs: - job: provision_job I want to use this template for my two environments, here is what in mind: stages: - stage: PreProd Environment - template: InfurstructureTemplate.yaml - parameters: xxxx - stage: Prod Environment - template: InfurstructureTemplate.yaml - parameters: xxxx Is this the right way to use yaml template? When I googled this, seems template is on Stages level and you can't put it on stage.
[ "Yes, you can do it very similar, put the template in the jobs:\nstages:\n - stage: PreProd_Environment\n displayName: PreProd Environment\n jobs:\n - template: InfurstructureTemplate.yaml\n parameters:\n projectDir: xxx\n\n - stage: Prod_Environment\n displayName: Prod Environment\n dependsOn: [ PreProd_Environment ]\n jobs:\n - template: InfurstructureTemplate.yaml\n parameters:\n projectDir: xxx\n\n", "You may try this too as described here\nstages:\n- template: InfurstructureTemplate.yaml\n parameters:\n xxxx\n\n- template: InfurstructureTemplate.yaml\n parameters:\n xxxx\n\n" ]
[ 4, 0 ]
[]
[]
[ "azure_devops", "azure_pipelines", "azure_pipelines_yaml", "yaml" ]
stackoverflow_0068727538_azure_devops_azure_pipelines_azure_pipelines_yaml_yaml.txt
Q: How to send data with NavigationComponent in Android In my application I have 2 fragment and for show these I use NavigationComponent and I want sent some data. My navigation codes: <navigation xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/nav_home" app:startDestination="@id/recipesFragment"> <fragment android:id="@+id/recipesFragment" android:name="myapp.ui.fragments.recipe.RecipesFragment" android:label="@string/recipes" tools:layout="@layout/fragment_recipes" > <action android:id="@+id/action_recipesFragment_to_menuFragment" app:destination="@id/menuFragment" /> <argument android:name="isUpdatedData" app:argType="boolean" android:defaultValue="false" /> </fragment> <dialog android:id="@+id/menuFragment" android:name="myapp.ui.fragments.recipe.menu.MenuFragment" android:label="fragment_menu" tools:layout="@layout/fragment_menu" > <action android:id="@+id/action_menuFragment_to_recipesFragment" app:destination="@id/recipesFragment" /> </dialog> </navigation> I set argument with default value, but when use this action into MenuFragment not access to me for set value! MenuFragment codes: submitBtn.setOnClickListener { val action = MenuFragmentDirections.actionMenuFragmentToRecipesFragment(true) findNavController().navigate(action) } When write true for value of actionMenuFragmentToRecipesFragment show me below error: Too many arguments for public open fun actionMenuFragmentToRecipesFragment(): MenuFragmentDirections.ActionMenuFragmentToRecipesFragment defined in myapp.ui.fragments.recipe.menu.MenuFragmentDirections How can I fix it and set data with defaultValue? A: Hi I hope this answer be useful for you if you would like to use default value below code will help you: findNavController().navigate(MenuFragmentDirections.actionMenuFragmentToRecipesFragment().setIsUpdatedData(true)) when you set default value for your argument compiler recognize code does not need setter and if you want change it you have to call setter method of argument in your navigation for custom fragment
How to send data with NavigationComponent in Android
In my application I have 2 fragment and for show these I use NavigationComponent and I want sent some data. My navigation codes: <navigation xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/nav_home" app:startDestination="@id/recipesFragment"> <fragment android:id="@+id/recipesFragment" android:name="myapp.ui.fragments.recipe.RecipesFragment" android:label="@string/recipes" tools:layout="@layout/fragment_recipes" > <action android:id="@+id/action_recipesFragment_to_menuFragment" app:destination="@id/menuFragment" /> <argument android:name="isUpdatedData" app:argType="boolean" android:defaultValue="false" /> </fragment> <dialog android:id="@+id/menuFragment" android:name="myapp.ui.fragments.recipe.menu.MenuFragment" android:label="fragment_menu" tools:layout="@layout/fragment_menu" > <action android:id="@+id/action_menuFragment_to_recipesFragment" app:destination="@id/recipesFragment" /> </dialog> </navigation> I set argument with default value, but when use this action into MenuFragment not access to me for set value! MenuFragment codes: submitBtn.setOnClickListener { val action = MenuFragmentDirections.actionMenuFragmentToRecipesFragment(true) findNavController().navigate(action) } When write true for value of actionMenuFragmentToRecipesFragment show me below error: Too many arguments for public open fun actionMenuFragmentToRecipesFragment(): MenuFragmentDirections.ActionMenuFragmentToRecipesFragment defined in myapp.ui.fragments.recipe.menu.MenuFragmentDirections How can I fix it and set data with defaultValue?
[ "Hi I hope this answer be useful for you\nif you would like to use default value below code will help you:\nfindNavController().navigate(MenuFragmentDirections.actionMenuFragmentToRecipesFragment().setIsUpdatedData(true))\nwhen you set default value for your argument compiler recognize code does not need setter and if you want change it you have to call setter method of argument in your navigation for custom fragment\n" ]
[ 1 ]
[]
[]
[ "android" ]
stackoverflow_0074655974_android.txt
Q: Python unittests used in a project structure with multiple directories I need to use unittest python library to execute tests about the 3 functions in src/arithmetics.py file. Here is my project structure. . ├── src │   └── arithmetics.py └── test └── lcm ├── __init__.py ├── test_lcm_exception.py └── test_lcm.py src/arithmetics.py def lcm(p, q): p, q = abs(p), abs(q) m = p * q while True: p %= q if not p: return m // q q %= p if not q: return m // p def lcm_better(p, q): p, q = abs(p), abs(q) m = p * q h = p % q while h != 0: p = q q = h h = p % q h = m / q return h def lcm_faulty(p, q): r, m = 0, 0 r = p * q while (r > p) and (r > q): if (r % p == 0) and (r % q == 0): m = r r = r - 1 return m test/lcm/test_lcm.py import unittest from src.arithmetics import * class LcmTest(unittest.TestCase): def test_lcm(self): for X in range(1, 100): self.assertTrue(0 == lcm(0, X)) self.assertTrue(X == lcm(X, X)) self.assertTrue(840 == lcm(60, 168)) def test_lcm_better(self): for X in range(1, 100): self.assertTrue(0 == lcm_better(0, X)) self.assertTrue(X == lcm_better(X, X)) self.assertTrue(840 == lcm_better(60, 168)) def test_lcm_faulty(self): self.assertTrue(0 == lcm_faulty(0, 0)) for X in range(1, 100): self.assertTrue(0 == lcm_faulty(X, 0)) self.assertTrue(0 == lcm_faulty(0, X)) self.assertTrue(840 == lcm_faulty(60, 168)) if __name__ == '__main__': unittest.main() test/lcm/test_lcm_exception.py import unittest from src.arithmetics import * class LcmExceptionTest(unittest.TestCase): def test_lcm_exception(self): for X in range(0, 100): self.assertTrue(0 == lcm(0, 0)) # ZeroDivisionError self.assertTrue(0 == lcm(X, 0)) # ZeroDivisionError def test_lcm_better_exception(self): for X in range(0, 100): self.assertTrue(0 == lcm_better(0, 0)) # ZeroDivisionError self.assertTrue(0 == lcm_better(X, 0)) # ZeroDivisionError def test_lcm_faulty_exception(self): for X in range(1, 100): self.assertTrue(X == lcm_faulty(X, X)) # ppcm(1, 1) != 1 if __name__ == '__main__': unittest.main() test/lcm/__init__.py is an empty file To execute my tests, I tried this command : python3 -m unittest discover But the output is : ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I don't understand how can I run my tests... Thanks for helping me ! A: Some files init.py are missing I think that the problem is the missing of file __init__.py in your subfolders. Try to add this empty file in all your subfolders as I show you below: test_lcm ├── __init__.py ├── src | └── __init__py │ └── arithmetics.py └── test └── __init__py └── lcm ├── __init__.py ├── test_lcm_exception.py └── test_lcm.py If you see my tree folder I have create a folder test_lcm as root of the tree. You have to place inside it by cd command. So in test_lcm folder execute: # go to test_lcm folder cd ~/test_lcm # execute test python3 -m unittest discover The last part of the output is: ---------------------------------------------------------------------- Ran 6 tests in 0.002s FAILED (failures=1, errors=2) This show that are executed 6 tests with 2 errors (test_lcm_better_exception and test_lcm_exception fail).
Python unittests used in a project structure with multiple directories
I need to use unittest python library to execute tests about the 3 functions in src/arithmetics.py file. Here is my project structure. . ├── src │   └── arithmetics.py └── test └── lcm ├── __init__.py ├── test_lcm_exception.py └── test_lcm.py src/arithmetics.py def lcm(p, q): p, q = abs(p), abs(q) m = p * q while True: p %= q if not p: return m // q q %= p if not q: return m // p def lcm_better(p, q): p, q = abs(p), abs(q) m = p * q h = p % q while h != 0: p = q q = h h = p % q h = m / q return h def lcm_faulty(p, q): r, m = 0, 0 r = p * q while (r > p) and (r > q): if (r % p == 0) and (r % q == 0): m = r r = r - 1 return m test/lcm/test_lcm.py import unittest from src.arithmetics import * class LcmTest(unittest.TestCase): def test_lcm(self): for X in range(1, 100): self.assertTrue(0 == lcm(0, X)) self.assertTrue(X == lcm(X, X)) self.assertTrue(840 == lcm(60, 168)) def test_lcm_better(self): for X in range(1, 100): self.assertTrue(0 == lcm_better(0, X)) self.assertTrue(X == lcm_better(X, X)) self.assertTrue(840 == lcm_better(60, 168)) def test_lcm_faulty(self): self.assertTrue(0 == lcm_faulty(0, 0)) for X in range(1, 100): self.assertTrue(0 == lcm_faulty(X, 0)) self.assertTrue(0 == lcm_faulty(0, X)) self.assertTrue(840 == lcm_faulty(60, 168)) if __name__ == '__main__': unittest.main() test/lcm/test_lcm_exception.py import unittest from src.arithmetics import * class LcmExceptionTest(unittest.TestCase): def test_lcm_exception(self): for X in range(0, 100): self.assertTrue(0 == lcm(0, 0)) # ZeroDivisionError self.assertTrue(0 == lcm(X, 0)) # ZeroDivisionError def test_lcm_better_exception(self): for X in range(0, 100): self.assertTrue(0 == lcm_better(0, 0)) # ZeroDivisionError self.assertTrue(0 == lcm_better(X, 0)) # ZeroDivisionError def test_lcm_faulty_exception(self): for X in range(1, 100): self.assertTrue(X == lcm_faulty(X, X)) # ppcm(1, 1) != 1 if __name__ == '__main__': unittest.main() test/lcm/__init__.py is an empty file To execute my tests, I tried this command : python3 -m unittest discover But the output is : ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I don't understand how can I run my tests... Thanks for helping me !
[ "Some files init.py are missing\nI think that the problem is the missing of file __init__.py in your subfolders. Try to add this empty file in all your subfolders as I show you below:\ntest_lcm\n├── __init__.py\n├── src\n| └── __init__py\n│ └── arithmetics.py\n└── test\n └── __init__py\n └── lcm\n ├── __init__.py\n ├── test_lcm_exception.py\n └── test_lcm.py\n\nIf you see my tree folder I have create a folder test_lcm as root of the tree. You have to place inside it by cd command.\nSo in test_lcm folder execute:\n# go to test_lcm folder\ncd ~/test_lcm\n\n# execute test\npython3 -m unittest discover\n\nThe last part of the output is:\n----------------------------------------------------------------------\nRan 6 tests in 0.002s\n\nFAILED (failures=1, errors=2)\n\nThis show that are executed 6 tests with 2 errors (test_lcm_better_exception and test_lcm_exception fail).\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "python_unittest", "unit_testing" ]
stackoverflow_0074655669_python_python_3.x_python_unittest_unit_testing.txt
Q: AWS Glue job to unzip a file from S3 and write it back to S3 I'm very new to AWS Glue, and I want to use AWS Glue to unzip a huge file present in a S3 bucket, and write the contents back to S3. I couldn't find anything while trying to google this requirement. My questions are: How to add a zip file as data source to AWS Glue? How to write it back to same S3 location? I am using AWS Glue Studio. Any help will be highly appreciated. A: I couldn't find anything while trying to google this requirement. You couldn't find anything about this, because this is not what Glue does. Glue can read gzip (not zip) files natively. If you have zip, then you have to convert all the files yourself in S3. Glue will not do it. To convert the files, you can download them, re-pack, and re-upload in gzip format, or any other format that Glue supports. A: If you are still looking for a solution. You're able to unzip a file and write it back with an AWS Glue Job by using boto3 and Python's zipfile library. A thing to consider is the size of the zip that you want to process. I've used the following script with a 6GB (zipped) 30GB (unzipped) file and it works fine. But might fail if the file is to heavy for the worker to buffer. import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job args = getResolvedOptions(sys.argv, ["JOB_NAME"]) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args["JOB_NAME"], args) import boto3 import io from zipfile import ZipFile s3 = boto3.client("s3") bucket = "wayfair-datasource" # your s3 bucket name prefix = "files/location/" # the prefix for the objects that you want to unzip unzip_prefix = "files/unzipped_location/" # the location where you want to store your unzipped files # Get a list of all the resources in the specified prefix objects = s3.list_objects( Bucket=bucket, Prefix=prefix )["Contents"] # The following will get the unzipped files so the job doesn't try to unzip a file that is already unzipped on every run unzipped_objects = s3.list_objects( Bucket=bucket, Prefix=unzip_prefix )["Contents"] # Get a list containing the keys of the objects to unzip object_keys = [ o["Key"] for o in objects if o["Key"].endswith(".zip") ] # Get the keys for the unzipped objects unzipped_object_keys = [ o["Key"] for o in unzipped_objects ] for key in object_keys: obj = s3.get_object( Bucket="wayfair-datasource", Key=key ) objbuffer = io.BytesIO(obj["Body"].read()) # using context manager so you don't have to worry about manually closing the file with ZipFile(objbuffer) as zip: filenames = zip.namelist() # iterate over every file inside the zip for filename in filenames: with zip.open(filename) as file: filepath = unzip_prefix + filename if filepath not in unzipped_object_keys: s3.upload_fileobj(file, bucket, filepath) job.commit()
AWS Glue job to unzip a file from S3 and write it back to S3
I'm very new to AWS Glue, and I want to use AWS Glue to unzip a huge file present in a S3 bucket, and write the contents back to S3. I couldn't find anything while trying to google this requirement. My questions are: How to add a zip file as data source to AWS Glue? How to write it back to same S3 location? I am using AWS Glue Studio. Any help will be highly appreciated.
[ "\nI couldn't find anything while trying to google this requirement.\n\nYou couldn't find anything about this, because this is not what Glue does. Glue can read gzip (not zip) files natively. If you have zip, then you have to convert all the files yourself in S3. Glue will not do it.\nTo convert the files, you can download them, re-pack, and re-upload in gzip format, or any other format that Glue supports.\n", "If you are still looking for a solution. You're able to unzip a file and write it back with an AWS Glue Job by using boto3 and Python's zipfile library.\nA thing to consider is the size of the zip that you want to process. I've used the following script with a 6GB (zipped) 30GB (unzipped) file and it works fine. But might fail if the file is to heavy for the worker to buffer.\nimport sys\nfrom awsglue.transforms import *\nfrom awsglue.utils import getResolvedOptions\nfrom pyspark.context import SparkContext\nfrom awsglue.context import GlueContext\nfrom awsglue.job import Job\n\nargs = getResolvedOptions(sys.argv, [\"JOB_NAME\"])\nsc = SparkContext()\nglueContext = GlueContext(sc)\nspark = glueContext.spark_session\njob = Job(glueContext)\njob.init(args[\"JOB_NAME\"], args)\n\nimport boto3\nimport io\nfrom zipfile import ZipFile\n\ns3 = boto3.client(\"s3\")\n\nbucket = \"wayfair-datasource\" # your s3 bucket name\nprefix = \"files/location/\" # the prefix for the objects that you want to unzip\nunzip_prefix = \"files/unzipped_location/\" # the location where you want to store your unzipped files \n\n# Get a list of all the resources in the specified prefix\nobjects = s3.list_objects(\n Bucket=bucket,\n Prefix=prefix\n)[\"Contents\"]\n\n# The following will get the unzipped files so the job doesn't try to unzip a file that is already unzipped on every run\nunzipped_objects = s3.list_objects(\n Bucket=bucket,\n Prefix=unzip_prefix\n)[\"Contents\"]\n\n# Get a list containing the keys of the objects to unzip\nobject_keys = [ o[\"Key\"] for o in objects if o[\"Key\"].endswith(\".zip\") ] \n# Get the keys for the unzipped objects\nunzipped_object_keys = [ o[\"Key\"] for o in unzipped_objects ] \n\nfor key in object_keys:\n obj = s3.get_object(\n Bucket=\"wayfair-datasource\",\n Key=key\n )\n \n objbuffer = io.BytesIO(obj[\"Body\"].read())\n \n # using context manager so you don't have to worry about manually closing the file\n with ZipFile(objbuffer) as zip:\n filenames = zip.namelist()\n\n # iterate over every file inside the zip\n for filename in filenames:\n with zip.open(filename) as file:\n filepath = unzip_prefix + filename\n if filepath not in unzipped_object_keys:\n s3.upload_fileobj(file, bucket, filepath)\n\njob.commit()\n\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_glue" ]
stackoverflow_0067631226_amazon_s3_amazon_web_services_aws_glue.txt
Q: Elasticsearch, is it possible to query an index with two different type that have different set of fields? I have an index with two different types : [hits] => Array ( [total] => 408863 [max_score] => 1 [hits] => Array ( [0] => Array ( [_index] => myindex [_type] => merch [_id] => 379919 [_score] => 1 [_source] => Array ( [id] => 379919 [field1] => Lorem ipsum [field2] => ERKDK56 [field3] => 1256 ... [hits] => Array ( [total] => 4172386 [max_score] => 1 [hits] => Array ( [0] => Array ( [_index] => myindex [_type] => merchSeller [_id] => 2599218 [_score] => 1 [_source] => Array ( [id] => 2599218 [field4] => 1 [field5] => 1 [merch] => Array ( [id] => 132539 [field6] => 132539 ) [seller] => Array ( [id] => 689 [field7] => 1 ... How can I create a query that can check the fields of both types? I created a query by adding both type in the type field hoping I wil have access to there fields but it does not return good result. $params = [ 'index' => 'myindex', 'type' => 'merch,merchSeller', 'body' => [ 'query' => [ 'bool' => [ 'must'=> [ 'match' => ['field1' => 'Lorem'] , ], 'must'=> [ 'match' => ['field4' => '23'], ] ] ]] ]; I'm using laravel with "elasticsearch/elasticsearch": "~7.1". Thank you A: Selecting records with _type equals to merc and match on field1 or records with _type equals to merchSeller and match on field4 "query": { "bool" : { "should": [ { "bool": { "must": [ { "term":{"_type":"merch"} }, { "term":{"field1" : "Lorem ipsum"} } ] } }, { "bool": { "must": [ { "term":{"_type":"merchSeller"} }, { "term":{"field4" : "23"} } ] } } ] } } Be aware that _type is deprecated.
Elasticsearch, is it possible to query an index with two different type that have different set of fields?
I have an index with two different types : [hits] => Array ( [total] => 408863 [max_score] => 1 [hits] => Array ( [0] => Array ( [_index] => myindex [_type] => merch [_id] => 379919 [_score] => 1 [_source] => Array ( [id] => 379919 [field1] => Lorem ipsum [field2] => ERKDK56 [field3] => 1256 ... [hits] => Array ( [total] => 4172386 [max_score] => 1 [hits] => Array ( [0] => Array ( [_index] => myindex [_type] => merchSeller [_id] => 2599218 [_score] => 1 [_source] => Array ( [id] => 2599218 [field4] => 1 [field5] => 1 [merch] => Array ( [id] => 132539 [field6] => 132539 ) [seller] => Array ( [id] => 689 [field7] => 1 ... How can I create a query that can check the fields of both types? I created a query by adding both type in the type field hoping I wil have access to there fields but it does not return good result. $params = [ 'index' => 'myindex', 'type' => 'merch,merchSeller', 'body' => [ 'query' => [ 'bool' => [ 'must'=> [ 'match' => ['field1' => 'Lorem'] , ], 'must'=> [ 'match' => ['field4' => '23'], ] ] ]] ]; I'm using laravel with "elasticsearch/elasticsearch": "~7.1". Thank you
[ "Selecting records with _type equals to merc and match on field1 or records with _type equals to merchSeller and match on field4\n \"query\": {\n \"bool\" : {\n \"should\": [ \n {\n \"bool\": {\n \"must\": [ {\n \"term\":{\"_type\":\"merch\"}\n },\n {\n \"term\":{\"field1\" : \"Lorem ipsum\"}\n }\n ]\n }\n },\n {\n \"bool\": {\n \"must\": [ {\n \"term\":{\"_type\":\"merchSeller\"}\n },\n {\n \"term\":{\"field4\" : \"23\"}\n }\n ]\n }\n \n }\n\n ]\n }\n\n}\nBe aware that _type is deprecated.\n" ]
[ 0 ]
[]
[]
[ "elasticsearch", "elasticsearch_dsl", "laravel" ]
stackoverflow_0074638284_elasticsearch_elasticsearch_dsl_laravel.txt
Q: The Spring boot app doesn't seem to run after putting it in container I am trying to create docker containers and I was trying to 1 for MySql and another for Spring io. The DB container is running OK but the spring boot container comes at a point and exits. I have searched and tried many thing but I can't seem to be able to solve it, the thing that I concluded that it seems that something is wrong with the database environment or aplication.properties or maybe it could be somewhere else. I would be so grateful if someone could guide me to the solution. application.properties: spring.datasource.url=jdbc:mysql://DB_containerfile:3306/phase2?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC spring.datasource.username=testuser spring.datasource.password= testuser@123 spring.jpa.hibernate.ddl-auto=update spring.datasource.driver-class-name= com.mysql.cj.jdbc.Driver dockerfile: FROM openjdk:17 ADD target/springboot-crud-api-0.0.1-SNAPSHOT.jar app.jar ENTRYPOINT ["java","-jar","app.jar"] docker-compose.yml: version: '3.8' services: DB_containerfile: image: mysql:latest container_name: DB_containerfile environment: - MYSQL_ROOT_PASSWORD=//////////// - MYSQL_DATABASE=phase2 - MYSQL_USER=testuser - MYSQL_PASSWORD=testuser@123 backend_containerfile: image: backend_image container_name: backend_containerfile ports: - 8080:8080 build: context: ./ dockerfile: Dockerfile depends_on: - DB_containerfile NOTE: I assigned the password that I enter when I write this command on the cmd "mysql -u root -p" to MYSQL_ROOT_PASSWORD Spring boot log: backend_containerfile | 2022-12-02 06:02:54.757 WARN 1 --- [ main] com.zaxxer.hikari.util.DriverDataSource : Registered driver with driverClassName=com.mysql.jdbc.Driver was not found, trying direct instantiation. backend_containerfile | 2022-12-02 06:02:56.040 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization. backend_containerfile | backend_containerfile | java.sql.SQLNonTransientConnectionException: Public Key Retrieval is not allowed backend_containerfile | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:828) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:448) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:241) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[mysql-connector-j-8.0.31.jar!/:8.0.31]backend_containerfile | at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[hibernate-core-5.6.12.Final.jar!/:5.6.12.Final] What I tried: 1)I tried docker volume prune. 2)I tried removing the - MYSQL_USER=testuser - MYSQL_PASSWORD=testuser@123 spring.datasource.username=testuser spring.datasource.password= testuser@123\` and only going with - MYSQL_ROOT_PASSWORD=//////////// 3)I tried both spring.datasource.driver-class-name= com.mysql.cj.jdbc.Driver spring.datasource.driver-class-name= com.mysql.jdbc.Driver 4)I put allowPublicKeyRetrieval=true in spring.datasource.url And many other thing I tried but the result remained the same. Sorry, for the long question but I really tried to figure out the problem by myself but now I need someone's insight. And thank you. A: try to use this driverClassName in application.properties spring.datasource.driverClassName=com.mysql.jdbc.Driver Have you tried to start only Mysql db in the container and connect to it from you IDE? A: Thankfully I found the problem it was because I haven't connected the database and spring io. I will post the solution for my case, in case it helps someone. spring.datasource.url=jdbc:mysql://DB_containerfile:3306/phase2?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC spring.datasource.username=testuser spring.datasource.password= testuser@123 spring.jpa.hibernate.ddl-auto=update spring.datasource.driver-class-name= com.mysql.jdbc.Driver FROM openjdk:17 ADD target/*.jar app.jar ENTRYPOINT ["java","-jar","app.jar"] then run the following commands in the project terminal: docker-compose down mvn clean mvn clean install -DskipTests=true docker network ls docker network create spring-net //if not created docker-compose up //then CTRL+C docker network connect spring-net DB_containerfile docker container inspect DB_containerfile docker ps -a docker-compose up
The Spring boot app doesn't seem to run after putting it in container
I am trying to create docker containers and I was trying to 1 for MySql and another for Spring io. The DB container is running OK but the spring boot container comes at a point and exits. I have searched and tried many thing but I can't seem to be able to solve it, the thing that I concluded that it seems that something is wrong with the database environment or aplication.properties or maybe it could be somewhere else. I would be so grateful if someone could guide me to the solution. application.properties: spring.datasource.url=jdbc:mysql://DB_containerfile:3306/phase2?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC spring.datasource.username=testuser spring.datasource.password= testuser@123 spring.jpa.hibernate.ddl-auto=update spring.datasource.driver-class-name= com.mysql.cj.jdbc.Driver dockerfile: FROM openjdk:17 ADD target/springboot-crud-api-0.0.1-SNAPSHOT.jar app.jar ENTRYPOINT ["java","-jar","app.jar"] docker-compose.yml: version: '3.8' services: DB_containerfile: image: mysql:latest container_name: DB_containerfile environment: - MYSQL_ROOT_PASSWORD=//////////// - MYSQL_DATABASE=phase2 - MYSQL_USER=testuser - MYSQL_PASSWORD=testuser@123 backend_containerfile: image: backend_image container_name: backend_containerfile ports: - 8080:8080 build: context: ./ dockerfile: Dockerfile depends_on: - DB_containerfile NOTE: I assigned the password that I enter when I write this command on the cmd "mysql -u root -p" to MYSQL_ROOT_PASSWORD Spring boot log: backend_containerfile | 2022-12-02 06:02:54.757 WARN 1 --- [ main] com.zaxxer.hikari.util.DriverDataSource : Registered driver with driverClassName=com.mysql.jdbc.Driver was not found, trying direct instantiation. backend_containerfile | 2022-12-02 06:02:56.040 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization. backend_containerfile | backend_containerfile | java.sql.SQLNonTransientConnectionException: Public Key Retrieval is not allowed backend_containerfile | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:828) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:448) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:241) ~[mysql-connector-j-8.0.31.jar!/:8.0.31] backend_containerfile | at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[mysql-connector-j-8.0.31.jar!/:8.0.31]backend_containerfile | at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-4.0.3.jar!/:na] backend_containerfile | at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[hibernate-core-5.6.12.Final.jar!/:5.6.12.Final] What I tried: 1)I tried docker volume prune. 2)I tried removing the - MYSQL_USER=testuser - MYSQL_PASSWORD=testuser@123 spring.datasource.username=testuser spring.datasource.password= testuser@123\` and only going with - MYSQL_ROOT_PASSWORD=//////////// 3)I tried both spring.datasource.driver-class-name= com.mysql.cj.jdbc.Driver spring.datasource.driver-class-name= com.mysql.jdbc.Driver 4)I put allowPublicKeyRetrieval=true in spring.datasource.url And many other thing I tried but the result remained the same. Sorry, for the long question but I really tried to figure out the problem by myself but now I need someone's insight. And thank you.
[ "try to use this driverClassName in application.properties\nspring.datasource.driverClassName=com.mysql.jdbc.Driver\n\nHave you tried to start only Mysql db in the container and connect to it from you IDE?\n", "Thankfully I found the problem it was because I haven't connected the database and spring io. I will post the solution for my case, in case it helps someone.\n\n\nspring.datasource.url=jdbc:mysql://DB_containerfile:3306/phase2?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC\nspring.datasource.username=testuser\nspring.datasource.password= testuser@123\nspring.jpa.hibernate.ddl-auto=update\nspring.datasource.driver-class-name= com.mysql.jdbc.Driver\n\n\n\n\n\nFROM openjdk:17\nADD target/*.jar app.jar\nENTRYPOINT [\"java\",\"-jar\",\"app.jar\"]\n\n\n\nthen run the following commands in the project terminal:\n\n\ndocker-compose down\nmvn clean\nmvn clean install -DskipTests=true\ndocker network ls\ndocker network create spring-net //if not created\ndocker-compose up //then CTRL+C\ndocker network connect spring-net DB_containerfile\ndocker container inspect DB_containerfile\ndocker ps -a\ndocker-compose up\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "application.properties", "docker", "docker_compose", "dockerfile", "spring_boot" ]
stackoverflow_0074651766_application.properties_docker_docker_compose_dockerfile_spring_boot.txt
Q: C++ unordered set of custom class - segfault on insert I am following a tutorial on creating a hexagon map for a game. The source has it in struct, but I want it to be a class and so far I am unable to make it work. It compiles fine, but when I try to insert a new value in it, it segfaults. I am probably doing the hash function wrong, or something, but I have ran out of ideas on fixing it. Thanks! main.c #include <unordered_set> #include "hexagonField.hpp" int main(int argc, char **argv) { std::unordered_set <HexagonField> map; map.insert(HexagonField(0, 0, 0)); return 0; } hexagonField.hpp #ifndef HEXAGON_H #define HEXAGON_H #include <assert.h> #include <vector> class HexagonField { public: const int q, r, s; HexagonField(int q, int r, int s); ~HexagonField(); HexagonField hexagonAdd(HexagonField a, HexagonField b); HexagonField hexagonSubtract(HexagonField a, HexagonField b); HexagonField hexagonMultiply(HexagonField a, int k); int hexagonLength(HexagonField hex); int hexagonDistance(HexagonField a, HexagonField b); HexagonField hexagonDirection(int direction /* 0 to 5 */); HexagonField hexagonNeighbor(HexagonField hex, int direction); const std::vector<HexagonField> hexagonDirections = { HexagonField(1, 0, -1), HexagonField(1, -1, 0), HexagonField(0, -1, 1), HexagonField(-1, 0, 1), HexagonField(-1, 1, 0), HexagonField(0, 1, -1) }; bool operator == (const HexagonField comparedHex) const { return this->q == comparedHex.q && this->r == comparedHex.r && this->s == comparedHex.s; } bool operator != (const HexagonField comparedHex) const { return !(*this == comparedHex); }; }; namespace std { template<> struct hash<HexagonField> { size_t operator()(const HexagonField & obj) const { return hash<int>()(obj.q); } }; } #endif hexagonField.cpp #include "hexagonField.hpp" HexagonField::HexagonField(int q, int r, int s): q(q), r(r), s(s) { assert (q + r + s == 0); } HexagonField HexagonField::hexagonAdd(HexagonField a, HexagonField b) { return HexagonField(a.q + b.q, a.r + b.r, a.s + b.s); } HexagonField HexagonField::hexagonSubtract(HexagonField a, HexagonField b) { return HexagonField(a.q - b.q, a.r - b.r, a.s - b.s); } HexagonField HexagonField::hexagonMultiply(HexagonField a, int k) { return HexagonField(a.q * k, a.r * k, a.s * k); } int HexagonField::hexagonLength(HexagonField hex) { return int((abs(hex.q) + abs(hex.r) + abs(hex.s)) / 2); } int HexagonField::hexagonDistance(HexagonField a, HexagonField b) { return hexagonLength( hexagonSubtract(a, b)); } HexagonField HexagonField::hexagonDirection(int direction /* 0 to 5 */) { assert (0 <= direction && direction < 6); return hexagonDirections[direction]; } HexagonField HexagonField::hexagonNeighbor(HexagonField hex, int direction) { return hexagonAdd(hex, hexagonDirection(direction)); } A: If you run your program under a debugger, you will see it actually overflowed the stack while trying to construct HexagonField objects. This is because every object has a vector of 6 more HexagonField objects, which in turn needs another vector, and so on. As a quick fix, you can take hexagonDirections out of the HexagonField class and move it to a static file-scoped variable at the top of HexagonField.cpp instead: static const std::vector<HexagonField> hexagonDirections = { HexagonField(1, 0, -1), HexagonField(1, -1, 0), HexagonField(0, -1, 1), HexagonField(-1, 0, 1), HexagonField(-1, 1, 0), HexagonField(0, 1, -1) }; Alternatively, you can leave hexagonDirections as a static class member and move the definition to your .cpp file: // HexagonField.h class HexagonField { ... static const std::vector<HexagonField> hexagonDirections; ... }; // HexagonField.cpp const std::vector<HexagonField> HexagonField::hexagonDirections = { HexagonField(1, 0, -1), HexagonField(1, -1, 0), HexagonField(0, -1, 1), HexagonField(-1, 0, 1), HexagonField(-1, 1, 0), HexagonField(0, 1, -1) };
C++ unordered set of custom class - segfault on insert
I am following a tutorial on creating a hexagon map for a game. The source has it in struct, but I want it to be a class and so far I am unable to make it work. It compiles fine, but when I try to insert a new value in it, it segfaults. I am probably doing the hash function wrong, or something, but I have ran out of ideas on fixing it. Thanks! main.c #include <unordered_set> #include "hexagonField.hpp" int main(int argc, char **argv) { std::unordered_set <HexagonField> map; map.insert(HexagonField(0, 0, 0)); return 0; } hexagonField.hpp #ifndef HEXAGON_H #define HEXAGON_H #include <assert.h> #include <vector> class HexagonField { public: const int q, r, s; HexagonField(int q, int r, int s); ~HexagonField(); HexagonField hexagonAdd(HexagonField a, HexagonField b); HexagonField hexagonSubtract(HexagonField a, HexagonField b); HexagonField hexagonMultiply(HexagonField a, int k); int hexagonLength(HexagonField hex); int hexagonDistance(HexagonField a, HexagonField b); HexagonField hexagonDirection(int direction /* 0 to 5 */); HexagonField hexagonNeighbor(HexagonField hex, int direction); const std::vector<HexagonField> hexagonDirections = { HexagonField(1, 0, -1), HexagonField(1, -1, 0), HexagonField(0, -1, 1), HexagonField(-1, 0, 1), HexagonField(-1, 1, 0), HexagonField(0, 1, -1) }; bool operator == (const HexagonField comparedHex) const { return this->q == comparedHex.q && this->r == comparedHex.r && this->s == comparedHex.s; } bool operator != (const HexagonField comparedHex) const { return !(*this == comparedHex); }; }; namespace std { template<> struct hash<HexagonField> { size_t operator()(const HexagonField & obj) const { return hash<int>()(obj.q); } }; } #endif hexagonField.cpp #include "hexagonField.hpp" HexagonField::HexagonField(int q, int r, int s): q(q), r(r), s(s) { assert (q + r + s == 0); } HexagonField HexagonField::hexagonAdd(HexagonField a, HexagonField b) { return HexagonField(a.q + b.q, a.r + b.r, a.s + b.s); } HexagonField HexagonField::hexagonSubtract(HexagonField a, HexagonField b) { return HexagonField(a.q - b.q, a.r - b.r, a.s - b.s); } HexagonField HexagonField::hexagonMultiply(HexagonField a, int k) { return HexagonField(a.q * k, a.r * k, a.s * k); } int HexagonField::hexagonLength(HexagonField hex) { return int((abs(hex.q) + abs(hex.r) + abs(hex.s)) / 2); } int HexagonField::hexagonDistance(HexagonField a, HexagonField b) { return hexagonLength( hexagonSubtract(a, b)); } HexagonField HexagonField::hexagonDirection(int direction /* 0 to 5 */) { assert (0 <= direction && direction < 6); return hexagonDirections[direction]; } HexagonField HexagonField::hexagonNeighbor(HexagonField hex, int direction) { return hexagonAdd(hex, hexagonDirection(direction)); }
[ "If you run your program under a debugger, you will see it actually overflowed the stack while trying to construct HexagonField objects.\nThis is because every object has a vector of 6 more HexagonField objects, which in turn needs another vector, and so on.\nAs a quick fix, you can take hexagonDirections out of the HexagonField class and move it to a static file-scoped variable at the top of HexagonField.cpp instead:\nstatic const std::vector<HexagonField> hexagonDirections = {\nHexagonField(1, 0, -1), HexagonField(1, -1, 0), HexagonField(0, -1, 1),\nHexagonField(-1, 0, 1), HexagonField(-1, 1, 0), HexagonField(0, 1, -1)\n};\n\nAlternatively, you can leave hexagonDirections as a static class member and move the definition to your .cpp file:\n// HexagonField.h\nclass HexagonField {\n ...\n static const std::vector<HexagonField> hexagonDirections;\n ...\n};\n\n// HexagonField.cpp\nconst std::vector<HexagonField> HexagonField::hexagonDirections = {\n HexagonField(1, 0, -1), HexagonField(1, -1, 0), HexagonField(0, -1, 1),\n HexagonField(-1, 0, 1), HexagonField(-1, 1, 0), HexagonField(0, 1, -1)\n};\n\n" ]
[ 1 ]
[]
[]
[ "c++", "segmentation_fault", "unordered_set" ]
stackoverflow_0074657257_c++_segmentation_fault_unordered_set.txt
Q: How to show time range in the select of react-datepicker I would like to display a range time in my react-datepicker. I would like to select a range (9:00 - 9:30), instead of only 9:00. Here is my current datepicker time select : Here is what i would like to get: Thank you A: To display a range of time in the react-datepicker component, you can use the showTimeSelect and timeFormat props. Here is an example: import React from 'react'; import DatePicker from 'react-datepicker'; import "react-datepicker/dist/react-datepicker.css"; function MyDatePicker() { return ( <DatePicker showTimeSelect timeFormat="HH:mm" timeIntervals={30} dateFormat="MMMM d, yyyy h:mm aa" timeCaption="Time" /> ); } In this example, the timeFormat prop is set to "HH:mm" to display the time in hours and minutes, and the timeIntervals prop is set to 30 to display time options in 30 minute intervals. You can also use the selected prop to set the default selected time range. For example, to set the default range to 9:00 - 9:30, you can do the following: import React from 'react'; import DatePicker from 'react-datepicker'; import "react-datepicker/dist/react-datepicker.css"; function MyDatePicker() { return ( <DatePicker selected={[new Date(2020, 1, 1, 9, 0), new Date(2020, 1, 1, 9, 30)]} showTimeSelect timeFormat="HH:mm" timeIntervals={30} dateFormat="MMMM d, yyyy h:mm aa" timeCaption="Time" /> ); } I hope this helps! Let me know if you have any other questions.
How to show time range in the select of react-datepicker
I would like to display a range time in my react-datepicker. I would like to select a range (9:00 - 9:30), instead of only 9:00. Here is my current datepicker time select : Here is what i would like to get: Thank you
[ "To display a range of time in the react-datepicker component, you can use the showTimeSelect and timeFormat props. Here is an example:\nimport React from 'react';\nimport DatePicker from 'react-datepicker';\nimport \"react-datepicker/dist/react-datepicker.css\";\n\nfunction MyDatePicker() {\n return (\n <DatePicker\n showTimeSelect\n timeFormat=\"HH:mm\"\n timeIntervals={30}\n dateFormat=\"MMMM d, yyyy h:mm aa\"\n timeCaption=\"Time\"\n />\n );\n}\n\nIn this example, the timeFormat prop is set to \"HH:mm\" to display the time in hours and minutes, and the timeIntervals prop is set to 30 to display time options in 30 minute intervals.\nYou can also use the selected prop to set the default selected time range. For example, to set the default range to 9:00 - 9:30, you can do the following:\nimport React from 'react';\nimport DatePicker from 'react-datepicker';\nimport \"react-datepicker/dist/react-datepicker.css\";\n\nfunction MyDatePicker() {\n return (\n <DatePicker\n selected={[new Date(2020, 1, 1, 9, 0), new Date(2020, 1, 1, 9, 30)]}\n showTimeSelect\n timeFormat=\"HH:mm\"\n timeIntervals={30}\n dateFormat=\"MMMM d, yyyy h:mm aa\"\n timeCaption=\"Time\"\n />\n );\n}\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "react_datepicker", "reactjs" ]
stackoverflow_0074655830_react_datepicker_reactjs.txt
Q: .NET MAUI FlyoutItem Multiple With TabBar I have: <FlyoutItem FlyoutDisplayOptions="AsMultipleItems" </FlyoutItem> I would like to have a TabBar in one of the contentPages but I can't get it. A: You can use the below. <Shell FlyoutBehavior="Flyout"> <FlyoutItem Title="Home"> <Tab Title="Tab1"> <ShellContent ContentTemplate="{DataTemplate pages:Tab1}"/> </Tab> <Tab Title="Tab2"> <ShellContent ContentTemplate="{DataTemplate pages:Tab2}"/> </Tab> </FlyoutItem> </Shell>
.NET MAUI FlyoutItem Multiple With TabBar
I have: <FlyoutItem FlyoutDisplayOptions="AsMultipleItems" </FlyoutItem> I would like to have a TabBar in one of the contentPages but I can't get it.
[ "You can use the below.\n<Shell FlyoutBehavior=\"Flyout\">\n\n <FlyoutItem Title=\"Home\">\n <Tab Title=\"Tab1\">\n <ShellContent ContentTemplate=\"{DataTemplate pages:Tab1}\"/>\n </Tab>\n <Tab Title=\"Tab2\">\n <ShellContent ContentTemplate=\"{DataTemplate pages:Tab2}\"/>\n </Tab>\n </FlyoutItem>\n</Shell>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "c#", "maui" ]
stackoverflow_0074655192_c#_maui.txt
Q: How to export mysql database to excel with extra column i need to export my database to excel. But i have two tabel (users,users document) i need to mix them when exporting and add extra column dynamicly? is there any way I have watched many videos but those are single database and can't add column A: You can use SELECT INTO outfile which will write the output into the file and in this case you want to write into CSV. Write down your JOIN queries with some tables, after that you can send the result into the file. You must define your column in the queries, otherwise if you want to have a dynamic column, you must write a function to accommodate that.
How to export mysql database to excel with extra column
i need to export my database to excel. But i have two tabel (users,users document) i need to mix them when exporting and add extra column dynamicly? is there any way I have watched many videos but those are single database and can't add column
[ "You can use SELECT INTO outfile which will write the output into the file and in this case you want to write into CSV. Write down your JOIN queries with some tables, after that you can send the result into the file. You must define your column in the queries, otherwise if you want to have a dynamic column, you must write a function to accommodate that.\n" ]
[ 0 ]
[]
[]
[ "excel", "mysql", "php" ]
stackoverflow_0074412419_excel_mysql_php.txt
Q: Why does this reducer function not work in a typescript class and can I make it work? I've been playing around with reducers in this years's first advent of code challenge, and this code works fine: export default class CalorieCounter { public static calculateMaxInventoryValue(elfInventories: number[][]): number { const sumInventoriesReducer = ( acc: number[], element: number[] ): number[] => [...acc, this.sumCalories(element)]; return Math.max(...elfInventories.reduce(sumInventoriesReducer, [])); } private static sumCalories(inventory: number[]): number { return inventory.reduce((a: number, b: number) => a + b, 0); } } I then tried to split out the sumInventoriesReducer into it's own private function in the same class. This code does not work: export default class CalorieCounter { public static calculateMaxInventoryValue(elfInventories: number[][]): number { return Math.max(...elfInventories.reduce(this.sumInventoriesReducer, [])); } private static sumInventoriesReducer( acc: number[], element: number[] ): number[] { return [...acc, this.sumCalories(element)]; } private static sumCalories(inventory: number[]): number { return inventory.reduce((a: number, b: number) => a + b, 0); } } The logic is exactly the same, all that's changed is that it's passed in as a private function (the fact that it's static isn't the reason, tried it without static and got the same error). This is the error: TypeError: Cannot read property 'sumCalories' of undefined 20 | element: number[] 21 | ): number[] { > 22 | return [...acc, this.sumCalories(element)]; | ^ 23 | } 24 | 25 | private static sumCalories(inventory: number[]): number { I want to do this in an OOP way if I can, aware reducers are a staple of functional programming but I feel like I should be able to get this work using a private class function. Can anyone help? A: The problem is that you're trying to access an instance property (that exist only after constructor() has been called) in a static method (that exists only on the class and not on the prototype). After a constructor() method has been called the keyword this has the value of the instance object, but if you reference this in a static method you're referencing an undefined variable since to access a static method you don't call a constructor method export default class CalorieCounter { public static calculateMaxInventoryValue(elfInventories: number[][]): number { return Math.max(...elfInventories.reduce(this.sumInventoriesReducer, [])); } private static sumInventoriesReducer( acc: number[], element: number[] ): number[] { return [...acc, this.sumCalories(element)]; // The problem is here } private static sumCalories(inventory: number[]): number { return inventory.reduce((a: number, b: number) => a + b, 0); } } If you want to keep this schema you can just update that line so it would go from: this.sumCalories(element) to: CalorieCounter.sumCalories(element) By doing so, you're accessing the method from the class itself and not from a non-existing instance. The resulting code would be: export default class CalorieCounter { public static calculateMaxInventoryValue(elfInventories: number[][]): number { return Math.max(...elfInventories.reduce(this.sumInventoriesReducer, [])); } private static sumInventoriesReducer( acc: number[], element: number[] ): number[] { return [...acc, CalorieCounter.sumCalories(element)]; // The problem is here } private static sumCalories(inventory: number[]): number { return inventory.reduce((a: number, b: number) => a + b, 0); } } Same as above, also the calculateMaxInventoryValue method is static but tries to access an instance method, by correcting it the code would become: export default class CalorieCounter { public static calculateMaxInventoryValue(elfInventories: number[][]): number { return Math.max(...elfInventories.reduce(CalorieCounter.sumInventoriesReducer, [])); } private static sumInventoriesReducer( acc: number[], element: number[] ): number[] { return [...acc, CalorieCounter.sumCalories(element)]; // The problem is here } private static sumCalories(inventory: number[]): number { return inventory.reduce((a: number, b: number) => a + b, 0); } }
Why does this reducer function not work in a typescript class and can I make it work?
I've been playing around with reducers in this years's first advent of code challenge, and this code works fine: export default class CalorieCounter { public static calculateMaxInventoryValue(elfInventories: number[][]): number { const sumInventoriesReducer = ( acc: number[], element: number[] ): number[] => [...acc, this.sumCalories(element)]; return Math.max(...elfInventories.reduce(sumInventoriesReducer, [])); } private static sumCalories(inventory: number[]): number { return inventory.reduce((a: number, b: number) => a + b, 0); } } I then tried to split out the sumInventoriesReducer into it's own private function in the same class. This code does not work: export default class CalorieCounter { public static calculateMaxInventoryValue(elfInventories: number[][]): number { return Math.max(...elfInventories.reduce(this.sumInventoriesReducer, [])); } private static sumInventoriesReducer( acc: number[], element: number[] ): number[] { return [...acc, this.sumCalories(element)]; } private static sumCalories(inventory: number[]): number { return inventory.reduce((a: number, b: number) => a + b, 0); } } The logic is exactly the same, all that's changed is that it's passed in as a private function (the fact that it's static isn't the reason, tried it without static and got the same error). This is the error: TypeError: Cannot read property 'sumCalories' of undefined 20 | element: number[] 21 | ): number[] { > 22 | return [...acc, this.sumCalories(element)]; | ^ 23 | } 24 | 25 | private static sumCalories(inventory: number[]): number { I want to do this in an OOP way if I can, aware reducers are a staple of functional programming but I feel like I should be able to get this work using a private class function. Can anyone help?
[ "The problem is that you're trying to access an instance property (that exist only after constructor() has been called) in a static method (that exists only on the class and not on the prototype).\nAfter a constructor() method has been called the keyword this has the value of the instance object, but if you reference this in a static method you're referencing an undefined variable since to access a static method you don't call a constructor method\nexport default class CalorieCounter {\n public static calculateMaxInventoryValue(elfInventories: number[][]): number {\n return Math.max(...elfInventories.reduce(this.sumInventoriesReducer, []));\n }\n\n private static sumInventoriesReducer(\n acc: number[],\n element: number[]\n ): number[] {\n return [...acc, this.sumCalories(element)]; // The problem is here\n }\n\n private static sumCalories(inventory: number[]): number {\n return inventory.reduce((a: number, b: number) => a + b, 0);\n }\n}\n\nIf you want to keep this schema you can just update that line so it would go\n\nfrom: this.sumCalories(element)\nto: CalorieCounter.sumCalories(element)\nBy doing so, you're accessing the method from the class itself and not from a non-existing instance.\n\nThe resulting code would be:\nexport default class CalorieCounter {\n public static calculateMaxInventoryValue(elfInventories: number[][]): number {\n return Math.max(...elfInventories.reduce(this.sumInventoriesReducer, []));\n }\n\n private static sumInventoriesReducer(\n acc: number[],\n element: number[]\n ): number[] {\n return [...acc, CalorieCounter.sumCalories(element)]; // The problem is here\n }\n\n private static sumCalories(inventory: number[]): number {\n return inventory.reduce((a: number, b: number) => a + b, 0);\n }\n}\n\nSame as above, also the calculateMaxInventoryValue method is static but tries to access an instance method, by correcting it the code would become:\nexport default class CalorieCounter {\n public static calculateMaxInventoryValue(elfInventories: number[][]): number {\n return Math.max(...elfInventories.reduce(CalorieCounter.sumInventoriesReducer, []));\n }\n\n private static sumInventoriesReducer(\n acc: number[],\n element: number[]\n ): number[] {\n return [...acc, CalorieCounter.sumCalories(element)]; // The problem is here\n }\n\n private static sumCalories(inventory: number[]): number {\n return inventory.reduce((a: number, b: number) => a + b, 0);\n }\n}\n\n" ]
[ 1 ]
[ "This is because you don't flatten this.sumCalories(element).\nthis.sumCalories(element) returns a number[], so you need to flatten this with the three dots.\nIf you write return [...acc, ...this.sumCalories(element)]; it should work.\n" ]
[ -1 ]
[ "javascript", "reduce", "typescript" ]
stackoverflow_0074657295_javascript_reduce_typescript.txt
Q: Teradata inserting into a table with a generated UPI Hoping you all can help. I have a created table with a UPI (incremental index) and when I run the macro to insert in continuously gives me the error that "the positional assignment list has too few values". I have verified that the two tables match except for the UPI ID. How do you account for that field in the insert macro so that the table and the macro have the same number of assignments? A: The list of values specified by the INSERT statement is shorter than the list of columns in the table. This error occurs on the INSERT statement. try to check the origem table with the match of the destiny table.
Teradata inserting into a table with a generated UPI
Hoping you all can help. I have a created table with a UPI (incremental index) and when I run the macro to insert in continuously gives me the error that "the positional assignment list has too few values". I have verified that the two tables match except for the UPI ID. How do you account for that field in the insert macro so that the table and the macro have the same number of assignments?
[ "The list of values specified by the INSERT statement is shorter than the list of columns in the table. This error occurs on the INSERT statement.\ntry to check the origem table with the match of the destiny table.\n" ]
[ 0 ]
[]
[]
[ "teradata" ]
stackoverflow_0061507368_teradata.txt
Q: How to Access a array in the structure after assigning the structure variable to a pointer without arrow operator #include <stdio.h> struct dog { int name[10]; char breed[10]; int age; char color[10]; }; int main() { struct dog frodo; struct dog **ptr=&frodo.name; for(int i=0;i<10;i++) frodo.name[i]=i; for(int i=0;i<10;i++) printf(" frodo.name[%d]%d\n",i,frodo.name[i]); for(int i=0;i<10;i++) printf(" ptr =%d\n",ptr[i]); } I tried using double pointer but the index is not matching when value is being printed. O/P: frodo.name[0]0 frodo.name[1]1 ptr =0 ptr =2 A: I expect that ptr should be a pointer to the first element of frodo.name which is array of 10 ints. Therefore the type of ptr should be int*. Use: int *ptr=&frodo.name[0]; A: Your question indicates that you are new to C and still learning fundamentals, so in order to help you I'll go through your code and explain what it does, and then provide that code fixed to do what I think you are trying to do. I will try to be precise and thorough at the risk of sounding pedantic. Consider this declaration: int name[10]; This defines name as a field in struct dog which declares name to be the type of "an array of 10 ints". Note that an array can be assigned to a pointer to its "inner" type (int), in which assignment it is said to "decay" into a "pointer to int". So int * pint = name; is valid code which show this facility in C. None-the-less, the type of name is "array of 10 ints". This if you take it's address (&name) you don't get a "pointer to int; you get a "pointer to an array of 10 ints". #include <stdio.h> struct dog { int name[10]; char breed[10]; int age; char color[10]; }; So you declare a new "aggregate type", struct dog, whose objects will contain a field called name whose type is an "array of 10 ints", followed by a field called breed whose type is an "array of 10 chars", followed by a field age whose type is int, then a field color with type "array of 10 chars". int main() { struct dog frodo; Define an object frodo of type struct dog, which then contains all of the fields of a struct dog. It is allocated memory on the stack, which will be cleaned up and removed when main returns. struct dog **ptr=&frodo.name; Here is the first sign of misunderstanding. If you were to declare a variable ptr_to_dog like this: struct dog * ptr_to_dog, you would declare it as a pointer ("address of") an object whose type is struct dog, or "pointer to struct dog", for short. frodo is such an object, so setting pointer to this address of frodo is valid: ptr_to_dog = &frodo;. &frodo is an expression which takes the address of the frodo object, so the type of this expression "pointer to struct dog", which matches the type of ptr_to_dog, so the assignment is correct. Your declaration struct dog **ptr creates an object ptr whose type is "pointer to a pointer to a struct dog. If you take the address of ptr_to_dog, you get an expression whose type is "pointer to a pointer to struct dog. This is the same type as the ptr you declared, so if you were to make the assignment ptr = &ptr_to_dog, that is valid. Now let's look at the right side of the assignment, &frodo.name. Note that . is said to "bind tighter" than the &, so this is the same as &(frodo.name). The expression frodo.name gets the value of the field name in the foo object, which is of type struct dog. So the expression &frodo.name gives a pointer to that field, so its type is "pointer to the field name in the frodo object. The type of name is "array of 10 ints", so &frodo.name is "pointer to an array of 10 ints". So the declaration you gave could be described as "get the pointer to the array of 10 ints name and assign it to the "pointer to the pointer to the struct dog foo" ptr. The types are not compatible. However, all pointers can be reduced to a kind of unnamed type "pointer to memory", so the compiler can convert the "pointer to 10 ints" &frodo.name to the "pointer to the pointer to frodo" ptr. This is almost never usefull, so most compiler will emit a warning about this. Now you could do this struct dog * ptr_to_dog = &frodo, which assigns the address of frodo to the "pointer to a struct dog" ptr_to_dog, so here the types match and the assignment is proper. Note that a "pointer to struct dog" does not mean "pointer to a single struct dog"; so pointer could point to memory that hold an indefinite series of struct dogs in succession, ie an array. So you can do ptr_to_dog[3], which means "get the fourth struct dog from the series of struct dogs beginning at the address ptr_to_dog. If ptr_to_dog contains only one struct dog, what you get from that will be meaningless, but it is a valid expression. for(int i=0;i<10;i++) frodo.name[i]=i; for(int i=0;i<10;i++) printf(" frodo.name[%d]%d\n",i,frodo.name[i]); for(int i=0;i<10;i++) printf(" ptr =%d\n",ptr[i]); } Remember that the type of ptr is "pointer to a pointer to a struct dog, but given your assignment, ptr does not point to a pointer to a struct dog; it points to an "array of 10 ints" in the field name of the struct dog object frodo. So the value of ptr here is meaningless; its value is some section of the memory of the name array interpreted as a "pointer to a pointer to struct dog". From this I take it that you wanted ptr to point to the "array of 10 ints" called name in the struct dog object frodo. To get this, the declaration should have been int * ptr = frodo.name; Remember that an array type (name) can be assigned to a pointer to its inner type (int), in which the array expression (frodo.name) will "decay" into that type of pointer. This this says "get the array of 10 ints name inside the struct dog frodo, decay it to a pointer to the beginning of the series of ints in name and assign that pointer to the pointer to ints called ptr". A proper assignment. Note that @tstanisl's answer is wrong in one place; it declares ptr to be a "pointer to char". But frodo.name consists of ints not chars, so his assignment is invalid. The expression &frodo.name[0] says "get the first int in the array of 10 ints frodo.name and take its address, yielding a "pointer to the first int in frodo.name". But the address of the first int here is also the address of the series of ints in frodo.name. So the address &frodo.name and frodo.name when it is "decayed" to a pointer-to-int have the same value and type. Here is your code, fixed with the above in mind: #include <stdio.h> struct dog { int name[10]; char breed[10]; int age; char color[10]; }; int main() { struct dog frodo; int *ptr= frodo.name; for(int i=0;i<10;i++) frodo.name[i]=i; for(int i=0;i<10;i++) printf(" frodo.name[%d]%d\n",i,frodo.name[i]); for(int i=0;i<10;i++) printf(" ptr =%d\n",ptr[i]); }
How to Access a array in the structure after assigning the structure variable to a pointer without arrow operator
#include <stdio.h> struct dog { int name[10]; char breed[10]; int age; char color[10]; }; int main() { struct dog frodo; struct dog **ptr=&frodo.name; for(int i=0;i<10;i++) frodo.name[i]=i; for(int i=0;i<10;i++) printf(" frodo.name[%d]%d\n",i,frodo.name[i]); for(int i=0;i<10;i++) printf(" ptr =%d\n",ptr[i]); } I tried using double pointer but the index is not matching when value is being printed. O/P: frodo.name[0]0 frodo.name[1]1 ptr =0 ptr =2
[ "I expect that ptr should be a pointer to the first element of frodo.name which is array of 10 ints. Therefore the type of ptr should be int*. Use:\nint *ptr=&frodo.name[0];\n\n", "Your question indicates that you are new to C and still learning fundamentals,\nso in order to help you I'll go through your code and explain what it does, and\nthen provide that code fixed to do what I think you are trying to do. I will\ntry to be precise and thorough at the risk of sounding pedantic.\nConsider this declaration:\nint name[10];\n\nThis defines name as a field in struct dog which declares name to be the\ntype of \"an array of 10 ints\". Note that an array can be assigned to a\npointer to its \"inner\" type (int), in which assignment it is said to \"decay\"\ninto a \"pointer to int\". So int * pint = name; is valid code which show\nthis facility in C. None-the-less, the type of name is \"array of 10 ints\".\nThis if you take it's address (&name) you don't get a \"pointer to int; you\nget a \"pointer to an array of 10 ints\".\n#include <stdio.h>\n\nstruct dog\n{\n int name[10];\n char breed[10];\n int age;\n char color[10];\n};\n\nSo you declare a new \"aggregate type\", struct dog, whose objects will contain\na field called name whose type is an \"array of 10 ints\", followed by a field\ncalled breed whose type is an \"array of 10 chars\", followed by a field age\nwhose type is int, then a field color with type \"array of 10 chars\".\nint main()\n{\nstruct dog frodo;\n\nDefine an object frodo of type struct dog, which then contains all of the\nfields of a struct dog. It is allocated memory on the stack, which will be\ncleaned up and removed when main returns.\nstruct dog **ptr=&frodo.name;\n\nHere is the first sign of misunderstanding. If you were to declare a variable\nptr_to_dog like this: struct dog * ptr_to_dog, you would declare it as a\npointer (\"address of\") an object whose type is struct dog, or \"pointer to\nstruct dog\", for short. frodo is such an object, so setting pointer to this\naddress of frodo is valid: ptr_to_dog = &frodo;. &frodo is an expression\nwhich takes the address of the frodo object, so the type of this expression\n\"pointer to struct dog\", which matches the type of ptr_to_dog, so the\nassignment is correct.\nYour declaration struct dog **ptr creates an object ptr whose type is\n\"pointer to a pointer to a struct dog. If you take the address of\nptr_to_dog, you get an expression whose type is \"pointer to a pointer to\nstruct dog. This is the same type as the ptr you declared, so if you were\nto make the assignment ptr = &ptr_to_dog, that is valid.\nNow let's look at the right side of the assignment, &frodo.name. Note that\n. is said to \"bind tighter\" than the &, so this is the same as\n&(frodo.name). The expression frodo.name gets the value of the field name\nin the foo object, which is of type struct dog. So the expression\n&frodo.name gives a pointer to that field, so its type is \"pointer to the\nfield name in the frodo object. The type of name is \"array of 10 ints\",\nso &frodo.name is \"pointer to an array of 10 ints\". So the declaration you\ngave could be described as \"get the pointer to the array of 10 ints name and\nassign it to the \"pointer to the pointer to the struct dog foo\" ptr. The\ntypes are not compatible. However, all pointers can be reduced to a kind of\nunnamed type \"pointer to memory\", so the compiler can convert the \"pointer to 10\nints\" &frodo.name to the \"pointer to the pointer to frodo\" ptr. This is\nalmost never usefull, so most compiler will emit a warning about this.\nNow you could do this struct dog * ptr_to_dog = &frodo, which assigns the address of\nfrodo to the \"pointer to a struct dog\" ptr_to_dog, so here the types match and\nthe assignment is proper. Note that a \"pointer to struct dog\" does not mean\n\"pointer to a single struct dog\"; so pointer could point to memory that hold\nan indefinite series of struct dogs in succession, ie an array. So you can do\nptr_to_dog[3], which means \"get the fourth struct dog from the series of\nstruct dogs beginning at the address ptr_to_dog. If ptr_to_dog contains\nonly one struct dog, what you get from that will be meaningless, but it is a\nvalid expression.\nfor(int i=0;i<10;i++)\nfrodo.name[i]=i;\n\nfor(int i=0;i<10;i++)\nprintf(\" frodo.name[%d]%d\\n\",i,frodo.name[i]);\n\nfor(int i=0;i<10;i++)\nprintf(\" ptr =%d\\n\",ptr[i]);\n}\n\nRemember that the type of ptr is \"pointer to a pointer to a struct dog, but\ngiven your assignment, ptr does not point to a pointer to a struct dog; it\npoints to an \"array of 10 ints\" in the field name of the struct dog\nobject frodo. So the value of ptr here is meaningless; its value is some\nsection of the memory of the name array interpreted as a \"pointer to a pointer\nto struct dog\". From this I take it that you wanted ptr to point to the\n\"array of 10 ints\" called name in the struct dog object frodo. To get\nthis, the declaration should have been\nint * ptr = frodo.name;\n\nRemember that an array type (name) can be assigned to a pointer to its inner\ntype (int), in which the array expression (frodo.name) will \"decay\" into\nthat type of pointer. This this says \"get the array of 10 ints name inside\nthe struct dog frodo, decay it to a pointer to the beginning of the series\nof ints in name and assign that pointer to the pointer to ints called\nptr\". A proper assignment.\nNote that @tstanisl's answer is wrong in one place; it declares ptr to be a\n\"pointer to char\". But frodo.name consists of ints not chars, so his\nassignment is invalid. The expression &frodo.name[0] says \"get the first\nint in the array of 10 ints frodo.name and take its address, yielding a\n\"pointer to the first int in frodo.name\". But the address of the first\nint here is also the address of the series of ints in frodo.name. So\nthe address &frodo.name and frodo.name when it is \"decayed\" to a\npointer-to-int have the same value and type.\nHere is your code, fixed with the above in mind:\n#include <stdio.h>\n\nstruct dog\n{\n int name[10];\n char breed[10];\n int age;\n char color[10];\n};\n\nint main()\n{\nstruct dog frodo;\nint *ptr= frodo.name;\n\nfor(int i=0;i<10;i++)\nfrodo.name[i]=i;\n\nfor(int i=0;i<10;i++)\nprintf(\" frodo.name[%d]%d\\n\",i,frodo.name[i]);\n\nfor(int i=0;i<10;i++)\nprintf(\" ptr =%d\\n\",ptr[i]);\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "c", "double_pointer", "pointers", "structure" ]
stackoverflow_0074656214_c_double_pointer_pointers_structure.txt
Q: Delete record from an array of objects in javascript I have an array of objects that I get from an api, I get the data but I want to remove the ones that have a finish status after x time. First I must show all the records, after a certain time the records with FINISH status must be deleted I am using vue. This is the response I get: [ { "id": "289976", "status": "FINISH" }, { "id": "302635", "status": "PROGRESS" }, { "id": "33232", "status": "PROGRESS" } ] This is the method that obtains the information: I use setTimeout to be able to delete the records with FINISH status after a certain time getTurns() { fetch('ENPOINT', { method: 'POST', body: JSON.stringify({id: this.selected}), headers: { 'Content-Type': 'application/json' } }).then(response => response.json()) .then(data => { this.turns = data; data.forEach(turn => { if(turn.status == 'FINISH'){ setTimeout(() => { this.turns = data.filter(turn => turn.status !== 'FINISH'); }, 6000); } }); }) .catch(error => console.error(error)); } I have tried going through the array and making a conditional and it works for me, but when I call the method again I get the records with FINISH status again. I need to call the method every time since the data is updated mounted () { this.getTurns(); setInterval(() => { this.getTurns(); }, 5000); } maybe I need to order in another way, or that another javascript method I can use A: filter is exactly what you need. I don't get why you wrap everything in setInterval and wait for 5 or 6 seconds. Why don't you return the filtered data instead? return data.filter(turn -> turn.status !== 'FINISHED'); A: You mistake in this place this.turns = data; It put data in component property turns before filter; Do it after filter: .then(data => { // get before filter this.turns = data; // filter data after 6 sec setTimeout(() => { data.forEach(turn => { this.turns = data.filter(turn => turn.status !== 'FINISH'); }); }, 6000) }) Sorry, but I don't understand why you use setTimeout inside fetch. Do you sure that it necessary? A: You can avoid the setTimeout() delay if you take the promise as what it is: a promise that some data will be there! The following snippet will provide the data in the global variable turns as soon as it has been received from the remote data source (in this example just a sandbox server). The data is then filtered to exclude any entry where the property .company.catchphrase includes the word "zero" and placed into the global variabe turns. The callback in the .then()after the function getTurns() (which returns a promise!) will only be fired once the promise has been resolved. var turns; // global variable function getTurns() { return fetch("https://jsonplaceholder.typicode.com/users") .then(r => r.json()).then(data => turns=data.filter(turn=>!turn.company.catchPhrase.includes("zero")) ) .catch(error => console.error(error)); } getTurns().then(console.log);
Delete record from an array of objects in javascript
I have an array of objects that I get from an api, I get the data but I want to remove the ones that have a finish status after x time. First I must show all the records, after a certain time the records with FINISH status must be deleted I am using vue. This is the response I get: [ { "id": "289976", "status": "FINISH" }, { "id": "302635", "status": "PROGRESS" }, { "id": "33232", "status": "PROGRESS" } ] This is the method that obtains the information: I use setTimeout to be able to delete the records with FINISH status after a certain time getTurns() { fetch('ENPOINT', { method: 'POST', body: JSON.stringify({id: this.selected}), headers: { 'Content-Type': 'application/json' } }).then(response => response.json()) .then(data => { this.turns = data; data.forEach(turn => { if(turn.status == 'FINISH'){ setTimeout(() => { this.turns = data.filter(turn => turn.status !== 'FINISH'); }, 6000); } }); }) .catch(error => console.error(error)); } I have tried going through the array and making a conditional and it works for me, but when I call the method again I get the records with FINISH status again. I need to call the method every time since the data is updated mounted () { this.getTurns(); setInterval(() => { this.getTurns(); }, 5000); } maybe I need to order in another way, or that another javascript method I can use
[ "filter is exactly what you need. I don't get why you wrap everything in setInterval and wait for 5 or 6 seconds.\nWhy don't you return the filtered data instead?\nreturn data.filter(turn -> turn.status !== 'FINISHED');\n\n", "You mistake in this place\nthis.turns = data;\nIt put data in component property turns before filter;\nDo it after filter:\n\n\n.then(data => {\n // get before filter\n this.turns = data;\n \n // filter data after 6 sec\n setTimeout(() => {\n data.forEach(turn => {\n this.turns = data.filter(turn => turn.status !== 'FINISH');\n });\n }, 6000)\n})\n\n\n\nSorry, but I don't understand why you use setTimeout inside fetch. Do you sure that it necessary?\n", "You can avoid the setTimeout() delay if you take the promise as what it is: a promise that some data will be there!\nThe following snippet will provide the data in the global variable turns as soon as it has been received from the remote data source (in this example just a sandbox server). The data is then filtered to exclude any entry where the property .company.catchphrase includes the word \"zero\" and placed into the global variabe turns. The callback in the .then()after the function getTurns() (which returns a promise!) will only be fired once the promise has been resolved.\n\n\nvar turns; // global variable \nfunction getTurns() {\n return fetch(\"https://jsonplaceholder.typicode.com/users\")\n .then(r => r.json()).then(data =>\n turns=data.filter(turn=>!turn.company.catchPhrase.includes(\"zero\"))\n )\n .catch(error => console.error(error));\n}\ngetTurns().then(console.log);\n\n\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "javascript", "vue.js" ]
stackoverflow_0074656973_javascript_vue.js.txt
Q: Count column values in SSRS, VB.NET How to create selected value count in ssrs(Expresion). In my table I want "P"(selected value) count in column field.. Could someone guide me. Thank You.... Table1: 02-12-2022 03-12-2022 04-12-2022 05-12-2022 06-12-2022 08-12-2022 Presentdays P P A P P P 5 A: You can do it with =CountDistinct(Fields!DateObs.Value) I first created a sample dataset with cteSampleData as ( SELECT * FROM (VALUES (CONVERT(DATE,'2022-11-15'),'Red', 3) , ('2022-11-15','Blue', 2), ('2022-11-15','Green', 4) , ('2022-11-16','Red', 1), ('2022-11-16','Blue', 4), ('2022-11-16','Green', 2) , ('2022-11-17','Red', 2), ('2022-11-17','Blue', 1) , ('2022-11-18','Red', 4), ('2022-11-18','Blue', 3), ('2022-11-18','Green', 1) , ('2022-11-19','Red', 5), ('2022-11-19','Green', 3) ) as Samp(DateObs, ColorObs, ColorCount) ) SELECT * FROM cteSampleData as S WHERE DateObs < '2022-11-20' DateObs ColorObs ColorCount 2022-11-15 Red 3 2022-11-15 Blue 2 2022-11-15 Green 4 2022-11-16 Red 1 2022-11-16 Blue 4 2022-11-16 Green 2 2022-11-17 Red 2 2022-11-17 Blue 1 2022-11-18 Red 4 2022-11-18 Blue 3 2022-11-18 Green 1 2022-11-19 Red 5 2022-11-19 Green 3 Then I created a report with a matrix using it Note that the CountDistinct in the header gives the total columns (of just the dates) and in the Details row gives the count for that value.
Count column values in SSRS, VB.NET
How to create selected value count in ssrs(Expresion). In my table I want "P"(selected value) count in column field.. Could someone guide me. Thank You.... Table1: 02-12-2022 03-12-2022 04-12-2022 05-12-2022 06-12-2022 08-12-2022 Presentdays P P A P P P 5
[ "You can do it with\n=CountDistinct(Fields!DateObs.Value)\n\nI first created a sample dataset\nwith cteSampleData as (\n SELECT * FROM (VALUES \n (CONVERT(DATE,'2022-11-15'),'Red', 3)\n , ('2022-11-15','Blue', 2), ('2022-11-15','Green', 4)\n , ('2022-11-16','Red', 1), ('2022-11-16','Blue', 4), ('2022-11-16','Green', 2)\n , ('2022-11-17','Red', 2), ('2022-11-17','Blue', 1)\n , ('2022-11-18','Red', 4), ('2022-11-18','Blue', 3), ('2022-11-18','Green', 1)\n , ('2022-11-19','Red', 5), ('2022-11-19','Green', 3)\n ) as Samp(DateObs, ColorObs, ColorCount)\n)\nSELECT * \nFROM cteSampleData as S\nWHERE DateObs < '2022-11-20'\n \n\n\n\n\n\nDateObs\nColorObs\nColorCount\n\n\n\n\n2022-11-15\nRed\n3\n\n\n2022-11-15\nBlue\n2\n\n\n2022-11-15\nGreen\n4\n\n\n2022-11-16\nRed\n1\n\n\n2022-11-16\nBlue\n4\n\n\n2022-11-16\nGreen\n2\n\n\n2022-11-17\nRed\n2\n\n\n2022-11-17\nBlue\n1\n\n\n2022-11-18\nRed\n4\n\n\n2022-11-18\nBlue\n3\n\n\n2022-11-18\nGreen\n1\n\n\n2022-11-19\nRed\n5\n\n\n2022-11-19\nGreen\n3\n\n\n\n\nThen I created a report with a matrix using it\n\nNote that the CountDistinct in the header gives the total columns (of just the dates) and in the Details row gives the count for that value.\n" ]
[ 0 ]
[]
[]
[ "reporting_services", "vb.net" ]
stackoverflow_0074655925_reporting_services_vb.net.txt
Q: Exception middleware not catching exceptions I've created an exception handling middleware class: namespace FileSharingApp.API.Middleware { public class GlobalErrorHandlingMiddleware { private readonly RequestDelegate _next; public GlobalErrorHandlingMiddleware(RequestDelegate next) { _next = next; } public async Task InvokeAsync(HttpContext context) { try { await _next(context); } catch (Exception ex) { await HandleExceptionAsync(context, ex); } } private static Task HandleExceptionAsync(HttpContext context, Exception exception) { HttpStatusCode status; var exceptionType = exception.GetType(); if (exceptionType == typeof(InvalidOperationException)) { status = HttpStatusCode.NotFound; } else if (exceptionType == typeof(PasswordIncorrectException)) { status = HttpStatusCode.Unauthorized; } else { status = HttpStatusCode.InternalServerError; } var exceptionResult = JsonSerializer.Serialize(new { error = exception.Message, exception.StackTrace }); context.Response.ContentType = "application/json"; context.Response.StatusCode = (int)status; return context.Response.WriteAsync(exceptionResult); } I then created an extension method to add the middleware: public static class ApplicationBuilderExtensions { public static IApplicationBuilder AddGlobalErrorHandler(this IApplicationBuilder applicationBuilder) => applicationBuilder.UseMiddleware<GlobalErrorHandlingMiddleware>(); } And then called this in my program.cs class: var app = builder.Build(); app.AddGlobalErrorHandler(); app.UseExceptionHandler("/error"); if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseRouting(); app.UseCors(MyAllowSpecificOrigins); app.UseHttpsRedirection(); app.UseAuthentication(); app.UseAuthorization(); app.MapControllers(); app.Run(); I've tested this by throwing an exception from an end point: [HttpPost("Login")] public async Task<ActionResult<UserDto>> LoginUser([FromBody] LoginDto loginDto) { throw new InvalidOperationException("Password is incorrect"); } But the HandleExceptionAsync method in my middleware class is not being hit (I've added a breakpoint) and a standard 500 response is being returned to the client. Can anybody see where I'm going wrong? A: Mine works just fine. Maybe it's something with where you're putting app.AddGlobalErrorHandler(); Please check my github repo on https://github.com/lucasdsalves/stack-overflow-74655126
Exception middleware not catching exceptions
I've created an exception handling middleware class: namespace FileSharingApp.API.Middleware { public class GlobalErrorHandlingMiddleware { private readonly RequestDelegate _next; public GlobalErrorHandlingMiddleware(RequestDelegate next) { _next = next; } public async Task InvokeAsync(HttpContext context) { try { await _next(context); } catch (Exception ex) { await HandleExceptionAsync(context, ex); } } private static Task HandleExceptionAsync(HttpContext context, Exception exception) { HttpStatusCode status; var exceptionType = exception.GetType(); if (exceptionType == typeof(InvalidOperationException)) { status = HttpStatusCode.NotFound; } else if (exceptionType == typeof(PasswordIncorrectException)) { status = HttpStatusCode.Unauthorized; } else { status = HttpStatusCode.InternalServerError; } var exceptionResult = JsonSerializer.Serialize(new { error = exception.Message, exception.StackTrace }); context.Response.ContentType = "application/json"; context.Response.StatusCode = (int)status; return context.Response.WriteAsync(exceptionResult); } I then created an extension method to add the middleware: public static class ApplicationBuilderExtensions { public static IApplicationBuilder AddGlobalErrorHandler(this IApplicationBuilder applicationBuilder) => applicationBuilder.UseMiddleware<GlobalErrorHandlingMiddleware>(); } And then called this in my program.cs class: var app = builder.Build(); app.AddGlobalErrorHandler(); app.UseExceptionHandler("/error"); if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseRouting(); app.UseCors(MyAllowSpecificOrigins); app.UseHttpsRedirection(); app.UseAuthentication(); app.UseAuthorization(); app.MapControllers(); app.Run(); I've tested this by throwing an exception from an end point: [HttpPost("Login")] public async Task<ActionResult<UserDto>> LoginUser([FromBody] LoginDto loginDto) { throw new InvalidOperationException("Password is incorrect"); } But the HandleExceptionAsync method in my middleware class is not being hit (I've added a breakpoint) and a standard 500 response is being returned to the client. Can anybody see where I'm going wrong?
[ "\nMine works just fine.\nMaybe it's something with where you're putting app.AddGlobalErrorHandler();\nPlease check my github repo on https://github.com/lucasdsalves/stack-overflow-74655126\n" ]
[ 0 ]
[]
[]
[ ".net", "c#", "exception" ]
stackoverflow_0074655126_.net_c#_exception.txt
Q: not able to invoke chrome browser in eclipse using selenium package webdriverbasic; import org.openqa.selenium.WebDriver; import org.openqa.selenium.chrome.ChromeDriver; public class Webdriverbasicclass { public static void main(String[] args) { System.setProperty("webdriver.chrome.driver","C:\\Users\\Sammy\\Downloads\\chromedriver_win32\\chromedriver.exe"); WebDriver driver=new ChromeDriver(); } } Exception in thread "main" java.lang.NoClassDefFoundError: org/openqa/selenium/chrome/ChromeDriver at webdriver/webdriverbasic.Webdriverbasicclass.main(Webdriverbasicclass.java:10) Caused by: java.lang.ClassNotFoundException: org.openqa.selenium.chrome.ChromeDriver at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) ... 1 more exception in thread main java lang noclassdeffounderror its my first program in eclipse with selenium and i got this error and not able to invoke the browser A: It looks like a problem with the selenium jar. You havent referenced it properly. The best option is to create gradle/maven project and add the selenium as depedency. Here is an example how to add it in maven pom file: <dependencies> <dependency> <groupId>org.seleniumhq.selenium</groupId> <artifactId>selenium-java</artifactId> <version>4.6.0</version> </dependency> </dependencies> The path to webdriver looks OK. A: First way: You should download selenium jar files then add it to your project. Right click on your project > build path > configure build path Select java build path > libraries > add external Jars. browse where you add selenium jars then select it. Second: You can create a maven project which is a management tool you can install its plugin from eclipse marketplace. open pom.xml file in the project then under dependencies, add the following code: <dependency> <groupId>org.seleniumhq.selenium</groupId> <artifactId>selenium-java</artifactId> <version>4.6.0</version> </dependency>
not able to invoke chrome browser in eclipse using selenium
package webdriverbasic; import org.openqa.selenium.WebDriver; import org.openqa.selenium.chrome.ChromeDriver; public class Webdriverbasicclass { public static void main(String[] args) { System.setProperty("webdriver.chrome.driver","C:\\Users\\Sammy\\Downloads\\chromedriver_win32\\chromedriver.exe"); WebDriver driver=new ChromeDriver(); } } Exception in thread "main" java.lang.NoClassDefFoundError: org/openqa/selenium/chrome/ChromeDriver at webdriver/webdriverbasic.Webdriverbasicclass.main(Webdriverbasicclass.java:10) Caused by: java.lang.ClassNotFoundException: org.openqa.selenium.chrome.ChromeDriver at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) ... 1 more exception in thread main java lang noclassdeffounderror its my first program in eclipse with selenium and i got this error and not able to invoke the browser
[ "It looks like a problem with the selenium jar. You havent referenced it properly.\nThe best option is to create gradle/maven project and add the selenium as depedency.\nHere is an example how to add it in maven pom file:\n <dependencies>\n <dependency>\n <groupId>org.seleniumhq.selenium</groupId>\n <artifactId>selenium-java</artifactId>\n <version>4.6.0</version>\n </dependency>\n </dependencies>\n\nThe path to webdriver looks OK.\n", "First way:\nYou should download selenium jar files then add it to your project.\nRight click on your project > build path > configure build path\nSelect java build path > libraries > add external Jars.\nbrowse where you add selenium jars then select it.\nSecond:\nYou can create a maven project which is a management tool you can install its plugin from eclipse marketplace.\nopen pom.xml file in the project then under dependencies, add the following code:\n <dependency>\n <groupId>org.seleniumhq.selenium</groupId>\n <artifactId>selenium-java</artifactId>\n <version>4.6.0</version>\n </dependency>\n\n" ]
[ 0, 0 ]
[]
[]
[ "eclipse", "java", "noclassdeffounderror", "selenium" ]
stackoverflow_0074652211_eclipse_java_noclassdeffounderror_selenium.txt
Q: ERROR - Unable to load signing key. Can't sign my text with my Private Key - PHP I can't instanciate my $pemkey whit a relative path in my php code. When I try to instanciate my key with openssl_pkey_get_private the program doesn't find it. Here is my code : $pemkey = openssl_pkey_get_private("file:///licensePrivateKey.pem"); if (! $pemkey ){ echo "ERROR - Unable to load signing key."; die(); } And here are my files : folder download_file.php licensePrivateKey.pem (Sorry can't display images lol) A: Check this note on PHP manual website https://www.php.net/manual/en/function.openssl-pkey-get-private.php#114998 So you only have to concatenate "file://" with an existing path string in every case This looks like an absolute path /licensePrivateKey.pem You are using windows or unix system?
ERROR - Unable to load signing key. Can't sign my text with my Private Key - PHP
I can't instanciate my $pemkey whit a relative path in my php code. When I try to instanciate my key with openssl_pkey_get_private the program doesn't find it. Here is my code : $pemkey = openssl_pkey_get_private("file:///licensePrivateKey.pem"); if (! $pemkey ){ echo "ERROR - Unable to load signing key."; die(); } And here are my files : folder download_file.php licensePrivateKey.pem (Sorry can't display images lol)
[ "Check this note on PHP manual website https://www.php.net/manual/en/function.openssl-pkey-get-private.php#114998\nSo you only have to concatenate \"file://\" with an existing path string in every case\nThis looks like an absolute path /licensePrivateKey.pem\nYou are using windows or unix system?\n" ]
[ 0 ]
[]
[]
[ "openssl", "php", "php_openssl", "private_key" ]
stackoverflow_0074657292_openssl_php_php_openssl_private_key.txt
Q: Get the class name as tag from which the method is being called - Android Logging I am trying to create a Logging class which only logs if the build is a debug build. This is my class import me.entri.entrime.BuildConfig import me.entri.entrime.utils.Constants object Logger { private val TAG = Constants.LOGGING_TAG @JvmStatic fun d(message : Any?){ if (BuildConfig.DEBUG) Log.d(TAG , message.toString()) } @JvmStatic fun d(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.d(TAG , message.toString(), e) } @JvmStatic fun e(message : Any?){ if (BuildConfig.DEBUG) Log.e(TAG , message.toString()) } @JvmStatic fun e(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.e(TAG , message.toString(), e) } @JvmStatic fun w(message : Any?){ if (BuildConfig.DEBUG) Log.w(TAG , message.toString()) } @JvmStatic fun w(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.w(TAG , message.toString(), e) } @JvmStatic fun v(message : Any?){ if (BuildConfig.DEBUG) Log.v(TAG , message.toString()) } @JvmStatic fun v(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.v(TAG , message.toString(), e) } } As you can see currently I am hard coding the TAG with a string . but instead I want to set the TAG as the class name from which this Log method has been called. For eg :- This is splash.kt class and if I call try{ val error = 4384/0 //arithmetic error for testing. }catch(e : Exception){ Logger.e("message") } Then my LogCat should show the class name as the TAG . ie Splash.kt I tried to dissect Jake Wharton's timber library which has this functionality. In that he seems to do something like this to get the class name from the stacktrace. @get:JvmSynthetic internal val explicitTag = ThreadLocal<String>() @get:JvmSynthetic internal open val tag: String? get() { val tag = explicitTag.get() if (tag != null) { explicitTag.remove() } return tag } And if the above tag is null then something like this. override val tag: String? get() = super.tag ?: Throwable().stackTrace I do not understand how both of these works , and even though I tried to use the same , I could not get the class name from which my Logger class was called. Instead as a stacktrace class name all I got was my Logger class name itself. So I wanted to know is there a any way to get the class name from which the my Logger class would be called. Thank you A: You can get caller class name through couple of ways. 1) Using StackWalker val className = StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE).callerClass For using StackWalker you need Java 9 or Greater 2) Using Thread StackTrace This will get the stacktrace of current thread. val className = Thread.currentThread().stackTrace[2].className 3) By creating an exception and getting its stacktrace. val className = Exception().stackTrace[1].className
Get the class name as tag from which the method is being called - Android Logging
I am trying to create a Logging class which only logs if the build is a debug build. This is my class import me.entri.entrime.BuildConfig import me.entri.entrime.utils.Constants object Logger { private val TAG = Constants.LOGGING_TAG @JvmStatic fun d(message : Any?){ if (BuildConfig.DEBUG) Log.d(TAG , message.toString()) } @JvmStatic fun d(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.d(TAG , message.toString(), e) } @JvmStatic fun e(message : Any?){ if (BuildConfig.DEBUG) Log.e(TAG , message.toString()) } @JvmStatic fun e(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.e(TAG , message.toString(), e) } @JvmStatic fun w(message : Any?){ if (BuildConfig.DEBUG) Log.w(TAG , message.toString()) } @JvmStatic fun w(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.w(TAG , message.toString(), e) } @JvmStatic fun v(message : Any?){ if (BuildConfig.DEBUG) Log.v(TAG , message.toString()) } @JvmStatic fun v(message: Any? , e : Exception?){ if (BuildConfig.DEBUG) Log.v(TAG , message.toString(), e) } } As you can see currently I am hard coding the TAG with a string . but instead I want to set the TAG as the class name from which this Log method has been called. For eg :- This is splash.kt class and if I call try{ val error = 4384/0 //arithmetic error for testing. }catch(e : Exception){ Logger.e("message") } Then my LogCat should show the class name as the TAG . ie Splash.kt I tried to dissect Jake Wharton's timber library which has this functionality. In that he seems to do something like this to get the class name from the stacktrace. @get:JvmSynthetic internal val explicitTag = ThreadLocal<String>() @get:JvmSynthetic internal open val tag: String? get() { val tag = explicitTag.get() if (tag != null) { explicitTag.remove() } return tag } And if the above tag is null then something like this. override val tag: String? get() = super.tag ?: Throwable().stackTrace I do not understand how both of these works , and even though I tried to use the same , I could not get the class name from which my Logger class was called. Instead as a stacktrace class name all I got was my Logger class name itself. So I wanted to know is there a any way to get the class name from which the my Logger class would be called. Thank you
[ "You can get caller class name through couple of ways.\n1) Using StackWalker\nval className = StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE).callerClass\n\nFor using StackWalker you need Java 9 or Greater\n2) Using Thread StackTrace\nThis will get the stacktrace of current thread.\nval className = Thread.currentThread().stackTrace[2].className\n\n3) By creating an exception and getting its stacktrace.\nval className = Exception().stackTrace[1].className\n\n" ]
[ 0 ]
[]
[]
[ "android", "java", "kotlin", "logging", "stack_trace" ]
stackoverflow_0074648954_android_java_kotlin_logging_stack_trace.txt
Q: How do I find how much memory is taken by a struct? Simple question: Is there a way how to find out how much memory is taken by particular struct? Ideally I would like it printed to console. Edit: Krumelur came with simple solution using sizeof function. Unfortunatelly it does not seems to work well with arrays. Following code println("Size of int \(123) is: \(sizeofValue(123))") println("Size of array \([0]) is: \(sizeofValue([0]))") println("Size of array \([0, 1, 8, 20]) is: \(sizeofValue([0, 1, 8, 20]))") Produces this output: Size of int 123 is: 8 Size of array [0] is: 8 Size of array [0, 1, 8, 20] is: 8 So different sizes of arrays give same size what is surely incorrect (at least for my purpose). A: The sizeof(T) operator is available in Swift. It returns the size taken up by the specified type or variable, just like in C. Unlike C, however, there is no concept of a stack-allocated array (static array). An array is a pointer to an object, meaning that its size will always be a size of a pointer (this is the same as for heap-allocated arrays in C). To get the size of an array, you have to do something like array.count * sizeof(Telement) but even that is only true if Telement is not an object that allocates heap memory. A: This appears to be supported within the Swift standard library now. Docs MemoryLayout.size(ofValue: self) A: As Declan McKenna pointed out, MemoryLayout.size is now a part of the standard library ("Foundation"). You use this in one of two ways: either you get the size of a type via <> bracket syntax, or of a value by calling it as a function: var someInt: Int let a = MemoryLayout.size(ofValue: someInt) let b = MemoryLayout<Int>.size /* a and b are equal */ Suppose you have an array arr of type T, you can get the allocated size as follows: let size = arr.capacity * MemoryLayout<T>.size Note that you should use arr.capacity, not arr.count. Capacity refers to how much memory has been reserved for the array, even if all the values haven't been written to. Another thing to note is that memory allocations can be a bit tricky. If you allocate a large block of memory and never write to it, the operating system might not report the application as actually using that memory. The operating system might let you malloc some enormous block of memory that's a thousand times larger than your actual memory, but if you never actually write anything to it, it won't really get allocated. The method described in this answer is for getting the hypothetical maximum amount allocated by the array.
How do I find how much memory is taken by a struct?
Simple question: Is there a way how to find out how much memory is taken by particular struct? Ideally I would like it printed to console. Edit: Krumelur came with simple solution using sizeof function. Unfortunatelly it does not seems to work well with arrays. Following code println("Size of int \(123) is: \(sizeofValue(123))") println("Size of array \([0]) is: \(sizeofValue([0]))") println("Size of array \([0, 1, 8, 20]) is: \(sizeofValue([0, 1, 8, 20]))") Produces this output: Size of int 123 is: 8 Size of array [0] is: 8 Size of array [0, 1, 8, 20] is: 8 So different sizes of arrays give same size what is surely incorrect (at least for my purpose).
[ "The sizeof(T) operator is available in Swift. It returns the size taken up by the specified type or variable, just like in C.\nUnlike C, however, there is no concept of a stack-allocated array (static array). An array is a pointer to an object, meaning that its size will always be a size of a pointer (this is the same as for heap-allocated arrays in C). To get the size of an array, you have to do something like\narray.count * sizeof(Telement)\n\nbut even that is only true if Telement is not an object that allocates heap memory.\n", "This appears to be supported within the Swift standard library now.\nDocs\nMemoryLayout.size(ofValue: self)\n\n", "As Declan McKenna pointed out, MemoryLayout.size is now a part of the standard library (\"Foundation\").\nYou use this in one of two ways: either you get the size of a type via <> bracket syntax, or of a value by calling it as a function:\nvar someInt: Int\nlet a = MemoryLayout.size(ofValue: someInt)\nlet b = MemoryLayout<Int>.size\n/* a and b are equal */\n\nSuppose you have an array arr of type T, you can get the allocated size as follows:\nlet size = arr.capacity * MemoryLayout<T>.size\n\nNote that you should use arr.capacity, not arr.count. Capacity refers to how much memory has been reserved for the array, even if all the values haven't been written to.\nAnother thing to note is that memory allocations can be a bit tricky. If you allocate a large block of memory and never write to it, the operating system might not report the application as actually using that memory. The operating system might let you malloc some enormous block of memory that's a thousand times larger than your actual memory, but if you never actually write anything to it, it won't really get allocated.\nThe method described in this answer is for getting the hypothetical maximum amount allocated by the array.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "ios", "memory", "memory_management", "swift" ]
stackoverflow_0031575849_ios_memory_memory_management_swift.txt
Q: Issue when reading csv file from url using pandas.read_csv I am trying to import a csv file from the following url "https://www.marketwatch.com/games/stackoverflowq/download?view=holdings&pub=4JwsLs_Gm4kj&isDownload=true" using the pandas read_csv function. However, I get the following error: StopIteration: The above exception was the direct cause of the following exception: ... --> 386 raise EmptyDataError("No columns to parse from file") from err 388 line = self.names[:] 390 this_columns: list[Scalar | None] = [] EmptyDataError: No columns to parse from file Downloading the csv manually and then reading it with pd.read_csv yields the expected output without issues. As I need to repeat this for multiple csvs, I would like to directly import the csvs without having to manually download them each time. I have also tried this solution https://stackoverflow.com/questions/47243024/pandas-read-csv-on-dynamic-url-gives-emptydataerror-no-columns-to-parse-from-fi[](https://www.stackoverflow.com/), which also resulted in the 'No columns to parse from file' error. I could only find a link from the html and the button on the website, without a .csv ending: <a href="/games/stackoverflowq/download?view=holdings&amp;pub=4JwsLs_Gm4kj&amp;isDownload=true" download="Holdings - Stack Overflowq.csv" rel="nofollow">Download</a> Edit: Cleaned up the question in case somebody has a similar issue. A: The issue was indeed that the data could only be accessed after logging in. I have managed to resolve it using Selenium and this answer. from io import StringIO import pandas as pd import requests from selenium import webdriver #start requests session with login from selenium driver s = requests.Session() selenium_user_agent = driver.execute_script("return navigator.userAgent;") s.headers.update({"user-agent": selenium_user_agent}) #copy cookies from selenium driver for cookie in driver.get_cookies(): s.cookies.set(cookie['name'], cookie['value'], domain=cookie['domain']) #read csv response = s.get(url) if response.ok: data = response.content.decode('utf8') df = pd.read_csv(StringIO(data))
Issue when reading csv file from url using pandas.read_csv
I am trying to import a csv file from the following url "https://www.marketwatch.com/games/stackoverflowq/download?view=holdings&pub=4JwsLs_Gm4kj&isDownload=true" using the pandas read_csv function. However, I get the following error: StopIteration: The above exception was the direct cause of the following exception: ... --> 386 raise EmptyDataError("No columns to parse from file") from err 388 line = self.names[:] 390 this_columns: list[Scalar | None] = [] EmptyDataError: No columns to parse from file Downloading the csv manually and then reading it with pd.read_csv yields the expected output without issues. As I need to repeat this for multiple csvs, I would like to directly import the csvs without having to manually download them each time. I have also tried this solution https://stackoverflow.com/questions/47243024/pandas-read-csv-on-dynamic-url-gives-emptydataerror-no-columns-to-parse-from-fi[](https://www.stackoverflow.com/), which also resulted in the 'No columns to parse from file' error. I could only find a link from the html and the button on the website, without a .csv ending: <a href="/games/stackoverflowq/download?view=holdings&amp;pub=4JwsLs_Gm4kj&amp;isDownload=true" download="Holdings - Stack Overflowq.csv" rel="nofollow">Download</a> Edit: Cleaned up the question in case somebody has a similar issue.
[ "The issue was indeed that the data could only be accessed after logging in.\nI have managed to resolve it using Selenium and this answer.\nfrom io import StringIO \nimport pandas as pd\nimport requests\nfrom selenium import webdriver\n\n#start requests session with login from selenium driver\ns = requests.Session()\nselenium_user_agent = driver.execute_script(\"return navigator.userAgent;\")\ns.headers.update({\"user-agent\": selenium_user_agent})\n\n#copy cookies from selenium driver\nfor cookie in driver.get_cookies():\n s.cookies.set(cookie['name'], cookie['value'], domain=cookie['domain'])\n\n#read csv\nresponse = s.get(url)\nif response.ok:\n data = response.content.decode('utf8') \n df = pd.read_csv(StringIO(data))\n \n \n\n" ]
[ 0 ]
[]
[]
[ "csv", "pandas", "python" ]
stackoverflow_0074605550_csv_pandas_python.txt
Q: discord.py how to make a command have a cooldown? want to have this command on cooldown for 30 seconds @client.command() @commands.cooldown(1,30,commands.BucketType.user) if message.content.startswith('!sg hunt') await message.channel.send('You hunted a...') @work.error async def work_error(ctx, error): if isinstance(error, commands.CommandOnCooldown): await ctx.send(f'This command is on cooldown, you can use it in {round(error.retry_after, 2)} seconds') tried this buckettype thing A: Weidong Zhu Robot! You probably forgot to make function definition. Also @client.command() returns context, not message, so don't forget about this too. There's your quick fix @client.command() @commands.cooldown(1,30,commands.BucketType.user) async def hunt_cmd(ctx): message = ctx.message if message.content.startswith('!sg hunt') await message.channel.send('You hunted a...')```
discord.py how to make a command have a cooldown?
want to have this command on cooldown for 30 seconds @client.command() @commands.cooldown(1,30,commands.BucketType.user) if message.content.startswith('!sg hunt') await message.channel.send('You hunted a...') @work.error async def work_error(ctx, error): if isinstance(error, commands.CommandOnCooldown): await ctx.send(f'This command is on cooldown, you can use it in {round(error.retry_after, 2)} seconds') tried this buckettype thing
[ "Weidong Zhu Robot!\nYou probably forgot to make function definition. Also @client.command() returns context, not message, so don't forget about this too.\nThere's your quick fix\[email protected]()\[email protected](1,30,commands.BucketType.user)\nasync def hunt_cmd(ctx):\n message = ctx.message\n if message.content.startswith('!sg hunt')\n await message.channel.send('You hunted a...')```\n\n" ]
[ 0 ]
[]
[]
[ "bots", "discord", "discord.py", "python" ]
stackoverflow_0074579743_bots_discord_discord.py_python.txt
Q: Open and close new tab with Selenium WebDriver in OS X I'm using the Firefox Webdriver in Python 2.7 on Windows to simulate opening (Ctrl+t) and closing (Ctrl + w) a new tab. Here's my code: from selenium import webdriver from selenium.webdriver.common.keys import Keys browser = webdriver.Firefox() browser.get('https://www.google.com') main_window = browser.current_window_handle # open new tab browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't') browser.get('https://www.yahoo.com') # close tab browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 'w') How to achieve the same on a Mac? Based on this comment one should use browser.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't') to open a new tab but I don't have a Mac to test it and what about the equivalent of Ctrl-w? Thanks! A: There's nothing easier and clearer than just running JavaScript. Open new tab: driver.execute_script("window.open('');") A: open a new tab: browser.get('http://www.google.com') close a tab: browser.close() switch to a tab: browser.swith_to_window(window_name) A: You can choose which window you want to close: window_name = browser.window_handles[0] Switch window: browser.switch_to.window(window_name=window_name) Then close it: browser.close() A: Just to combine the answers above for someone still curious. The below is based on Python 2.7 and a driver in Chrome. Open new tab by: driver.execute_script("window.open('"+URL+"', '__blank__');") where URL is a string such as "http://www.google.com". Close tab by: driver.close() [Note, this also doubles as driver.quit() when you only have 1 tab open]. Navigate between tabs by: driver.switch_to_window(driver.window_handles[0]) and driver.switch_to_window(driver.window_handles[1]). A: Open new tab: browser.execute_script("window.open('"+your url+"', '_blank')") Switch to new tab: browser.switch_to.window(windows[1]) A: IMHO all the above answers didn't exactly solve the original problem of closing a tab in a window and not closing the entire window or opening a blank tab. my solution: browser.switch_to.window("tab1") #change the tab1 to the id of the tab you want to close browser.execute_script("window.close('','_parent','');")
Open and close new tab with Selenium WebDriver in OS X
I'm using the Firefox Webdriver in Python 2.7 on Windows to simulate opening (Ctrl+t) and closing (Ctrl + w) a new tab. Here's my code: from selenium import webdriver from selenium.webdriver.common.keys import Keys browser = webdriver.Firefox() browser.get('https://www.google.com') main_window = browser.current_window_handle # open new tab browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't') browser.get('https://www.yahoo.com') # close tab browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 'w') How to achieve the same on a Mac? Based on this comment one should use browser.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't') to open a new tab but I don't have a Mac to test it and what about the equivalent of Ctrl-w? Thanks!
[ "There's nothing easier and clearer than just running JavaScript.\nOpen new tab:\ndriver.execute_script(\"window.open('');\")\n", "open a new tab:\nbrowser.get('http://www.google.com')\n\nclose a tab:\nbrowser.close()\n\nswitch to a tab:\nbrowser.swith_to_window(window_name)\n\n", "You can choose which window you want to close:\nwindow_name = browser.window_handles[0]\n\nSwitch window:\nbrowser.switch_to.window(window_name=window_name)\n\nThen close it:\nbrowser.close()\n\n", "Just to combine the answers above for someone still curious. The below is based on Python 2.7 and a driver in Chrome.\nOpen new tab by: driver.execute_script(\"window.open('\"+URL+\"', '__blank__');\")\nwhere URL is a string such as \"http://www.google.com\".\nClose tab by:\ndriver.close() [Note, this also doubles as driver.quit() when you only have 1 tab open].\nNavigate between tabs by: driver.switch_to_window(driver.window_handles[0])\nand driver.switch_to_window(driver.window_handles[1]).\n", "Open new tab:\nbrowser.execute_script(\"window.open('\"+your url+\"', '_blank')\")\n\nSwitch to new tab:\nbrowser.switch_to.window(windows[1])\n\n", "IMHO all the above answers didn't exactly solve the original problem of closing a tab in a window and not closing the entire window or opening a blank tab.\nmy solution:\nbrowser.switch_to.window(\"tab1\") #change the tab1 to the id of the tab you want to close\nbrowser.execute_script(\"window.close('','_parent','');\")\n\n" ]
[ 14, 12, 11, 6, 2, 0 ]
[]
[]
[ "macos", "python", "selenium" ]
stackoverflow_0025951968_macos_python_selenium.txt
Q: Flink State Across multiple Transformers How can I access a state using the same-id across multiple transformers, for example the following stores an Order object via ValueState in OrderMapper class: env.addSource(source1()).keyBy(Order::getId).flatMap(new OrderMapper()).addSink(sink1()); Now I would like to access the same Order object via a SubOrderMapper class: env.addSource(source2()).keyBy(SubOrder::getOrderId).flatMap(new SubOrderMapper()).addSink(sink2()); Edit: Looks like it's not possible to have state maintained across multiple operators, is there a way to have one operator accept multiple inputs, lets say 5 sources? A: Take a look at CoProcessFunction To realize low-level operations on two inputs, applications can use CoProcessFunction or KeyedCoProcessFunction. This function is bound to two different inputs and gets individual calls to processElement1(...) and processElement2(...) for records from the two different inputs. Also side outputs might be useful for you. side output Edit: Union operator my be an option. Union A: You can create a custom EitherOfFive class that contains one of your five different stream values (I'm assuming they are all different). See Flink's Either class for the one of two case. Each input stream would use a Map function that converts the input class type to an EitherOfFive type. There would be a getKey() method that would figure out (based on which of the five values is actually set) what key to return. And then you can have a single KeyedProcessFunction that takes as input this EitherOfFive type. If the output is always the same, then you're all set. Otherwise you'll want side outputs, one per type, that feed the five different sinks.
Flink State Across multiple Transformers
How can I access a state using the same-id across multiple transformers, for example the following stores an Order object via ValueState in OrderMapper class: env.addSource(source1()).keyBy(Order::getId).flatMap(new OrderMapper()).addSink(sink1()); Now I would like to access the same Order object via a SubOrderMapper class: env.addSource(source2()).keyBy(SubOrder::getOrderId).flatMap(new SubOrderMapper()).addSink(sink2()); Edit: Looks like it's not possible to have state maintained across multiple operators, is there a way to have one operator accept multiple inputs, lets say 5 sources?
[ "Take a look at CoProcessFunction\n\nTo realize low-level operations on two inputs, applications can use\nCoProcessFunction or KeyedCoProcessFunction. This function is bound to\ntwo different inputs and gets individual calls to processElement1(...)\nand processElement2(...) for records from the two different inputs.\n\nAlso side outputs might be useful for you. side output\nEdit:\nUnion operator my be an option.\nUnion\n", "You can create a custom EitherOfFive class that contains one of your five different stream values (I'm assuming they are all different). See Flink's Either class for the one of two case.\nEach input stream would use a Map function that converts the input class type to an EitherOfFive type.\nThere would be a getKey() method that would figure out (based on which of the five values is actually set) what key to return. And then you can have a single KeyedProcessFunction that takes as input this EitherOfFive type.\nIf the output is always the same, then you're all set. Otherwise you'll want side outputs, one per type, that feed the five different sinks.\n" ]
[ 0, 0 ]
[]
[]
[ "apache_flink", "java" ]
stackoverflow_0074642596_apache_flink_java.txt
Q: Skipping last N rows of Julia Dataframe I have a problem with removing last N rows from a Dataframe in Julia. N_SKIP = 3 df = DataFrame(:col1=>1:10,:col2=>21:30) N = nrow(df) Original example Dataframe: 10×2 DataFrame │ Row │ col1 │ col2 │ │ │ Int64 │ Int64 │ ├─────┼───────┼───────┤ │ 1 │ 1 │ 21 │ │ 2 │ 2 │ 22 │ │ 3 │ 3 │ 23 │ │ 4 │ 4 │ 24 │ │ 5 │ 5 │ 25 │ │ 6 │ 6 │ 26 │ │ 7 │ 7 │ 27 │ │ 8 │ 8 │ 28 │ │ 9 │ 9 │ 29 │ │ 10 │ 10 │ 30 │ I want to get first N - N_SKIP rows, in this example rows with id in the 1:7 range. Result I'm trying to achieve with N = 3: 7×2 DataFrame │ Row │ col1 │ col2 │ │ │ Int64 │ Int64 │ ├─────┼───────┼───────┤ │ 1 │ 1 │ 21 │ │ 2 │ 2 │ 22 │ │ 3 │ 3 │ 23 │ │ 4 │ 4 │ 24 │ │ 5 │ 5 │ 25 │ │ 6 │ 6 │ 26 │ │ 7 │ 7 │ 27 │ I could use first(df::AbstractDataFrame, n::Integer) and pass the remaining number of rows in the arguments. It works, but it doesn't seem correct. julia> N_SKIP = 3 julia> df = DataFrame(:col1=>1:10,:col2=>21:30) julia> N = nrow(df) julia> first(df,N - N_SKIP) 7×2 DataFrame │ Row │ col1 │ col2 │ │ │ Int64 │ Int64 │ ├─────┼───────┼───────┤ │ 1 │ 1 │ 21 │ │ 2 │ 2 │ 22 │ │ 3 │ 3 │ 23 │ │ 4 │ 4 │ 24 │ │ 5 │ 5 │ 25 │ │ 6 │ 6 │ 26 │ │ 7 │ 7 │ 27 │ A: There are three ways you could want to do it (depending on what you want). Create a new data frame: julia> df[1:end-3, :] 7×2 DataFrame Row │ col1 col2 │ Int64 Int64 ─────┼────────────── 1 │ 1 21 2 │ 2 22 3 │ 3 23 4 │ 4 24 5 │ 5 25 6 │ 6 26 7 │ 7 27 julia> first(df, nrow(df) - 3) 7×2 DataFrame Row │ col1 col2 │ Int64 Int64 ─────┼────────────── 1 │ 1 21 2 │ 2 22 3 │ 3 23 4 │ 4 24 5 │ 5 25 6 │ 6 26 7 │ 7 27 Create a view of a data frame: julia> first(df, nrow(df) - 3, view=true) 7×2 SubDataFrame Row │ col1 col2 │ Int64 Int64 ─────┼────────────── 1 │ 1 21 2 │ 2 22 3 │ 3 23 4 │ 4 24 5 │ 5 25 6 │ 6 26 7 │ 7 27 julia> @view df[1:end-3, :] 7×2 SubDataFrame Row │ col1 col2 │ Int64 Int64 ─────┼────────────── 1 │ 1 21 2 │ 2 22 3 │ 3 23 4 │ 4 24 5 │ 5 25 6 │ 6 26 7 │ 7 27 Update the source data frame in place (alternatively deleteat! could be used depending on what is more convenient for you): julia> keepat!(df, 1:nrow(df)-3) 7×2 DataFrame Row │ col1 col2 │ Int64 Int64 ─────┼────────────── 1 │ 1 21 2 │ 2 22 3 │ 3 23 4 │ 4 24 5 │ 5 25 6 │ 6 26 7 │ 7 27
Skipping last N rows of Julia Dataframe
I have a problem with removing last N rows from a Dataframe in Julia. N_SKIP = 3 df = DataFrame(:col1=>1:10,:col2=>21:30) N = nrow(df) Original example Dataframe: 10×2 DataFrame │ Row │ col1 │ col2 │ │ │ Int64 │ Int64 │ ├─────┼───────┼───────┤ │ 1 │ 1 │ 21 │ │ 2 │ 2 │ 22 │ │ 3 │ 3 │ 23 │ │ 4 │ 4 │ 24 │ │ 5 │ 5 │ 25 │ │ 6 │ 6 │ 26 │ │ 7 │ 7 │ 27 │ │ 8 │ 8 │ 28 │ │ 9 │ 9 │ 29 │ │ 10 │ 10 │ 30 │ I want to get first N - N_SKIP rows, in this example rows with id in the 1:7 range. Result I'm trying to achieve with N = 3: 7×2 DataFrame │ Row │ col1 │ col2 │ │ │ Int64 │ Int64 │ ├─────┼───────┼───────┤ │ 1 │ 1 │ 21 │ │ 2 │ 2 │ 22 │ │ 3 │ 3 │ 23 │ │ 4 │ 4 │ 24 │ │ 5 │ 5 │ 25 │ │ 6 │ 6 │ 26 │ │ 7 │ 7 │ 27 │ I could use first(df::AbstractDataFrame, n::Integer) and pass the remaining number of rows in the arguments. It works, but it doesn't seem correct. julia> N_SKIP = 3 julia> df = DataFrame(:col1=>1:10,:col2=>21:30) julia> N = nrow(df) julia> first(df,N - N_SKIP) 7×2 DataFrame │ Row │ col1 │ col2 │ │ │ Int64 │ Int64 │ ├─────┼───────┼───────┤ │ 1 │ 1 │ 21 │ │ 2 │ 2 │ 22 │ │ 3 │ 3 │ 23 │ │ 4 │ 4 │ 24 │ │ 5 │ 5 │ 25 │ │ 6 │ 6 │ 26 │ │ 7 │ 7 │ 27 │
[ "There are three ways you could want to do it (depending on what you want).\n\nCreate a new data frame:\n\njulia> df[1:end-3, :]\n7×2 DataFrame\n Row │ col1 col2\n │ Int64 Int64\n─────┼──────────────\n 1 │ 1 21\n 2 │ 2 22\n 3 │ 3 23\n 4 │ 4 24\n 5 │ 5 25\n 6 │ 6 26\n 7 │ 7 27\n\njulia> first(df, nrow(df) - 3)\n7×2 DataFrame\n Row │ col1 col2\n │ Int64 Int64\n─────┼──────────────\n 1 │ 1 21\n 2 │ 2 22\n 3 │ 3 23\n 4 │ 4 24\n 5 │ 5 25\n 6 │ 6 26\n 7 │ 7 27\n\n\nCreate a view of a data frame:\n\njulia> first(df, nrow(df) - 3, view=true)\n7×2 SubDataFrame\n Row │ col1 col2\n │ Int64 Int64\n─────┼──────────────\n 1 │ 1 21\n 2 │ 2 22\n 3 │ 3 23\n 4 │ 4 24\n 5 │ 5 25\n 6 │ 6 26\n 7 │ 7 27\n\njulia> @view df[1:end-3, :]\n7×2 SubDataFrame\n Row │ col1 col2\n │ Int64 Int64\n─────┼──────────────\n 1 │ 1 21\n 2 │ 2 22\n 3 │ 3 23\n 4 │ 4 24\n 5 │ 5 25\n 6 │ 6 26\n 7 │ 7 27\n\n\nUpdate the source data frame in place (alternatively deleteat! could be used depending on what is more convenient for you):\n\njulia> keepat!(df, 1:nrow(df)-3)\n7×2 DataFrame\n Row │ col1 col2\n │ Int64 Int64\n─────┼──────────────\n 1 │ 1 21\n 2 │ 2 22\n 3 │ 3 23\n 4 │ 4 24\n 5 │ 5 25\n 6 │ 6 26\n 7 │ 7 27\n\n" ]
[ 6 ]
[]
[]
[ "dataframe", "julia" ]
stackoverflow_0074657235_dataframe_julia.txt
Q: Min and max values of an array in Python I want to calculate the minimum and maximum values of array A but I want to exclude all values less than 1e-12. I present the current and expected outputs. import numpy as np A=np.array([[9.49108487e-05], [1.05634586e-19], [5.68676707e-17], [1.02453254e-06], [2.48792902e-16], [1.02453254e-06]]) Min=np.min(A) Max=np.max(A) print(Min,Max) The current output is 1.05634586e-19 9.49108487e-05 The expected output is 1.02453254e-06 9.49108487e-05 A: Slice with boolean indexing before getting the min/max: B = A[A>1e-12] Min = np.min(B) Max = np.max(B) print(Min, Max) Output: 1.02453254e-06 9.49108487e-05 B: array([9.49108487e-05, 1.02453254e-06, 1.02453254e-06]) A: You can just select the values of the array greater than 1e-12 first and obtain the min and max of that: >>> A[A > 1e-12].min() 1.02453254e-06 >>> A[A > 1e-12].max() 9.49108487e-05 A: arr = np.array([9.49108487e-05,1.05634586e-19,5.68676707e-17,1.02453254e-06,2.48792902e-16,1.02453254e-06]) mask = arr > 1e-12 Min = np.min(arr[mask]) Max = np.max(arr[mask])
Min and max values of an array in Python
I want to calculate the minimum and maximum values of array A but I want to exclude all values less than 1e-12. I present the current and expected outputs. import numpy as np A=np.array([[9.49108487e-05], [1.05634586e-19], [5.68676707e-17], [1.02453254e-06], [2.48792902e-16], [1.02453254e-06]]) Min=np.min(A) Max=np.max(A) print(Min,Max) The current output is 1.05634586e-19 9.49108487e-05 The expected output is 1.02453254e-06 9.49108487e-05
[ "Slice with boolean indexing before getting the min/max:\nB = A[A>1e-12]\nMin = np.min(B)\nMax = np.max(B)\nprint(Min, Max)\n\nOutput: 1.02453254e-06 9.49108487e-05\nB: array([9.49108487e-05, 1.02453254e-06, 1.02453254e-06])\n", "You can just select the values of the array greater than 1e-12 first and obtain the min and max of that:\n>>> A[A > 1e-12].min()\n1.02453254e-06\n>>> A[A > 1e-12].max()\n9.49108487e-05\n\n", "arr = np.array([9.49108487e-05,1.05634586e-19,5.68676707e-17,1.02453254e-06,2.48792902e-16,1.02453254e-06]) \nmask = arr > 1e-12 \nMin = np.min(arr[mask]) \nMax = np.max(arr[mask])\n\n" ]
[ 3, 2, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074657268_numpy_python.txt
Q: Read variable from file for usage in GitLab pipeline Given the following very simple .gitlab-ci.yml pipeline: --- variables: KEYCLOAK_VERSION: 20.0.1 # this should be populated from reading a file from the repo... stages: - test build: stage: test script: - echo "$KEYCLOAK_VERSION" As you might see, this simply outputs the value of KEYCLOAK_VERSION defined in the variables section. Now, the Git repository contains a env.properties file with KEYCLOAK_VERSION=20.0.1 as content. How would I read the variable from that file and use it in the GitLab pipeline? The documentation mentions import but this seems to be using YAML files. A: To read variables from a file you can use the source or . command. script: - source env.properties - echo $KEYCLOAK_VERSION Attention: One reason why you might not want to do it this way is because whatever is in env.properties will be run in your shell, such as rm -rf /, which could be very dangerous. Maybe you can take a look here for some other solutions.
Read variable from file for usage in GitLab pipeline
Given the following very simple .gitlab-ci.yml pipeline: --- variables: KEYCLOAK_VERSION: 20.0.1 # this should be populated from reading a file from the repo... stages: - test build: stage: test script: - echo "$KEYCLOAK_VERSION" As you might see, this simply outputs the value of KEYCLOAK_VERSION defined in the variables section. Now, the Git repository contains a env.properties file with KEYCLOAK_VERSION=20.0.1 as content. How would I read the variable from that file and use it in the GitLab pipeline? The documentation mentions import but this seems to be using YAML files.
[ "To read variables from a file you can use the source or . command.\nscript:\n - source env.properties\n - echo $KEYCLOAK_VERSION\n\nAttention:\nOne reason why you might not want to do it this way is because whatever is in env.properties will be run in your shell, such as rm -rf /, which could be very dangerous.\nMaybe you can take a look here for some other solutions.\n" ]
[ 1 ]
[]
[]
[ "gitlab" ]
stackoverflow_0074657300_gitlab.txt
Q: pyzk how to get the result of live capture : 1495 : 2020-02-11 11:55:00 (1, 0) Here is my sample result but then when I'm trying to split it gives me error Process terminate : 'Attendance' object has no attribute 'split' In the documentation it says print (attendance) # Attendance object How to access it? A: Convert the attendance object into a string and split it. str(attendance).split() After splitting you can access the user ID and use it where ever you want. A: found the solution i check in the github repository of pyzk and look for the attendance class and found all the object being return by the live_capture thank you :) A: It's an object of class Attendance. The split() is a method of string. So you can't directly split() an object. Dan is right, to split an object, first, you have to convert it to a string. str(obj).split() Although, you don't need to split this object to get the user id. All you have to do is, use accessor. e.g user_id = attendance_obj.user_id
pyzk how to get the result of live capture
: 1495 : 2020-02-11 11:55:00 (1, 0) Here is my sample result but then when I'm trying to split it gives me error Process terminate : 'Attendance' object has no attribute 'split' In the documentation it says print (attendance) # Attendance object How to access it?
[ "Convert the attendance object into a string and split it.\nstr(attendance).split()\n\nAfter splitting you can access the user ID and use it where ever you want.\n", "found the solution\ni check in the github repository of pyzk and look for the attendance class and found all the object being return by the live_capture thank you :)\n", "It's an object of class Attendance. The split() is a method of string. So you can't directly split() an object. Dan is right, to split an object, first, you have to convert it to a string.\nstr(obj).split()\n\nAlthough, you don't need to split this object to get the user id. All you have to do is, use accessor. e.g\nuser_id = attendance_obj.user_id\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python_3.x", "zkteco" ]
stackoverflow_0060161858_python_3.x_zkteco.txt
Q: Send MIME content to a particular user using Chilkat and Graph API access token I am trying to send MIME content to a single user using Chilkat library. For sending mail I am using access token of Graph API client credentials. But getting authentication failure error in chilkat. Below is the sample code. Calling from Main method: string mime = System.IO.File.ReadAllText(@"\\Mac\Home\Downloads\12_01.eml"); GetFreshToken(tenantID, clientID, clientSecret); SendMailUsingChilkat(tenantID,clientID, clientSecret,mime,"[email protected]"); Methods used in: bool GetFreshToken(string tenantId, string clientId, string clientSecret) { bool ret = false; Chilkat.OAuth2 oauth2 = new Chilkat.OAuth2(); bool success; oauth2.ListenPort = 3017; oauth2.AuthorizationEndpoint = "https://login.microsoftonline.com/xxxxxx-xxxx-xxxx-xxxx-xxxx0f78bbd/oauth2/v2.0/authorize"; oauth2.TokenEndpoint = "https://login.microsoftonline.com/xxxxxx-xxxx-xxxx-xxxx-xxxx0f78bbd/oauth2/v2.0/token"; oauth2.ClientId = clientId; oauth2.ClientSecret = clientSecret; oauth2.CodeChallenge = false; oauth2.Scope = "openid profile offline_access https://outlook.office365.com/SMTP.Send https://outlook.office365.com/POP.AccessAsUser.All https://outlook.office365.com/IMAP.AccessAsUser.All"; string url = oauth2.StartAuth(); if (oauth2.LastMethodSuccess != true) { MessageBox.Show(oauth2.LastErrorText); return ret; } int numMsWaited = 0; while ((numMsWaited < 30000) && (oauth2.AuthFlowState < 3)) { oauth2.SleepMs(100); numMsWaited = numMsWaited + 100; } if (oauth2.AuthFlowState < 3) { oauth2.Cancel(); MessageBox.Show("No response from the browser!"); return ret; } if (oauth2.AuthFlowState == 5) { MessageBox.Show("OAuth2 failed to complete."); MessageBox.Show(oauth2.FailureInfo); return ret; } if (oauth2.AuthFlowState == 4) { MessageBox.Show("OAuth2 authorization was denied."); MessageBox.Show(oauth2.AccessTokenResponse); return ret; } if (oauth2.AuthFlowState != 3) { MessageBox.Show("Unexpected AuthFlowState:" + Convert.ToString(oauth2.AuthFlowState)); return ret; } Chilkat.JsonObject json = new Chilkat.JsonObject(); json.Load(oauth2.AccessTokenResponse); json.EmitCompact = false; if (json.HasMember("expires_on") != true) { Chilkat.CkDateTime dtExpire = new Chilkat.CkDateTime(); dtExpire.SetFromCurrentSystemTime(); dtExpire.AddSeconds(json.IntOf("expires_in")); json.AppendString("expires_on", dtExpire.GetAsUnixTimeStr(false)); } json.Emit(); Chilkat.FileAccess fac = new Chilkat.FileAccess(); ret = fac.WriteEntireTextFile("microsoftGraph.json", json.Emit(), "utf-8", false); return ret; } bool GetOrRefreshToken(string clientId, string clientSecret, out string accessToken) { bool ret = false; accessToken = null; Chilkat.JsonObject json = new Chilkat.JsonObject(); bool success = json.LoadFile("microsoftGraph.json"); if (success != true) { return false; } Chilkat.CkDateTime dtExpire = new Chilkat.CkDateTime(); dtExpire.SetFromUnixTime(false, json.IntOf("expires_on")); if (dtExpire.ExpiresWithin(10, "minutes") != true) { Debug.WriteLine("No need to refresh, the access token won't expire within the next 10 minutes."); accessToken = json.StringOf("access_token"); return true; } // OK, we need to refresh the access token by sending a POST like this: Chilkat.OAuth2 oauth2 = new Chilkat.OAuth2(); oauth2.TokenEndpoint = "https://login.microsoftonline.com/xxxxxx-xxxx-xxxx-xxxx-xxxx0f78bbd/oauth2/v2.0/token"; oauth2.ClientId = clientId; oauth2.ClientSecret = clientSecret; oauth2.RefreshToken = json.StringOf("refresh_token"); success = oauth2.RefreshAccessToken(); if (success != true) { Debug.WriteLine(oauth2.LastErrorText); return false; } // Update the JSON with the new tokens. json.UpdateString("access_token", oauth2.AccessToken); json.UpdateString("refresh_token", oauth2.RefreshToken); json.EmitCompact = false; Debug.WriteLine(json.Emit()); if (json.HasMember("expires_on") != true) { dtExpire.SetFromCurrentSystemTime(); dtExpire.AddSeconds(json.IntOf("expires_in")); json.AppendString("expires_on", dtExpire.GetAsUnixTimeStr(false)); } Chilkat.FileAccess fac = new Chilkat.FileAccess(); ret = fac.WriteEntireTextFile("microsoftGraph.json", json.Emit(), "utf-8", false); accessToken = json.StringOf("access_token"); return ret; } async void SendMailUsingChilkat(string tenantId, string clientId, string clientSecret, string mime, string recipient) { string accessToken; GetOrRefreshToken(clientId, clientSecret, out accessToken); Chilkat.Email email = new Chilkat.Email(); email.LoadEml(mime); Chilkat.MailMan mailman = new Chilkat.MailMan(); mailman.SmtpHost = "smtp.office365.com"; mailman.SmtpPort = 587; mailman.StartTLS = true; mailman.SmtpUsername = email.From; mailman.OAuth2AccessToken = accessToken; bool success = mailman.SendMime(email.From, recipient, mime); if (success == true) { Debug.WriteLine("Mail Sent!"); return; } if (mailman.LastSmtpStatus != 535) { Debug.WriteLine(mailman.LastErrorText); return; } } I referred below links for implementation https://www.example-code.com/csharp/office365_oauth2_access_token.asp https://www.example-code.com/csharp/office365_refresh_access_token.asp https://www.example-code.com/csharp/office365_smtp_send_email.asp A: At the time I'm writing this response, Chilkat has not yet been able to get the client credentials flow working with Office365. We can get an access token, but it fails to authenticate. I believe additional magical incantations need to be performed in Azure, and I've yet to discover the secret sauce. Especially with different identity and account types: https://learn.microsoft.com/en-us/security/zero-trust/develop/identity-supported-account-types See related Microsoft docs: https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow https://learn.microsoft.com/en-us/answers/questions/1046512/smtp-for-o365-with-client-credentials-flow.html
Send MIME content to a particular user using Chilkat and Graph API access token
I am trying to send MIME content to a single user using Chilkat library. For sending mail I am using access token of Graph API client credentials. But getting authentication failure error in chilkat. Below is the sample code. Calling from Main method: string mime = System.IO.File.ReadAllText(@"\\Mac\Home\Downloads\12_01.eml"); GetFreshToken(tenantID, clientID, clientSecret); SendMailUsingChilkat(tenantID,clientID, clientSecret,mime,"[email protected]"); Methods used in: bool GetFreshToken(string tenantId, string clientId, string clientSecret) { bool ret = false; Chilkat.OAuth2 oauth2 = new Chilkat.OAuth2(); bool success; oauth2.ListenPort = 3017; oauth2.AuthorizationEndpoint = "https://login.microsoftonline.com/xxxxxx-xxxx-xxxx-xxxx-xxxx0f78bbd/oauth2/v2.0/authorize"; oauth2.TokenEndpoint = "https://login.microsoftonline.com/xxxxxx-xxxx-xxxx-xxxx-xxxx0f78bbd/oauth2/v2.0/token"; oauth2.ClientId = clientId; oauth2.ClientSecret = clientSecret; oauth2.CodeChallenge = false; oauth2.Scope = "openid profile offline_access https://outlook.office365.com/SMTP.Send https://outlook.office365.com/POP.AccessAsUser.All https://outlook.office365.com/IMAP.AccessAsUser.All"; string url = oauth2.StartAuth(); if (oauth2.LastMethodSuccess != true) { MessageBox.Show(oauth2.LastErrorText); return ret; } int numMsWaited = 0; while ((numMsWaited < 30000) && (oauth2.AuthFlowState < 3)) { oauth2.SleepMs(100); numMsWaited = numMsWaited + 100; } if (oauth2.AuthFlowState < 3) { oauth2.Cancel(); MessageBox.Show("No response from the browser!"); return ret; } if (oauth2.AuthFlowState == 5) { MessageBox.Show("OAuth2 failed to complete."); MessageBox.Show(oauth2.FailureInfo); return ret; } if (oauth2.AuthFlowState == 4) { MessageBox.Show("OAuth2 authorization was denied."); MessageBox.Show(oauth2.AccessTokenResponse); return ret; } if (oauth2.AuthFlowState != 3) { MessageBox.Show("Unexpected AuthFlowState:" + Convert.ToString(oauth2.AuthFlowState)); return ret; } Chilkat.JsonObject json = new Chilkat.JsonObject(); json.Load(oauth2.AccessTokenResponse); json.EmitCompact = false; if (json.HasMember("expires_on") != true) { Chilkat.CkDateTime dtExpire = new Chilkat.CkDateTime(); dtExpire.SetFromCurrentSystemTime(); dtExpire.AddSeconds(json.IntOf("expires_in")); json.AppendString("expires_on", dtExpire.GetAsUnixTimeStr(false)); } json.Emit(); Chilkat.FileAccess fac = new Chilkat.FileAccess(); ret = fac.WriteEntireTextFile("microsoftGraph.json", json.Emit(), "utf-8", false); return ret; } bool GetOrRefreshToken(string clientId, string clientSecret, out string accessToken) { bool ret = false; accessToken = null; Chilkat.JsonObject json = new Chilkat.JsonObject(); bool success = json.LoadFile("microsoftGraph.json"); if (success != true) { return false; } Chilkat.CkDateTime dtExpire = new Chilkat.CkDateTime(); dtExpire.SetFromUnixTime(false, json.IntOf("expires_on")); if (dtExpire.ExpiresWithin(10, "minutes") != true) { Debug.WriteLine("No need to refresh, the access token won't expire within the next 10 minutes."); accessToken = json.StringOf("access_token"); return true; } // OK, we need to refresh the access token by sending a POST like this: Chilkat.OAuth2 oauth2 = new Chilkat.OAuth2(); oauth2.TokenEndpoint = "https://login.microsoftonline.com/xxxxxx-xxxx-xxxx-xxxx-xxxx0f78bbd/oauth2/v2.0/token"; oauth2.ClientId = clientId; oauth2.ClientSecret = clientSecret; oauth2.RefreshToken = json.StringOf("refresh_token"); success = oauth2.RefreshAccessToken(); if (success != true) { Debug.WriteLine(oauth2.LastErrorText); return false; } // Update the JSON with the new tokens. json.UpdateString("access_token", oauth2.AccessToken); json.UpdateString("refresh_token", oauth2.RefreshToken); json.EmitCompact = false; Debug.WriteLine(json.Emit()); if (json.HasMember("expires_on") != true) { dtExpire.SetFromCurrentSystemTime(); dtExpire.AddSeconds(json.IntOf("expires_in")); json.AppendString("expires_on", dtExpire.GetAsUnixTimeStr(false)); } Chilkat.FileAccess fac = new Chilkat.FileAccess(); ret = fac.WriteEntireTextFile("microsoftGraph.json", json.Emit(), "utf-8", false); accessToken = json.StringOf("access_token"); return ret; } async void SendMailUsingChilkat(string tenantId, string clientId, string clientSecret, string mime, string recipient) { string accessToken; GetOrRefreshToken(clientId, clientSecret, out accessToken); Chilkat.Email email = new Chilkat.Email(); email.LoadEml(mime); Chilkat.MailMan mailman = new Chilkat.MailMan(); mailman.SmtpHost = "smtp.office365.com"; mailman.SmtpPort = 587; mailman.StartTLS = true; mailman.SmtpUsername = email.From; mailman.OAuth2AccessToken = accessToken; bool success = mailman.SendMime(email.From, recipient, mime); if (success == true) { Debug.WriteLine("Mail Sent!"); return; } if (mailman.LastSmtpStatus != 535) { Debug.WriteLine(mailman.LastErrorText); return; } } I referred below links for implementation https://www.example-code.com/csharp/office365_oauth2_access_token.asp https://www.example-code.com/csharp/office365_refresh_access_token.asp https://www.example-code.com/csharp/office365_smtp_send_email.asp
[ "At the time I'm writing this response, Chilkat has not yet been able to get the client credentials flow working with Office365. We can get an access token, but it fails to authenticate. I believe additional magical incantations need to be performed in Azure, and I've yet to discover the secret sauce.\nEspecially with different identity and account types: https://learn.microsoft.com/en-us/security/zero-trust/develop/identity-supported-account-types\nSee related Microsoft docs:\nhttps://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow\nhttps://learn.microsoft.com/en-us/answers/questions/1046512/smtp-for-o365-with-client-credentials-flow.html\n" ]
[ 0 ]
[]
[]
[ ".net", "c#", "chilkat", "chilkat_email", "microsoft_graph_api" ]
stackoverflow_0074637045_.net_c#_chilkat_chilkat_email_microsoft_graph_api.txt
Q: Convert nvarchar into datetime2 In SQL Server Management Studio, I have a nvarchar of the form 20221202 which I would like to convert into a datetime2 type. I tried using: CONVERT(datetime2, string) and CONVERT(datetime2, CONVERT(date, string)) but both attempts did not work. Do you have an idea?
Convert nvarchar into datetime2
In SQL Server Management Studio, I have a nvarchar of the form 20221202 which I would like to convert into a datetime2 type. I tried using: CONVERT(datetime2, string) and CONVERT(datetime2, CONVERT(date, string)) but both attempts did not work. Do you have an idea?
[]
[]
[ "Did you test this somehow\n Select convert(date, getdate()) as [Date], \n convert(varchar(8), convert(time, getdate())) as [Time]\n\n" ]
[ -1 ]
[ "sql", "sql_server" ]
stackoverflow_0074657475_sql_sql_server.txt
Q: ntpq utility on Alpine 3.9? I want to use Telegraf's ntpq's input, to detect timedrift on some remote machines I run. This input requires access to the ntpq binary, which is part of the ntp package. However, my telegraf container is based on Alpine 3.9, and I simply cant find any way to install ntp on Alpine. One thing I did find was this: ntpsec is a competing project to ntp and has its own implementation of ntpq. This might actually work, but this package is only supported on the current Edge of Alpine (ie, not in Alpine <= 3.10), so that's no good. Any helpful ideas on how to get ntpq on Alpine 3.9 (except for migrating my telegarf away from Alpine)? A: ntpsec is available since Alpine 3.10 using the community-repository: https://pkgs.alpinelinux.org/contents?file=ntpq&path=&name=&branch=v3.10 So you can either upgrade your container to Alpine 3.10 or try to backport (and build) ntpsec on alpine 3.9 A: This can be solved by compiling the ntp package in a separate docker build stage and copying out the ntpq executable like this: FROM alpine:3.17 AS ntp_builder RUN apk add alpine-sdk linux-headers RUN wget -q http://www.eecis.udel.edu/~ntp/ntp_spool/ntp4/ntp-4.2/ntp-4.2.8p15.tar.gz RUN tar xf ntp-4.2.8p15.tar.gz RUN cd ntp-4.2.8p15 && ./configure && make FROM alpine:3.17 RUN echo 'hosts: files dns' >> /etc/nsswitch.conf RUN apk add --no-cache iputils ca-certificates net-snmp-tools \ procps lm_sensors \ && update-ca-certificates COPY --from=ntp_builder /ntp-4.2.8p15/ntpq /usr/bin COPY telegraf /usr/bin/ COPY telegraf.conf /etc/telegraf/telegraf.conf EXPOSE 8125/udp 8092/udp 8094 COPY docker-entrypoint.sh /entrypoint.sh ENTRYPOINT ["/entrypoint.sh"] CMD ["telegraf"] Note, you need allow the docker container to query the ntp daemon by adding a line like restrict 172.18.0.0 mask 255.255.0.0 to ntp.conf on the host.
ntpq utility on Alpine 3.9?
I want to use Telegraf's ntpq's input, to detect timedrift on some remote machines I run. This input requires access to the ntpq binary, which is part of the ntp package. However, my telegraf container is based on Alpine 3.9, and I simply cant find any way to install ntp on Alpine. One thing I did find was this: ntpsec is a competing project to ntp and has its own implementation of ntpq. This might actually work, but this package is only supported on the current Edge of Alpine (ie, not in Alpine <= 3.10), so that's no good. Any helpful ideas on how to get ntpq on Alpine 3.9 (except for migrating my telegarf away from Alpine)?
[ "ntpsec is available since Alpine 3.10 using the community-repository:\nhttps://pkgs.alpinelinux.org/contents?file=ntpq&path=&name=&branch=v3.10\nSo you can either upgrade your container to Alpine 3.10 or try to backport (and build) ntpsec on alpine 3.9\n", "This can be solved by compiling the ntp package in a separate docker build stage and copying out the ntpq executable like this:\nFROM alpine:3.17 AS ntp_builder\nRUN apk add alpine-sdk linux-headers\nRUN wget -q http://www.eecis.udel.edu/~ntp/ntp_spool/ntp4/ntp-4.2/ntp-4.2.8p15.tar.gz\nRUN tar xf ntp-4.2.8p15.tar.gz\nRUN cd ntp-4.2.8p15 && ./configure && make\n\nFROM alpine:3.17\nRUN echo 'hosts: files dns' >> /etc/nsswitch.conf\nRUN apk add --no-cache iputils ca-certificates net-snmp-tools \\\n procps lm_sensors \\\n && update-ca-certificates\nCOPY --from=ntp_builder /ntp-4.2.8p15/ntpq /usr/bin\nCOPY telegraf /usr/bin/\nCOPY telegraf.conf /etc/telegraf/telegraf.conf\n\nEXPOSE 8125/udp 8092/udp 8094\n\nCOPY docker-entrypoint.sh /entrypoint.sh\nENTRYPOINT [\"/entrypoint.sh\"]\nCMD [\"telegraf\"]\n\nNote, you need allow the docker container to query the ntp daemon by adding a line like restrict 172.18.0.0 mask 255.255.0.0 to ntp.conf on the host.\n" ]
[ 1, 0 ]
[]
[]
[ "alpine_linux", "ntp", "telegraf" ]
stackoverflow_0056748348_alpine_linux_ntp_telegraf.txt
Q: Filtering nested objects and arrays I have a Vue project and need to search an array with nested objects for a specific object and then return it. The user has a text input field for searching and the search should target "title". The data looks like this: const data = [{ "catId": "1", "catTitle": "a", "exampleArray": [{ "id": "111", "title": "aaa" }, { "id": "222", "title": "bbb" }, { "id": "333", "title": "ccc" }] }, { "catId": "2", "catTitle": "b", "exampleArray": [{ "id": "444", "title": "ddd" }, { "id": "555", "title": "eee" }] }, { "catId": "3", "catTitle": "c", "exampleArray": [] }, { "catId": "4", "catTitle": "d", "exampleArray": [{ "id": "555", "title": "fff" }] }] I have tried: return data.filter(item => { return item.catArray.filter(category=> { return category.title.toLowerCase().includes(this.search.toLowerCase()) }) }) e.g. if user input is "aaa", should return: [{ "catId": "1", "catTitle": "a", "exampleArray": [{ "id": "111", "title": "aaa" }] }] The search should also return all matching results. A: You were almost there! const data = [{ "catId": "1", "catTitle": "a", "exampleArray": [{ "id": "111", "title": "aaa" }, { "id": "222", "title": "bbb" }, { "id": "333", "title": "ccc" }] }, { "catId": "2", "catTitle": "b", "exampleArray": [{ "id": "444", "title": "ddd" }, { "id": "555", "title": "eee" }] }, { "catId": "3", "catTitle": "c", "exampleArray": [] }, { "catId": "4", "catTitle": "d", "exampleArray": [{ "id": "555", "title": "fff" }] }]; const search = "fff"; console.log(data.filter(item => { return item.exampleArray.some(category=> { return category.title.toLowerCase().includes(search.toLowerCase()) }) })) All I did is add a check for the length after your filter. Because filter expects true or false to know if the element should be included. While it returns an array with results, an empty array is a truthy value. You need to check if the array has elements to filter correctly. EDIT: I changed to using FIND instead of FILTER. Find will return falsy value if nothing is found, while returning a truthy value (the found element) if something is found. This would have the benefit of not looping through the whole array, stopping as soon as we found something. EDIT AGAIN: some is the function you want, we learn every day! It does the same as find but instead returns true directly! A: To search an array of objects for a specific object and return it, you can use the Array.prototype.find() method. The find() method takes a callback function that returns a boolean value indicating whether the object should be included in the result. Here is an example of how you could use the find() method to search the data array for a specific object and return it: const data = [ { catId: "1", catTitle: "a", exampleArray: [ { id: "111", title: "aaa", }, { id: "222", title: "bbb", }, { id: "333", title: "ccc", }, ], }, // ... ]; // The search term that the user entered const searchTerm = "aaa"; // Use the find() method to search the data array for the object that has // a title property that includes the search term const result = data.find((item) => { return item.exampleArray.some((item) => { return item.title.toLowerCase().includes(searchTerm.toLowerCase()); }); }); console.log(result); // The result will be: // { // "catId": "1", // "catTitle": "a", // "exampleArray": [{ // "id": "111", // "title": "aaa" // }] // } In this code, the find() method is used to search the data array for the object that has a title property that includes the search term (which is converted to lowercase before being compared to the title property to make the search case-insensitive). The Array.prototype.some() method is used inside the callback function passed to find() to check if any of the objects in the exampleArray property have a title property that includes the search term. If the search term is found in one of the title properties, the find() method will return the object that contains it. Otherwise, it will return undefined. A: Here is one way you can accomplish this: return data.Filter(item => { // filter the `exampleArray` for each `item` by checking if the `title` property of each object in the array contains the search string const matchingItems = item.exampleArray.filter(example => example.title.toLowerCase().includes(this.search.toLowerCase())); // return `true` for the current `item` if there are any matching items in the `exampleArray` return matchingItems.length > 0; }); This code will filter the data array to only include objects where at least one object in the exampleArray has a title property that contains the search string. It will also return only the matching objects in the exampleArray for each item. If the search string is "aaa", this will return the following array: [ { "catId":"1", "catTitle":"a", "exampleArray":[ { "id":"111", "title":"aaa" } ] } ] A: You can achieve this requirement with the help of Array.filter() method along with String.includes() Live Demo : const data = [{ "catId": "1", "catTitle": "a", "exampleArray": [{ "id": "111", "title": "aaa" }, { "id": "222", "title": "bbb" }, { "id": "333", "title": "ccc" }] }, { "catId": "2", "catTitle": "b", "exampleArray": [{ "id": "444", "title": "ddd" }, { "id": "555", "title": "eee" }] }, { "catId": "3", "catTitle": "c", "exampleArray": [] }, { "catId": "4", "catTitle": "d", "exampleArray": [{ "id": "555", "title": "fff" }] }]; const searchWord = 'aaa'; const res = data.filter(obj => { obj.exampleArray = obj.exampleArray.filter(({ title }) => title.includes(searchWord)) return obj.exampleArray.length; }); console.log(res);
Filtering nested objects and arrays
I have a Vue project and need to search an array with nested objects for a specific object and then return it. The user has a text input field for searching and the search should target "title". The data looks like this: const data = [{ "catId": "1", "catTitle": "a", "exampleArray": [{ "id": "111", "title": "aaa" }, { "id": "222", "title": "bbb" }, { "id": "333", "title": "ccc" }] }, { "catId": "2", "catTitle": "b", "exampleArray": [{ "id": "444", "title": "ddd" }, { "id": "555", "title": "eee" }] }, { "catId": "3", "catTitle": "c", "exampleArray": [] }, { "catId": "4", "catTitle": "d", "exampleArray": [{ "id": "555", "title": "fff" }] }] I have tried: return data.filter(item => { return item.catArray.filter(category=> { return category.title.toLowerCase().includes(this.search.toLowerCase()) }) }) e.g. if user input is "aaa", should return: [{ "catId": "1", "catTitle": "a", "exampleArray": [{ "id": "111", "title": "aaa" }] }] The search should also return all matching results.
[ "You were almost there!\n\n\nconst data = \n[{\n \"catId\": \"1\",\n \"catTitle\": \"a\",\n \"exampleArray\": [{\n \"id\": \"111\",\n \"title\": \"aaa\"\n }, {\n \"id\": \"222\",\n \"title\": \"bbb\"\n }, {\n \"id\": \"333\",\n \"title\": \"ccc\"\n }]\n}, {\n \"catId\": \"2\",\n \"catTitle\": \"b\",\n \"exampleArray\": [{\n \"id\": \"444\",\n \"title\": \"ddd\"\n }, {\n \"id\": \"555\",\n \"title\": \"eee\"\n }]\n}, {\n \"catId\": \"3\",\n \"catTitle\": \"c\",\n \"exampleArray\": []\n}, {\n \"catId\": \"4\",\n \"catTitle\": \"d\",\n \"exampleArray\": [{\n \"id\": \"555\",\n \"title\": \"fff\"\n }]\n}];\nconst search = \"fff\";\n\nconsole.log(data.filter(item => {\n return item.exampleArray.some(category=> {\n return category.title.toLowerCase().includes(search.toLowerCase())\n })\n }))\n\n\n\nAll I did is add a check for the length after your filter. Because filter expects true or false to know if the element should be included. While it returns an array with results, an empty array is a truthy value. You need to check if the array has elements to filter correctly.\nEDIT: I changed to using FIND instead of FILTER. Find will return falsy value if nothing is found, while returning a truthy value (the found element) if something is found. This would have the benefit of not looping through the whole array, stopping as soon as we found something.\nEDIT AGAIN: some is the function you want, we learn every day! It does the same as find but instead returns true directly!\n", "To search an array of objects for a specific object and return it, you can use the Array.prototype.find() method. The find() method takes a callback function that returns a boolean value indicating whether the object should be included in the result.\nHere is an example of how you could use the find() method to search the data array for a specific object and return it:\n\n\nconst data = [\n {\n catId: \"1\",\n catTitle: \"a\",\n exampleArray: [\n {\n id: \"111\",\n title: \"aaa\",\n },\n {\n id: \"222\",\n title: \"bbb\",\n },\n {\n id: \"333\",\n title: \"ccc\",\n },\n ],\n },\n // ...\n];\n\n// The search term that the user entered\nconst searchTerm = \"aaa\";\n\n// Use the find() method to search the data array for the object that has\n// a title property that includes the search term\nconst result = data.find((item) => {\n return item.exampleArray.some((item) => {\n return item.title.toLowerCase().includes(searchTerm.toLowerCase());\n });\n});\nconsole.log(result);\n// The result will be:\n// {\n// \"catId\": \"1\",\n// \"catTitle\": \"a\",\n// \"exampleArray\": [{\n// \"id\": \"111\",\n// \"title\": \"aaa\"\n// }]\n// }\n\n\n\nIn this code, the find() method is used to search the data array for the object that has a title property that includes the search term (which is converted to lowercase before being compared to the title property to make the search case-insensitive).\nThe Array.prototype.some() method is used inside the callback function passed to find() to check if any of the objects in the exampleArray property have a title property that includes the search term.\nIf the search term is found in one of the title properties, the find() method will return the object that contains it. Otherwise, it will return undefined.\n", "Here is one way you can accomplish this:\nreturn data.Filter(item => {\n // filter the `exampleArray` for each `item` by checking if the `title` property of each object in the array contains the search string\n const matchingItems = item.exampleArray.filter(example => example.title.toLowerCase().includes(this.search.toLowerCase()));\n // return `true` for the current `item` if there are any matching items in the `exampleArray`\n return matchingItems.length > 0;\n});\n\nThis code will filter the data array to only include objects where at least one object in the exampleArray has a title property that contains the search string. It will also return only the matching objects in the exampleArray for each item.\nIf the search string is \"aaa\", this will return the following array:\n[\n {\n \"catId\":\"1\",\n \"catTitle\":\"a\",\n \"exampleArray\":[\n {\n \"id\":\"111\",\n \"title\":\"aaa\"\n }\n ]\n }\n]\n\n", "You can achieve this requirement with the help of Array.filter() method along with String.includes()\nLive Demo :\n\n\nconst data = [{\n \"catId\": \"1\",\n \"catTitle\": \"a\",\n \"exampleArray\": [{\n \"id\": \"111\",\n \"title\": \"aaa\"\n }, {\n \"id\": \"222\",\n \"title\": \"bbb\"\n }, {\n \"id\": \"333\",\n \"title\": \"ccc\"\n }]\n}, {\n \"catId\": \"2\",\n \"catTitle\": \"b\",\n \"exampleArray\": [{\n \"id\": \"444\",\n \"title\": \"ddd\"\n }, {\n \"id\": \"555\",\n \"title\": \"eee\"\n }]\n}, {\n \"catId\": \"3\",\n \"catTitle\": \"c\",\n \"exampleArray\": []\n}, {\n \"catId\": \"4\",\n \"catTitle\": \"d\",\n \"exampleArray\": [{\n \"id\": \"555\",\n \"title\": \"fff\"\n }]\n}];\n\nconst searchWord = 'aaa';\n\nconst res = data.filter(obj => {\n obj.exampleArray = obj.exampleArray.filter(({ title }) => title.includes(searchWord))\n return obj.exampleArray.length;\n});\n\nconsole.log(res);\n\n\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "arrays", "javascript", "vue.js" ]
stackoverflow_0074655906_arrays_javascript_vue.js.txt
Q: Parse HTML document with Beautiful Soup I'm pretty new parsing HTML documents and I'm stuck in this problem. Giving an HTML document made like this: <h3>File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMMainThread.h</h3> <table class="metricstable" width="100%"> <h4>Function: ::OMMainThread::destroyThread()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">1</td><td align="right">1</td><td align="right">0</td><td align="right">0</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">1</td></tr> </table> <h3>File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h</h3> <table class="metricstable" width="100%"> <h4>Function: ::OMNullValue<p{c::Ping}>::get()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">1</td><td align="right">1</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">2</td></tr> </table> <h4>Function: ::OMNullValue<p{c::Ping}>::initNullBlock()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">0</td><td align="right">2</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">2</td><td align="right">5</td></tr> </table> <h4>Function: ::OMNullValue<p{c::Pong}>::get()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">1</td><td align="right">1</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">2</td></tr> </table> <h4>Function: ::OMNullValue<p{c::Pong}>::initNullBlock()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">0</td><td align="right">2</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">2</td><td align="right">5</td></tr> </table> <h3>File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMStaticArray.h</h3> <table class="metricstable" width="100%"> <h4>Function: ::OMStaticArray<p{c::Ping}>::@constructor(,ni)</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">4</td><td align="right">2</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">1</td><td align="right">2</td><td align="right">2</td></tr> </table> what I need is to create a data structure made like this: <Filename, function (related to that file), STCYC value of that function> I tried iterating like this: for files_and_functions in soup.find_all(['h3','h4','table']): for elem in files_and_functions: valore = elem.text and asking for each elem if it's a function, a file or a STCYC value, but I can't get out of it. Is there anyone who can obtain these information from this terrible HTML? Thank you very much! A: you can try using this from BeautifulSoup import BeautifulSoup except ImportError: from bs4 import BeautifulSoup html = #the HTML code you've written above parsed_html = BeautifulSoup(html) print(parsed_html.body.find('div', attrs={'class':'container'}).text) A: If html_doc contains the HTML snippet from your question you can do: soup = BeautifulSoup(html_doc, "html.parser") for t in soup.select("table.metricstable:not(:has(table))"): k = [td.text for td in t.tr.find_all("td")] v = [td.text for td in t.tr.find_next("tr").find_all("td")] d = dict(zip(k, v)) filename = t.find_previous("h3").text function = t.find_previous("h4").text styc = d["v(G) (STCYC)"] print("{:<50} {:<10} {}".format(function, styc, filename)) Prints: Function: ::OMMainThread::destroyThread() 1 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMMainThread.h Function: ::OMNullValue::get() 1 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h Function: ::OMNullValue::initNullBlock() 2 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h Function: ::OMNullValue::get() 1 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h Function: ::OMNullValue::initNullBlock() 2 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h Function: ::OMStaticArray::@constructor(,ni) 2 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMStaticArray.h
Parse HTML document with Beautiful Soup
I'm pretty new parsing HTML documents and I'm stuck in this problem. Giving an HTML document made like this: <h3>File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMMainThread.h</h3> <table class="metricstable" width="100%"> <h4>Function: ::OMMainThread::destroyThread()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">1</td><td align="right">1</td><td align="right">0</td><td align="right">0</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">1</td></tr> </table> <h3>File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h</h3> <table class="metricstable" width="100%"> <h4>Function: ::OMNullValue<p{c::Ping}>::get()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">1</td><td align="right">1</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">2</td></tr> </table> <h4>Function: ::OMNullValue<p{c::Ping}>::initNullBlock()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">0</td><td align="right">2</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">2</td><td align="right">5</td></tr> </table> <h4>Function: ::OMNullValue<p{c::Pong}>::get()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">1</td><td align="right">1</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">2</td></tr> </table> <h4>Function: ::OMNullValue<p{c::Pong}>::initNullBlock()</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">0</td><td align="right">2</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">0</td><td align="right">2</td><td align="right">5</td></tr> </table> <h3>File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMStaticArray.h</h3> <table class="metricstable" width="100%"> <h4>Function: ::OMStaticArray<p{c::Ping}>::@constructor(,ni)</h4> <table class="metricstable" width="100%"> <tr><td class="lightheader" align="left">Metric</td><td class="lightheader" align="right">CALLS (STCAL)</td><td class="lightheader" align="right">v(G) (STCYC)</td><td class="lightheader" align="right">GOTO (STGTO)</td><td class="lightheader" align="right">RETURN (STM19)</td><td class="lightheader" align="right">LEVEL (STMIF)</td><td class="lightheader" align="right">PARAM (STPAR)</td><td class="lightheader" align="right">PATH (STPTH)</td><td class="lightheader" align="right">STMT (STST3)</td></tr> <tr><td class="lightheader" align="left">Values</td><td align="right">4</td><td align="right">2</td><td align="right">0</td><td align="right">0</td><td align="right">1</td><td align="right">1</td><td align="right">2</td><td align="right">2</td></tr> </table> what I need is to create a data structure made like this: <Filename, function (related to that file), STCYC value of that function> I tried iterating like this: for files_and_functions in soup.find_all(['h3','h4','table']): for elem in files_and_functions: valore = elem.text and asking for each elem if it's a function, a file or a STCYC value, but I can't get out of it. Is there anyone who can obtain these information from this terrible HTML? Thank you very much!
[ "you can try using this\n from BeautifulSoup import BeautifulSoup\nexcept ImportError:\n from bs4 import BeautifulSoup\nhtml = #the HTML code you've written above\nparsed_html = BeautifulSoup(html)\nprint(parsed_html.body.find('div', attrs={'class':'container'}).text)\n\n", "If html_doc contains the HTML snippet from your question you can do:\nsoup = BeautifulSoup(html_doc, \"html.parser\")\n\nfor t in soup.select(\"table.metricstable:not(:has(table))\"):\n k = [td.text for td in t.tr.find_all(\"td\")]\n v = [td.text for td in t.tr.find_next(\"tr\").find_all(\"td\")]\n\n d = dict(zip(k, v))\n\n filename = t.find_previous(\"h3\").text\n function = t.find_previous(\"h4\").text\n styc = d[\"v(G) (STCYC)\"]\n\n print(\"{:<50} {:<10} {}\".format(function, styc, filename))\n\nPrints:\nFunction: ::OMMainThread::destroyThread() 1 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMMainThread.h\nFunction: ::OMNullValue::get() 1 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h\nFunction: ::OMNullValue::initNullBlock() 2 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h\nFunction: ::OMNullValue::get() 1 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h\nFunction: ::OMNullValue::initNullBlock() 2 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMNullValue.h\nFunction: ::OMStaticArray::@constructor(,ni) 2 File: /home/finxadm/XMW.SET.OXF.CPP/LangCpp/oxf/OMStaticArray.h\n\n" ]
[ 0, 0 ]
[]
[]
[ "beautifulsoup", "html", "html_parsing", "parsing", "python_3.x" ]
stackoverflow_0074657469_beautifulsoup_html_html_parsing_parsing_python_3.x.txt
Q: Programmatically configure proxy settings in iOS with the Network library How can I set the proxy settings for a connection established with Network (and not using URLSession) ? As described in this answer, one can do so with URLSession by updating the configuration: configuration.connectionProxyDictionary = [ kCFNetworkProxiesHTTPEnable as String: 1, kCFNetworkProxiesHTTPProxy as String: ip, kCFNetworkProxiesHTTPPort as String: port, "HTTPSEnable": 1, "HTTPSProxy": ip, "HTTPSPort": port, ] I would like to do something similar using the Network library. I am currently creating my connection as: NWConnection(host: host, port: port, using: .init()) but I don't know how to configure it to use a Proxy. A: To set the proxy settings for a connection established with the Network library, you can use the following code: let proxySettings = [ "HTTPEnable": 1, "HTTPProxy": ip, "HTTPPort": port, "HTTPSEnable": 1, "HTTPSProxy": ip, "HTTPSPort": port, ] let configuration = URLSessionConfiguration.default configuration.connectionProxyDictionary = proxySettings let client = Network.Client(configuration: configuration) This code will create a URLSessionConfiguration with the specified proxy settings, and then pass that configuration to the Network.Client instance. This will ensure that all connections established with that client will use the specified proxy settings.
Programmatically configure proxy settings in iOS with the Network library
How can I set the proxy settings for a connection established with Network (and not using URLSession) ? As described in this answer, one can do so with URLSession by updating the configuration: configuration.connectionProxyDictionary = [ kCFNetworkProxiesHTTPEnable as String: 1, kCFNetworkProxiesHTTPProxy as String: ip, kCFNetworkProxiesHTTPPort as String: port, "HTTPSEnable": 1, "HTTPSProxy": ip, "HTTPSPort": port, ] I would like to do something similar using the Network library. I am currently creating my connection as: NWConnection(host: host, port: port, using: .init()) but I don't know how to configure it to use a Proxy.
[ "To set the proxy settings for a connection established with the Network library, you can use the following code:\n let proxySettings = [\n \"HTTPEnable\": 1,\n \"HTTPProxy\": ip,\n \"HTTPPort\": port,\n \"HTTPSEnable\": 1,\n \"HTTPSProxy\": ip,\n \"HTTPSPort\": port,\n]\n\nlet configuration = URLSessionConfiguration.default\nconfiguration.connectionProxyDictionary = proxySettings\n\nlet client = Network.Client(configuration: configuration)\n\nThis code will create a URLSessionConfiguration with the specified proxy settings, and then pass that configuration to the Network.Client instance. This will ensure that all connections established with that client will use the specified proxy settings.\n" ]
[ 0 ]
[]
[]
[ "ios", "networking", "proxy", "swift" ]
stackoverflow_0074580614_ios_networking_proxy_swift.txt
Q: Calling HTTP endpoint using .CRT and .KEY files in ABAP? I'm trying to consume an endpoint from ABAP, by instantiate a if_http_client from cl_http_client=>create_by_url. That process works fine when I don't need to use a signed certificate. Usually I just include the certificate using the STRUST transaction. But for this specific case I have two certificate files: .crt and the .key. I'm able fetch the endpoint from Postman, because I can insert those files in Settings -> Certificates: So, how can I have it working from ABAP? How to insert those files in my http request? Should I pass them from ABAP code, or config it in STRUST or some other transation? A: EDIT: Reworked answer to better address problem as the more details arise. NOTE for readers: This is ABAP as HTTP Client (Not server) with SSL. This is also a non typical problem. Here the SAP system has to connect to another service using a specific Client Certificate to establish an SSL connection. Something that would normally be managed at network level. When loading The Certificate it must be loaded into STRUST in the client PSE area. The previous Idea(prior to edit/rework) sending the the Certificate as a header is explained as Option 3. OPTIONS : 1) SSL Handshake in ABAP . Trying to manage SSL handshake in ABAP is very likely not possible. SSL Handshake is managed by sapcryptolib. 2) Import the Client Certificate in STRUST into the Standard Client PSE. See details below 3) Use xxxx.cer as string and add as Http header (last resort if, option 2 doesnt Work) ============================================================== 2) Option 2 Details (BEST WAY) Import your certificate into Strust, in SSL client Standard area. Here is an example on standard sap docu of an actual example case. It is Dutsch Payroll interface. Using Private key certificate. *.p12 or *.pfx file . Private Key certificate https://help.sap.com/docs/ERP_HCM_SPV/491c29ac9232469bb257a2ba14ac290c/999ad0ce8bd24945b547584e776e9a4e.html Since this type of Cert cant be directly imported into SAP it explains how you can use sapgenpse at operating system level to convert the p12 into a pse file. Strust does not support import of p12 files. Now the ABAP call uses the client identity created in this step. cl_http_client=>create_by_url( EXPORTING url = 'url' ssl_id = 'CL_ID' "Ident created in step above IMPORTING client = lo_client ). Or perhaps easier to work with. Use Sm59 to create and external http addr and select this Newly created identity. Then call with http client created via destination. CALL METHOD cl_http_client=>create_by_destination EXPORTING destination = lv_destination "the new sm59 destination IMPORTING client = lo_http_client. OPTION 3 Details: (Not ideal, assume called service supports it.) if and only if, the called service support Certificates as Header Note you xxx.cer is the equivalent to an identity key. manage the string carefully. DATA: lo_client TYPE REF TO if_http_client. cl_http_client=>create_by_url( EXPORTING url = 'url' ssl_id = 'ANONYM' "Start SSL handshake as Anonymous SSL IMPORTING client = lo_client ). "and pass the actual identify as HTTP header, " Many service support this approach. But they solutions are always " specific to that service. " Example is the microsoft translation service. " the expect a user subscription key as a header. 'https://api-eur.cognitive.microsofttranslator.com/translate?api-version=3.0' lo_client->request->set_header_field( EXPORTING name = 'Client-Cert' "Check HTTP header name with called Service docu value = '<cert> in string format' ). "lo_client->send( .. ) "lo_client->receive( .. ) A: Use KeyStore Explorer tool to create single pfx file from your client key and certificate. Also you can put chain of the client certificate with this tool. Use sapgenpse on your local system and create a pse file from pfx file with below command: sapgenpse import_p12 -p c:\client.pse c:\client.pfx Go to STRUST, create your own certificate store at Enviroment->SSL Client identities. I prefer this for not mixing all of them. Then return to STRUST and chose PSE->Import and select your custom pse file. Then click PSE->Save as and select your custom identity. Add site SSL certificate to your new identiy. You can try new SSL Client configuration at SM59 with selecting your new SSL Client identity. Example ABAP code below. REPORT ZMKY_SSL_CLIENT. DATA: lo_client TYPE REF TO if_http_client, lv_code TYPE i, lv_REASON type string. cl_http_client=>create_by_url( EXPORTING url = 'https://mysslclienthost.com' ssl_id = 'MYSSLC' "Your SSL Client identity IMPORTING client = lo_client ). lo_client->SEND( ). lo_client->RECEIVE( ). lo_client->RESPONSE->GET_STATUS( IMPORTING CODE = lv_code REASON = lv_reason ). WRITE: lv_code, lv_reason.
Calling HTTP endpoint using .CRT and .KEY files in ABAP?
I'm trying to consume an endpoint from ABAP, by instantiate a if_http_client from cl_http_client=>create_by_url. That process works fine when I don't need to use a signed certificate. Usually I just include the certificate using the STRUST transaction. But for this specific case I have two certificate files: .crt and the .key. I'm able fetch the endpoint from Postman, because I can insert those files in Settings -> Certificates: So, how can I have it working from ABAP? How to insert those files in my http request? Should I pass them from ABAP code, or config it in STRUST or some other transation?
[ "EDIT: Reworked answer to better address problem as the more details arise.\nNOTE for readers: This is ABAP as HTTP Client (Not server) with SSL.\nThis is also a non typical problem. Here the SAP system has to connect to another service using a specific Client Certificate to establish an SSL connection. Something that would normally be managed at network level.\nWhen loading The Certificate it must be loaded into STRUST in the client PSE area.\nThe previous Idea(prior to edit/rework) sending the the Certificate as a header is explained as Option 3.\nOPTIONS :\n1) SSL Handshake in ABAP .\nTrying to manage SSL handshake in ABAP is very likely not possible.\nSSL Handshake is managed by sapcryptolib.\n2) Import the Client Certificate in STRUST\ninto the Standard Client PSE. See details below\n3) Use xxxx.cer as string and add as Http header\n(last resort if, option 2 doesnt Work)\n==============================================================\n2) Option 2 Details (BEST WAY)\nImport your certificate into Strust, in SSL client Standard area.\nHere is an example on standard sap docu of an actual example case. It is Dutsch Payroll interface. Using Private key certificate.\n*.p12 or *.pfx file . Private Key certificate\nhttps://help.sap.com/docs/ERP_HCM_SPV/491c29ac9232469bb257a2ba14ac290c/999ad0ce8bd24945b547584e776e9a4e.html\nSince this type of Cert cant be directly imported into SAP it explains how you can use sapgenpse at operating system level to convert the p12 into a pse file. Strust does not support import of p12 files.\nNow the ABAP call uses the client identity created in this step.\n cl_http_client=>create_by_url(\n EXPORTING\n url = 'url' \n ssl_id = 'CL_ID' \"Ident created in step above \n IMPORTING\n client = lo_client \n ).\n\nOr perhaps easier to work with.\nUse Sm59 to create and external http addr and select this\nNewly created identity.\n\nThen call with http client created via destination.\nCALL METHOD cl_http_client=>create_by_destination\n EXPORTING\n destination = lv_destination \"the new sm59 destination \n IMPORTING\n client = lo_http_client.\n\n\nOPTION 3 Details: (Not ideal, assume called service supports it.)\nif and only if, the called service support Certificates as Header\nNote you xxx.cer is the equivalent to an identity key.\nmanage the string carefully.\n DATA: lo_client TYPE REF TO if_http_client.\n\n cl_http_client=>create_by_url(\n EXPORTING\n url = 'url' \n ssl_id = 'ANONYM' \"Start SSL handshake as Anonymous SSL\n IMPORTING\n client = lo_client \n ).\n\n\n\n\"and pass the actual identify as HTTP header,\n\" Many service support this approach. But they solutions are always\n\" specific to that service.\n\" Example is the microsoft translation service.\n\" the expect a user subscription key as a header.\n'https://api-eur.cognitive.microsofttranslator.com/translate?api-version=3.0'\nlo_client->request->set_header_field(\n EXPORTING\n name = 'Client-Cert' \"Check HTTP header name with called Service docu\n value = '<cert> in string format'\n ).\n \n \"lo_client->send( .. )\n \"lo_client->receive( .. )\n\n", "Use KeyStore Explorer tool to create single pfx file from your client key and certificate. Also you can put chain of the client certificate with this tool.\nUse sapgenpse on your local system and create a pse file from pfx file with below command:\nsapgenpse import_p12 -p c:\\client.pse c:\\client.pfx\n\nGo to STRUST, create your own certificate store at Enviroment->SSL Client identities. I prefer this for not mixing all of them. Then return to STRUST and chose PSE->Import and select your custom pse file. Then click PSE->Save as and select your custom identity.\nAdd site SSL certificate to your new identiy.\nYou can try new SSL Client configuration at SM59 with selecting your new SSL Client identity.\nExample ABAP code below.\nREPORT ZMKY_SSL_CLIENT.\n\n DATA: lo_client TYPE REF TO if_http_client,\n lv_code TYPE i,\n lv_REASON type string.\n\n cl_http_client=>create_by_url(\n EXPORTING\n url = 'https://mysslclienthost.com'\n ssl_id = 'MYSSLC' \"Your SSL Client identity\n IMPORTING\n client = lo_client\n ).\n\n lo_client->SEND( ).\n\n lo_client->RECEIVE( ).\n\n lo_client->RESPONSE->GET_STATUS( IMPORTING CODE = lv_code\n REASON = lv_reason ).\n\n WRITE: lv_code, lv_reason.\n\n" ]
[ 2, 0 ]
[]
[]
[ "abap", "sap_basis", "ssl", "ssl_certificate" ]
stackoverflow_0074577287_abap_sap_basis_ssl_ssl_certificate.txt
Q: disableIntervalMomentum is not working for IOS device in Flatlist (React Native) I am using disableIntervalMomentum={true} to stop slider on next index. It is working for android but for IOS, on fast scrolling, it does not stop on next index but keeps moving. here is my code <FlatList ref={flatListRef} data={data?.offers || []} keyExtractor={(item, index) => index.toString()} horizontal={true} bounces={(data?.offers || []).length > 1} pagingEnabled={true} showsHorizontalScrollIndicator={false} snapToInterval={!isMultiCard ? snapToInterval : undefined} onViewableItemsChanged={onViewRef.current} viewabilityConfig={viewConfigRef.current} disableIntervalMomentum={true} decelerationRate={'fast'} snapToOffsets={isMultiCard && length > 1 ? snapToOffsets : undefined} scrollEnabled={true} /> I added disableIntervalMomentum={true} but it is not working for IOS. Let me know if more info is needed. A: So i figured that snapToOffsets was creating problem(because it was overriding pagingEnabled) so i removed it and it worked. A: The disableIntervalMomentum prop is used to disable the momentum-based scrolling that occurs when the user flings the list. This prop only works on Android, as momentum-based scrolling is not a feature of iOS. To stop the list from scrolling past the next index on iOS, you can try using the snapToInterval prop. This prop specifies the interval at which the list should stop when the user releases the list. For example, if the snapToInterval value is set to 120 and the user scrolls the list quickly and releases it, the list will stop at the nearest index that is a multiple of 120 (i.e. 0, 120, 240, etc.). You can also try using the snapToOffsets prop, which allows you to specify specific offsets at which the list should stop. This can be useful if you want to stop the list at specific indexes, rather than at regular intervals.
disableIntervalMomentum is not working for IOS device in Flatlist (React Native)
I am using disableIntervalMomentum={true} to stop slider on next index. It is working for android but for IOS, on fast scrolling, it does not stop on next index but keeps moving. here is my code <FlatList ref={flatListRef} data={data?.offers || []} keyExtractor={(item, index) => index.toString()} horizontal={true} bounces={(data?.offers || []).length > 1} pagingEnabled={true} showsHorizontalScrollIndicator={false} snapToInterval={!isMultiCard ? snapToInterval : undefined} onViewableItemsChanged={onViewRef.current} viewabilityConfig={viewConfigRef.current} disableIntervalMomentum={true} decelerationRate={'fast'} snapToOffsets={isMultiCard && length > 1 ? snapToOffsets : undefined} scrollEnabled={true} /> I added disableIntervalMomentum={true} but it is not working for IOS. Let me know if more info is needed.
[ "So i figured that snapToOffsets was creating problem(because it was overriding pagingEnabled) so i removed it and it worked.\n", "The disableIntervalMomentum prop is used to disable the momentum-based scrolling that occurs when the user flings the list. This prop only works on Android, as momentum-based scrolling is not a feature of iOS.\nTo stop the list from scrolling past the next index on iOS, you can try using the snapToInterval prop. This prop specifies the interval at which the list should stop when the user releases the list. For example, if the snapToInterval value is set to 120 and the user scrolls the list quickly and releases it, the list will stop at the nearest index that is a multiple of 120 (i.e. 0, 120, 240, etc.).\nYou can also try using the snapToOffsets prop, which allows you to specify specific offsets at which the list should stop. This can be useful if you want to stop the list at specific indexes, rather than at regular intervals.\n" ]
[ 0, 0 ]
[]
[]
[ "ios", "react_native", "react_native_flatlist", "reactjs", "scrollview" ]
stackoverflow_0074612246_ios_react_native_react_native_flatlist_reactjs_scrollview.txt
Q: Aggregate functions in a recursive CTE to calculate fractions of sub-groups I have a self-referential table which I am designing to describe mixtures of ingredients id raw_input parent_input amount a x 4 a y 6 b j 1 b k 3 c a 6 c b 1 d c 1 d a 1 I'd like to write a recursive CTE query which calculates the fraction of each base_input in each individual mix. For example, I'd like the output: id raw_input amount a x 0.4 a y 0.6 b j 0.25 b k 0.75 c x 0.34285714 c y 0.51428571 c j 0.03571429 c k 0.10714286 I haven't added mixture d here as it's quite tricky to calculate at this stage where the values are calculated as such: id raw_input amount a x 4/(4+6) a y 6/(4+6) b j 1/4 b k 3/4 c x 0.4*(6/(6+1)) c y 0.6*(6/(6+1)) c j 0.25*(1/(6+1)) c k 0.75*(1/(6+1)) My method for attempting this was to join a aggregate total onto the tables in the CTE, then divide the masses by this as such: WITH cte AS ( SELECT id, base_input, mass_fraction FROM (SELECT E.id, E.base_input, E.amount/f.total_mass AS mass_fraction FROM mix_table E JOIN (SELECT id, SUM(amount) as total_mass FROM mix_table GROUP BY id ) AS root_totals ON root_totals.id = E.id WHERE E.base_input IS NOT NULL) AS r UNION ALL SELECT b.id, base_input, mass_fraction/totals.total_mass FROM (SELECT F.id, cte.base_input, cte.amount/branch_totals.total_mass AS mass_fraction FROM mix_table F JOIN cte on F.parent_input = cte.id) as b JOIN (SELECT id, SUM(amount) as total_mass FROM mix_table GROUP BY id ) AS branch_totals ON branch_totals.id = totals.id ) select * from cte Running it without the totals joined onto the CTE gets most of the way there, just the individual components of the mixture group C are not scaled by their respective fractions. It seems like a CTE with an aggregate function is exactly what I want to do, just the error raised by SQL server prevents me doing it. There must be a way around this, I'm sure I'm not the first person to want to do this. Edit: I'd like to clarify that I'm aiming to do this for more than one level, where I can expand the solution to account for an indeterminate level of nested parent/children A: As you only have 2 levels, it would seem easier to do the UNION ALL outside of the CTE, rather than within it: SELECT * INTO dbo.YourTable FROM (VALUES('a','x',NULL,4), ('a','y',NULL,6), ('b','j',NULL,1), ('b','k',NULL,3), ('c',NULL,'a',6), ('c',NULL,'b',1))V(id,raw_input,parent_input, amount); GO WITH CTE AS( SELECT id, raw_input, amount, parent_input, (amount*1.) / SUM(Amount) OVER (PARTITION BY id) AS amountperc FROM dbo.YourTable YT) SELECT id, raw_input, --amount, amountperc AS amount FROM CTE WHERE parent_input IS NULL UNION ALL SELECT C.id, P.raw_input, C.amountperc * P.amountperc FROM CTE P JOIN CTE C ON P.id = C.parent_input; GO DROP TABLE dbo.YourTable; A: I managed to solve my own quandry, and for future viewers I think this only works because it doesn't require the aggregation to be run more than once, I can do the aggregation calculation before the CTE, then use it within the CTE: -- I first define the total aggregation here WITH totals AS (SELECT id, SUM(amount) as total_mass FROM test_mix GROUP BY id), cte AS ( SELECT id, raw_input, amount FROM (SELECT E.id, E.raw_input, E.amount/totals.total_mass AS amount FROM test_mix E JOIN totals ON totals.id = E.id WHERE E.raw_input IS NOT NULL) AS r UNION ALL SELECT b.id, raw_input, child_q*(parent_q/totals.total_mass) FROM (SELECT F.id, cte.raw_input, cte.amount AS parent_q, F.amount AS child_q FROM test_mix F JOIN cte on F.parent_input = cte.id) as b JOIN totals -- then use it here ON totals.id = b.id ) SELECT id,raw_input, SUM(amount) FROM cte GROUP BY id,raw_input
Aggregate functions in a recursive CTE to calculate fractions of sub-groups
I have a self-referential table which I am designing to describe mixtures of ingredients id raw_input parent_input amount a x 4 a y 6 b j 1 b k 3 c a 6 c b 1 d c 1 d a 1 I'd like to write a recursive CTE query which calculates the fraction of each base_input in each individual mix. For example, I'd like the output: id raw_input amount a x 0.4 a y 0.6 b j 0.25 b k 0.75 c x 0.34285714 c y 0.51428571 c j 0.03571429 c k 0.10714286 I haven't added mixture d here as it's quite tricky to calculate at this stage where the values are calculated as such: id raw_input amount a x 4/(4+6) a y 6/(4+6) b j 1/4 b k 3/4 c x 0.4*(6/(6+1)) c y 0.6*(6/(6+1)) c j 0.25*(1/(6+1)) c k 0.75*(1/(6+1)) My method for attempting this was to join a aggregate total onto the tables in the CTE, then divide the masses by this as such: WITH cte AS ( SELECT id, base_input, mass_fraction FROM (SELECT E.id, E.base_input, E.amount/f.total_mass AS mass_fraction FROM mix_table E JOIN (SELECT id, SUM(amount) as total_mass FROM mix_table GROUP BY id ) AS root_totals ON root_totals.id = E.id WHERE E.base_input IS NOT NULL) AS r UNION ALL SELECT b.id, base_input, mass_fraction/totals.total_mass FROM (SELECT F.id, cte.base_input, cte.amount/branch_totals.total_mass AS mass_fraction FROM mix_table F JOIN cte on F.parent_input = cte.id) as b JOIN (SELECT id, SUM(amount) as total_mass FROM mix_table GROUP BY id ) AS branch_totals ON branch_totals.id = totals.id ) select * from cte Running it without the totals joined onto the CTE gets most of the way there, just the individual components of the mixture group C are not scaled by their respective fractions. It seems like a CTE with an aggregate function is exactly what I want to do, just the error raised by SQL server prevents me doing it. There must be a way around this, I'm sure I'm not the first person to want to do this. Edit: I'd like to clarify that I'm aiming to do this for more than one level, where I can expand the solution to account for an indeterminate level of nested parent/children
[ "As you only have 2 levels, it would seem easier to do the UNION ALL outside of the CTE, rather than within it:\nSELECT *\nINTO dbo.YourTable\nFROM (VALUES('a','x',NULL,4),\n ('a','y',NULL,6),\n ('b','j',NULL,1),\n ('b','k',NULL,3),\n ('c',NULL,'a',6),\n ('c',NULL,'b',1))V(id,raw_input,parent_input, amount);\nGO\nWITH CTE AS(\n SELECT id,\n raw_input,\n amount,\n parent_input,\n (amount*1.) / SUM(Amount) OVER (PARTITION BY id) AS amountperc\n FROM dbo.YourTable YT)\nSELECT id,\n raw_input,\n --amount,\n amountperc AS amount\nFROM CTE\nWHERE parent_input IS NULL\nUNION ALL\nSELECT C.id,\n P.raw_input,\n C.amountperc * P.amountperc\nFROM CTE P\n JOIN CTE C ON P.id = C.parent_input;\nGO\n\nDROP TABLE dbo.YourTable;\n\n", "I managed to solve my own quandry, and for future viewers I think this only works because it doesn't require the aggregation to be run more than once, I can do the aggregation calculation before the CTE, then use it within the CTE:\n-- I first define the total aggregation here\n\nWITH totals AS (SELECT id, SUM(amount) as total_mass\n FROM test_mix\n GROUP BY id),\ncte AS (\n SELECT id, raw_input, amount FROM\n (SELECT E.id, E.raw_input, E.amount/totals.total_mass AS amount\n FROM test_mix E\n JOIN totals\n ON totals.id = E.id\n WHERE E.raw_input IS NOT NULL) AS r\n UNION ALL\n \n SELECT b.id, raw_input, child_q*(parent_q/totals.total_mass) FROM \n (SELECT F.id, cte.raw_input, cte.amount AS parent_q, F.amount AS child_q\n FROM test_mix F \n JOIN cte on F.parent_input = cte.id) as b\n JOIN totals -- then use it here\n ON totals.id = b.id\n )\n\nSELECT id,raw_input, SUM(amount)\nFROM cte\nGROUP BY id,raw_input\n\n" ]
[ 0, 0 ]
[]
[]
[ "aggregate_functions", "common_table_expression", "recursion", "sql", "sql_server" ]
stackoverflow_0074656087_aggregate_functions_common_table_expression_recursion_sql_sql_server.txt
Q: Config.get() Not Getting Configuration From File: "custom-environment-variables.json" Nodejs, JavaScript I am learning how to configure my Node.js App environment. For this I am using config module. Below is my index.js file: ` const config=require('config'); const express=require('express'); const app=express(); app.use(express.json()); //BUILT-IN EXPRESS MIDDLEWARE-FUNCTION //CONFIGURATION console.log('Current Working Environment:',process.env.NODE_ENV); console.log('Name is:', config.get('name')); console.log('Server is:', config.get('mail.host')); console.log('Password is:', config.get('mail.password')); ` I set NODE_ENV to production by the power shell command: $env:NODE_ENV="production". My production.json file inside the config folder is: `{ "name":"My Productoin Environmet", "mail":{ "host": "Prod-Environment" } }` And custom-environment-variables.json file is: `{ "mail":{ "passwrod":"app_password" } }` I set app_password to 12345678 by the power shell command : $env:app_password="12345678" config.get() is supposed to look at various sources to look for this configurations including, json files, configuration files and also environment variables. But whenever I run my app, I get the following error: `throw new Error('Configuration property "' + property + '" is not defined'); Error: Configuration property "mail.password" is not defined` If I remove the line : console.log('Password is:', config.get('mail.password')); everything goes well. Please, guide me what is the solution? A: Firstly you have a lot of syntactical errors for example in custom-environment-variables.json { "mail":{ "password":"app_password" } } Now if u need to store the password of your mail server in the environment variables On windows $env:app_password=12345 On Linux and OSX: export app_password=12345 how to run ? app.js const config = require("config"); console.log("Mail Password: " + config.get("mail.password")); A: i had the same problem because i didn't define an environment variable for storing the password of the mail server. So, my suggestion will be define your environment variable for storing the password using the below command line (mac) and then your code should work. export app_password=/* the password you want to set */ how to define an environment variable for storing the password of the mail server. A: While defining environment variables in command_prompt don't put space on either side of '=' sign..... eg: set app_password = 123456 -----> is wrong way set app_password=123456 -----> will work A: The issue in 99% of cases is in the name of the file in the config folder, storing your custom variables A: To add on: make sure your file has .json extension.
Config.get() Not Getting Configuration From File: "custom-environment-variables.json" Nodejs, JavaScript
I am learning how to configure my Node.js App environment. For this I am using config module. Below is my index.js file: ` const config=require('config'); const express=require('express'); const app=express(); app.use(express.json()); //BUILT-IN EXPRESS MIDDLEWARE-FUNCTION //CONFIGURATION console.log('Current Working Environment:',process.env.NODE_ENV); console.log('Name is:', config.get('name')); console.log('Server is:', config.get('mail.host')); console.log('Password is:', config.get('mail.password')); ` I set NODE_ENV to production by the power shell command: $env:NODE_ENV="production". My production.json file inside the config folder is: `{ "name":"My Productoin Environmet", "mail":{ "host": "Prod-Environment" } }` And custom-environment-variables.json file is: `{ "mail":{ "passwrod":"app_password" } }` I set app_password to 12345678 by the power shell command : $env:app_password="12345678" config.get() is supposed to look at various sources to look for this configurations including, json files, configuration files and also environment variables. But whenever I run my app, I get the following error: `throw new Error('Configuration property "' + property + '" is not defined'); Error: Configuration property "mail.password" is not defined` If I remove the line : console.log('Password is:', config.get('mail.password')); everything goes well. Please, guide me what is the solution?
[ "Firstly you have a lot of syntactical errors for example in\ncustom-environment-variables.json\n{\n \"mail\":{\n \"password\":\"app_password\"\n }\n}\n\nNow if u need to store the password of your mail server in the environment variables\nOn windows\n$env:app_password=12345\n\nOn Linux and OSX:\nexport app_password=12345\n\nhow to run ?\napp.js\nconst config = require(\"config\");\nconsole.log(\"Mail Password: \" + config.get(\"mail.password\"));\n\n\n", "i had the same problem because i didn't define an environment variable for storing the password of the mail server. So, my suggestion will be define your environment variable for storing the password using the below command line (mac) and then your code should work.\nexport app_password=/* the password you want to set */\nhow to define an environment variable for storing the password of the mail server. \n", "While defining environment variables in command_prompt don't put space on either side of '=' sign.....\neg:\nset app_password = 123456 -----> is wrong way\n \nset app_password=123456 -----> will work\n\n", "The issue in 99% of cases is in the name of the file in the config folder, storing your custom variables\n", "To add on: make sure your file has .json extension.\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "javascript", "node.js" ]
stackoverflow_0052003757_javascript_node.js.txt
Q: Trying to send an object to Spring boot via ReactJS I'm trying to send this post object to spring boot but I keep getting this error: Error: Required request body is missing: public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.String>> com.example.RegisterLogin.controller.RegisterController.registerHandler(java.lang.Object) I can't understand why. This is my code in Java: @PostMapping("/register") public ResponseEntity<Map<String,String>> registerHandler(@RequestBody Object registerDTO) { log.info("User: {}",registerDTO); return ResponseEntity.ok(registerService.saveUser((RegisterDTO) registerDTO)); } This is my code in reactjs: export class RegisterService{ save(user){ console.log({ method: 'POST', headers: { accept: 'application/json', body: JSON.stringify({registerDTO: user})}}) fetch('http://localhost:8080/api/auth/register', { method: 'POST', headers: { accept: 'application/json', body: JSON.stringify({registerDTO: user})}}) .then(data => console.log(data.json())) // Parsing the data into a JavaScript object .then(json => alert(JSON.stringify(json))) } } I can see via console it sends this: find out the problem, but there is another one with React send me this object: (Spring doesn't like it) {registerDTO={name=w, dsurname=wd, age=3, sex=MALE, role=USER, [email protected], password=wdkledml}} With postman I get this and everything works: {name=edfew, surname=feefewe, [email protected], password=efefefe, age=23, sex=MALE, role=USER} Does somebody know how to fix it please A: It seems you are returning the result of the console.log instead of the promise from the .json() function. You need to return the promise from the .json() function. .then(data => console.log(data.json())) //should become: .then(data => data.json())
Trying to send an object to Spring boot via ReactJS
I'm trying to send this post object to spring boot but I keep getting this error: Error: Required request body is missing: public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.String>> com.example.RegisterLogin.controller.RegisterController.registerHandler(java.lang.Object) I can't understand why. This is my code in Java: @PostMapping("/register") public ResponseEntity<Map<String,String>> registerHandler(@RequestBody Object registerDTO) { log.info("User: {}",registerDTO); return ResponseEntity.ok(registerService.saveUser((RegisterDTO) registerDTO)); } This is my code in reactjs: export class RegisterService{ save(user){ console.log({ method: 'POST', headers: { accept: 'application/json', body: JSON.stringify({registerDTO: user})}}) fetch('http://localhost:8080/api/auth/register', { method: 'POST', headers: { accept: 'application/json', body: JSON.stringify({registerDTO: user})}}) .then(data => console.log(data.json())) // Parsing the data into a JavaScript object .then(json => alert(JSON.stringify(json))) } } I can see via console it sends this: find out the problem, but there is another one with React send me this object: (Spring doesn't like it) {registerDTO={name=w, dsurname=wd, age=3, sex=MALE, role=USER, [email protected], password=wdkledml}} With postman I get this and everything works: {name=edfew, surname=feefewe, [email protected], password=efefefe, age=23, sex=MALE, role=USER} Does somebody know how to fix it please
[ "It seems you are returning the result of the console.log instead of the promise from the .json() function. You need to return the promise from the .json() function.\n\n\n.then(data => console.log(data.json()))\n\n//should become:\n\n.then(data => data.json())\n\n\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "reactjs", "rest", "spring", "spring_boot" ]
stackoverflow_0074657493_javascript_reactjs_rest_spring_spring_boot.txt
Q: ngFor directive is not rendering my div or its data I have created a service to which i am subscribed to in my component, however, the div is not rendering at all when i use ngFor. Component.TS file import { Component, OnInit } from '@angular/core'; import {FormBuilder, Validators} from '@angular/forms'; import { PlansService } from 'src/app/services/plans.service'; interface JobType { value: string; viewValue: string; } interface WorkEnv { value: string; viewValue: string; } @Component({ selector: 'app-post-job', templateUrl: './post-job.component.html', styleUrls: ['./post-job.component.css'] }) export class PostJobComponent implements OnInit { jobTypes: JobType[] = [ {value: 'type-0', viewValue: 'Full-Time'}, {value: 'type-1', viewValue: 'Part-Time'}, {value: 'type-2', viewValue: 'Freelance'}, ]; workEnvs: WorkEnv[] = [ {value: 'type-0', viewValue: 'Remote'}, {value: 'type-1', viewValue: 'Hybrid'}, {value: 'type-2', viewValue: 'On-site'}, ]; jobForm = this.fb.group({ jobTitle: this.fb.group({ title: ['', [Validators.required, Validators.minLength(2), Validators.pattern('^[_A-z0-9]*((-|\s)*[_A-z0-9])*$')]], jobType: ['', [Validators.required]] }), }) plans: any; constructor(private fb: FormBuilder, private service:PlansService) { } ngOnInit(): void { this.service.getPlans() .subscribe(response => { this.plans = response; console.log(response); }); } // Getter method to access formcontrols get myForm() { return this.jobForm.controls; } // Submit Registration Form // onSubmit() { // this.submitted = true; // if(!this.jobForm.valid) { // alert('Please fill all the required fields to create a super hero!') // } else { // console.log(this.jobForm.value) // } // } } Component.html file <div class="subscription-container" > <div class="card" *ngFor="let plan of plans " > <h1>{{plan.title}}</h1> </div> </div> the data is showing in my console.log but it is not rendering on the card at all. Please help, thank you in advance! I tried to add a dummy array in the ts file and i printed that to the console log but still did not render in the div A: You can load your plans async. Create an observable public plans$: Observable<any> Assign your subscription this.plans$ = this.service.getPlans(); In your HTML render conditionally <ng-container *ngIf="plans$ | async as plans"> <div *ngFor="let plan of plans"> {{ plan | json }} </div> </ng-container> A: Use Change detection strategy in angular : @Component({ selector: 'app-post-job', templateUrl: './post-job.component.html', changeDetection: ChangeDetectionStrategy.OnPush, styleUrls: ['./post-job.component.css'] }) constructor(private ref: ChangeDetectorRef) { } ngOnInit(): void { this.service.getPlans() .subscribe(response => { this.plans = response; console.log(response); this.ref.detectChanges(); // manually trigger change detection }); }
ngFor directive is not rendering my div or its data
I have created a service to which i am subscribed to in my component, however, the div is not rendering at all when i use ngFor. Component.TS file import { Component, OnInit } from '@angular/core'; import {FormBuilder, Validators} from '@angular/forms'; import { PlansService } from 'src/app/services/plans.service'; interface JobType { value: string; viewValue: string; } interface WorkEnv { value: string; viewValue: string; } @Component({ selector: 'app-post-job', templateUrl: './post-job.component.html', styleUrls: ['./post-job.component.css'] }) export class PostJobComponent implements OnInit { jobTypes: JobType[] = [ {value: 'type-0', viewValue: 'Full-Time'}, {value: 'type-1', viewValue: 'Part-Time'}, {value: 'type-2', viewValue: 'Freelance'}, ]; workEnvs: WorkEnv[] = [ {value: 'type-0', viewValue: 'Remote'}, {value: 'type-1', viewValue: 'Hybrid'}, {value: 'type-2', viewValue: 'On-site'}, ]; jobForm = this.fb.group({ jobTitle: this.fb.group({ title: ['', [Validators.required, Validators.minLength(2), Validators.pattern('^[_A-z0-9]*((-|\s)*[_A-z0-9])*$')]], jobType: ['', [Validators.required]] }), }) plans: any; constructor(private fb: FormBuilder, private service:PlansService) { } ngOnInit(): void { this.service.getPlans() .subscribe(response => { this.plans = response; console.log(response); }); } // Getter method to access formcontrols get myForm() { return this.jobForm.controls; } // Submit Registration Form // onSubmit() { // this.submitted = true; // if(!this.jobForm.valid) { // alert('Please fill all the required fields to create a super hero!') // } else { // console.log(this.jobForm.value) // } // } } Component.html file <div class="subscription-container" > <div class="card" *ngFor="let plan of plans " > <h1>{{plan.title}}</h1> </div> </div> the data is showing in my console.log but it is not rendering on the card at all. Please help, thank you in advance! I tried to add a dummy array in the ts file and i printed that to the console log but still did not render in the div
[ "You can load your plans async.\n\nCreate an observable\npublic plans$: Observable<any>\n\nAssign your subscription\nthis.plans$ = this.service.getPlans();\n\nIn your HTML render conditionally\n\n\n<ng-container *ngIf=\"plans$ | async as plans\">\n <div *ngFor=\"let plan of plans\">\n {{ plan | json }} \n </div>\n</ng-container>\n\n", "Use Change detection strategy in angular :\n@Component({\n selector: 'app-post-job',\n templateUrl: './post-job.component.html',\n changeDetection: ChangeDetectionStrategy.OnPush,\n styleUrls: ['./post-job.component.css']\n})\n\n\nconstructor(private ref: ChangeDetectorRef) {\n \n}\n\n\n ngOnInit(): void {\n\nthis.service.getPlans()\n .subscribe(response => {\n this.plans = response;\n console.log(response);\n this.ref.detectChanges(); // manually trigger change detection\n });\n\n}\n" ]
[ 0, 0 ]
[]
[]
[ "angular", "html", "ngfor", "typescript" ]
stackoverflow_0074656240_angular_html_ngfor_typescript.txt
Q: cant upload or download files into database I have an ASP.NET Core 3.1 project, and I am trying to upload files into my database and displaying their URL so I can download them again. But when I press upload, I get an error in the index view referring to this line @foreach (var item in Model) Here is my model class: public class Files { [Key] public int DocumentId { get; set; } public string Name { get; set; } public string FileType { get; set; } public byte[] DataFiles { get; set; } } This is my controller: using Info.Data; using Info.Models; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc; using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Threading.Tasks; namespace Info.Controllers { public class DemoController : Controller { private readonly ApplicationDbContext _context; public DemoController(ApplicationDbContext context) { _context = context; } public IActionResult Index() { var result = _context.Files.ToList(); return View(result); } [HttpPost] public IActionResult Index(IFormFile files) { if (files != null) { if (files.Length > 0) { //Getting FileName var fileName = Path.GetFileName(files.FileName); //Getting file Extension var fileExtension = Path.GetExtension(fileName); // concatenating FileName + FileExtension var newFileName = String.Concat(Convert.ToString(Guid.NewGuid()), fileExtension); var objfiles = new Files() { DocumentId = 0, Name = newFileName, FileType = fileExtension, }; using (var target = new MemoryStream()) { files.CopyTo(target); objfiles.DataFiles = target.ToArray(); } _context.Files.Add(objfiles); _context.SaveChanges(); } } return View(); } public IActionResult DownloadImage(int id) { byte[] bytes; string fileName, contentType; var item = _context.Files.FirstOrDefault(c => c.DocumentId == id); if (item != null) { fileName = item.Name; contentType = item.FileType; bytes = item.DataFiles; return File(bytes, contentType, fileName); } return Ok("Can't find the File"); } } } Here is the view @model List<Info.Models.Files> @{ ViewData["Title"] = "Index"; } <h1>Index</h1> <div class="row"> <div class="col-md-5"> <form method="post" enctype="multipart/form-data" asp-controller="Demo" asp-action="Index"> <div class="form-group"> <div class="col-md-10"> <p>Upload file</p> <input class="form-control" name="files" type="file" /> </div> </div> <div class="form-group"> <div class="col-md-10"> <input class="btn btn-success" type="submit" value="Upload" /> </div> </div> </form> </div> </div> <ul> @foreach (var item in Model) { <li> <a asp-action="DownloadImage" asp-route-filename="@item.Name"> @item.Name </a> </li> } </ul> I can't upload or download.
cant upload or download files into database
I have an ASP.NET Core 3.1 project, and I am trying to upload files into my database and displaying their URL so I can download them again. But when I press upload, I get an error in the index view referring to this line @foreach (var item in Model) Here is my model class: public class Files { [Key] public int DocumentId { get; set; } public string Name { get; set; } public string FileType { get; set; } public byte[] DataFiles { get; set; } } This is my controller: using Info.Data; using Info.Models; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc; using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Threading.Tasks; namespace Info.Controllers { public class DemoController : Controller { private readonly ApplicationDbContext _context; public DemoController(ApplicationDbContext context) { _context = context; } public IActionResult Index() { var result = _context.Files.ToList(); return View(result); } [HttpPost] public IActionResult Index(IFormFile files) { if (files != null) { if (files.Length > 0) { //Getting FileName var fileName = Path.GetFileName(files.FileName); //Getting file Extension var fileExtension = Path.GetExtension(fileName); // concatenating FileName + FileExtension var newFileName = String.Concat(Convert.ToString(Guid.NewGuid()), fileExtension); var objfiles = new Files() { DocumentId = 0, Name = newFileName, FileType = fileExtension, }; using (var target = new MemoryStream()) { files.CopyTo(target); objfiles.DataFiles = target.ToArray(); } _context.Files.Add(objfiles); _context.SaveChanges(); } } return View(); } public IActionResult DownloadImage(int id) { byte[] bytes; string fileName, contentType; var item = _context.Files.FirstOrDefault(c => c.DocumentId == id); if (item != null) { fileName = item.Name; contentType = item.FileType; bytes = item.DataFiles; return File(bytes, contentType, fileName); } return Ok("Can't find the File"); } } } Here is the view @model List<Info.Models.Files> @{ ViewData["Title"] = "Index"; } <h1>Index</h1> <div class="row"> <div class="col-md-5"> <form method="post" enctype="multipart/form-data" asp-controller="Demo" asp-action="Index"> <div class="form-group"> <div class="col-md-10"> <p>Upload file</p> <input class="form-control" name="files" type="file" /> </div> </div> <div class="form-group"> <div class="col-md-10"> <input class="btn btn-success" type="submit" value="Upload" /> </div> </div> </form> </div> </div> <ul> @foreach (var item in Model) { <li> <a asp-action="DownloadImage" asp-route-filename="@item.Name"> @item.Name </a> </li> } </ul> I can't upload or download.
[]
[]
[ "please check your dB Context connection string and appsetting.json also did you run the migration and db update command?\nHere is a guid about dbcontext\n" ]
[ -1 ]
[ "asp.net_core_3.1", "asp.net_core_mvc", "c#" ]
stackoverflow_0074657018_asp.net_core_3.1_asp.net_core_mvc_c#.txt
Q: spring-cloud-starter-openfeign: Invalid HTTP method: PATCH executing PATCH Context I have a spring boot (version 2.2.6.RELEASE) web project. From this web application (I call "APP1") I want to call another URI using the PATCH method from another web application (Let's call it "APP2"). In my pom.xml, I have the following dependency: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> Here is how I call the PATCH method of the other web application. @FeignClient(name = "clientName", url = "base-uri") public interface MyInterface{ @PatchMapping(value = "/target-uri") void callClientMethod(Map<String, Object> args); Problem The APP2's PATCH method is effectively being called But then APP1 throws the following error: feign.RetryableException: Invalid HTTP method: PATCH executing PATCH I looked on the Internet for a solution, and added the following snipet to my pom.xml <dependency> <groupId>com.netflix.feign</groupId> <!-- Also tried io.github.openfeign --> <artifactId>feign-httpclient</artifactId> <version>8.18.0</version> </dependency> After that, APP2's PATCH method is stille properly called but in APP1 I got the following error : java.lang.NoSuchMethodError: feign.Response.create(ILjava/lang/String;Ljava/util/Map;Lfeign/Response$Body;)Lfeign/Response; Question Does anyone know how to solve this error ? Thanks in advance for your help ! A: I had the same problem and spent a lot of time for understand and resolve this problem. First what you need to understand that is the Feign doesn't support PATCH http method for call from the box! And if you can change methods in both services use PUT for update instead PATCH... But if you integrate with third party implementation you should add some configurations: 1. Add dependency which support PATCH http method: // https://mvnrepository.com/artifact/io.github.openfeign/feign-okhttp compile group: 'io.github.openfeign', name: 'feign-okhttp', version: '10.2.0' Add configuration: @Configuration public class FeignConfiguration { @Bean public OkHttpClient client() { return new OkHttpClient(); } } And example for PATCH request with Feign: @FeignClient(name = "someapi", url = "${client.someapi.url}") @Component @RequestMapping("/users") public interface SomeClient { @RequestMapping(value = "/{id}", method = RequestMethod.PATCH, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE) FeignUser update(@PathVariable("id") Long id, @RequestBody Map<String, Object> fields); } Hope it helps someone. A: Just Add: <dependency> <groupId>io.github.openfeign</groupId> <artifactId>feign-httpclient</artifactId> </dependency> A: If you are adding Feign with the following dependency: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> <!-- has dependecy on spring-cloud-openfeign-core inside, which already maintains version of feign-okhttp artifact --> </dependency> you can add okhttp client (without hardcoding artifact version) to fix the issue with PATCH request: <dependency> <groupId>io.github.openfeign</groupId> <artifactId>feign-okhttp</artifactId> <!-- Required to use PATCH --> </dependency> No other steps needed. The okhttp client will be applied automatically by auto configuration. Also, this way you don't need to manage feign-okhttp artifact version. Spring Cloud will manage version for you. Tested with Spring Boot 2.7.6
spring-cloud-starter-openfeign: Invalid HTTP method: PATCH executing PATCH
Context I have a spring boot (version 2.2.6.RELEASE) web project. From this web application (I call "APP1") I want to call another URI using the PATCH method from another web application (Let's call it "APP2"). In my pom.xml, I have the following dependency: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> Here is how I call the PATCH method of the other web application. @FeignClient(name = "clientName", url = "base-uri") public interface MyInterface{ @PatchMapping(value = "/target-uri") void callClientMethod(Map<String, Object> args); Problem The APP2's PATCH method is effectively being called But then APP1 throws the following error: feign.RetryableException: Invalid HTTP method: PATCH executing PATCH I looked on the Internet for a solution, and added the following snipet to my pom.xml <dependency> <groupId>com.netflix.feign</groupId> <!-- Also tried io.github.openfeign --> <artifactId>feign-httpclient</artifactId> <version>8.18.0</version> </dependency> After that, APP2's PATCH method is stille properly called but in APP1 I got the following error : java.lang.NoSuchMethodError: feign.Response.create(ILjava/lang/String;Ljava/util/Map;Lfeign/Response$Body;)Lfeign/Response; Question Does anyone know how to solve this error ? Thanks in advance for your help !
[ "\nI had the same problem and spent a lot of time for understand and resolve this problem.\nFirst what you need to understand that is the Feign doesn't support PATCH http method for call from the box!\nAnd if you can change methods in both services use PUT for update instead PATCH...\n\nBut if you integrate with third party implementation you should add some configurations:\n1. Add dependency which support PATCH http method:\n\n// https://mvnrepository.com/artifact/io.github.openfeign/feign-okhttp\ncompile group: 'io.github.openfeign', name: 'feign-okhttp', version:\n'10.2.0'\n\n\nAdd configuration:\n\n\n@Configuration \npublic class FeignConfiguration {\n @Bean\n public OkHttpClient client() {\n return new OkHttpClient();\n } \n}\n\n\n\nAnd example for PATCH request with Feign:\n\n\n@FeignClient(name = \"someapi\", url = \"${client.someapi.url}\")\n@Component\n@RequestMapping(\"/users\")\npublic interface SomeClient {\n\n @RequestMapping(value = \"/{id}\",\n method = RequestMethod.PATCH,\n consumes = MediaType.APPLICATION_JSON_VALUE,\n produces = MediaType.APPLICATION_JSON_VALUE)\n FeignUser update(@PathVariable(\"id\") Long id, @RequestBody Map<String, Object> fields);\n}\n\n\nHope it helps someone.\n", "Just Add:\n<dependency>\n <groupId>io.github.openfeign</groupId>\n <artifactId>feign-httpclient</artifactId>\n</dependency>\n\n", "If you are adding Feign with the following dependency:\n<dependency>\n <groupId>org.springframework.cloud</groupId>\n <artifactId>spring-cloud-starter-openfeign</artifactId> <!-- has dependecy on spring-cloud-openfeign-core inside, which already maintains version of feign-okhttp artifact -->\n</dependency>\n\nyou can add okhttp client (without hardcoding artifact version) to fix the issue with PATCH request:\n<dependency>\n <groupId>io.github.openfeign</groupId>\n <artifactId>feign-okhttp</artifactId> <!-- Required to use PATCH -->\n</dependency>\n\nNo other steps needed. The okhttp client will be applied automatically by auto configuration.\nAlso, this way you don't need to manage feign-okhttp artifact version. Spring Cloud will manage version for you.\nTested with Spring Boot 2.7.6\n" ]
[ 29, 9, 0 ]
[ "The following config works for me:\n<dependency>\n <groupId>io.github.openfeign</groupId>\n <artifactId>feign-jackson</artifactId>\n <version>${feign.version}</version>\n</dependency>\n<dependency>\n <groupId>org.springframework.cloud</groupId>\n <artifactId>spring-cloud-starter-openfeign</artifactId>\n</dependency>\n<dependency>\n <groupId>io.github.openfeign</groupId>\n <artifactId>feign-httpclient</artifactId>\n <version>${feign.version}</version>\n</dependency>\n\nWhere:\nfeign.version - 11.0\nSpring Boot - 2.3.0.RELEASE\nSpring-cloud.version - 2.2.3.RELEASE\n" ]
[ -1 ]
[ "feign", "http_patch", "spring_cloud_feign" ]
stackoverflow_0061641977_feign_http_patch_spring_cloud_feign.txt
Q: Sorting 1000-2000 elements with many cache misses I have an array of 1000-2000 elements which are pointers to objects. I want to keep my array sorted and obviously I want to do this as quick as possible. They are sorted by a member and not allocated contiguously so assume a cache miss whenever I access the sort-by member. Currently I'm sorting on-demand rather than on-add, but because of the cache misses and [presumably] non-inlining of the member access the inner loop of my quick sort is slow. I'm doing tests and trying things now, (and see what the actual bottleneck is) but can anyone recommend a good alternative to speeding this up? Should I do an insert-sort instead of quicksorting on-demand, or should I try and change my model to make the elements contigious and reduce cache misses? OR, is there a sort algorithm I've not come accross which is good for data that is going to cache miss? Edit: Maybe I worded this wrong :), I don't actually need my array sorted all the time (I'm not iterating through them sequentially for anything) I just need it sorted when I'm doing a binary chop to find a matching object, and doing that quicksort at that time (when I want to search) is currently my bottleneck, because of the cache misses and jumps (I'm using a < operator on my object, but I'm hoping that inlines in release) A: Running a quicksort on each insertion is enormously inefficient. Doing a binary search and insert operation would likely be orders of magnitude faster. Using a binary search tree instead of a linear array would reduce the insert cost. Edit: I missed that you were doing sort on extraction, not insert. Regardless, keeping things sorted amortizes sorting time over each insert, which almost has to be a win, unless you have a lot of inserts for each extraction. If you want to keep the sort on-extract methodology, then maybe switch to merge sort, or another sort that has good performance for mostly-sorted data. A: Simple approach: insertion sort on every insert. Since your elements are not aligned in memory I'm guessing linked list. If so, then you could transform it into a linked list with jumps to the 10th element, the 100th and so on. This is kind of similar to the next suggestion. Or you reorganize your container structure into a binary tree (or what every tree you like, B, B*, red-black, ...) and insert elements like you would insert them into a search tree. A: I think the best approach in your case would be changing your data structure to something logarithmic and rethinking your architecture. Because the bottleneck of your application is not that sorting thing, but the question why do you have to sort everything on each insert and try to compensate that by adding on-demand sort?. Another thing you could try (that is based on your current implementation) is implementing an external pointer - something mapping table / function and sort those second keys, but I actually doubt it would benefit in this case. A: As you mention, you're going to have to do some profiling to determine if this is a bottleneck and if other approaches provide any relief. Alternatives to using an array are std::set or std::multiset which are normally implemented as R-B binary trees, and so have good performance for most applications. You're going to have to weigh using them against the frequency of the sort-when-searched pattern you implemented. In either case, I wouldn't recommend rolling-your-own sort or search unless you're interested in learning more about how it's done. A: Instead of the array of the pointers you may consider an array of structs which consist of both a pointer to your object and the sort criteria. That is: Instead of struct MyType { // ... int m_SomeField; // this is the sort criteria }; std::vector<MyType*> arr; You may do this: strcut ArrayElement { MyType* m_pObj; // the actual object int m_SortCriteria; // should be always equal to the m_pObj->m_SomeField }; std::vector<ArrayElement> arr; You may also remove the m_SomeField field from your struct, if you only access your object via this array. By such in order to sort your array you won't need to dereference m_pObj every iteration. Hence you'll utilize the cache. Of course you must keep the m_SortCriteria always synchronized with m_SomeField of the object (in case you're editing it). A: I would think that sorting on insertion would be better. We are talking O(log N) comparisons here, so say ceil( O(log N) ) + 1 retrieval of the data to sort with. For 2000, it amounts to: 8 What's great about this is that you can buffer the data of the element to be inserted, that's how you only have 8 function calls to actually insert. You may wish to look at some inlining, but do profile before you're sure THIS is the tight spot. A: Nowadays you could use a set, either a std::set, if you have unique values in your structure member, or, std::multiset if you have duplicate values in you structure member. One side note: The concept using pointers, is in general not advisable. STL containers (if used correctly) give you nearly always an optimized performance. Anyway. Please see some example code: #include <iostream> #include <array> #include <algorithm> #include <set> #include <iterator> // Demo data structure, whatever struct Data { int i{}; }; // ----------------------------------------------------------------------------------------- // All in the below section is executed during compile time. Not during runtime // It will create an array to some thousands pointer constexpr std::size_t DemoSize = 4000u; using DemoPtrData = std::array<const Data*, DemoSize>; using DemoData = std::array<Data, DemoSize>; consteval DemoData createDemoData() { DemoData dd{}; int k{}; for (Data& d : dd) d.i = k++*2; return dd; } constexpr DemoData demoData = createDemoData(); consteval DemoPtrData createDemoPtrData(const DemoData& dd) { DemoPtrData dpd{}; for (std::size_t k{}; k < dpd.size(); ++k) dpd[k] = &dd[k]; return dpd; } constexpr DemoPtrData dpd = createDemoPtrData(demoData); // ----------------------------------------------------------------------------------------- struct Comp {bool operator () (const Data* d1, const Data* d2) const { return d1->i < d2->i; }}; using MySet = std::multiset<const Data*, Comp>; int main() { // Add some thousand pointers. Will be sorted according to struct member MySet mySet{ dpd.begin(), dpd.end() }; // Extract a range of data. integer values between 42 and 52 const Data* p42 = dpd[21]; const Data* p52 = dpd[26]; // Show result for (auto iptr = mySet.lower_bound(p42); iptr != mySet.upper_bound(p52); ++iptr) std::cout << (*iptr)->i << '\n'; // Insert a new element Data d1{ 47 }; mySet.insert(&d1); // Show again std::cout << "\n\n"; for (auto iptr = mySet.lower_bound(p42); iptr != mySet.upper_bound(p52); ++iptr) std::cout << (*iptr)->i << '\n'; }
Sorting 1000-2000 elements with many cache misses
I have an array of 1000-2000 elements which are pointers to objects. I want to keep my array sorted and obviously I want to do this as quick as possible. They are sorted by a member and not allocated contiguously so assume a cache miss whenever I access the sort-by member. Currently I'm sorting on-demand rather than on-add, but because of the cache misses and [presumably] non-inlining of the member access the inner loop of my quick sort is slow. I'm doing tests and trying things now, (and see what the actual bottleneck is) but can anyone recommend a good alternative to speeding this up? Should I do an insert-sort instead of quicksorting on-demand, or should I try and change my model to make the elements contigious and reduce cache misses? OR, is there a sort algorithm I've not come accross which is good for data that is going to cache miss? Edit: Maybe I worded this wrong :), I don't actually need my array sorted all the time (I'm not iterating through them sequentially for anything) I just need it sorted when I'm doing a binary chop to find a matching object, and doing that quicksort at that time (when I want to search) is currently my bottleneck, because of the cache misses and jumps (I'm using a < operator on my object, but I'm hoping that inlines in release)
[ "Running a quicksort on each insertion is enormously inefficient. Doing a binary search and insert operation would likely be orders of magnitude faster. Using a binary search tree instead of a linear array would reduce the insert cost.\nEdit: I missed that you were doing sort on extraction, not insert. Regardless, keeping things sorted amortizes sorting time over each insert, which almost has to be a win, unless you have a lot of inserts for each extraction.\nIf you want to keep the sort on-extract methodology, then maybe switch to merge sort, or another sort that has good performance for mostly-sorted data.\n", "Simple approach: insertion sort on every insert. Since your elements are not aligned in memory I'm guessing linked list. If so, then you could transform it into a linked list with jumps to the 10th element, the 100th and so on. This is kind of similar to the next suggestion.\nOr you reorganize your container structure into a binary tree (or what every tree you like, B, B*, red-black, ...) and insert elements like you would insert them into a search tree.\n", "I think the best approach in your case would be changing your data structure to something logarithmic and rethinking your architecture. Because the bottleneck of your application is not that sorting thing, but the question why do you have to sort everything on each insert and try to compensate that by adding on-demand sort?.\nAnother thing you could try (that is based on your current implementation) is implementing an external pointer - something mapping table / function and sort those second keys, but I actually doubt it would benefit in this case.\n", "As you mention, you're going to have to do some profiling to determine if this is a bottleneck and if other approaches provide any relief.\nAlternatives to using an array are std::set or std::multiset which are normally implemented as R-B binary trees, and so have good performance for most applications. You're going to have to weigh using them against the frequency of the sort-when-searched pattern you implemented.\nIn either case, I wouldn't recommend rolling-your-own sort or search unless you're interested in learning more about how it's done.\n", "Instead of the array of the pointers you may consider an array of structs which consist of both a pointer to your object and the sort criteria. That is:\nInstead of\nstruct MyType {\n // ...\n int m_SomeField; // this is the sort criteria\n};\n\nstd::vector<MyType*> arr;\n\nYou may do this:\nstrcut ArrayElement {\n MyType* m_pObj; // the actual object\n int m_SortCriteria; // should be always equal to the m_pObj->m_SomeField\n\n};\n\nstd::vector<ArrayElement> arr;\n\nYou may also remove the m_SomeField field from your struct, if you only access your object via this array.\nBy such in order to sort your array you won't need to dereference m_pObj every iteration. Hence you'll utilize the cache.\nOf course you must keep the m_SortCriteria always synchronized with m_SomeField of the object (in case you're editing it).\n", "I would think that sorting on insertion would be better. We are talking O(log N) comparisons here, so say ceil( O(log N) ) + 1 retrieval of the data to sort with.\nFor 2000, it amounts to: 8\nWhat's great about this is that you can buffer the data of the element to be inserted, that's how you only have 8 function calls to actually insert.\nYou may wish to look at some inlining, but do profile before you're sure THIS is the tight spot.\n", "Nowadays you could use a set, either a std::set, if you have unique values in your structure member, or, std::multiset if you have duplicate values in you structure member.\nOne side note: The concept using pointers, is in general not advisable.\nSTL containers (if used correctly) give you nearly always an optimized performance.\nAnyway. Please see some example code:\n#include <iostream>\n#include <array>\n#include <algorithm>\n#include <set>\n#include <iterator>\n\n// Demo data structure, whatever\nstruct Data {\n int i{};\n};\n\n// -----------------------------------------------------------------------------------------\n// All in the below section is executed during compile time. Not during runtime\n// It will create an array to some thousands pointer\nconstexpr std::size_t DemoSize = 4000u;\nusing DemoPtrData = std::array<const Data*, DemoSize>;\nusing DemoData = std::array<Data, DemoSize>;\nconsteval DemoData createDemoData() {\n DemoData dd{};\n int k{};\n for (Data& d : dd)\n d.i = k++*2;\n return dd;\n}\nconstexpr DemoData demoData = createDemoData();\n\nconsteval DemoPtrData createDemoPtrData(const DemoData& dd) {\n DemoPtrData dpd{};\n for (std::size_t k{}; k < dpd.size(); ++k)\n dpd[k] = &dd[k];\n return dpd;\n}\nconstexpr DemoPtrData dpd = createDemoPtrData(demoData);\n// -----------------------------------------------------------------------------------------\n\n\nstruct Comp {bool operator () (const Data* d1, const Data* d2) const { return d1->i < d2->i; }};\nusing MySet = std::multiset<const Data*, Comp>;\n\nint main() {\n // Add some thousand pointers. Will be sorted according to struct member\n MySet mySet{ dpd.begin(), dpd.end() };\n\n // Extract a range of data. integer values between 42 and 52\n const Data* p42 = dpd[21];\n const Data* p52 = dpd[26];\n\n // Show result\n for (auto iptr = mySet.lower_bound(p42); iptr != mySet.upper_bound(p52); ++iptr)\n std::cout << (*iptr)->i << '\\n';\n\n // Insert a new element\n Data d1{ 47 };\n mySet.insert(&d1);\n\n // Show again\n std::cout << \"\\n\\n\";\n for (auto iptr = mySet.lower_bound(p42); iptr != mySet.upper_bound(p52); ++iptr)\n std::cout << (*iptr)->i << '\\n';\n}\n\n" ]
[ 2, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "algorithm", "c++", "sorting" ]
stackoverflow_0002952407_algorithm_c++_sorting.txt
Q: I keep getting name 'message' not defined in python even though I made it a global variable in the function and I'm calling it Code(python): import tkinter as tk root = tk.Tk() root.geometry("600x400") message_var2 = tk.StringVar() def page2(message): print(f'test\n{message}') def getInputtemp(): global message message = message_var2.get() message_var2.set("") message_entryi = tk.Entry(root, textvariable=message_var2, font=('calibre', 10, 'normal')) message_entryi.pack() save_btn2 = tk.Button(root, text='Send', command=getInputtemp) save_btn2.pack() if message in ['1886', '2022']: page2(message) root.mainloop() I want to use the variable 'message' outside of the function but it keeps giving me the not defined error Even though I made it a global variable and I'm calling the function before trying to use it I still get the error, Even though after making it global and calling it worked in the past with other things its not working here am I doing something wrong? Did I forget some small tiny detail? A: The issue you have here is that your function getInputtemp is not getting fired. It only gets fired when button save_btn2 is clicked. Also, the if statement where the error is occuring will only get fired once. To fix this, you can either do what @Tkirishima have suggested. Or just move the if statement inside the getInputtemp function. def getInputtemp(): #global message #Then you would no longer need message as a global variable message = message_var2.get() message_var2.set("") if message in ['1886', '2022']: page2(message) But, if you do want the if statement outside the function (which I wouldn't recommend as stated earlier that it would execute when you start the script and never again): getInputtemp() #The function is called to create message as global variable if message in ['1886', '2022']: page2(message) A: The problem that you have, is the usage of the global keyword. The thing is that, you use global even tho the value doesn't exists in the global scope. If you want your program to work, the best thing to do is to define message at the start of the program as None, with that process, message exists in the global scope with a value of None. ... root.geometry("600x400") message_var2 = tk.StringVar() message = None # <<<<<<<<<< def page2(message): print(f'test\n{message}') ... Related: https://www.programiz.com/python-programming/global-keyword https://en.wikipedia.org/wiki/Scope_(computer_science)
I keep getting name 'message' not defined in python even though I made it a global variable in the function and I'm calling it
Code(python): import tkinter as tk root = tk.Tk() root.geometry("600x400") message_var2 = tk.StringVar() def page2(message): print(f'test\n{message}') def getInputtemp(): global message message = message_var2.get() message_var2.set("") message_entryi = tk.Entry(root, textvariable=message_var2, font=('calibre', 10, 'normal')) message_entryi.pack() save_btn2 = tk.Button(root, text='Send', command=getInputtemp) save_btn2.pack() if message in ['1886', '2022']: page2(message) root.mainloop() I want to use the variable 'message' outside of the function but it keeps giving me the not defined error Even though I made it a global variable and I'm calling the function before trying to use it I still get the error, Even though after making it global and calling it worked in the past with other things its not working here am I doing something wrong? Did I forget some small tiny detail?
[ "The issue you have here is that your function getInputtemp is not getting fired. It only gets fired when button save_btn2 is clicked. Also, the if statement where the error is occuring will only get fired once. To fix this, you can either do what @Tkirishima have suggested.\nOr just move the if statement inside the getInputtemp function.\ndef getInputtemp():\n #global message \n #Then you would no longer need message as a global variable\n message = message_var2.get()\n message_var2.set(\"\")\n if message in ['1886', '2022']:\n page2(message)\n\nBut, if you do want the if statement outside the function (which I wouldn't recommend as stated earlier that it would execute when you start the script and never again):\ngetInputtemp() #The function is called to create message as global variable\nif message in ['1886', '2022']:\n page2(message)\n\n", "The problem that you have, is the usage of the global keyword.\nThe thing is that, you use global even tho the value doesn't exists in the global scope.\nIf you want your program to work, the best thing to do is to define message at the start of the program as None, with that process, message exists in the global scope with a value of None.\n...\nroot.geometry(\"600x400\")\nmessage_var2 = tk.StringVar()\nmessage = None # <<<<<<<<<<\n\ndef page2(message):\n print(f'test\\n{message}')\n...\n\nRelated:\n\nhttps://www.programiz.com/python-programming/global-keyword\nhttps://en.wikipedia.org/wiki/Scope_(computer_science)\n\n" ]
[ 2, 1 ]
[]
[]
[ "function", "global_variables", "python", "tkinter", "variables" ]
stackoverflow_0074657402_function_global_variables_python_tkinter_variables.txt
Q: Idiomatic way to drop Pandas DataFrame column in an idempotent fashion (without settings errors="ignore") Is there a more Pythonic or Pandas-idiomatic way to drop a DataFrame column without just setting errors="ignore"? Suppose I have the following DataFrame: import pandas as pd from pandas import DataFrame df_initial: DataFrame = pd.DataFrame([ { "country": "DE", "price": 1, "quantity": 10 } ]) If I am unsure about when exactly a function that drops a column might be called (I am thinking in the context of a Jupyter Notebook), is there a way to do this that isn't just ignoring errors (like below)? df_country_dropped = df_initial.drop("country", axis=1, errors="ignore") Perhaps I'm being too pernickety, but I had hoped that there would be a more Pythonic way to deal with this than just ignoring a KeyError. I realise it is possible to check for the existence of the column before dropping: def drop_country_if_exists(df): if "country" in df: return df.drop("country", axis=1) df_country_dropped = drop_country_if_exists(df_initial) But I was hoping there might be a more elegant way! A: If you're trying to avoid the overhead of creating an additional function, I'd say that list comprehensions are a Pythonic way of achieving what you need. An approach like this would be idempotent, and elegant enough: df[[col for col in df.columns if col != 'country']] The upside with this method is that it's easily extensible if you want to drop a list of columns, by changing the != to a not in and passing your list of column names.
Idiomatic way to drop Pandas DataFrame column in an idempotent fashion (without settings errors="ignore")
Is there a more Pythonic or Pandas-idiomatic way to drop a DataFrame column without just setting errors="ignore"? Suppose I have the following DataFrame: import pandas as pd from pandas import DataFrame df_initial: DataFrame = pd.DataFrame([ { "country": "DE", "price": 1, "quantity": 10 } ]) If I am unsure about when exactly a function that drops a column might be called (I am thinking in the context of a Jupyter Notebook), is there a way to do this that isn't just ignoring errors (like below)? df_country_dropped = df_initial.drop("country", axis=1, errors="ignore") Perhaps I'm being too pernickety, but I had hoped that there would be a more Pythonic way to deal with this than just ignoring a KeyError. I realise it is possible to check for the existence of the column before dropping: def drop_country_if_exists(df): if "country" in df: return df.drop("country", axis=1) df_country_dropped = drop_country_if_exists(df_initial) But I was hoping there might be a more elegant way!
[ "If you're trying to avoid the overhead of creating an additional function, I'd say that list comprehensions are a Pythonic way of achieving what you need.\nAn approach like this would be idempotent, and elegant enough:\ndf[[col for col in df.columns if col != 'country']]\n\nThe upside with this method is that it's easily extensible if you want to drop a list of columns, by changing the != to a not in and passing your list of column names.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074655572_dataframe_pandas_python.txt
Q: what is the table row border width not consistent but alternating? I want the border for each table row to appear solid 1px but for some reason it is alternating in the width. Why is the width altering and not being fixed or consistent? BTW, I am testing it on Chrome. table { border-collapse: collapse; } .compositeEventContainer table tbody tr { border-bottom: 1px solid; } <div class="compositeEventContainer"> <table> <tbody> <tr> <td>first</td> <td>1</td> <td>Test</td> </tr> <tr> <td>second</td> <td>2</td> <td>Test</td> </tr> <tr> <td>third</td> <td>3</td> <td>Test</td> </tr> <tr> <td>fourth</td> <td>4</td> <td>Test</td> </tr> <tr> <td>fifth</td> <td>5</td> <td>Test</td> </tr> <tr> <td>sixth</td> <td>6</td> <td>Test</td> </tr> </tbody> </table> </div> here is my view..i am using windows and browser is chrome. A: While this is not a direct answer to your problem, based on the comments these are the things you will need to try in order to troubleshoot on your end: Reset your Chrome settings by going to settings, and then "Show advanced settings...". and then scroll all the way to the button. Then click on "Reset settings" Uninstall/Re-install Chrome (but don't log in with your username if you are doing so to prevent synchronization of settings) Try this on a different computer with the same version of Chrome Check your encoding in Chrome, Options then More Tools, then Encoding With the many different possibilities out there, this issue appears to be localized to your computer. If it works in other browsers, it will be localized to chrome. However, you may have a system setting which is causing this. Try looking at the following: Advanced Display Settings, Monitor, Refresh rate Video card custom properties, such as NVidia Experience Panel, or AMD Control Center. Look at your DPI (Control Panel -> Appearance and Personalization -> Display). It will be under Set custom text size (DPI). Your scale might be off? I hope this leads you in the right direction to finding your issue. If any of these were the problem, please comment and let me know. A: This issue is super annoying, but it has something to do with the pixel scale on some operating systems. Essentially, the browser will attempt to round border widths to the nearest device pixel causing the appearance of inconsistent border widths. A super ugly solution is to use a border width less than 1px so that it rounds to the nearest device pixel correctly. tr { border-bottom: solid .5px #d3d3d3; }
what is the table row border width not consistent but alternating?
I want the border for each table row to appear solid 1px but for some reason it is alternating in the width. Why is the width altering and not being fixed or consistent? BTW, I am testing it on Chrome. table { border-collapse: collapse; } .compositeEventContainer table tbody tr { border-bottom: 1px solid; } <div class="compositeEventContainer"> <table> <tbody> <tr> <td>first</td> <td>1</td> <td>Test</td> </tr> <tr> <td>second</td> <td>2</td> <td>Test</td> </tr> <tr> <td>third</td> <td>3</td> <td>Test</td> </tr> <tr> <td>fourth</td> <td>4</td> <td>Test</td> </tr> <tr> <td>fifth</td> <td>5</td> <td>Test</td> </tr> <tr> <td>sixth</td> <td>6</td> <td>Test</td> </tr> </tbody> </table> </div> here is my view..i am using windows and browser is chrome.
[ "While this is not a direct answer to your problem, based on the comments these are the things you will need to try in order to troubleshoot on your end:\n\nReset your Chrome settings by going to settings, and then \"Show advanced settings...\". and then scroll all the way to the button. Then click on \"Reset settings\"\nUninstall/Re-install Chrome (but don't log in with your username if you are doing so to prevent synchronization of settings)\nTry this on a different computer with the same version of Chrome\nCheck your encoding in Chrome, Options then More Tools, then Encoding\n\nWith the many different possibilities out there, this issue appears to be localized to your computer. If it works in other browsers, it will be localized to chrome. However, you may have a system setting which is causing this.\nTry looking at the following:\n\nAdvanced Display Settings, Monitor, Refresh rate\nVideo card custom properties, such as NVidia Experience Panel, or AMD Control Center. \nLook at your DPI (Control Panel -> Appearance and Personalization -> Display). It will be under Set custom text size (DPI). Your scale might be off?\n\nI hope this leads you in the right direction to finding your issue. If any of these were the problem, please comment and let me know. \n", "This issue is super annoying, but it has something to do with the pixel scale on some operating systems. Essentially, the browser will attempt to round border widths to the nearest device pixel causing the appearance of inconsistent border widths. A super ugly solution is to use a border width less than 1px so that it rounds to the nearest device pixel correctly.\ntr { border-bottom: solid .5px #d3d3d3; }\n\n" ]
[ 2, 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0028592652_css_html.txt
Q: Customized Typescript Interface for Recharts Custom Tooltip I am trying to add some more data to my customized Tooltip other than the data available by default. Now, I want to write a type for such a tooltip, already seen one of the solutions which has not passed anything else other than the props which recharts chart internally passes to the custom tooltip. The solution I am referring to above: Typescript Interface for Recharts Custom Tooltip my code: <Tooltip content={ <CustomToolTipComponent detailsType={detailsType} selectedItemUnit={selectedItemUnit} /> } /> I am using this tooltip for a Line Chart. I tried searching for more solutions to it but can't find anything on recharts documentation and on any other platforms till now. The solution I mentioned above provides the inbuilt type for the collection of data passed to the Tooltip by default but doesn't give scope to add any other data type to it. A: You need to declare a new type which extends the TooltipProps type to include your inputs. import { TooltipProps } from 'recharts'; import { ValueType, NameType } from 'recharts/src/component/DefaultTooltipContent'; declare type MyTooltip<MyValue extends ValueType, MyName extends NameType> = TooltipProps<MyValue, MyName> & { detailsType: string; selectedItemUnit: object; }; const CustomTooltip = ({active, payload, label, detailsType, selectedItemUnit}: MyTooltip<ValueType, NameType>) => {};
Customized Typescript Interface for Recharts Custom Tooltip
I am trying to add some more data to my customized Tooltip other than the data available by default. Now, I want to write a type for such a tooltip, already seen one of the solutions which has not passed anything else other than the props which recharts chart internally passes to the custom tooltip. The solution I am referring to above: Typescript Interface for Recharts Custom Tooltip my code: <Tooltip content={ <CustomToolTipComponent detailsType={detailsType} selectedItemUnit={selectedItemUnit} /> } /> I am using this tooltip for a Line Chart. I tried searching for more solutions to it but can't find anything on recharts documentation and on any other platforms till now. The solution I mentioned above provides the inbuilt type for the collection of data passed to the Tooltip by default but doesn't give scope to add any other data type to it.
[ "You need to declare a new type which extends the TooltipProps type to include your inputs.\n\n\nimport { TooltipProps } from 'recharts';\nimport { ValueType, NameType } from 'recharts/src/component/DefaultTooltipContent';\n\ndeclare type MyTooltip<MyValue extends ValueType, MyName extends NameType> = \nTooltipProps<MyValue, MyName> & {\n detailsType: string;\n selectedItemUnit: object;\n};\n\nconst CustomTooltip = ({active, payload, label, detailsType, selectedItemUnit}: MyTooltip<ValueType, NameType>) => {};\n\n\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "reactjs", "recharts", "typescript" ]
stackoverflow_0071729339_javascript_reactjs_recharts_typescript.txt
Q: Which PR is responsible for a code line on GitHub? Within any accessible GitHub repo and a given code segment (e.g. a function), how is it possible to see which commit or PR has led to it? A: You can see a file with annotations of which commits contributed by hitting the blame button on a file view.
Which PR is responsible for a code line on GitHub?
Within any accessible GitHub repo and a given code segment (e.g. a function), how is it possible to see which commit or PR has led to it?
[ "You can see a file with annotations of which commits contributed by hitting the blame button on a file view.\n\n\n" ]
[ 1 ]
[]
[]
[ "github" ]
stackoverflow_0074657585_github.txt
Q: requestSingleInstanceLock returns error, preventing instance locking in ElectronJS I get the error ReferenceError: mainWindow is not defined when I try to request single instance lock in ElectronJS app. The documentation does not state anything regarding any requirement of this variable. However docs do show an example of the variable myWindow. What is going on here and how do I fix it? A: Here's the code I ended up writing which is safe on both Windows and MacOS: var fs = require('fs') module.exports = class SingleInstance { constructor(){ this.appName = app.name.split(' ').join('-').toLowerCase() this.tmpPath = app.getPath('temp') + '\\' + this.appName + '\\' if (process.platform === 'darwin') this.tmpPath = this.tmpPath.split('\\').join('//') if (!fs.existsSync(this.tmpPath)) fs.mkdirSync(this.tmpPath) this.lockFilePath = this.tmpPath + 'lock.txt' if (this.isSecondInstance()){ process.exit() } else { this.updateLockFile() } } updateLockFile(){ fs.writeFileSync(this.lockFilePath, process.pid.toString()) } isSecondInstance(){ try { var pid = fs.readFileSync(this.lockFilePath).toString() try { var isAlreadyRunning = process.kill(pid, 0) //app is the second instance. terminate. return true } catch(err) { } } catch(err) { } } }
requestSingleInstanceLock returns error, preventing instance locking in ElectronJS
I get the error ReferenceError: mainWindow is not defined when I try to request single instance lock in ElectronJS app. The documentation does not state anything regarding any requirement of this variable. However docs do show an example of the variable myWindow. What is going on here and how do I fix it?
[ "Here's the code I ended up writing which is safe on both Windows and MacOS:\nvar fs = require('fs')\n\nmodule.exports = class SingleInstance {\n constructor(){\n this.appName = app.name.split(' ').join('-').toLowerCase()\n\n this.tmpPath = app.getPath('temp') + '\\\\' + this.appName + '\\\\'\n\n if (process.platform === 'darwin') this.tmpPath = this.tmpPath.split('\\\\').join('//')\n\n if (!fs.existsSync(this.tmpPath)) fs.mkdirSync(this.tmpPath)\n\n this.lockFilePath = this.tmpPath + 'lock.txt'\n\n if (this.isSecondInstance()){\n process.exit()\n } else {\n this.updateLockFile()\n }\n }\n\n updateLockFile(){\n fs.writeFileSync(this.lockFilePath, process.pid.toString())\n }\n \n isSecondInstance(){\n try {\n var pid = fs.readFileSync(this.lockFilePath).toString()\n\n try {\n var isAlreadyRunning = process.kill(pid, 0)\n //app is the second instance. terminate.\n return true\n } catch(err) { }\n } catch(err) { }\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "electron", "javascript" ]
stackoverflow_0074578176_electron_javascript.txt
Q: Console.log returning undefined on ngOninit, but returning data on function I have this problem, i want to print whatever the data is in userdata variable. but I only get an undefined text when making a console.log(). but when I place this console log inside a function and call it with a button it throws the actual data. I tried to put the console.log inside a function and call it at the end of ngoinint but this does not work. my code public userData!: any; ngOnInit(): void { this.getUserData(this.userId); console.log(this.userData) //this prints undefined } getUserData(id: number) { this.userService.getUserData(id).subscribe((response) => { this.userData = response.data; this.userCategory = response.data.UsersCategory; }); } this.printData(){ // this only works if i press a button to call it // but does not work if I call this function inside ngOnInit console.log(this.userData); } A: Your described goal i want to print whatever the data is in userdata variable. The moment you log userData is at the ngOnInit. At this time (lifecycle hook) the value has not been set yet. What could cause userData to be set? userData is only set at only place in your code - inside the subscription that is within the getUserData function. Requirements to receive a value getUserData has to be called. That will create a subscription this.userService.getUserData(id) has to emit at least one value Your code does neither show a getUserData call nor a observable emit this.userService.getUserData(id) is shown. Misconcept Your code looks a lot like synchronuous imperative. Although RxJS is reactive and handles asynchronuous values for you. I think you have a missunderstanding of how RxJS is intended to be used. Some tips that might help Try to subscribe as early as possible to observables ngOnInit() { observable$.subscribe(value => userData = value) } If your observables dont cover all informations and events > create one with that fits your needs private readonly getUserData$$ = new Subject() getUserData(id: number) { this.getUserData$$.next(id) } Use the power of rxjs operators to create a pipe that fits your needs private readonly getUserData$$ = new Subject() public userData: any; ngOnInit() { getUserData$$.pipe( switchMap(id => this.userService.getUserData(id) ).subscribe(response => this.userData = response.data) } getUserData(id: number) { this.getUserData$$.next(id) }
Console.log returning undefined on ngOninit, but returning data on function
I have this problem, i want to print whatever the data is in userdata variable. but I only get an undefined text when making a console.log(). but when I place this console log inside a function and call it with a button it throws the actual data. I tried to put the console.log inside a function and call it at the end of ngoinint but this does not work. my code public userData!: any; ngOnInit(): void { this.getUserData(this.userId); console.log(this.userData) //this prints undefined } getUserData(id: number) { this.userService.getUserData(id).subscribe((response) => { this.userData = response.data; this.userCategory = response.data.UsersCategory; }); } this.printData(){ // this only works if i press a button to call it // but does not work if I call this function inside ngOnInit console.log(this.userData); }
[ "Your described goal\ni want to print whatever the data is in userdata variable.\n\nThe moment you log userData is at the ngOnInit. At this time (lifecycle hook) the value has not been set yet.\n\nWhat could cause userData to be set?\n\nuserData is only set at only place in your code - inside the subscription that is within the getUserData function.\n\nRequirements to receive a value\n\ngetUserData has to be called. That will create a subscription\nthis.userService.getUserData(id) has to emit at least one value\n\nYour code does neither show a getUserData call nor a observable emit this.userService.getUserData(id) is shown.\nMisconcept\nYour code looks a lot like synchronuous imperative. Although RxJS is reactive and handles asynchronuous values for you. I think you have a missunderstanding of how RxJS is intended to be used. Some tips that might help\n\nTry to subscribe as early as possible to observables\n\nngOnInit() {\n observable$.subscribe(value => userData = value)\n}\n\n\nIf your observables dont cover all informations and events > create one with that fits your needs\n\nprivate readonly getUserData$$ = new Subject()\n\ngetUserData(id: number) {\n this.getUserData$$.next(id)\n}\n\n\nUse the power of rxjs operators to create a pipe that fits your needs\n\nprivate readonly getUserData$$ = new Subject()\n\npublic userData: any;\n\nngOnInit() {\n getUserData$$.pipe(\n switchMap(id => this.userService.getUserData(id)\n ).subscribe(response => this.userData = response.data)\n}\n\ngetUserData(id: number) {\n this.getUserData$$.next(id)\n}\n\n" ]
[ 1 ]
[]
[]
[ "angular", "rxjs" ]
stackoverflow_0074553717_angular_rxjs.txt
Q: Kotlin. How to map only non null values of list? I need to make some operations with my list. For example I have list of TestData: data class TestData ( val value: Int?, val name: String ) I need to map list of TestData to list of String. Here is my code: val names = listOfTestData .map { data -> getName(data.value) } <- Type mismatch. Required: Int, found Int? .distinct() The problem is that the function getName(value: Int) accepts only a non nullable type. Can I somehow skip elements from listOfTestData whose value is null ? I could filter the values before making a map, but I'll have to use inside the map !!, I'd like a more elegant solution. val names = listOfTestData .filter { it.value != null } .map { data -> getName(data.value!!) } .distinct() Please tell me how can this be done without using !! A: Instead of filter, you can mapNotNull to the values. This is basically a map, but if the mapping function you provide returns null, the element is filtered out. listOfTestData.mapNotNull { it.value } .map { getName(it) } .distinct() This will go over the list 3 times. You can combine the mapNotNull and map by using a ?.let on the value: listOfTestData.mapNotNull { data -> data.value?.let { getName(it) } }.distinct() Alternatively, use a sequence: listOfTestData.asSequence() .mapNotNull { it.value } .map { getName(it) } .distinct() .toList()
Kotlin. How to map only non null values of list?
I need to make some operations with my list. For example I have list of TestData: data class TestData ( val value: Int?, val name: String ) I need to map list of TestData to list of String. Here is my code: val names = listOfTestData .map { data -> getName(data.value) } <- Type mismatch. Required: Int, found Int? .distinct() The problem is that the function getName(value: Int) accepts only a non nullable type. Can I somehow skip elements from listOfTestData whose value is null ? I could filter the values before making a map, but I'll have to use inside the map !!, I'd like a more elegant solution. val names = listOfTestData .filter { it.value != null } .map { data -> getName(data.value!!) } .distinct() Please tell me how can this be done without using !!
[ "Instead of filter, you can mapNotNull to the values. This is basically a map, but if the mapping function you provide returns null, the element is filtered out.\nlistOfTestData.mapNotNull { it.value }\n .map { getName(it) }\n .distinct()\n\nThis will go over the list 3 times. You can combine the mapNotNull and map by using a ?.let on the value:\nlistOfTestData.mapNotNull { data -> data.value?.let { getName(it) } }.distinct()\n\nAlternatively, use a sequence:\nlistOfTestData.asSequence()\n .mapNotNull { it.value }\n .map { getName(it) }\n .distinct()\n .toList()\n\n" ]
[ 4 ]
[]
[]
[ "arraylist", "collections", "kotlin", "kotlin_extension", "kotlin_null_safety" ]
stackoverflow_0074657516_arraylist_collections_kotlin_kotlin_extension_kotlin_null_safety.txt
Q: Share data between task slots in Flink JVM memory I have 5 different jobs running in 5 task slots. They all read from Kafka and sink back to Kafka. Kafka load is about 200K messages/sec. I have another job, lets say ,job6 which needs to get some information from these 5 jobs. For each device we make some calculations in those 5 jobs, and according the results of this calculations, in the 6. task I need to do something more. As a first solution, I used sideOutputs in these 5 jobs and sent these additional info to an Kafka topic. Then my 6. job subscribed to it. But as the workload on Kafka was already very high, this solution doubled the workload on Kafka. As all task slots run in the same task manager JVM, what I have in my mind is , developing custom RichSink and RichSource functions which use same static/singleton java object. As it will be static, I beleive all tasks will have access to same object. This object will keep a queue (java BlockingQueue).Instead of feeding data to Kafka, I will feed this queue in all tasks and 6.task will process the data received from this queue. Please let me know if this is a good idea for a big distributed system. I assume clusters will not be a problem because after reading data from shared queue, I will call keyBy() so I hope Flink will handle that part. Also please let me know dangereous points and tips if you have. A: You essentially have an in-memory data store for bridging between two jobs. One of several issues here is that if the Task Manager crashes, you lose this data, thus eliminating one of the key benefits of Flink (guaranteed at-least-once or exactly-once processing). Depending on the version of Flink you're using, I wonder if Flink's new Table Store would be an option for you.
Share data between task slots in Flink JVM memory
I have 5 different jobs running in 5 task slots. They all read from Kafka and sink back to Kafka. Kafka load is about 200K messages/sec. I have another job, lets say ,job6 which needs to get some information from these 5 jobs. For each device we make some calculations in those 5 jobs, and according the results of this calculations, in the 6. task I need to do something more. As a first solution, I used sideOutputs in these 5 jobs and sent these additional info to an Kafka topic. Then my 6. job subscribed to it. But as the workload on Kafka was already very high, this solution doubled the workload on Kafka. As all task slots run in the same task manager JVM, what I have in my mind is , developing custom RichSink and RichSource functions which use same static/singleton java object. As it will be static, I beleive all tasks will have access to same object. This object will keep a queue (java BlockingQueue).Instead of feeding data to Kafka, I will feed this queue in all tasks and 6.task will process the data received from this queue. Please let me know if this is a good idea for a big distributed system. I assume clusters will not be a problem because after reading data from shared queue, I will call keyBy() so I hope Flink will handle that part. Also please let me know dangereous points and tips if you have.
[ "You essentially have an in-memory data store for bridging between two jobs. One of several issues here is that if the Task Manager crashes, you lose this data, thus eliminating one of the key benefits of Flink (guaranteed at-least-once or exactly-once processing).\nDepending on the version of Flink you're using, I wonder if Flink's new Table Store would be an option for you.\n" ]
[ 1 ]
[]
[]
[ "apache_flink", "flink_streaming" ]
stackoverflow_0074654060_apache_flink_flink_streaming.txt
Q: Python AND operator on two boolean lists - how? I have two boolean lists, e.g., x=[True,True,False,False] y=[True,False,True,False] I want to AND these lists together, with the expected output: xy=[True,False,False,False] I thought that expression x and y would work, but came to discover that it does not: in fact, (x and y) != (y and x) Output of x and y: [True,False,True,False] Output of y and x: [True,True,False,False] Using list comprehension does have correct output. Whew! xy = [x[i] and y[i] for i in range(len(x)] Mind you I could not find any reference that told me the AND operator would work as I tried with x and y. But it's easy to try things in Python. Can someone explain to me what is happening with x and y? And here is a simple test program: import random random.seed() n = 10 x = [random.random() > 0.5 for i in range(n)] y = [random.random() > 0.5 for i in range(n)] # Next two methods look sensible, but do not work a = x and y z = y and x # Next: apparently only the list comprehension method is correct xy = [x[i] and y[i] for i in range(n)] print 'x : %s'%str(x) print 'y : %s'%str(y) print 'x and y : %s'%str(a) print 'y and x : %s'%str(z) print '[x and y]: %s'%str(xy) A: and simply returns either the first or the second operand, based on their truth value. If the first operand is considered false, it is returned, otherwise the other operand is returned. Lists are considered true when not empty, so both lists are considered true. Their contents don't play a role here. Because both lists are not empty, x and y simply returns the second list object; only if x was empty would it be returned instead: >>> [True, False] and ['foo', 'bar'] ['foo', 'bar'] >>> [] and ['foo', 'bar'] [] See the Truth value testing section in the Python documentation: Any object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false: [...] any empty sequence, for example, '', (), []. [...] All other values are considered true — so objects of many types are always true. (emphasis mine), and the Boolean operations section right below that: x and y if x is false, then x, else y This is a short-circuit operator, so it only evaluates the second argument if the first one is True. You indeed need to test the values contained in the lists explicitly. You can do so with a list comprehension, as you discovered. You can rewrite it with the zip() function to pair up the values: [a and b for a, b in zip(x, y)] A: You could use numpy: >>> import numpy as np >>> x=np.array([True,True,False,False]) >>> y=np.array([True,False,True,False]) >>> x & y array([ True, False, False, False], dtype=bool) Numpy allows numerical and logical operations on arrays such as: >>> z=np.array([1,2,3,4]) >>> z+1 array([2, 3, 4, 5]) You can perform bitwise and with the & operator. Instead of a list comprehension, you can use numpy to generate the boolean array directly like so: >>> np.random.random(10)>.5 array([ True, True, True, False, False, True, True, False, False, False], dtype=bool) A: Here is a simple solution: np.logical_and(x,y) A: and is not necessarily a Boolean operator; it returns one of its two arguments, regardless of their type. If the first argument is false-ish (False, numeric zero, or an empty string/container), it returns that argument. Otherwise, it returns the second argument. In your case, both x and y are non-empty lists, so the first argument is always true-ish, meaning x and y returns y and y and x returns x. A: This should do what you want: xy = [a and b for a, b in zip(x, y)] The reason x and y returns y and y and x returns x is because boolean operators in python return the last value checked that determines the true-ness of the expression. Non-empty list's evaluate to True, and since and requires both operands to evaluate True, the last operand checked is the second operand. Contrast with x or y, which would return x because it doesn't need to check y to determine the true-ness of the expression. A: To generalize on the zip approach, use all and any for any number of lists. all for AND: [all(i) for i in zip(a, b, c)] # zip all lists and any for OR: [any(i) for i in zip(a, b, c)] A: Instead of using [a and b for a, b in zip(x, y)] one could just use the possibility of numpy to multiply bool-values: (np.array(x)*np.array(y)) >> array([ True, False, False, False], dtype=bool) Or do I overlook a special case? A: You can use the zip function x=[True,True,False,False] y=[True,False,True,False] z=[a and b for a,b in zip(x,y)] A: In addition to what @Martijn Pieters has answered, I would just add the following code to explain and and or operations in action. and returns the first falsy value encountered else the last evaluated argument. Similarly or returns the first truthy value encountered else the last evaluated argument. nl1 = [3,3,3,3,0,0,0,0] nl2 = [2,2,0,0,2,2,0,0] nl3 = [1,0,1,0,1,0,1,0] and_list = [a and b and c for a,b,c in zip(nl1,nl2,nl3)] or_list = [a or b or c for a,b,c in zip(nl1,nl2,nl3)] Values are and_list = [1, 0, 0, 0, 0, 0, 0, 0] or_list = [3, 3, 3, 3, 2, 2, 1, 0] A: Thanks for the answer @Martijn Pieters and @Tony. I dig into the timing of the various options we have to make the AND of two lists and I would like to share my results, because I found them interesting. Despite liking a lot the pythonic way [a and b for a,b in zip(x,y) ], turns out really slow. I compare with a integer product of arrays (1*(array of bool)) * (1*(array of bool)) and it turns out to be more than 10x faster import time import numpy as np array_to_filter = np.linspace(1,1000000,1000000) # 1 million of integers :-) value_limit = 100 cycles = 100 # METHOD #1: [a and b for a,b in zip(x,y) ] t0=time.clock() for jj in range(cycles): x = array_to_filter<np.max(array_to_filter)-value_limit # filter the values > MAX-value_limit y = array_to_filter>value_limit # filter the values < value_limit z= [a and b for a,b in zip(x,y) ] # AND filtered = array_to_filter[z] print('METHOD #1 = %.2f s' % ( (time.clock()-t0))) # METHOD 1*(array of bool) AND 1*(array of bool) t0=time.clock() for jj in range(cycles): x = 1*(array_to_filter<np.max(array_to_filter)-value_limit) # filter the values > MAX-value_limit y = 1*(array_to_filter>value_limit) # filter the values < value_limit z = x*y # AND z = z.astype(bool) # convert back to array of bool filtered = array_to_filter[z] print('METHOD #2 = %.2f s' % ( (time.clock()-t0))) The results are METHOD #1 = 15.36 s METHOD #2 = 1.85 s The speed is almost affected equally by the size of the array or by the number of cycles. I hope I helped someone code to be faster. :-)
Python AND operator on two boolean lists - how?
I have two boolean lists, e.g., x=[True,True,False,False] y=[True,False,True,False] I want to AND these lists together, with the expected output: xy=[True,False,False,False] I thought that expression x and y would work, but came to discover that it does not: in fact, (x and y) != (y and x) Output of x and y: [True,False,True,False] Output of y and x: [True,True,False,False] Using list comprehension does have correct output. Whew! xy = [x[i] and y[i] for i in range(len(x)] Mind you I could not find any reference that told me the AND operator would work as I tried with x and y. But it's easy to try things in Python. Can someone explain to me what is happening with x and y? And here is a simple test program: import random random.seed() n = 10 x = [random.random() > 0.5 for i in range(n)] y = [random.random() > 0.5 for i in range(n)] # Next two methods look sensible, but do not work a = x and y z = y and x # Next: apparently only the list comprehension method is correct xy = [x[i] and y[i] for i in range(n)] print 'x : %s'%str(x) print 'y : %s'%str(y) print 'x and y : %s'%str(a) print 'y and x : %s'%str(z) print '[x and y]: %s'%str(xy)
[ "and simply returns either the first or the second operand, based on their truth value. If the first operand is considered false, it is returned, otherwise the other operand is returned.\nLists are considered true when not empty, so both lists are considered true. Their contents don't play a role here.\nBecause both lists are not empty, x and y simply returns the second list object; only if x was empty would it be returned instead:\n>>> [True, False] and ['foo', 'bar']\n['foo', 'bar']\n>>> [] and ['foo', 'bar']\n[]\n\nSee the Truth value testing section in the Python documentation:\n\nAny object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false:\n[...]\n\nany empty sequence, for example, '', (), [].\n\n[...]\nAll other values are considered true — so objects of many types are always true.\n\n(emphasis mine), and the Boolean operations section right below that:\n\nx and y\n if x is false, then x, else y\nThis is a short-circuit operator, so it only evaluates the second argument if the first one is True.\n\nYou indeed need to test the values contained in the lists explicitly. You can do so with a list comprehension, as you discovered. You can rewrite it with the zip() function to pair up the values:\n[a and b for a, b in zip(x, y)]\n\n", "You could use numpy:\n>>> import numpy as np\n>>> x=np.array([True,True,False,False])\n>>> y=np.array([True,False,True,False])\n>>> x & y\narray([ True, False, False, False], dtype=bool)\n\nNumpy allows numerical and logical operations on arrays such as:\n>>> z=np.array([1,2,3,4])\n>>> z+1\narray([2, 3, 4, 5])\n\nYou can perform bitwise and with the & operator.\nInstead of a list comprehension, you can use numpy to generate the boolean array directly like so:\n>>> np.random.random(10)>.5\narray([ True, True, True, False, False, True, True, False, False, False], dtype=bool)\n\n", "Here is a simple solution:\nnp.logical_and(x,y)\n\n", "and is not necessarily a Boolean operator; it returns one of its two arguments, regardless of their type. If the first argument is false-ish (False, numeric zero, or an empty string/container), it returns that argument. Otherwise, it returns the second argument.\nIn your case, both x and y are non-empty lists, so the first argument is always true-ish, meaning x and y returns y and y and x returns x.\n", "This should do what you want:\nxy = [a and b for a, b in zip(x, y)]\n\nThe reason x and y returns y and y and x returns x is because boolean operators in python return the last value checked that determines the true-ness of the expression. Non-empty list's evaluate to True, and since and requires both operands to evaluate True, the last operand checked is the second operand. Contrast with x or y, which would return x because it doesn't need to check y to determine the true-ness of the expression.\n", "To generalize on the zip approach, use all and any for any number of lists.\nall for AND:\n[all(i) for i in zip(a, b, c)] # zip all lists\n\nand any for OR:\n[any(i) for i in zip(a, b, c)]\n\n", "Instead of using\n[a and b for a, b in zip(x, y)]\n\none could just use the possibility of numpy to multiply bool-values:\n(np.array(x)*np.array(y))\n>> array([ True, False, False, False], dtype=bool)\n\nOr do I overlook a special case?\n", "You can use the zip function\nx=[True,True,False,False]\ny=[True,False,True,False]\nz=[a and b for a,b in zip(x,y)]\n\n", "In addition to what @Martijn Pieters has answered, I would just add the following code to explain and and or operations in action.\nand returns the first falsy value encountered else the last evaluated argument. \nSimilarly or returns the first truthy value encountered else the last evaluated argument. \nnl1 = [3,3,3,3,0,0,0,0]\nnl2 = [2,2,0,0,2,2,0,0]\nnl3 = [1,0,1,0,1,0,1,0]\nand_list = [a and b and c for a,b,c in zip(nl1,nl2,nl3)]\nor_list = [a or b or c for a,b,c in zip(nl1,nl2,nl3)]\n\nValues are\nand_list = [1, 0, 0, 0, 0, 0, 0, 0]\nor_list = [3, 3, 3, 3, 2, 2, 1, 0]\n", "Thanks for the answer @Martijn Pieters and @Tony.\nI dig into the timing of the various options we have to make the AND of two lists and I would like to share my results, because I found them interesting.\nDespite liking a lot the pythonic way [a and b for a,b in zip(x,y) ], turns out really slow.\nI compare with a integer product of arrays (1*(array of bool)) * (1*(array of bool)) and it turns out to be more than 10x faster\nimport time\nimport numpy as np\narray_to_filter = np.linspace(1,1000000,1000000) # 1 million of integers :-)\nvalue_limit = 100\ncycles = 100\n\n# METHOD #1: [a and b for a,b in zip(x,y) ]\nt0=time.clock()\nfor jj in range(cycles):\n x = array_to_filter<np.max(array_to_filter)-value_limit # filter the values > MAX-value_limit\n y = array_to_filter>value_limit # filter the values < value_limit\n z= [a and b for a,b in zip(x,y) ] # AND\n filtered = array_to_filter[z]\nprint('METHOD #1 = %.2f s' % ( (time.clock()-t0)))\n\n\n\n# METHOD 1*(array of bool) AND 1*(array of bool)\nt0=time.clock()\nfor jj in range(cycles):\n x = 1*(array_to_filter<np.max(array_to_filter)-value_limit) # filter the values > MAX-value_limit\n y = 1*(array_to_filter>value_limit) # filter the values < value_limit\n z = x*y # AND\n z = z.astype(bool) # convert back to array of bool\n filtered = array_to_filter[z]\nprint('METHOD #2 = %.2f s' % ( (time.clock()-t0)))\n\nThe results are\nMETHOD #1 = 15.36 s\nMETHOD #2 = 1.85 s\n\nThe speed is almost affected equally by the size of the array or by the number of cycles.\nI hope I helped someone code to be faster. :-)\n" ]
[ 75, 20, 12, 7, 4, 2, 1, 0, 0, 0 ]
[ "The following works for me:\n([True,False,True]) and ([False,False,True])\n\noutput:\n[False, False, True]\n\n" ]
[ -1 ]
[ "boolean", "list", "operator_keyword", "python" ]
stackoverflow_0032192163_boolean_list_operator_keyword_python.txt
Q: std::fill crash with a empty std::string I cannot understand why this code segfault. I allocate enough space for n element of a giving type with an allocator, and then fill the space with a copy of the default constructed type with std::fill. #define TESTED_TYPE std::string size_t n = 5; std::allocator<TESTED_TYPE> my_alloc; TESTED_TYPE *data = my_alloc.allocate(n); TESTED_TYPE val = TESTED_TYPE(); std::fill(data, data + n, val); This code compile fine and doesnt crash with basic type like int, char etc... but not with std::string. If I give to std::fill a not empty string the code doesnt segfault either. Why ? A: You never invoked the std::string constructor, so the memory pointed to by data is uninitialized garbage. std::fill is (probably) implemented a loop that does *ptr = value; for every element in the range you pass. std::string::operator= must make sure that any memory it was previously holding on to is freed, so it calls delete[] on its internal buffer. Since this buffer is just an arbitrary address, this will very often access unmapped memory and crash. To fix this, you need to also invoke the placement new operator on each of your freshly allocated array elements so that their internal state is consistent and safe to use.
std::fill crash with a empty std::string
I cannot understand why this code segfault. I allocate enough space for n element of a giving type with an allocator, and then fill the space with a copy of the default constructed type with std::fill. #define TESTED_TYPE std::string size_t n = 5; std::allocator<TESTED_TYPE> my_alloc; TESTED_TYPE *data = my_alloc.allocate(n); TESTED_TYPE val = TESTED_TYPE(); std::fill(data, data + n, val); This code compile fine and doesnt crash with basic type like int, char etc... but not with std::string. If I give to std::fill a not empty string the code doesnt segfault either. Why ?
[ "You never invoked the std::string constructor, so the memory pointed to by data is uninitialized garbage.\nstd::fill is (probably) implemented a loop that does *ptr = value; for every element in the range you pass. std::string::operator= must make sure that any memory it was previously holding on to is freed, so it calls delete[] on its internal buffer. Since this buffer is just an arbitrary address, this will very often access unmapped memory and crash.\nTo fix this, you need to also invoke the placement new operator on each of your freshly allocated array elements so that their internal state is consistent and safe to use.\n" ]
[ 2 ]
[]
[]
[ "allocator", "c++" ]
stackoverflow_0074657527_allocator_c++.txt
Q: Reading files with fgets() I have a doubt about reading files in C using fgets(). I've seen people use loops in order to do this, but I skip the loop part, doing this instead. What's the difference between using a loop and my way? #include <stdio.h> #include <stdlib.h> int main() { FILE *file = NULL; char string[30]; file = fopen("test.txt", "r"); //test.txt contains "Hello world!" if (file == NULL) { puts("ERROR"); return 1; } fgets(string, 30, file); puts(string); fclose(file); return 0; } Outputs: Hello world! A: From the man page: The fgets() function shall read bytes from stream into the array pointed to by s until n-1 bytes are read, or a is read and transferred to s, or an end-of-file condition is encountered. A null byte shall be written immediately after the last bytes read into the array. According to this, fgets stops reading when it encounters a newline, an EOF condition, an input error, or when it has read n - 1 characters. So your approach only reads one line from the file. That's well and good if you need to read only a single line. To read a whole file line by line, fgets is called in a loop until an EOF condition is reached. Another way would be to read the whole file into a buffer with fread, and then parse it. Or read it character by character by calling getc in a loop. EDIT: In your code, fgets is trying to read (n - 1) 29 bytes of memory whereas you allocated only 10 bytes for the buffer. This leads to undefined behaviour. The memory not allocated should not be read. Use sizeof (string) instead. "Hello World!" can not fit in a buffer you allocated 10 bytes for. RETURN VALUE: Upon successful completion, fgets() shall return s. If the stream is at end-of-file, the end-of-file indicator for the stream shall be set and fgets() shall return a null pointer. If a read error occurs, the error indicator for the stream shall be set, fgets() shall return a null pointer, and shall set errno to indicate the error. You didn't check the return value of fgets. A: What's the difference between using a loop and my way? OP's way has many problems. Wrong buffer size fgets(string, 30, file); overstates the buffer size of 10 allowing undefined behavior (UB) due to a potential buffer overflow. Input result not checked fgets(string, 30, file); does not check the return value of fgets(). Until the return value is checked, the contents of string are not known to be updated correctly. Extra '\n' puts(string); appends an extra '\n'. The entire file is not certainly read A single read might not read the entire contents. Use a loop. Alternative: read until fgets() returns NULL. while (fgets(string, sizeof string, file)) { fputs(string, stdout); }
Reading files with fgets()
I have a doubt about reading files in C using fgets(). I've seen people use loops in order to do this, but I skip the loop part, doing this instead. What's the difference between using a loop and my way? #include <stdio.h> #include <stdlib.h> int main() { FILE *file = NULL; char string[30]; file = fopen("test.txt", "r"); //test.txt contains "Hello world!" if (file == NULL) { puts("ERROR"); return 1; } fgets(string, 30, file); puts(string); fclose(file); return 0; } Outputs: Hello world!
[ "From the man page:\n\nThe fgets() function shall read bytes from stream into the array pointed to by s until n-1 bytes are read, or a is read and transferred to s, or an end-of-file condition is encountered. A null byte shall be written immediately after the last bytes read into the array.\n\nAccording to this, fgets stops reading when it encounters a newline, an EOF condition, an input error, or when it has read n - 1 characters. So your approach only reads one line from the file. That's well and good if you need to read only a single line.\nTo read a whole file line by line, fgets is called in a loop until an EOF condition is reached. Another way would be to read the whole file into a buffer with fread, and then parse it.\nOr read it character by character by calling getc in a loop.\nEDIT: In your code, fgets is trying to read (n - 1) 29 bytes of memory whereas you allocated only 10 bytes for the buffer. This leads to undefined behaviour. The memory not allocated should not be read. Use sizeof (string) instead.\n\"Hello World!\" can not fit in a buffer you allocated 10 bytes for.\nRETURN VALUE:\n\nUpon successful completion, fgets() shall return s. If the stream is at end-of-file, the end-of-file indicator for the stream shall be set and fgets() shall return a null pointer. If a read error occurs, the error indicator for the stream shall be set, fgets() shall return a null pointer, and shall set errno to indicate the error.\n\nYou didn't check the return value of fgets.\n", "\nWhat's the difference between using a loop and my way?\n\nOP's way has many problems.\nWrong buffer size\nfgets(string, 30, file); overstates the buffer size of 10 allowing undefined behavior (UB) due to a potential buffer overflow.\nInput result not checked\nfgets(string, 30, file); does not check the return value of fgets().\nUntil the return value is checked, the contents of string are not known to be updated correctly.\nExtra '\\n'\nputs(string); appends an extra '\\n'.\nThe entire file is not certainly read\nA single read might not read the entire contents. Use a loop.\n\nAlternative: read until fgets() returns NULL.\nwhile (fgets(string, sizeof string, file)) {\n fputs(string, stdout);\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "c", "file" ]
stackoverflow_0074657348_c_file.txt
Q: How to use regexp_replace in hive to remove strings I have a table as: column1 -> 101#1,102#2,103#3,104#4 I am trying to remove strings (101#,102#,103#,104#). The expected output is column2 -> 1,2,3,4 I am trying to do using regexp_replace any help would be highly appreciated A: It seems silly but, you have to break the string into an array, then transform (run a function) on each element, and finally concat the array back into a string. select concat_ws( ',' , transform ( split('101#1,102#2,103#3,104#4',','), x -> regexp_replace( x, '.*#','' )))
How to use regexp_replace in hive to remove strings
I have a table as: column1 -> 101#1,102#2,103#3,104#4 I am trying to remove strings (101#,102#,103#,104#). The expected output is column2 -> 1,2,3,4 I am trying to do using regexp_replace any help would be highly appreciated
[ "It seems silly but, you have to break the string into an array, then transform (run a function) on each element, and finally concat the array back into a string.\nselect concat_ws( ',' , transform ( split('101#1,102#2,103#3,104#4',','), x -> regexp_replace( x, '.*#','' ))) \n\n" ]
[ 0 ]
[]
[]
[ "hive" ]
stackoverflow_0074616397_hive.txt
Q: Remove Flutter HTML widget's default padding There is a default padding in flutter_html already when trying to parse text. Below is the difference between using HTML(data: ...) and Text(...) widgets. Top: HTML Widget, Bottom: Text Widget How can I remove the horizontal padding? A: I think HTML or body is having a default padding and/or margin. Try adding "body": Style(margin: EdgeInsets.zero, padding: EdgeInsets.zero, in the style parameter. A: Html has updated, so I update accepted answer: "body": Style( padding: EdgeInsets.zero, margin: Margins( bottom: Margin.zero(), left: Margin.zero(), top: Margin.zero(), right: Margin.zero(), ), ), A: Add 'body': Style(margin: Margins.zero) inside the style parameter curly brackets.
Remove Flutter HTML widget's default padding
There is a default padding in flutter_html already when trying to parse text. Below is the difference between using HTML(data: ...) and Text(...) widgets. Top: HTML Widget, Bottom: Text Widget How can I remove the horizontal padding?
[ "I think HTML or body is having a default padding and/or margin.\nTry adding \"body\": Style(margin: EdgeInsets.zero, padding: EdgeInsets.zero, in the style parameter.\n", "Html has updated, so I update accepted answer:\n \"body\": Style(\n padding: EdgeInsets.zero,\n margin: Margins(\n bottom: Margin.zero(),\n left: Margin.zero(),\n top: Margin.zero(),\n right: Margin.zero(),\n ),\n ),\n\n", "Add 'body': Style(margin: Margins.zero) inside the style parameter curly brackets.\n" ]
[ 28, 1, 0 ]
[]
[]
[ "dart", "flutter", "flutter_html", "flutter_layout" ]
stackoverflow_0069495619_dart_flutter_flutter_html_flutter_layout.txt
Q: next-i18next serverSideTranslations causes _app.js to re-render when NextJS route changes export async function getServerSideProps({ locale }) { const data = await serverSideTranslations(locale, ['apple', 'home']); return { props: data, }; } export default function IndexPage() { return <h1>Hi!</h1> } I noticed that if you pass data to props, then when you change routes in NextJS _app.js is re-rendered causing flicker (give a white backlight), but if you don’t pass data to props, then everything works fine. How to remove flicker? This error is seen in both production and development environments. next-i18next.config.js module.exports = { i18n: { defaultLocale: 'fr', locales: ['fr', 'en'], }, reloadOnPrerender: process.env.NODE_ENV === 'development', }; next.config.js const { i18n } = require('./next-i18next.config'); /** @type {import('next').NextConfig} */ const nextConfig = { i18n, reactStrictMode: true, swcMinify: true, poweredByHeader: false, async rewrites() { return [ { source: '/api/:path*', destination: 'http://localhost:8080/:path*', }, ]; }, }; module.exports = nextConfig; A: It turns out that if you open a route where there is no serverSideTranslations, then this leads to resetting the states and re-rendering _app.js. Solution: It is necessary to indicate on each page serverSideTranslations
next-i18next serverSideTranslations causes _app.js to re-render when NextJS route changes
export async function getServerSideProps({ locale }) { const data = await serverSideTranslations(locale, ['apple', 'home']); return { props: data, }; } export default function IndexPage() { return <h1>Hi!</h1> } I noticed that if you pass data to props, then when you change routes in NextJS _app.js is re-rendered causing flicker (give a white backlight), but if you don’t pass data to props, then everything works fine. How to remove flicker? This error is seen in both production and development environments. next-i18next.config.js module.exports = { i18n: { defaultLocale: 'fr', locales: ['fr', 'en'], }, reloadOnPrerender: process.env.NODE_ENV === 'development', }; next.config.js const { i18n } = require('./next-i18next.config'); /** @type {import('next').NextConfig} */ const nextConfig = { i18n, reactStrictMode: true, swcMinify: true, poweredByHeader: false, async rewrites() { return [ { source: '/api/:path*', destination: 'http://localhost:8080/:path*', }, ]; }, }; module.exports = nextConfig;
[ "It turns out that if you open a route where there is no serverSideTranslations, then this leads to resetting the states and re-rendering _app.js.\nSolution: It is necessary to indicate on each page\nserverSideTranslations\n\n" ]
[ 0 ]
[]
[]
[ "i18next", "next.js", "next_i18next" ]
stackoverflow_0074656529_i18next_next.js_next_i18next.txt
Q: Prevent MySQL duplicate rows when joining How it looks The same Restaurant, denoted by same idRestaurant, is being duplicated because it has multiple images I'm trying to join the images of each restaurant using restaurant's id. I want all the images to go to the same row; do not want duplicate restaurants because a restaurant has multiple images. All images must be in same row. A: You can use the DISTINCT operator SELECT DISTINCT name FROM table WHERE id < 0 This was just done by hand so not tested but should work like that
Prevent MySQL duplicate rows when joining
How it looks The same Restaurant, denoted by same idRestaurant, is being duplicated because it has multiple images I'm trying to join the images of each restaurant using restaurant's id. I want all the images to go to the same row; do not want duplicate restaurants because a restaurant has multiple images. All images must be in same row.
[ "You can use the DISTINCT operator\nSELECT DISTINCT name\nFROM table\nWHERE id < 0\n\nThis was just done by hand so not tested but should work like that\n" ]
[ 0 ]
[]
[]
[ "join", "mysql", "mysql_workbench", "sql" ]
stackoverflow_0074657619_join_mysql_mysql_workbench_sql.txt
Q: Connect Oracle DB using service_name instead of SID using ojdbc14.jar driver in WebLogic Server 6.1 SP1 with JDK 3 While Working on a legacy application that first file date back to year 2005. It used to create connection pool that is mapped to DataSource that application connects with, URL: jdbc:oracle:thin:@host.test.intranet:1521:service_name Driver Classname:oracle.jdbc.driver.OracleDriver Properties(key=value): user=makeduser password=maskedpassword dll=ocijdbc8 protocol=thin ACLName: null Recently, the db got rehosted and the new connection details changed from SID to Service_name While trying to use same format "host"port:sid" The error that it returns when weblogic server is started Cannot startup connection pool "veroPool" weblogic.common.ResourceException:Could not >create pool connection. The DBMS driver exception was:java.sql.SQLException: Io exception: >Connection refused(DESCRIPTION=(TMP=)(VSNNUM=318767104)(ERR=12505)(ERROR_STACK=(ERROR=>(CODE=12505)(EMFI=4)))) And When trying to use following format: jdbc:oracle:thin:@//NEWHOST.TEST.INTRANET:1521/NEW-SERVICE_NAME Error returned is: Cannot startup connection pool "veroPool" weblogic.common.ResourceException:Could not create pool connection. The DBMS driver exception was: java.sql.SQLException: Io exception: Invalid connection string format, a valid format is: "host:port:sid" A: This driver version doesn't support the service name passed in the url in this format, so you have to use the SID. Try to connect to the DB and get the SID using the following query: select sys_context('userenv','instance_name') from dual; Then you can use the SID returned from the query in your connection url: jdbc:oracle:thin:@host.test.intranet:1521:SID Alternatively you can try with the following syntax to specify your connection which suports service name for this driver version: jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = <HOST>)(PORT = <PORT>))(CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = <SERVICE_NAME>)))
Connect Oracle DB using service_name instead of SID using ojdbc14.jar driver in WebLogic Server 6.1 SP1 with JDK 3
While Working on a legacy application that first file date back to year 2005. It used to create connection pool that is mapped to DataSource that application connects with, URL: jdbc:oracle:thin:@host.test.intranet:1521:service_name Driver Classname:oracle.jdbc.driver.OracleDriver Properties(key=value): user=makeduser password=maskedpassword dll=ocijdbc8 protocol=thin ACLName: null Recently, the db got rehosted and the new connection details changed from SID to Service_name While trying to use same format "host"port:sid" The error that it returns when weblogic server is started Cannot startup connection pool "veroPool" weblogic.common.ResourceException:Could not >create pool connection. The DBMS driver exception was:java.sql.SQLException: Io exception: >Connection refused(DESCRIPTION=(TMP=)(VSNNUM=318767104)(ERR=12505)(ERROR_STACK=(ERROR=>(CODE=12505)(EMFI=4)))) And When trying to use following format: jdbc:oracle:thin:@//NEWHOST.TEST.INTRANET:1521/NEW-SERVICE_NAME Error returned is: Cannot startup connection pool "veroPool" weblogic.common.ResourceException:Could not create pool connection. The DBMS driver exception was: java.sql.SQLException: Io exception: Invalid connection string format, a valid format is: "host:port:sid"
[ "This driver version doesn't support the service name passed in the url in this format, so you have to use the SID. Try to connect to the DB and get the SID using the following query:\nselect sys_context('userenv','instance_name') from dual; \n\nThen you can use the SID returned from the query in your connection url:\njdbc:oracle:thin:@host.test.intranet:1521:SID\n\nAlternatively you can try with the following syntax to specify your connection which suports service name for this driver version:\njdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = <HOST>)(PORT = <PORT>))(CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = <SERVICE_NAME>)))\n\n" ]
[ 0 ]
[]
[]
[ "ojdbc", "ora_12541", "oracle", "service_name" ]
stackoverflow_0071736943_ojdbc_ora_12541_oracle_service_name.txt
Q: How to put several checkboxes in a grid? Hello I would like to put several checkboxes in a popup as a 3x3 grid. I was able to find something similar as in this example: https://codepen.io/webdevjones/pen/qBapvrz . I tried to use display: flex but I only get one column and the labels are also no longer aligned with the checkboxes. Here is the html and css files : @import url('https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;500;600;700&display=swap'); *{ font-family: 'Poppins', sans-serif; margin: 0; padding: 0; box-sizing: border-box } :root{ /* ===== Colors ===== */ --body-color: #E4E9F7; --sidebar-color: #FFF; --primary-color: #1c1a1a; --primary-color-light: #F6F5FF; --toggle-color: #DDD; --text-color: #707070; /* ===== Transition ===== */ --tran-02: all 0.2s ease; --tran-03: all 0.3s ease; --tran-04: all 0.4s ease; --tran-05: all 0.5s ease; } body{ height: 100vh; background: var(--body-color); } /*--------------------- FENETRE MODAL DONNEES ---------------------*/ /*Paramètres de la fenêtre modal*/ .modal-container-data{ display: flex; position: fixed; justify-content: center; width: 100%; height: 100%; overflow: auto; background-color: #1c1a1a; background: rgba(0, 0, 0, 0.4); } /*Paramétre des panneaux */ .data{ position: relative; top: 25%; background-color: rgb(170, 170, 170); margin: 100px auto; padding: 0; width: 300px; max-width: 85%; } /**/ .data .popup-header{ padding: 2px 16px; background-color: #ffffff; color: #1c1a1a; display: flex; } /*Paramètres du texte de la modal*/ .data .popup-header h1{ font-size: 12px; font-family: Montserrat, sans-serif; font-weight: 500; } /*Paramètre du bouton de la fenêtre modal*/ .data .close-modal{ position: absolute; top: 0px; right: 0px; font-size: 10px; padding: 1.5px 20px; border: none; border-radius: 0px; cursor: pointer; background: #fff; color: rgb(0, 0, 0); } /*Activation du background lors du passage de la souris*/ .data .close-modal:hover{ color: #FFF; background: rgb(188, 15, 15); } /*Paramètres du body de la popup*/ .data .popup-body{ padding: 1px 16px; display: flex; justify-content: center; } .container-box{ display: flex; flex-wrap: wrap; flex-direction: column; padding: 1em; } .data .popup-body input{ padding: 1em 0em; } .data .popup-body label{ display: inline; padding-left: 10px; } <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <!-- https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP --> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <!-- <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'"> --> <!----===== CSS ===== --> <link rel="stylesheet" href="style.css"> <!----===== Boxicons CSS ===== --> <link href='https://unpkg.com/[email protected]/css/boxicons.min.css' rel='stylesheet'> <title>Sail Vision</title> </head> <body> <div class="modal-container-data"> <div class="overlay"> <div class="data"> <div class="popup-header"> <button class="close-modal modal-trigger-data">X</button> <h1>Choix des données</h1> </div> <div class="popup-body"> <div class="container-box"> <input type="checkbox" id="entry" name="Entry"> <label for="entry">Entry</label> <input type="checkbox" id="camber forward" name="camber forward"> <label for="camber forward">Camber forward</label> <input type="checkbox" id="camber" name="camber"> <label for="camber">Camber</label> <input type="checkbox" id="draft" name="draft"> <label for="draft">Draft</label> <input type="checkbox" id="camber after" name="camber after"> <label for="camber after">camber after</label> <input type="checkbox" id="exit" name="exit"> <label for="exit">Exit</label> <input type="checkbox" id="twist" name="twist"> <label for="twist">Twist</label> <input type="checkbox" id="lateral sag" name="lateral sag"> <label for="lateral sag">Lateral sag</label> <input type="checkbox" id="longitudinal sag" name="longitudinal sag"> <label for="longitudinal sag">Longitudinal sag</label> </div> </div> </div> </div> </div> </body> </html> Regards, A: You can use CSS Grid - just make sure that the grid container element only has one child per checkbox; I've done it here by wrapping the input in the label. * { box-sizing: border-box; } body { font-family: sans-serif; background: #E4E9F7; } .data { margin: auto; background-color: #eee; padding: 0; width: 500px; } .container-box { display: grid; grid-template-columns: 1fr 1fr 1fr; gap: 1em; padding: 1em; } .container-box label { white-space: nowrap; } <div class="data"> <div class="popup-body"> <div class="container-box"> <label for="entry"><input type="checkbox" id="entry" name="Entry"> Entry</label> <label for="camber forward"><input type="checkbox" id="camber forward" name="camber forward"> Camber forward</label> <label for="camber"><input type="checkbox" id="camber" name="camber"> Camber</label> <label for="draft"><input type="checkbox" id="draft" name="draft"> Draft</label> <label for="camber after"><input type="checkbox" id="camber after" name="camber after"> Camber after</label> <label for="exit"><input type="checkbox" id="exit" name="exit"> Exit</label> <label for="twist"><input type="checkbox" id="twist" name="twist"> Twist</label> <label for="lateral sag"><input type="checkbox" id="lateral sag" name="lateral sag"> Lateral sag</label> <label for="longitudinal sag"><input type="checkbox" id="longitudinal sag" name="longitudinal sag"> Longitudinal sag</label> </div> </div> </div>
How to put several checkboxes in a grid?
Hello I would like to put several checkboxes in a popup as a 3x3 grid. I was able to find something similar as in this example: https://codepen.io/webdevjones/pen/qBapvrz . I tried to use display: flex but I only get one column and the labels are also no longer aligned with the checkboxes. Here is the html and css files : @import url('https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;500;600;700&display=swap'); *{ font-family: 'Poppins', sans-serif; margin: 0; padding: 0; box-sizing: border-box } :root{ /* ===== Colors ===== */ --body-color: #E4E9F7; --sidebar-color: #FFF; --primary-color: #1c1a1a; --primary-color-light: #F6F5FF; --toggle-color: #DDD; --text-color: #707070; /* ===== Transition ===== */ --tran-02: all 0.2s ease; --tran-03: all 0.3s ease; --tran-04: all 0.4s ease; --tran-05: all 0.5s ease; } body{ height: 100vh; background: var(--body-color); } /*--------------------- FENETRE MODAL DONNEES ---------------------*/ /*Paramètres de la fenêtre modal*/ .modal-container-data{ display: flex; position: fixed; justify-content: center; width: 100%; height: 100%; overflow: auto; background-color: #1c1a1a; background: rgba(0, 0, 0, 0.4); } /*Paramétre des panneaux */ .data{ position: relative; top: 25%; background-color: rgb(170, 170, 170); margin: 100px auto; padding: 0; width: 300px; max-width: 85%; } /**/ .data .popup-header{ padding: 2px 16px; background-color: #ffffff; color: #1c1a1a; display: flex; } /*Paramètres du texte de la modal*/ .data .popup-header h1{ font-size: 12px; font-family: Montserrat, sans-serif; font-weight: 500; } /*Paramètre du bouton de la fenêtre modal*/ .data .close-modal{ position: absolute; top: 0px; right: 0px; font-size: 10px; padding: 1.5px 20px; border: none; border-radius: 0px; cursor: pointer; background: #fff; color: rgb(0, 0, 0); } /*Activation du background lors du passage de la souris*/ .data .close-modal:hover{ color: #FFF; background: rgb(188, 15, 15); } /*Paramètres du body de la popup*/ .data .popup-body{ padding: 1px 16px; display: flex; justify-content: center; } .container-box{ display: flex; flex-wrap: wrap; flex-direction: column; padding: 1em; } .data .popup-body input{ padding: 1em 0em; } .data .popup-body label{ display: inline; padding-left: 10px; } <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <!-- https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP --> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <!-- <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'"> --> <!----===== CSS ===== --> <link rel="stylesheet" href="style.css"> <!----===== Boxicons CSS ===== --> <link href='https://unpkg.com/[email protected]/css/boxicons.min.css' rel='stylesheet'> <title>Sail Vision</title> </head> <body> <div class="modal-container-data"> <div class="overlay"> <div class="data"> <div class="popup-header"> <button class="close-modal modal-trigger-data">X</button> <h1>Choix des données</h1> </div> <div class="popup-body"> <div class="container-box"> <input type="checkbox" id="entry" name="Entry"> <label for="entry">Entry</label> <input type="checkbox" id="camber forward" name="camber forward"> <label for="camber forward">Camber forward</label> <input type="checkbox" id="camber" name="camber"> <label for="camber">Camber</label> <input type="checkbox" id="draft" name="draft"> <label for="draft">Draft</label> <input type="checkbox" id="camber after" name="camber after"> <label for="camber after">camber after</label> <input type="checkbox" id="exit" name="exit"> <label for="exit">Exit</label> <input type="checkbox" id="twist" name="twist"> <label for="twist">Twist</label> <input type="checkbox" id="lateral sag" name="lateral sag"> <label for="lateral sag">Lateral sag</label> <input type="checkbox" id="longitudinal sag" name="longitudinal sag"> <label for="longitudinal sag">Longitudinal sag</label> </div> </div> </div> </div> </div> </body> </html> Regards,
[ "You can use CSS Grid - just make sure that the grid container element only has one child per checkbox; I've done it here by wrapping the input in the label.\n\n\n* {\n box-sizing: border-box;\n}\n\nbody {\n font-family: sans-serif;\n background: #E4E9F7;\n}\n\n.data {\n margin: auto;\n background-color: #eee;\n padding: 0;\n width: 500px;\n}\n\n.container-box {\n display: grid;\n grid-template-columns: 1fr 1fr 1fr;\n gap: 1em;\n padding: 1em;\n}\n\n.container-box label {\n white-space: nowrap;\n}\n <div class=\"data\">\n <div class=\"popup-body\">\n <div class=\"container-box\">\n <label for=\"entry\"><input type=\"checkbox\" id=\"entry\" name=\"Entry\">\n Entry</label>\n <label for=\"camber forward\"><input type=\"checkbox\" id=\"camber forward\" name=\"camber forward\">\n Camber forward</label>\n <label for=\"camber\"><input type=\"checkbox\" id=\"camber\" name=\"camber\">\n Camber</label>\n <label for=\"draft\"><input type=\"checkbox\" id=\"draft\" name=\"draft\">\n Draft</label>\n <label for=\"camber after\"><input type=\"checkbox\" id=\"camber after\" name=\"camber after\">\n Camber after</label>\n <label for=\"exit\"><input type=\"checkbox\" id=\"exit\" name=\"exit\">\n Exit</label>\n <label for=\"twist\"><input type=\"checkbox\" id=\"twist\" name=\"twist\">\n Twist</label>\n <label for=\"lateral sag\"><input type=\"checkbox\" id=\"lateral sag\" name=\"lateral sag\">\n Lateral sag</label>\n <label for=\"longitudinal sag\"><input type=\"checkbox\" id=\"longitudinal sag\" name=\"longitudinal sag\">\n Longitudinal sag</label>\n </div>\n </div>\n </div>\n\n\n\n" ]
[ 1 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074657571_css_html.txt
Q: Scala Spark read with partitions drop partitions There is hdfs-directory: /home/path/date=2022-12-02, where date=2022-12-02 is a partition. Parquet file with the partition "date=2022-12-02", has been written to this directory. To read file with partition, I use: spark .read .option("basePath", "/home/path") .parquet("/home/path/date=2022-12-02") The file is read successfully with all partition-fieds. But, partition folder ("date=2022-12-02") is dropped from directory. I can't grasp, what is the reason and how to avoid it. A: There are two ways to have the date as part of your table; Read the path like this: .parquet("/home/path/") Add a new column and use input_file_path() function, then manipulate with the string until you get date column (should be fairly easy, taking last part after slash, splitting on equal sign and taking index 1). I don't think there is another way to do what you want directly.
Scala Spark read with partitions drop partitions
There is hdfs-directory: /home/path/date=2022-12-02, where date=2022-12-02 is a partition. Parquet file with the partition "date=2022-12-02", has been written to this directory. To read file with partition, I use: spark .read .option("basePath", "/home/path") .parquet("/home/path/date=2022-12-02") The file is read successfully with all partition-fieds. But, partition folder ("date=2022-12-02") is dropped from directory. I can't grasp, what is the reason and how to avoid it.
[ "There are two ways to have the date as part of your table;\n\nRead the path like this: .parquet(\"/home/path/\")\n\nAdd a new column and use input_file_path() function, then manipulate with the string until you get date column (should be fairly easy, taking last part after slash, splitting on equal sign and taking index 1).\n\n\nI don't think there is another way to do what you want directly.\n" ]
[ 1 ]
[]
[]
[ "apache_spark", "scala" ]
stackoverflow_0074657066_apache_spark_scala.txt
Q: package.json does not exist at /package.json When I am trying to use webpack in order to build my project and use it on AWS Lambda, I am getting a lot of warnings related to Critical dependency of ./node_modules/grpc. This issue happening once I import {GoogleAdsApi} from 'google-ads-api'; As I can understand this is related to dynamic importing, I might be wrong. As a result, the bundled file is huge (above 4MB) and when zipping it and using it on the Lambda I am getting the following error when the Lambda is triggered: "package.json does not exist at /package.json" *Typescript *Node ver 12.x index.ts import {GoogleAdsApi} from 'google-ads-api'; export const handler = async (event: any): Promise<any> => { try { console.log('Start', event); // @ts-ignore const api = new GoogleAdsApi({client_id: 'id', client_secret: 'secret', developer_token: 'dToken'}); return 'success'; } catch (e) { console.log('Error', e); throw e; } }; Error: package.json does not exist at /package.json, at Object.exports.find (/var/task/webpack:/node_modules/node-pre-gyp/lib/pre-binding.js:18:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/src/grpc_extension.js:29:1), at Object.<anonymous> (/var/task/index.js:40079:30), at __webpack_require__ (/var/task/webpack:/webpack/bootstrap:19:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/src/client_interceptors.js:144:12), at __webpack_require__ (/var/task/webpack:/webpack/bootstrap:19:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/src/client.js:35:27), at __webpack_require__ (/var/task/webpack:/webpack/bootstrap:19:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/index.js:27:14), at Object.<anonymous> (/var/task/index.js:2460:30) Webpack Warnings Webpack Warnings: WARNING in ./node_modules/bytebuffer/dist/bytebuffer-node.js 29:38-55 Module not found: Error: Can't resolve 'memcpy' in WARNING in ./node_modules/google-ads-node/node_modules/import-fresh/index.js 28:8-25 Critical dependency: the request of a dependency is an expression WARNING in ./node_modules/grpc/src/grpc_extension.js 32:12-33 Critical dependency: the request of a dependency is an expression WARNING in ./node_modules/node-pre-gyp/lib/pre-binding.js 20:22-48 Critical dependency: the request of a dependency is an expression WARNING in ./node_modules/node-pre-gyp/lib/util/versioning.js 17:20-67 Critical dependency: the request of a dependency is an expression webpack.config.js: `const path = require("path") const ForkTsCheckerWebpackPlugin = require("fork-ts-checker-webpack-plugin") module.exports = { mode: "production", entry: "./src/index.ts", resolve: { extensions: [".js", ".jsx", ".json", ".ts", ".tsx"], }, output: { libraryTarget: "commonjs", path: path.join(__dirname, "dist"), filename: "index.js", }, target: "node", module: { rules: [ { // Include ts, tsx, js, and jsx files. test: /.(ts|js)x?$/, exclude: /node_modules/, use: [ { loader: "cache-loader", options: { cacheDirectory: path.resolve(".webpackCache"), }, }, "babel-loader", ], }, ], }, plugins: [new ForkTsCheckerWebpackPlugin()], }` A: Not sure if this is still relevant or not, but i think you should use esbuild and add dependencies as dev. Shoould reduce your bundle size and should solve your problems
package.json does not exist at /package.json
When I am trying to use webpack in order to build my project and use it on AWS Lambda, I am getting a lot of warnings related to Critical dependency of ./node_modules/grpc. This issue happening once I import {GoogleAdsApi} from 'google-ads-api'; As I can understand this is related to dynamic importing, I might be wrong. As a result, the bundled file is huge (above 4MB) and when zipping it and using it on the Lambda I am getting the following error when the Lambda is triggered: "package.json does not exist at /package.json" *Typescript *Node ver 12.x index.ts import {GoogleAdsApi} from 'google-ads-api'; export const handler = async (event: any): Promise<any> => { try { console.log('Start', event); // @ts-ignore const api = new GoogleAdsApi({client_id: 'id', client_secret: 'secret', developer_token: 'dToken'}); return 'success'; } catch (e) { console.log('Error', e); throw e; } }; Error: package.json does not exist at /package.json, at Object.exports.find (/var/task/webpack:/node_modules/node-pre-gyp/lib/pre-binding.js:18:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/src/grpc_extension.js:29:1), at Object.<anonymous> (/var/task/index.js:40079:30), at __webpack_require__ (/var/task/webpack:/webpack/bootstrap:19:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/src/client_interceptors.js:144:12), at __webpack_require__ (/var/task/webpack:/webpack/bootstrap:19:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/src/client.js:35:27), at __webpack_require__ (/var/task/webpack:/webpack/bootstrap:19:1), at Object.<anonymous> (/var/task/webpack:/node_modules/grpc/index.js:27:14), at Object.<anonymous> (/var/task/index.js:2460:30) Webpack Warnings Webpack Warnings: WARNING in ./node_modules/bytebuffer/dist/bytebuffer-node.js 29:38-55 Module not found: Error: Can't resolve 'memcpy' in WARNING in ./node_modules/google-ads-node/node_modules/import-fresh/index.js 28:8-25 Critical dependency: the request of a dependency is an expression WARNING in ./node_modules/grpc/src/grpc_extension.js 32:12-33 Critical dependency: the request of a dependency is an expression WARNING in ./node_modules/node-pre-gyp/lib/pre-binding.js 20:22-48 Critical dependency: the request of a dependency is an expression WARNING in ./node_modules/node-pre-gyp/lib/util/versioning.js 17:20-67 Critical dependency: the request of a dependency is an expression webpack.config.js: `const path = require("path") const ForkTsCheckerWebpackPlugin = require("fork-ts-checker-webpack-plugin") module.exports = { mode: "production", entry: "./src/index.ts", resolve: { extensions: [".js", ".jsx", ".json", ".ts", ".tsx"], }, output: { libraryTarget: "commonjs", path: path.join(__dirname, "dist"), filename: "index.js", }, target: "node", module: { rules: [ { // Include ts, tsx, js, and jsx files. test: /.(ts|js)x?$/, exclude: /node_modules/, use: [ { loader: "cache-loader", options: { cacheDirectory: path.resolve(".webpackCache"), }, }, "babel-loader", ], }, ], }, plugins: [new ForkTsCheckerWebpackPlugin()], }`
[ "Not sure if this is still relevant or not, but i think you should use esbuild and add dependencies as dev. Shoould reduce your bundle size and should solve your problems\n" ]
[ 0 ]
[]
[]
[ "aws_lambda", "google_ads_api", "webpack" ]
stackoverflow_0065666326_aws_lambda_google_ads_api_webpack.txt
Q: Angular post subscribe never triggering HTTP call So I have looked into all the different things about how "all you have to do is add .subscribe()" and that is not doing anything, so here is what I have: private loginPost(username: string, password: string): Observable<LoginResponse> { return this.http.post<LoginResponse>( '/api/login', new Login( username, password)) .pipe(catchError(this.handleError)); } Then in a form submit handler I have this: this.loginPost( this.f.username.value, this.f.password.value) .subscribe( (response: LoginResponse) => { console.log(1); if (response && response.Token && response.Succeeded) { this.authorizeService.setUser(response); this.navigateToReturnUrl(this.getReturnUrl()); } else if (!response.Succeeded && response.NeedsPasswordReset) { this.navigateToReturnUrl(this.getReturnUrl()); this.Loading = false; } else { this.Message = 'Username or password is incorrect'; this.Loading = false; } }); The result is this (in chrome): As well as this in the console: The backend api call is never triggered, and no network call is ever done. Things I have tried (none made any difference): Enabling CORS Converting the subscribe to a promise While using a promise using await and async is all different combos Setting the result of loginPost to a const and calling .subscribe() in different ways (with and without handler functions) Trying different combos of pipe handlers Edit: In response to the comments: No network call is getting made, no errors are made, subscribe is called and no logs are generated. Per the question, no call is made, the backend is never triggered and the subscribe is not invoked. Googling "angular http subscribe never triggers" you can find I am not the only one seeing this issue. Most of these issues are unanswered and the response is the same across almost all of them. This seems to be a common issue that even with a subscribe angular doesn't make calls, I have even attempted gets and other calls with no luck. If anyone knows how to add more extensive logging that would be really helpful as no error handlers get invoked either so it just stops and doesn't do anything. A: An interceptor was created with the project template that had a pipe filter that was supposed to return null but instead it just didn't return. I love the injection stuff but this was tricky to find.
Angular post subscribe never triggering HTTP call
So I have looked into all the different things about how "all you have to do is add .subscribe()" and that is not doing anything, so here is what I have: private loginPost(username: string, password: string): Observable<LoginResponse> { return this.http.post<LoginResponse>( '/api/login', new Login( username, password)) .pipe(catchError(this.handleError)); } Then in a form submit handler I have this: this.loginPost( this.f.username.value, this.f.password.value) .subscribe( (response: LoginResponse) => { console.log(1); if (response && response.Token && response.Succeeded) { this.authorizeService.setUser(response); this.navigateToReturnUrl(this.getReturnUrl()); } else if (!response.Succeeded && response.NeedsPasswordReset) { this.navigateToReturnUrl(this.getReturnUrl()); this.Loading = false; } else { this.Message = 'Username or password is incorrect'; this.Loading = false; } }); The result is this (in chrome): As well as this in the console: The backend api call is never triggered, and no network call is ever done. Things I have tried (none made any difference): Enabling CORS Converting the subscribe to a promise While using a promise using await and async is all different combos Setting the result of loginPost to a const and calling .subscribe() in different ways (with and without handler functions) Trying different combos of pipe handlers Edit: In response to the comments: No network call is getting made, no errors are made, subscribe is called and no logs are generated. Per the question, no call is made, the backend is never triggered and the subscribe is not invoked. Googling "angular http subscribe never triggers" you can find I am not the only one seeing this issue. Most of these issues are unanswered and the response is the same across almost all of them. This seems to be a common issue that even with a subscribe angular doesn't make calls, I have even attempted gets and other calls with no luck. If anyone knows how to add more extensive logging that would be really helpful as no error handlers get invoked either so it just stops and doesn't do anything.
[ "An interceptor was created with the project template that had a pipe filter that was supposed to return null but instead it just didn't return.\nI love the injection stuff but this was tricky to find.\n" ]
[ 1 ]
[]
[]
[ "angular", "angular_httpclient", "rxjs", "rxjs_observables" ]
stackoverflow_0074634046_angular_angular_httpclient_rxjs_rxjs_observables.txt
Q: how to get index of a giving string in liste python? my list is like this, in example the string is 'a' and 'b' ; i want to return the index of string 'a' and for 'b' then i want to calculate how many time is 'a' repeated in the list1 : list1=['a','a','b','a','a','b','a','a','b','a','b','a','a'] i want to return the order of evry 'a' in list1 the result should be like this : a_position=[1,2,4,5,7,8,10,12,13] and i want to calculate how many time 'a' is repeated in list1: a_rep=9 A: You could do below: a_positions = [idx + 1 for idx, el in enumerate(list1) if el == 'a'] a_repitition = len(a_positions) print(a_positions): [1, 2, 4, 5, 7, 8, 10, 12, 13] print(a_repitition): 9 If you need repititions of each element you can also use collections.Counter from collections import Counter counter = Counter(list1) print(counter['a']): 9 A: If you want to get the indices and counts of all letters: list1=['a','a','b','a','a','b','a','a','b','a','b','a','a'] pos = {} for i,c in enumerate(list1, start=1): # 1-based indexing pos.setdefault(c, []).append(i) pos # {'a': [1, 2, 4, 5, 7, 8, 10, 12, 13], # 'b': [3, 6, 9, 11]} counts = {k: len(v) for k,v in pos.items()} # {'a': 9, 'b': 4}
how to get index of a giving string in liste python?
my list is like this, in example the string is 'a' and 'b' ; i want to return the index of string 'a' and for 'b' then i want to calculate how many time is 'a' repeated in the list1 : list1=['a','a','b','a','a','b','a','a','b','a','b','a','a'] i want to return the order of evry 'a' in list1 the result should be like this : a_position=[1,2,4,5,7,8,10,12,13] and i want to calculate how many time 'a' is repeated in list1: a_rep=9
[ "You could do below:\na_positions = [idx + 1 for idx, el in enumerate(list1) if el == 'a']\na_repitition = len(a_positions)\n\nprint(a_positions):\n[1, 2, 4, 5, 7, 8, 10, 12, 13]\n\nprint(a_repitition):\n9\n\nIf you need repititions of each element you can also use collections.Counter\nfrom collections import Counter\ncounter = Counter(list1)\n\nprint(counter['a']):\n9\n\n", "If you want to get the indices and counts of all letters:\nlist1=['a','a','b','a','a','b','a','a','b','a','b','a','a']\npos = {}\nfor i,c in enumerate(list1, start=1): # 1-based indexing\n pos.setdefault(c, []).append(i)\npos\n# {'a': [1, 2, 4, 5, 7, 8, 10, 12, 13],\n# 'b': [3, 6, 9, 11]}\n\ncounts = {k: len(v) for k,v in pos.items()}\n# {'a': 9, 'b': 4}\n\n" ]
[ 2, 1 ]
[]
[]
[ "indexing", "python" ]
stackoverflow_0074657346_indexing_python.txt