content: string (length 86 to 88.9k)
title: string (length 0 to 150)
question: string (length 1 to 35.8k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 30 to 130)
Q: How to add a string to a List I have a variable userInput that holds the value returned by Console.ReadLine(). I want to add this variable to the collection built with List<string>. static void Resize<T>(ref List<T> array, string name) { array.Add(name); Console.WriteLine(); for (int i = 0; i < array.Count; i++) { Console.WriteLine(array[i]); } } static void Main(string[] args) { List<string> teachers = new List<string> { "1", "2", "3"}; Console.WriteLine("Enter a name"); string userInput = Console.ReadLine(); Resize(ref teachers, userInput); } A: Why do you need the Resize<T> method at all? You can do it more simply: static void Main(string[] args) { List<string> teachers = new List<string> { "1", "2", "3" }; Console.WriteLine("Enter a name"); string userInput = Console.ReadLine(); if (!string.IsNullOrWhiteSpace(userInput)) { teachers.Add(userInput); } // if you want to convert the list to an array, use // var arr = teachers.ToArray(); foreach (var teacher in teachers) { Console.WriteLine(teacher); } }
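If the generic helper is worth keeping, note that List<T> is a reference type, so passing it with ref is unnecessary; Add mutates the same instance the caller holds. A minimal sketch (the class and method names are illustrative, not from the original post):

using System;
using System.Collections.Generic;

static class ListHelpers
{
    // Adds an item and prints the whole list; no `ref` needed because
    // List<T> is a reference type and Add mutates the caller's instance.
    public static void AddAndPrint<T>(List<T> list, T item)
    {
        list.Add(item);
        foreach (T element in list)
            Console.WriteLine(element);
    }
}

Usage from Main would then be ListHelpers.AddAndPrint(teachers, userInput); with no ref keyword.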
How to add a string to a List
I have a variable usersInput which has a value Console.ReadLine() I want to add this variable to the array built with List static void Resize<T>(ref List<T> array, string name) { array.Add(name); Console.WriteLine(); for (int i = 0; i < array.Count; i++) { Console.WriteLine(array[i]); } } static void Main(string[] args) { List<string> teachers = new List<string> { "1", "2", "3"}; Console.WriteLine("Enter a name"); string userInput = Console.ReadLine(); Resize(ref teachers, userInput); }
[ "Why do you need Resize<T> method at all? You could do it easier:\n static void Main(string[] args)\n {\n\n List<string> teachers = new List<string> { \"1\", \"2\", \"3\" };\n Console.WriteLine(\"Enter a name\");\n string userInput = Console.ReadLine();\n\n if (string.IsNullOrWhiteSpace(userInput) || !string.IsNullOrEmpty(userInput))\n {\n teachers.Add(userInput);\n }\n\n // if want to change list to array then use\n // var arr = teachers.ToArray();\n\n foreach (var teacher in teachers)\n {\n Console.WriteLine(teacher);\n }\n\n }\n\n" ]
[ 0 ]
[]
[]
[ "c#" ]
stackoverflow_0074670712_c#.txt
Q: How to quantize all nodes except a particular one? I am using the TensorFlow Graph Transform Tool to quantize the graph using input_names = ["prefix/input"] output_names = ["final_result"] transforms1 = ["strip_unused_nodes","fold_constants(ignore_errors=true)", "fold_batch_norms", "fold_old_batch_norms","quantize_weights" ] transformed_graph_def = TransformGraph(graph.as_graph_def(), input_names,output_names, transforms1) I use the quantize_weights option to quantize the weights in the graph. I know that certain nodes can remain unquantized by changing the minimum_size threshold in quantize_weights, so leaving some nodes unquantized is certainly possible. I want to quantize the weights of all nodes except a particular node with the name K, or a set of nodes whose names are in a set K. How can this be achieved? A: EDIT: the previous answer referred to TensorFlow Lite code. I updated it to refer to TensorFlow. Looking at the implementation of TensorFlow's quantize_weights, these are the cases where weights don't get quantized: a tensor that is not of type float; a tensor that has fewer than 1024 weights (or another number specified by the minimum_size parameter). If you are able to modify nodes in the graph so that they are excluded by one of the above rules, then quantize, then revert the nodes to the pre-quantized state, you might be able to do this. A: To exclude a particular node or set of nodes from being quantized, you can use the quantize_weights transform's op_types parameter. This parameter allows you to specify a list of node types that should be quantized. By default, all node types will be quantized, but you can exclude a particular node or set of nodes by providing a list that does not include the node types you want to exclude. For example, if you want to exclude nodes with the name "K" from being quantized, you can use the following code: transforms1 = ["strip_unused_nodes","fold_constants(ignore_errors=true)", "fold_batch_norms", "fold_old_batch_norms", "quantize_weights(op_types=['!K'])" ] transformed_graph_def = TransformGraph(graph.as_graph_def(), input_names,output_names, transforms1) This will exclude any nodes with the name "K" from being quantized, while still quantizing all other nodes in the graph. Alternatively, if you want to exclude multiple node types, you can specify them in a list like this: transforms1 = ["strip_unused_nodes","fold_constants(ignore_errors=true)", "fold_batch_norms", "fold_old_batch_norms", "quantize_weights(op_types=['!K1', '!K2', '!K3'])" ] transformed_graph_def = TransformGraph(graph.as_graph_def(), input_names,output_names, transforms1) This will exclude any nodes with the names "K1", "K2", or "K3" from being quantized, while still quantizing all other nodes in the graph. Note that the "!" character in front of the node names in the op_types list indicates that those node types should be excluded from quantization. If you omit the "!", all node types will be quantized by default, and you will need to explicitly exclude any node types you don't want to quantize.
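Since the question already notes that quantize_weights honours a minimum_size threshold, one workaround that needs no graph surgery is simply raising that threshold so small tensors stay unquantized. This excludes nodes by weight count rather than by name, so it is only a partial solution; a sketch, assuming the transform-string syntax shown above and a TF1-style graph object:

from tensorflow.tools.graph_transforms import TransformGraph

input_names = ["prefix/input"]
output_names = ["final_result"]

# Raise minimum_size so tensors with fewer than 4096 weights are skipped;
# tune the number so the nodes you want excluded fall under it.
transforms = [
    "strip_unused_nodes",
    "fold_constants(ignore_errors=true)",
    "fold_batch_norms",
    "fold_old_batch_norms",
    "quantize_weights(minimum_size=4096)",
]
transformed_graph_def = TransformGraph(
    graph.as_graph_def(), input_names, output_names, transforms)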
How to quantize all nodes except a particular one?
I am using tensorflow Graph Transform Tool to quantize the graph using input_names = ["prefix/input"] output_names = ["final_result"] transforms1 = ["strip_unused_nodes","fold_constants(ignore_errors=true)", "fold_batch_norms", "fold_old_batch_norms","quantize_weights" ] transformed_graph_def = TransformGraph(graph.as_graph_def(), input_names,output_names, transforms1) I use the option quantize_weights to quantize the weights in graph, I know that certain nodes can remain unquantized by changing threshold minimum_size in quantize_weights, so leaving some nodes unquantized is certainly possible. I want to quantize the weights of all nodes except a particular node with the name K or a set of nodes that have a name in K(set). How can this be achieved?
[ "EDIT: the previous answer refered to Tensorflow Lite code. I updated it to refer to Tensorflow.\nLooking at the implementation of Tensorflow's quantize_weights, these are the instances where weights don't get quantized:\n\ntensor that is not type float\ntensor that has fewer than 1024 weights (or another number specified by the parameter minimum_size)\n\nIf you are able to modify nodes in the graph so that they are excluded by one of the above rules, then quantize, then revert the nodes to the pre-quantized state, you might be able to do this.\n", "To exclude a particular node or set of nodes from being quantized, you can use the quantize_weights transform's op_types parameter. This parameter allows you to specify a list of node types that should be quantized. By default, all node types will be quantized, but you can exclude a particular node or set of nodes by providing a list that does not include the node types you want to exclude.\nFor example, if you want to exclude nodes with the name \"K\" from being quantized, you can use the following code:\ntransforms1 = [\"strip_unused_nodes\",\"fold_constants(ignore_errors=true)\", \"fold_batch_norms\", \"fold_old_batch_norms\", \"quantize_weights(op_types=['!K'])\" ]\ntransformed_graph_def = TransformGraph(graph.as_graph_def(), input_names,output_names, transforms1)\n\nThis will exclude any nodes with the name \"K\" from being quantized, while still quantizing all other nodes in the graph.\nAlternatively, if you want to exclude multiple node types, you can specify them in a list like this:\ntransforms1 = [\"strip_unused_nodes\",\"fold_constants(ignore_errors=true)\", \"fold_batch_norms\", \"fold_old_batch_norms\", \"quantize_weights(op_types=['!K1', '!K2', '!K3'])\" ]\ntransformed_graph_def = TransformGraph(graph.as_graph_def(), input_names,output_names, transforms1)\n\nThis will exclude any nodes with the names \"K1\", \"K2\", or \"K3\" from being quantized, while still quantizing all other nodes in the graph.\nNote that the \"!\" character in front of the node names in the op_types list indicates that those node types should be excluded from quantization. If you omit the \"!\", all node types will be quantized by default, and you will need to explicitly exclude any node types you don't want to quantize.\n" ]
[ 0, 0 ]
[]
[]
[ "tensorflow" ]
stackoverflow_0050304096_tensorflow.txt
Q: onClick function not working with my hamburger menu My onClick function isn't working on the React/Next.js website I'm building. I'm trying to have the hamburger menu change on click, from an icon to an X, as well as show the menu on click, for mobile and tablet only. I seem to have everything placed properly but just can't get the function to work or move. Is it because my onClick function is on an Image and not a Button? This is my first time coding in Next.js, so I'm still unsure about a few things! Here is my code for my Navbar.js import Link from "next/link"; import Image from "next/image"; import React, { useState } from "react"; import logoImage from "../public/logo.svg"; import burgerImage from "../public/nav-burger.svg"; import closeImage from "../public/close.svg"; import styles from "./layout.module.css"; import { useRouter } from "next/router"; import styled from "styled-components"; function Navbar() { const router = useRouter(); const RedLink = styled.a` color: #0f23da; `; const display = useState("false"); const changeDisplay = useState("false"); return ( <div class="border-b-2 border-blue-700 border-solid lg:h-32 border-x-0 max-lg:h-32" > <nav class=" pt-10 flex items-center w-full font-medium max-lg:block"> <div className="p-0"> <Link href="/"> <Image class="block w-24 h-10 max-h-full leading-6 align-middle cursor-pointer lg:h-16 lg:w-30 max-lg:mb-5 max-lg:h-16 max-lg:w-30" src={logoImage} alt="" width="120" height="120" /> </Link> <span> <Image className=" lg:hidden cursor-pointer max-lg:h-7 max-lg:float-right max-lg:-mt-16 max-lg:pl-20 " src={burgerImage} aria-label="Open Menu" alt="" width="120" height="120" onClick={() => changeDisplay("none")} /> </span> <span> <Image className=" lg:hidden cursor-pointer max-lg:h-7 max-lg:float-right max-lg:-mt-16 max-lg:pl-20 " src={closeImage} aria-label="Close Menu" alt="" width="120" height="120" onClick={() => changeDisplay("none")} /> </span> </div> <div className="flex items-center mt-4"> <div class="mr-auto ml-14 text-left lg:flex max-lg:ml-0 max-lg:pt-14 max-lg:mt-1 max-lg:w-full max-lg:h-full max-lg:overflow-y-auto max-lg:bg-blue-100 max-lg:z-50 absolute max-lg:static max-lg:transition-all max-lg:duration-500 max-lg:ease-in" display={display} > <div> <ul className="xl:flex lg:flex max-lg:block"> <li className="pb-4"> <Link href={"/about"} legacyBehaviour className="px-0 pt-px pb-9 mr-10 font-medium leading-7 text-left cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" > <RedLink className={ router.pathname == "/about" ? styles.activeTab : "" } > About </RedLink> </Link> </li> <li className="pb-4"> <Link href={"/careers"} legacyBehaviour className="px-0 pt-px pb-9 mr-10 font-medium leading-7 text-left cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" > <RedLink className={ router.pathname == "/careers" ? styles.activeTab : "" } > Careers </RedLink> </Link> </li> <li className="pb-4"> <Link href={"/blogs"} legacyBehaviour className="px-0 pt-px pb-9 mr-10 font-medium leading-7 text-left cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" > <RedLink className={ router.pathname == "/blogs" ? 
styles.activeTab : "" } > Contact </RedLink> </Link> </li> </ul> </div> <div> <ul> <li className="pb-4"> <Link class=" p-0 px-0 pt-px pb-9 font-medium leading-7 cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" target="_blank" href="https://www.tbd.website/" > Test </Link> </li> </ul> </div> </div> </div> </nav> </div> ); } export default Navbar; I've tried following various YouTube videos but no luck! A: The useState hook syntax is as follows: const [state, setterFn] = useState(initialValue) In the image's onClick handler, you must use the setter, not the state value: const [changeDisplay, setChangeDisplay] = useState("false"); onClick={() => setChangeDisplay("none")}
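Building on that fix, a minimal sketch of the usual pattern: one boolean state value that both swaps the icon and shows or hides the menu (names such as menuOpen are illustrative, not from the original post):

const [menuOpen, setMenuOpen] = useState(false);

// one clickable image that toggles the state and swaps its own icon
<Image
  src={menuOpen ? closeImage : burgerImage}
  alt={menuOpen ? "Close menu" : "Open menu"}
  width="120"
  height="120"
  onClick={() => setMenuOpen(open => !open)}
/>

// render the menu conditionally (or toggle a CSS class) from the same state
{menuOpen && <ul>{/* menu items */}</ul>}

Using the functional form setMenuOpen(open => !open) avoids stale-state issues when toggling.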
onClick function not working with my hamburger menu
My on click function isn't working on my React/Next.js website I'm building. I'm trying to have the hamburger menu change on click, from an icon to a X, as well as show the menu on click, for mobile and tablet only. I seem to have everything placed properly but just can't get the function to work or move. Is it because my onClick function is on an Image and not a Button? This is my first time coding next.js so still unsure on a few things! Here is my code for my Navbar.js import Link from "next/link"; import Image from "next/image"; import React, { useState } from "react"; import logoImage from "../public/logo.svg"; import burgerImage from "../public/nav-burger.svg"; import closeImage from "../public/close.svg"; import styles from "./layout.module.css"; import { useRouter } from "next/router"; import styled from "styled-components"; function Navbar() { const router = useRouter(); const RedLink = styled.a` color: #0f23da; `; const display = useState("false"); const changeDisplay = useState("false"); return ( <div class="border-b-2 border-blue-700 border-solid lg:h-32 border-x-0 max-lg:h-32" > <nav class=" pt-10 flex items-center w-full font-medium max-lg:block"> <div className="p-0"> <Link href="/"> <Image class="block w-24 h-10 max-h-full leading-6 align-middle cursor-pointer lg:h-16 lg:w-30 max-lg:mb-5 max-lg:h-16 max-lg:w-30" src={logoImage} alt="" width="120" height="120" /> </Link> <span> <Image className=" lg:hidden cursor-pointer max-lg:h-7 max-lg:float-right max-lg:-mt-16 max-lg:pl-20 " src={burgerImage} aria-label="Open Menu" alt="" width="120" height="120" onClick={() => changeDisplay("none")} /> </span> <span> <Image className=" lg:hidden cursor-pointer max-lg:h-7 max-lg:float-right max-lg:-mt-16 max-lg:pl-20 " src={closeImage} aria-label="Close Menu" alt="" width="120" height="120" onClick={() => changeDisplay("none")} /> </span> </div> <div className="flex items-center mt-4"> <div class="mr-auto ml-14 text-left lg:flex max-lg:ml-0 max-lg:pt-14 max-lg:mt-1 max-lg:w-full max-lg:h-full max-lg:overflow-y-auto max-lg:bg-blue-100 max-lg:z-50 absolute max-lg:static max-lg:transition-all max-lg:duration-500 max-lg:ease-in" display={display} > <div> <ul className="xl:flex lg:flex max-lg:block"> <li className="pb-4"> <Link href={"/about"} legacyBehaviour className="px-0 pt-px pb-9 mr-10 font-medium leading-7 text-left cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" > <RedLink className={ router.pathname == "/about" ? styles.activeTab : "" } > About </RedLink> </Link> </li> <li className="pb-4"> <Link href={"/careers"} legacyBehaviour className="px-0 pt-px pb-9 mr-10 font-medium leading-7 text-left cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" > <RedLink className={ router.pathname == "/careers" ? styles.activeTab : "" } > Careers </RedLink> </Link> </li> <li className="pb-4"> <Link href={"/blogs"} legacyBehaviour className="px-0 pt-px pb-9 mr-10 font-medium leading-7 text-left cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" > <RedLink className={ router.pathname == "/blogs" ? 
styles.activeTab : "" } > Contact </RedLink> </Link> </li> </ul> </div> <div> <ul> <li className="pb-4"> <Link class=" p-0 px-0 pt-px pb-9 font-medium leading-7 cursor-pointer border-0 border-b-blue-700 border-t-transparent hover:border-y-8 max-lg:border-transparent" target="_blank" href="https://www.tbd.website/" > Test </Link> </li> </ul> </div> </div> </div> </nav> </div> ); } export default Navbar; I've tried following various youtube videos but no luck!
[ "useState hook syntax is as follows:\n[state,setter_Fn] = useState(initial_value)\n\nin the image onClick function, you must use the setter not the state value.\nconst [changeDisplay,setChangeDispaly] = useState(\"false\");\n\nonClick={() => setChangeDisplay(\"none\")}\n\n" ]
[ 0 ]
[]
[]
[ "nav", "next.js", "onclick", "reactjs" ]
stackoverflow_0074670674_nav_next.js_onclick_reactjs.txt
Q: WPF Error: Cannot find governing FrameworkElement for target element I've got a DataGrid with a row that has an image. This image is bound with a trigger to a certain state. When the state changes I want to change the image. The template itself is set on the HeaderStyle of a DataGridTemplateColumn. This template has some bindings. The first binding Day shows what day it is and the State changes the image with a trigger. These properties are set in a ViewModel. Properties: public class HeaderItem { public string Day { get; set; } public ValidationStatus State { get; set; } } this.HeaderItems = new ObservableCollection<HeaderItem>(); for (int i = 1; i < 15; i++) { this.HeaderItems.Add(new HeaderItem() { Day = i.ToString(), State = ValidationStatus.Nieuw, }); } Datagrid: <DataGrid x:Name="PersoneelsPrestatiesDataGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" AutoGenerateColumns="False" SelectionMode="Single" ItemsSource="{Binding CaregiverPerformances}" FrozenColumnCount="1" > <DataGridTemplateColumn HeaderStyle="{StaticResource headerCenterAlignment}" Header="{Binding HeaderItems[1]}" Width="50"> <DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <TextBox Text="{ Binding Performances[1].Duration,Converter={StaticResource timeSpanConverter},Mode=TwoWay}"/> </DataTemplate> </DataGridTemplateColumn.CellEditingTemplate> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock TextAlignment="Center" Text="{ Binding Performances[1].Duration,Converter={StaticResource timeSpanConverter}}"/> </DataTemplate> </DataGridTemplateColumn.CellTemplate> </DataGridTemplateColumn> </DataGrid> Datagrid HeaderStyleTemplate: <Style x:Key="headerCenterAlignment" TargetType="{x:Type DataGridColumnHeader}"> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type DataGridColumnHeader}"> <Grid> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition /> </Grid.RowDefinitions> <TextBlock Grid.Row="0" Text="{Binding Day}" /> <Image x:Name="imageValidation" Grid.Row="1" Width="16" Height="16" Source="{StaticResource imgBevestigd}" /> </Grid> <ControlTemplate.Triggers> <MultiDataTrigger > <MultiDataTrigger.Conditions> <Condition Binding="{Binding State}" Value="Nieuw"/> </MultiDataTrigger.Conditions> <Setter TargetName="imageValidation" Property="Source" Value="{StaticResource imgGeenStatus}"/> </MultiDataTrigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> Now when I start up the project the images don't show and I get this error: System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=HeaderItems[0]; DataItem=null; target element is 'DataGridTemplateColumn' (HashCode=26950454); target property is 'Header' (type 'Object') Why is this error showing? A: Sadly, any DataGridColumn hosted under DataGrid.Columns is not part of the visual tree and is therefore not connected to the data context of the DataGrid. So bindings do not work with their properties such as Visibility or Header, etc. (although these properties are valid dependency properties!). Now you may wonder: how is that possible? Isn't their Binding property supposed to be bound to the data context? Well, it simply is a hack. The binding does not really work. It is actually the DataGrid cells that copy / clone this binding object and use it for displaying their own contents!
So now back to solving your issue: I assume that HeaderItems is a property of the object that is set as the DataContext of your parent View. We can connect the DataContext of the view to any DataGridColumn via something we call a ProxyElement. The example below illustrates how to connect a logical child such as a ContextMenu or DataGridColumn to the parent View's DataContext <Window x:Class="WpfApplicationMultiThreading.Window5" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:vb="http://schemas.microsoft.com/wpf/2008/toolkit" Title="Window5" Height="300" Width="300" > <Grid x:Name="MyGrid"> <Grid.Resources> <FrameworkElement x:Key="ProxyElement" DataContext="{Binding}"/> </Grid.Resources> <Grid.DataContext> <TextBlock Text="Text Column Header" Tag="Tag Column Header"/> </Grid.DataContext> <ContentControl Visibility="Collapsed" Content="{StaticResource ProxyElement}"/> <vb:DataGrid AutoGenerateColumns="False" x:Name="MyDataGrid"> <vb:DataGrid.ItemsSource> <x:Array Type="{x:Type TextBlock}"> <TextBlock Text="1" Tag="1.1"/> <TextBlock Text="2" Tag="1.2"/> <TextBlock Text="3" Tag="2.1"/> <TextBlock Text="4" Tag="2.2"/> </x:Array> </vb:DataGrid.ItemsSource> <vb:DataGrid.Columns> <vb:DataGridTextColumn Header="{Binding DataContext.Text, Source={StaticResource ProxyElement}}" Binding="{Binding Text}"/> <vb:DataGridTextColumn Header="{Binding DataContext.Tag, Source={StaticResource ProxyElement}}" Binding="{Binding Tag}"/> </vb:DataGrid.Columns> </vb:DataGrid> </Grid> </Window> The view above would produce the same binding error that you found if I had not implemented the ProxyElement hack. The ProxyElement is any FrameworkElement that steals the DataContext from the main View and offers it to a logical child such as a ContextMenu or DataGridColumn. For that it must be hosted as the Content of an invisible ContentControl under the same View. I hope this guides you in the correct direction. A: A slightly shorter alternative to using a StaticResource as in the accepted answer is x:Reference: <StackPanel> <!--Set the DataContext here if you do not want to inherit the parent one--> <FrameworkElement x:Name="ProxyElement" Visibility="Collapsed"/> <DataGrid> <DataGrid.Columns> <DataGridTextColumn Header="{Binding DataContext.Whatever, Source={x:Reference ProxyElement}}" Binding="{Binding ...}" /> </DataGrid.Columns> </DataGrid> </StackPanel> The main advantage of this is: if you already have an element which is not a DataGrid's ancestor (i.e. not the StackPanel in the example above), you can just give it a name and use it as the x:Reference instead, hence not needing to define any dummy FrameworkElement at all. If you try referencing an ancestor, you will get a XamlParseException at run-time due to a cyclical dependency. A: The way without a proxy is to set bindings in the constructor: var i = 0; var converter = new BooleanToVisibilityConverter(); foreach(var column in DataGrid.Columns) { BindingOperations.SetBinding(column, DataGridColumn.VisibilityProperty, new Binding($"Columns[{i++}].IsSelected") { Source = ViewModel, Converter = converter, }); } A: The proxy element didn't work for me for a tooltip. 
For an Infragistics DataGrid I did this; you can easily adapt it to your kind of grid: <igDP:ImageField Label="_Invited" Name="Invited"> <igDP:Field.Settings> <igDP:FieldSettings> <igDP:FieldSettings.CellValuePresenterStyle> <Style TargetType="{x:Type igDP:CellValuePresenter}"> <Setter Property="ToolTip"> <Setter.Value> <Label Content="{Binding DataItem.InvitationSent, Converter={StaticResource dateTimeConverter}}"/> </Setter.Value> </Setter> </Style> </igDP:FieldSettings.CellValuePresenterStyle> </igDP:FieldSettings> </igDP:Field.Settings> </igDP:ImageField>
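A closely related variant of the proxy idea, widely used for exactly this error, is a Freezable-based BindingProxy; Freezable objects inherit the DataContext even when they live only in a resource dictionary. A sketch (the class name and resource key are conventions, not part of the original answers):

using System.Windows;

public class BindingProxy : Freezable
{
    protected override Freezable CreateInstanceCore() => new BindingProxy();

    // Data receives the inherited DataContext even though the proxy
    // is only a resource, because Freezables take part in inheritance.
    public object Data
    {
        get => GetValue(DataProperty);
        set => SetValue(DataProperty, value);
    }

    public static readonly DependencyProperty DataProperty =
        DependencyProperty.Register(nameof(Data), typeof(object), typeof(BindingProxy));
}

Declared as <local:BindingProxy x:Key="Proxy" Data="{Binding}"/> in the view's resources, a column header can then bind with Header="{Binding Data.HeaderItems[1], Source={StaticResource Proxy}}".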
WPF Error: Cannot find governing FrameworkElement for target element
I've got a DataGrid with a row that has an image. This image is bound with a trigger to a certain state. When the state changes I want to change the image. The template itself is set on the HeaderStyle of a DataGridTemplateColumn. This template has some bindings. The first binding Day shows what day it is and the State changes the image with a trigger. These properties are set in a ViewModel. Properties: public class HeaderItem { public string Day { get; set; } public ValidationStatus State { get; set; } } this.HeaderItems = new ObservableCollection<HeaderItem>(); for (int i = 1; i < 15; i++) { this.HeaderItems.Add(new HeaderItem() { Day = i.ToString(), State = ValidationStatus.Nieuw, }); } Datagrid: <DataGrid x:Name="PersoneelsPrestatiesDataGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" AutoGenerateColumns="False" SelectionMode="Single" ItemsSource="{Binding CaregiverPerformances}" FrozenColumnCount="1" > <DataGridTemplateColumn HeaderStyle="{StaticResource headerCenterAlignment}" Header="{Binding HeaderItems[1]}" Width="50"> <DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <TextBox Text="{ Binding Performances[1].Duration,Converter={StaticResource timeSpanConverter},Mode=TwoWay}"/> </DataTemplate> </DataGridTemplateColumn.CellEditingTemplate> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock TextAlignment="Center" Text="{ Binding Performances[1].Duration,Converter={StaticResource timeSpanConverter}}"/> </DataTemplate> </DataGridTemplateColumn.CellTemplate> </DataGridTemplateColumn> </DataGrid> Datagrid HeaderStyleTemplate: <Style x:Key="headerCenterAlignment" TargetType="{x:Type DataGridColumnHeader}"> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type DataGridColumnHeader}"> <Grid> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition /> </Grid.RowDefinitions> <TextBlock Grid.Row="0" Text="{Binding Day}" /> <Image x:Name="imageValidation" Grid.Row="1" Width="16" Height="16" Source="{StaticResource imgBevestigd}" /> </Grid> <ControlTemplate.Triggers> <MultiDataTrigger > <MultiDataTrigger.Conditions> <Condition Binding="{Binding State}" Value="Nieuw"/> </MultiDataTrigger.Conditions> <Setter TargetName="imageValidation" Property="Source" Value="{StaticResource imgGeenStatus}"/> </MultiDataTrigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> Now when I startup the project the images doesn't show and I get this error: System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=HeaderItems[0]; DataItem=null; target element is 'DataGridTemplateColumn' (HashCode=26950454); target property is 'Header' (type 'Object') Why is this error showing?
[ "Sadly any DataGridColumn hosted under DataGrid.Columns is not part of Visual tree and therefore not connected to the data context of the datagrid. So bindings do not work with their properties such as Visibility or Header etc (although these properties are valid dependency properties!). \nNow you may wonder how is that possible? Isn't their Binding property supposed to be bound to the data context? Well it simply is a hack. The binding does not really work. It is actually the datagrid cells that copy / clone this binding object and use it for displaying their own contents!\nSo now back to solving your issue, I assume that HeaderItems is a property of the object that is set as the DataContext of your parent View. We can connect the DataContext of the view to any DataGridColumn via something we call a ProxyElement.\nThe example below illustrates how to connect a logical child such as ContextMenu or DataGridColumn to the parent View's DataContext\n <Window x:Class=\"WpfApplicationMultiThreading.Window5\"\n xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"\n xmlns:x=\"http://schemas.microsoft.com/winfx/2006/xaml\" \n xmlns:vb=\"http://schemas.microsoft.com/wpf/2008/toolkit\"\n Title=\"Window5\" Height=\"300\" Width=\"300\" >\n <Grid x:Name=\"MyGrid\">\n <Grid.Resources>\n <FrameworkElement x:Key=\"ProxyElement\" DataContext=\"{Binding}\"/>\n </Grid.Resources>\n <Grid.DataContext>\n <TextBlock Text=\"Text Column Header\" Tag=\"Tag Columne Header\"/>\n </Grid.DataContext>\n <ContentControl Visibility=\"Collapsed\"\n Content=\"{StaticResource ProxyElement}\"/>\n <vb:DataGrid AutoGenerateColumns=\"False\" x:Name=\"MyDataGrid\">\n <vb:DataGrid.ItemsSource>\n <x:Array Type=\"{x:Type TextBlock}\">\n <TextBlock Text=\"1\" Tag=\"1.1\"/>\n <TextBlock Text=\"2\" Tag=\"1.2\"/>\n <TextBlock Text=\"3\" Tag=\"2.1\"/>\n <TextBlock Text=\"4\" Tag=\"2.2\"/>\n </x:Array>\n </vb:DataGrid.ItemsSource>\n <vb:DataGrid.Columns>\n <vb:DataGridTextColumn\n Header=\"{Binding DataContext.Text,\n Source={StaticResource ProxyElement}}\"\n Binding=\"{Binding Text}\"/>\n <vb:DataGridTextColumn\n Header=\"{Binding DataContext.Tag,\n Source={StaticResource ProxyElement}}\"\n Binding=\"{Binding Tag}\"/>\n </vb:DataGrid.Columns>\n </vb:DataGrid>\n </Grid>\n</Window>\n\nThe view above encountered the same binding error that you have found if I did not have implemented the ProxyElement hack. The ProxyElement is any FrameworkElement that steals the DataContext from the main View and offers it to the logical child such as ContextMenu or DataGridColumn. For that it must be hosted as a Content into an invisible ContentControl which is under the same View.\nI hope this guides you in correct direction.\n", "A slightly shorter alternative to using a StaticResource as in the accepted answer is x:Reference:\n<StackPanel>\n\n <!--Set the DataContext here if you do not want to inherit the parent one-->\n <FrameworkElement x:Name=\"ProxyElement\" Visibility=\"Collapsed\"/>\n\n <DataGrid>\n <DataGrid.Columns>\n <DataGridTextColumn\n Header=\"{Binding DataContext.Whatever, Source={x:Reference ProxyElement}}\"\n Binding=\"{Binding ...}\" />\n </DataGrid.Columns>\n </DataGrid>\n\n</StackPanel>\n\nThe main advantage of this is: if you already have an element which is not a DataGrid's ancestor (i.e. 
not the StackPanel in the example above), you can just give it a name and use it as the x:Reference instead, hence not needing to define any dummy FrameworkElement at all.\nIf you try referencing an ancestor, you will get a XamlParseException at run-time due to a cyclical dependency.\n", "The way without a proxy is to set bindings in the constructor:\nvar i = 0;\nvar converter = new BooleanToVisibilityConverter();\nforeach(var column in DataGrid.Columns)\n{\n BindingOperations.SetBinding(column, DataGridColumn.VisibilityProperty, new Binding($\"Columns[{i++}].IsSelected\")\n { \n Source = ViewModel,\n Converter = converter,\n });\n}\n\n", "The Proxy Element didn't work for me, for a tooltip. For an infragistics DataGrid I did this, you might change it easily to your kind of grid:\n<igDP:ImageField Label=\"_Invited\" Name=\"Invited\">\n <igDP:Field.Settings>\n <igDP:FieldSettings>\n <igDP:FieldSettings.CellValuePresenterStyle>\n <Style TargetType=\"{x:Type igDP:CellValuePresenter}\">\n <Setter Property=\"ToolTip\">\n <Setter.Value>\n <Label Content=\"{Binding DataItem.InvitationSent, Converter={StaticResource dateTimeConverter}}\"/>\n </Setter.Value>\n </Setter>\n </Style>\n </igDP:FieldSettings.CellValuePresenterStyle>\n </igDP:FieldSettings>\n </igDP:Field.Settings>\n </igDP:ImageField>\n\n" ]
[ 176, 18, 0, 0 ]
[]
[]
[ "binding", "datagrid", "image", "multidatatrigger", "wpf" ]
stackoverflow_0007660967_binding_datagrid_image_multidatatrigger_wpf.txt
Q: How to read a file line by line, append each line to a vector and then implement MPI code So I'm attempting to write a C program that reads in a file of integers, where the first value is the count of integers that follow and the subsequent lines are the integers themselves, for example: 4 2 7 8 17 or 3 9 23 14 What I want to do is read in the file and append each line to a vector. I'll later split the vector into equal sizes and distribute them across a number of MPI processes for further tasks. I have currently tried counting the number of lines in the file and then creating a vector to store all the elements of the file via a for loop. However, this has not worked. I would greatly appreciate any help. My attempt is below: #include <stdlib.h> #include <stdio.h> #include <string.h> #include <mpi.h> int main( int argc, char *argv[]) { int rank, world_size; int root; int i; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank); if (rank == 0) { char Line[100]; char c; int count_lines=0; FILE *fp = fopen("Input_16.txt","r"); for (c = getc(fp); c != EOF; c = getc(fp)) if (c == '\n') // Increment count if this character is newline count_lines = count_lines + 1; int array[count_lines]; for (i=0; i<count_lines; i++) array[i]=fgets(Line,100,fp); printf("Prints: %c \n",array[i]); } MPI_Finalize(); } A: I don't know anything about MPI but here is how you would read the file: int read_file(const char *path, size_t *len, int **a) { *a = NULL; FILE *fp = fopen(path,"r"); if(!fp) return 0; if(fscanf(fp, "%zu", len) != 1) { printf("fscanf of len failed\n"); goto err; } if(!*len) { printf("len == 0\n"); goto err; } *a = malloc(*len * sizeof **a); if(!*a) { printf("malloc failed\n"); goto err; } for(size_t i = 0; i < *len; i++) { if(fscanf(fp, "%d", &(*a)[i]) != 1) { printf("fscanf of item %zu failed\n", i); goto err; } } fclose(fp); return 1; err: free(*a); if(fp) fclose(fp); return 0; } int main( int argc, char *argv[]) { size_t len; int *a; if(!read_file("Input_16.txt", &len, &a)) { printf("file read failed\n"); return 1; } for(size_t i = 0; i < len; i++) { printf("%d\n", a[i]); } } and an example run: 2 7 8 17
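Since the stated goal is to distribute the numbers across MPI processes, here is a minimal sketch of the step after the answer's read_file call: rank 0 broadcasts the count and then the data to every rank (error handling omitted; MPI_Scatterv would instead hand each rank only its own slice):

#include <mpi.h>
#include <stdlib.h>

/* called by every rank; only rank 0 has valid *len / *a on entry */
void share_data(int rank, size_t *len, int **a)
{
    int n = (rank == 0) ? (int)*len : 0;

    /* first tell every rank how many integers are coming */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank != 0)
        *a = malloc(n * sizeof **a);
    *len = (size_t)n;

    /* then send the actual values */
    MPI_Bcast(*a, n, MPI_INT, 0, MPI_COMM_WORLD);
}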
How to read a file line by line, append each line to a vector and then implement MPI code
So I'm attempting to make a C program to read in a file which is a number of integers in the format where the first value is the length of the file and the following lines are random integers for example: 4 2 7 8 17 or 3 9 23 14 What I want to do is to read in the file, append each line to a vector. I'll later split the vector into equal sizes and distribute them across a number of MPI processes for further tasks. I currently have tried counting the number of lines in the file and then creating a vector to store all the elements of the file via a for loop. However this has not worked. I would greatly appreciate any help. My attempt is below: #include <stdlib.h> #include <stdio.h> #include <string.h> #include <mpi.h> int main( int argc, char *argv[]) { int rank, world_size; int root; int i; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank); if (rank == 0) { char Line[100]; char c; int count_lines=0; FILE *fp = fopen("Input_16.txt","r"); for (c = getc(fp); c != EOF; c = getc(fp)) if (c == '\n') // Increment count if this character is newline count_lines = count_lines + 1; int array[count_lines]; for (i=0; i<count_lines; i++) array[i]=fgets(Line,100,fp); printf("Prints: %c \n",array[i]); } MPI_Finalize(); }
[ "I don't know anything about MPI but here is how you would read the file:\nint read_file(const char *path, size_t *len, int **a) {\n *a = NULL;\n FILE *fp = fopen(path,\"r\");\n if(!fp)\n return 0;\n if(fscanf(fp, \"%zu\", len) != 1) {\n printf(\"fscanf of len failed\\n\");\n goto err;\n }\n if(!*len) {\n printf(\"len == 0\\n\");\n goto err;\n }\n *a = malloc(*len * sizeof **a);\n if(!*a) {\n printf(\"malloc failed\\n\");\n goto err;\n }\n for(size_t i = 0; i < *len; i++) {\n if(fscanf(fp, \"%d\", &(*a)[i]) != 1) {\n printf(\"fscanf of item %zu failed\\n\", i);\n goto err;\n }\n }\n fclose(fp);\n return 1;\nerr:\n free(*a);\n if(fp) fclose(fp);\n return 0;\n}\n\nint main( int argc, char *argv[]) {\n size_t len;\n int *a;\n if(!read_file(\"Input_16.txt\", &len, &a)) {\n printf(\"file read failed\\n\");\n return 1;\n }\n\n for(size_t i = 0; i < len; i++) {\n printf(\"%d\\n\", a[i]);\n }\n}\n\nand example run:\n2\n7\n8\n17\n\n" ]
[ 1 ]
[]
[]
[ "c", "file", "mpi", "parallel_processing" ]
stackoverflow_0074670474_c_file_mpi_parallel_processing.txt
Q: Calculate the 3rd point of an equilateral triangle from two points at any angle, pointing the "correct" way for a Koch Snowflake Perhaps the question title needs some work. For context, this is for the purpose of a Koch Snowflake (using C-like math syntax in a formula node in LabVIEW), which is why the triangle must face the correct way. (Given 2 points, an equilateral triangle may face one of two directions.) To briefly go over the algorithm: I have an array of 4 predefined coordinates initially forming a triangle, the first "generation" of the fractal. To generate the next iteration, one must for each line (pair of coordinates) get the 1/3rd and 2/3rd midpoints to be the base of a new triangle on that face, and then calculate the position of the 3rd point of the new triangle (the subject of this question). Do this for all current sides, concatenating the resulting arrays into a new array that forms the next generation of the snowflake. The array of coordinates is in clockwise order, e.g. each vertex travelling clockwise around the shape corresponds to the next item in the array, something like this for the 2nd generation: This means that when going to add a triangle to a face, e.g. between, in that image, the vertices labelled 0 and 1, you first get the midpoints, which I'll call "c" and "d"; you can then rotate "d" anticlockwise around "c" by 60 degrees to find where the new triangle's top point will be (labelled e). I believe this should hold (i.e. rotating the later point 60 degrees anticlockwise around the earlier one) anywhere around the snowflake; however, currently my maths only seems to work in the case where the initial triangle has a vertical side: [(0,0), (0,1)]. Otherwise the triangle goes off in some other direction. I believe I have correctly constructed my loops such that the triangle-generating VI (virtual instrument, effectively a "function" in written languages) will work on each line segment sequentially, but my actual calculation isn't working and I am at a loss as to how to get it in the right direction. Below is my current maths for calculating the triangle points from a single line segment, where a and b are the original vertices of the segment, c and d form the new triangle base that is in line with the original line, and e is the part that sticks out. I don't want to call it "top" as for a triangle formed from a segment going from upper-right to lower-left, the "top" will stick down. cx = ax + (bx - ax)/3; dx = ax + 2*(bx - ax)/3; cy = ay + (by - ay)/3; dy = ay + 2*(by - ay)/3; dX = dx - cx; dY = dy - cy; ex = (cos(1.0471975512) * dX + sin(1.0471975512) * dY) + cx; ey = (sin(1.0471975512) * dX + cos(1.0471975512) * dY) + cy; note 1.0471975512 is just 60 degrees in radians. Currently for generation 2 it makes this: (note the seemingly separated triangle to the left is formed by the 2 triangles on the top and bottom having their e vertices meet in the middle and is not actually an independent triangle.) I suspect the necessity for having slightly different equations depending on whether ax or bx is larger, etc., perhaps something to do with how the periodicity of sin/cos may need to be accounted for (something about quadrants in polar coordinates?), as it looks like the misplaced triangles are at 60 degrees, just that the angle is between the wrong lines. However this is a guess and I'm just not able to imagine how to do this programmatically, let alone on paper. 
Thankfully the maths formula node allows for if and else statements, which would let this be implemented if that is indeed the problem, but as said I am not awfully familiar with adjusting for what I'll naively call the "quadrants thing", and am unsure how to know which quadrant one is in for each case. This was a long and rambling question which inevitably tempts nonsense, so if you have any clarifying questions please comment and I'll try to fix anything/everything. A: Answering my own question thanks to @JohanC. Unsurprisingly, this was a case of making many tiny adjustments and giving up just before getting it right. The correct formula was this: ex = (cos(1.0471975512) * dX + sin(1.0471975512) * dY) + cx; ey = (-sin(1.0471975512) * dX + cos(1.0471975512) * dY) + cy; just adding a minus to the second sine function. Note that if one were travelling anticlockwise then one would want to rotate points clockwise, so you instead have the 1st sine function negated and the second one positive. A: You can calculate the 3rd point of an equilateral triangle by rotating one of the points around the other by 60 degrees, using a known equation from trigonometry. So this is also an answer to how to rotate one point around another point. It's well known because it's something you learn in high school in analytic geometry class: The code uses JavaScript; I'm answering here because I was not able to find the answer elsewhere. function deg2rad(deg) { return deg * Math.PI / 180; } function rotate(p1, p2, angle) { const a = deg2rad(angle); const x = (p1.x - p2.x) * Math.cos(a) - (p1.y - p2.y) * Math.sin(a) + p2.x; const y = (p1.x - p2.x) * Math.sin(a) + (p1.y - p2.y) * Math.cos(a) + p2.y; return { x, y }; } where the angle is 60 degrees, but it can be hardcoded in radians as 1.0471975512.
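For reference, here is the general point-about-point rotation that both answers are instances of, written in the same C-like style as the formula node. The angle a is in radians; positive a rotates anticlockwise in a conventional y-up coordinate system, and the sense flips if your display's y axis points down. The names px, py and a are illustrative, not from the original post:

// rotate point (px, py) around centre (cx, cy) by angle a (radians)
dX = px - cx;
dY = py - cy;
ex = cos(a) * dX - sin(a) * dY + cx;
ey = sin(a) * dX + cos(a) * dY + cy;
// negate a (equivalently, both sin terms) to rotate the other way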
Calculate the 3rd point of an equilateral triangle from two points at any angle, pointing the "correct" way for a Koch Snowflake
Perhaps the question title needs some work. For context this is for the purpose of a Koch Snowflake (using C-like math syntax in a formula node in LabVIEW), thus why the triangle must be the correct way. (As given 2 points an equilateral triangle may be in one of two directions.) To briefly go over the algorithm: I have an array of 4 predefined coordinates initially forming a triangle, the first "generation" of the fractal. To generate the next iteration, one must for each line (pair of coordinates) get the 1/3rd and 2/3rd midpoints to be the base of a new triangle on that face, and then calculate the position of the 3rd point of the new triangle (the subject of this question). Do this for all current sides, concatenating the resulting arrays into a new array that forms the next generation of the snowflake. The array of coordinates is in a clockwise order, e.g. each vertex travelling clockwise around the shape corresponds to the next item in the array, something like this for the 2nd generation: This means that when going to add a triangle to a face, e.g. between, in that image, the vertices labelled 0 and 1, you first get the midpoints which I'll call "c" and "d", you can just rotate "d" anti-clockwise around "c" by 60 degrees to find where the new triangle top point will be (labelled e). I believe this should hold (e.g. 60 degrees anticlockwise rotating the later point around the earlier) for anywhere around the snowflake, however currently my maths only seems to work in the case where the initial triangle has a vertical side: [(0,0), (0,1)]. Else wise the triangle goes off in some other direction. I believe I have correctly constructed my loops such that the triangle generating VI (virtual instrument, effectively a "function" in written languages) will work on each line segment sequentially, but my actual calculation isn't working and I am at a loss as to how to get it in the right direction. Below is my current maths for calculating the triangle points from a single line segment, where a and b are the original vertices of the segment, c and d form new triangle base that are in-line with the original line, and e is the part that sticks out. I don't want to call it "top" as for a triangle formed from a segment going from upper-right to lower-left, the "top" will stick down. cx = ax + (bx - ax)/3; dx = ax + 2*(bx - ax)/3; cy = ay + (by - ay)/3; dy = ay + 2*(by - ay)/3; dX = dx - cx; dY = dy - cy; ex = (cos(1.0471975512) * dX + sin(1.0471975512) * dY) + cx; ey = (sin(1.0471975512) * dX + cos(1.0471975512) * dY) + cy; note 1.0471975512 is just 60 degrees in radians. Currently for generation 2 it makes this: (note the seemingly separated triangle to the left is formed by the 2 triangles on the top and bottom having their e vertices meet in the middle and is not actually an independent triangle.) I suspect the necessity for having slightly different equations depending on weather ax or bx is larger etc, perhaps something to do with how the periodicity of sin/cos may need to be accounted for (something about quadrants in spherical coordinates?), as it looks like the misplaced triangles are at 60 degrees, just that the angle is between the wrong lines. However this is a guess and I'm just not able to imagine how to do this programmatically let alone on paper. 
Thankfully the maths formula node allows for if and else statements which would allow for this to be implemented if it's the case but as said I am not awfully familiar with adjusting for what I'll naively call the "quadrants thing", and am unsure how to know which quadrant one is in for each case. This was a long and rambling question which inevitably tempts nonsense so if you've any clarifying questions please comment and I'll try to fix anything/everything.
[ "Answering my own question thanks to @JohanC, Unsurprisingly this was a case of making many tiny adjustments and giving up just before getting it right.\nThe correct formula was this: \nex = (cos(1.0471975512) * dX + sin(1.0471975512) * dY) + cx;\ney = (-sin(1.0471975512) * dX + cos(1.0471975512) * dY) + cy;\n\njust adding a minus to the second sine function. Note that if one were travelling anticlockwise then one would want to rotate points clockwise, so you instead have the 1st sine function negated and the second one positive. \n\n", "You can calculate the 3rd point of an Equilateral triangle by rotating it by 60 degrees using a known equation from trigonometry.\nSo this is also an answer to how to rotate one point by another point.\nIt's known because it's something you have in high school in analytic geometry class:\nThe code uses JavaScript, I'm answering here because I was not able to find the answer.\nfunction deg2rad(deg) {\n return deg * Math.PI / 180;\n}\n\nfunction rotate(p1, p2, angle) {\n const a = deg2rad(angle);\n const x = (p1.x - p2.x) * Math.cos(a) - (p1.y - p2.y) * Math.sin(a) + p2.x;\n const y = (p1.x - p2.x) * Math.sin(a) + (p1.y - p2.y) * Math.cos(a) + p2.y;\n return { x, y };\n}\n\nwhere the angle is 60 degrees but it can be hardcoded as radians 1.0471975512.\n" ]
[ 4, 0 ]
[]
[]
[ "fractals", "geometry", "labview", "math" ]
stackoverflow_0059041539_fractals_geometry_labview_math.txt
Q: Need help in converting Azure Powershell to CLI I am very new to Azure CLI and am having trouble converting the following command: Set-AzCognitiveServicesAccount -ResourceGroupName rg-xxx -Name cs-xxx -DisableLocalAuth $false Any help will be greatly appreciated. Thanks in advance. BR A: To convert the above PowerShell command to Azure CLI, you can use the following command: az cognitiveservices account update --resource-group rg-xxx --name cs-xxx --disable-local-auth false Note that passing --disable-local-auth false enables local authentication for the Cognitive Services account. If you want to disable local authentication, use --disable-local-auth true instead. I hope this helps. Let me know if you have any other questions.
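For completeness, a short sketch of both directions plus a way to check the stored setting afterwards; the --query path assumes the underlying ARM property is named disableLocalAuth:

# enable local authentication
az cognitiveservices account update --resource-group rg-xxx --name cs-xxx --disable-local-auth false

# disable local authentication
az cognitiveservices account update --resource-group rg-xxx --name cs-xxx --disable-local-auth true

# verify the current setting
az cognitiveservices account show --resource-group rg-xxx --name cs-xxx --query properties.disableLocalAuth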
Need help in converting Azure Powershell to CLI
I am very new to Azure CLI and having problems in converting the following command Set-AzCognitiveServicesAccount -ResourceGroupName rg-xxx -Name cs-xxx -DisableLocalAuth $false Any help will be greatly appreciated. Thanks in advance. BR
[ "To convert the above PowerShell command to Azure CLI, you can use the following Azure CLI command:\naz cognitiveservices account update --resource-group rg-xxx --name cs-xxx --disable-local-auth false\n\nNote that the --disable-local-auth option is used to enable local authentication for the Cognitive Services account. If you want to disable local authentication, you can use the --disable-local-auth true option instead.\nI hope this helps. Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_cli" ]
stackoverflow_0074670797_azure_azure_cli.txt
Q: Laravel file permission incompatibility between user running the jobs and the webserver (apache) user I have a Laravel app on a VPS with apache2 and supervisord configured. The file permissions setup is the following: the whole project directory is owned by the www-data group; Apache uses the www-data user; me and supervisor use the app user, which belongs to the www-data group. Everything worked fine until I had to handle some files both inside a job and inside a request handled by the web server. This is a summary of the flow: A user uploads a file. I save the file in a temp directory on the local disk. Storage::disk('local')->put('new-directory/filename', $fileContent); I dispatch a job that should process the file. The job should delete the file at the end of the processing: Storage::disk('local')->delete('new-directory/filename'); But what I actually got is a permission error, because the file is owned by the www-data user, and the app user that supervisor uses to work the queue does not have permission to delete the file. I tried using the 'public' visibility: ->put('new-directory/filename', $fileContent, 'public') but the files are still protected. Here is the output of ll in the directory: -rw-r--r-- 1 www-data www-data 60780 Dec 5 14:15 $filename Is there a way to solve this file permission issue between the user that runs the queue and the webserver user? A: You can set the user in the Supervisor config file: [program:laravel-worker] process_name=%(program_name)s_%(process_num)02d command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3 autostart=true autorestart=true user=www-data numprocs=1 redirect_stderr=true stdout_logfile=/home/forge/app.com/worker.log A: When running queues manually, you might still need to run as the www-data user. sudo -su www-data php artisan queue:work That's if your log files are owned by www-data, which is usually the case.
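Besides pinning the worker's user, another common fix is to make everything the local disk writes group-writable, so any member of the www-data group can delete the files. A sketch, assuming your Laravel version's Flysystem local driver supports the permissions option:

// config/filesystems.php
'disks' => [
    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
        // files and dirs created through Storage become group-writable,
        // so both the Apache user and the queue user can manage them
        'permissions' => [
            'file' => ['public' => 0664, 'private' => 0664],
            'dir'  => ['public' => 0775, 'private' => 0775],
        ],
    ],
],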
Laravel file permission incompatibility between user running the jobs and the webserver (apache) user
I have a laravel app on a VPS with apache2 and supervisord configured. The file permissions setup is the following: the whole project directory is owned by the www-data group Apache uses the www-data user Me and supervisor use the app user, which belongs to the www-data group. Everything worked file until I had to handle some files both inside a job and inside a request handled by the web server. This is a summary of the flow: An user uploads a file. I save the file on a temp directory in the local disk. Storage::disk('local')->put('new-directory/filename', $fileContent); I dispatch a job that should elaborate the file The job should delete the file at the end of the elaboration: Storage::disk('local')->delete('new-directory/filename'); But actually what I got is a permission error, because the file is owned by the www-user and the app user that is used by supervisor to work the queue does not have the permissions to delete the file. I tried using the 'public' visibility: ->put('new-directory/filename', $fileContent, 'public') but the files are still protected. Here is the outpot of ll in the directory: -rw-r--r-- 1 www-data www-data 60780 Dec 5 14:15 $filename Is there a way to solve this file permission issue between the user that runs the queue and the webserver user?
[ "You can add in the config file in supervisor. \n[program:laravel-worker]\nprocess_name=%(program_name)s_%(process_num)02d\ncommand=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3\nautostart=true\nautorestart=true\nuser=www-data\nnumprocs=1\nredirect_stderr=true\nstdout_logfile=/home/forge/app.com/worker.log\n\n", "When running queues manually, you might still need to run as www-data user.\nsudo -su www-data php artisan queue:work\n\nThat’s if your log files are owned www-data which is usually the case.\n" ]
[ 2, 0 ]
[]
[]
[ "file_permissions", "laravel", "supervisord" ]
stackoverflow_0059196317_file_permissions_laravel_supervisord.txt
Q: How to indicate dependent entity in one to one relationship using shadow foreign keys only? I have specified: public class Cart { public int Id { get; set; } public virtual ApplicationUser ApplicationUser { get; set; } } public class ApplicationUser : IdentityUser { public virtual Cart Cart { get; set; } } ... modelBuilder.Entity<Cart>(e => { e.HasOne(e => e.ApplicationUser).WithOne(re => re.Cart); e.Property(e => e.ApplicationUser).IsRequired(); }); My intention was that a Cart cannot exist without an ApplicationUser associated with it, but an ApplicationUser can exist without a Cart. Unfortunately, when I generate a migration I get this error: 'ApplicationUser' cannot be used as a property on entity type 'Cart' because it is configured as a navigation. So it seems that I can't express that ApplicationUser is indeed required, because Cart is the dependent entity. How can I fix that without declaring explicit foreign key properties, using only shadow ones? A: The shadow property can be specified by using a string argument in HasForeignKey: modelBuilder.Entity<ApplicationUser>().HasOne(e => e.Cart) .WithOne(e => e.ApplicationUser) .HasForeignKey<Cart>("ApplicationUserId"); The foreign key field in the database has a NOT NULL specification.
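To state the "Cart requires a user" intent explicitly rather than relying on convention, the relationship itself can also be marked required, which keeps the shadow foreign key column NOT NULL. A sketch built on the answer's configuration:

modelBuilder.Entity<ApplicationUser>()
    .HasOne(u => u.Cart)
    .WithOne(c => c.ApplicationUser)
    .HasForeignKey<Cart>("ApplicationUserId") // shadow foreign key property
    .IsRequired();                            // dependent Cart must have a user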
How to indicate dependent entity in one to one relationship using shadow foreign keys only?
I have specified: public class Cart { public int Id { get; set; } public virtual ApplicationUser ApplicationUser { get; set; } } public class ApplicationUser : IdentityUser { public virtual Cart Cart { get; set; } } ... modelBuilder.Entity<Cart>(e => { e.HasOne(e => e.ApplicationUser).WithOne(re => re.Cart); e.Property(e => e.ApplicationUser).IsRequired(); }); my intention was that Cart cannot exist without ApplicationUser associated with it, but ApplicationUser can. Unfortunately when I generate migration I get error: 'ApplicationUser' cannot be used as a property on entity type 'Cart' because it is configured as a navigation. So it seems that I can't express that ApplicationUser is indeed required because the Cart is dependent entity. How to fix that not having to declare explicit foreign keys, just using shadow ones?
[ "The shadow property can be specified by using a string argument in HasForeignKey:\nmodelBuilder.Entity<ApplicationUser>().HasOne(e => e.Cart)\n .WithOne(e => e.ApplicationUser)\n .HasForeignKey<Cart>(\"ApplicationUserId\");\n\nThe foreign key field in the database has a NOT NULL specification.\n" ]
[ 0 ]
[]
[]
[ ".net_core", "c#", "ef_fluent_api", "entity_framework_core", "shadow_foreign_key" ]
stackoverflow_0074669722_.net_core_c#_ef_fluent_api_entity_framework_core_shadow_foreign_key.txt
Q: Programmatically inserting a widget into Wordpress sidebar I have been using this code on PHP 7.0, but decided to upgrade to 7.4 tonight. This code automatically inserts widgets into a WordPress sidebar, but it no longer works. function insert_widget_in_sidebar( $widget_id, $widget_data, $sidebar ) { // Retrieve sidebars, widgets and their instances $sidebars_widgets = get_option( 'sidebars_widgets', array() ); $widget_instances = get_option( 'widget_' . $widget_id, array() ); // Retrieve the key of the next widget instance $numeric_keys = array_filter( array_keys( $widget_instances ), 'is_int' ); $next_key = $numeric_keys ? max( $numeric_keys ) + 1 : 2; // Add this widget to the sidebar if ( ! isset( $sidebars_widgets[ $sidebar ] ) ) { $sidebars_widgets[ $sidebar ] = array(); } $sidebars_widgets[ $sidebar ][] = $widget_id . '-' . $next_key; // Add the new widget instance $widget_instances[ $next_key ] = $widget_data; // Store updated sidebars, widgets and their instances update_option( 'sidebars_widgets', $sidebars_widgets ); update_option( 'widget_' . $widget_id, $widget_instances ); } From my research, it seems to be a problem with "[]" not initializing arrays anymore. I've tried every single way I know how, but can't get this to work. I've always initialized arrays with [], so I'm sort of lost. This is an example of the input data: insert_widget_in_sidebar('recent-posts',array('title' => $recent_posts_title,'number' => $recent_posts_number,'show_date' => $show_date),$sidebar_name); Where $sidebar_name would be, for example, 'right-sidebar'. A: I was finally able to solve the problem by properly initializing the arrays... this is an edited version of the code made for my purposes, but here are the main changes: if ( !isset( $sidebars_widgets[$sidebar] ) ) { $sidebars_widgets = array(); $sidebars_widgets[$sidebar] = array(); } $sidebars_widgets[$sidebar][] = $widget_id . '-' . '1'; // Add the new widget instance $widget_instances = array(); $widget_instances[1] = $widget_data; A: Well, this code no longer seems to work in the latest WP version. Widgets are dead... hello blocks! You can easily add a widget to any of your sidebars using my snippet, which can be improved: (the example adds an HTML widget to sidebar-2; put it in a function if needed) // get the stored widget blocks $widget_block = get_option('widget_block'); $content = 'azerty'; // check for existing value before processing $id_exist = array_search(array("content" => $content), $widget_block); if (!is_numeric($id_exist)) { // add a widget block $widget_block[] = array("content" => $content); update_option('widget_block', $widget_block); // get block id $new_sidebar2_id = array_search(array("content" => $content), $widget_block); // get sidebar-2 $sidebars_widgets = wp_get_sidebars_widgets(); // check for existing value before processing $id_exist = array_search('block-' . $new_sidebar2_id, $sidebars_widgets["sidebar-2"]); if (!is_numeric($id_exist)) { //update sidebar-2 $sidebars_widgets["sidebar-2"][] = 'block-' . $new_sidebar2_id; wp_set_sidebars_widgets($sidebars_widgets); } }
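When debugging either approach it helps to inspect the stored options directly; assuming WP-CLI is available, something like:

# dump the sidebar-to-widget mapping
wp option get sidebars_widgets --format=json

# dump the block widget instances used by the block-based widgets
wp option get widget_block --format=json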
Programmatically inserting a widget into Wordpress sidebar
I have been using this code on PHP 7.0, but decided to upgrade to 7.4 tonight. This code automatically inserts widgets into Wordpress sidebar, but it no longer works. function insert_widget_in_sidebar( $widget_id, $widget_data, $sidebar ) { // Retrieve sidebars, widgets and their instances $sidebars_widgets = get_option( 'sidebars_widgets', array() ); $widget_instances = get_option( 'widget_' . $widget_id, array() ); // Retrieve the key of the next widget instance $numeric_keys = array_filter( array_keys( $widget_instances ), 'is_int' ); $next_key = $numeric_keys ? max( $numeric_keys ) + 1 : 2; // Add this widget to the sidebar if ( ! isset( $sidebars_widgets[ $sidebar ] ) ) { $sidebars_widgets[ $sidebar ] = array(); } $sidebars_widgets[ $sidebar ][] = $widget_id . '-' . $next_key; // Add the new widget instance $widget_instances[ $next_key ] = $widget_data; // Store updated sidebars, widgets and their instances update_option( 'sidebars_widgets', $sidebars_widgets ); update_option( 'widget_' . $widget_id, $widget_instances ); } From my research, it seems to be a problem with "[]" not initializing arrays anymore. I've tried every single way I know how, but can't get this to work. I've always initialized arrays with [], so I'm sort of lost. This is an example of the input data: insert_widget_in_sidebar('recent-posts',array('title' => $recent_posts_title,'number' => $recent_posts_number,'show_date' => $show_date),$sidebar_name); Where $sidebar_name would be, for example, 'right-sidebar'.
[ "I was finally able to solve the problem by properly initializing the arrays... this is an edited version of the code made for my purposes, but here are the main changes:\nif ( !isset( $sidebars_widgets[$sidebar] ) ) {\n $sidebars_widgets = array();\n $sidebars_widgets[$sidebar] = array();\n}\n$sidebars_widgets[$sidebar][] = $widget_id . '-' . '1';\n\n// Add the new widget instance\n$widget_instances = array();\n$widget_instances[1] = $widget_data;\n\n", "Well, this code seems no longer working in latest wp version. Widgets are dead... hello blocks!\nYou can easily add a widget to any of your sidebars using my snipped code, which can be improved:\n(the example adds a html widget to the sidebar-2 - put it in a function if needed)\n// get widget blocks for sidebar-2\n$widget_block = get_option('widget_block');\n$content = 'azerty';\n\n// check for existing value before processing\n$id_exist = array_search(array(\"content\" => $content), $widget_block);\nif (!is_numeric($id_exist)) {\n\n // add a widget block\n $widget_block[] = array(\"content\" => $content);\n update_option('widget_block', $widget_block);\n\n // get block id\n $new_sidebar2_id = array_search(array(\"content\" => $content), $widget_block);\n\n // get sidebar-2\n $sidebars_widgets = wp_get_sidebars_widgets();\n\n // check for existing value before processing\n $id_exist = array_search('block-' . $new_sidebar2_id, $sidebars_widgets[\"sidebar-1\"]);\n\n if (!is_numeric($id_exist)) {\n\n //update sidebar-2\n $sidebars_widgets[\"sidebar-1\"][] = 'block-' . $new_sidebar2_id;\n wp_set_sidebars_widgets($sidebars_widgets);\n }\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "php", "wordpress" ]
stackoverflow_0072076330_php_wordpress.txt
Q: Django add element to dynamic form Is there a way to add an image element for each input in a form? I need to have an image alongside each input of the form. I created this sample form and model that works the same way as in my code. The result I'd like to get is this.
Sample form code
class CreateProfileForm(forms.ModelForm):
    fieldsets = [
        ("Fieldset 1", {'fields': [
            'first_name', 'last_name'
        ]}),
        ("Fieldset 2", {'fields': [
            'address', 'phone'
        ]}),
    ]

    class Meta:
        model = Profile
        fields = '__all__'

Sample model code
class Profile(models.Model):
    # FIELDSET 1
    first_name = models.CharField(max_length=50, verbose_name="First name")
    last_name = models.CharField(max_length=50, verbose_name="Last name")

    # FIELDSET 2
    address = models.CharField(max_length=50, verbose_name="Address")
    email = models.EmailField(verbose_name="Email")

The view
def create_profile(request):
    form = CreateProfileForm()
    return render(request, 'form.html', {'form': form})

The template with the form
{% load forms_fieldset static %}
<div class="form data-form">
    <form>
        {{ form|fieldset:'#000080' }}
        <div class="form-group">
            <button name="upload" type="submit">Create</button>
        </div>
    </form>
</div>

A: To add an image element for each input in a form, you can render the form fields manually as an HTML table in your template. Each input becomes a table row, and you can add an img element to each row to display the image.
Here is an example of how you could do this in your template:
<div class="form data-form">
    <form>
        {% for fieldset in form.fieldsets %}
        <fieldset style="border-color: #000080;">
            <legend style="color: #000080;">{{ fieldset.legend }}</legend>
            <table>
                {% for field in fieldset %}
                <tr>
                    <td><img src="{{ field.image_url }}" alt="{{ field.label }}" /></td>
                    <td>{{ field }}</td>
                    <td>{{ field.help_text }}</td>
                </tr>
                {% endfor %}
            </table>
        </fieldset>
        {% endfor %}

        <div class="form-group">
            <button name="upload" type="submit">Create</button>
        </div>
    </form>
</div>

In the code above, each fieldset is rendered as its own HTML table with an image cell next to every field. Note that neither the fieldset iteration nor field.image_url is built into Django forms: the fieldsets loop relies on the forms_fieldset helpers the question already loads, and image_url is an attribute you must attach to the fields yourself (see the sketch below).
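Since image_url is not a standard form-field attribute, here is one way to provide it, as a minimal sketch: attach the URL to each field in the form's __init__. The FIELD_IMAGES mapping and the image file names are hypothetical placeholders:
from django import forms
from django.templatetags.static import static

class CreateProfileForm(forms.ModelForm):
    # Hypothetical mapping of field names to images in your static files.
    FIELD_IMAGES = {
        "first_name": "img/first_name.png",
        "last_name": "img/last_name.png",
        "address": "img/address.png",
    }

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        for name, field in self.fields.items():
            # Attach an image URL for the template to read.
            field.image_url = static(self.FIELD_IMAGES.get(name, "img/default.png"))

    class Meta:
        model = Profile
        fields = "__all__"

If your template loop yields BoundField objects (as iterating over a form normally does), the attribute lives on the underlying field, so read it as {{ field.field.image_url }}.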
Django add element to dynamic form
Is there a way to add image element for each input in form? I need to have an image alongside each input from form. I created this sample form and model that works the same way as in my code. The result I'd like to get is this. Sample form code class CreateProfileForm(forms.ModelForm): fieldsets = [ ("Fieldset 1", {'fields': [ 'first_name', 'last_name' ]}), ("Fieldset 2", {'fields': [ 'address', 'phone' ]}), ] class Meta: model = Profile fields = '__all__' Sample model code class Profile(models.Model): # FIELDSET 1 first_name = models.CharField(max_length=50, verbose_name="First name") last_name = models.CharField(max_length=50, verbose_name="Last name") # FIELDSET 2 address = models.CharField(max_length=50, verbose_name="Address") last_name = models.EmailField(verbose_name="Email") The view def create_profile(request): form = CreateProfileForm() return render(request, 'form.html', {'form': form}) The template with the form {% load forms_fieldset static %} <div class="form data-form"> <form> {{ form|fieldset:'#000080' }} <div class="form-group"> <button name="upload" type="submit">Create</button> </div> </form> </div>
[ "To add an image element for each input in a form, you can use the as_table method on the form in your template to render the form fields as an HTML table. Each input will be rendered as a table row, and you can add an img element to each row to display the image.\nHere is an example of how you could do this in your template:\n<div class=\"form data-form\">\n <form>\n {% for fieldset in form.fieldsets %}\n <fieldset style=\"border-color: #000080;\">\n <legend style=\"color: #000080;\">{{ fieldset.legend }}</legend>\n <table>\n {% for field in fieldset %}\n <tr>\n <td><img src=\"{{ field.image_url }}\" alt=\"{{ field.label }}\" /></td>\n <td>{{ field }}</td>\n <td>{{ field.help_text }}</td>\n </tr>\n {% endfor %}\n </table>\n </fieldset>\n {% endfor %}\n\n <div class=\"form-group\">\n <button name=\"upload\" type=\"submit\">Create</button>\n </div>\n </form>\n</div>\n\nIn the code above, I have used the as_table method on the form to render the form fields as an HTML table.\n" ]
[ 0 ]
[]
[]
[ "django", "django_forms", "django_models", "django_templates", "python" ]
stackoverflow_0074670825_django_django_forms_django_models_django_templates_python.txt
Q: Gson cannot deserialize same string it created? I have a HashMap<Picture, String> with Picture being a data-class I created in kotlin. I save the HashMap into the SharedPreferences using gson.toJson(hashmap) and this works fine. But when I try to deserialize the very same string (I checked) into the HashMap<Picture, String> again, it fails with a weird error.
This is the Exception:
java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 3 path $.
This is the string for reference:
{
    "Picture(image_url\u003dhttps://nftmintapp.infura-ipfs.io/ipfs/QmZnbgRFCvqXeahD37vaRANjPiyF9oCC2aWw1TwHat8SaU, creator_name\u003dmarkus, creator_address\u003d0x0, image_name\u003dethOS3, additional_url\u003dhttps://google.com)":"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}

The String I save as the value is a ByteArray that I convert to a string using Base64.encodeToString(bytes, Base64.NO_WRAP). I assumed that gson would be able to de-serialize anything it serialized itself; has anybody ever encountered this?
A: The issue here is that Gson serialized the map keys using Picture's toString() output, which is a plain string rather than a JSON object, so it cannot turn those keys back into Picture instances and throws the IllegalStateException.
To solve this issue, you can either change how the keys are serialized so that they round-trip as valid JSON, or create a custom deserializer for your Picture class.
To change how the keys are serialized, configure the Gson instance accordingly (by default, complex map keys are written via toString()). You can also add the @SerializedName annotation to your fields to control the field names used in the JSON.
To create a custom deserializer, you will need to create a class that implements the JsonDeserializer interface. This class will define the logic for how Gson should deserialize your Picture class.
Below is a basic example of a custom deserializer for a Picture class:
class PictureDeserializer : JsonDeserializer<Picture> {
    override fun deserialize(
        json: JsonElement,
        typeOfT: Type,
        context: JsonDeserializationContext
    ): Picture {
        val jsonObject = json.asJsonObject
        // get the fields from the json object
        val imageUrl = jsonObject.get("image_url").asString
        val creatorName = jsonObject.get("creator_name").asString
        val creatorAddress = jsonObject.get("creator_address").asString
        val imageName = jsonObject.get("image_name").asString
        val additionalUrl = jsonObject.get("additional_url").asString
        // create a new Picture object with the fields
        return Picture(imageUrl, creatorName, creatorAddress, imageName, additionalUrl)
    }
}
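To wire the deserializer up, register it on a GsonBuilder and deserialize the map through a TypeToken. This is a sketch assuming the Picture and PictureDeserializer classes above; note that enableComplexMapKeySerialization makes Gson serialize complex map keys as full JSON (the map becomes an array of key/value pairs) instead of calling toString(), so maps serialized with this builder round-trip cleanly:
import com.google.gson.GsonBuilder
import com.google.gson.reflect.TypeToken

val gson = GsonBuilder()
    // Write HashMap keys as JSON rather than toString() strings.
    .enableComplexMapKeySerialization()
    .registerTypeAdapter(Picture::class.java, PictureDeserializer())
    .create()

// Serialize...
val json = gson.toJson(hashmap)

// ...and deserialize with the generic type preserved.
val type = object : TypeToken<HashMap<Picture, String>>() {}.type
val restored: HashMap<Picture, String> = gson.fromJson(json, type)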
Gson cannot deserialize same string it created?
I have a HashMap<Picture, String> with Picture being a data-class I created in kotlin. I save the HashMap into the SharedPreferences using gson.toJson(hashmap) and this works fine. But when I try to deserialize the very same string (I checked) into the HashMap<Picture, String> again, it fails with a weird error. This is the Exception: java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 3 path $. This is the string for reference: { "Picture(image_url\u003dhttps://nftmintapp.infura-ipfs.io/ipfs/QmZnbgRFCvqXeahD37vaRANjPiyF9oCC2aWw1TwHat8SaU, creator_name\u003dmarkus, creator_address\u003d0x0, image_name\u003dethOS3, additional_url\u003dhttps://google.com)":"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" } The String I save in reference to the String is a Bytearray that I convert to string using Base64.encodeToString(bytes, Base64.NO_WRAP). I assumed that gson would be able to de-serialize anything it serialized itself, has anybody ever encountered this?
[ "The issue here is that Gson is trying to deserialize a String into an object. The Picture class is a data class which is not a valid JSON object, which is why Gson is throwing the IllegalStateException.\nTo solve this issue, you can either modify your Picture class to be a valid JSON object, or you can create a custom deserializer for your Picture class.\nTo modify your Picture class to be a valid JSON object, you will need to add the appropriate getters, setters, and constructors. You can also add the @SerializedName annotation to your fields to indicate the name of the field when serialized to JSON.\nTo create a custom deserializer, you will need to create a class that implements the JsonDeserializer interface. This class will define the logic for how Gson should deserialize your Picture class.\nBelow is a basic example of a custom deserializer for a Picture class:\nclass PictureDeserializer : JsonDeserializer<Picture> {\noverride fun deserialize(\n json: JsonElement,\n typeOfT: Type,\n context: JsonDeserializationContext\n): Picture {\n val jsonObject = json.asJsonObject\n // get the fields from the json object \n val imageUrl = jsonObject.get(\"image_url\").asString\n val creatorName = jsonObject.get(\"creator_name\").asString\n val creatorAddress = jsonObject.get(\"creator_address\").asString\n val imageName = jsonObject.get(\"image_name\").asString\n val additionalUrl = jsonObject.get(\"additional_url\").asString\n // create a new Picture object with the fields\n val picture = Picture(\n imageUrl,\n creatorName,\n creatorAddress,\n imageName,\n ` additionalUrl) return picture\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "android", "gson", "java", "kotlin" ]
stackoverflow_0074670740_android_gson_java_kotlin.txt
Q: Websocket connection not working in Django Channels ('WebSocket connection to 'ws://localhost:8000/ws/board/7/' failed:') I'm trying to get a websocket running for a Django project I'm working on, but I can't get the websocket to connect, which is strange since I copied the example chat application from the channels documentation and that worked fine, but when I copy-pasted that same code over to my project, it didn't. So, here are the relevant sections of code:
the relevant view in views.py
def board_view(request, key):
    board = get_object_or_404(request.user.boards, pk=key)
    key = dumps(board.pk)
    return render(request, 'core/board.html', {"board":board, "permission":user_permission, "key":key})

board.html (the relevant part)
<script>
    const key = JSON.parse("{{key|escapejs}}");

    const chatSocket = new WebSocket(
        'ws://' + window.location.host + '/ws/board/' + key + '/'
    );

routing.py
from django.urls import re_path

from . import consumers

websocket_urlpatterns = [
    re_path(r"^ws/board/(?P<key>\d+)/$", consumers.ChatConsumer.as_asgi()),
]

consumers.py
import json

from channels.generic.websocket import WebsocketConsumer

class ChatConsumer(WebsocketConsumer):
    def connect(self):
        self.accept()

        self.send(text_data=json.dumps({
            'type':'connection_established',
            'message':'you are now connected'
        }))

    def disconnect(self, close_code):
        pass

    def receive(self, text_data):
        text_data_json = json.loads(text_data)
        message = text_data_json["message"]

        self.send(text_data=json.dumps({"message": message}))

asgi.py
import os

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.security.websocket import AllowedHostsOriginValidator
from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'sketchboard.settings')

django_asgi_app = get_asgi_application()

import core.routing

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AllowedHostsOriginValidator(
        AuthMiddlewareStack(URLRouter(core.routing.websocket_urlpatterns))
    ),
})

settings.py (relevant part):
ASGI_APPLICATION = 'sketchboard.asgi.application'

and
INSTALLED_MY_APPS = [
    'core',
]

INSTALLED_EXTENSIONS = [
    'daphne',
    'allauth',
    'allauth.account',
    'allauth.socialaccount',
    'guest_user',
    'guest_user.contrib.allauth',
    'rest_framework',
    'channels',
]

This is almost identical to the tutorial websocket setup found in the channels documentation (which worked fine when I tried it). So what I should be getting from the console on the 'board_view' page is 'you are now connected' (as defined in consumers.py), however instead I am getting a WebSocket connection to 'ws://localhost:8000/ws/board/7/' failed: message. I should point out here that the 7 is the 'key', which is a dynamic variable, but this part definitely works.
I've also tried just typing in 'test' as the route in board.html and routing.py, so something like:
const chatSocket = new WebSocket('ws://' + window.location.host + '/test');

But I get the same error, so I don't think the problem lies in the routing.py or board.html files. I've also tried asking the django discord server but they were unable to help. This problem really has me stumped so any help would be greatly appreciated! :)
A: Maybe you installed Channels version 4, which is the latest default one; version 4 no longer starts the ASGI development server on its own. Please verify this, install channels version 3.0.5, and check that the ASGI server starts when you run the runserver command.
If not, let me know and we will discuss it further.
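Two concrete checks that follow from the answer, offered as a sketch. If you downgrade, pin the version with pip install "channels==3.0.5". If you stay on Channels 4, the development-server integration moved into the daphne package, and daphne must end up before django.contrib.staticfiles in the final INSTALLED_APPS; in this question daphne sits in INSTALLED_EXTENSIONS, so its final position depends on how settings.py combines the lists:
# settings.py (sketch) - make sure "daphne" is first when the lists are merged
INSTALLED_APPS = [
    "daphne",                      # must precede django.contrib.staticfiles
    "django.contrib.staticfiles",
    # ... followed by INSTALLED_MY_APPS and the rest of INSTALLED_EXTENSIONS ...
]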
Websocket connection not working in Django Channels ('WebSocket connection to 'ws://localhost:8000/ws/board/7/' failed:')
I'm trying to get a websocket running for a Django project I'm working on, but I can't get the websocket to connect, which is strange since I copied the example chat application from. the channels documentation and that worked fine but when I copy-pasted that same code over to my project, it didn't. So, here are the relevant sections of code: the relevant view in views.py def board_view(request, key): board = get_object_or_404(request.user.boards, pk=key) key = dumps(board.pk) return render(request, 'core/board.html', {"board":board, "permission":user_permission, "key":key}) board.html (the relevant part) <script> const key = JSON.parse("{{key|escapejs}}"); const chatSocket = new WebSocket( 'ws://' + window.location.host + '/ws/board/' + key + '/' ); routing.py from django.urls import re_path from . import consumers websocket_urlpatterns = [ re_path(r"^ws/board/(?P<key>\d+)/$", consumers.ChatConsumer.as_asgi()), ] consumers.py import json from channels.generic.websocket import WebsocketConsumer class ChatConsumer(WebsocketConsumer): def connect(self): self.accept() self.send(text_data=json.dumps({ 'type':'connection_established', 'message':'you are now connected' })) def disconnect(self, close_code): pass def receive(self, text_data): text_data_json = json.loads(text_data) message = text_data_json["message"] self.send(text_data=json.dumps({"message": message})) asgi.py import os from channels.auth import AuthMiddlewareStack from channels.routing import ProtocolTypeRouter, URLRouter from channels.security.websocket import AllowedHostsOriginValidator from django.core.asgi import get_asgi_application os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'sketchboard.settings') django_asgi_app = get_asgi_application() import core.routing application = ProtocolTypeRouter({ "http": django_asgi_app, "websocket": AllowedHostsOriginValidator( AuthMiddlewareStack(URLRouter(core.routing.websocket_urlpatterns)) ), }) settings.py (relevant part): ASGI_APPLICATION = 'sketchboard.asgi.application' and INSTALLED_MY_APPS = [ 'core', ] INSTALLED_EXTENSIONS = [ 'daphne', 'allauth', 'allauth.account', 'allauth.socialaccount', 'guest_user', 'guest_user.contrib.allauth', 'rest_framework', 'channels', ] This is almost identical to the tutorial websocket setup found in the channels documentation (which worked fine when I tried it). So what I should be getting from the console on the 'board_view' page is 'you are now connected' (as defined in consumers.py), however instead I am getting a WebSocket connection to 'ws://localhost:8000/ws/board/7/' failed: message. I should point out here that the 7 is the 'key', which is a dynamic variable, but this part definitely works. I've also tried just typing in 'test' as the route in board.html and routing.py so something like: const chatSocket = new WebSocket('ws://' + window.location.host + '/test'); But I get the same error, so I don't think the problem lies in the routing.py or board.html files. I've also tried asking the django discord server but they were unable to help. This problem really has me stumped so any help would be greatly appreciated! :)
[ "Maybe you installed channels version 4 which is the latest, default one, and this does not start the ASGI server in development mode. please verify this and install channels version == 3.0.5 . do verify ASGI server starts when you run the runserver command.\nIf not let me know will discuss more.\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_channels", "python_3.x", "websocket" ]
stackoverflow_0074668540_django_django_channels_python_3.x_websocket.txt
Q: Using Switch Statements with Dates and Seasons I am trying to create a date identifier using only if statements and a switch statement that can determine whether an inputted date is valid, what season the date is in and finally whether it is a leap year. I tried to get the component parts working independently first and got two of them working, then tried them all together, but I still can't get my switch statement working. I want my switch statement to show the season by checking both the day and month to see what season we are in, but I'm not sure how to do that. Here is my code:
/* Switch statement to determine season for day and month */
// Using it with "m" on its own works, how do I get it working for specific days?
switch (m)
    {
        case 12:
        case 1:
        case 2:
            if ((m == 12 && d >= 21) || (m == 1) || (m == 2) || (m == 3 && m < 21))
                printf("The season is Winter.\n");
            break;
        case 3:
        case 4:
        case 5:
            if ((m == 3 && d >= 21) || (m == 4) || (m == 5) || (m == 6 && d < 21))
                printf("The season is Spring.\n");
            break;
        case 6:
        case 7:
        case 8:
            if ((m == 6 && d >= 21) || (m == 7) || (m == 8) | (m == 9 && d < 21))
                printf("The season is Summer.\n");
            break;
        case 9:
        case 10:
        case 11:
            if ((m == 9 && d >= 21) || (m == 10) || (m == 11) || (m == 12 && d < 21))
                printf("The season is Autumn.\n");
        default:
            break;
    }
}

I tried getting the code working for each part independently, but I'm still unsure about my switch statement. How can I get it working for days as well as months? Is there a way to do it still with a switch statement?
Example Output:
20/06/2022 = Spring
21/06/2022 = Summer
A: Your example will fail for March 1 (and other dates) since there is no case for 3 listed in the Winter case. You don't need a switch statement at all:
if ((m == 12 && d >= 21) || (m == 1) || (m == 2) || (m == 3 && d < 21))
    printf("The season is Winter.\n");
else if ((m == 3 && d >= 21) || (m == 4) || (m == 5) || (m == 6 && d < 21))
    printf("The season is Spring.\n");
else if ((m == 6 && d >= 21) || (m == 7) || (m == 8) || (m == 9 && d < 21))
    printf("The season is Summer.\n");
else
    printf("The season is Autumn.\n");

A: If you must use a switch, then you have to put the boundary months (March, June, September, December) into their own cases:
const char *season = NULL;
switch (m)
{
case 1:
case 2:
    season = "Winter";
    break;
case 3:
    season = (d < 21) ? "Winter" : "Spring";
    break;
case 4:
case 5:
    season = "Spring";
    break;
case 6:
    season = (d < 21) ? "Spring" : "Summer";
    break;
case 7:
case 8:
    season = "Summer";
    break;
case 9:
    season = (d < 21) ? "Summer" : "Autumn";
    break;  /* keeps September from falling into the Autumn-only cases */
case 10:
case 11:
    season = "Autumn";
    break;
case 12:
    season = (d < 21) ? "Autumn" : "Winter";
    break;
default:
    assert("month out of control" == NULL);
    season = "Unknown - invalid month";
    break;
}
printf("The season is %s.\n", season);

A:

"Is there a way to do it still with a switch statement?"

This question indicates a flawed perspective. Algorithms should be written to clearly and cleanly achieve the objective of the task, not written to conform to a favourite scheme. As shown in other answers, switch() is either cumbersome or unnecessary to solve this problem.
It's worth noting that "branching" can be expensive in terms of processing time. While a ladder of if/else conditionals may be easy for a human to read and understand, finding a superior algorithm that does not involve branching will likely process much faster.
If you want to calculate the name of the season instead of using a lot of magic numbers in conditionals, then this seems to work.
#include <stdint.h>
#include <stdio.h>

void season( uint16_t m, uint16_t d ) {
    char *seasons[] = { "Winter", "Spring", "Summer", "Autumn", "Winter" };

    char *p = seasons[ ((m-1) / 3) + (!(m%3)*( d >= 22 )) ];
    printf( "mon %d day %d = %s\n", m, d, p );
}

int main() {
    season( 1, 1 );
    season( 1, 22 );
    season( 2, 22 );
    season( 3, 21 );
    season( 3, 22 );
    season( 9, 21 );
    season( 9, 22 );
    season( 12, 21 );
    season( 12, 22 );

    return 0;
}

mon 1 day 1 = Winter
mon 1 day 22 = Winter
mon 2 day 22 = Winter
mon 3 day 21 = Winter
mon 3 day 22 = Spring
mon 9 day 21 = Summer
mon 9 day 22 = Autumn
mon 12 day 21 = Autumn
mon 12 day 22 = Winter

And, with only "month and day", the only indication of "leap year" would be if you had the pair "02/29". Not sure what you wanted there...
EDIT:
Not to forget those who live south of the equator, here is the trivial modification to the above function that takes a 3rd parameter (true for the southern hemisphere.)
void season( bool SthHemi, uint16_t m, uint16_t d ) {
    char *seasons[] = { "Winter", "Spring", "Summer", "Autumn", "Winter", "Spring", "Summer", };

    char *p = seasons[ (SthHemi*2) + ((m-1) / 3) + (!(m%3)*( d >= 22 )) ];
    printf( "mon %d day %d = %s\n", m, d, p );
}

EDIT2:
Not really happy with the extra instances of the names of the seasons, here is an improved calculation that determines which string to use.
char *season( int SthHemi, uint16_t m, uint16_t d ) {
    char *seasons[] = { "Winter", "Spring", "Summer", "Autumn" };

    return seasons[ (((m+(SthHemi*6)-1) / 3) + (!(m%3)*( d >= 22 )))%4 ];
}

SthHemi - being 0 (north) or 1 (south) - is multiplied by 6 as the hemispheres' seasons are 6 months out of phase. Adding this to the month index (1-12), subtracting 1, then dividing by 3 gives either 0,1,2,3 or 2,3,4,5 towards the index of the string to use. Now, if the month modulo 3 is 0 (ie: Mar, Jun, Sep, or Dec), use !0 (ie: 1) to multiply the truth value that the day-of-month is >= 22. This operation may add 1 to the index value calculated so far for the final days of those 'transitional' months. Finally, use modulo 4 to "wrap" larger index values into the range of 0-3 and return the appropriate string from the array of strings that are season names. Simple!
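Two small follow-ups, offered as sketches (the names season_q and is_leap_year are ours). First, the table-lookup answer above switches seasons on the 22nd, while the question's expected output ("21/06/2022 = Summer") switches on the 21st; changing the comparison to d >= 21 aligns the two. Second, none of the answers show the leap-year part of the task, so the standard Gregorian test is included:
#include <stdint.h>

/* Same table lookup as above, but seasons change on the 21st to match
 * the question's expected output (20/06 = Spring, 21/06 = Summer). */
const char *season_q(int sth_hemi, uint16_t m, uint16_t d) {
    static const char *seasons[] = { "Winter", "Spring", "Summer", "Autumn" };
    return seasons[(((m + sth_hemi * 6 - 1) / 3) + (!(m % 3) * (d >= 21))) % 4];
}

/* Standard Gregorian rule: divisible by 4, except centuries,
 * unless divisible by 400 (2000 was a leap year, 1900 was not). */
int is_leap_year(int y) {
    return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}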
Using Switch Statements with Dates and Seasons
I am trying to create a date identifier using only if statements and a switch statement that can determine whether an inputted date is valid, what season the date is in and finally whether it is a leap year. I tried to get the component parts working independently first and got two of them working then tried them all together but I still can't get my switch statement working. I want my switch statement to show the season by checking both the day and month to see what season we are in but I'm not sure how to do that. Here is my code: /* Switch statement to determine season for day and month */ // Using it with a "m" on it's own works, how do I get it working for specific days? switch (m) { case 12: case 1: case 2: if ((m == 12 && d >=21) || (m == 1) || (m == 2) || (m == 3 && m < 21)) printf("The season is Winter.\n"); break; case 3: case 4: case 5: if ((m == 3 && d >= 21) || (m == 4) || (m == 5) || (m == 6 && d < 21)) printf("The season is Spring.\n"); break; case 6: case 7: case 8: if ((m == 6 && d >= 21) || (m == 7) || (m == 8) | (m == 9 && d < 21)) printf("The season is Summer.\n"); break; case 9: case 10: case 11: if ((m == 9 && d >= 21) || (m == 10) || (m == 11) || (m == 12 && d < 21)) printf("The season is Autumn.\n"); default: break; } } I tried getting the code working for each part independently, but I'm still unsure about my switch statement. How can I get it working for days as well as months? Is there a way to do it still with a switch statement? Example Output: 20/06/2022 = Spring 21/06/2022 = Summer
[ "You example will fail for March 1 (and other dates) since there is no case for 3 listed in the Winter case. You don't need a switch statement at all:\nif ((m == 12 && d >=21) || (m == 1) || (m == 2) || (m == 3 && d < 21)) \n printf(\"The season is Winter.\\n\");\nelse if ((m == 3 && d >= 21) || (m == 4) || (m == 5) || (m == 6 && d < 21))\n printf(\"The season is Spring.\\n\");\nelse if ((m == 6 && d >= 21) || (m == 7) || (m == 8) | (m == 9 && d < 21))\n printf(\"The season is Summer.\\n\");\nelse\n printf(\"The season is Autumn.\\n\");\n\n", "If you must use a switch, then you have to put the boundary months (March, June, September, December) into their own cases:\nconst char *season = NULL;\nswitch (m)\n{\ncase 1:\ncase 2:\n season = \"Winter\";\n break;\ncase 3:\n season = (d < 21) ? \"Winter\" : \"Spring\";\n break;\ncase 4:\ncase 5:\n season = \"Spring\";\n break;\ncase 6:\n season = (d < 21) ? \"Spring\" : \"Summer\";\n break;\ncase 7:\ncase 8:\n season = \"Summer\";\n break;\ncase 9:\n season = (d < 21) ? \"Summer\" : \"Autumn\";\ncase 10:\ncase 11:\n season = \"Autumn\";\n break;\ncase 12:\n season = (d < 21) ? \"Autumn\" : \"Winter\";\n break;\ndefault:\n assert(\"month out of control\" == NULL);\n season = \"Unknown - invalid month\";\n break;\n}\nprintf(\"The season is %s.\\n\", season);\n\n", "\n\"Is there a way to do it still with a switch statement?\n\nThis question indicates a flawed perspective. Algorithms should be written to clearly and cleanly achieve the objective of the task; not written to conform to a favourite scheme. As shown in other answers, switch() is either cumbersome or unnecessary to solve this problem.\nIt's worth noting/learning that \"branching\" can be expensive in terms of processing time. While a ladder of if/else conditionals may be easy for a human to read and understand, finding a superior algorithm that does not involve branching will likely process much faster.\nIf you want to calculate the name of the season instead of using a lot of magic numbers in conditionals, then this seems to work.\nvoid season( uint16_t m, uint16_t d ) {\n char *seasons[] = { \"Winter\", \"Spring\", \"Summer\", \"Autumn\", \"Winter\" };\n\n char *p = seasons[ ((m-1) / 3) + (!(m%3)*( d >= 22 )) ];\n printf( \"mon %d day %d = %s\\n\", m, d, p );\n}\n\nint main() {\n season( 1, 1 );\n season( 1, 22 );\n season( 2, 22 );\n season( 3, 21 );\n season( 3, 22 );\n season( 9, 21 );\n season( 9, 22 );\n season( 12, 21 );\n season( 12, 22 );\n\n return 0;\n}\n\nmon 1 day 1 = Winter\nmon 1 day 22 = Winter\nmon 2 day 22 = Winter\nmon 3 day 21 = Winter\nmon 3 day 22 = Spring\nmon 9 day 21 = Summer\nmon 9 day 22 = Autumn\nmon 12 day 21 = Autumn\nmon 12 day 22 = Winter\n\nAnd, with only \"month and day\", the only indication of \"leap year\" would be if you had the pair \"02/29\". 
Not sure what you wanted there...\nEDIT:\nNot to forget those who live south of the equator, here is the trivial modification to the above function that takes a 3rd parameter (true for the southern hemisphere.)\nvoid season( bool SthHemi, uint16_t m, uint16_t d ) {\n char *seasons[] = { \"Winter\", \"Spring\", \"Summer\", \"Autumn\", \"Winter\", \"Spring\", \"Summer\", };\n\n char *p = seasons[ (SthHemi*2) + ((m-1) / 3) + (!(m%3)*( d >= 22 )) ];\n printf( \"mon %d day %d = %s\\n\", m, d, p );\n}\n\nEDIT2:\nNot really happy with the extra instances of the names of the seasons, here is an improved calculation that determines which string to use.\nchar *season( int SthHemi, uint16_t m, uint16_t d ) {\n char *seasons[] = { \"Winter\", \"Spring\", \"Summer\", \"Autumn\" };\n\n return seasons[ (((m+(SthHemi*6)-1) / 3) + (!(m%3)*( d >= 22 )))%4 ];\n}\n\nSthHemi - being 0 (north) or 1 (south) - is multiplied by 6 as the hemispheres' seasons are 6 months out of phase. Adding this to the month index (1-12), subtracting 1, then dividing by 3 gives either 0,1,2,3 or 2,3,4,5 towards the index of the string to use. Now, if the month modulo 3 is 0 (ie: Mar, Jun, Sep, or Dec), use !0 (ie: 1) to multiply the truth value that the day-of-month is >= 22. This operation may add 1 to the index value calculated so far for the final days of those 'transitional' months. Finally, use modulo 4 to \"wrap\" larger index values into the range of 0-3 and return the appropriate string from the array of strings that are season names. Simple!\n" ]
[ 2, 2, 2 ]
[]
[]
[ "c", "switch_statement" ]
stackoverflow_0074670514_c_switch_statement.txt
Q: How to lock screen orientation for the iPad SwiftUI I've been force locking the screen orientation using this, which works fine on iPhone simulators:
@main
struct MainApp: App {
    @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

class AppDelegate: NSObject, UIApplicationDelegate {
    static var orientationLock = UIInterfaceOrientationMask.all

    func application(_ application: UIApplication, supportedInterfaceOrientationsFor window: UIWindow?) -> UIInterfaceOrientationMask {
        return AppDelegate.orientationLock
    }
}

And used on a view like so:
struct ContentView : View {
    var body : some View {
        ZStack {
            Text("Hello World!")
                .onAppear {
                    UIDevice.current.setValue(UIInterfaceOrientation.portrait.rawValue, forKey: "orientation")
                    AppDelegate.orientationLock = .portrait
                }.onDisappear {
                    UIDevice.current.setValue(UIInterfaceOrientation.portrait.rawValue, forKey: "orientation")
                    AppDelegate.orientationLock = .portrait
                }
        }
    }
}

However, this doesn't work on iPad simulators. It is not enough to deselect orientations in the target's Deployment Info because some views have to use different orientations. Any advice would be greatly appreciated!
A: Click on your project in the left panel in Xcode, then scroll down until you see "Deployment Info". From there you can lock the app's screen orientation for both iPhone and iPad.
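One caveat worth adding, since the question notes that Deployment Info alone is not enough: on iPad, UIKit only consults supportedInterfaceOrientationsFor when the app opts out of multitasking, i.e. when "Requires Full Screen" (the UIRequiresFullScreen Info.plist key) is checked; with it set, the AppDelegate lock from the question should also take effect on iPad. On iOS 16+ you can additionally ask the scene to rotate. A sketch, assuming the question's AppDelegate (the helper name lockToPortrait is ours):
import SwiftUI
import UIKit

// iOS 16+: after updating the lock, explicitly request a portrait layout.
func lockToPortrait() {
    AppDelegate.orientationLock = .portrait
    if let scene = UIApplication.shared.connectedScenes.first as? UIWindowScene {
        scene.requestGeometryUpdate(.iOS(interfaceOrientations: .portrait))
    }
}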
How to lock screen orientation for the iPad SwiftUI
I've been force locking the screen orientation using this, which works fine on iPhone simulators: @main struct MainApp: App { @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate var body: some Scene { WindowGroup { ContentView() } } } class AppDelegate: NSObject, UIApplicationDelegate { static var orientationLock = UIInterfaceOrientationMask.all func application(_ application: UIApplication, supportedInterfaceOrientationsFor window: UIWindow?) -> UIInterfaceOrientationMask { return AppDelegate.orientationLock } } And used on a view like so: struct ContentView : View { var body : some View { ZStack { Text("Hello World!") .onAppear{ UIDevice.current.setValue(UIInterfaceOrientation.portrait.rawValue, forKey: "orientation") AppDelegate.orientationLock = .portrait }.onDisappear{ UIDevice.current.setValue(UIInterfaceOrientation.portrait.rawValue, forKey: "orientation") AppDelegate.orientationLock = .portrait } } } } However, this doesn't work on iPad simulators. It is not enough to deselect orientations in target's Deployment Info because some views have to be different orientations. Any advice would be greatly appreciated!
[ "Click on your project on the left section in Xcode and then you can scroll down till you see \"Deployment info\". From there you can lock the app screen orientation both in iPhone and iPad\n\n" ]
[ 0 ]
[]
[]
[ "ipad", "orientation", "screen_orientation", "swift", "swiftui" ]
stackoverflow_0074669581_ipad_orientation_screen_orientation_swift_swiftui.txt
Q: How can I serialize a jgrapht simple graph to json? I have a simple directed graph from jgrapht and I am trying to serialize it into a JSON file using jackson as follows:
ObjectMapper mapper = new ObjectMapper();
File output = new File("P:\\tree.json");
ObjectWriter objectWriter = mapper.writer().withDefaultPrettyPrinter();
objectWriter.writeValue(output, simpleDirectedGraph);

However I get this error:
Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class org.jgrapht.graph.AbstractBaseGraph$ArrayListFactory and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) ) (through reference chain: org.jgrapht.graph.SimpleDirectedGraph["edgeSetFactory"])
    at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.failForEmpty(UnknownSerializer.java:69)
    at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.serialize(UnknownSerializer.java:32)
    at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:693)
    at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:675)
    at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:157)
    at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
    at com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1387)
    at com.fasterxml.jackson.databind.ObjectWriter._configAndWriteValue(ObjectWriter.java:1088)
    at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:909)
    at ms.fragment.JSONTreeGenerator.main(JSONTreeGenerator.java:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

I have seen that there is a GmlExporter but I am interested in JSON... how can I do that?
A: You can disable the exception with:
mapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);

A: The exception you got from Jackson:

JsonMappingException: No serializer found for class org.jgrapht.graph.AbstractBaseGraph$ArrayListFactory
and no properties discovered to create BeanSerializer
(to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS)) (through reference chain:
org.jgrapht.graph.SimpleDirectedGraph["edgeSetFactory"])
at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.failForEmpty(UnknownSerializer.java:69)

gives a clue on how to solve it:

no properties discovered to create BeanSerializer
property SimpleDirectedGraph["edgeSetFactory"] seems empty

Exclude the type of property edgeSetFactory from serialisation:
AbstractBaseGraph$ArrayListFactory

Write a custom Serializer
Usually Jackson would use a BeanSerializer to write any non-primitive class to JSON. Unfortunately this does not work for classes that expose no serializable properties, such as the internal ArrayListFactory here. So you can write your own JsonSerializer to handle such special fields.
A: You can serialize your Graph to XML and then from XML to JSON. For the serialization to XML you can use the XStream library (http://x-stream.github.io), a small library that will easily allow you to serialize and deserialize to and from XML.
A guide to using the library: http://x-stream.github.io/tutorial.html
After that, try to map your XML to JSON using:
<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20180813</version>
</dependency>

XML.java is the class you're looking for:
import org.json.JSONObject;
import org.json.XML;
import org.json.JSONException;

public class Main {

    public static int PRETTY_PRINT_INDENT_FACTOR = 4;
    public static String TEST_XML_STRING =
        "<?xml version=\"1.0\" ?><test attrib=\"moretest\">Turn this to JSON</test>";

    public static void main(String[] args) {
        try {
            JSONObject xmlJSONObj = XML.toJSONObject(TEST_XML_STRING);
            String jsonPrettyPrintString = xmlJSONObj.toString(PRETTY_PRINT_INDENT_FACTOR);
            System.out.println(jsonPrettyPrintString);
        } catch (JSONException je) {
            System.out.println(je.toString());
        }
    }
}

A: Option 1
Newer versions of JGraphT have built-in support for importing/exporting graphs from/to JSON using the jgrapht-io module.
Here's an example for exporting a graph to JSON:
import org.jgrapht.Graph;
import org.jgrapht.graph.DefaultDirectedGraph;
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.nio.json.JSONExporter;

import java.net.URI;
import java.net.URISyntaxException;

public class Main {
    public static void main(String[] args) throws Exception {
        final var jsonExporter = new JSONExporter<URI, DefaultEdge>();

        jsonExporter.exportGraph(
                newSampleGraph(),
                System.out
        );

        System.out.println("");
    }

    // Copied from https://jgrapht.org/guide/HelloJGraphT
    private static Graph<URI, DefaultEdge> newSampleGraph() throws URISyntaxException {
        Graph<URI, DefaultEdge> g = new DefaultDirectedGraph<>(DefaultEdge.class);

        URI google = new URI("http://www.google.com");
        URI wikipedia = new URI("http://www.wikipedia.org");
        URI jgrapht = new URI("http://www.jgrapht.org");

        // add the vertices
        g.addVertex(google);
        g.addVertex(wikipedia);
        g.addVertex(jgrapht);

        // add edges to create linking structure
        g.addEdge(jgrapht, wikipedia);
        g.addEdge(google, jgrapht);
        g.addEdge(google, wikipedia);
        g.addEdge(wikipedia, google);

        return g;
    }
}

The pom.xml file for reference:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.stackoverflow</groupId>
    <artifactId>questions-39438962</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>18</maven.compiler.source>
        <maven.compiler.target>18</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.jgrapht</groupId>
            <artifactId>jgrapht-core</artifactId>
            <version>1.5.1</version>
        </dependency>
        <dependency>
            <groupId>org.jgrapht</groupId>
            <artifactId>jgrapht-io</artifactId>
            <version>1.5.1</version>
        </dependency>
    </dependencies>
</project>

Option 2
Implement a custom Jackson serializer for JGraphT graphs, register it with an ObjectMapper, and implement the logic in the serializer.
Here's an example: import com.fasterxml.jackson.core.JsonGenerator; import com.fasterxml.jackson.databind.SerializerProvider; import com.fasterxml.jackson.databind.ser.std.StdSerializer; import org.jgrapht.Graph; import org.jgrapht.nio.IntegerIdProvider; import java.io.IOException; public class DefaultDirectedGraphSerializer<V, E, T extends Graph<V, E>> extends StdSerializer<T> { public DefaultDirectedGraphSerializer(Class<T> t) { super(t); } public DefaultDirectedGraphSerializer() { this(null); } @Override public void serialize(T value, JsonGenerator gen, SerializerProvider provider) throws IOException { final var idProvider = new IntegerIdProvider<>(); gen.writeStartObject(); gen.writeFieldName("graph"); gen.writeStartObject(); gen.writeFieldName("nodes"); gen.writeStartObject(); for (V v : value.vertexSet()) { final var id = idProvider.apply(v); gen.writeFieldName(id); gen.writeStartObject(); gen.writeStringField("label", v.toString()); gen.writeEndObject(); } gen.writeEndObject(); gen.writeFieldName("edges"); gen.writeStartArray(); for (E e : value.edgeSet()) { gen.writeStartObject(); final var source = value.getEdgeSource(e); final var target = value.getEdgeTarget(e); gen.writeStringField("source", idProvider.apply(source)); gen.writeStringField("target", idProvider.apply(target)); gen.writeEndObject(); } gen.writeEndArray(); gen.writeEndObject(); gen.writeEndObject(); } } import org.jgrapht.Graph; import org.jgrapht.graph.DefaultDirectedGraph; import org.jgrapht.graph.DefaultEdge; import java.net.URI; import java.net.URISyntaxException; public class Graphs { public static Graph<URI, DefaultEdge> newSampleGraph() throws URISyntaxException { final var g = new DefaultDirectedGraph<URI, DefaultEdge>(DefaultEdge.class); URI google = new URI("http://www.google.com"); URI wikipedia = new URI("http://www.wikipedia.org"); URI jgrapht = new URI("http://www.jgrapht.org"); g.addVertex(google); g.addVertex(wikipedia); g.addVertex(jgrapht); g.addEdge(jgrapht, wikipedia); g.addEdge(google, jgrapht); g.addEdge(google, wikipedia); g.addEdge(wikipedia, google); return g; } } import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.ObjectMapper; import com.fasterxml.jackson.databind.module.SimpleModule; import org.jgrapht.graph.DefaultDirectedGraph; import java.net.URISyntaxException; import static org.example.Graphs.newSampleGraph; public class Main { public static void main(String[] args) throws URISyntaxException, JsonProcessingException { final var module = new SimpleModule(); module.addSerializer(DefaultDirectedGraph.class, new DefaultDirectedGraphSerializer<>()); final ObjectMapper mapper = new ObjectMapper(); mapper.registerModule(module); System.out.println(mapper.writeValueAsString(newSampleGraph())); } } This will produce the following JSON document (after pretty printing): { "graph": { "nodes": { "1": { "label": "http://www.google.com" }, "2": { "label": "http://www.wikipedia.org" }, "3": { "label": "http://www.jgrapht.org" } }, "edges": [ { "source": "3", "target": "2" }, { "source": "1", "target": "3" }, { "source": "1", "target": "2" }, { "source": "2", "target": "1" } ] } } A: You can use the GraphMLExporter class to export your graph to GraphML format, which is an XML-based format for representing graphs. Once you have your graph in GraphML format, you can use a tool such as jgrapht-jxpath to convert it to JSON. 
Here is an example: // Create the exporter GraphMLExporter<String, DefaultEdge> exporter = new GraphMLExporter<>(); // Set the options for the exporter exporter.setVertexLabelProvider(new StringNameProvider<>()); exporter.setEdgeLabelProvider(new StringEdgeNameProvider<>()); // Export the graph to GraphML format StringWriter writer = new StringWriter(); exporter.exportGraph(simpleDirectedGraph, writer); // Convert the GraphML to JSON using jgrapht-jxpath JXPathContext context = JXPathContext.newContext(writer.toString()); String json = context.getValue("/graphml/graph").toString(); Note that this is just an example, and you may need to adjust the code depending on the specifics of your graph and the desired output format.
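If you go with the built-in JSONExporter from Option 1 and want vertex labels in the output (by default only generated vertex ids are written), you can attach an attribute provider. A sketch, assuming jgrapht-io 1.5.x and the sample graph from Option 1; the class name LabeledExport is ours. (As an aside, the JXPathContext used in the last answer comes from Apache Commons JXPath rather than a JGraphT module.)
import org.jgrapht.Graph;
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.nio.DefaultAttribute;
import org.jgrapht.nio.json.JSONExporter;

import java.net.URI;
import java.util.Map;

public class LabeledExport {
    // graph is e.g. the sample graph built by newSampleGraph() in Option 1.
    static void export(Graph<URI, DefaultEdge> graph) {
        JSONExporter<URI, DefaultEdge> exporter = new JSONExporter<>();
        // Emit a "label" attribute alongside each vertex id.
        exporter.setVertexAttributeProvider(
                v -> Map.of("label", DefaultAttribute.createAttribute(v.toString())));
        exporter.exportGraph(graph, System.out);
    }
}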
How can I serialize a jgrapht simple graph to json?
I have a simple directed graph from jgrapht and I am trying to serialize it into a JSON file using jackson as follows: ObjectMapper mapper = new ObjectMapper(); File output = new File("P:\\tree.json"); ObjectWriter objectWriter = mapper.writer().withDefaultPrettyPrinter(); objectWriter.writeValue(output,simpleDirectedGraph); However I get this error: Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class org.jgrapht.graph.AbstractBaseGraph$ArrayListFactory and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) ) (through reference chain: org.jgrapht.graph.SimpleDirectedGraph["edgeSetFactory"]) at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.failForEmpty(UnknownSerializer.java:69) at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.serialize(UnknownSerializer.java:32) at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:693) at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:675) at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:157) at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130) at com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1387) at com.fasterxml.jackson.databind.ObjectWriter._configAndWriteValue(ObjectWriter.java:1088) at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:909) at ms.fragment.JSONTreeGenerator.main(JSONTreeGenerator.java:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) I have seen that there is a GmlExporter but I am interested in json... how can I do that?
[ "You can disable the exception with:\nmapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);\n\n", "The exception you got from Jackson:\n\nJsonMappingException: No serializer found for class org.jgrapht.graph.AbstractBaseGraph$ArrayListFactory\nand no properties discovered to create BeanSerializer\n(to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS)) (through reference chain:\norg.jgrapht.graph.SimpleDirectedGraph[\"edgeSetFactory\"])\nat com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.failForEmpty(UnknownSerializer.java:69)\n\ngives a clue on how to solve:\n\nno properties discovered to create BeanSerializer\nproperty SimpleDirectedGraph[\"edgeSetFactory\"] seems empty\n\nExclude the type of property edgeSetFactory from serialisation:\nAbstractBaseGraph$ArrayListFactory\n\nWrite a custom Serializer\nUsually Jackson would use a StdBeanSerializer to write any non-primitive class to JSON.\nUnfortunately this does not work for abstract classes.\nSo you can write your own JsonSerializer to handle special fields.\n", "You can serialize your Graph to XML and then from XML to JSON :\nSerialization to XML : you can use this Library: XStream http://x-stream.github.io it's a small library that will easily allow you to serialize and deserialize to and from XML.\nguide to use the Library : http://x-stream.github.io/tutorial.html\nAfter that try to map your XML to JSON using :\n <dependency>\n <groupId>org.json</groupId>\n <artifactId>json</artifactId>\n <version>20180813</version>\n </dependency>\n\nXML.java is the class you're looking for:\n import org.json.JSONObject;\n import org.json.XML;\n import org.json.JSONException;\n\n public class Main {\n\npublic static int PRETTY_PRINT_INDENT_FACTOR = 4;\npublic static String TEST_XML_STRING =\n \"<?xml version=\\\"1.0\\\" ?><test attrib=\\\"moretest\\\">Turn this to JSON</test>\";\n\npublic static void main(String[] args) {\n try {\n JSONObject xmlJSONObj = XML.toJSONObject(TEST_XML_STRING);\n String jsonPrettyPrintString = xmlJSONObj.toString(PRETTY_PRINT_INDENT_FACTOR);\n System.out.println(jsonPrettyPrintString);\n } catch (JSONException je) {\n System.out.println(je.toString());\n }\n }\n }\n\n", "Option 1\nNewer versions of JGraphT have built-in support for importing/exporting graphs from/to JSON using the jgrapht-io\nmodule.\nHere's an example for exporting a graph to JSON:\nimport org.jgrapht.Graph;\nimport org.jgrapht.graph.DefaultDirectedGraph;\nimport org.jgrapht.graph.DefaultEdge;\nimport org.jgrapht.nio.json.JSONExporter;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\n\npublic class Main {\n public static void main(String[] args) throws Exception {\n final var jsonExporter = new JSONExporter<URI, DefaultEdge>();\n\n jsonExporter.exportGraph(\n newSampleGraph(),\n System.out\n );\n\n System.out.println(\"\");\n }\n\n // Copied from https://jgrapht.org/guide/HelloJGraphT\n private static Graph<URI, DefaultEdge> newSampleGraph() throws URISyntaxException {\n Graph<URI, DefaultEdge> g = new DefaultDirectedGraph<>(DefaultEdge.class);\n\n URI google = new URI(\"http://www.google.com\");\n URI wikipedia = new URI(\"http://www.wikipedia.org\");\n URI jgrapht = new URI(\"http://www.jgrapht.org\");\n\n // add the vertices\n g.addVertex(google);\n g.addVertex(wikipedia);\n g.addVertex(jgrapht);\n\n // add edges to create linking structure\n g.addEdge(jgrapht, wikipedia);\n g.addEdge(google, jgrapht);\n g.addEdge(google, wikipedia);\n g.addEdge(wikipedia, google);\n\n\n return g;\n }\n}\n\nThe 
pom.xml file for reference:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n <modelVersion>4.0.0</modelVersion>\n\n <groupId>com.stackoverflow</groupId>\n <artifactId>questions-39438962</artifactId>\n <version>1.0-SNAPSHOT</version>\n\n <properties>\n <maven.compiler.source>18</maven.compiler.source>\n <maven.compiler.target>18</maven.compiler.target>\n <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n </properties>\n\n <dependencies>\n <dependency>\n <groupId>org.jgrapht</groupId>\n <artifactId>jgrapht-core</artifactId>\n <version>1.5.1</version>\n </dependency>\n <dependency>\n <groupId>org.jgrapht</groupId>\n <artifactId>jgrapht-io</artifactId>\n <version>1.5.1</version>\n </dependency>\n </dependencies>\n</project>\n\nOption 2\nImplement a custom Jackson serializer for JGraphT graphs, register it with an ObjectMapper, and implement the logic in the serializer.\nHere's an example:\nimport com.fasterxml.jackson.core.JsonGenerator;\nimport com.fasterxml.jackson.databind.SerializerProvider;\nimport com.fasterxml.jackson.databind.ser.std.StdSerializer;\nimport org.jgrapht.Graph;\nimport org.jgrapht.nio.IntegerIdProvider;\n\nimport java.io.IOException;\n\npublic class DefaultDirectedGraphSerializer<V, E, T extends Graph<V, E>> extends StdSerializer<T> {\n public DefaultDirectedGraphSerializer(Class<T> t) {\n super(t);\n }\n\n public DefaultDirectedGraphSerializer() {\n this(null);\n }\n\n @Override\n public void serialize(T value, JsonGenerator gen, SerializerProvider provider) throws IOException {\n final var idProvider = new IntegerIdProvider<>();\n\n gen.writeStartObject();\n gen.writeFieldName(\"graph\");\n gen.writeStartObject();\n gen.writeFieldName(\"nodes\");\n\n gen.writeStartObject();\n for (V v : value.vertexSet()) {\n final var id = idProvider.apply(v);\n gen.writeFieldName(id);\n gen.writeStartObject();\n gen.writeStringField(\"label\", v.toString());\n gen.writeEndObject();\n }\n gen.writeEndObject();\n\n gen.writeFieldName(\"edges\");\n gen.writeStartArray();\n for (E e : value.edgeSet()) {\n gen.writeStartObject();\n\n final var source = value.getEdgeSource(e);\n final var target = value.getEdgeTarget(e);\n\n gen.writeStringField(\"source\", idProvider.apply(source));\n gen.writeStringField(\"target\", idProvider.apply(target));\n\n gen.writeEndObject();\n }\n\n gen.writeEndArray();\n\n gen.writeEndObject();\n gen.writeEndObject();\n }\n}\n\nimport org.jgrapht.Graph;\nimport org.jgrapht.graph.DefaultDirectedGraph;\nimport org.jgrapht.graph.DefaultEdge;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\n\npublic class Graphs {\n public static Graph<URI, DefaultEdge> newSampleGraph() throws URISyntaxException {\n final var g = new DefaultDirectedGraph<URI, DefaultEdge>(DefaultEdge.class);\n\n URI google = new URI(\"http://www.google.com\");\n URI wikipedia = new URI(\"http://www.wikipedia.org\");\n URI jgrapht = new URI(\"http://www.jgrapht.org\");\n\n g.addVertex(google);\n g.addVertex(wikipedia);\n g.addVertex(jgrapht);\n\n g.addEdge(jgrapht, wikipedia);\n g.addEdge(google, jgrapht);\n g.addEdge(google, wikipedia);\n g.addEdge(wikipedia, google);\n\n return g;\n }\n}\n\nimport com.fasterxml.jackson.core.JsonProcessingException;\nimport com.fasterxml.jackson.databind.ObjectMapper;\nimport 
com.fasterxml.jackson.databind.module.SimpleModule;\nimport org.jgrapht.graph.DefaultDirectedGraph;\n\nimport java.net.URISyntaxException;\n\nimport static org.example.Graphs.newSampleGraph;\n\npublic class Main {\n public static void main(String[] args) throws URISyntaxException, JsonProcessingException {\n final var module = new SimpleModule();\n module.addSerializer(DefaultDirectedGraph.class, new DefaultDirectedGraphSerializer<>());\n\n final ObjectMapper mapper = new ObjectMapper();\n mapper.registerModule(module);\n\n System.out.println(mapper.writeValueAsString(newSampleGraph()));\n }\n}\n\nThis will produce the following JSON document (after pretty printing):\n{\n \"graph\": {\n \"nodes\": {\n \"1\": {\n \"label\": \"http://www.google.com\"\n },\n \"2\": {\n \"label\": \"http://www.wikipedia.org\"\n },\n \"3\": {\n \"label\": \"http://www.jgrapht.org\"\n }\n },\n \"edges\": [\n {\n \"source\": \"3\",\n \"target\": \"2\"\n },\n {\n \"source\": \"1\",\n \"target\": \"3\"\n },\n {\n \"source\": \"1\",\n \"target\": \"2\"\n },\n {\n \"source\": \"2\",\n \"target\": \"1\"\n }\n ]\n }\n}\n\n", "You can use the GraphMLExporter class to export your graph to GraphML format, which is an XML-based format for representing graphs. Once you have your graph in GraphML format, you can use a tool such as jgrapht-jxpath to convert it to JSON.\nHere is an example:\n// Create the exporter\nGraphMLExporter<String, DefaultEdge> exporter = new GraphMLExporter<>();\n\n// Set the options for the exporter\nexporter.setVertexLabelProvider(new StringNameProvider<>());\nexporter.setEdgeLabelProvider(new StringEdgeNameProvider<>());\n\n// Export the graph to GraphML format\nStringWriter writer = new StringWriter();\nexporter.exportGraph(simpleDirectedGraph, writer);\n\n// Convert the GraphML to JSON using jgrapht-jxpath\nJXPathContext context = JXPathContext.newContext(writer.toString());\nString json = context.getValue(\"/graphml/graph\").toString();\n\nNote that this is just an example, and you may need to adjust the code depending on the specifics of your graph and the desired output format.\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "java", "jgrapht", "json", "serialization" ]
stackoverflow_0039438962_java_jgrapht_json_serialization.txt
Q: ItemTemplate for UserControls in ItemsControl - WPF My task is to implement a MDI-like interface in our WPF app. I have created this simple class as a base for all the views: public class BaseView : UserControl, INotifyPropertyChanged { public event PropertyChangedEventHandler? PropertyChanged; protected void OnPropertyChanged([CallerMemberName] string? name = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name)); } private ViewType _type = ViewType.Null; private string _tabTitle = string.Empty; private bool _isSelected = false; public ViewType Type { get => _type; set { _type = value; OnPropertyChanged(); } } public string TabTitle { get => _tabTitle; set { _tabTitle = value; OnPropertyChanged(); } } public bool IsSelected { get => _isSelected; set { _isSelected = value; OnPropertyChanged(); } } } Next, I created few test Views. All of them start like this: <local:BaseView... In main window, there are two controls: ItemsControl (for displaying the list of opened views), and ContentControl (for displaying the selected view.) I store all the opened views in a ObservableCollection: ObservableCollection<BaseView>.... I wanted to display them as a list, so I created ItemsControl: <ItemsControl x:Name="mainItemsControl"> <ItemsControl.ItemTemplate> <DataTemplate> <Border Padding="2" Margin="2" Tag="{Binding Type}"> <TextBlock Text="{Binding TabTitle}" Foreground="White"/> </Border> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> When I set the ItemsControl's source (mainItemsControl.ItemsSource = openedViews;) and started the application, ItemsControl displayed the content of each View instead of the ItemTemplate (Border with the TextBlock). What did I do wrong? A: If I understood correctly, then the openedViews collection consists of BaseView. If so, then BaseView is a UIElement. But Data Templates are used to render non-UIElements. If the Content receives a UIElement, then it is rendered directly as is. One possible solution: You need to remove the INotifyPropertyChanged interface from BaseView. Create a data source for your BaseView with an implementation of INotifyPropertyChanged. In BaseView create a DependencyProperty for this source. Create a simple, helper container for the openedViews collection. Something like this (pseudo code): public class SomeContainer { public BaseDataSource DataSource { get => _dataSource; set { _dataSource = value; if(View is not null) { View.DataSource = DataSource; } } } public BaseView View { get => _view; set { _view = value; if(_view is not null) { _view.DataSource = DataSource; } } } }
ItemTemplate for UserControls in ItemsControl - WPF
My task is to implement a MDI-like interface in our WPF app. I have created this simple class as a base for all the views: public class BaseView : UserControl, INotifyPropertyChanged { public event PropertyChangedEventHandler? PropertyChanged; protected void OnPropertyChanged([CallerMemberName] string? name = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name)); } private ViewType _type = ViewType.Null; private string _tabTitle = string.Empty; private bool _isSelected = false; public ViewType Type { get => _type; set { _type = value; OnPropertyChanged(); } } public string TabTitle { get => _tabTitle; set { _tabTitle = value; OnPropertyChanged(); } } public bool IsSelected { get => _isSelected; set { _isSelected = value; OnPropertyChanged(); } } } Next, I created few test Views. All of them start like this: <local:BaseView... In main window, there are two controls: ItemsControl (for displaying the list of opened views), and ContentControl (for displaying the selected view.) I store all the opened views in a ObservableCollection: ObservableCollection<BaseView>.... I wanted to display them as a list, so I created ItemsControl: <ItemsControl x:Name="mainItemsControl"> <ItemsControl.ItemTemplate> <DataTemplate> <Border Padding="2" Margin="2" Tag="{Binding Type}"> <TextBlock Text="{Binding TabTitle}" Foreground="White"/> </Border> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> When I set the ItemsControl's source (mainItemsControl.ItemsSource = openedViews;) and started the application, ItemsControl displayed the content of each View instead of the ItemTemplate (Border with the TextBlock). What did I do wrong?
[ "If I understood correctly, then the openedViews collection consists of BaseView.\nIf so, then BaseView is a UIElement.\nBut Data Templates are used to render non UIElements.\nIf the Content receives a UIElement, then it is rendered directly as is.\nOne possible variant solution.\nYou need to remove the INotifyPropertyChanged interface from BaseView.\nCreate a data source for your BaseView with an implementation of INotifyPropertyChanged.\nIn BaseView create DependencyProperty for this source.\nCreate a simple, helper container for the openedViews collection.\nSomething like this (pseudo code):\npublic class SomeContainer \n{\n public BaseDataSource DataSource \n {\n get => _dataSource;\n set\n {\n _dataSource = null;\n if(View is not null)\n {\n View.DataSource = DataSource;\n } \n }\n }\n public BaseView View\n {\n get => _view;\n set\n {\n _view = value;\n if(_view is not null)\n {\n _view.DataSource = DataSource;\n } \n }\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "itemscontrol", "wpf" ]
stackoverflow_0074670812_c#_itemscontrol_wpf.txt
Q: Laravel tagging overhead leaving behind significantly large reference sets using redis Using Laravel 9 with the Redis cache driver. I have an issue where the internal standard_ref and forever_ref map that Laravel uses to manage tagged cache exceeds 10MB. This map consists of numerous keys, 95% of which have already expired/decayed and no longer exist; the map seems to continue to grow in size and has a TTL of -1 (never expire). Other than "not using tags", has anyone else encountered and overcome this? Found this in the slowlog of Redis Enterprise which led me to realising this is happening: I checked the key/s via SCAN and can confirm it's a massive set of cache misses. It seems extremely inefficient and expensive to be constantly transmitting 10MB back and forth just for it to find one key within the map. A: Use the Redis::scan() method to iterate over the keys in the internal standard_ref and forever_ref maps, and remove the expired/decayed keys manually. This would help to reduce the size of these maps and improve the efficiency of the cache system. Here is an example of how you could use the Redis::scan() method in your Laravel application to remove expired/decayed keys from the internal standard_ref and forever_ref maps: // Get the Redis cache instance $cache = Cache::store('redis')->getRedis(); // Set the cursor to 0 to start iterating over the keys in the map $cursor = 0; // Set a flag to indicate whether there are more keys in the map to process $more = true; // Iterate over the keys in the map until there are no more keys to process while ($more) { // Use the Redis::scan() method to get the next batch of keys from the map $results = $cache->scan($cursor, 'match', 'tag:*', 'count', 1000); // Update the cursor from the scan result; SCAN is finished when it returns a cursor of 0 $cursor = $results[0]; $keys = $results[1]; $more = ($cursor != 0); // Iterate over the keys in the current batch and delete them foreach ($keys as $key) { $cache->del($key); } } This approach may not be suitable for all applications, as it involves manually iterating over the keys in the map and deleting them, which can be time-consuming and potentially disruptive to the cache system.
Laravel tagging overhead leaving behind significantly large reference sets using redis
Using Laravel 9 with the Redis cache driver. I have an issue where the internal standard_ref and forever_ref map that Laravel uses to manage tagged cache exceeds 10MB. This map consists of numerous keys, 95% of which have already expired/decayed and no longer exist; the map seems to continue to grow in size and has a TTL of -1 (never expire). Other than "not using tags", has anyone else encountered and overcome this? Found this in the slowlog of Redis Enterprise which led me to realising this is happening: I checked the key/s via SCAN and can confirm it's a massive set of cache misses. It seems extremely inefficient and expensive to be constantly transmitting 10MB back and forth just for it to find one key within the map.
[ "Use the Redis::scan() method to iterate over the keys in the internal standard_ref and forever_ref maps, and remove the expired/decayed keys manually. This would help to reduce the size of these maps and improve the efficiency of the cache system.\nThis is a way showing how you could use the Redis::scan() method in your Laravel application to remove expired/decayed keys from the internal standard_ref and forever_ref maps:\n// Get the Redis cache instance\n$cache = Cache::store('redis')->getRedis();\n\n// Set the cursor to 0 to start iterating over the keys in the map\n$cursor = 0;\n\n// Set a flag to indicate whether there are more keys in the map to process\n$more = true;\n\n// Iterate over the keys in the map until there are no more keys to process\nwhile ($more) {\n// Use the Redis::scan() method to get the next batch of keys from the map\n$results = $cache->scan($cursor, 'match', 'tag:*', 'count', 1000);\n\n// Update the cursor and more flag based on the results of the scan operation\n$cursor = $results[0];\n$more = $results[1] > 0;\n\n// Iterate over the keys in the current batch and delete any that have expired/decayed\nforeach ($results[1] as $key) {\n $cache->del($key);\n}\n}\n\nThis approach may not be suitable for all applications, as it involves manually iterating over the keys in the map and deleting them, which can be time-consuming and potentially disruptive to the cache system.\n" ]
[ 0 ]
[]
[]
[ "laravel" ]
stackoverflow_0074642870_laravel.txt
Q: Bootstrap5 Info button is blue with black text instead of blue with white text like their demo My full code -- Random Quote Generator Project in Codepen I used Bootstrap5 https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.0.2/css/bootstrap.min.css. The button style btn-info is not styling correctly. I know that I can manually change it, but I just want to understand why it is happening. On the Bootstrap Demo, the Button for button style btn-info is blue and white. Picture of info button default style However, when I added it to my code, the style appears this way ... In my project the button is styled a different shade of blue and has black text The primary style is showing correctly as seen here when I change my code I checked the DOM and the CSS also shows the color appearing this way. Screenshot from the inspect developer tool Thank you for reading and I would really appreciate it if you could help. P.S. How do I make it so that my pictures appear instead of links to click? I pasted them in so if there is another way please advise. Again, thank you. I checked over my code and made sure nothing was overriding the styles. There is nothing that I noticed. It is a very small project. I also tested the other default button styles, and they appeared as shown on the bootstrap documentation page. I tested my code in the Chrome Browser and it also appears this way. A: I believe the culprit is some sort of third party extension. If you go to bootstrap buttons page what you actually should see is: You can try using an incognito window or a clean browser to verify this, but I believe that some sort of extension is distorting your view. For example, I use "Dark Reader" and in my case buttons have this appearance: A: "...but I just want to understand why it is happening." By default, Bootstrap uses a utility mixin (button-variant()) which decides whether the button's text color should be light or dark, based on the button's main (background) color. This mixin calculates all color variants for the button (focus, active, disabled, hovered). The mixin is overridable (and some bootstrap themes do have their own bespoke version of it). The exact function determining the text color is called color-contrast() and takes the background color as parameter. This is in line with WCAG's directives for minimum contrast (1.4.3 and 2.1). Documented here. Please take note of the following recommendation from Bootstrap: Authors are encouraged to test their specific uses of color and, where necessary, manually modify/extend these default colors to ensure adequate color contrast ratios.
Bootstrap5 Info button is blue with black text instead of blue with white text like their demo
My full code -- Random Quote Generator Project in Codepen I used Bootstrap5 https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.0.2/css/bootstrap.min.css. The button style btn-info is not styling correctly. I know that I can manually change it, but I just want to understand why it is happening. On the Bootstrap Demo, the Button for button style btn-info is blue and white. Picture of info button default style However, when I added it to my code, the style appears this way ... In my project the button is styled a different shade of blue and has black text The primary style is showing correctly as seen here when I change my code I checked the DOM and the CSS also shows the color appearing this way. Screenshot from the inspect developer tool Thank you for reading and I would really appreciate it if you could help. P.S. How do I make it so that my pictures appear instead of links to click? I pasted them in so if there is another way please advise. Again, thank you. I checked over my code and made sure nothing was overriding the styles. There is nothing that I noticed. It is a very small project. I also tested the other default button styles, and they appeared as shown on the bootstrap documentation page. I tested my code in the Chrome Browser and it also appears this way.
[ "I believe the culprit is some sort of third party extension. If you go to bootstrap buttons page what you actually should see is:\n\nYou can try using an incognito window or a clean browser to verify this, but I believe that some sort of extension is distorting your view. For example, I use \"Dark Reader\" and in my case buttons have this appearance:\n\n", "\n\"...but I just want to understand why it is happening.\"\n\nBy default, Bootstrap uses a utility mixin (button-variant()) which decides whether the button's text color should be light or dark, based on the button's main (background) color.\nThis mixin calculates all color variants for the button (focus, active, disabled, hovered). The mixin is overridable (and some bootstrap themes do have their own bespoke version of it).\nThe exact function determining the text color is called color-contrast() and takes the background color as parameter.\nThis is in line with WCAG's directives for minimum contrast (1.4.3 and 2.1). Documented here.\nPlease take note of the following recommendation from Bootstrap:\n\nAuthors are encouraged to test their specific uses of color and, where necessary, manually modify/extend these default colors to ensure adequate color contrast ratios.\n\n" ]
[ 0, 0 ]
[]
[]
[ "bootstrap_5", "button", "css", "frontend", "sass" ]
stackoverflow_0074661967_bootstrap_5_button_css_frontend_sass.txt
Q: Expected hostname at index 7 for neo4j bolt (3.5.21) We run neo4j (3.5.21) in an EC2 instance. Today, after I restarted the server, noticed this error: Expected hostname at index 7: bolt://:7687". Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start Service start logs: Active database: graph.db Directories in use: home: /var/lib/neo4j config: /etc/neo4j logs: /var/log/neo4j plugins: /var/lib/neo4j/plugins import: /var/lib/neo4j/import data: /var/lib/neo4j/data certificates: /var/lib/neo4j/certificates run: /var/run/neo4j Starting Neo4j. WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual. Started neo4j (pid 22577). It is available at http://0.0.0.0:7474/ There may be a short delay until the server is ready. See /var/log/neo4j/neo4j.log for current status. This is what I see in neo4j.log: 2022-12-03 20:29:49.886+0000 INFO Bolt enabled on 0.0.0.0:7687. 2022-12-03 20:29:51.968+0000 INFO Started. 2022-12-03 20:29:52.121+0000 INFO Stopping... 2022-12-03 20:29:52.231+0000 INFO Stopped. 2022-12-03 20:29:52.233+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:45) at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:187) at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:124) at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:91) at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:32) Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:473) at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111) at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:180) ... 
3 more Caused by: org.neo4j.graphdb.config.InvalidSettingException: Unable to construct bolt discoverable URI using '' as hostname: Expected hostname at index 7: bolt://:7687 at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.add(DiscoverableURIs.java:133) at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.lambda$addBoltConnectorFromConfig$1(DiscoverableURIs.java:155) at java.util.Optional.ifPresent(Optional.java:159) at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.addBoltConnectorFromConfig(DiscoverableURIs.java:145) at org.neo4j.server.rest.discovery.CommunityDiscoverableURIs.communityDiscoverableURIs(CommunityDiscoverableURIs.java:38) at org.neo4j.server.CommunityNeoServer.lambda$createDBMSModule$0(CommunityNeoServer.java:99) at org.neo4j.server.modules.DBMSModule.start(DBMSModule.java:59) at org.neo4j.server.AbstractNeoServer.startModules(AbstractNeoServer.java:249) at org.neo4j.server.AbstractNeoServer.access$700(AbstractNeoServer.java:102) at org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter.start(AbstractNeoServer.java:541) at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452) ... 5 more Caused by: java.net.URISyntaxException: Expected hostname at index 7: bolt://:7687 at java.net.URI$Parser.fail(URI.java:2847) at java.net.URI$Parser.failExpecting(URI.java:2853) at java.net.URI$Parser.parseHostname(URI.java:3389) at java.net.URI$Parser.parseServer(URI.java:3235) at java.net.URI$Parser.parseAuthority(URI.java:3154) at java.net.URI$Parser.parseHierarchical(URI.java:3096) at java.net.URI$Parser.parse(URI.java:3052) at java.net.URI.<init>(URI.java:673) at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.add(DiscoverableURIs.java:128) ... 15 more 2022-12-03 20:29:52.243+0000 INFO Neo4j Server shutdown initiated by request EC2: t3.large OS: Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1092-aws x86_64) I have already tried restarting the server, restarting the service multiple times without any success. We have not changed anything on the networking (vpc, subnet, security groups, network interface, etc) Curious if there's a config I am missing. Any help will be much appreciated. A: This error message indicates that there is a problem with the Bolt configuration in your Neo4j instance. Bolt is a Neo4j network protocol that enables client applications to connect to a Neo4j database and execute queries. The specific error message you are seeing, "Expected hostname at index 7: bolt://:7687", suggests that the hostname for the Bolt protocol is not properly configured. This could be due to a mistake in the configuration file, or it could be the result of a recent change to the configuration. To fix this issue, you will need to check the configuration of your Neo4j instance and ensure that the hostname for the Bolt protocol is properly set. This can typically be done by editing the neo4j.conf file, which is usually located in the conf directory in the Neo4j installation directory. Once you have edited the configuration file and set the correct hostname for the Bolt protocol, you should be able to start your Neo4j instance without encountering this error. If the problem persists, you may need to check the Neo4j logs or seek additional help from the Neo4j community.
Expected hostname at index 7 for neo4j bolt (3.5.21)
We run neo4j (3.5.21) in an EC2 instance. Today, after I restarted the server, noticed this error: Expected hostname at index 7: bolt://:7687". Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start Service start logs: Active database: graph.db Directories in use: home: /var/lib/neo4j config: /etc/neo4j logs: /var/log/neo4j plugins: /var/lib/neo4j/plugins import: /var/lib/neo4j/import data: /var/lib/neo4j/data certificates: /var/lib/neo4j/certificates run: /var/run/neo4j Starting Neo4j. WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual. Started neo4j (pid 22577). It is available at http://0.0.0.0:7474/ There may be a short delay until the server is ready. See /var/log/neo4j/neo4j.log for current status. This is what I see in neo4j.log: 2022-12-03 20:29:49.886+0000 INFO Bolt enabled on 0.0.0.0:7687. 2022-12-03 20:29:51.968+0000 INFO Started. 2022-12-03 20:29:52.121+0000 INFO Stopping... 2022-12-03 20:29:52.231+0000 INFO Stopped. 2022-12-03 20:29:52.233+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:45) at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:187) at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:124) at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:91) at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:32) Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter@75401424' was successfully initialized, but failed to start. Please see the attached cause exception "Expected hostname at index 7: bolt://:7687". at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:473) at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111) at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:180) ... 
3 more Caused by: org.neo4j.graphdb.config.InvalidSettingException: Unable to construct bolt discoverable URI using '' as hostname: Expected hostname at index 7: bolt://:7687 at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.add(DiscoverableURIs.java:133) at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.lambda$addBoltConnectorFromConfig$1(DiscoverableURIs.java:155) at java.util.Optional.ifPresent(Optional.java:159) at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.addBoltConnectorFromConfig(DiscoverableURIs.java:145) at org.neo4j.server.rest.discovery.CommunityDiscoverableURIs.communityDiscoverableURIs(CommunityDiscoverableURIs.java:38) at org.neo4j.server.CommunityNeoServer.lambda$createDBMSModule$0(CommunityNeoServer.java:99) at org.neo4j.server.modules.DBMSModule.start(DBMSModule.java:59) at org.neo4j.server.AbstractNeoServer.startModules(AbstractNeoServer.java:249) at org.neo4j.server.AbstractNeoServer.access$700(AbstractNeoServer.java:102) at org.neo4j.server.AbstractNeoServer$ServerComponentsLifecycleAdapter.start(AbstractNeoServer.java:541) at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452) ... 5 more Caused by: java.net.URISyntaxException: Expected hostname at index 7: bolt://:7687 at java.net.URI$Parser.fail(URI.java:2847) at java.net.URI$Parser.failExpecting(URI.java:2853) at java.net.URI$Parser.parseHostname(URI.java:3389) at java.net.URI$Parser.parseServer(URI.java:3235) at java.net.URI$Parser.parseAuthority(URI.java:3154) at java.net.URI$Parser.parseHierarchical(URI.java:3096) at java.net.URI$Parser.parse(URI.java:3052) at java.net.URI.<init>(URI.java:673) at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.add(DiscoverableURIs.java:128) ... 15 more 2022-12-03 20:29:52.243+0000 INFO Neo4j Server shutdown initiated by request EC2: t3.large OS: Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1092-aws x86_64) I have already tried restarting the server, restarting the service multiple times without any success. We have not changed anything on the networking (vpc, subnet, security groups, network interface, etc) Curious if there's a config I am missing. Any help will be much appreciated.
[ "This error message indicates that there is a problem with the Bolt configuration in your Neo4j instance. Bolt is a Neo4j network protocol that enables client applications to connect to a Neo4j database and execute queries.\nThe specific error message you are seeing, \"Expected hostname at index 7: bolt://:7687\", suggests that the hostname for the Bolt protocol is not properly configured. This could be due to a mistake in the configuration file, or it could be the result of a recent change to the configuration.\nTo fix this issue, you will need to check the configuration of your Neo4j instance and ensure that the hostname for the Bolt protocol is properly set. This can typically be done by editing the neo4j.conf file, which is usually located in the conf directory in the Neo4j installation directory.\nOnce you have edited the configuration file and set the correct hostname for the Bolt protocol, you should be able to start your Neo4j instance without encountering this error. If the problem persists, you may need to check the Neo4j logs or seek additional help from the Neo4j community.\n" ]
[ 0 ]
[]
[]
[ "amazon_ec2", "cartography", "neo4j" ]
stackoverflow_0074670549_amazon_ec2_cartography_neo4j.txt
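A note on the entry above: in Neo4j 3.5 this exact "Expected hostname at index 7: bolt://:7687" failure typically means the advertised address for the Bolt connector resolves to an empty hostname. A minimal sketch of the kind of neo4j.conf change the answer describes; the value localhost is an assumption and should be replaced with the hostname clients actually connect to:

    # Fallback advertised address used by all connectors (Neo4j 3.x)
    dbms.connectors.default_advertised_address=localhost
    # Or set it for the Bolt connector specifically
    dbms.connector.bolt.advertised_address=localhost:7687

After editing the file, restart the service (for example with neo4j restart) for the setting to take effect.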
Q: get my instagram follower list with selenium I'm a beginner at programming. I'm trying to get my Instagram follower list but I get just 12 followers. I firstly tried to click the box and scroll down, but it didn't work. from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By import time from selenium.common.exceptions import NoSuchElementException from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Chrome() url= "https://www.instagram.com/" driver.get(url) time.sleep(1) kullaniciAdiGir = driver.find_element(By.XPATH, "//*[@id='loginForm']/div/div[1]/div/label/input") kullaniciAdiGir.send_keys("USERNAME") sifreGir = driver.find_element(By.XPATH, "//input[@name='password']") sifreGir.send_keys("PASS") girisButonu = driver.find_element(By.XPATH, "//*[@id='loginForm']/div/div[3]/button/div").click() time.sleep(5) driver.get(url="https://www.instagram.com/USERNAME/") time.sleep(3) kutucuk= driver.get(url="https://www.instagram.com/USERNAME/followers/") time.sleep(5) box =driver.find_element(By.XPATH, "//div[@class='xs83m0k xl56j7k x1iy3rx x1n2onr6 x1sy10c2 x1h5jrl4 xieb3on xmn8rco x1hfn5x7 x13wlyjk x1v7wizp x1l0w46t xa3vuyk xw8ag78']") box.click() driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") time.sleep(5) takipciler = driver.find_elements(By.CSS_SELECTOR, "._ab8y._ab94._ab97._ab9f._ab9k._ab9p._abcm") for takipci in takipciler: print(takipci.text) time.sleep(10) How can I fix it? How can I scroll down in the box? Thanks A: You can select multiple elements with this. #get all followers followers = driver.find_elements(By.CSS_SELECTOR, "._ab8y._ab94._ab97._ab9f._ab9k._ab9p._abcm") # loop each follower for user in followers: #do something here. Using CSS selectors, in my opinion, is much easier. Also note I used find_elements not find_element, as the latter only returns a single result. As the data is loaded dynamically, you will, in fact, have to scroll to make the site load more results. Then compare what you have against what's loaded. Probably execute some JavaScript like scroll into view on the last element in that container etc. or take a look here for an alternative solution https://www.folkstalk.com/2022/10/how-to-get-a-list-of-followers-on-instagram-python-with-code-examples.html or look at Instagram's API, probably something in there for getting your followers.
get my instagram follower list with selenium
I'm a beginner at programming. I'm trying to get my Instagram follower list but I get just 12 followers. I firstly tried to click the box and scroll down, but it didn't work. from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By import time from selenium.common.exceptions import NoSuchElementException from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Chrome() url= "https://www.instagram.com/" driver.get(url) time.sleep(1) kullaniciAdiGir = driver.find_element(By.XPATH, "//*[@id='loginForm']/div/div[1]/div/label/input") kullaniciAdiGir.send_keys("USERNAME") sifreGir = driver.find_element(By.XPATH, "//input[@name='password']") sifreGir.send_keys("PASS") girisButonu = driver.find_element(By.XPATH, "//*[@id='loginForm']/div/div[3]/button/div").click() time.sleep(5) driver.get(url="https://www.instagram.com/USERNAME/") time.sleep(3) kutucuk= driver.get(url="https://www.instagram.com/USERNAME/followers/") time.sleep(5) box =driver.find_element(By.XPATH, "//div[@class='xs83m0k xl56j7k x1iy3rx x1n2onr6 x1sy10c2 x1h5jrl4 xieb3on xmn8rco x1hfn5x7 x13wlyjk x1v7wizp x1l0w46t xa3vuyk xw8ag78']") box.click() driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") time.sleep(5) takipciler = driver.find_elements(By.CSS_SELECTOR, "._ab8y._ab94._ab97._ab9f._ab9k._ab9p._abcm") for takipci in takipciler: print(takipci.text) time.sleep(10) How can I fix it? How can I scroll down in the box? Thanks
[ "You can select multiple elements with this.\n#get all followers\nfollowers = driver.find_elements(By.CSS_SELECTOR, \"._ab8y._ab94._ab97._ab9f._ab9k._ab9p._abcm\")\n# loop each follower\nfor user in followers:\n #do something here.\n\nUsing css selectors, in my opinion, is much easier.\nAlso note I used find_elemets not find_element, as the latter only returns a single result.\nAs the data is loaded dynamically, you will, infact have to scroll to make the site load more results. Then compare what you have agaisnt whats loaded. Probably execute some javascrtipt like scroll into view on the last element in that container etc.\nor take a look here for an alternative solution\nhttps://www.folkstalk.com/2022/10/how-to-get-a-list-of-followers-on-instagram-python-with-code-examples.html\nor look at instragrams API, probably something in there for getting your followers.\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074670747_python_selenium.txt
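Following up on the open question in the entry above ("How can I scroll down in the box?"): window.scrollTo scrolls the page, not the followers dialog, so the inner container has to be scrolled instead. A minimal Python sketch of that approach; the CSS selectors here are assumptions (Instagram's obfuscated class names change frequently), so treat this as an illustration rather than a tested scraper:

    import time
    from selenium.webdriver.common.by import By

    def scroll_followers_dialog(driver, pause=2.0, max_rounds=50):
        # Hypothetical selector for the scrollable area inside the followers dialog.
        box = driver.find_element(By.CSS_SELECTOR, "div[role='dialog'] div[style*='overflow']")
        items, last_count = [], 0
        for _ in range(max_rounds):
            # Scroll the inner container (not the window) to its bottom.
            driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight;", box)
            time.sleep(pause)  # give the next batch of followers time to load
            items = driver.find_elements(By.CSS_SELECTOR, "div[role='dialog'] a[role='link']")
            if len(items) == last_count:
                break  # nothing new loaded; assume we reached the end of the list
            last_count = len(items)
        return [i.text for i in items if i.text]

The idea is simply to keep setting scrollTop on the dialog's own scroll container until the follower count stops growing, then read the loaded entries.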
Q: Firebase authentication error "The given sign-in provider is disabled" SOLVED I am trying to put authentication on firebase. I finished my code, and when I tried it, it says: The given sign-in provider is disabled for this Firebase project. Enable it in the Firebase console, under the sign-in method tab of the Auth section. I tried to search on YouTube and Google. But I didn't find anything. A: The message is telling you exactly what to do, go to the firebase console of your project and turn the corresponding auth methods on. If you are only trying to use Gmail (Google) login, then just enable Google in the sign-in methods inside the Authentication tab, like this: Also, make sure that you only put the sign-in options that you needed and turned on inside the signInOptions parameter, if Google is the only one you want, then just put firebase.auth.GoogleAuthProvider.PROVIDER_ID and remove everything else. A: The issue is telling you that you are trying to use the Firebase auth service but you have not enabled it in Firebase, so please visit the Firebase console of your project and enable the provider; it may be phone, Gmail, email/password or anything else. Just enable it, like in the image added below: enter image description here
Firebase authentication error "The given sign-in provider is disabled"
SOLVED I am trying to put authentication on firebase. I finished my code, and when I tried it, it says: The given sign-in provider is disabled for this Firebase project. Enable it in the Firebase console, under the sign-in method tab of the Auth section. I tried to search on YouTube and Google. But I didn't find anything.
[ "The message is telling you exactly what to do, go to the firebase console of your project and turn the corresponding auth methods on. If you are only trying to use Gmail (Google) login, then just enable Google in the sign-in methods inside the Authentication tab, like this:\n\nAlso, make sure that you only put the sign-in options that you needed and turned on inside the siginOptions parameter, if Google is the only one you want, then just put firebase.auth.GoogleAuthProvider.PROVIDER_ID and remove everything else.\n", "The issue is telling you that you are trying to use auth firebase service but you have not enable it from firebase, SO please visit to firebase console of your peoject and enable may be it will be phone, gmail, email password or any else just enable it.like this image added below:\nenter image description here\n" ]
[ 33, 0 ]
[]
[]
[ "firebase", "firebase_authentication", "javascript" ]
stackoverflow_0055327973_firebase_firebase_authentication_javascript.txt
Q: Python and Pandas - Distances with latitude and longitude I am trying to compare distances between points (in this case fake people) in longitudes and latitudes. I can import the data, then convert the lat and long data to radians and get the following output with pandas: lat long name Veronica Session 0.200081 0.246723 Lynne Donahoo 0.775020 -1.437292 Debbie Hanley 0.260559 -1.594263 Lisandra Earls 1.203430 -2.425601 Sybil Leef -0.029293 0.592702 From there I am trying to compare different points and get the distance between them. I came across a post that seemed to be of use (https://stackoverflow.com/a/40453439/15001056) but I am unable to get this working for my data set. Any help in calculating the distance between points would be appreciated. Ideally I'd like to expand and optimise the route once the distance function is working. A: I used the function in the answer you linked and it worked fine. Can't confirm that the distance is in the unit you need though. df['dist'] = \ haversine(df.lat.shift(), df.long.shift(), df.loc[1:, 'lat'], df.loc[1:, 'long'], to_radians=False) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Veronica Session 0.200081 0.246723 NaN Lynne Donahoo 0.775020 -1.437292 9625.250626 Debbie Hanley 0.260559 -1.594263 3385.893020 Lisandra Earls 1.203430 -2.425601 6859.234096 Sybil Leef -0.029293 0.592702 12515.848878
Python and Pandas - Distances with latitude and longitude
I am trying to compare distances between points (in this case fake people) in longitudes and latitudes. I can import the data, then convert the lat and long data to radians and get the following output with pandas: lat long name Veronica Session 0.200081 0.246723 Lynne Donahoo 0.775020 -1.437292 Debbie Hanley 0.260559 -1.594263 Lisandra Earls 1.203430 -2.425601 Sybil Leef -0.029293 0.592702 From there I am trying to compare different points and get the distance between them. I came across a post that seemed to be of use (https://stackoverflow.com/a/40453439/15001056) but I am unable to get this working for my data set. Any help in calculating the distance between points would be appreciated. Ideally I'd like to expand and optimise the route once the distance function is working.
[ "I used the function in the answer you linked and it worked fine. Can't confirm that the distance is in the unit you need though.\ndf['dist'] = \\\nhaversine(df.lat.shift(), df.long.shift(),\n df.loc[1:, 'lat'], df.loc[1:, 'long'], to_radians=False)\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nVeronica Session 0.200081 0.246723 NaN\nLynne Donahoo 0.775020 -1.437292 9625.250626\nDebbie Hanley 0.260559 -1.594263 3385.893020\nLisandra Earls 1.203430 -2.425601 6859.234096\nSybil Leef -0.029293 0.592702 12515.848878\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "traveling_salesman" ]
stackoverflow_0074670372_pandas_python_traveling_salesman.txt
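The answer in the entry above relies on the haversine helper from the linked post without reproducing it. For completeness, a self-contained version of that function is sketched below; it assumes NumPy, takes to_radians=False when the inputs are already in radians (as in the call above), and returns kilometres using the common Earth-radius approximation of 6371 km:

    import numpy as np

    def haversine(lat1, lon1, lat2, lon2, to_radians=True, earth_radius=6371):
        # Vectorised great-circle distance between pairs of points, in kilometres.
        if to_radians:
            lat1, lon1, lat2, lon2 = np.radians([lat1, lon1, lat2, lon2])
        # Haversine formula: a = sin^2(dlat/2) + cos(lat1) * cos(lat2) * sin^2(dlon/2)
        a = np.sin((lat2 - lat1) / 2.0) ** 2 + \
            np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2
        return earth_radius * 2 * np.arcsin(np.sqrt(a))

The NaN in the first output row is expected: df.lat.shift() leaves no previous row for the first person to be compared against.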
Q: Drop rails database on fly.io? Question How do I do a simple rails db:drop on fly.io? Background I tried fly ssh console -C "/app/bin/rails db:drop DISABLE_DATABASE_ENVIRONMENT_CHECK=1" and also shelling in via fly ssh myapp then /app/bin/rails db:drop DISABLE_DATABASE_ENVIRONMENT_CHECK=1 but both give the same error: # /app/bin/rails db:drop DISABLE_DATABASE_ENVIRONMENT_CHECK=1 D, [2022-12-03T07:19:04.077483 #564] DEBUG -- : (5010.1ms) DROP DATABASE IF EXISTS "myapp" PG::ObjectInUse: ERROR: database "myapp" is being accessed by other users DETAIL: There is 1 other session using the database. Couldn't drop database 'myapp' rails aborted! ActiveRecord::StatementInvalid: PG::ObjectInUse: ERROR: database "myapp" is being accessed by other users DETAIL: There is 1 other session using the database. Notes All browsers accessing the site are closed All shell sessions are ended I tried using ps -ef | grep postgres to get the process id in order to kill it, but all I see is /bin/sh: 18: ps: not found, and Related thread. Ideas It may be possible to force postgres to kill all connections to allow the db:drop to succeed (although I haven't worked out how to do that yet) It may be possible to delete all the database tables (suggested here) A: There's something else accessing your db, probably your web server. You'll need to stop your web server, then you should be able to drop the db.
Drop rails database on fly.io?
Question How do I do a simple rails db:drop on fly.io? Background I tried fly ssh console -C "/app/bin/rails db:drop DISABLE_DATABASE_ENVIRONMENT_CHECK=1" and also shelling in via fly ssh myapp then /app/bin/rails db:drop DISABLE_DATABASE_ENVIRONMENT_CHECK=1 but both give the same error: # /app/bin/rails db:drop DISABLE_DATABASE_ENVIRONMENT_CHECK=1 D, [2022-12-03T07:19:04.077483 #564] DEBUG -- : (5010.1ms) DROP DATABASE IF EXISTS "myapp" PG::ObjectInUse: ERROR: database "myapp" is being accessed by other users DETAIL: There is 1 other session using the database. Couldn't drop database 'myapp' rails aborted! ActiveRecord::StatementInvalid: PG::ObjectInUse: ERROR: database "myapp" is being accessed by other users DETAIL: There is 1 other session using the database. Notes All browsers accessing the site are closed All shell sessions are ended I tried using ps -ef | grep postgres to get the process id in order to kill it, but all I see is /bin/sh: 18: ps: not found, and Related thread. Ideas It may be possible to force postgres to kill all connections to allow the db:drop to succeed (although I haven't worked out how to do that yet) It may be possible to delete all the database tables (suggested here)
[ "There's something else accessing your db, probably your web server.\nYou'll need to stop your web server, then you should be able to drop the db.\n" ]
[ 0 ]
[]
[]
[ "fly", "postgresql", "ruby_on_rails" ]
stackoverflow_0074664822_fly_postgresql_ruby_on_rails.txt
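On the first idea listed in the entry above (forcing Postgres to kill the open connections): this is plain PostgreSQL rather than anything fly.io-specific. A sketch, assuming the database name from the error message (myapp); run it while connected to a different database such as postgres:

    -- Terminate every other session connected to "myapp" so DROP DATABASE can proceed
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE datname = 'myapp'
      AND pid <> pg_backend_pid();

As the answer notes, a running Rails server will simply reconnect after its sessions are terminated, so stopping or scaling down the app first remains the cleaner fix.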
Q: Problem with wordpress + woocommerce and minimum order but except local pickup Found a code for my exact problem and used it; while it works perfectly for the minimum order, for some reason it does not work when choosing local pickup. My current shipping setup: and the website is papabross.gr I used this code: add_action( 'woocommerce_check_cart_items', 'wc_minimum_required_order_amount' ); function wc_minimum_required_order_amount() { // HERE Your settings $minimum_amount = 25; // The minimum cart total amount $shipping_method_id = 'local_pickup:10'; // The targeted shipping method Id (exception) // Get some variables $cart_total = (float) WC()->cart->total; // Total cart amount $chosen_methods = (array) WC()->session->get( 'chosen_shipping_methods' ); // Chosen shipping method rate Ids (array) // Only when a shipping method has been chosen if ( ! empty($chosen_methods) ) { $chosen_method = explode(':', reset($chosen_methods)); // Get the chosen shipping method Id (array) $chosen_method_id = reset($chosen_method); // Get the chosen shipping method Id } // If "Local pickup" shipping method is chosen, exit (no minimun is required) if ( isset($chosen_method_id) && $chosen_method_id === $shipping_method_id ) { return; // exit } // Add an error notice is cart total is less than the minimum required if ( $cart_total < $minimum_amount ) { wc_add_notice( sprintf( __("Η ελάχιστη παραγγελία για αποστολή είναι %s (Η παραγγελία σας μέχρι στιγμής είναι %s).", "woocommerce"), // Text message wc_price( $minimum_amount ), wc_price( $cart_total ) ), 'error' ); } } Not really sure if I am choosing the correct shipping id? Can I use another hook maybe? I have used a working code I found here and everything works but the local pickup. It still asks for a minimum order. I wonder if I have used the shipping id wrongly? A: After a long search I found a code that works! I am sharing this in case someone gets the same problem: // Set a minimum amount of order based on shipping zone & shipping method before checking out add_action( 'woocommerce_check_cart_items', 'cw_min_num_products' ); // Only run in the Cart or Checkout pages function cw_min_num_products() { if( is_cart() || is_checkout() ) { global $woocommerce; // Set the minimum order amount, shipping zone & shipping method before checking out $minimum = 25; $county = array('GR'); $chosen_shipping = WC()->session->get( 'chosen_shipping_methods' )[0]; $chosen_shipping = explode(':', $chosen_shipping); // Defining var total amount $cart_tot_order = WC()->cart->subtotal; // Compare values and add an error in Cart's total amount // happens to be less than the minimum required before checking out. // Will display a message along the lines if( $cart_tot_order < $minimum && in_array( WC()->customer->get_shipping_country(), $county ) && $chosen_shipping[0] != 'local_pickup') { // Display error message wc_add_notice( sprintf( 'Δεν έχετε φτάσει ακόμη το ελάχιστο ποσό παραγγελίας των %s€.'. '<br/>Δεν υπάρχει ελάχιστη παραγγελία εάν επιλέξετε την παραλαβή από το κατάστημα.' . '<br />Το τρέχον ποσό της παραγγελίας σας είναι : %s€.', $minimum, $cart_tot_order ), 'error' ); } } }
Problem with wordpress + woocommerce and minimum order but except local pickup
Found a code for my exact problem and used it; while it works perfectly for the minimum order, for some reason it does not work when choosing local pickup. My current shipping setup: and the website is papabross.gr I used this code: add_action( 'woocommerce_check_cart_items', 'wc_minimum_required_order_amount' ); function wc_minimum_required_order_amount() { // HERE Your settings $minimum_amount = 25; // The minimum cart total amount $shipping_method_id = 'local_pickup:10'; // The targeted shipping method Id (exception) // Get some variables $cart_total = (float) WC()->cart->total; // Total cart amount $chosen_methods = (array) WC()->session->get( 'chosen_shipping_methods' ); // Chosen shipping method rate Ids (array) // Only when a shipping method has been chosen if ( ! empty($chosen_methods) ) { $chosen_method = explode(':', reset($chosen_methods)); // Get the chosen shipping method Id (array) $chosen_method_id = reset($chosen_method); // Get the chosen shipping method Id } // If "Local pickup" shipping method is chosen, exit (no minimun is required) if ( isset($chosen_method_id) && $chosen_method_id === $shipping_method_id ) { return; // exit } // Add an error notice is cart total is less than the minimum required if ( $cart_total < $minimum_amount ) { wc_add_notice( sprintf( __("Η ελάχιστη παραγγελία για αποστολή είναι %s (Η παραγγελία σας μέχρι στιγμής είναι %s).", "woocommerce"), // Text message wc_price( $minimum_amount ), wc_price( $cart_total ) ), 'error' ); } } Not really sure if I am choosing the correct shipping id? Can I use another hook maybe? I have used a working code I found here and everything works but the local pickup. It still asks for a minimum order. I wonder if I have used the shipping id wrongly?
[ "After a long search i found a code that works! Iam sharing this in case someone gets the same problem:\n// Set a minimum amount of order based on shipping zone & shipping method before checking out\n\nadd_action( 'woocommerce_check_cart_items', 'cw_min_num_products' );\n\n// Only run in the Cart or Checkout pages\nfunction cw_min_num_products() {\n \n if( is_cart() || is_checkout() ) {\n global $woocommerce;\n\n // Set the minimum order amount, shipping zone & shipping method before checking out\n $minimum = 25;\n $county = array('GR');\n $chosen_shipping = WC()->session->get( 'chosen_shipping_methods' )[0];\n $chosen_shipping = explode(':', $chosen_shipping);\n \n // Defining var total amount \n $cart_tot_order = WC()->cart->subtotal;\n \n // Compare values and add an error in Cart's total amount\n // happens to be less than the minimum required before checking out.\n // Will display a message along the lines \n \n if( $cart_tot_order < $minimum && in_array( WC()->customer->get_shipping_country(), $county ) && $chosen_shipping[0] != 'local_pickup') {\n // Display error message\n wc_add_notice( sprintf( 'Δεν έχετε φτάσει ακόμη το ελάχιστο ποσό παραγγελίας των %s€.'. '<br/>Δεν υπάρχει ελάχιστη παραγγελία εάν επιλέξετε την παραλαβή από το κατάστημα.'\n . '<br />Το τρέχον ποσό της παραγγελίας σας είναι : %s€.',\n $minimum,\n $cart_tot_order ),\n 'error' );\n }\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "function", "minimum", "shipping", "woocommerce" ]
stackoverflow_0074637527_function_minimum_shipping_woocommerce.txt
Q: React Cloud Firestore Not Fetching Data Properly I need to fetch all data from the collection but instead only getting one document. I will be grateful for you support. Below, I present the screenshots and code snippet regarding my concern. enter image description here enter image description here import './App.css'; import db from './firebase'; import React,{useState,useEffect} from 'react'; function App() { const [accounts,setAccounts]=useState([]) const fetchAccounts=async()=>{ const response=db.collection('accounts'); const data=await response.get(); data.docs.forEach(item=>{ setAccounts([...accounts,item.data()]) }) } useEffect(() => { fetchAccounts(); }, []) return ( <div className="App"> { accounts && accounts.map(account=>{ return( <div> <h1>Example</h1> <h4>{account.date}</h4> <p>{account.email}</p> </div> ) }) } </div> ); } export default App; A: Set state functions in React are async. It means that your values are not updated immediately. So when you update your state in a loop, each time it updates the initial value of the state and because of that at the end of the loop, only 1 item is added to the array. In order to fix the bug, you should use another variable and set your state after the loop: import React, { useState, useEffect } from 'react'; import './App.css'; import db from './firebase'; function App() { const [accounts, setAccounts] = useState([]); const fetchAccounts = async () => { const response = db.collection('accounts'); const data = await response.get(); const newAccounts = data.docs.map(item => item.data()); setAccounts(newAccounts); } useEffect(() => { fetchAccounts(); }, []) return ( <div className="App"> { accounts && accounts.map(account => { return( <div> <h1>Example</h1> <h4>{account.date}</h4> <p>{account.email}</p> </div> ) }) } </div> ); } export default App;
React Cloud Firestore Not Fetching Data Properly
I need to fetch all data from the collection but instead only getting one document. I will be grateful for your support. Below, I present the screenshots and code snippet regarding my concern. enter image description here enter image description here import './App.css'; import db from './firebase'; import React,{useState,useEffect} from 'react'; function App() { const [accounts,setAccounts]=useState([]) const fetchAccounts=async()=>{ const response=db.collection('accounts'); const data=await response.get(); data.docs.forEach(item=>{ setAccounts([...accounts,item.data()]) }) } useEffect(() => { fetchAccounts(); }, []) return ( <div className="App"> { accounts && accounts.map(account=>{ return( <div> <h1>Example</h1> <h4>{account.date}</h4> <p>{account.email}</p> </div> ) }) } </div> ); } export default App;
[ "Set state functions in React are async. It means that your values are not updated immediately.\nSo when you update your state in a loop, each time it updates the initial value of the state and because of that at the end of the loop, only 1 item is added to the array.\nIn order to fix the bug, you should use another variable and set your state after the loop:\nimport React, { useState, useEffect } from 'react';\nimport './App.css';\nimport db from './firebase';\n\nfunction App() {\n const [accounts, setAccounts] = useState([]);\n\n const fetchAccounts = async () => {\n const response = db.collection('accounts');\n const data = await response.get();\n const newAccounts = data.docs.map(item => item.data());\n setAccounts(newAccounts);\n }\n\n useEffect(() => {\n fetchAccounts();\n }, [])\n\n return (\n <div className=\"App\">\n {\n accounts && accounts.map(account => {\n return(\n <div>\n <h1>Example</h1>\n <h4>{account.date}</h4>\n <p>{account.email}</p>\n </div>\n )\n })\n }\n </div>\n );\n}\n\nexport default App;\n\n" ]
[ 1 ]
[]
[]
[ "google_cloud_firestore", "reactjs" ]
stackoverflow_0074670646_google_cloud_firestore_reactjs.txt
Q: Call the parent function (without super) instead of the inherited child function I wish to call the parent function (without super) instead of the inherited child function. What options do I have without using ES classes? function Queue() { this.items = []; this.enqueue = function enqueue(item) { this.items.push(item); return item; } } function AsyncQueue() { Queue.call(this); this.awaiters = new Queue(); this.enqueue = function enqueue(item) { const awaiter = this.awaiters.dequeue(); if (awaiter !== undefined) { setImmediate(() => { awaiter(item); }); } else { super.enqueue(item); } return item; } } AsyncQueue.prototype = Object.create(Queue.prototype); AsyncQueue.prototype.constructor = AsyncQueue; A: The OP after already having mastered the Queue super-call within the AsyncQueue subtype constructor ... Queue.call(this); ... needs to save the later to be used super.enqueue reference by assigning a bound version of it before reassigning / overwriting / shadowing it with the subtype's own enqueue implementation ... const superEnqueue = this.enqueue.bind(this); this.enqueue = function enqueue (item) { // ... } The next provided example code does this in addition to some other small suggested improvements ... function Queue() { // kind of "private class field" assured by local scope. const items = []; // privileged (thus non prototypal) methods // with access capability via local scope. this.dequeue = function dequeue (item) { return items.shift(item); } this.enqueue = function enqueue (item) { items.push(item); return item; } } function AsyncQueue() { // super call. Queue.call(this); // save/keep the initial `super.enqueue` reference by binding it. const superEnqueue = this.enqueue.bind(this); this.awaiters = new Queue; // overwrite/shadow the initial `super.enqueue` reference. this.enqueue = function enqueue (item) { const awaiter = this.awaiters.dequeue(); // if (awaiter !== undefined) { // - one does want to know whether // `awaiter` is a function since // it is going to be invoked. if ('function' === typeof awaiter) { // setImmediate(() => awaiter(item)); // - one does want to use a somewhat // similar but standardized way. setTimeout(awaiter, 0, item); } else { // super.enqueue(item); // forwarding by using the above's hand-nitted super-delegation. superEnqueue(item); } return item; } } AsyncQueue.prototype = Object.create(Queue.prototype); AsyncQueue.prototype.constructor = AsyncQueue; const asyncQueue = new AsyncQueue; console.log( '+ enqueue ...', asyncQueue.enqueue('the') // the ); console.log( '+ enqueue ...', asyncQueue.enqueue('quick') // quick ); console.log( '+ enqueue ...', asyncQueue.enqueue('brown') // brown ); console.log( '+ enqueue ...', asyncQueue.enqueue('fox') // fox ); console.log( '- dequeue ...', asyncQueue.dequeue() // the ); console.log( '+ enqueue ...', asyncQueue.enqueue('jumps') // jumps ); console.log( '- dequeue ...', asyncQueue.dequeue() // quick ); console.log( '+ + awaiters enqueue ...', asyncQueue.awaiters .enqueue( (...args) => console.log({ args }) ) // (...args) => console.log({ args }) ); console.log( '+ enqueue ...', asyncQueue.enqueue('over') // over ); console.log( '- dequeue ...', asyncQueue.dequeue() // brown ); // async, timeout based, logging { "args": ["over"] } .as-console-wrapper { min-height: 100%!important; top: 0; } Comparing the above implementation and the next provided one which uses class syntax and considering some comments from above ... 
"why don't you want to use classes, this is what they were made for" – Xiduzo Nov 27 at 16:00 "Avoiding it that is all. bad me. but i definitely need help here." – Gary Nov 27 at 16:46" ... the questions remains ... "What is the reason for the OP being willing to sacrifice the much more convenient and safer way of subtyping / sub-classing?" class Queue { // real private class field ... #items = []; // ... but prototypal methods. dequeue(item) { return this.#items.shift(item); } enqueue(item) { this.#items.push(item); return item; } } // a more convenient and safer subtyping / sub-classing. class AsyncQueue extends Queue { constructor() { super(); this.awaiters = new Queue; } // prototypal method again. enqueue(item) { const awaiter = this.awaiters.dequeue(); if ('function' === typeof awaiter) { setTimeout(awaiter, 0, item); } else { super.enqueue(item); } return item; } } const asyncQueue = new AsyncQueue; console.log( '+ enqueue ...', asyncQueue.enqueue('the') // the ); console.log( '+ enqueue ...', asyncQueue.enqueue('quick') // quick ); console.log( '+ enqueue ...', asyncQueue.enqueue('brown') // brown ); console.log( '+ enqueue ...', asyncQueue.enqueue('fox') // fox ); console.log( '- dequeue ...', asyncQueue.dequeue() // the ); console.log( '+ enqueue ...', asyncQueue.enqueue('jumps') // jumps ); console.log( '- dequeue ...', asyncQueue.dequeue() // quick ); console.log( '+ + awaiters enqueue ...', asyncQueue.awaiters .enqueue( (...args) => console.log({ args }) ) // (...args) => console.log({ args }) ); console.log( '+ enqueue ...', asyncQueue.enqueue('over') // over ); console.log( '- dequeue ...', asyncQueue.dequeue() // brown ); // async, timeout based, logging { "args": ["over"] } .as-console-wrapper { min-height: 100%!important; top: 0; } A: Put your methods on the prototype where they belong. Then you can directly call the parent method: function Queue() { this.items = []; } Queue.prototype.enqueue = function enqueue(item) { this.items.push(item); return item; }; function AsyncQueue() { Queue.call(this); this.awaiters = new Queue(); } AsyncQueue.prototype = Object.create(Queue.prototype); AsyncQueue.prototype.constructor = AsyncQueue; AsyncQueue.prototype.enqueue = function enqueue(item) { const awaiter = this.awaiters.dequeue(); if (awaiter !== undefined) { setImmediate(() => { awaiter(item); }); } else { Queue.prototype.enqueue.call(this, item); } return item; };
Call the parent function (without super) instead of the inherited child function
I wish to call the parent function (without super) instead of the inherited child function. What options do I have without using ES classes? function Queue() { this.items = []; this.enqueue = function enqueue(item) { this.items.push(item); return item; } } function AsyncQueue() { Queue.call(this); this.awaiters = new Queue(); this.enqueue = function enqueue(item) { const awaiter = this.awaiters.dequeue(); if (awaiter !== undefined) { setImmediate(() => { awaiter(item); }); } else { super.enqueue(item); } return item; } } AsyncQueue.prototype = Object.create(Queue.prototype); AsyncQueue.prototype.constructor = AsyncQueue;
[ "The OP after already having mastered the Queue super-call within the AsyncQueue subtype constructor ...\nQueue.call(this);\n\n... needs to save the later to be used super.enqueue reference by assigning a bound version of it before reassigning / overwriting / shadowing it with the subtype's own enqueue implementation ...\nconst superEnqueue = this.enqueue.bind(this);\n\nthis.enqueue = function enqueue (item) {\n // ...\n}\n\nThe next provided example code does this in addition to some other small suggested improvements ...\n\n\nfunction Queue() {\n // kind of \"private class field\" assured by local scope.\n const items = [];\n\n // privileged (thus non prototypal) methods\n // with access capability via local scope.\n this.dequeue = function dequeue (item) {\n return items.shift(item);\n }\n this.enqueue = function enqueue (item) {\n items.push(item);\n return item;\n }\n}\n\nfunction AsyncQueue() {\n // super call.\n Queue.call(this);\n\n // save/keep the initial `super.enqueue` reference by binding it.\n const superEnqueue = this.enqueue.bind(this);\n\n this.awaiters = new Queue; \n\n // overwrite/shadow the initial `super.enqueue` reference.\n this.enqueue = function enqueue (item) {\n\n const awaiter = this.awaiters.dequeue();\n // if (awaiter !== undefined) {\n\n // - one does want to know whether\n // `awaiter` is a function since\n // it is going to be invoked.\n if ('function' === typeof awaiter) {\n\n // setImmediate(() => awaiter(item));\n\n // - one does want to use a somewhat\n // similar but standardized way.\n setTimeout(awaiter, 0, item);\n } else {\n // super.enqueue(item);\n\n // forwarding by using the above's hand-nitted super-delegation.\n superEnqueue(item);\n }\n return item;\n }\n}\nAsyncQueue.prototype = Object.create(Queue.prototype);\nAsyncQueue.prototype.constructor = AsyncQueue;\n\nconst asyncQueue = new AsyncQueue;\n\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('the') // the\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('quick') // quick\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('brown') // brown\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('fox') // fox\n);\nconsole.log(\n '- dequeue ...',\n asyncQueue.dequeue() // the\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('jumps') // jumps\n);\nconsole.log(\n '- dequeue ...',\n asyncQueue.dequeue() // quick\n);\n\nconsole.log(\n '+ + awaiters enqueue ...',\n asyncQueue.awaiters .enqueue(\n (...args) => console.log({ args }) \n ) // (...args) => console.log({ args })\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('over') // over\n);\nconsole.log(\n '- dequeue ...',\n asyncQueue.dequeue() // brown\n);\n// async, timeout based, logging { \"args\": [\"over\"] }\n.as-console-wrapper { min-height: 100%!important; top: 0; }\n\n\n\nComparing the above implementation and the next provided one which uses class syntax and considering some comments from above ...\n\n\"why don't you want to use classes, this is what they were made for\" – Xiduzo Nov 27 at 16:00\n\n\"Avoiding it that is all. bad me. but i definitely need help here.\" – Gary Nov 27 at 16:46\"\n\n\n... the questions remains ... \"What is the reason for the OP being willing to sacrifice the much more convenient and safer way of subtyping / sub-classing?\"\n\n\nclass Queue {\n // real private class field ...\n #items = [];\n\n // ... 
but prototypal methods.\n dequeue(item) {\n return this.#items.shift(item);\n }\n enqueue(item) {\n this.#items.push(item);\n return item;\n }\n}\n\n// a more convenient and safer subtyping / sub-classing.\nclass AsyncQueue extends Queue {\n constructor() {\n super();\n\n this.awaiters = new Queue;\n }\n // prototypal method again.\n enqueue(item) {\n const awaiter = this.awaiters.dequeue();\n\n if ('function' === typeof awaiter) {\n\n setTimeout(awaiter, 0, item);\n } else {\n super.enqueue(item);\n }\n return item;\n }\n}\nconst asyncQueue = new AsyncQueue;\n\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('the') // the\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('quick') // quick\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('brown') // brown\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('fox') // fox\n);\nconsole.log(\n '- dequeue ...',\n asyncQueue.dequeue() // the\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('jumps') // jumps\n);\nconsole.log(\n '- dequeue ...',\n asyncQueue.dequeue() // quick\n);\n\nconsole.log(\n '+ + awaiters enqueue ...',\n asyncQueue.awaiters .enqueue(\n (...args) => console.log({ args }) \n ) // (...args) => console.log({ args })\n);\nconsole.log(\n '+ enqueue ...',\n asyncQueue.enqueue('over') // over\n);\nconsole.log(\n '- dequeue ...',\n asyncQueue.dequeue() // brown\n);\n// async, timeout based, logging { \"args\": [\"over\"] }\n.as-console-wrapper { min-height: 100%!important; top: 0; }\n\n\n\n", "Put your methods on the prototype where they belong. Then you can directly call the parent method:\nfunction Queue() {\n this.items = [];\n}\nQueue.prototype.enqueue = function enqueue(item) {\n this.items.push(item);\n return item;\n};\n\nfunction AsyncQueue() {\n Queue.call(this);\n this.awaiters = new Queue();\n}\n\nAsyncQueue.prototype = Object.create(Queue.prototype);\nAsyncQueue.prototype.constructor = AsyncQueue;\nAsyncQueue.prototype.enqueue = function enqueue(item) {\n const awaiter = this.awaiters.dequeue();\n if (awaiter !== undefined) {\n setImmediate(() => {\n awaiter(item);\n });\n } else {\n Queue.prototype.enqueue.call(this, item);\n }\n return item;\n};\n\n" ]
[ 0, 0 ]
[]
[]
[ "function", "inheritance", "javascript", "super" ]
stackoverflow_0074591396_function_inheritance_javascript_super.txt
Q: Dynamically choose variable template based on branch trigger I have a folder structure as follows: -> variables -> dev variables.yml -> pp variables.yml I then have a azure-pipeline.yml that extends a pipeline template called template.yml. In my template.yml, I want to use logic to determine which template variable file I want to use. So if my $(Build.SourceBranch) starts with fix/*, I want to use dev, or else use pp Ideally, this logic would work, but does not because in azure pipeline at run time cannot do this: - ${{ if or(StartsWith(variables['Build.SourceBranch'], 'refs/head/features/'),StartsWith(variables['Build.SourceBranch'], 'refs/head/fix/')) }}: - template: variables/dev/variables.yml - ${{ else }}: - template: variables/pp/variables.yml I'm not sure what to do at this point. I don't want to use parameters because the list would be too large. A: I think you're right, this is because the template expansion happens very early in the run of a pipeline. To solve your issue, you could introduce a second YAML file which only triggers fix/* and uses variables/dev/variables.yml.
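Two hedged additions here. First, Build.SourceBranch values carry the plural refs/heads/ prefix, so the refs/head/... comparisons in the snippet above would never match regardless of template expansion timing. Second, a sketch of the suggested workaround, a separate pipeline definition that only fires for the relevant branches (the file name and the variableFile parameter are illustrative, not taken from the question):

# azure-pipeline-fix.yml: hypothetical second pipeline for fix/* and features/*
trigger:
  branches:
    include:
      - fix/*
      - features/*

extends:
  template: template.yml
  parameters:
    variableFile: variables/dev/variables.yml  # assumed parameter in template.yml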
Dynamically choose variable template based on branch trigger
I have a folder structure as follows: -> variables -> dev variables.yml -> pp variables.yml I then have a azure-pipeline.yml that extends a pipeline template called template.yml. In my template.yml, I want to use logic to determine which template variable file I want to use. So if my $(Build.SourceBranch) starts with fix/*, I want to use dev, or else use pp Ideally, this logic would work, but does not because in azure pipeline at run time cannot do this: - ${{ if or(StartsWith(variables['Build.SourceBranch'], 'refs/head/features/'),StartsWith(variables['Build.SourceBranch'], 'refs/head/fix/')) }}: - template: variables/dev/variables.yml - ${{ else }}: - template: variables/pp/variables.yml I'm not sure what to do at this point. I don't want to use parameters because the list would be too large.
[ "I think you're right, this is because the template expansion happens very early in the run of a pipeline.\nTo solve your issue, you could introduce a second YAML file which only triggers fix/* and uses variables/dev/variables.yml.\n" ]
[ 0 ]
[]
[]
[ "azure_pipelines", "azure_pipelines_yaml" ]
stackoverflow_0074661807_azure_pipelines_azure_pipelines_yaml.txt
Q: Mutation Observer to observe parent DOM Element whose childs are changing I am working on fetching some data from a website whose data is in a mutation state. I want to observe every change that appears on a table row. I have attached my mutation observer with the table body whose rows are changing. As every change appears on a table column therefore my code only gives me the column that is changing whereas I need a table row whose column is changing. I am unable to fetch that mutated row. Please tell me what change makes me read a mutated row from a table body. $( window ).ready(function() { // Select the node that will be observed for mutations let tableBody = document.getElementById('wtbl8bb9e9b5-1b29-4f8d-909f-2837b994bfc7').children[1]; // Options for the observer (which mutations to observe) let options = { childList: true, attributes: false, characterData: false, subtree: true, attributeOldValue: false, characterDataOldValue: false }; //Callback function to execute when mutations are observed let callback = function(mutationsList, observer) { for(const mutation of mutationsList) { console.log("MutationRecord: "+MutationRecord); if (mutation.type === 'childList') { console.log(mutation.target); console.log(mutation) console.log('A child node has been added or removed.'); console.log(mutation.addedNodes); mutation.addedNodes.forEach(function(added_node) { //if(added_node.id == 'child') { console.log('#child has been added'); console.log(added_node); }); } else if (mutation.type === 'attributes') { console.log('The ' + mutation.attributeName + ' attribute was modified.'); } } }; // Create an observer instance linked to the callback function let observer = new MutationObserver(callback); // Start observing the target node for configured mutations observer.observe(tableBody, options); }); A: the mutation partent is .target, the .target parent is .parentNode. mutation.target.parentNode;
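For completeness, a hedged sketch of applying that answer inside the existing callback: climb from the mutated node to its enclosing row. Element.closest() is used because mutation.target can be a cell or a deeper descendant, in which case .parentNode alone may not be the <tr>:

let callback = function(mutationsList) {
  for (const mutation of mutationsList) {
    if (mutation.type !== 'childList') continue;
    const target = mutation.target;                   // the node whose children changed
    const row = target.closest                        // Elements expose closest()
      ? target.closest('tr')                          // walk up to the nearest row
      : (target.parentElement ? target.parentElement.closest('tr') : null);
    if (row) {
      console.log('Mutated row index:', row.rowIndex, '-', row.textContent);
    }
  }
};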
Mutation Observer to observe parent DOM Element whose children are changing
I am working on fetching some data from a website whose data is in a mutation state. I want to observe every change that appears on a table row. I have attached my mutation observer with the table body whose rows are changing. As every change appears on a table column therefore my code only gives me the column that is changing whereas I need a table row whose column is changing. I am unable to fetch that mutated row. Please tell me what change makes me read a mutated row from a table body. $( window ).ready(function() { // Select the node that will be observed for mutations let tableBody = document.getElementById('wtbl8bb9e9b5-1b29-4f8d-909f-2837b994bfc7').children[1]; // Options for the observer (which mutations to observe) let options = { childList: true, attributes: false, characterData: false, subtree: true, attributeOldValue: false, characterDataOldValue: false }; //Callback function to execute when mutations are observed let callback = function(mutationsList, observer) { for(const mutation of mutationsList) { console.log("MutationRecord: "+MutationRecord); if (mutation.type === 'childList') { console.log(mutation.target); console.log(mutation) console.log('A child node has been added or removed.'); console.log(mutation.addedNodes); mutation.addedNodes.forEach(function(added_node) { //if(added_node.id == 'child') { console.log('#child has been added'); console.log(added_node); }); } else if (mutation.type === 'attributes') { console.log('The ' + mutation.attributeName + ' attribute was modified.'); } } }; // Create an observer instance linked to the callback function let observer = new MutationObserver(callback); // Start observing the target node for configured mutations observer.observe(tableBody, options); });
[ "the mutation partent is .target, the .target parent is .parentNode.\nmutation.target.parentNode;\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "mutation_observers" ]
stackoverflow_0068948782_javascript_mutation_observers.txt
Q: How do I instantiate a C# service and use its data in a Blazor component I am using Blazor and I am a beginner. Currently I have several Blazor components that all need the same JSON data (data fetched from one URL, e.g., http://localhost/todo) Instead of fetching the same URL inside all my components and duplicating my code, I decided to create a Service that fetchs the URL and share this service's output accross my components. Here is the service, and all it does fetch a URL and return a JSON object (at least that is what I am trying to do) using TodoApp.Models.TodoTestModel; using Newtonsoft.Json; using System.Net.Http.Headers; namespace Todo.Services { public class Todo { public string TodoURL { get; set; } public object result { get; set; } async public Task<object> TodoTypes() { using (HttpClient client = new HttpClient()) { // HTTP header stuff HttpResponseMessage response = client.GetAsync(TodoURL).Result; response.EnsureSuccessStatusCode(); string responseData = await response.Content.ReadAsStringAsync(); result = JsonConvert.DeserializeObject<TodoTestModel>(responseData); return result; } } } } } The idea is call this service and give my components the output as <TodoComponent ToDoResult="@result"> </ToDoComponent> But I am having a problem when instanciating the object, e.g,: @page "/" @using TodoApp.Services; @inject TodoApp.Services.Todo Todo; <h5> List of todos </h5> @code { Todo td = new Todo(); td.TodoURL = ".." // does not work } In short I am trying to do the following: Instanciate the class Provide it a URL Get the JSON data (result) to pass it into my TodoComponent Thanks for the help A: To instantiate a C# class, you can use the new keyword followed by the class name, like this: Todo td = new Todo(); This creates a new instance of the Todo class. You can then set the TodoURL property on the instance like this: td.TodoURL = "http://localhost/todo"; To get the JSON data from the service, you can call the TodoTypes method on the Todo instance, like this: var result = await td.TodoTypes(); This returns an object that contains the JSON data. You can then pass this object to your TodoComponent like this: <TodoComponent ToDoResult="@result"> </TodoComponent> In your TodoComponent, you can access the JSON data in the ToDoResult property, which you can then use in your component code. Overall, your code might look something like this: @page "/" @using TodoApp.Services; @inject TodoApp.Services.Todo Todo; <h5> List of todos </h5> @code { Todo td = new Todo(); td.TodoURL = "http://localhost/todo"; var result = await td.TodoTypes(); } <TodoComponent ToDoResult="@result"> </TodoComponent> A: vaeon you can directly use client.GetJsonAsync and avoid the deserialization step in Vikram's example.The returned result set is not matching with the expected typeof List. var response = await client.GetFromJsonAsync(TodoURL); return response; A: To call the TodoTypes method on your Todo service, you will need to use the await keyword to handle the asynchronous nature of the method. 
Here is an example of how you can use your Todo service: @page "/" @using TodoApp.Services; @inject TodoApp.Services.Todo Todo; <h5> List of todos </h5> @code { private TodoTestModel result; protected override async Task OnInitializedAsync() { Todo td = new Todo(); td.TodoURL = "http://localhost/todo"; result = await td.TodoTypes(); } } Here, the OnInitializedAsync method is an async method that is called when the component is initialized, and it is where you can call the TodoTypes method and store the result in the result variable. Then, you can use the result variable in your TodoComponent like this: <TodoComponent ToDoResult="@result"> </TodoComponent> In your Todo service, you can make the following changes to make it more efficient: Change the return type of the TodoTypes method to Task instead of Task, so that the return type matches the type of the object you are deserializing the JSON into. Use the HttpClient instance as a field in the Todo class, instead of creating a new instance every time the TodoTypes method is called. This will avoid creating a new HttpClient every time and can help improve performance. Here is an example of how your Todo service can be updated: using TodoApp.Models.TodoTestModel; using Newtonsoft.Json; using System.Net.Http.Headers; namespace Todo.Services { public class Todo { private readonly HttpClient _client; public Todo() { _client = new HttpClient(); } public string TodoURL { get; set; } async public Task<TodoTestModel> TodoTypes() { // HTTP header stuff HttpResponseMessage response = await _client.GetAsync(TodoURL); response.EnsureSuccessStatusCode(); string responseData = await response.Content.ReadAsStringAsync(); return JsonConvert.DeserializeObject<TodoTestModel>(responseData); } } } I hope this helps! Let me know if you have any questions. My donation addresses: BTC:178vgzZkLNV9NPxZiQqabq5crzBSgQWmvs,ETH:0x99753577c4ae89e7043addf7abbbdf7258a74697 A: Here is an example of how you could refactor your code to use dependency injection to inject the Todo service into your component and call the TodoTypes method to get the JSON data. First, you will need to register the Todo service with your application's dependency injection container in the ConfigureServices method of your Startup class: public void ConfigureServices(IServiceCollection services) { // Register the Todo service with the dependency injection container services.AddScoped<Todo>(); // Other code... } Then, in your component, you can use the @inject directive to inject the Todo service and call the TodoTypes method to get the JSON data. Here is an example: @page "/" @using TodoApp.Services; @inject Todo Todo; <h5> List of todos </h5> @code { TodoTestModel result; protected override async Task OnInitializedAsync() { // Call the TodoTypes method to get the JSON data result = await Todo.TodoTypes(); } } This code will call the TodoTypes method and assign the JSON data to the result variable, which you can then pass to your TodoComponent as follows: <TodoComponent ToDoResult="@result"> </ToDoComponent> A: Add below code before await builder.Build().RunAsync(); in the Program.cs file. Program.cs: ... 
using Microsoft.AspNetCore.Components.Web; using Microsoft.AspNetCore.Components.WebAssembly.Hosting; var builder = WebAssemblyHostBuilder.CreateDefault(args); builder.RootComponents.Add<App>("#app"); builder.RootComponents.Add<HeadOutlet>("head::after"); builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) }); // Register services builder.Services.AddScoped<TodoService>(); // <----- Add this line await builder.Build().RunAsync(); TodoService.cs: public class TodoService { public string TodoURL { get; set; } public async Task<List<TodoTestModel>> GetTodoTypesAsync() { using (HttpClient client = new HttpClient()) { // HTTP header stuff HttpResponseMessage response = await client.GetAsync(TodoURL); response.EnsureSuccessStatusCode(); string responseData = await response.Content.ReadAsStringAsync(); return JsonConvert.DeserializeObject<List<TodoTestModel>>(responseData); } } } In TodoComponent.razor file, add the below code: @page "/" @using TodoApp.Services; @inject TodoService TodoService; <h5> List of todos </h5> @if(todoList is not null && todoList.Any()) { <ul> @foreach(var item in todoList) { <li>@item.Name</li> } </ul> } @code { private List<TodoTestModel> todoList; protected override async Task OnInitializedAsync() { TodoServices.TodoURL = ""; // TODO: set the API url todoList = await TodoService.GetTodoTypesAsync(); await base.OnInitializedAsync(); } } A: Program.cs using Microsoft.AspNetCore.Components.Web; using Microsoft.AspNetCore.Components.WebAssembly.Hosting; using BlazorApp1; var builder = WebAssemblyHostBuilder.CreateDefault(args); builder.RootComponents.Add<App>("#app"); builder.RootComponents.Add<HeadOutlet>("head::after"); builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) }); builder.Services.AddScoped(sp => new TodoService()); await builder.Build().RunAsync(); TodoService.cs namespace BlazorApp1; public class TodoService { public TodoService() { } public List<string> GetTodos() { //write your code here that fetches the todos using the TodosUrl throw new NotImplementedException(); } public string TodosUrl { get; set; } = "https://google.com"; } TodosComponent @inject TodoService TodoService <ul> @foreach (var todo in Todos) { <li>@todo</li> } </ul> @code { public List<string> Todos { get; set; } protected override void OnInitialized() { TodoService.TodosUrl = "https://stackoverflow.com"; Todos = TodoService.GetTodos(); base.OnInitialized(); } } A: To use the Todo service in your code, you need to first inject it into your page or component using the @inject directive. This allows you to use the service in your code. You can then create an instance of the Todo class and set the TodoURL property to the URL you want to fetch data from. Finally, you can call the TodoTypes() method to fetch the data from the URL and store the result in a variable. 
Here is an example of how you can use the Todo service in your code: @page "/" @using TodoApp.Services; @inject TodoApp.Services.Todo Todo; <h5> List of todos </h5> @code { object result; protected override async Task OnInitializedAsync() { // Create an instance of the Todo service Todo td = new Todo(); // Set the URL to fetch data from td.TodoURL = "http://localhost/todo"; // Fetch the data from the URL and store the result in the 'result' variable result = await td.TodoTypes(); } A: Here is an example of how you could instantiate a C# service and use its data in a Blazor component: @page "/" @using TodoApp.Services; @inject TodoApp.Services.Todo Todo; <h5> List of todos </h5> @code { Todo td; protected override async Task OnInitAsync() { td = new Todo(); td.TodoURL = ".."; result = await td.TodoTypes(); } } In this example, the Todo service is injected into the component, and then an instance of the Todo class is created and its TodoURL property is set in the OnInitAsync method. Then, the TodoTypes method is called and its result is assigned to the result variable. This result variable can then be passed to other components as needed. It's worth noting that this example uses the async and await keywords to make the code more readable and to avoid blocking the main thread while the data is being fetched. You may need to add a using statement for the System.Threading.Tasks namespace in order for this code to compile. A: You can use dependency injection to make the Todo service available in your page. Dependency injection allows you to manage the lifetime of a service and access it across different components. First, you will need to register the Todo service with the dependency injection container. You can do this by adding the following code to the ConfigureServices method in the Startup class: services.AddScoped<Todo>(); This registers the Todo service with the dependency injection container and specifies that a new instance of the Todo class should be created for each request. Next, you can use the @inject directive in your page to make the Todo service available: @page "/" @using TodoApp.Services; @inject TodoApp.Services.Todo Todo; <h5> List of todos </h5> @code { // The Todo service is available as a property of the Todo variable var result = await Todo.TodoTypes(); } You can then pass the result variable to your TodoComponent using the ToDoResult parameter: <TodoComponent ToDoResult="@result"> </ToDoComponent>
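Most answers above construct HttpClient by hand inside the service; a hedged alternative is a typed client, which lets the framework manage the handler lifetime. The service and model names follow the question, while the base address and the relative "todo" path are assumptions:

// In Program.cs (requires using System.Net.Http.Json; for GetFromJsonAsync)
builder.Services.AddHttpClient<TodoService>(client =>
    client.BaseAddress = new Uri("http://localhost/")); // assumed API base URL

public class TodoService
{
    private readonly HttpClient _client;
    public TodoService(HttpClient client) => _client = client;  // injected, managed client

    public Task<TodoTestModel?> GetTodosAsync() =>
        _client.GetFromJsonAsync<TodoTestModel>("todo");        // relative to BaseAddress
}

Components then @inject TodoService as in the answers above and await GetTodosAsync() from OnInitializedAsync().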
How do I instantiate a C# service and use its data in a Blazor component
I am using Blazor and I am a beginner. Currently I have several Blazor components that all need the same JSON data (data fetched from one URL, e.g., http://localhost/todo) Instead of fetching the same URL inside all my components and duplicating my code, I decided to create a Service that fetchs the URL and share this service's output accross my components. Here is the service, and all it does fetch a URL and return a JSON object (at least that is what I am trying to do) using TodoApp.Models.TodoTestModel; using Newtonsoft.Json; using System.Net.Http.Headers; namespace Todo.Services { public class Todo { public string TodoURL { get; set; } public object result { get; set; } async public Task<object> TodoTypes() { using (HttpClient client = new HttpClient()) { // HTTP header stuff HttpResponseMessage response = client.GetAsync(TodoURL).Result; response.EnsureSuccessStatusCode(); string responseData = await response.Content.ReadAsStringAsync(); result = JsonConvert.DeserializeObject<TodoTestModel>(responseData); return result; } } } } } The idea is call this service and give my components the output as <TodoComponent ToDoResult="@result"> </ToDoComponent> But I am having a problem when instanciating the object, e.g,: @page "/" @using TodoApp.Services; @inject TodoApp.Services.Todo Todo; <h5> List of todos </h5> @code { Todo td = new Todo(); td.TodoURL = ".." // does not work } In short I am trying to do the following: Instanciate the class Provide it a URL Get the JSON data (result) to pass it into my TodoComponent Thanks for the help
[ "To instantiate a C# class, you can use the new keyword followed by the class name, like this:\nTodo td = new Todo();\n\nThis creates a new instance of the Todo class. You can then set the TodoURL property on the instance like this:\ntd.TodoURL = \"http://localhost/todo\";\n\n\nTo get the JSON data from the service, you can call the TodoTypes method on the Todo instance, like this:\nvar result = await td.TodoTypes();\n\nThis returns an object that contains the JSON data. You can then pass this object to your TodoComponent like this:\n<TodoComponent ToDoResult=\"@result\"> </TodoComponent>\n\nIn your TodoComponent, you can access the JSON data in the ToDoResult property, which you can then use in your component code.\nOverall, your code might look something like this:\n@page \"/\"\n@using TodoApp.Services; \n@inject TodoApp.Services.Todo Todo; \n\n<h5> List of todos </h5>\n\n@code {\n Todo td = new Todo(); \n td.TodoURL = \"http://localhost/todo\";\n var result = await td.TodoTypes();\n}\n\n<TodoComponent ToDoResult=\"@result\"> </TodoComponent>\n\n", "vaeon you can directly use client.GetJsonAsync and avoid the deserialization step in Vikram's example.The returned result set is not matching with the expected typeof List.\n var response = await client.GetFromJsonAsync(TodoURL); \n return response;\n\n", "To call the TodoTypes method on your Todo service, you will need to use the await keyword to handle the asynchronous nature of the method. Here is an example of how you can use your Todo service:\n@page \"/\"\n@using TodoApp.Services; \n@inject TodoApp.Services.Todo Todo; \n\n<h5> List of todos </h5>\n\n@code {\n private TodoTestModel result;\n\n protected override async Task OnInitializedAsync()\n {\n Todo td = new Todo(); \n td.TodoURL = \"http://localhost/todo\";\n result = await td.TodoTypes();\n }\n}\n\nHere, the OnInitializedAsync method is an async method that is called when the component is initialized, and it is where you can call the TodoTypes method and store the result in the result variable. Then, you can use the result variable in your TodoComponent like this:\n<TodoComponent ToDoResult=\"@result\"> </TodoComponent>\n\nIn your Todo service, you can make the following changes to make it more efficient:\n\nChange the return type of the TodoTypes method to\nTask instead of Task, so that the return type\nmatches the type of the object you are deserializing the JSON into.\nUse the HttpClient instance as a field in the Todo class, instead of\ncreating a new instance every time the TodoTypes method is called.\nThis will avoid creating a new HttpClient every time and can help\nimprove performance.\n\nHere is an example of how your Todo service can be updated:\nusing TodoApp.Models.TodoTestModel;\nusing Newtonsoft.Json;\nusing System.Net.Http.Headers;\n\nnamespace Todo.Services\n{\n public class Todo \n {\n private readonly HttpClient _client;\n\n public Todo()\n {\n _client = new HttpClient();\n }\n\n public string TodoURL { get; set; }\n\n async public Task<TodoTestModel> TodoTypes()\n {\n // HTTP header stuff\n\n HttpResponseMessage response = await _client.GetAsync(TodoURL);\n response.EnsureSuccessStatusCode();\n string responseData = await response.Content.ReadAsStringAsync();\n return JsonConvert.DeserializeObject<TodoTestModel>(responseData);\n }\n }\n}\n\nI hope this helps! 
Let me know if you have any questions.\nMy donation addresses: BTC:178vgzZkLNV9NPxZiQqabq5crzBSgQWmvs,ETH:0x99753577c4ae89e7043addf7abbbdf7258a74697\n", "Here is an example of how you could refactor your code to use dependency injection to inject the Todo service into your component and call the TodoTypes method to get the JSON data.\nFirst, you will need to register the Todo service with your application's dependency injection container in the ConfigureServices method of your Startup class:\npublic void ConfigureServices(IServiceCollection services)\n{\n // Register the Todo service with the dependency injection container\n services.AddScoped<Todo>();\n\n // Other code...\n}\n\nThen, in your component, you can use the @inject directive to inject the Todo service and call the TodoTypes method to get the JSON data. Here is an example:\n@page \"/\"\n@using TodoApp.Services;\n@inject Todo Todo;\n\n<h5> List of todos </h5>\n\n@code {\n TodoTestModel result;\n\n protected override async Task OnInitializedAsync()\n {\n // Call the TodoTypes method to get the JSON data\n result = await Todo.TodoTypes();\n }\n}\n\nThis code will call the TodoTypes method and assign the JSON data to the result variable, which you can then pass to your TodoComponent as follows:\n<TodoComponent ToDoResult=\"@result\"> </ToDoComponent>\n\n", "Add below code before await builder.Build().RunAsync(); in the Program.cs file.\nProgram.cs:\n...\nusing Microsoft.AspNetCore.Components.Web;\nusing Microsoft.AspNetCore.Components.WebAssembly.Hosting;\n\nvar builder = WebAssemblyHostBuilder.CreateDefault(args);\nbuilder.RootComponents.Add<App>(\"#app\");\nbuilder.RootComponents.Add<HeadOutlet>(\"head::after\");\n\nbuilder.Services.AddScoped(sp => new HttpClient { BaseAddress = new \nUri(builder.HostEnvironment.BaseAddress) });\n\n// Register services\nbuilder.Services.AddScoped<TodoService>(); // <----- Add this line\n\nawait builder.Build().RunAsync();\n\nTodoService.cs:\npublic class TodoService\n{\n public string TodoURL { get; set; }\n \n public async Task<List<TodoTestModel>> GetTodoTypesAsync()\n {\n using (HttpClient client = new HttpClient())\n {\n // HTTP header stuff\n\n HttpResponseMessage response = await client.GetAsync(TodoURL);\n response.EnsureSuccessStatusCode();\n string responseData = await response.Content.ReadAsStringAsync();\n return JsonConvert.DeserializeObject<List<TodoTestModel>>(responseData);\n }\n }\n}\n\nIn TodoComponent.razor file, add the below code:\n@page \"/\"\n@using TodoApp.Services; \n@inject TodoService TodoService; \n\n<h5> List of todos </h5>\n@if(todoList is not null && todoList.Any())\n{\n <ul>\n @foreach(var item in todoList)\n {\n <li>@item.Name</li>\n }\n </ul>\n}\n\n@code {\n private List<TodoTestModel> todoList;\n\n protected override async Task OnInitializedAsync()\n {\n TodoServices.TodoURL = \"\"; // TODO: set the API url\n todoList = await TodoService.GetTodoTypesAsync();\n await base.OnInitializedAsync();\n }\n}\n\n", "Program.cs\nusing Microsoft.AspNetCore.Components.Web;\nusing Microsoft.AspNetCore.Components.WebAssembly.Hosting;\nusing BlazorApp1;\n\nvar builder = WebAssemblyHostBuilder.CreateDefault(args);\nbuilder.RootComponents.Add<App>(\"#app\");\nbuilder.RootComponents.Add<HeadOutlet>(\"head::after\");\n\nbuilder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) });\nbuilder.Services.AddScoped(sp => new TodoService());\n\nawait builder.Build().RunAsync();\n\nTodoService.cs\nnamespace BlazorApp1;\n\npublic class 
TodoService\n{\n public TodoService()\n {\n }\n\n public List<string> GetTodos()\n {\n //write your code here that fetches the todos using the TodosUrl\n throw new NotImplementedException();\n }\n \n public string TodosUrl { get; set; } = \"https://google.com\";\n}\n\nTodosComponent\n@inject TodoService TodoService\n\n<ul>\n @foreach (var todo in Todos)\n {\n <li>@todo</li>\n }\n</ul>\n\n@code {\n public List<string> Todos { get; set; }\n protected override void OnInitialized()\n {\n TodoService.TodosUrl = \"https://stackoverflow.com\";\n Todos = TodoService.GetTodos();\n base.OnInitialized();\n }\n\n}\n\n\n", "To use the Todo service in your code, you need to first inject it into your page or component using the @inject directive. This allows you to use the service in your code.\nYou can then create an instance of the Todo class and set the TodoURL property to the URL you want to fetch data from. Finally, you can call the TodoTypes() method to fetch the data from the URL and store the result in a variable.\nHere is an example of how you can use the Todo service in your code:\n@page \"/\"\n@using TodoApp.Services;\n@inject TodoApp.Services.Todo Todo;\n\n<h5> List of todos </h5>\n@code {\nobject result;\nprotected override async Task OnInitializedAsync()\n{\n// Create an instance of the Todo service\nTodo td = new Todo(); \n\n// Set the URL to fetch data from\ntd.TodoURL = \"http://localhost/todo\";\n\n// Fetch the data from the URL and store the result in the 'result' \nvariable\nresult = await td.TodoTypes();\n}\n\n", "Here is an example of how you could instantiate a C# service and use its data in a Blazor component:\n@page \"/\"\n@using TodoApp.Services; \n@inject TodoApp.Services.Todo Todo; \n\n<h5> List of todos </h5>\n\n@code {\n Todo td; \n\n protected override async Task OnInitAsync()\n {\n td = new Todo(); \n td.TodoURL = \"..\";\n result = await td.TodoTypes();\n }\n}\n\nIn this example, the Todo service is injected into the component, and then an instance of the Todo class is created and its TodoURL property is set in the OnInitAsync method. Then, the TodoTypes method is called and its result is assigned to the result variable. This result variable can then be passed to other components as needed.\nIt's worth noting that this example uses the async and await keywords to make the code more readable and to avoid blocking the main thread while the data is being fetched. You may need to add a using statement for the System.Threading.Tasks namespace in order for this code to compile.\n", "You can use dependency injection to make the Todo service available in your page. Dependency injection allows you to manage the lifetime of a service and access it across different components.\nFirst, you will need to register the Todo service with the dependency injection container. 
You can do this by adding the following code to the ConfigureServices method in the Startup class:\nservices.AddScoped<Todo>();\n\nThis registers the Todo service with the dependency injection container and specifies that a new instance of the Todo class should be created for each request.\nNext, you can use the @inject directive in your page to make the Todo service available:\n@page \"/\"\n@using TodoApp.Services;\n@inject TodoApp.Services.Todo Todo;\n\n<h5> List of todos </h5>\n\n@code {\n // The Todo service is available as a property of the Todo variable\n var result = await Todo.TodoTypes();\n}\n\nYou can then pass the result variable to your TodoComponent using the ToDoResult parameter:\n<TodoComponent ToDoResult=\"@result\"> </ToDoComponent>\n\n" ]
[ 1, 1, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "blazor", "blazor_component", "blazor_server_side", "c#" ]
stackoverflow_0074566472_blazor_blazor_component_blazor_server_side_c#.txt
Q: Installing pylint with PyCharm I tried to install pylint with the pylint plugin in PyCharm. I created a blank project in a venv. I am using: pylint 2.14.0 astroid 2.11.5 Python 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)] PyCharm 2022.1.2 I tried: specifying the path of the exe explicitly in the plugin-setting (C:\git\pythonProject\venv\Scripts\pylint.exe) different Python version (3.9.6) different pylint version new venv reinstall pylint plugin Re-installing PyCharm Restart PC I get the following Error, when i try to run -help in the console > (venv) PS C:\git\pythonProject1> pylint -help UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte screenshot Has anyone a clue? Thank you in advance A: need to use pip install pylint then generate a pylintrc file like: pylint --generate-rcfile > ~/.pylintrc and move it into your projects folder then find the path to pylint for example: venv/bin/pylint then go to preferences -> pylint in PyCharm and set the pylint executable to be that path and click test Note i have found the SonarLint plugin easier to work with. I wasn't seeing a way to navigate from the lint error to the setting to turn off the link but i saw that with SonarLint
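A hedged diagnosis the answer does not cover: 0xff as the very first byte is the start of a UTF-16 byte-order mark, and in Windows PowerShell 5.x the > redirection writes UTF-16. If a pylintrc was ever produced there with pylint --generate-rcfile > .pylintrc, pylint can fail exactly like this when it later reads the file as UTF-8. An illustrative check and fix (PowerShell 5.x syntax):

# Inspect the suspect file's first bytes (FF FE indicates UTF-16 LE):
Get-Content .pylintrc -Encoding Byte -TotalCount 2

# Regenerate the rc file as UTF-8 instead of letting ">" pick the encoding:
pylint --generate-rcfile | Out-File -Encoding utf8 .pylintrc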
Installing pylint with PyCharm
I tried to install pylint with the pylint plugin in PyCharm. I created a blank project in a venv. I am using: pylint 2.14.0 astroid 2.11.5 Python 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)] PyCharm 2022.1.2 I tried: specifying the path of the exe explicitly in the plugin-setting (C:\git\pythonProject\venv\Scripts\pylint.exe) different Python version (3.9.6) different pylint version new venv reinstall pylint plugin Re-installing PyCharm Restart PC I get the following Error, when i try to run -help in the console > (venv) PS C:\git\pythonProject1> pylint -help UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte screenshot Has anyone a clue? Thank you in advance
[ "need to use\npip install pylint\n\nthen generate a pylintrc file like:\npylint --generate-rcfile > ~/.pylintrc\n\nand move it into your projects folder\nthen find the path to pylint for example:\nvenv/bin/pylint\n\nthen go to preferences -> pylint in PyCharm and set the pylint executable to be that path and click test\nNote i have found the SonarLint plugin easier to work with. I wasn't seeing a way to navigate from the lint error to the setting to turn off the link but i saw that with SonarLint\n" ]
[ 0 ]
[]
[]
[ "pylint" ]
stackoverflow_0072475362_pylint.txt
Q: How do i have the if statement become effective after the 30 seconds I want the if statement working after the 30 seconds but that isn't the case right now. I heard people recommend threading but that's just way too complicated for me. import os import time print('your computer will be shutdown if you dont play my game or if you lose it') shutdown = input("What is 12 times 13? you have 30 seconds.") time.sleep(30) if shutdown == '156': exit() elif shutdown == '': print('you didnt even try') and os.system("shutdown /s /t 1") else: os.system("shutdown /s /t 1") I tried threading already but that is really complicated and I'm expecting to print you didn't even try and shutdown after the 30 seconds if you didn't input anything A: I recommend to use threads because it makes the thing much easier here. Try this: import threading import time user_input = "" ANSWER_TIME = 30 def time_over(): match user_input: case '156': exit(0) case '': print('you didnt even try') os.system("shutdown /s /t 1") case _: os.system("shutdown /s /t 1") exit_timer = threading.Timer(ANSWER_TIME, time_over) print('your computer will be shutdown if you dont play my game or if you lose it') exit_timer.start() user_input = input("What is 12 times 13? you have 30 seconds.") Note that I replaced the if-else statements with match-cases, which are IMHO more readable. I also replaced your and statement (if you want to execute two statements, just write them below each other). A: I would use inputimeout https://pypi.org/project/inputimeout/ from inputimeout import inputimeout, TimeoutOccurred import os if __name__ == "__main__": print('your computer will be shutdown if you dont play my game or if you lose it') try: answer = inputimeout(prompt="What is 12 times 13? you have 30 seconds.", timeout=30) except TimeoutOccurred: os.system("shutdown /s /t 1") if answer == '': print('you didnt even try') os.system("shutdown /s /t 1")
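A third hedged option that sticks to the standard library: run input() on a daemon thread and join it with a timeout. The thread may linger until Enter is pressed, but the main flow regains control after 30 seconds (sketch; the str | None annotation needs Python 3.10+):

import os
import threading

def timed_input(prompt: str, timeout: float) -> str | None:
    answer: list[str] = []
    worker = threading.Thread(target=lambda: answer.append(input(prompt)), daemon=True)
    worker.start()
    worker.join(timeout)                   # block at most `timeout` seconds
    return answer[0] if answer else None   # None means the time ran out

reply = timed_input("What is 12 times 13? you have 30 seconds. ", 30)
if reply == '156':
    raise SystemExit
print("you didnt even try" if reply in (None, '') else "wrong answer")
os.system("shutdown /s /t 1")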
How do I have the if statement become effective after the 30 seconds
I want the if statement working after the 30 seconds but that isn't the case right now. I heard people recommend threading but that's just way too complicated for me. import os import time print('your computer will be shutdown if you dont play my game or if you lose it') shutdown = input("What is 12 times 13? you have 30 seconds.") time.sleep(30) if shutdown == '156': exit() elif shutdown == '': print('you didnt even try') and os.system("shutdown /s /t 1") else: os.system("shutdown /s /t 1") I tried threading already but that is really complicated and I'm expecting to print you didn't even try and shutdown after the 30 seconds if you didn't input anything
[ "I recommend to use threads because it makes the thing much easier here. Try this:\nimport threading\nimport time\n\nuser_input = \"\"\nANSWER_TIME = 30\n\ndef time_over():\n match user_input:\n case '156':\n exit(0)\n case '':\n print('you didnt even try')\n os.system(\"shutdown /s /t 1\")\n case _:\n os.system(\"shutdown /s /t 1\")\n\nexit_timer = threading.Timer(ANSWER_TIME, time_over)\nprint('your computer will be shutdown if you dont play my game or if you lose it')\nexit_timer.start()\n\nuser_input = input(\"What is 12 times 13? you have 30 seconds.\")\n\n\nNote that I replaced the if-else statements with match-cases, which are IMHO more readable. I also replaced your and statement (if you want to execute two statements, just write them below each other).\n", "I would use inputimeout\nhttps://pypi.org/project/inputimeout/\nfrom inputimeout import inputimeout, TimeoutOccurred\nimport os\n\nif __name__ == \"__main__\":\n print('your computer will be shutdown if you dont play my game or if you lose it')\n try:\n answer = inputimeout(prompt=\"What is 12 times 13? you have 30 seconds.\", timeout=30)\n except TimeoutOccurred:\n os.system(\"shutdown /s /t 1\")\n if answer == '':\n print('you didnt even try')\n os.system(\"shutdown /s /t 1\")\n\n" ]
[ 3, 3 ]
[]
[]
[ "python" ]
stackoverflow_0074670704_python.txt
Q: How to prevent router.push() from caching the route to which it was redirected in Next.js middleware? I'm having a problem I think with client-side navigation through the router and the use of middleware. Somehow the router is remembering the first time it was redirected and the following times it navigates directly to that route without going through the middleware. This stops happening when I refresh the browser. It also doesn't happen if I run in a development environment. I would like to force the router to enter the middleware each time to re-evaluate where to redirect. To reproduce: Go to / from the browser search bar repeatedly. You have a 50% chance of being redirected to /dashboard and 50% to /profile because of middleware.ts Go to /login and click on Login button. This will make a router.push('/') and be redirected to either /dashboard or /profile. Click on Logout button. This will make a router.push('/login'). The next times Login will always redirect to the same route. This is my middleware.ts: export function middleware(request: NextRequest) { if (request.nextUrl.pathname === '/') { if (Math.random() > 0.5) { return NextResponse.redirect(new URL('/dashboard', request.url)) } else { return NextResponse.redirect(new URL('/profile', request.url)) } } } My login.tsx: import { NextPage } from 'next' import { useRouter } from 'next/router' const LoginPage: NextPage<{}> = () => { const router = useRouter() const login = () => { router.push('/') } return ( <div> <h1>Login</h1> <button onClick={login}>Login</button> </div> ) } export default LoginPage And Dashboard/Profile Page: import { NextPage } from 'next' import { useRouter } from 'next/router' const DashboardPage: NextPage<{}> = () => { const router = useRouter() const logout = () => { router.push('/login') } return ( <div> <h1>DashboardPage</h1> <button onClick={logout}>Logout</button> </div> ) } export default DashboardPage This is the site displayed in Vercel: https://nextjs-router-clientside-test.vercel.app/ And this is the full code: https://github.com/LautaroRiveiro/nextjs-router-clientside-test A: This is the default, expected behaviour as described in this GH issue #30938. This is expected since we are caching HEAD requests to reduce the amount of requests as much as possible which can still be problematic (#30901). However, you can stop caching HEAD requests and force their revalidation on client-side navigation by setting the x-middleware-cache header with a no-cache value (see related PR #32767) before redirecting in the middleware. export function middleware(request: NextRequest) { if (request.nextUrl.pathname === '/') { const redirectUrl = Math.random() > 0.5 ? '/dashboard' : '/profile' const response = NextResponse.redirect(new URL(redirectUrl, request.url)) response.headers.set('x-middleware-cache', 'no-cache') // Disables middleware caching return response; } } A: With the newest version of Next.js 13, is it still possible to opt-out of prefetch cache on middleware? Using the example @juliomalves posted does not seem to work anymore.
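One hedged addition to the accepted answer: if the middleware only ever acts on /, a standard matcher keeps it (and the no-cache header logic) from running on every other route:

// middleware.ts: scope the middleware to "/" only
export const config = {
  matcher: '/',
};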
How to prevent router.push() from caching the route to which it was redirected in Next.js middleware?
I'm having a problem I think with client-side navigation through the router and the use of middleware. Somehow the router is remembering the first time it was redirected and the following times it navigates directly to that route without going through the middleware. This stops happening when I refresh the browser. It also doesn't happen if I run in a development environment. I would like to force the router to enter the middleware each time to re-evaluate where to redirect. To reproduce: Go to / from the browser search bar repeatedly. You have a 50% chance of being redirected to /dashboard and 50% to /profile because of middleware.ts Go to /login and click on Login button. This will make a router.push('/') and be redirected to either /dashboard or /profile. Click on Logout button. This will make a router.push('/login'). The next times Login will always redirect to the same route. This is my middleware.ts: export function middleware(request: NextRequest) { if (request.nextUrl.pathname === '/') { if (Math.random() > 0.5) { return NextResponse.redirect(new URL('/dashboard', request.url)) } else { return NextResponse.redirect(new URL('/profile', request.url)) } } } My login.tsx: import { NextPage } from 'next' import { useRouter } from 'next/router' const LoginPage: NextPage<{}> = () => { const router = useRouter() const login = () => { router.push('/') } return ( <div> <h1>Login</h1> <button onClick={login}>Login</button> </div> ) } export default LoginPage And Dashboard/Profile Page: import { NextPage } from 'next' import { useRouter } from 'next/router' const DashboardPage: NextPage<{}> = () => { const router = useRouter() const logout = () => { router.push('/login') } return ( <div> <h1>DashboardPage</h1> <button onClick={logout}>Logout</button> </div> ) } export default DashboardPage This is the site displayed in Vercel: https://nextjs-router-clientside-test.vercel.app/ And this is the full code: https://github.com/LautaroRiveiro/nextjs-router-clientside-test
[ "This is the default, expected behaviour as described in this GH issue #30938.\n\nThis is expected since we are caching HEAD requests to reduce the amount of requests as much as possible which can still be problematic (#30901).\n\nHowever, you can stop caching HEAD requests and force their revalidation on client-side navigation by setting the x-middleware-cache header with a no-cache value (see related PR #32767) before redirecting in the middleware.\nexport function middleware(request: NextRequest) {\n if (request.nextUrl.pathname === '/') {\n const redirectUrl = Math.random() > 0.5 ? '/dashboard' : '/profile'\n const response = NextResponse.redirect(new URL(redirectUrl, request.url))\n response.headers.set('x-middleware-cache', 'no-cache') // Disables middleware caching\n return response;\n }\n}\n\n", "With the newest version of Next.js 13, is it still possible to opt-out of prefetch cache on middleware?\nUsing the example @juliomalves posted does not seem to work anymore.\n" ]
[ 2, 0 ]
[]
[]
[ "javascript", "next.js" ]
stackoverflow_0073083262_javascript_next.js.txt
Q: Filter grid by date range I'm currently using ASP.NET Core Kendo Grid as: @(Html.Kendo().Grid(Model) .Name("grid") .ToolBar(t => { t.Search(); t.Custom().Name("Clear").IconClass("mdi mdi-refresh"); }) .DataSource(dataSource => dataSource .Custom() .PageSize(10) ) .Pageable(pager => pager .Position(GridPagerPosition.Bottom) ) .Sortable() .Events(events => events.DataBound("onDataBound") ) .Columns(columns => { ... columns.Bound(x => x.StartDate) .Format("{0:MM/dd/yyyy}") .Title("From Date"); columns.Bound(x => x.EndDate) .Format("{0:MM/dd/yyyy}") .Title("To Date"); }) ) As you can see I have a StartDate and EndDate columns, I want to add a DateRange calendar filter, if the date is between those dates, filter the table. How can I achieve that with .Net Core Grid? Regards A: To add a date range filter to a Kendo Grid use the Filterable() method and specify a DateRangePicker filter. @(Html.Kendo().Grid(Model) .Name("grid") .ToolBar(t => { t.Search(); t.Custom().Name("Clear").IconClass("mdi mdi-refresh"); }) .DataSource(dataSource => dataSource .Custom() .PageSize(10) ) .Pageable(pager => pager .Position(GridPagerPosition.Bottom) ) .Sortable() .Filterable(filterable => filterable .Extra(false) .Operators(operators => operators .ForString(str => str.Clear() .Contains("Contains") .IsEqualTo("Is equal to") ) .ForDate(date => date.Clear() .IsEqualTo("Is equal to") .IsGreaterThanOrEqualTo("Is after or equal to") .IsLessThanOrEqualTo("Is before or equal to") ) .ForDateRange(range => range.Clear() .IsEqualTo("Is equal to") ) ) ) .Events(events => events.DataBound("onDataBound") ) .Columns(columns => { ... columns.Bound(x => x.StartDate) .Format("{0:MM/dd/yyyy}") .Title("From Date") .Filterable(filterable => filterable .Extra(false) .Operators(operators => operators .ForDate(date => date.Clear() .IsEqualTo("Is equal to") .IsGreaterThanOrEqualTo("Is after or equal to") .IsLessThanOrEqualTo("Is before or equal to") ) ) ); columns.Bound(x => x.EndDate) .Format("{0:MM/dd/yyyy}") .Title("To Date") .Filterable(filterable => filterable .Extra(false) .Operators(operators => operators .ForDate(date => date.Clear() .IsEqualTo("Is equal to") .IsGreaterThanOrEqualTo("Is after or equal to") .IsLessThanOrEqualTo("Is before or equal to") )
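Note that the answer above is cut off mid-expression (the EndDate column's Filterable chain is never closed), and I cannot confirm that a ForDateRange operator builder exists in the Kendo MVC/Core wrappers, so treat that snippet as a sketch at best. A hedged client-side alternative that reliably yields a between filter is to wire a Kendo DateRangePicker's change event to the grid's data source; the #range element and the grid name "grid" are assumptions:

$("#range").kendoDateRangePicker({
    change: function () {
        var range = this.range();                      // { start, end } or null
        if (!range || !range.start || !range.end) return;
        $("#grid").data("kendoGrid").dataSource.filter({
            logic: "and",
            filters: [
                { field: "StartDate", operator: "gte", value: range.start },
                { field: "EndDate", operator: "lte", value: range.end }
            ]
        });
    }
});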
Filter grid by date range
I'm currently using ASP.NET Core Kendo Grid as: @(Html.Kendo().Grid(Model) .Name("grid") .ToolBar(t => { t.Search(); t.Custom().Name("Clear").IconClass("mdi mdi-refresh"); }) .DataSource(dataSource => dataSource .Custom() .PageSize(10) ) .Pageable(pager => pager .Position(GridPagerPosition.Bottom) ) .Sortable() .Events(events => events.DataBound("onDataBound") ) .Columns(columns => { ... columns.Bound(x => x.StartDate) .Format("{0:MM/dd/yyyy}") .Title("From Date"); columns.Bound(x => x.EndDate) .Format("{0:MM/dd/yyyy}") .Title("To Date"); }) ) As you can see I have a StartDate and EndDate columns, I want to add a DateRange calendar filter, if the date is between those dates, filter the table. How can I achieve that with .Net Core Grid? Regards
[ "To add a date range filter to a Kendo Grid use the Filterable() method and specify a DateRangePicker filter.\n@(Html.Kendo().Grid(Model)\n .Name(\"grid\")\n .ToolBar(t =>\n {\n t.Search();\n t.Custom().Name(\"Clear\").IconClass(\"mdi mdi-refresh\");\n })\n .DataSource(dataSource => dataSource\n .Custom()\n .PageSize(10)\n )\n .Pageable(pager => pager\n .Position(GridPagerPosition.Bottom)\n )\n .Sortable()\n .Filterable(filterable => filterable\n .Extra(false)\n .Operators(operators => operators\n .ForString(str => str.Clear()\n .Contains(\"Contains\")\n .IsEqualTo(\"Is equal to\")\n )\n .ForDate(date => date.Clear()\n .IsEqualTo(\"Is equal to\")\n .IsGreaterThanOrEqualTo(\"Is after or equal to\")\n .IsLessThanOrEqualTo(\"Is before or equal to\")\n )\n .ForDateRange(range => range.Clear()\n .IsEqualTo(\"Is equal to\")\n )\n )\n )\n .Events(events => events.DataBound(\"onDataBound\")\n )\n .Columns(columns =>\n {\n ...\n columns.Bound(x => x.StartDate)\n .Format(\"{0:MM/dd/yyyy}\")\n .Title(\"From Date\")\n .Filterable(filterable => filterable\n .Extra(false)\n .Operators(operators => operators\n .ForDate(date => date.Clear()\n .IsEqualTo(\"Is equal to\")\n .IsGreaterThanOrEqualTo(\"Is after or equal to\")\n .IsLessThanOrEqualTo(\"Is before or equal to\")\n )\n )\n );\n columns.Bound(x => x.EndDate)\n .Format(\"{0:MM/dd/yyyy}\")\n .Title(\"To Date\")\n .Filterable(filterable => filterable\n .Extra(false)\n .Operators(operators => operators\n .ForDate(date => date.Clear()\n .IsEqualTo(\"Is equal to\")\n .IsGreaterThanOrEqualTo(\"Is after or equal to\")\n .IsLessThanOrEqualTo(\"Is before or equal to\")\n )\n \n\n" ]
[ 0 ]
[]
[]
[ "asp.net_core", "asp.net_mvc", "kendo_asp.net_mvc", "kendo_grid" ]
stackoverflow_0074622238_asp.net_core_asp.net_mvc_kendo_asp.net_mvc_kendo_grid.txt
Q: SwiftUI List with @FocusState and focus change handling I want to use a List, @FocusState to track focus, and .onChanged(of: focus) to ensure the currently focused field is visible with ScrollViewReader. The problem is: when everything is setup together the List rebuilds constantly during scrolling making the scrolling not as smooth as it needs to be. I found out that the List rebuilds on scrolling when I attach .onChanged(of: focus). The issue is gone if I replace List with ScrollView, but I like appearance of List, I need sections support, and I need editing capabilities (e.g. delete, move items), so I need to stick to List view. I used Self._printChanges() in order to see what makes the body to rebuild itself when scrolling and the output was like: ContentView: _focus changed. ContentView: _focus changed. ContentView: _focus changed. ContentView: _focus changed. ... And nothing was printed from the closure attached to .onChanged(of: focus). Below is the simplified example, the smoothness of scrolling is not a problem in this example, however, once the List content is more or less complex the smooth scrolling goes away and this is really due to .onChanged(of: focus) :( Question: Are there any chances to listen for focus changes and not provoke the List to rebuild itself on scrolling? struct ContentView: View { enum Field: Hashable { case fieldId(Int) } @FocusState var focus: Field? @State var text: String = "" var body: some View { List { let _ = Self._printChanges() ForEach(0..<100) { TextField("Enter the text for \($0)", text: $text) .id(Field.fieldId($0)) .focused($focus, equals: .fieldId($0)) } } .onChange(of: focus) { _ in print("Not printed unless focused manually") } } } A: I recommend to consider separation of list row content into standalone view and use something like focus "selection" approach. Having FocusState internal of each row prevents parent view from unneeded updates (something like pre-"set up" I assume). Tested with Xcode 13.4 / iOS 15.5 struct ContentView: View { enum Field: Hashable { case fieldId(Int) } @State private var inFocus: Field? var body: some View { List { let _ = Self._printChanges() ForEach(0..<100, id: \.self) { ExtractedView(i: $0, inFocus: $inFocus) } } .onChange(of: inFocus) { _ in print("Not printed unless focused manually") } } struct ExtractedView: View { let i: Int @Binding var inFocus: Field? @State private var text: String = "" @FocusState private var focus: Bool // << internal !! var body: some View { TextField("Enter the text for \(i)", text: $text) .focused($focus) .id(Field.fieldId(i)) .onChange(of: focus) { _ in inFocus = .fieldId(i) // << report selection outside } } } } A: if you add printChanges to the beginning of the body, you can monitor the views and see that they are being rendered by SwiftUI (all of them on each focus lost and focus gained) ... var body: some View { let _ = Self._printChanges() // <<< ADD THIS TO SEE RE-RENDER ... so after allot of testing, it seams that the problem is with .onChange, once you add it SwiftUI will redraw all the Textfields, the only BYPASS i found is to keep using the deprecated API as it works perfectly, and renders only the two textfields (the one that lost focus, and the one that gained the focus), so the code should look this: struct ContentView: View { enum Field: Hashable { case fieldId(Int) } // @FocusState var focus: Field? 
/// NO NEED @State var text: String = "" var body: some View { List { let _ = Self._printChanges() ForEach(0..<100) { TextField("Enter the text for \($0)", text: $text) .id(Field.fieldId($0)) // .focused($focus, equals: .fieldId($0)) /// NO NEED } } // .onChange(of: focus) { _ in /// NO NEED // print("Not printed unless focused manually") /// NO NEED // } /// NO NEED .focusable(true, onFocusChange: { focusNewValue in print("Only textfields that lost/gained focus will print this") }) } }
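Addendum: a hedged sketch of wiring the reported focus back into ScrollViewReader, which was the original goal. It builds on the extracted-row answer above; the id passed to scrollTo reuses the Field values the rows already set, everything else is illustrative:

    ScrollViewReader { proxy in
        List {
            ForEach(0..<100, id: \.self) {
                ExtractedView(i: $0, inFocus: $inFocus)
            }
        }
        .onChange(of: inFocus) { field in
            // Scroll the newly focused row into view without rebuilding every row.
            if let field = field {
                withAnimation { proxy.scrollTo(field) }
            }
        }
    }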
SwiftUI List with @FocusState and focus change handling
I want to use a List, @FocusState to track focus, and .onChanged(of: focus) to ensure the currently focused field is visible with ScrollViewReader. The problem is: when everything is setup together the List rebuilds constantly during scrolling making the scrolling not as smooth as it needs to be. I found out that the List rebuilds on scrolling when I attach .onChanged(of: focus). The issue is gone if I replace List with ScrollView, but I like appearance of List, I need sections support, and I need editing capabilities (e.g. delete, move items), so I need to stick to List view. I used Self._printChanges() in order to see what makes the body to rebuild itself when scrolling and the output was like: ContentView: _focus changed. ContentView: _focus changed. ContentView: _focus changed. ContentView: _focus changed. ... And nothing was printed from the closure attached to .onChanged(of: focus). Below is the simplified example, the smoothness of scrolling is not a problem in this example, however, once the List content is more or less complex the smooth scrolling goes away and this is really due to .onChanged(of: focus) :( Question: Are there any chances to listen for focus changes and not provoke the List to rebuild itself on scrolling? struct ContentView: View { enum Field: Hashable { case fieldId(Int) } @FocusState var focus: Field? @State var text: String = "" var body: some View { List { let _ = Self._printChanges() ForEach(0..<100) { TextField("Enter the text for \($0)", text: $text) .id(Field.fieldId($0)) .focused($focus, equals: .fieldId($0)) } } .onChange(of: focus) { _ in print("Not printed unless focused manually") } } }
[ "I recommend to consider separation of list row content into standalone view and use something like focus \"selection\" approach. Having FocusState internal of each row prevents parent view from unneeded updates (something like pre-\"set up\" I assume).\nTested with Xcode 13.4 / iOS 15.5\nstruct ContentView: View {\n\n enum Field: Hashable {\n case fieldId(Int)\n }\n\n @State private var inFocus: Field?\n\n var body: some View {\n List {\n let _ = Self._printChanges()\n ForEach(0..<100, id: \\.self) {\n ExtractedView(i: $0, inFocus: $inFocus)\n }\n }\n .onChange(of: inFocus) { _ in\n print(\"Not printed unless focused manually\")\n }\n }\n\n struct ExtractedView: View {\n let i: Int\n @Binding var inFocus: Field?\n\n @State private var text: String = \"\"\n @FocusState private var focus: Bool // << internal !!\n\n var body: some View {\n TextField(\"Enter the text for \\(i)\", text: $text)\n .focused($focus)\n .id(Field.fieldId(i))\n .onChange(of: focus) { _ in\n inFocus = .fieldId(i) // << report selection outside\n }\n }\n }\n}\n\n", "if you add printChanges to the beginning of the body, you can monitor the views and see that they are being rendered by SwiftUI (all of them on each focus lost and focus gained)\n ...\n\nvar body: some View {\nlet _ = Self._printChanges() // <<< ADD THIS TO SEE RE-RENDER\n\n ...\n\nso after allot of testing, it seams that the problem is with .onChange, once you add it SwiftUI will redraw all the Textfields,\nthe only BYPASS i found is to keep using the deprecated API as it works perfectly, and renders only the two textfields (the one that lost focus, and the one that gained the focus),\nso the code should look this:\nstruct ContentView: View {\n enum Field: Hashable {\n case fieldId(Int)\n }\n \n // @FocusState var focus: Field? /// NO NEED\n @State var text: String = \"\"\n \n var body: some View {\n List {\n let _ = Self._printChanges()\n ForEach(0..<100) {\n TextField(\"Enter the text for \\($0)\", text: $text)\n .id(Field.fieldId($0))\n // .focused($focus, equals: .fieldId($0)) /// NO NEED\n }\n }\n// .onChange(of: focus) { _ in /// NO NEED\n// print(\"Not printed unless focused manually\") /// NO NEED\n// } /// NO NEED\n .focusable(true, onFocusChange: { focusNewValue in\n print(\"Only textfileds that lost/gained focus will print this\")\n })\n }\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "swiftui" ]
stackoverflow_0073111917_swiftui.txt
Q: Visitor Pattern with Templated Visitor This is a follow up on No user defined conversion when using standard variants and visitor pattern I need to implement a templated version of the visitor pattern as shown below, however it looks like the accept function has to be virtual which is not possible. Could you please help me? #include <variant> #include <iostream> class Visitable //I need this to be non-templated (no template for Visitable!!): Otherwise I could use CRTP to solve this issue. { public: virtual ~Visitable() = default; template<typename Visitor> /*virtual*/ double accept(Visitor* visitor) //I can't do virtual here. { throw("I don't want to end up here"); }; protected: Visitable() = default; }; struct DoubleVisitable : public Visitable { template<typename Visitor> double accept(Visitor* visitor) { return visitor->visit(*this); }; double m_val = 1.0; }; struct StringVisitable : public Visitable { template<typename Visitor> double accept(Visitor* visitor) { return visitor->visit(*this); }; double m_val = 0.0; }; template<typename... args> class Visitor { public: virtual ~Visitor() = default; virtual double visit(typename std::variant<args...> visitable) { auto op = [this](typename std::variant<args...> visitable) -> double { return this->apply(visitable); }; return std::visit(std::ref(op), visitable); } virtual double apply(typename std::variant<args...> visitable) = 0; Visitor() = default; }; class SubVisitor : public Visitor<DoubleVisitable, StringVisitable> { public: virtual ~SubVisitor() = default; SubVisitor() : Visitor<DoubleVisitable, StringVisitable>() {}; virtual double apply(std::variant<DoubleVisitable, StringVisitable> visitable) override { return std::visit( [this](auto&& v){return process(v);}, visitable ); }; virtual double process(const StringVisitable& visitable) { std::cout << "STRING HANDLED" << std::endl; return 0.0; } virtual double process(const DoubleVisitable& visitable) { std::cout << "DOUBLE HANDLED" << std::endl; return 1.0; } }; int main(int argc, char* argv[]) { SubVisitor visitor; DoubleVisitable visitable; visitable.accept(&visitor); //I want to be doing this: Visitable* doubleV = new DoubleVisitable(); doubleV->accept(&visitor); delete doubleV; return 1; } The code is here Link. Could you please help me make this not throw but collapses to the right child class DoubleVisitable or StringVisitable. It looks like I need virtual templated member function which is not possible as mentioned here Can a class member function template be virtual? A: In C++, there are no template virtual functions. This does not exist. What you can do is either: have an accept method for each class you'd like to visit (each descendant) have a std::variant<> of implementations instead of inheritance. A: It says in the question that Visitable cannot be a template. But is it allowed to inherit from a template class? And do you know all the possible visitors? If so, you could add a new template class that Visitable inherits from and that declares virtual methods for all the visitors: template <typename ... T> class AcceptMethods {}; template <> class AcceptMethods<> {}; template <typename First, typename ... 
Rest> class AcceptMethods<First, Rest...> : public AcceptMethods<Rest...> { public: virtual double accept(First* ) = 0; virtual ~AcceptMethods() {} }; typedef AcceptMethods<SubVisitor> AllAcceptMethods; class Visitable : public AllAcceptMethods { public: virtual ~Visitable() = default; }; In the above code, we are just listing SubVisitor, but AcceptMethods is variadic so it could be typedef AcceptMethods<A, B, C, D, AndSoOn> AllAcceptMethods;. Then we add another template class WithGenericAcceptMethod whose purpose is to implement the accept methods declared by AcceptMethods by calling a template method acceptT: template <typename This, typename ... T> class WithGenericAcceptMethod {}; template <typename This> class WithGenericAcceptMethod<This, AcceptMethods<>> : public Visitable {}; template <typename This, typename First, typename ... Rest> class WithGenericAcceptMethod<This, AcceptMethods<First, Rest...>> : public WithGenericAcceptMethod<This, AcceptMethods<Rest...>> { public: double accept(First* visitor) override { return ((This*)this)->template acceptT<First>(visitor); } virtual ~WithGenericAcceptMethod() {} }; This class takes as first argument a This parameter in the spirit of CRTP. Then we can now let the specific visitable classes inherit from WithGenericAcceptMethod and implement the template acceptT method: struct DoubleVisitable : public WithGenericAcceptMethod<DoubleVisitable, AllAcceptMethods> { template<typename Visitor> double acceptT(Visitor* visitor) { return visitor->visit(*this); }; double m_val = 1.0; }; struct StringVisitable : public WithGenericAcceptMethod<StringVisitable, AllAcceptMethods> { template<typename Visitor> double acceptT(Visitor* visitor) { return visitor->visit(*this); }; double m_val = 0.0; };
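Addendum: a hedged sketch of the first answer's std::variant alternative (no inheritance and no virtual accept; the type and visitor names mirror the question, but the wiring is illustrative, not the asker's required design):

    #include <iostream>
    #include <variant>
    #include <vector>

    struct DoubleVisitable { double m_val = 1.0; };
    struct StringVisitable { double m_val = 0.0; };

    using Visitable = std::variant<DoubleVisitable, StringVisitable>;

    struct SubVisitor {
        double operator()(const DoubleVisitable&) const { std::cout << "DOUBLE HANDLED\n"; return 1.0; }
        double operator()(const StringVisitable&) const { std::cout << "STRING HANDLED\n"; return 0.0; }
    };

    int main() {
        std::vector<Visitable> items{ DoubleVisitable{}, StringVisitable{} };
        for (const auto& v : items)
            std::visit(SubVisitor{}, v); // dispatch resolved by the variant, no virtual accept needed
        return 0;
    }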
Visitor Pattern with Templated Visitor
This is a follow up on No user defined conversion when using standard variants and visitor pattern I need to implement a templated version of the visitor pattern as shown below, however it looks like the accept function has to be virtual which is not possible. Could you please help me? #include <variant> #include <iostream> class Visitable //I need this to be non-templated (no template for Visitable!!): Otherwise I could use CRTP to solve this issue. { public: virtual ~Visitable() = default; template<typename Visitor> /*virtual*/ double accept(Visitor* visitor) //I can't do virtual here. { throw("I don't want to end up here"); }; protected: Visitable() = default; }; struct DoubleVisitable : public Visitable { template<typename Visitor> double accept(Visitor* visitor) { return visitor->visit(*this); }; double m_val = 1.0; }; struct StringVisitable : public Visitable { template<typename Visitor> double accept(Visitor* visitor) { return visitor->visit(*this); }; double m_val = 0.0; }; template<typename... args> class Visitor { public: virtual ~Visitor() = default; virtual double visit(typename std::variant<args...> visitable) { auto op = [this](typename std::variant<args...> visitable) -> double { return this->apply(visitable); }; return std::visit(std::ref(op), visitable); } virtual double apply(typename std::variant<args...> visitable) = 0; Visitor() = default; }; class SubVisitor : public Visitor<DoubleVisitable, StringVisitable> { public: virtual ~SubVisitor() = default; SubVisitor() : Visitor<DoubleVisitable, StringVisitable>() {}; virtual double apply(std::variant<DoubleVisitable, StringVisitable> visitable) override { return std::visit( [this](auto&& v){return process(v);}, visitable ); }; virtual double process(const StringVisitable& visitable) { std::cout << "STRING HANDLED" << std::endl; return 0.0; } virtual double process(const DoubleVisitable& visitable) { std::cout << "DOUBLE HANDLED" << std::endl; return 1.0; } }; int main(int argc, char* argv[]) { SubVisitor visitor; DoubleVisitable visitable; visitable.accept(&visitor); //I want to be doing this: Visitable* doubleV = new DoubleVisitable(); doubleV->accept(&visitor); delete doubleV; return 1; } The code is here Link. Could you please help me make this not throw but collapses to the right child class DoubleVisitable or StringVisitable. It looks like I need virtual templated member function which is not possible as mentioned here Can a class member function template be virtual?
[ "In C++, there are no template virtual functions. This does not exist. What you can do is either:\n\nhave an accept method for each class you'd like to visit (each descendant)\nhave a std::variant<> of implementations instead of inheritance.\n\n", "It says in the question that Visitable cannot be a template. But is it allowed to inherit from a template class? And do you know all the possible visitors? If so, you could add a new template class that Visitable inherits from and that declares virtual methods for all the visitors:\ntemplate <typename ... T> class AcceptMethods {};\ntemplate <> class AcceptMethods<> {};\ntemplate <typename First, typename ... Rest>\nclass AcceptMethods<First, Rest...> : public AcceptMethods<Rest...> {\npublic:\n virtual double accept(First* ) = 0;\n virtual ~AcceptMethods() {}\n};\n\ntypedef AcceptMethods<SubVisitor> AllAcceptMethods;\n\nclass Visitable : public AllAcceptMethods\n{\npublic:\n virtual ~Visitable() = default;\n};\n\nIn the above code, we are just listing SubVisitor, but AcceptMethods is variadic so it could be typedef AcceptMethods<A, B, C, D, AndSoOn> AllAcceptMethods;.\nThen we add another template class WithGenericAcceptMethod whose purpose is to implement the accept methods declared by AcceptMethods by calling a template method acceptT:\ntemplate <typename This, typename ... T> class WithGenericAcceptMethod {};\ntemplate <typename This> class WithGenericAcceptMethod<This, AcceptMethods<>> : public Visitable {};\ntemplate <typename This, typename First, typename ... Rest>\nclass WithGenericAcceptMethod<This, AcceptMethods<First, Rest...>> : public WithGenericAcceptMethod<This, AcceptMethods<Rest...>> {\npublic:\n double accept(First* visitor) override {\n return ((This*)this)->template acceptT<First>(visitor);\n }\n virtual ~WithGenericAcceptMethod() {}\n};\n\nThis class takes as first argument a This parameter in the spirit of CRTP. Then we can now let the specific visitable classes inherit from WithGenericAcceptMethod and implement the template acceptT method:\nstruct DoubleVisitable : public WithGenericAcceptMethod<DoubleVisitable, AllAcceptMethods>\n{\n template<typename Visitor> \n double acceptT(Visitor* visitor) \n {\n return visitor->visit(*this);\n };\n\n double m_val = 1.0;\n};\n\nstruct StringVisitable : public WithGenericAcceptMethod<StringVisitable, AllAcceptMethods>\n{\n template<typename Visitor> \n double acceptT(Visitor* visitor) \n {\n return visitor->visit(*this);\n };\n double m_val = 0.0;\n};\n\n" ]
[ 2, 1 ]
[]
[]
[ "c++", "templates", "virtual", "visitor_pattern" ]
stackoverflow_0074668665_c++_templates_virtual_visitor_pattern.txt
Q: How to override Stripe's event parameter in CLI I've been trying for hours now. I'm trying to trigger an event locally; first I tried: stripe trigger checkout.session.async_payment_succeeded and then I get this error: { "error": { "message": "The payment method type provided: bacs_debit is invalid. Please ensure the provided type is activated in your dashboard (https://dashboard.stripe.com/account/payments/settings) and your account is enabled for any preview features that you are trying to use. See https://stripe.com/docs/payments/payment-methods/integration-options for supported payment method, currency, and country combinations.", "param": "payment_method_types", "type": "invalid_request_error" } } Then I tried stripe trigger checkout.session.async_payment_succeeded --override "checkout_session:payment_method_types[0]=card" and then I got: { "error": { "message": "Invalid request (check your POST parameters): unable to determine value for parameter: payment_method_types. For assistance, contact support at https://support.stripe.com/contact/.", "type": "invalid_request_error" } } I have no idea what to do next. A: Right now, Stripe CLI can only trigger a subset of Event types. You can see an exhaustive list of supported Events via either: running stripe trigger --help in the command line, or the CLI documentation It looks like you can't trigger checkout.session.async_payment_succeeded, only checkout.session.completed is supported. A: Apparently bacs_debit is the default for testing async_payment_succeeded. It is only available in the UK, so if you are in a different location, you are out of luck. Even after a long search I haven't found a way to override the array, and if you use it as --override "checkout_session:payment_method_types[]=card", that only appends it as a second value, so bacs_debit will stay first and still trigger the error. In the end I removed the value first, then added a new one. I also had to override the currency to EUR, as I wanted to test SEPA_DEBIT and that needs EUR, not the default GBP: stripe trigger checkout.session.async_payment_succeeded --override "price:currency=eur" --remove "checkout_session:payment_method_types" --override "checkout_session:payment_method_types[]"=sepa_debit This one gets through; unfortunately the card used for payment is still VISA and the event triggered is checkout.session.completed, not the async_payment_succeeded one. Not sure what else to add, but maybe it helps someone to get further.
How to override Stripe's event parameter in CLI
I've been trying for hours now. I'm trying to trigger an event locally; first I tried: stripe trigger checkout.session.async_payment_succeeded and then I get this error: { "error": { "message": "The payment method type provided: bacs_debit is invalid. Please ensure the provided type is activated in your dashboard (https://dashboard.stripe.com/account/payments/settings) and your account is enabled for any preview features that you are trying to use. See https://stripe.com/docs/payments/payment-methods/integration-options for supported payment method, currency, and country combinations.", "param": "payment_method_types", "type": "invalid_request_error" } } Then I tried stripe trigger checkout.session.async_payment_succeeded --override "checkout_session:payment_method_types[0]=card" and then I got: { "error": { "message": "Invalid request (check your POST parameters): unable to determine value for parameter: payment_method_types. For assistance, contact support at https://support.stripe.com/contact/.", "type": "invalid_request_error" } } I have no idea what to do next.
[ "Right now, Stripe CLI can only trigger a subset of Event types. You can see an exhaustive list of supported Events via either:\n\nrunning stripe trigger --help in the command line, or\nthe CLI documentation\n\nIt looks like you can't trigger checkout.session.async_payment_succeeded, only checkout.session.completed is supported.\n", "Apparently bacs_debit is the default for testing async_payment_succeeded. It is only available in UK, so if you are in different location, you are out of luck.\nEven after a long search I haven't found a way to override the array, and if you use it as --override \"checkout_session:payment_method_types[]=card\", that only appends it as second value, so bacs_debit will stay first and still trigger the error.\nIn the end I have removed the value first, then added a new one. I also had to override currency to EUR, as I wanted to test SEPA_DEBIT and that needs EUR, not default GBP:\nstripe trigger checkout.session.async_payment_succeeded --override \"price:currency=eur\" --remove \"checkout_session:payment_method_types\" --override \"checkout_session:payment_method_types[]\"=sepa_debit\n\nThis one gets through, unfortunately the card used for payment is still VISA and the event triggered is checkout.session.completed , not the completed_async. Not sure what else to add, but maybe it helps someone to get further.\n" ]
[ 0, 0 ]
[]
[]
[ "stripe_payments" ]
stackoverflow_0071888335_stripe_payments.txt
Q: Generate SHAP dependence plots Is there a package that allows for estimation of shap values for multiple observations for models that are not XGBoost or decision tree based? I created a neural network using Caret and NNET. I want to develop a beeswarm plot and shap dependence plots to explore the relationship between certain variables in my model and the outcome. The only success I have had is using the DALEX package to estimate SHAP values, but DALEX only does this for single instances and cannot do a global analysis using SHAP values. Any insight or help would be appreciated! I have tried using different shap packages (fastshap, shapr) but these require decision tree based models. I tried creating an XGBoost model in caret but this did not implement well with the shap packages in r and I could not get the outcome I wanted. A: SHAP (SHapley Additive exPlanations) values can be used to explain the output of a machine learning model by analyzing the contribution of each feature to the model's prediction. There are several packages in R that can be used to compute SHAP values, including shapr, fastshap, and DALEX. If you have trained a neural network using the caret package and nnet, you can use the iml package to compute SHAP values for your model. The iml package supports a wide range of machine learning models, including neural networks. To use the iml package, you will first need to install and load it in your R environment: install.packages("iml") library(iml) Next, wrap your trained model in a Predictor object and compute Shapley values with the Shapley class. Predictor$new() takes the model to explain, the feature data, and the outcome variable. For example, if your trained model is stored in an object called nn_model and your data is stored in a data frame called data, you can compute SHAP values for one observation as follows: predictor <- Predictor$new(nn_model, data = data, y = label) shap <- Shapley$new(predictor, x.interest = data[1, ]) Once you have computed SHAP values, plot(shap) draws the per-feature contributions for that observation, which can help you identify which features are most important in determining the outcome. Note that iml computes Shapley values one observation at a time, so for a beeswarm plot or dependence plots over the whole data set you need SHAP values for many rows; either loop Shapley over the rows or use a package that returns a full SHAP matrix (see the fastshap/shapviz sketch below).
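Addendum: a hedged sketch for the global plots. fastshap's Monte-Carlo explainer accepts any model through pred_wrapper (so it is not limited to tree ensembles), and shapviz draws beeswarm and dependence plots from the resulting matrix. The feature name "x1", the outcome column name, and the exact predict call are assumptions about your caret fit:

    library(fastshap)
    library(shapviz)

    # caret models predict with predict(fit, newdata); wrap that for fastshap.
    # For classification, type = "prob" on one class may be needed instead.
    pfun <- function(object, newdata) as.numeric(predict(object, newdata))

    X <- data[, setdiff(names(data), "label")]   # feature columns only (assumed)
    shap <- fastshap::explain(nn_model, X = X, pred_wrapper = pfun, nsim = 50)

    sv <- shapviz(shap, X = X)
    sv_importance(sv, kind = "beeswarm")         # beeswarm summary plot
    sv_dependence(sv, v = "x1")                  # dependence plot for one feature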
Generate SHAP dependence plots
Is there a package that allows for estimation of shap values for multiple observations for models that are not XGBoost or decision tree based? I created a neural network using Caret and NNET. I want to develop a beeswarm plot and shap dependence plots to explore the relationship between certain variables in my model and the outcome. The only success I have had is using the DALEX package to estimate SHAP values, but DALEX only does this for single instances and cannot do a global analysis using SHAP values. Any insight or help would be appreciated! I have tried using different shap packages (fastshap, shapr) but these require decision tree based models. I tried creating an XGBoost model in caret but this did not implement well with the shap packages in r and I could not get the outcome I wanted.
[ "SHAP (SHapley Additive exPlanations) values can be used to explain the output of a machine learning model by analyzing the contribution of each feature to the model's prediction. There are several packages in R that can be used to compute SHAP values, including shapr, fastshap, and DALEX.\nIf you have trained a neural network using the caret package and nnet, you can use the iml package to compute SHAP values for your model. The iml package supports a wide range of machine learning models, including neural networks, and can compute SHAP values for multiple observations.\nTo use the iml package, you will first need to install and load it in your R environment. You can do this by running the following commands:\ninstall.packages(\"iml\")\nlibrary(iml)\n\nNext, you will need to load your trained neural network model using the caret package. Once your model is loaded, you can use the explain() function from the iml package to compute SHAP values for your model. The explain() function takes the following arguments:\n\nmodel: the model to explain\ndata: the data to explain\nlabel: the label or outcome variable to explain\n\nFor example, if your trained model is stored in an object called nn_model and your data is stored in a data frame called data, you can compute SHAP values for your model as follows:\nexplained <- explain(nn_model, data, label)\n\nOnce you have computed SHAP values for your model, you can use the plot() function from the iml package to create a SHAP dependence plot. This plot shows the relationship between each feature and the model's prediction, and can help you identify which features are most important in determining the outcome.\nFor example, if you want to create a SHAP dependence plot for your neural network model, you can use the following code:\nplot(explained)\n\nAlternatively, you can use the plot_shap_summary() function from the shap package to create a beeswarm plot, which shows the distribution of SHAP values for each feature. This plot can also help you understand the relationship between each feature and the model's prediction.\nTo create a beeswarm plot using the plot_shap_summary() function, you can use the following code:\nshap_values <- shap_values(explained)\nplot_shap_summary(shap_values)\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "r", "shap" ]
stackoverflow_0074670871_r_shap.txt
Q: UNITY 2D - How to add RectTransform.height without affecting its Y position What's happening: Every time I add value to the height, the 'top-most' area moves, messing up the y position of the gameObject. The GOAL is: How to achieve this kind of approach/behavior via script so that the 'top-most' area stays put? Thank you in advance. Best regards. A: One option is to set the Y-value of the Pivot on your Rect Transform to 1.0 (top). Then when you increase the Height, it will grow relative to the pivot. A: The anchoredPosition is in the center by default. You'll need to set it to the upper-left corner. There is sample code shown here
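Addendum: a hedged C# sketch of the top-pivot approach from the first answer, for a component attached to the UI object (class and method names are illustrative; setting the pivot in the Inspector beforehand avoids the one-time position jump that changing it at runtime causes):

    using UnityEngine;

    public class GrowDownward : MonoBehaviour
    {
        public void AddHeight(float amount)
        {
            RectTransform rt = (RectTransform)transform;
            rt.pivot = new Vector2(rt.pivot.x, 1f);               // pivot on the top edge
            rt.sizeDelta = new Vector2(rt.sizeDelta.x,
                                       rt.sizeDelta.y + amount);  // top stays, bottom extends
        }
    }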
UNITY 2D - How to add RectTransform.height without affecting its Y position
What's happening: Every time I add value to the height, the 'top-most' area moves, messing up the y position of the gameObject. The GOAL is: How to achieve this kind of approach/behavior via script so that the 'top-most' area stays put? Thank you in advance. Best regards.
[ "One option is to set the Y-value of the Pivot on your Rect Transform to 1.0 (top).\nThen when you increase the Height, it will grow relative to the pivot.\n\n", "The default anchoredPosition is in the center by default. You'll need to set it to the upper left corner. There is a sample code shown here\n" ]
[ 1, 0 ]
[]
[]
[ "c#", "unity3d", "unityscript" ]
stackoverflow_0074669579_c#_unity3d_unityscript.txt
Q: Substring before first uppercase word String contains words separated by spaces. How to get substring from start until first uppercase word (uppercase word excluded)? For example select substringtiluppercase('aaa b cc Dfff dfgdf') should return aaa b cc Can regexp substring be used, or is there another idea? Using PostgreSQL 13.2 Uppercase letters are Latin letters A .. Z and additionally Õ, Ä, Ö, Ü, Š, Ž A: Substring supports regular expressions in Postgres: SELECT substring('aaa b cc Dfff dfgdf' from '^[^A-ZÕÄÖÜŠŽ]*') Result: aaa b cc (fiddle) SELECT reverse(substr(reverse(substring('aaa b ccD Dfff dfgdf' from '.*\s[A-ZÕÄÖÜŠŽ]')),2)) Result: aaa b ccD (fiddle) A: Replace everything from a leading word boundary then an uppercase letter onwards with blank: regexp_replace('aaa b cc Dfff dfgdf', '\m[A-ZÕÄÖÜŠŽ].*', '') In the Postgres flavour of regex, \m is a "word boundary at the beginning of a word". FYI, the other Postgres word boundaries are \M at the end of a word, \y either end (same as the usual \b) and \Y not a word boundary (same as the usual \B).
Substring before first uppercase word
String contains words separated by spaces. How to get substring from start until first uppercase word (uppercase word excluded)? For example select substringtiluppercase('aaa b cc Dfff dfgdf') should return aaa b cc Can regexp substring be used, or is there another idea? Using PostgreSQL 13.2 Uppercase letters are Latin letters A .. Z and additionally Õ, Ä, Ö, Ü, Š, Ž
[ "Sunstring supprts Regular expüression in Postgres\nSELECT substring('aaa b cc Dfff dfgdf' from '^[^A-ZÕÄÖÜŠŽ]*')\n\n\n\n\n\nsubstring\n\n\n\n\naaa b cc\n\n\n\n\n\nSELECT 1\n\n\nfiddle\nSELECT \nreverse(substr(reverse(substring('aaa b ccD Dfff dfgdf' from '.*\\s[A-ZÕÄÖÜŠŽ]')),2))\n\n\n\n\n\nreverse\n\n\n\n\naaa b ccD\n\n\n\n\n\nSELECT 1\n\n\nfiddle\n", "Replace everything from a leading word boundary then an uppercase letter onwards with blank:\nregexp_replace('aaa b cc Dfff dfgdf', '\\m[A-ZÕÄÖÜŠŽ].*', '')\n\nIn Postgres flavour of regex, \\m \"word boundary at the beginning of a word\"\nfyi the other Postgres word boundaries are \\M at end of a word, \\y either end (same as usual \\b) and \\Y not a word boundary (same as usual \\B).\n" ]
[ 1, 1 ]
[]
[]
[ "postgresql", "regexp_like", "sql", "substring" ]
stackoverflow_0074670361_postgresql_regexp_like_sql_substring.txt
Q: python datetime.time extract from DB I've got data extracted by pandas d=pd.read_sql(query, conn) from a DB which looks like this: day start stop 2022-01-01 06:45:27 14:34:24 when I want to import it to an array start=np.asarray(d['start']) it looks like this: array([datetime.time(6, 45, 27)]) I want it to look like array([06:45:27]) is there a simple way to parse this? because for days I did something like: day=np.asarray(d['Day'], dtype='datetime64[D]') so it changed from array([datetime.date(2022, 1, 1)]) to: array(['2022-01-01']) A: You can use the strftime method to convert the time objects to strings with a specific format. For example, to convert the time object to a string in the format "HH:MM:SS", you can do the following: import datetime import numpy as np # Create a sample array of datetime.time objects time_array = np.array([datetime.time(6, 45, 27)]) # Use the strftime method to convert the time objects to strings formatted_time_array = np.array([time.strftime("%H:%M:%S") for time in time_array]) # Print the formatted time array print(formatted_time_array) # Output: ["06:45:27"]
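Addendum: a hedged one-liner straight from the DataFrame, since str() of a datetime.time already yields HH:MM:SS (the column name is taken from the question; d is the frame returned by read_sql):

    import numpy as np
    import pandas as pd

    start = d['start'].astype(str).to_numpy()
    # array(['06:45:27'], dtype=object)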
python datetime.time extract from DB
I've got data extracted by pandas d=pd.read_sql(query, conn) from a DB which looks like this: day start stop 2022-01-01 06:45:27 14:34:24 when I want to import it to an array start=np.asarray(d['start']) it looks like this: array([datetime.time(6, 45, 27)]) I want it to look like array([06:45:27]) is there a simple way to parse this? because for days I did something like: day=np.asarray(d['Day'], dtype='datetime64[D]') so it changed from array([datetime.date(2022, 1, 1)]) to: array(['2022-01-01'])
[ "You can use the strftime method to convert the time objects to strings with a specific format. For example, to convert the time object to a string in the format \"HH:MM:SS\", you can do the following:\nimport numpy as np\n# Create a sample array of datetime.time objects\ntime_array = np.array([datetime.time(6, 45, 27)])\n\n# Use the strftime method to convert the time objects to strings\nformatted_time_array = np.array([time.strftime(\"%H:%M:%S\") for time in \ntime_array])\n\n# Print the formatted time array\nprint(formatted_time_array) # Output: [\"06:45:27\"]\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074670821_numpy_pandas_python.txt
Q: Scanf fails with bus error I'm playing with C and I've run into this error: #include <stdio.h> int main () { char* foo; scanf("%s", foo); printf("entered %s", foo); return 0; } scanf takes pointer, foo is pointer, yet I get bus error. How can I make it work? A: You never initialize foo, so it points to a more or less random location in memory. Either allocate it on the stack. char foo[10]; Or malloc it on the heap: char *foo = (char *)malloc(10 * sizeof(char)); But if you malloc, don't forget to free(). And watch out for buffer overflows; if something takes in a buffer but no maximum size, be very careful. You can specify a maximum length for scanf by doing %9s, for instance. scanf will not account for the terminating null, though, so you need to pass one less than the length of your buffer. A: As to what Bus error: 10 means: SIGBUS (10) / Bus error 10 means a signal sent to an application if an attempts is made to access memory outside of its address space. This may be due to bad pointer that has an invalid address in it.
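Addendum: a hedged, complete corrected program combining both answers (the buffer size of 10 is illustrative):

    #include <stdio.h>

    int main(void)
    {
        char foo[10];                  /* real storage; the bare pointer had none */
        if (scanf("%9s", foo) == 1)    /* %9s leaves room for the terminating '\0' */
            printf("entered %s\n", foo);
        return 0;
    }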
Scanf fails with bus error
I'm playing with C and I've run into this error: #include <stdio.h> int main () { char* foo; scanf("%s", foo); printf("entered %s", foo); return 0; } scanf takes pointer, foo is pointer, yet I get bus error. How can I make it work?
[ "You never initialize foo, so it points to a more or less random location in memory. Either allocate it on the stack.\nchar foo[10];\n\nOr malloc it on the heap:\nchar *foo = (char *)malloc(10 * sizeof(char));\n\nBut if you malloc, don't forget to free().\nAnd watch out for buffer overflows; if something takes in a buffer but no maximum size, be very careful. You can specify a maximum length for scanf by doing %9s, for instance. scanf will not account for the terminating null, though, so you need to pass one less than the length of your buffer.\n", "As to what Bus error: 10 means:\n\nSIGBUS (10) / Bus error 10 means a signal sent to an application if an\nattempts is made to access memory outside of its address space. This\nmay be due to bad pointer that has an invalid address in it.\n\n" ]
[ 10, 0 ]
[]
[]
[ "bus_error", "c", "pointers", "scanf" ]
stackoverflow_0002985214_bus_error_c_pointers_scanf.txt
Q: Room Dao Left Outer Join sets parentId to 0 when join does not find children I have a standard outer join in Room Dao which strangely ends up setting parent joinId (bakeId) to 0 whenever join didn't find any child rows. Is this a room bug or a normal behavior? Any idea how to get proper joinId (bakeId), without changing the schema? @Query( "SELECT * FROM ${Bake.tableName} " + "left outer JOIN ${Ingredient.tableName} ON ${Bake.tableName}.${Bake.Columns.bakeId} " + "= ${Ingredient.tableName}.${Ingredient.Columns.bakeId}" + " ORDER BY ${Bake.Columns.bakeId} DESC , ${Bake.Columns.startTime} DESC" ) fun getBakesFlow(): Flow<Map<Bake, List<Ingredient>>> The problem I think is that both parent and child tables have the join column named exactly the same A: Looks like this is a bug with version "2.4.3", I updated room to "2.5.0-beta02" and it's no longer happening.
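Addendum: a hedged sketch of the corresponding Gradle bump from the answer (the artifact list depends on whether the project uses KTX and kapt vs. KSP):

    // build.gradle (module)
    dependencies {
        implementation "androidx.room:room-runtime:2.5.0-beta02"
        implementation "androidx.room:room-ktx:2.5.0-beta02"
        kapt "androidx.room:room-compiler:2.5.0-beta02" // or ksp(...) if using KSP
    }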
Room Dao Left Outer Join sets parentId to 0 when join does not find children
I have a standard outer join in Room Dao which strangely ends up setting parent joinId (bakeId) to 0 whenever join didn't find any child rows. Is this a room bug or a normal behavior? Any idea how to get proper joinId (bakeId), without changing the schema? @Query( "SELECT * FROM ${Bake.tableName} " + "left outer JOIN ${Ingredient.tableName} ON ${Bake.tableName}.${Bake.Columns.bakeId} " + "= ${Ingredient.tableName}.${Ingredient.Columns.bakeId}" + " ORDER BY ${Bake.Columns.bakeId} DESC , ${Bake.Columns.startTime} DESC" ) fun getBakesFlow(): Flow<Map<Bake, List<Ingredient>>> The problem I think is that both parent and child tables have the join column named exactly the same
[ "Looks like this is a bug with version \"2.4.3\", I updated room to \"2.5.0-beta02\" and it's no longer happening.\n" ]
[ 0 ]
[]
[]
[ "android_room", "android_sqlite", "sqlite" ]
stackoverflow_0074670694_android_room_android_sqlite_sqlite.txt
Q: Spring Boot run blocking task in the background I have a non-web Spring Boot application that does some blocking calls in an infinite loop. This logic sits in a CommandLineRunner. Everything worked fine until I wanted to do integration tests. Unfortunately, if I add a @SpringBootTest, the application never starts fully so I can't do anything in the test. I tried to run this logic in a background thread, but my application started shutting down right after finishing the startup procedure. Is there a way to run a background task in a Spring application after the application is initialized? A: Use the @PostConstruct annotation to have a method run after the Spring application has been initialized. This method can then be used to start a background thread running those blocking calls. @Component public class MyComponent { @PostConstruct public void startBackgroundThread() { new Thread(() -> { // blocking calls here }).start(); } } Alternatively, it is possible to use the ApplicationRunner interface to implement a method that will run after the Spring application has initialized. This interface has a run method that is called after the application is initialized and can be used to similarly start a background thread. @Component public class MyComponent implements ApplicationRunner { @Override public void run(ApplicationArguments args) { new Thread(() -> { // blocking calls here }).start(); } }
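Addendum: a hedged variant that hands the loop to Spring's auto-configured TaskExecutor instead of a raw Thread (Spring Boot 2.1+ provides an applicationTaskExecutor bean by default; the class name and interruption handling are illustrative):

    import org.springframework.boot.ApplicationArguments;
    import org.springframework.boot.ApplicationRunner;
    import org.springframework.core.task.TaskExecutor;
    import org.springframework.stereotype.Component;

    @Component
    public class BlockingLoopRunner implements ApplicationRunner {

        private final TaskExecutor taskExecutor;

        public BlockingLoopRunner(TaskExecutor taskExecutor) {
            this.taskExecutor = taskExecutor;
        }

        @Override
        public void run(ApplicationArguments args) {
            taskExecutor.execute(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    // blocking calls here
                }
            });
        }
    }

Because run() returns immediately, a @SpringBootTest can finish starting the context while the loop keeps running in the background.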
Spring Boot run blocking task in the background
I have a non-web Spring Boot application that does some blocking calls in an infinite loop. This logic sits in a CommandLineRunner. Everything worked fine until I wanted to do integration tests. Unfortunately, if I add a @SpringBootTest, the application never starts fully so I can't do anything in the test. I tried to run this logic in a background thread, but my application started shutting down right after finishing the startup procedure. Is there a way to run a background task in a Spring application after the application is initialized?
[ "Use @PostConstruct annotation to have a method run after the Spring application has been initialized. This method can then be used to start a background thread running those blocking calls.\n@Component\npublic class MyComponent {\n\n @PostConstruct\n public void startBackgroundThread() {\n new Thread(() -> {\n // blocking calls here\n }).start();\n }\n}\n\nAlternatively, it is possible to use ApplicationRunner interface to implement a method that will run after the Spring application initialized. This interface has run method that is called after the application is initialized and can be used to similarly start a background thread.\n@Component\npublic class MyComponent implements ApplicationRunner {\n\n @Override\n public void run(ApplicationArguments args) {\n new Thread(() -> {\n // blocking calls here\n }).start();\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "java", "spring", "spring_boot" ]
stackoverflow_0074670035_java_spring_spring_boot.txt
Q: Deploying the nodejs express server to the vercel showing the module not found error in function logs When I deploy the nodejs application in vercel it shows this error in function logs, I have already deployed the nodejs application with the same config and file directory this is the first time I am seeing this serverless function error I want to deploy the nodejs server to vercel because soon Heroku will remove its free tier so please find the issue or if you guys have any recommendations to deploy the nodejs server please let me know [GET] / 15:11:45:53 Function Status: None Edge Status: 500 Duration: 94.00 ms Init Duration: N/A Memory Used: 19 MB ID: sfo1::7wz8g-1667209305700-0df0949052b6 User Agent: got (https://github.com/sindresorhus/got) 2022-10-31T09:41:45.636Z 0a0d2c9e-7a8b-46f1-89b7-bf1ea6108d53 ERROR Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/var/task/Controllers/Users.controller.js' imported from /var/task/routes/Users.routes.js at new NodeError (node:internal/errors:372:5) at finalizeResolution (node:internal/modules/esm/resolve:437:11) at moduleResolve (node:internal/modules/esm/resolve:1009:10) at defaultResolve (node:internal/modules/esm/resolve:1218:11) at ESMLoader.resolve (node:internal/modules/esm/loader:580:30) at ESMLoader.getModuleJob (node:internal/modules/esm/loader:294:18) at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:80:40) at link (node:internal/modules/esm/module_job:78:36) { code: 'ERR_MODULE_NOT_FOUND' } RequestId: 0a0d2c9e-7a8b-46f1-89b7-bf1ea6108d53 Error: Runtime exited with error: exit status 1 Runtime.ExitError // Users.routes.js import express from "express"; import { get_id, addUser, addFriends, getFriends, getPendingFriends, queryUser, startChat, getChats, getUser, getUserServers, changeUserName, createServers, joinServers } from "../Controllers/Users.controller.js"; const router = express.Router(); // get user id router.get("/getId", async (req, res, next) => { try { const { uid } = req.query; const id = await get_id(uid); res.status(200).send(id); } catch (error) { console.log(error) next(error); } }); // add user to the database router.post("/addUser", addUser); // search a user through username using get method and query router.get("/searchUser", queryUser); // start the chat router.post("/startChat", startChat); // get all chats router.get("/getChats", getChats); // get logged in user router.get("/getUserInfo", getUser); // send and accept the friend request router.post("/add-friends", addFriends); // get Accepted Friends router.get("/getAllFriends", getFriends); // get Pending Friends router.get("/getPendingFriends", getPendingFriends) // create a new server for the user router.post("/createServer", createServers); // join a new server for the user router.post("/joinServer", joinServers); // getting all servers that users have already joined router.get("/getallServers", getUserServers); // change username router.post("/changeUserName", changeUserName); export default router; // package.json { "name": "user-and-chat-service", "version": "1.0.0", "main": "index.js", "type": "module", "license": "MIT", "scripts": { "start": "node index.js", "test": "echo \"Error: no test specified\" && exit 1", "dev": "nodemon index.js", "watch": "babel-watch -L src/index.js" }, "devDependencies": { "babel-watch": "^7.7.0" }, "dependencies": { "@babel/core": "^7.19.1", "@babel/polyfill": "^7.12.1", "@babel/preset-env": "^7.19.1", "@vercel/node": "^2.5.22", "axios": "^0.27.2", "babel-plugin-module-resolver": "^4.1.0", "cors": "^2.8.5", 
"dotenv": "^16.0.2", "express": "^4.18.1", "http-errors": "^2.0.0", "joi": "^17.6.1", "mongoose": "^6.6.2", "morgan": "^1.10.0", "nanoid": "^4.0.0", "nodemon": "^2.0.20" } } // vercel.json { "builds": [ { "src": "./index.js", "use": "@vercel/node" } ], "routes": [ { "src": "/(.*)", "dest": "/index.js" } ] } A: replace this for this, vercel.json not working with absolute paths { "builds": [ { "src": "index.js", "use": "@vercel/node" } ], "routes": [ { "src": "/(.*)", "dest": "index.js" } ] }
Deploying the nodejs express server to the vercel showing the module not found error in function logs
When I deploy the nodejs application in vercel it shows this error in function logs, I have already deployed the nodejs application with the same config and file directory this is the first time I am seeing this serverless function error I want to deploy the nodejs server to vercel because soon Heroku will remove its free tier so please find the issue or if you guys have any recommendations to deploy the nodejs server please let me know [GET] / 15:11:45:53 Function Status: None Edge Status: 500 Duration: 94.00 ms Init Duration: N/A Memory Used: 19 MB ID: sfo1::7wz8g-1667209305700-0df0949052b6 User Agent: got (https://github.com/sindresorhus/got) 2022-10-31T09:41:45.636Z 0a0d2c9e-7a8b-46f1-89b7-bf1ea6108d53 ERROR Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/var/task/Controllers/Users.controller.js' imported from /var/task/routes/Users.routes.js at new NodeError (node:internal/errors:372:5) at finalizeResolution (node:internal/modules/esm/resolve:437:11) at moduleResolve (node:internal/modules/esm/resolve:1009:10) at defaultResolve (node:internal/modules/esm/resolve:1218:11) at ESMLoader.resolve (node:internal/modules/esm/loader:580:30) at ESMLoader.getModuleJob (node:internal/modules/esm/loader:294:18) at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:80:40) at link (node:internal/modules/esm/module_job:78:36) { code: 'ERR_MODULE_NOT_FOUND' } RequestId: 0a0d2c9e-7a8b-46f1-89b7-bf1ea6108d53 Error: Runtime exited with error: exit status 1 Runtime.ExitError // Users.routes.js import express from "express"; import { get_id, addUser, addFriends, getFriends, getPendingFriends, queryUser, startChat, getChats, getUser, getUserServers, changeUserName, createServers, joinServers } from "../Controllers/Users.controller.js"; const router = express.Router(); // get user id router.get("/getId", async (req, res, next) => { try { const { uid } = req.query; const id = await get_id(uid); res.status(200).send(id); } catch (error) { console.log(error) next(error); } }); // add user to the database router.post("/addUser", addUser); // search a user through username using get method and query router.get("/searchUser", queryUser); // start the chat router.post("/startChat", startChat); // get all chats router.get("/getChats", getChats); // get logged in user router.get("/getUserInfo", getUser); // send and accept the friend request router.post("/add-friends", addFriends); // get Accepted Friends router.get("/getAllFriends", getFriends); // get Pending Friends router.get("/getPendingFriends", getPendingFriends) // create a new server for the user router.post("/createServer", createServers); // join a new server for the user router.post("/joinServer", joinServers); // getting all servers that users have already joined router.get("/getallServers", getUserServers); // change username router.post("/changeUserName", changeUserName); export default router; // package.json { "name": "user-and-chat-service", "version": "1.0.0", "main": "index.js", "type": "module", "license": "MIT", "scripts": { "start": "node index.js", "test": "echo \"Error: no test specified\" && exit 1", "dev": "nodemon index.js", "watch": "babel-watch -L src/index.js" }, "devDependencies": { "babel-watch": "^7.7.0" }, "dependencies": { "@babel/core": "^7.19.1", "@babel/polyfill": "^7.12.1", "@babel/preset-env": "^7.19.1", "@vercel/node": "^2.5.22", "axios": "^0.27.2", "babel-plugin-module-resolver": "^4.1.0", "cors": "^2.8.5", "dotenv": "^16.0.2", "express": "^4.18.1", "http-errors": "^2.0.0", "joi": "^17.6.1", "mongoose": 
"^6.6.2", "morgan": "^1.10.0", "nanoid": "^4.0.0", "nodemon": "^2.0.20" } } // vercel.json { "builds": [ { "src": "./index.js", "use": "@vercel/node" } ], "routes": [ { "src": "/(.*)", "dest": "/index.js" } ] }
[ "replace this for this, vercel.json not working with absolute paths\n{\n\"builds\": [\n {\n \"src\": \"index.js\",\n \"use\": \"@vercel/node\"\n }\n],\n\"routes\": [\n {\n \"src\": \"/(.*)\",\n \"dest\": \"index.js\"\n }\n]\n\n}\n" ]
[ 0 ]
[]
[]
[ "node.js", "serverless", "vercel", "web_deployment" ]
stackoverflow_0074261500_node.js_serverless_vercel_web_deployment.txt
Q: how to specify log format for supervisor stdout log? I have a process configured in supervisor as below. The module itself has its own logger in code. Normally we do not care about the stdout_logfile. But today I found some exception info in stdout_logfile (not captured by the logger in code). I want to know when those exceptions happened. But the stdout_logfile did not have a timestamp for each line. It seems to have no format at all. So how can we configure a format for stdout_logfile in supervisor? [program:my_process] environment=ENV=test command=python my_process.py directory=/home/me/ autostart=true startretries=3 stopsignal=INT stopwaitsecs=10 redirect_stderr=true stdout_logfile=/home/me/logs/my_process.stdout A: in my case I solved this problem by adding this line to the program's configuration: stderr_logfile=/home/root/project/logfile_err.log
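Addendum: as far as its documented options go, supervisord has no format setting for stdout_logfile/stderr_logfile, so a hedged workaround is to make the program itself timestamp anything that would otherwise reach the log unformatted, e.g. by routing uncaught exceptions through its existing logger (the format string is an assumption):

    import logging
    import sys

    logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s")

    def log_uncaught(exc_type, exc_value, exc_tb):
        # Uncaught exceptions now land in the logger, with timestamps.
        logging.getLogger(__name__).error(
            "Uncaught exception", exc_info=(exc_type, exc_value, exc_tb))

    sys.excepthook = log_uncaught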
how to specify log format for supervisor stdout log?
I have a process configured in supervisor as below. The module itself has its own logger in code. Normally we do not care about the stdout_logfile. But today I found some exception info in stdout_logfile (not captured by the logger in code). I want to know when those exceptions happened. But the stdout_logfile did not have a timestamp for each line. It seems to have no format at all. So how can we configure a format for stdout_logfile in supervisor? [program:my_process] environment=ENV=test command=python my_process.py directory=/home/me/ autostart=true startretries=3 stopsignal=INT stopwaitsecs=10 redirect_stderr=true stdout_logfile=/home/me/logs/my_process.stdout
[ "in my case i solved this problem by using\nstderr_logfile=/home/root/project/logfile_err.log\n\nthis scripts\n" ]
[ 0 ]
[]
[]
[ "python", "supervisord" ]
stackoverflow_0070705507_python_supervisord.txt
Q: iOS SecTrustRef Always NULL I'm trying to connect an iOS app to a Windows C# sever using TLS over TCP/IP. The TLS connection is using untrusted certificates created from an untrusted CA root certificate using the makecert utility. To test these certificates I created a simple C# client and using those certificates it was able to connect and communicate with the server. I'm not skilled at iOS development, but I did manage to find some code that connects me to the server, as follows: -(bool)CreateAndConnect:(NSString *) remoteHost withPort:(NSInteger) serverPort { CFReadStreamRef readStream; CFWriteStreamRef writeStream; CFStreamCreatePairWithSocketToHost(NULL, (__bridge CFStringRef)(remoteHost), serverPort, &readStream, &writeStream); CFReadStreamSetProperty(readStream, kCFStreamPropertySocketSecurityLevel, kCFStreamSocketSecurityLevelNegotiatedSSL); NSInputStream *inputStream = (__bridge_transfer NSInputStream *)readStream; NSOutputStream *outputStream = (__bridge_transfer NSOutputStream *)writeStream; [inputStream setProperty:NSStreamSocketSecurityLevelNegotiatedSSL forKey:NSStreamSocketSecurityLevelKey]; // load certificate from servers exported p12 file NSArray *certificates = [[NSArray alloc] init]; [self loadClientCertificates:certificates]; NSDictionary *sslSettings = [NSDictionary dictionaryWithObjectsAndKeys: (id)kCFBooleanFalse, (id)kCFStreamSSLValidatesCertificateChain, certificates,(id)kCFStreamSSLCertificates, nil]; [inputStream setProperty:sslSettings forKey:(__bridge NSString *)kCFStreamPropertySSLSettings]; [inputStream setDelegate:self]; [outputStream setDelegate:self]; [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; CFReadStreamOpen(readStream); CFWriteStreamOpen(writeStream); return true; } The code also seems to do some form of TLS negotiation, as the C# server rejects the connection if the p12 certificates are not provided as part of the NSStream settings. So it appears like the first stage of the TLS negotiation is working. To validate the server certificate I have this function, which gets called by the NSStream delegate on the NSStreamEventHasSpaceAvailable event: // return YES if certificate verification is successful, otherwise NO -(BOOL) VerifyCertificate:(NSStream *)stream { NSData *trustedCertData = nil; BOOL result = NO; SecTrustRef trustRef = NULL; NSString *root_certificate_name = @"reference_cert"; NSString *root_certificate_extension = @"der"; /* Load reference cetificate */ NSBundle *bundle = [NSBundle bundleForClass:[self class]]; trustedCertData = [NSData dataWithContentsOfFile:[bundle pathForResource: root_certificate_name ofType: root_certificate_extension]]; /* get trust object */ /* !!!!! error is here as trustRef is NULL !!!! */ trustRef = (__bridge SecTrustRef)[stream propertyForKey:(__bridge id)kCFStreamPropertySSLPeerTrust]; /* loacate the reference certificate */ NSInteger numCerts = SecTrustGetCertificateCount(trustRef); for (NSInteger i = 0; i < numCerts; i++) { SecCertificateRef secCertRef = SecTrustGetCertificateAtIndex(trustRef, i); NSData *certData = CFBridgingRelease(SecCertificateCopyData(secCertRef)); if ([trustedCertData isEqualToData: certData]) { result = YES; break; } } return result; } Now the problem is, no matter what I try, the trustRef object is always null. 
From this Apple developer link: https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/NetworkingTopics/Articles/OverridingSSLChainValidationCorrectly.html There is this quote that suggests this should not be the case: By the time your stream delegate’s event handler gets called to indicate that there is space available on the socket, the operating system has already constructed a TLS channel, obtained a certificate chain from the other end of the connection, and created a trust object to evaluate it. Any hints on how to fix this? How can I get access to that trustRef object for the NSStream? Edit: Thanks for the reply 100phole. In trying to get this to work, I thought this might have something to do with the issue and in one of my many attempts I moved all of those socket related items into a class: Something like this: @interface Socket CFReadStreamRef readStream; CFWriteStreamRef writeStream; NSInputStream *inputStream; NSOutputStream *outputStream; @end But that came up with the same results :( I only reverted back to the version shown above because, based on my Google searching, that appears to be a fairly common code pattern. For example, even this code from the Apple Developer site uses a very similar style: https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/Streams/Articles/NetworkStreams.html#//apple_ref/doc/uid/20002277-BCIDFCDI As I mentioned earlier, I'm no expert in Objective-C (far from it), so I might be wrong, but from what I have seen, moving those items into a class and having them persist did not seem to make any difference. A: I recently created an Obj-C package to handle TLS taking into account the latest restrictions imposed by Apple. Getting the certificates right is a very important step. https://github.com/eamonwhiter73/IOSObjCWebSockets A: It looks like the issue is that the kCFStreamPropertySSLPeerTrust property is not set on the NSStream object until after the TLS handshake has completed.
This means that the value of the trustRef variable will be NULL until after the TLS handshake has completed. One way to fix this issue would be to move the call to VerifyCertificate to a different delegate method, such as stream:handleEvent:, which is called after the TLS handshake has completed. You can check the value of the NSStreamEvent parameter to determine if the handshake has completed and then call VerifyCertificate if necessary. Here is an example of how you could do this: -(void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode { switch(eventCode) { case NSStreamEventHasSpaceAvailable: // ... break; case NSStreamEventEndEncountered: // ... break; case NSStreamEventErrorOccurred: // ... break; case NSStreamEventHasBytesAvailable: // ... break; case NSStreamEventOpenCompleted: // TLS handshake has completed, so call VerifyCertificate if([self VerifyCertificate:stream]) { // Certificate verification was successful } else { // Certificate verification failed } break; } }
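Addendum: since the question's eventual resolution was a certificate loader that never returned its certificates, here is a hedged Objective-C sketch of a p12 loader that does return the identity array expected by kCFStreamSSLCertificates (the file name and passphrase are assumptions):

    // Returns @[identity] on success, nil on failure.
    - (NSArray *)loadClientCertificates
    {
        NSString *path = [[NSBundle mainBundle] pathForResource:@"client" ofType:@"p12"]; // name assumed
        NSData *p12Data = [NSData dataWithContentsOfFile:path];
        if (p12Data == nil) return nil;

        NSDictionary *options = @{ (__bridge id)kSecImportExportPassphrase : @"password" }; // assumed
        CFArrayRef items = NULL;
        if (SecPKCS12Import((__bridge CFDataRef)p12Data,
                            (__bridge CFDictionaryRef)options, &items) != errSecSuccess) {
            return nil;
        }

        NSDictionary *firstItem = ((__bridge_transfer NSArray *)items).firstObject;
        SecIdentityRef identity =
            (__bridge SecIdentityRef)firstItem[(__bridge id)kSecImportItemIdentity];
        return @[ (__bridge id)identity ]; // kCFStreamSSLCertificates expects the identity first
    }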
iOS SecTrustRef Always NULL
I'm trying to connect an iOS app to a Windows C# server using TLS over TCP/IP. The TLS connection is using untrusted certificates created from an untrusted CA root certificate using the makecert utility. To test these certificates I created a simple C# client, and using those certificates it was able to connect and communicate with the server.
I'm not skilled at iOS development, but I did manage to find some code that connects me to the server, as follows:
-(bool)CreateAndConnect:(NSString *) remoteHost withPort:(NSInteger) serverPort
{
    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;

    CFStreamCreatePairWithSocketToHost(NULL, (__bridge CFStringRef)(remoteHost), serverPort, &readStream, &writeStream);

    CFReadStreamSetProperty(readStream, kCFStreamPropertySocketSecurityLevel, kCFStreamSocketSecurityLevelNegotiatedSSL);

    NSInputStream *inputStream = (__bridge_transfer NSInputStream *)readStream;
    NSOutputStream *outputStream = (__bridge_transfer NSOutputStream *)writeStream;

    [inputStream setProperty:NSStreamSocketSecurityLevelNegotiatedSSL forKey:NSStreamSocketSecurityLevelKey];

    // load certificate from server's exported p12 file
    NSArray *certificates = [[NSArray alloc] init];
    [self loadClientCertificates:certificates];

    NSDictionary *sslSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        (id)kCFBooleanFalse, (id)kCFStreamSSLValidatesCertificateChain,
        certificates,(id)kCFStreamSSLCertificates,
        nil];

    [inputStream setProperty:sslSettings forKey:(__bridge NSString *)kCFStreamPropertySSLSettings];

    [inputStream setDelegate:self];
    [outputStream setDelegate:self];

    [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

    CFReadStreamOpen(readStream);
    CFWriteStreamOpen(writeStream);

    return true;
}

The code also seems to do some form of TLS negotiation, as the C# server rejects the connection if the p12 certificates are not provided as part of the NSStream settings. So it appears like the first stage of the TLS negotiation is working.
To validate the server certificate I have this function, which gets called by the NSStream delegate on the NSStreamEventHasSpaceAvailable event:
// return YES if certificate verification is successful, otherwise NO
-(BOOL) VerifyCertificate:(NSStream *)stream
{
    NSData *trustedCertData = nil;
    BOOL result = NO;
    SecTrustRef trustRef = NULL;
    NSString *root_certificate_name = @"reference_cert";
    NSString *root_certificate_extension = @"der";

    /* Load reference certificate */
    NSBundle *bundle = [NSBundle bundleForClass:[self class]];
    trustedCertData = [NSData dataWithContentsOfFile:[bundle pathForResource: root_certificate_name ofType: root_certificate_extension]];

    /* get trust object */
    /* !!!!! error is here as trustRef is NULL !!!! */
    trustRef = (__bridge SecTrustRef)[stream propertyForKey:(__bridge id)kCFStreamPropertySSLPeerTrust];

    /* locate the reference certificate */
    NSInteger numCerts = SecTrustGetCertificateCount(trustRef);
    for (NSInteger i = 0; i < numCerts; i++)
    {
        SecCertificateRef secCertRef = SecTrustGetCertificateAtIndex(trustRef, i);
        NSData *certData = CFBridgingRelease(SecCertificateCopyData(secCertRef));

        if ([trustedCertData isEqualToData: certData])
        {
            result = YES;
            break;
        }
    }
    return result;
}

Now the problem is, no matter what I try, the trustRef object is always null.
From this Apple developer link: https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/NetworkingTopics/Articles/OverridingSSLChainValidationCorrectly.html
There is this quote that suggests this should not be the case:

By the time your stream delegate’s event handler gets called to indicate that there is space available on the socket, the operating system has already constructed a TLS channel, obtained a certificate chain from the other end of the connection, and created a trust object to evaluate it.

Any hints on how to fix this? How can I get access to that trustRef object for the NSStream?
Edit:
Thanks for the reply 100phole.
In trying to get this to work, I thought this might have something to do with the issue, and in one of my many attempts I moved all of those socket related items into a class:
Something like this:
@interface Socket
    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;
    NSInputStream *inputStream;
    NSOutputStream *outputStream;
@end

But that came up with the same results :(
I only reverted back to the version shown above because, based on my Google searching, that appears to be a fairly common code pattern. For example, even this code from the Apple Developer site uses a very similar style: https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/Streams/Articles/NetworkStreams.html#//apple_ref/doc/uid/20002277-BCIDFCDI
As I mentioned earlier, I'm no expert in Objective-C (far from it), so I might be wrong, but from what I have seen, moving those items into a class and having them persist did not seem to make any difference.
[ "Since there seems to be some interest in this question I decided to update the question with a answer with details on how this problem was eventually solved.\nFirst some background. I inherited this code from a previous developer and my role was to get the broken code to work.\nI spent a lot of time writing and re-writing the connection code using the details from the Apple iOS developer web page, but nothing seemed to work.\nI finally decided to take a closer look at this function, code I had inherited and incorrectly assumed was working:\n[self loadClientCertificates:certificates];\n\nAt first glance the code looked OK. The function did nothing more than load certificates from file. But on closer inspection, while the code was loading the certificates correctly, it was not returning those certificates to the caller!!!\nAfter fixing that code so that it correctly returned the certificates the connection code worked fine and the SecTrustRef was no longer NULL.\nIn summary:\n1) The Apple documentation, while lacking good examples does appear to be accurate.\n2) The reason the SecTrustRef was NULL was because no valid certificate could be found for the connection negotiations phase and that was because no certificates where being made available to the connection API due to the earlier mentioned coding error.\n3) If you are seeing a similar error, my suggestion would be to check and double check your code, because as would be expected, the iOS side of the equation works as documented.\n", "I recently created an Obj-C package to handle TLS taking into account the latest restrictions imposed by Apple. Getting the certificates right is a very important step.\nhttps://github.com/eamonwhiter73/IOSObjCWebSockets\n", "It looks like the issue is that the kCFStreamPropertySSLPeerTrust property is not set on the NSStream object until after the TLS handshake has completed. This means that the value of the trustRef variable will be NULL until after the TLS handshake has completed.\nOne way to fix this issue would be to move the call to VerifyCertificate to a different delegate method, such as stream:handleEvent:, which is called after the TLS handshake has completed. You can check the value of the NSStreamEvent parameter to determine if the handshake has completed and then call VerifyCertificate if necessary.\nHere is an example of how you could do this:\n-(void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {\nswitch(eventCode) {\ncase NSStreamEventHasSpaceAvailable:\n// ...\nbreak;\ncase NSStreamEventEndEncountered:\n// ...\nbreak;\ncase NSStreamEventErrorOccurred:\n// ...\nbreak;\ncase NSStreamEventHasBytesAvailable:\n// ...\nbreak;\ncase NSStreamEventOpenCompleted:\n// TLS handshake has completed, so call VerifyCertificate\nif([self VerifyCertificate:stream]) {\n// Certificate verification was successful\n} else {\n// Certificate verification failed\n}\nbreak;\n}\n}\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "ios", "objective_c", "ssl", "tcp" ]
stackoverflow_0033163306_ios_objective_c_ssl_tcp.txt
Q: Twitter Bootstrap Tooltip: flickers when placed on top of button, but works fine when placed at the left/right/bottom I am dealing with some weird behavior for an instance of bootstrap's tooltip. The page I am working with has several buttons that, when hovered over, display tooltips with the description for the buttons' functionality. The tooltips are all displayed on top of the buttons, and, with the exception of one button, everything works fine. This one button displays the tooltip with a continuous flickering, the tooltip itself covers part of the button (instead of being completely on top of the button), and prevents the button from being properly clicked. If the "data-placement" for the tooltip is changed from "top" to "left"/"right"/"bottom", the tooltip is displayed correctly. Additionally, the button that gives me problems is wrapped in a div that has "float: right;" assigned in the css. I am mentioning this because I noticed that if I remove the float, the tooltip works fine. Unfortunately, if I remove the float, the button itself loses its correct positioning. While I could give up the "top" positioning for the tooltip, I was hoping that there might be an easy trick to this problem. Does anyone have any suggestions? Thank you. Update: This StackOverflow question presents the same problem as the one I was encountering. I found the answer useful. A: Add to the tooltip pointer-events: none; css rule, like .tooltip { pointer-events: none; } This will prevent tooltip from being target for mouse events and will resolve the issue. A: Read The Docs The docs on Bootstrap 4 explicitly address this issue and also provide a solution: Overflow auto and scroll Tooltip position attempts to automatically change when a parent container has overflow: auto or overflow: scroll like our .table-responsive, but still keeps the original placement’s positioning. To resolve, set the boundary option to anything other than default value, 'scrollParent', such as 'window': $('#example').tooltip({ boundary: 'window' }) So we have to set the boundary option to 'window'. From the docs on the boundary option: Overflow constraint boundary of the tooltip. Accepts the values of 'viewport', 'window', 'scrollParent', or an HTMLElement reference (JavaScript only). For more information refer to Popper.js's preventOverflow docs. Best Practice That being said, the preferred way of initializing tooltips everywhere is to use '[data-toggle="tooltip"]' as selector: $('[data-toggle="tooltip"]').tooltip({ boundary: 'window' }) Once And For All Now if you are dynamically adding or removing elements (with tooltips) to the DOM or are even replacing the whole page without reloading it (i.e. by using a pushState library like PJAX) - or if you are just using modals (with tooltips) - you have to initialize those tooltips (again). Fixing Tooltips On Elements That's where the following function comes in handy, which destroys and recreates all tooltips. Call this function after you've added / removed an element to / from the DOM that has / had a tooltip attached. function recreateTooltips() { $('[data-toggle="tooltip"]').tooltip('dispose').tooltip({boundary: 'window'}); } This might seem expensive (and certainly is), but I've never noticed any performance impact even with dozens of tooltips on the same page. Fixing Tooltips In Modals To fix tooltips in modals, you simply bind the function from above to the 'shown.bs.modal' event, which fires after a modal is fully shown to the user. 
/** Recreate tooltips upon showing modal */ $(document).on('shown.bs.modal', '*', function () { recreateTooltips(); }); A: I found a quick solution for my problem. Although relatively short, my initial tooltip description was getting split on two lines. By chance, I tried shortening the tooltip text to fit on a single line. Once this was done, the tooltip was properly displayed. Therefore, I am assuming there must be a problem with the length of the tooltip text and the fact that the button is displayed all the way to the right of the page (and at the top of the page). I will not investigate this further for the time being. A: I was having this same issue. I found that Bootstrap's tooltips do not position themselves correctly over "floated" or "inline/inline-block" elements when those elements are within a container with relative or absolute positioning. I have a button floated right, inside an absolutely positioned parent container. If I remove the absolute positioning of the parent, the tooltip displays perfectly fine. Bootstrap needs to address this. A: This happens on inline elements, such as a tags. Setting the display property to block resolves the issue. A: Late response, but I had this same issue and was able to resolve it by using the 'container' option in the tooltip jquery reference on the html element. See this page and the jsfiddle link therein. You can see the tooltip container option details here. A: Building on Steve's answer, I found the issue was that my css reference to the font awesome css was after the bootstrap css. The bootstrap css sets the display property for font awesome icons to be "inline-block". So to fix the issue I just moved the bootstrap css reference after the font awesome css and this fixed the flickering issue for me. A: I'm not sure how much control you have over Bootstrap bits, but it sounds like the hover event is constantly firing. The way to stop this would be to use .stop() before the call, but I'm not sure if that's possible here, for example: $('#myelement').stop().fadeOut(200); I don't know if you're able to do something like that with the Bootstrap call, but it may be worth giving it a try! A: I had this flickering problem because my buttons were hidden at first. Triggering tooltips on hidden elements will not work. I made my own tooltips that look just like the bootstrap tooltips. .tooltip1 { position: relative; display: inline-block; } /* Tooltip text */ .tooltip1 .tooltiptext { visibility: hidden; width: 220px; background-color: black; color: #fff; text-align: center; padding: 5px 0; border-radius: 6px; overflow: visible; top: -300%; left: -300%; /* Position the tooltip text */ position: absolute; z-index: 1; opacity: 0; transition: opacity 1s; } /* Show the tooltip text when you mouse over the tooltip container */ .tooltip1:hover .tooltiptext { visibility: visible; opacity: 1; overflow: visible; } <li class="list-inline-item"> <button routerLink='/members/{{member.username}}' class="btn btn-primary"> <div class ="tooltip1"><span class="tooltiptext">Click this button to view the details of this person.</span> <i class="fa fa-user"></i></div></button></li>
Twitter Bootstrap Tooltip: flickers when placed on top of button, but works fine when placed at the left/right/bottom
I am dealing with some weird behavior for an instance of bootstrap's tooltip. The page I am working with has several buttons that, when hovered over, display tooltips with the description for the buttons' functionality. The tooltips are all displayed on top of the buttons, and, with the exception of one button, everything works fine. This one button displays the tooltip with a continuous flickering, the tooltip itself covers part of the button (instead of being completely on top of the button), and prevents the button from being properly clicked. If the "data-placement" for the tooltip is changed from "top" to "left"/"right"/"bottom", the tooltip is displayed correctly. Additionally, the button that gives me problems is wrapped in a div that has "float: right;" assigned in the css. I am mentioning this because I noticed that if I remove the float, the tooltip works fine. Unfortunately, if I remove the float, the button itself loses its correct positioning. While I could give up the "top" positioning for the tooltip, I was hoping that there might be an easy trick to this problem. Does anyone have any suggestions? Thank you. Update: This StackOverflow question presents the same problem as the one I was encountering. I found the answer useful.
[ "Add to the tooltip pointer-events: none; css rule, like\n.tooltip {\n pointer-events: none;\n}\n\nThis will prevent tooltip from being target for mouse events and will resolve the issue.\n", "Read The Docs\nThe docs on Bootstrap 4 explicitly address this issue and also provide a solution:\n\nOverflow auto and scroll\nTooltip position attempts to automatically change when a parent container has overflow: auto or overflow: scroll like our .table-responsive, but still keeps the original placement’s positioning. To resolve, set the boundary option to anything other than default value, 'scrollParent', such as 'window':\n$('#example').tooltip({ boundary: 'window' })\n\nSo we have to set the boundary option to 'window'.\nFrom the docs on the boundary option:\n\nOverflow constraint boundary of the tooltip. Accepts the values of 'viewport', 'window', 'scrollParent', or an HTMLElement reference (JavaScript only). For more information refer to Popper.js's preventOverflow docs.\n\nBest Practice\nThat being said, the preferred way of initializing tooltips everywhere is to use '[data-toggle=\"tooltip\"]' as selector:\n$('[data-toggle=\"tooltip\"]').tooltip({ boundary: 'window' })\n\nOnce And For All\nNow if you are dynamically adding or removing elements (with tooltips) to the DOM or are even replacing the whole page without reloading it (i.e. by using a pushState library like PJAX) - or if you are just using modals (with tooltips) - you have to initialize those tooltips (again).\nFixing Tooltips On Elements\nThat's where the following function comes in handy, which destroys and recreates all tooltips. Call this function after you've added / removed an element to / from the DOM that has / had a tooltip attached.\nfunction recreateTooltips() {\n $('[data-toggle=\"tooltip\"]').tooltip('dispose').tooltip({boundary: 'window'});\n}\n\nThis might seem expensive (and certainly is), but I've never noticed any performance impact even with dozens of tooltips on the same page.\nFixing Tooltips In Modals\nTo fix tooltips in modals, you simply bind the function from above to the 'shown.bs.modal' event, which fires after a modal is fully shown to the user.\n/** Recreate tooltips upon showing modal */\n$(document).on('shown.bs.modal', '*', function () {\n recreateTooltips();\n});\n\n", "I found a quick solution for my problem. Although relatively short, my initial tooltip description was getting split on two lines. By chance, I tried shortening the tooltip text to fit on a single line. Once this was done, the tooltip was properly displayed. Therefore, I am assuming there must be a problem with the length of the tooltip text and the fact that the button is displayed all the way to the right of the page (and at the top of the page). I will not investigate this further for the time being.\n", "I was having this same issue. I found that Bootstrap's tooltips do not position themselves correctly over \"floated\" or \"inline/inline-block\" elements when those elements are within a container with relative or absolute positioning. I have a button floated right, inside an absolutely positioned parent container. If I remove the absolute positioning of the parent, the tooltip displays perfectly fine. Bootstrap needs to address this.\n", "This happens on inline elements, such as a tags. 
Setting the display property to block resolves the issue.\n", "Late response, but I had this same issue and was able to resolve it by using the 'container' option in the tooltip jquery reference on the html element.\nSee this page and the jsfiddle link therein.\nYou can see the tooltip container option details here.\n", "Building on Steve's answer, I found the issue was that my css reference to the font awesome css was after the bootstrap css. The bootstrap css sets the display property for font awesome icons to be \"inline-block\".\nSo to fix the issue I just moved the bootstrap css reference after the font awesome css and this fixed the flickering issue for me.\n", "I'm not sure how much control you have over Bootstrap bits, but it sounds like the hover event is constantly firing. The way to stop this would be to use .stop() before the call, but I'm not sure if that's possible here, for example:\n$('#myelement').stop().fadeOut(200);\n\nI don't know if you're able to do something like that with the Bootstrap call, but it may be worth giving it a try!\n", "I had this flickering problem because my buttons were hidden at first. Triggering tooltips on hidden elements will not work. I made my own tooltips that look just like the bootstrap tooltips.\n\n\n.tooltip1 {\n position: relative;\n display: inline-block;\n}\n\n/* Tooltip text */\n.tooltip1 .tooltiptext {\n visibility: hidden;\n width: 220px;\n background-color: black;\n color: #fff;\n text-align: center;\n padding: 5px 0;\n border-radius: 6px;\n overflow: visible;\n top: -300%;\n left: -300%;\n\n /* Position the tooltip text */\n position: absolute;\n z-index: 1;\n\n opacity: 0;\n transition: opacity 1s;\n}\n\n/* Show the tooltip text when you mouse over the tooltip container */\n.tooltip1:hover .tooltiptext {\n visibility: visible;\n opacity: 1;\n overflow: visible;\n}\n<li class=\"list-inline-item\"> <button routerLink='/members/{{member.username}}' class=\"btn btn-primary\">\n <div class =\"tooltip1\"><span class=\"tooltiptext\">Click this button to view the details of this person.</span> <i class=\"fa fa-user\"></i></div></button></li>\n\n\n\n" ]
[ 92, 23, 9, 5, 3, 3, 2, 0, 0 ]
[]
[]
[ "jquery", "tooltip", "twitter_bootstrap" ]
stackoverflow_0014326724_jquery_tooltip_twitter_bootstrap.txt
Q: Function returns undefined when using map()
const products = [
  { product: "banana", price: 3 },
  { product: "mango", price: 6 },
  { product: "potato", price: " " },
  { product: "avocado", price: 8 },
  { product: "coffee", price: 10 },
  { product: "tea", price: "" },
];

const productsByPrice = function (arr) {
  arr.map((item) => {
    let output = {};
    output[item.product] = item.price;
    return output;
  });
};
console.log(productsByPrice(products))

Hello, I am trying to use map() to map the products array to its corresponding prices, but the function returns undefined. I have tried using the debugger to step through the code and there are values stored in the output variable as it iterates through the array, but in the end it returns undefined. I am quite new to programming and I can't see why this happens. Thanks a lot.

A: I fixed your code, and it works fine in my browser. This is what I used:
const products = [
  { product: "banana", price: 3 },
  { product: "mango", price: 6 },
  { product: "potato", price: " " },
  { product: "avocado", price: 8 },
  { product: "coffee", price: 10 },
  { product: "tea", price: "" },
];

const productsByPrice = function (arr) {
  let output = {};
  arr.map((item) => {
    output[item.product] = item.price;
  });
  return output;
};
console.log(productsByPrice(products))

I simply moved the return outside of the map function, and moved the output object declaration before it. I hope this answers your question.

A: Maybe you should use reduce for what you want to achieve:
const products = [
  { product: "banana", price: 3 },
  { product: "mango", price: 6 },
  { product: "potato", price: " " },
  { product: "avocado", price: 8 },
  { product: "coffee", price: 10 },
  { product: "tea", price: "" },
];

const productsByPrice = function (arr) {
  return arr.reduce((acc, item) => {
    acc[item.product] = item.price;
    return acc;
  }, {});
};

console.log(productsByPrice(products))

A: It's a good application of Object.fromEntries() (doc), which returns an object when given an array of [key, value] pairs.
const products = [
  { product: "banana", price: 3 },
  { product: "mango", price: 6 },
  { product: "potato", price: " " },
  { product: "avocado", price: 8 },
  { product: "coffee", price: 10 },
  { product: "tea", price: "" },
];

const productsByPrice = arr => {
  return Object.fromEntries(
    arr.map(({product, price}) => [product, price])
  );
};
console.log(productsByPrice(products))
Function returns undefined when using map()
const products = [
  { product: "banana", price: 3 },
  { product: "mango", price: 6 },
  { product: "potato", price: " " },
  { product: "avocado", price: 8 },
  { product: "coffee", price: 10 },
  { product: "tea", price: "" },
];

const productsByPrice = function (arr) {
  arr.map((item) => {
    let output = {};
    output[item.product] = item.price;
    return output;
  });
};
console.log(productsByPrice(products))

Hello, I am trying to use map() to map the products array to its corresponding prices, but the function returns undefined. I have tried using the debugger to step through the code and there are values stored in the output variable as it iterates through the array, but in the end it returns undefined. I am quite new to programming and I can't see why this happens. Thanks a lot.
[ "I fixed your code, and it works fine in my browser. This is what I used:\nconst products = [\n { product: \"banana\", price: 3 },\n { product: \"mango\", price: 6 },\n { product: \"potato\", price: \" \" },\n { product: \"avocado\", price: 8 },\n { product: \"coffee\", price: 10 },\n { product: \"tea\", price: \"\" },\n];\n\nconst productsByPrice = function (arr) {\n let output = {};\n arr.map((item) => {\n output[item.product] = item.price;\n });\n return output;\n};\nconsole.log(productsByPrice(products))\n\nI simply just moved the return outside of the map function, and move the output array before it. I hope this answers your question.\n", "maybe you should use reduce for what you want to achieve\nconst products = [\n { product: \"banana\", price: 3 },\n { product: \"mango\", price: 6 },\n { product: \"potato\", price: \" \" },\n { product: \"avocado\", price: 8 },\n { product: \"coffee\", price: 10 },\n { product: \"tea\", price: \"\" },\n];\n\nconst productsByPrice = function (arr) {\n return arr.reduce((acc, item) => {\n acc[item.product] = item.price;\n return acc;\n }, {});\n};\n\nconsole.log(productsByPrice(products))\n\n", "It's a good application of Object.fromEntries() (doc) which returns an object when given an array of [key, value] pairs.\n\n\nconst products = [\n { product: \"banana\", price: 3 },\n { product: \"mango\", price: 6 },\n { product: \"potato\", price: \" \" },\n { product: \"avocado\", price: 8 },\n { product: \"coffee\", price: 10 },\n { product: \"tea\", price: \"\" },\n];\n\nconst productsByPrice = arr => {\n return Object.fromEntries(\n arr.map(({product, price}) => [product, price])\n );\n};\nconsole.log(productsByPrice(products))\n\n\n\n" ]
[ 2, 0, 0 ]
[ "\n\nconst products = [\n { product: \"banana\", price: 3 },\n { product: \"mango\", price: 6 },\n { product: \"potato\", price: \" \" },\n { product: \"avocado\", price: 8 },\n { product: \"coffee\", price: 10 },\n { product: \"tea\", price: \"\" },\n ];\n\nconst productsByPrice = products.map(function (arr) {\n return `${arr.product}:${arr.price}`\n})\n\nconsole.log(productsByPrice);\n\n\n\n//try this\n" ]
[ -2 ]
[ "arrays", "javascript", "undefined" ]
stackoverflow_0074670350_arrays_javascript_undefined.txt
Q: Unique together validation @Entity @NoArgsConstructor @AllArgsConstructor @Builder @Table(name = "overoptimisation_identifiers") public class OveroptimisationIdentifierEntity { @Id @GeneratedValue(generator = "uuid2") @GenericGenerator(name = "uuid2", strategy = "org.hibernate.id.UUIDGenerator") @Column(name = "id", columnDefinition = "VARCHAR(255)") private UUID id; @CreationTimestamp private LocalDate date; } I'd like to have only one identifier per day. This means that I have to organise a constraint: id and date are unique together. By the way, maybe it is not done by JPA. It may be done by the framework, a third party library or just by me in the code. But I don't know how to do that. A: It sounds like you want to create a unique constraint on the id and date columns of the overoptimisation_identifiers table. In JPA, you can do this by using the @Table annotation and specifying a uniqueConstraints attribute. Here is an example of how you could modify your OveroptimisationIdentifierEntity class to add a unique constraint on the id and date columns: @Entity @NoArgsConstructor @AllArgsConstructor @Builder @Table(name = "overoptimisation_identifiers", uniqueConstraints = { @UniqueConstraint(columnNames = {"id", "date"}) }) public class OveroptimisationIdentifierEntity { // ... @Id @GeneratedValue(generator = "uuid2") @GenericGenerator(name = "uuid2", strategy = "org.hibernate.id.UUIDGenerator") @Column(name = "id", columnDefinition = "VARCHAR(255)") private UUID id; @CreationTimestamp private LocalDate date; } This will tell JPA to create a unique constraint on the id and date columns when generating the table schema. This will prevent multiple rows with the same values for the id and date columns from being inserted into the table.
Unique together validation
@Entity @NoArgsConstructor @AllArgsConstructor @Builder @Table(name = "overoptimisation_identifiers") public class OveroptimisationIdentifierEntity { @Id @GeneratedValue(generator = "uuid2") @GenericGenerator(name = "uuid2", strategy = "org.hibernate.id.UUIDGenerator") @Column(name = "id", columnDefinition = "VARCHAR(255)") private UUID id; @CreationTimestamp private LocalDate date; } I'd like to have only one identifier per day. This means that I have to organise a constraint: id and date are unique together. By the way, maybe it is not done by JPA. It may be done by the framework, a third party library or just by me in the code. But I don't know how to do that.
[ "It sounds like you want to create a unique constraint on the id and date columns of the overoptimisation_identifiers table. In JPA, you can do this by using the @Table annotation and specifying a uniqueConstraints attribute.\nHere is an example of how you could modify your OveroptimisationIdentifierEntity class to add a unique constraint on the id and date columns:\n@Entity\n@NoArgsConstructor\n@AllArgsConstructor\n@Builder\n@Table(name = \"overoptimisation_identifiers\", uniqueConstraints = {\n @UniqueConstraint(columnNames = {\"id\", \"date\"})\n})\npublic class OveroptimisationIdentifierEntity {\n\n // ...\n\n @Id\n @GeneratedValue(generator = \"uuid2\")\n @GenericGenerator(name = \"uuid2\", strategy = \"org.hibernate.id.UUIDGenerator\")\n @Column(name = \"id\", columnDefinition = \"VARCHAR(255)\")\n private UUID id;\n\n @CreationTimestamp\n private LocalDate date;\n}\n\n\nThis will tell JPA to create a unique constraint on the id and date columns when generating the table schema. This will prevent multiple rows with the same values for the id and date columns from being inserted into the table.\n" ]
[ 0 ]
[]
[]
[ "spring", "spring_boot", "spring_data_jpa" ]
stackoverflow_0074666227_spring_spring_boot_spring_data_jpa.txt
Q: I want to receive information from the user from discord, but I don't know what to do. ( discord.py ) I want to receive information from the user from Discord, but I don't know what to do. I want to make a class to input data: if the user writes !make [name] [data], the bot generates an instance of class A, A(name, data). The following is the code I made. What should I do?
P.S. The command_prefix is not working properly. What should I do about this?
`
import discord, asyncio
import char # class file

from discord.ext import commands

intents=discord.Intents.all()
client = discord.Client(intents=intents) 

bot = commands.Bot(command_prefix='!',intents=intents)

@client.event
async def on_ready():
    await client.change_presence(status=discord.Status.online, activity=discord.Game("Game"))


@client.event
async def on_message(message):
    if message.content == "test":
        await message.channel.send ("{} | {}, Hello".format(message.author, message.author.mention))
        await message.author.send ("{} | {}, User, Hello".format(message.author, message.author.mention))

    if message.content =="!help":
        await message.channel.send ("hello, I'm bot 0.0.1 Alpha")

    async def new_class(ctx,user:discord.user,context1,context2):
        global char_num
        globals()['char_{}'.format(char_num)]=char(name=context1,Sffter=context2,username=ctx.message.author.name)
        char_num+=1
        await ctx.message.channel.send ("done", context1,"!")

client.run('-')
`
A: I advise not using on_message for making commands.
What I do advise:
import discord
from discord.ext import commands
 
@bot.command(name="name here if you want a different one than the function name", description="describe it here", hidden=False) # set hidden to True to hide it in the help
async def mycommand(ctx, argument1, argument2):
    '''A longer description of the command

    Usage example:
    !mycommand hi 1
    '''
    await ctx.send(f"Got {argument1} and {argument2}")

If you will use the two ways together, then add after this line
await message.channel.send("hello, I'm bot 0.0.1 Alpha")
this:
    else:
        await bot.process_commands(message)

If you want to make a help command, you should first remove the default one by editing this line bot = commands.Bot(command_prefix='!',intents=intents) to:
bot = commands.Bot(command_prefix='!',intents=intents,help_command=None)

The overall code should look like (note that it uses only the bot object — in your version the commands are registered on bot but client.run is called, so they never fire, which is also why your command_prefix appears not to work):
import discord, asyncio
import char # class file
from discord.ext import commands

intents = discord.Intents.all()
bot = commands.Bot(command_prefix='!', intents=intents, help_command=None)

@bot.event
async def on_ready():
    await bot.change_presence(status=discord.Status.online, activity=discord.Game("Game"))

@bot.command()
async def make(ctx, name, data):
    # do whatever you want with the name and data parameters
    pass

@bot.command()
async def help(ctx):
    await ctx.send("hello, I'm bot 0.0.1 Alpha")

@bot.command()
async def test(ctx):
    await ctx.send("{} | {}, Hello".format(ctx.author, ctx.author.mention))

bot.run('-')

And yeah, if you want to know what ctx is: ctx is the context, which is a default parameter and has some methods like send, author and more.
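As a side note on storing the created objects: a dictionary keyed by name is usually safer than writing into globals() the way the question's new_class does. A minimal sketch under the same setup — the Char dataclass here is only a stand-in for the char class file from the question, so its fields are illustrative, and bot.run(token) is omitted:

from dataclasses import dataclass

import discord
from discord.ext import commands

@dataclass
class Char:
    name: str
    data: str
    username: str

intents = discord.Intents.all()
bot = commands.Bot(command_prefix="!", intents=intents)

chars = {}  # name -> Char, replaces the globals()['char_{}'] pattern

@bot.command()
async def make(ctx, name: str, data: str):
    # Store the new object under its name so it can be looked up later
    chars[name] = Char(name=name, data=data, username=ctx.author.name)
    await ctx.send(f"done {name}!")

With this, typing !make Alice 42 in Discord creates Char(name='Alice', data='42', username=...) and keeps it retrievable as chars['Alice'].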
I want to receive information from the user from discord, but I don't know what to do. ( discord.py )
I want to receive information from the user from Discord, but I don't know what to do. I want to make a class to input data: if the user writes !make [name] [data], the bot generates an instance of class A, A(name, data). The following is the code I made. What should I do?
P.S. The command_prefix is not working properly. What should I do about this?
`
import discord, asyncio
import char # class file

from discord.ext import commands

intents=discord.Intents.all()
client = discord.Client(intents=intents) 

bot = commands.Bot(command_prefix='!',intents=intents)

@client.event
async def on_ready():
    await client.change_presence(status=discord.Status.online, activity=discord.Game("Game"))


@client.event
async def on_message(message):
    if message.content == "test":
        await message.channel.send ("{} | {}, Hello".format(message.author, message.author.mention))
        await message.author.send ("{} | {}, User, Hello".format(message.author, message.author.mention))

    if message.content =="!help":
        await message.channel.send ("hello, I'm bot 0.0.1 Alpha")

    async def new_class(ctx,user:discord.user,context1,context2):
        global char_num
        globals()['char_{}'.format(char_num)]=char(name=context1,Sffter=context2,username=ctx.message.author.name)
        char_num+=1
        await ctx.message.channel.send ("done", context1,"!")

client.run('-')
`
[ "I advice to not using on message for making commands\nwhat I do advice:\nimport discord\nfrom discord.ext import commands\n \[email protected](name=\"name here if you want a different one than the function name\", description=\"describe it here\", hidden=False) #set hidden to True to hide it in the help\nasync def mycommand(ctx, argument1, argument2):\n '''A longer description of the command\n\n\n Usage example:\n !mycommand hi 1\n '''\n await ctx.send(f\"Got {argument1} and {argument2}\")\n\nif you will use the two ways together then add after this line\nawait message.channel.send (\"hello, I'm bot 0.0.1 Alpha\")\nthis:\n else:\n await bot.process_commands(message)\n\nif you want to make help command u should first remove the default one\nby editing this line bot = commands.Bot(command_prefix='!',intents=intents) to :\nbot = commands.Bot(command_prefix='!',intents=intents,help_command=None)\n\noverall code should look like\nimport discord, asyncio\nimport char # class file\nfrom discord.ext import commands\n\nintents=discord.Intents.all()\nclient = discord.Client(intents=intents) \n\nbot = commands.Bot(command_prefix='!',intents=intents,help_command=None)\n\[email protected]\nasync def on_ready():\n await client.change_presence(status=discord.Status.online, activity=discord.Game(\"Game\"))\n\n\[email protected]\nasync def on_message(message):\n async def new_class(ctx,user:discord.user,context1,context2):\n global char_num\n globals()['char_{}'.format(char_num)]=char(name=context1,Sffter=context2,username=ctx.message.author.name)\n char_num+=1\n await ctx.message.channel.send (\"done\", context1,\"!\")\n\[email protected]()\nasync def make(ctx,name,data):\n #do whatever u want with the name and data premeter\n pass\[email protected]()\nasync def help(ctx):\n await ctx.send(\"hello, I'm bot 0.0.1 Alpha\")\[email protected]()\nasync def test(ctx):\n await ctx.send (\"{} | {}, Hello\".format(ctx.author, ctx.author.mention))\n\n\nclient.run('-')\n\nand yeah if u want to know what is ctx\nctx is context which is default premeter and have some methods like send,author and more\n" ]
[ 0 ]
[]
[]
[ "discord", "python" ]
stackoverflow_0074670633_discord_python.txt
Q: Easiest way for a child object to inherit all of parent object's data in C++?
I wanted to see if there is a way to set a child class object to inherit all the data from a parent class.
ex.
InventoryBook allBooks[20];
SoldBook child[0] = allBooks[0];

For context, I'm trying to write a program in C++ that keeps track of a hypothetical book store's inventory, and I wanted to create a class that handles sale transactions.
class BookData
{
protected:
    char bookTitle[51] = {},
         isbn[14] = {},
         author[31] = {},
         publisher[31] = {},
         dateAdded[11] = {};

public:
    void setTitle(char newTitle[]);
    void setISBN(char newISBN[]);
    void setAuthor(char newAuthor[]);
    void setPub(char newPub[]);
    char *getTitle();
    char *getISBN();
    char *getAuthor();
    char *getPub();
    char *getDateAdded();
    bool bookMatch(char[]);
};

class InventoryBook : public BookData
{
protected:
    int qtyOnHand;
    double wholesale,
           retail;

public:
    void setDateAdded(char newDate[]);
    void setQty(int newQty);
    void setWholesale(double newWhole);
    void setRetail(double newRetail);
    int isEmpty();
    void removeBook();
    void delBook();
    int getQty();
    double getWholesale();
    double getRetail();
};

class SoldBook : public InventoryBook
{
private:
    const double taxRate = 0.06;
    int qtySold;
    double tax = 0,
           subtotal = 0;
    static double total;

public:
    SoldBook calcTax()
    {
        this->tax = qtySold * retail * taxRate;
        return *this;
    }

    SoldBook calcSubtotal()
    {
        subtotal = (this->retail * qtySold) + tax;
        return *this;
    }

    void setQtySold(int qty)
    {
        this->qtySold = qty;
    }

    int getQtySold()
    {
        return qtySold;
    }

    double getSubtotal()
    {
        return subtotal;
    }

    double getTotal()
    {
        return total;
    }

    InventoryBook getBook(InventoryBook book)
    {
        return book;
    }
};

I wanted a SoldBook object to inherit all the data from InventoryBook without having to go to the extra trouble of creating a function that accepts the parent object's data as arguments and sets it in the child class (SoldBook),
ex.
void converter(char[] title, char[] isbn, ...etc) {
    this->bookTitle = title;
    this->isbn = isbn;
    etc...
}

Well, I know how to do it the hard way, but I just want the easiest way. Maybe something like
saleBooks[0] = allBooks[0];

A: Add a constructor that takes in the parent and initializes the inherited data from it. Since the inherited members live in the base-class subobject, the simplest correct form is to copy-construct that subobject:
public:
    SoldBook(const InventoryBook &book)
        : InventoryBook(book)
    {}

then you can simply do
SoldBook soldBook = SoldBook(allBooks[0]);

or create a vector for them
std::vector<SoldBook> soldBooks;
soldBooks.emplace_back(allBooks[0]);

emplace_back uses a constructor call to avoid copying a temporary
and honestly I'd probably put the library into an unordered_map for easy key access
Easiest way for a child object to inherit all of parent object's data in C++?
I wanted to see if there is a way to set a child class object to inherit all the data from a parent class.
ex.
InventoryBook allBooks[20];
SoldBook child[0] = allBooks[0];

For context, I'm trying to write a program in C++ that keeps track of a hypothetical book store's inventory, and I wanted to create a class that handles sale transactions.
class BookData
{
protected:
    char bookTitle[51] = {},
         isbn[14] = {},
         author[31] = {},
         publisher[31] = {},
         dateAdded[11] = {};

public:
    void setTitle(char newTitle[]);
    void setISBN(char newISBN[]);
    void setAuthor(char newAuthor[]);
    void setPub(char newPub[]);
    char *getTitle();
    char *getISBN();
    char *getAuthor();
    char *getPub();
    char *getDateAdded();
    bool bookMatch(char[]);
};

class InventoryBook : public BookData
{
protected:
    int qtyOnHand;
    double wholesale,
           retail;

public:
    void setDateAdded(char newDate[]);
    void setQty(int newQty);
    void setWholesale(double newWhole);
    void setRetail(double newRetail);
    int isEmpty();
    void removeBook();
    void delBook();
    int getQty();
    double getWholesale();
    double getRetail();
};

class SoldBook : public InventoryBook
{
private:
    const double taxRate = 0.06;
    int qtySold;
    double tax = 0,
           subtotal = 0;
    static double total;

public:
    SoldBook calcTax()
    {
        this->tax = qtySold * retail * taxRate;
        return *this;
    }

    SoldBook calcSubtotal()
    {
        subtotal = (this->retail * qtySold) + tax;
        return *this;
    }

    void setQtySold(int qty)
    {
        this->qtySold = qty;
    }

    int getQtySold()
    {
        return qtySold;
    }

    double getSubtotal()
    {
        return subtotal;
    }

    double getTotal()
    {
        return total;
    }

    InventoryBook getBook(InventoryBook book)
    {
        return book;
    }
};

I wanted a SoldBook object to inherit all the data from InventoryBook without having to go to the extra trouble of creating a function that accepts the parent object's data as arguments and sets it in the child class (SoldBook),
ex.
void converter(char[] title, char[] isbn, ...etc) {
    this->bookTitle = title;
    this->isbn = isbn;
    etc...
}

Well, I know how to do it the hard way, but I just want the easiest way. Maybe something like
saleBooks[0] = allBooks[0];
[ "do a constuctor take in the parent and member initialize it\npublic:\n SoldBook(BookData bd)\n :bookTitle(bd.title),\n :isbn(bd.isbn),\n :etc(etc),...\n {}\n\nthen you can simply do\nSoldBook soldBook = SoldBook(bd);\n\nor create a vector for then\nstd::vector<SoldBook> soldBooks;\nsoldBooks.emplace_back(sb);\n\nemplace back uses a constuctor call to avoid copying\nand honestly i'd probably but the library into an unordered_map for easy key access\n" ]
[ 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074670348_c++.txt
Q: How do I get the client IP address of a websocket connection in Django Channels?
I need to get the client IP address of a websocket connection for some extra functionality I would like to implement. I have an existing deployed Django server running an Nginx-Gunicorn-Uvicorn Worker-Redis configuration. As one might expect, during development, whilst running a local server, everything works as expected. However, when deployed, I receive the error NoneType object is not subscriptable when attempting to access the client IP address of the websocket via self.scope["client"][0].
Here are the configurations and code:
NGINX Config:
upstream uvicorn {
    server unix:/run/gunicorn.sock;
}

server {
    listen 80;
    server_name <ip address> <hostname>;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://uvicorn;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
    }

    location /ws/ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        proxy_pass http://uvicorn;
    }

    location /static/ {
        root /var/www/serverfiles/;
        autoindex off;
    }

    location /media {
        alias /mnt/apps;
    }
}

Gunicorn Config:
NOTE: ExecStart has been formatted for readability, it is one line in the actual config
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=django
Group=www-data
WorkingDirectory=/srv/server
Environment=DJANGO_SECRET_KEY=
Environment=GITEA_SECRET_KEY=
Environment=MSSQL_DATABASE_PASSWORD=
ExecStart=/bin/bash -c "
    source venv/bin/activate;
    exec /srv/server/venv/bin/gunicorn
        --workers 3
        --bind unix:/run/gunicorn.sock
        --timeout 300
        --error-logfile /var/log/gunicorn/error.log
        --access-logfile /var/log/gunicorn/access.log
        --log-level debug
        --capture-output
        -k uvicorn.workers.UvicornWorker
        src.server.asgi:application
"

[Install]
WantedBy=multi-user.target

Code throwing the error:
@database_sync_to_async
def _set_online_if_model(self, set_online: bool) -> None:
    model: MyModel
    for model in MyModel.objects.all():
        if self.scope["client"][0] == model.ip:
            model.online = set_online
            model.save()

This server has been running phenomenally in its current configuration before my need to access connecting clients' IP addresses. It handles other websocket connections just fine without any issues.
I've already looked into trying to configure my own custom UvicornWorker according to the docs. I'm not at all an expert in this, so I might have misunderstood what I was supposed to do: https://www.uvicorn.org/deployment/#running-behind-nginx
from uvicorn.workers import UvicornWorker

class ServerUvicornWorker(UvicornWorker):
    def __init__(self, *args, **kwargs) -> None:
        self.CONFIG_KWARGS.update({"proxy_headers": True, "forwarded_allow_ips": "*"})
        super().__init__(*args, **kwargs)

I also looked at https://github.com/django/channels/issues/546 which mentioned a --proxy-headers config for Daphne, however, I am not running Daphne. https://github.com/django/channels/issues/385 mentioned that HTTP headers are passed to the connect method of a consumer, however, that post is quite old and no longer relevant as far as I can tell. I do not get any additional **kwargs to my connect method.
A: Client IP has nothing to do with channels.
self.scope["client"][0] is undefined because, when you receive data from the front end at the backend, there is no data with the name client. So try to send it from the frontend. You can send a manual, static value at first to verify, and then find techniques to read the IP address and send it.
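For what it's worth, behind a reverse proxy like the nginx config in the question, one way to recover the caller's address in a Channels consumer is to read the forwarded header out of scope["headers"] (an ASGI list of (name, value) byte pairs) instead of scope["client"]. A minimal sketch, assuming the proxy sets X-Real-IP / X-Forwarded-For as shown above; the consumer name and client_ip attribute are illustrative:

from channels.generic.websocket import AsyncWebsocketConsumer

class ExampleConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # ASGI header names are lowercase bytes
        headers = dict(self.scope.get("headers") or [])
        raw = headers.get(b"x-real-ip") or headers.get(b"x-forwarded-for")
        if raw:
            # X-Forwarded-For may hold a comma-separated chain; take the first hop
            self.client_ip = raw.decode("latin1").split(",")[0].strip()
        elif self.scope.get("client"):
            self.client_ip = self.scope["client"][0]
        else:
            self.client_ip = None
        await self.accept()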
How do I get the client IP address of a websocket connection in Django Channels?
I need to get the client IP address of a websocket connection for some extra functionality I would like to implement. I have an existing deployed Django server running an Nginx-Gunicorn-Uvicorn Worker-Redis configuration. As one might expect, during development, whilst running a local server, everything works as expected. However, when deployed, I receive the error NoneType object is not subscriptable when attempting to access the client IP address of the websocket via self.scope["client"][0].
Here are the configurations and code:
NGINX Config:
upstream uvicorn {
    server unix:/run/gunicorn.sock;
}

server {
    listen 80;
    server_name <ip address> <hostname>;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://uvicorn;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
    }

    location /ws/ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        proxy_pass http://uvicorn;
    }

    location /static/ {
        root /var/www/serverfiles/;
        autoindex off;
    }

    location /media {
        alias /mnt/apps;
    }
}

Gunicorn Config:
NOTE: ExecStart has been formatted for readability, it is one line in the actual config
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=django
Group=www-data
WorkingDirectory=/srv/server
Environment=DJANGO_SECRET_KEY=
Environment=GITEA_SECRET_KEY=
Environment=MSSQL_DATABASE_PASSWORD=
ExecStart=/bin/bash -c "
    source venv/bin/activate;
    exec /srv/server/venv/bin/gunicorn
        --workers 3
        --bind unix:/run/gunicorn.sock
        --timeout 300
        --error-logfile /var/log/gunicorn/error.log
        --access-logfile /var/log/gunicorn/access.log
        --log-level debug
        --capture-output
        -k uvicorn.workers.UvicornWorker
        src.server.asgi:application
"

[Install]
WantedBy=multi-user.target

Code throwing the error:
@database_sync_to_async
def _set_online_if_model(self, set_online: bool) -> None:
    model: MyModel
    for model in MyModel.objects.all():
        if self.scope["client"][0] == model.ip:
            model.online = set_online
            model.save()

This server has been running phenomenally in its current configuration before my need to access connecting clients' IP addresses. It handles other websocket connections just fine without any issues.
I've already looked into trying to configure my own custom UvicornWorker according to the docs. I'm not at all an expert in this, so I might have misunderstood what I was supposed to do: https://www.uvicorn.org/deployment/#running-behind-nginx
from uvicorn.workers import UvicornWorker

class ServerUvicornWorker(UvicornWorker):
    def __init__(self, *args, **kwargs) -> None:
        self.CONFIG_KWARGS.update({"proxy_headers": True, "forwarded_allow_ips": "*"})
        super().__init__(*args, **kwargs)

I also looked at https://github.com/django/channels/issues/546 which mentioned a --proxy-headers config for Daphne, however, I am not running Daphne. https://github.com/django/channels/issues/385 mentioned that HTTP headers are passed to the connect method of a consumer, however, that post is quite old and no longer relevant as far as I can tell. I do not get any additional **kwargs to my connect method.
[ "Client IP has nothing to do with channels\nself.scope[\"client\"][0] is undefined because when you receive data from the front end at the backend there is no data with the name client. so try to send it from the frontend. you can send a manual, static value at first to verify and then find techniques to read the IP address and then send it.\n" ]
[ 0 ]
[]
[]
[ "django", "django_channels", "nginx", "python" ]
stackoverflow_0074605177_django_django_channels_nginx_python.txt
Q: How to make TypeWriter to write from start after click again
I need to make my Typewriter animation start from the beginning after I click the "developer" button. I can't make it work.
`
var i = 0;
var txt = "<!--Ahoj--> \n\n<name> Jmenuji se Tomáš Maixner </name>\n\n<description> Jsem front-end developer, grafický designer a web designer, ale umím toho mnohem víc. Mám zkušenosti se správou sociálních sítí, SEO, animací, produkcí i postprodukcí videa, 3D grafikou i fotografováním. </description>";
var speed = 50;

function typeWriter() {
  {
    if (i < txt.length) {
      var char = txt.charAt(i);
      document.getElementById("demo").innerHTML += char == '\n' ? '<br>' : char;
      i++;
      setTimeout(typeWriter, speed);
    }
  }
}

function clearBox(demo) {
  document.getElementById("demo").style.display = "none";
  let i = 0;
  let char = txt.charAt(i);
}

function unclearBox(demo) {
  document.getElementById("demo").style.display = "block";
  typeWriter();
}
`
`
<h1>Typewriter</h1>

<div class=window>
  <div class="upper-row">
    <div id="developer" onclick="typeWriter(), unclearBox(demo)" class="upper-row-box"><span class="developer"><></span>Developer</div>
    <div onclick="clearBox(demo)" class="upper-row-box"><span class="designer">@</span>Designer</div>
    <div class="upper-row-box noHover"></div>
    <div class="upper-row-box noHover endBox">
      <div class="dot green"></div>
      <div class="dot yellow"></div>
      <div class="dot red"></div>
    </div>
  </div>
  <div id="coding-window" class="coding-window">
    <div class="code">
      <p id="demo"></p>
      <p id="demo2"></p>
    </div>
  </div>
</div>
`
I have tried many things, but most of them end up deleting my string. I've never been able to restart the animation from the start.

A: To make the animation start again when you click the "Developer" button, you can add the following code inside the unclearBox function:
function unclearBox(demo) {
  document.getElementById("demo").style.display = "block";
  // Reset the counter and clear the content of the element
  i = 0;
  document.getElementById("demo").innerHTML = "";
  typeWriter();
}

This will reset the counter i to 0 and clear the content of the element with the ID "demo", so when the typeWriter function is called again, it will start from the beginning.
Here is the complete code with the changes:
var i = 0;
var txt = "<!--Ahoj--> \n\n<name> Jmenuji se Tomáš Maixner </name>\n\n<description> Jsem front-end developer, grafický designer a web designer, ale umím toho mnohem víc. Mám zkušenosti se správou sociálních sítí, SEO, animací, produkcí i postprodukcí videa, 3D grafikou i fotografováním. </description>";
var speed = 50;

function typeWriter() {
  {
    if (i < txt.length) {
      var char = txt.charAt(i);
      document.getElementById("demo").innerHTML += char == '\n' ? '<br>' : char;
      i++;
      setTimeout(typeWriter, speed);
    }
  }
}

function clearBox(demo) {
  document.getElementById("demo").style.display = "none";
  let i = 0;
  let char = txt.charAt(i);
}

function unclearBox(demo) {
  document.getElementById("demo").style.display = "block";
  // Reset the counter and clear the content of the element
  i = 0;
  document.getElementById("demo").innerHTML = "";
  typeWriter();
}
How to make TypeWriter to write from start after click again
I need to make my Typewriter animation start from the beginning after I click the "developer" button. I can't make it work.
`
var i = 0;
var txt = "<!--Ahoj--> \n\n<name> Jmenuji se Tomáš Maixner </name>\n\n<description> Jsem front-end developer, grafický designer a web designer, ale umím toho mnohem víc. Mám zkušenosti se správou sociálních sítí, SEO, animací, produkcí i postprodukcí videa, 3D grafikou i fotografováním. </description>";
var speed = 50;

function typeWriter() {
  {
    if (i < txt.length) {
      var char = txt.charAt(i);
      document.getElementById("demo").innerHTML += char == '\n' ? '<br>' : char;
      i++;
      setTimeout(typeWriter, speed);
    }
  }
}

function clearBox(demo) {
  document.getElementById("demo").style.display = "none";
  let i = 0;
  let char = txt.charAt(i);
}

function unclearBox(demo) {
  document.getElementById("demo").style.display = "block";
  typeWriter();
}
`
`
<h1>Typewriter</h1>

<div class=window>
  <div class="upper-row">
    <div id="developer" onclick="typeWriter(), unclearBox(demo)" class="upper-row-box"><span class="developer"><></span>Developer</div>
    <div onclick="clearBox(demo)" class="upper-row-box"><span class="designer">@</span>Designer</div>
    <div class="upper-row-box noHover"></div>
    <div class="upper-row-box noHover endBox">
      <div class="dot green"></div>
      <div class="dot yellow"></div>
      <div class="dot red"></div>
    </div>
  </div>
  <div id="coding-window" class="coding-window">
    <div class="code">
      <p id="demo"></p>
      <p id="demo2"></p>
    </div>
  </div>
</div>
`
I have tried many things, but most of them end up deleting my string. I've never been able to restart the animation from the start.
[ "To make the animation start again when you click the \"Developer\" button, you can add the following code inside the unclearBox function:\nfunction unclearBox(demo) {\n document.getElementById(\"demo\").style.display = \"block\";\n // Reset the counter and clear the content of the element\n i = 0;\n document.getElementById(\"demo\").innerHTML = \"\";\n typeWriter();\n}\n\nThis will reset the counter i to 0 and clear the content of the element with the ID \"demo\", so when the typeWriter function is called again, it will start from the beginning.\nHere is the complete code with the changes:\nvar i = 0;\nvar txt = \"<!--Ahoj--> \\n\\n<name> Jmenuji se Tomáš Maixner </name>\\n\\n<description> Jsem front-end developer, grafický designer a web designer, ale umím toho mnohem víc. Mám zkušenosti se správou sociálních sítí, SEO, animací, produkcí i postprodukcí videa, 3D grafikou i fotografováním. </description>\";\nvar speed = 50;\n\nfunction typeWriter() {\n {\n if (i < txt.length) {\n var char = txt.charAt(i);\n document.getElementById(\"demo\").innerHTML += char == '\\n' ? '<br>' : char;\n i++;\n setTimeout(typeWriter, speed);\n }\n }\n }\n\nfunction clearBox(demo) {\n document.getElementById(\"demo\").style.display = \"none\";\n let i = 0;\n let char = txt.charAt(i);\n}\n\nfunction unclearBox(demo) {\n document.getElementById(\"demo\").style.display = \"block\";\n // Reset the counter and clear the content of the element\n i = 0;\n document.getElementById(\"demo\").innerHTML = \"\";\n typeWriter();\n}\n\n" ]
[ 0 ]
[]
[]
[ "css", "html", "javascript", "typewriter" ]
stackoverflow_0074670905_css_html_javascript_typewriter.txt
Q: Same optimization code different results on different computers I am running nested optimization code. sp.optimize.minimize(fun=A, x0=D, method="SLSQP", bounds=(E), constraints=({'type':'eq','fun':constrains}), options={'disp': True, 'maxiter':100, 'ftol':1e-05}) sp.optimize.minimize(fun=B, x0=C, method="Nelder-Mead", options={'disp': True}) The first minimization is the part of the function B, so it is kind of running inside the second minimization. And the whole optimization is based on the data, there's no random number involved. I run the exactly same code on two different computers, and get the totally different results. I have installed different versions of anaconda, but scipy, numpy, and all the packages used have the same versions. I don't really think OS would matter, but one is windows 10 (64bit), and the other one is windows 8.1 (64 bit) I am trying to figure out what might be causing this. Even though I did not state the whole options, if two computers are running the same code, shouldn't the results be the same? or are there any options for sp.optimize that default values are set to be different from computer to computer? PS. I was looking at the option "eps". Is it possible that default values of "eps" are different on these computers? A: You should never expect numerical methods to perform identically on different devices; or even different runs of the same code on the same device. Due to the finite precision of the machine you can never calculate the "real" result, but only numerical approximations. During a long optimization task these differences can sum up. Furthermore, some optimazion methods use some kind of randomness on the inside to solve the problem of being stuck in local minima: they add a small, alomost vanishing noise to the previous calculated solution to allow the algorithm to converge faster in the global minimum and not being stuck in a local minimum or a saddle-point. Can you try to plot the landscape of the function you want to minimize? This can help you to analyze the problem: If both of the results (on each machine) are local minima, then this behaviour can be explained by my previous description. If this is not the case, you should check the version of scipy you have installed on both machines. Maybe you are implicitly using float values on one device and double values on the other one, too? You see: there are a lot of possible explanations for this (at the first look) strange numerical behaviour; you have to give us more details to solve this. A: I found that different versions of SciPy do or do not allow minimum and maximum bounds to be the same. For example, in SciPy version 1.5.4, a parameter with equal min and max bounds sends that term's Jacobian to nan, which brings the minimization to a premature stop.
Same optimization code different results on different computers
I am running nested optimization code. sp.optimize.minimize(fun=A, x0=D, method="SLSQP", bounds=(E), constraints=({'type':'eq','fun':constrains}), options={'disp': True, 'maxiter':100, 'ftol':1e-05}) sp.optimize.minimize(fun=B, x0=C, method="Nelder-Mead", options={'disp': True}) The first minimization is part of the function B, so it is kind of running inside the second minimization. And the whole optimization is based on the data, there's no random number involved. I run exactly the same code on two different computers, and get totally different results. I have installed different versions of anaconda, but scipy, numpy, and all the packages used have the same versions. I don't really think OS would matter, but one is windows 10 (64bit), and the other one is windows 8.1 (64 bit) I am trying to figure out what might be causing this. Even though I did not state the whole options, if two computers are running the same code, shouldn't the results be the same? Or are there any options for sp.optimize whose default values are set differently from computer to computer? PS. I was looking at the option "eps". Is it possible that default values of "eps" are different on these computers?
[ "You should never expect numerical methods to perform identically on different devices; or even different runs of the same code on the same device. Due to the finite precision of the machine you can never calculate the \"real\" result, but only numerical approximations. During a long optimization task these differences can sum up.\nFurthermore, some optimazion methods use some kind of randomness on the inside to solve the problem of being stuck in local minima: they add a small, alomost vanishing noise to the previous calculated solution to allow the algorithm to converge faster in the global minimum and not being stuck in a local minimum or a saddle-point.\nCan you try to plot the landscape of the function you want to minimize? This can help you to analyze the problem: If both of the results (on each machine) are local minima, then this behaviour can be explained by my previous description.\nIf this is not the case, you should check the version of scipy you have installed on both machines. Maybe you are implicitly using float values on one device and double values on the other one, too?\nYou see: there are a lot of possible explanations for this (at the first look) strange numerical behaviour; you have to give us more details to solve this.\n", "I found that different versions of SciPy do or do not allow minimum and maximum bounds to be the same. For example, in SciPy version 1.5.4, a parameter with equal min and max bounds sends that term's Jacobian to nan, which brings the minimization to a premature stop.\n" ]
[ 0, 0 ]
[]
[]
[ "minimization", "optimization", "python", "scipy" ]
stackoverflow_0046043768_minimization_optimization_python_scipy.txt
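Following up on the second answer above: a minimal sketch of the equal-bounds pitfall, assuming only that scipy.optimize.minimize is used as in the question (the objective function and bound values are illustrative, not taken from the original code).

import numpy as np
from scipy.optimize import minimize

def objective(x):
    # toy objective; stands in for the nested optimization in the question
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

x0 = np.array([0.0, 0.0])

# "Fixing" a parameter with identical min and max bounds can send that
# term's finite-difference Jacobian to nan on some SciPy versions (e.g. 1.5.4),
# stopping SLSQP prematurely -- and different installed versions then give
# different results on different machines.
risky_bounds = [(1.0, 1.0), (-5.0, 5.0)]

# More portable: leave a tiny gap around the fixed value (or remove the
# fixed parameter from the optimization entirely).
eps = 1e-12
safe_bounds = [(1.0 - eps, 1.0 + eps), (-5.0, 5.0)]

res = minimize(objective, x0, method="SLSQP", bounds=safe_bounds,
               options={"ftol": 1e-05, "maxiter": 100})
print(res.x, res.fun)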
Q: assigning undefined class variable outside of the class!! - php I don't understand this. I have an empty class, and I can define a variable belonging to the class and assign values to it outside of the class!! How is it possible? <?php class Test{} $test = new Test(); var_dump(isset($test->foo)); $test->foo = 'bar'; var_dump(isset($test->foo)); echo $test->foo; The result is as follows: bool(false) bool(true) bar Can someone please explain it? Is it even safe that PHP has such a feature? A: In PHP < 8.2 it is allowed to dynamically assign properties to a class; in PHP 8.2 this will be deprecated. https://stitcher.io/blog/deprecated-dynamic-properties-in-php-82
assigning undefined class variable outside of the class!! - php
I don't understand this. I have an empty class, and I can define a variable belonging to the class and assign values to it outside of the class!! How is it possible? <?php class Test{} $test = new Test(); var_dump(isset($test->foo)); $test->foo = 'bar'; var_dump(isset($test->foo)); echo $test->foo; The result is as follows: bool(false) bool(true) bar Can someone please explain it? Is it even safe that PHP has such a feature?
[ "in PHP < 8.2 it was allowed to dynamically assign properties to a class, in PHP 8.2 it will be depricated\nhttps://stitcher.io/blog/deprecated-dynamic-properties-in-php-82\n" ]
[ 0 ]
[]
[]
[ "class", "oop", "php", "variables" ]
stackoverflow_0074670904_class_oop_php_variables.txt
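A short sketch of the two migration options for the deprecation mentioned in the answer (PHP 8.2+ assumed; the #[AllowDynamicProperties] attribute is part of PHP 8.2, and the class names here are illustrative):

<?php
// Option 1: declare the property, so no dynamic assignment happens.
class TestDeclared
{
    public ?string $foo = null;
}

// Option 2: explicitly opt in to dynamic properties (PHP 8.2+).
#[AllowDynamicProperties]
class TestDynamic
{
}

$a = new TestDeclared();
$a->foo = 'bar';   // fine on every PHP version

$b = new TestDynamic();
$b->foo = 'bar';   // no deprecation notice on PHP 8.2+

echo $a->foo, ' ', $b->foo; // bar bar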
Q: Get the months between two dates JavaScript Problem First off, I am going to clarify that this question is not a duplicate of Difference in months between two dates in javascript or javascript month difference My question is specifically about getting the months in between two dates, not the number of months. Expected Results So If date1 is 11/01/2022 (mm/dd/yyyy) and date2 is 02/20/2023, it should output an array of months including the month of date1 and date2 like ["November", "December", "January", "February"]. I need to know how to return the actual months between two dates, not the number of months. Can somebody explain what would be the way to do that? A: The post linked in the question is already a good start. Maybe think about how you would do it when you want to write the results to a sheet of paper. When we know the month to start, as an index [0...11], we can count from there and add the month names from an array: const xmas = new Date("December 25, 1999 23:15:30"); const summer = new Date("June 21, 2003 23:15:30"); function monthsBetween(dstart, dend) { const monthNames = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]; let result = []; let current = dstart.getMonth(); let end = (dend.getFullYear() - dstart.getFullYear()) * 12 + dend.getMonth(); for (;current <= end; current += 1) { result.push(monthNames[current % 12]); } return result; } console.log(monthsBetween(xmas, summer)); // [December, January, February..., December, January, ...., June (multiple years) console.log(monthsBetween(xmas, xmas)); // ["December"] In your example 2022-11-01 to 2023-02-20, current would count from 10 (November, indexed from 0) to 13 (1: February indexed from 0 + 1 year = 12 months difference) A: You can loop over the dates from the start to the end, collecting the month names as you go. The start and end should be Date objects, how you convert them from timestamps like "11/01/2022" is a separate concern and should be dealt with in a prior step. /* Get month names from start date to end date * @param {Date} start - date to start from * @param {Date} end - date to end at * @param {string} lang - language for month names * @returns {string[]} month names from start to end inclusive */ function getMonthNames(start = new Date(), end = new Date(), lang = 'en') { let d = new Date(start.getFullYear(), start.getMonth()); let months = []; while (d <= end) { months.push(d.toLocaleString(lang,{month:'long'})); d.setMonth(d.getMonth() + 1); } return months; } let start = new Date(2022, 10, 1); // 01 Nov 2022 let end = new Date(2023, 1, 20); // 20 Feb 2023 console.log(getMonthNames(start, end))
Get the months between two dates JavaScript
Problem First off, I am going to clarify that this question is not a duplicate of Difference in months between two dates in javascript or javascript month difference My question is specifically about getting the months in between two dates, not the number of months. Expected Results So If date1 is 11/01/2022 (mm/dd/yyyy) and date2 is 02/20/2023, it should output an array of months including the month of date1 and date2 like ["November", "December", "January", "February"]. I need to know how to return the actual months between two dates, not the number of months. Can somebody explain what would be the way to do that?
[ "The post linked in the question is already a good start.\nMaybe think about how you would do it when you want to write the results to a sheet of paper.\nWhen we know the month to start, as an index [0...11], we can count from there and add the month names from an array:\nconst xmas = new Date(\"December 25, 1999 23:15:30\");\nconst summer = new Date(\"June 21, 2003 23:15:30\");\n\nfunction monthsBetween(dstart, dend) {\n const monthNames = [\"January\", \"February\", \"March\", \"April\", \"May\", \"June\", \"July\", \"August\", \"September\", \"October\", \"November\", \"December\"];\n let result = [];\n let current = dstart.getMonth();\n let end = (dend.getFullYear() - dstart.getFullYear()) * 12 + dend.getMonth();\n for (;current <= end; current += 1) {\n result.push(monthNames[current % 12]);\n }\n return result;\n}\n\nconsole.log(monthsBetween(xmas, summer)); // [December, January, February..., December, January, ...., June (multiple years)\nconsole.log(monthsBetween(xmas, xmas)); // [\"December\"]\n\n\nIn your example 2022-11-01 to 2023-02-20, current would count from 10 (November, indexed from 0) to 13 (1: February indexed from 0 + 1 year = 12 months difference)\n", "You can loop over the dates from the start to the end, collecting the month names as you go.\nThe start and end should be Date objects, how you convert them from timestamps like \"11/01/2022\" is a separate concern and should be dealt with in a prior step.\n\n\n/* Get month names from start date to end date\n * @param {Date} start - date to start from\n * @param {Date} end - date to end at\n * @param {string} lang - language for month names\n * @returns {string[]} month names from start to end inclusive\n */\nfunction getMonthNames(start = new Date(), end = new Date(), lang = 'en') {\n let d = new Date(start.getFullYear(), start.getMonth());\n let months = [];\n while (d <= end) {\n months.push(d.toLocaleString(lang,{month:'long'}));\n d.setMonth(d.getMonth() + 1);\n }\n return months;\n}\n\nlet start = new Date(2022, 10, 1); // 01 Nov 2022\nlet end = new Date(2023, 1, 20); // 20 Feb 2023 \n\nconsole.log(getMonthNames(start, end))\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "date", "javascript" ]
stackoverflow_0074666424_date_javascript.txt
Q: Are Mongo Charts possible in Compass Client? Is it possible to use Google charts in MongoDB Compass? I have an AWS version, but can I use charts locally on the client? A: You can create a chart in Compass using the aggregation pipeline to specify the data and chart type, and then customize the chart using the options provided by Google Charts. connect to your database, then Select the collection you want to visualize using a chart. Click on the "Aggregation" tab and In the aggregation pipeline, specify the data you want to visualize and the chart type using the $group and $project stages. Such as; to create a pie chart showing the number of documents per category, you can use the following pipeline [ { $group: { _id: "$category", count: { $sum: 1 } } }, { $project: { _id: 0, category: "$_id", count: 1, type: { $literal: "pie" } } } ] Click on the "Visualize" button to open the chart editor. In the chart editor, select the "Google Charts" option in the "Visualization" dropdown. In the "Chart" tab, select the "Pie Chart" option in the "Chart Type" dropdown. In the "Data" tab, select the "category" and "count" fields as the "Labels" and "Values", respectively. In the "Options" tab, you can customize the chart using the options provided by Google Charts. For example, you can change the chart title, legend position, and colors. Click on the "Apply" button to apply the changes and see the chart. Alternatively, you can also use Google Charts to create bar, line, and other types of charts in MongoDB Compass.
Are Mongo Charts possible in Compass Client?
Is it possible to use Google charts in MongoDB Compass? I have an AWS version, but can I use charts locally on the client?
[ "You can create a chart in Compass using the aggregation pipeline to specify the data and chart type, and then customize the chart using the options provided by Google Charts.\nconnect to your database, then Select the collection you want to visualize using a chart. Click on the \"Aggregation\" tab and In the aggregation pipeline, specify the data you want to visualize and the chart type using the $group and $project stages.\nSuch as; to create a pie chart showing the number of documents per category, you can use the following pipeline\n[\n {\n $group: {\n _id: \"$category\",\n count: { $sum: 1 }\n }\n },\n {\n $project: {\n _id: 0,\n category: \"$_id\",\n count: 1,\n type: { $literal: \"pie\" }\n }\n }\n]\n\nClick on the \"Visualize\" button to open the chart editor.\nIn the chart editor, select the \"Google Charts\" option in the \"Visualization\" dropdown.\nIn the \"Chart\" tab, select the \"Pie Chart\" option in the \"Chart Type\" dropdown.\nIn the \"Data\" tab, select the \"category\" and \"count\" fields as the \"Labels\" and \"Values\", respectively.\nIn the \"Options\" tab, you can customize the chart using the options provided by Google Charts. For example, you can change the chart title, legend position, and colors.\nClick on the \"Apply\" button to apply the changes and see the chart.\nAlternatively, you can also use Google Charts to create bar, line, and other types of charts in MongoDB Compass.\n" ]
[ 0 ]
[]
[]
[ "mongodb" ]
stackoverflow_0074602878_mongodb.txt
Q: What is amplify-backup? Working on an Amplify project, I realized that the directory contains amplify-backup folder. When running amplify pull, the command errors, pointing out the existence of amplify-backup and suggesting removing the folder. I want to know how amplify-backup is created and the purpose of this folder. A: Amplify is a framework for developing and deploying cloud-powered applications with modern web frameworks such as React and Angular. The amplify-backup directory is likely a folder that is created automatically by the Amplify CLI when you run the amplify pull command. This command is used to download your cloud-based Amplify project to your local machine. The purpose of the amplify-backup folder is to store a backup of your local Amplify project before it is overwritten by the updated version that is downloaded from the cloud. This is useful in case there are any conflicts or issues with the updated version of the project. If you are encountering an error when running the amplify pull command, it is likely due to the existence of the amplify-backup directory. In this case, you can try removing the directory by running the following command: rm -rf amplify-backup This will delete the amplify-backup directory and allow you to run the amplify pull command without encountering the error. It is important to note, however, that this will permanently delete any backups of your local Amplify project that were stored in the amplify-backup directory. So, you should only do this if you are sure that you don't need those backups. I hope this helps! Let me know if you have any other questions.
What is amplify-backup?
Working on an Amplify project, I realized that the directory contains an amplify-backup folder. When running amplify pull, the command errors, pointing out the existence of amplify-backup and suggesting removing the folder. I want to know how amplify-backup is created and the purpose of this folder.
[ "Amplify is a framework for developing and deploying cloud-powered applications with modern web frameworks such as React and Angular. The amplify-backup directory is likely a folder that is created automatically by the Amplify CLI when you run the amplify pull command. This command is used to download your cloud-based Amplify project to your local machine.\nThe purpose of the amplify-backup folder is to store a backup of your local Amplify project before it is overwritten by the updated version that is downloaded from the cloud. This is useful in case there are any conflicts or issues with the updated version of the project.\nIf you are encountering an error when running the amplify pull command, it is likely due to the existence of the amplify-backup directory. In this case, you can try removing the directory by running the following command:\nrm -rf amplify-backup\n\nThis will delete the amplify-backup directory and allow you to run the amplify pull command without encountering the error. It is important to note, however, that this will permanently delete any backups of your local Amplify project that were stored in the amplify-backup directory. So, you should only do this if you are sure that you don't need those backups.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 1 ]
[]
[]
[ "amazon_web_services", "aws_amplify", "aws_amplify_cli" ]
stackoverflow_0074670920_amazon_web_services_aws_amplify_aws_amplify_cli.txt
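A hedged variant of the cleanup step from the answer above: renaming instead of deleting keeps the backup recoverable (shell commands; the folder name comes from the error message, the timestamp suffix is just a convention):

# keep the old backup under a timestamped name instead of deleting it
mv amplify-backup "amplify-backup-$(date +%Y%m%d-%H%M%S)"

# then retry the pull
amplify pull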
Q: Why is my React useState Hook being set to = '1' rather than being preserved? My goal is to define a state that holds an array of selected ID's i.e. const [selectedIds, setSelectedIds] = useState([]) in a 'grandparent' component that can get updated when changes are made within a grandchild component. That way, I can then use this updated data in another component (we'll say 'uncle' component) that is a child of grandparent. I'm new to React, so I'm not sure if this is the 'proper' way to handle a scenario like this, and I can't figure out why my state is getting set (potentially re-initialized) to = 1 between selecting checkboxes (any insight into general causes for this would be helpful). Here's an outline of what I'm attempting to do: Grandparent component: const GrandParent = (props) => { const [selectedIds, setSelectedIds] = useState([]); const updateStateUsingDataFromGrandchild = (id) => { setSelectedIds(selectedIds.push(id)); } return( <div> <Parent updateStateUsingDataFromGrandchild={updateStateUsingDataFromGrandchild}></Parent> <Uncle selectedIds={selectedIds}></Uncle> //my overall goal is to get updated selectedIds passed to here </div> ); } Parent Component (just passing the function through this) const Parent = (props) => { return( <div> <GrandChild updateStateUsingDataFromGrandchild={updateStateUsingDataFromGrandchild}></GrandChild> </div> ); } GrandChild - When a checkbox is checked, call the function in the grandparent, passing the id const GrandChild = (props) => { const handleCheckedInput = (event) => { props.updateStateUsingDataFromGrandchild(event.target.id); } return( <div> <input onChange={handleCheckedInput} type="checkbox" id="1" /> Thing1 <input onChange={handleCheckedInput} type="checkbox" id="2" /> Thing2 </div> ); } What I see while debugging: in first checkbox check, updateStateUsingDataFromGrandchild() is called with the id passed, and updates selectedIds in the grandParent component. However, by the time I click the second checkbox and enter the function in the grandparent, selectedIds evaluates to = 1, and has seemingly been re-initialized or something? I would expect selectedIds to contain the id I had already pushed to it. A: push returns the new length of the array, so when you do this: setSelectedIds(selectedIds.push(id)) you’re setting selectedIds to the new length of the array. You could do this instead: setSelectedIds([…selectedIds, id])
Why is my React useState Hook being set to = '1' rather than being preserved?
My goal is to define a state that holds an array of selected ID's i.e. const [selectedIds, setSelectedIds] = useState([]) in a 'grandparent' component that can get updated when changes are made within a grandchild component. That way, I can then use this updated data in another component (we'll say 'uncle' component) that is a child of grandparent. I'm new to React, so I'm not sure if this is the 'proper' way to handle a scenario like this, and I can't figure out why my state is getting set (potentially re-initialized) to = 1 between selecting checkboxes (any insight into general causes for this would be helpful). Here's an outline of what I'm attempting to do: Grandparent component: const GrandParent = (props) => { const [selectedIds, setSelectedIds] = useState([]); const updateStateUsingDataFromGrandchild = (id) => { setSelectedIds(selectedIds.push(id)); } return( <div> <Parent updateStateUsingDataFromGrandchild={updateStateUsingDataFromGrandchild}></Parent> <Uncle selectedIds={selectedIds}></Uncle> //my overall goal is to get updated selectedIds passed to here </div> ); } Parent Component (just passing the function through this) const Parent = (props) => { return( <div> <GrandChild updateStateUsingDataFromGrandchild={updateStateUsingDataFromGrandchild}></GrandChild> </div> ); } GrandChild - When a checkbox is checked, call the function in the grandparent, passing the id const GrandChild = (props) => { const handleCheckedInput = (event) => { props.updateStateUsingDataFromGrandchild(event.target.id); } return( <div> <input onChange={handleCheckedInput} type="checkbox" id="1" /> Thing1 <input onChange={handleCheckedInput} type="checkbox" id="2" /> Thing2 </div> ); } What I see while debugging: in first checkbox check, updateStateUsingDataFromGrandchild() is called with the id passed, and updates selectedIds in the grandParent component. However, by the time I click the second checkbox and enter the function in the grandparent, selectedIds evaluates to = 1, and has seemingly been re-initialized or something? I would expect selectedIds to contain the id I had already pushed to it.
[ "push returns the new length of the array, so when you do this:\nsetSelectedIds(selectedIds.push(id))\n\nyou’re setting selectedIds to the new length of the array. You could do this instead:\nsetSelectedIds([…selectedIds, id])\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "react_hooks", "reactjs" ]
stackoverflow_0074670923_javascript_react_hooks_reactjs.txt
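A minimal sketch of the fix from the answer above in context, using the functional updater form (names are taken from the question; the rendering is elided):

import { useState } from "react";

function GrandParent() {
  const [selectedIds, setSelectedIds] = useState([]);

  // Functional updater: builds a new array from the latest state,
  // instead of using push() (whose return value is the new length).
  const updateStateUsingDataFromGrandchild = (id) => {
    setSelectedIds((prev) => [...prev, id]);
  };

  // ... pass updateStateUsingDataFromGrandchild down as before and
  // render <Uncle selectedIds={selectedIds} /> with the updated array.
  return null; // rendering elided for brevity
}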
Q: I want to sort my api according to recent dates I have cards that render it from an api that has many objs including date and I wane to render the cards based on recent dates ... What I need is to sort based on recent dates using react snippets of code also a link that works https://codesandbox.io/s/sleepy-glitter-ru6dvu?file=/src/App.js:166-207 my api https://api.npoint.io/d275425a434e02acf2f7 { filteredDate && filteredCat?.map((list) => { if (list.showOnHomepage === "yes") { const date = format( new Date(list.publishedDate), "EEE dd MMM yyyy" ); const showCat = news.map((getid) => { if (getid.id == list.categoryID) return getid.name; }); // const rec = list.publishedDate.sort((date1, date2) => date1 - date2); return ( <Card className=" extraCard col-lg-3" style={{ width: "" }} id={list.categoryID} > <Card.Img variant="top" src={list.urlToImage} alt="Image" /> <Card.Body> <Card.Title className="textTitle"> {list.title} </Card.Title> <Card.Text></Card.Text> <small className="text-muted d-flex"> <FaRegCalendarAlt className="m-1" style={{ color: "#0aceff" }} /> {date} </small> <div style={{ color: "#0aceff" }} className="d-flex justify-content-between" > <Button variant="" className={classes["btn-cat"]}> {showCat} </Button> <div> <FaRegHeart /> <p> <FaLink /> <BrowserRouter> {/* <Link to='./Newsitem.js'> {''} <button >Close</button> </Link> */} </BrowserRouter> {/* <button onClick={() => window.open("/src/components/News/Newsitem") } > Go to another </button> */} <a href="/Newsitem" target="/src/components/News/Newsitem" rel="noopener noreferrer" > <button >Go to another page</button> </a> </p> </div> </div> </Card.Body> </Card> ); } })} </div> } </div> A: use a state for you data which is coming from the server. When getting the data, sort the News based on the publishedDate. Set your state value. Use your state value to render your UI. Hope it helps: const [news, setNews] = useState([]); const fetchNews = () => { fetch("https://api.npoint.io/d275425a434e02acf2f7").then((response) => response.json()).then((data) => { const sortedNews = data.News.sort(function(a, b) { const firstPublishedDate = new Date(a.publishedDate); const secondPublishedDate = new Date(b.publishedDate); return firstPublishedDate.getTime() - secondPublishedDate.getTime(); }); setNews(sortedNews); }).catch((error) => { console.log(error); }); }; useEffect(() => { fetchNews(); }, []); return ( <div> { news.map((newsItem) => { return ( ... ); }) } </div> );
I want to sort my api according to recent dates
I have cards that render from an API that has many objects, including a date, and I want to render the cards based on recent dates ... What I need is to sort based on recent dates using React. Snippets of code, and a link that works: https://codesandbox.io/s/sleepy-glitter-ru6dvu?file=/src/App.js:166-207 My API: https://api.npoint.io/d275425a434e02acf2f7 { filteredDate && filteredCat?.map((list) => { if (list.showOnHomepage === "yes") { const date = format( new Date(list.publishedDate), "EEE dd MMM yyyy" ); const showCat = news.map((getid) => { if (getid.id == list.categoryID) return getid.name; }); // const rec = list.publishedDate.sort((date1, date2) => date1 - date2); return ( <Card className=" extraCard col-lg-3" style={{ width: "" }} id={list.categoryID} > <Card.Img variant="top" src={list.urlToImage} alt="Image" /> <Card.Body> <Card.Title className="textTitle"> {list.title} </Card.Title> <Card.Text></Card.Text> <small className="text-muted d-flex"> <FaRegCalendarAlt className="m-1" style={{ color: "#0aceff" }} /> {date} </small> <div style={{ color: "#0aceff" }} className="d-flex justify-content-between" > <Button variant="" className={classes["btn-cat"]}> {showCat} </Button> <div> <FaRegHeart /> <p> <FaLink /> <BrowserRouter> {/* <Link to='./Newsitem.js'> {''} <button >Close</button> </Link> */} </BrowserRouter> {/* <button onClick={() => window.open("/src/components/News/Newsitem") } > Go to another </button> */} <a href="/Newsitem" target="/src/components/News/Newsitem" rel="noopener noreferrer" > <button >Go to another page</button> </a> </p> </div> </div> </Card.Body> </Card> ); } })} </div> } </div>
[ "\nuse a state for you data which is coming from the server.\n\nWhen getting the data, sort the News based on the publishedDate.\n\nSet your state value.\n\nUse your state value to render your UI.\n\n\nHope it helps:\nconst [news, setNews] = useState([]);\n\nconst fetchNews = () => {\n fetch(\"https://api.npoint.io/d275425a434e02acf2f7\").then((response) => response.json()).then((data) => {\n const sortedNews = data.News.sort(function(a, b) {\n const firstPublishedDate = new Date(a.publishedDate);\n const secondPublishedDate = new Date(b.publishedDate);\n return firstPublishedDate.getTime() - secondPublishedDate.getTime();\n });\n setNews(sortedNews);\n }).catch((error) => {\n console.log(error);\n });\n};\n\nuseEffect(() => {\n fetchNews();\n}, []);\n\nreturn (\n <div>\n {\n news.map((newsItem) => {\n return (\n ...\n );\n })\n }\n </div>\n);\n\n" ]
[ 0 ]
[ "You can sort your News data in, for example, the fetchDataList call as shown below. This will sort starting from most recent articles at the top.\nconst fetchDataList = () => {\n setIsLoading(true);\n\n return fetch(\"https://api.npoint.io/d275425a434e02acf2f7\")\n .then((response) => response.json())\n .then((data) => {\n // sort news \n data.News.sort(function(x, y) {\n if (x.publishedDate > y.publishedDate) {\n return -1;\n }\n if (x.publishedDate < y.publishedDate) {\n return 1;\n }\n return 0;\n });\n // **************\n setLists(data.News);\n setIsLoading(false);\n });\n};\n\nHere is a working snippet showing all of the API articles sorted:\nhttps://codesandbox.io/s/musing-pond-0fr7jk\n" ]
[ -1 ]
[ "reactjs" ]
stackoverflow_0074669492_reactjs.txt
Q: Execute python function from Java code and get result I am working with a Python library but everything else is in Java. I want to be able to access and use the Python library from Java, so I started researching and using Jython. I need to use the numpy and neurokit libraries. I wrote this simple code in Java: PythonInterpreter interpreter = new PythonInterpreter(); interpreter.set("values", 10 ); interpreter.execfile("D:\\PyCharmWorkspace\\IoTproject\\Test.py"); PyObject b = interpreter.get("result"); and the code in Python: import sys sys.path.append("D:\\PyCharmWorkspace\\venv\\lib\\site-packages") import numpy as np result = values + 20 The problem is that when it tries to load the module numpy, I get this error: Exception in thread "main" Traceback (most recent call last): File "D:\PyCharmWorkspace\IoTproject\TestECGfeature.py", line 4, in <module> import numpy as np File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\__init__.py", line 24, in <module> from . import multiarray File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\__init__.py", line 24, in <module> from . import multiarray File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\overrides.py", line 166 SyntaxError: unqualified exec is not allowed in function 'decorator' because it contains free variables I also tried to do this: interpreter.exec("import sys"); interpreter.exec("sys.path.append('D:\\PyCharmWorkspace\\venv\\lib\\site-packages')"); interpreter.exec("import numpy as np"); and I get: Exception in thread "main" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named numpy To install Jython I have added the jar file to the project build path. I found jep and jpy, which can make Java communicate with Python, but I didn't find out how to install or use them. What I need is to call a Python function, giving params and getting a result. How can I do that, or how can I solve the problem using Jython? A: Following code can be used for executing a python script (logger and jobAccountJson are assumed fields of the enclosing class): private File runPythonCode(String pythonScript) throws IOException, InterruptedException { ProcessBuilder pb = new ProcessBuilder("python", pythonScript); Process process = pb.start(); int errCode = process.waitFor(); if (errCode == 1) { System.out.println("Error"); } else { String filePath = output(process.getInputStream()); logger.info("Generated report file path ::" + filePath); if (filePath != null) { File docxFile = new File(filePath.trim()); // creates a new file only if it does not exist; // file.exists() returns false // if we explicitly do not create the file even if the file // exists docxFile.createNewFile(); String updatedFileName = docxFile.getParent() + File.separator + jobAccountJson.getProviderName() + "_" + docxFile.getName(); File renamedFile = new File(updatedFileName); if (docxFile.renameTo(renamedFile)) { logger.info("Renamed file to " + renamedFile.getPath()); return renamedFile; } else { logger.error("Could not rename file to " + updatedFileName); } return docxFile; } } return null; } private static String output(InputStream inputStream) throws IOException { StringBuilder sb = new StringBuilder(); BufferedReader br = null; try { br = new BufferedReader(new InputStreamReader(inputStream)); String line = null; while ((line = br.readLine()) != null) { sb.append(line + System.getProperty("line.separator")); } } finally { if(br != null) { br.close(); } } return sb.toString(); } A: ProcessBuilder pb = new ProcessBuilder("python", "NameOfScript.py"); Process p = pb.start(); p.getInputStream().transferTo(System.out);
Execute python function from Java code and get result
I am working with a Python library but everything else is in Java. I want to be able to access and use the Python library from Java, so I started researching and using Jython. I need to use the numpy and neurokit libraries. I wrote this simple code in Java: PythonInterpreter interpreter = new PythonInterpreter(); interpreter.set("values", 10 ); interpreter.execfile("D:\\PyCharmWorkspace\\IoTproject\\Test.py"); PyObject b = interpreter.get("result"); and the code in Python: import sys sys.path.append("D:\\PyCharmWorkspace\\venv\\lib\\site-packages") import numpy as np result = values + 20 The problem is that when it tries to load the module numpy, I get this error: Exception in thread "main" Traceback (most recent call last): File "D:\PyCharmWorkspace\IoTproject\TestECGfeature.py", line 4, in <module> import numpy as np File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\__init__.py", line 24, in <module> from . import multiarray File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\__init__.py", line 24, in <module> from . import multiarray File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\overrides.py", line 166 SyntaxError: unqualified exec is not allowed in function 'decorator' because it contains free variables I also tried to do this: interpreter.exec("import sys"); interpreter.exec("sys.path.append('D:\\PyCharmWorkspace\\venv\\lib\\site-packages')"); interpreter.exec("import numpy as np"); and I get: Exception in thread "main" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named numpy To install Jython I have added the jar file to the project build path. I found jep and jpy, which can make Java communicate with Python, but I didn't find out how to install or use them. What I need is to call a Python function, giving params and getting a result. How can I do that, or how can I solve the problem using Jython?
[ "Following code can be used for executing python script\n private void runPythonCode(String pythonScript) {\n ProcessBuilder pb = new ProcessBuilder(\"python\", pythonScript);\n\n Process process = pb.start();\n int errCode = process.waitFor();\n\n if (errCode == 1) {\n System.out.println(\"Error\");\n } else {\n String filePath = output(process.getInputStream());\n logger.info(\"Generated report file path ::\" + filePath);\n if (filePath != null) {\n File docxFile = new File(filePath.trim());\n // creates a new file only if it does not exists,\n // file.exists() returns false\n // if we explicitly do not create file even if the file\n // exists\n docxFile.createNewFile();\n String updatedFileName = docxFile.getParent() + File.separator \n + jobAccountJson.getProviderName() + \"_\" + docxFile.getName();\n File reanmedFileName = new File(updatedFileName);\n if(docxFile.renameTo(reanmedFileName)) {\n logger.info(\"Renamed file to \" + r\n\neanmedFileName.getPath());\n return reanmedFileName;\n } else {\n logger.error(\"Could not rename file to \" + updatedFileName);\n }\n return docxFile;\n }\n }\n}\nprivate static String output(InputStream inputStream) throws IOException {\n StringBuilder sb = new StringBuilder();\n BufferedReader br = null;\n try {\n br = new BufferedReader(new InputStreamReader(inputStream));\n String line = null;\n while ((line = br.readLine()) != null) {\n sb.append(line + System.getProperty(\"line.separator\"));\n }\n } finally {\n if(br != null) {\n br.close();\n }\n }\n return sb.toString();\n}\n\n", " ProcessBuilder pb = new ProcessBuilder(\"python\", \"NameOfScript.py\");\n Process p = pb.start();\n p.getInputStream().transferTo(System.out);\n\n" ]
[ 0, 0 ]
[]
[]
[ "java", "jython", "numpy", "python" ]
stackoverflow_0060171954_java_jython_numpy_python.txt
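Since the question ends with "call a Python function, giving params and getting a result", here is a self-contained sketch of that round trip over a subprocess (the script name and argument are made up; the Python script is expected to read sys.argv and print its result to stdout):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class PyCall {
    public static void main(String[] args) throws IOException, InterruptedException {
        // pass the parameter as a command-line argument
        ProcessBuilder pb = new ProcessBuilder("python", "work.py", "10");
        pb.redirectErrorStream(true); // merge stderr into stdout for simpler reading

        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = br.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        int exitCode = p.waitFor();
        System.out.println("exit=" + exitCode + ", result=" + out.toString().trim());
    }
}
// work.py (illustrative): import sys; print(int(sys.argv[1]) + 20)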
Q: dotnet 6 minimal API circular serialization I am new to dotnet, trying out dotnet 6 minimal API. I have two models: namespace Linker.Models { class Link : BaseEntity { [MaxLength(2048)] public string Url { get; set;} = default!; [MaxLength(65536)] public string? Description { get; set; } [Required] public User Owner { get; set; } = default!; [Required] public Space Space { get; set; } = default!; } } And: namespace Linker.Models { class Space : BaseEntity { public string Name { get; set; } = default!; public string Code { get; set; } = default!; public User Owner { get; set; } = default!; public List<Link> Links { get; set; } = new List<Link>(); } } Now when I try to serialize Space model I get error System.Text.Json.JsonException: A possible object cycle was detected. This can either be due to a cycle or if the object depth is larger than the maximum allowed depth of 64. (make sense because Path: $.Links.Space.Links.Space.Links.Space.Links.Space.Links.Space.Links...). Is it posible to prevent dotnet from serializing object this deep? I don't need for dotnet to even try to serialize such a deep relations A: The reason the global configuration is getting ignored is because the wrong JsonOptions is being used. The following should work: builder.Services.Configure<Microsoft.AspNetCore.Http.Json.JsonOptions>(options => ... rest of code My default for JsonOptions is Microsoft.AspNetCore.Mvc.JsonOptions, which was not the correct JsonOptions object to change and so globally did not to work. A: You can set ReferenceHandler.Preserve in the JsonSerializerOptions. These docs How to preserve references and handle or ignore circular references in System.Text.Json discuss further. For manual serialization/deserialization pass the options to the JsonSerializer: JsonSerializerOptions options = new() { ReferenceHandler = ReferenceHandler.Preserve }; string serialized = JsonSerializer.Serialize(model, options); Or to configure globally in minimal API: using Microsoft.AspNetCore.Http.Json; var builder = WebApplication.CreateBuilder(args); // Set the JSON serializer options builder.Services.Configure<JsonOptions>(options => { options.SerializerOptions.ReferenceHandler = ReferenceHandler.Preserve; }); You could instead ignore the circular references rather than handling them by using ReferenceHandler.IgnoreCycles. The serializer will set the circular references to null, so there's potential for data loss using this method. A: Try adding [JsonIgnore] before the Space declaration as below: namespace Linker.Models { class Link : BaseEntity { [MaxLength(2048)] public string Url { get; set;} = default!; [MaxLength(65536)] public string? Description { get; set; } [Required] public User Owner { get; set; } = default!; [JsonIgnore] [Required] public Space Space { get; set; } = default!; } }
dotnet 6 minimal API circular serialization
I am new to dotnet, trying out the dotnet 6 minimal API. I have two models: namespace Linker.Models { class Link : BaseEntity { [MaxLength(2048)] public string Url { get; set;} = default!; [MaxLength(65536)] public string? Description { get; set; } [Required] public User Owner { get; set; } = default!; [Required] public Space Space { get; set; } = default!; } } And: namespace Linker.Models { class Space : BaseEntity { public string Name { get; set; } = default!; public string Code { get; set; } = default!; public User Owner { get; set; } = default!; public List<Link> Links { get; set; } = new List<Link>(); } } Now when I try to serialize the Space model I get the error System.Text.Json.JsonException: A possible object cycle was detected. This can either be due to a cycle or if the object depth is larger than the maximum allowed depth of 64. (makes sense because Path: $.Links.Space.Links.Space.Links.Space.Links.Space.Links.Space.Links...). Is it possible to prevent dotnet from serializing objects this deep? I don't need dotnet to even try to serialize such deep relations.
[ "The reason the global configuration is getting ignored is because the wrong JsonOptions is being used. The following should work:\nbuilder.Services.Configure<Microsoft.AspNetCore.Http.Json.JsonOptions>(options =>\n... rest of code\nMy default for JsonOptions is Microsoft.AspNetCore.Mvc.JsonOptions, which was not the correct JsonOptions object to change and so globally did not to work.\n", "You can set ReferenceHandler.Preserve in the JsonSerializerOptions. These docs\nHow to preserve references and handle or ignore circular references in System.Text.Json discuss further.\nFor manual serialization/deserialization pass the options to the JsonSerializer:\nJsonSerializerOptions options = new()\n{\n ReferenceHandler = ReferenceHandler.Preserve\n};\nstring serialized = JsonSerializer.Serialize(model, options);\n\nOr to configure globally in minimal API:\nusing Microsoft.AspNetCore.Http.Json;\n\nvar builder = WebApplication.CreateBuilder(args);\n\n// Set the JSON serializer options\nbuilder.Services.Configure<JsonOptions>(options =>\n{\n options.SerializerOptions.ReferenceHandler = ReferenceHandler.Preserve;\n});\n\nYou could instead ignore the circular references rather than handling them by using ReferenceHandler.IgnoreCycles. The serializer will set the circular references to null, so there's potential for data loss using this method.\n", "Try adding [JsonIgnore] before the Space declaration as below:\nnamespace Linker.Models\n{\n class Link : BaseEntity\n {\n [MaxLength(2048)]\n public string Url { get; set;} = default!;\n [MaxLength(65536)]\n public string? Description { get; set; }\n [Required]\n public User Owner { get; set; } = default!;\n [JsonIgnore]\n [Required]\n public Space Space { get; set; } = default!;\n }\n}\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ ".net_core", "c#", "minimal_apis" ]
stackoverflow_0071275893_.net_core_c#_minimal_apis.txt
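A minimal self-contained sketch of the ReferenceHandler.IgnoreCycles variant mentioned in the second answer (System.Text.Json on .NET 6; the model classes are trimmed-down stand-ins for the ones in the question):

using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

class Space { public string Name { get; set; } = ""; public List<Link> Links { get; } = new(); }
class Link  { public string Url  { get; set; } = ""; public Space? Space { get; set; } }

class Demo
{
    static void Main()
    {
        var space = new Space { Name = "s1" };
        space.Links.Add(new Link { Url = "https://example.com", Space = space });

        var options = new JsonSerializerOptions
        {
            ReferenceHandler = ReferenceHandler.IgnoreCycles
        };

        // The back-reference Link.Space is written as null instead of throwing:
        Console.WriteLine(JsonSerializer.Serialize(space, options));
    }
}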
Q: Javascript concatenate rows based on column I have a feeling this question already exists, just can't find it. Is there a way to take a 2d array with x rows and 2 columns and merge multiple values into one row based on it having the same element in the first column? [['Needs Work', 'Joe'], ['Needs Work', 'Jill'], ['Needs Work', 'Jack'], ['Complete', 'Sean'], ['Complete', 'Joe'], ['Not Started', 'Laura'], ['Needs Work', 'Jack']] So that it looks like this [ [ 'Needs Work', 'Joe,Jill,Jack,Jack' ], [ 'Complete', 'Sean,Joe' ], [ 'Not Started', 'Laura' ] ] A: Not sure if this the most efficient way. function myFunction() { var array = [['Needs Work', 'Joe'], ['Needs Work', 'Jill'], ['Needs Work', 'Jack'], ['Complete', 'Sean'], ['Complete', 'Joe'], ['Not Started', 'Laura'], ['Needs Work', 'Jack']]; const extractColumn = (arr, n) => arr.map(row => row[n]) var list = [...new Set(extractColumn(array, 0))]//gets set of distict values from first column // console.log(list) var condensedArray = [] for (var i = 0; i < list.length; i++) { var filteredArray = array.filter(row => row[0] == list[i])//filters array to only to current value // console.log(filteredArray) condensedArray.push([list[i], extractColumn(filteredArray, 1).toString()]) } console.log(condensedArray) } A: Considering the answer that you posted, I wrote the following function: function create_union_array(initial_array) { const temporal_object = {}; for (i in initial_array) { const work_state = initial_array[i][0]; const people_name = initial_array[i][1]; if (temporal_object[work_state] == undefined) { temporal_object[work_state] = [people_name] } else { temporal_object[work_state].push(people_name) } } const output_array = []; let iteration = 0; for (i in temporal_object) { output_array[iteration] = [i, temporal_object[i]] iteration++; } return output_array; } Unlike yours, instead of returning the names in a concatenated string, this one returns the names in an array, in case you need to work with the names afterwards this would be a better option.
Javascript concatenate rows based on column
I have a feeling this question already exists, just can't find it. Is there a way to take a 2d array with x rows and 2 columns and merge multiple values into one row based on it having the same element in the first column? [['Needs Work', 'Joe'], ['Needs Work', 'Jill'], ['Needs Work', 'Jack'], ['Complete', 'Sean'], ['Complete', 'Joe'], ['Not Started', 'Laura'], ['Needs Work', 'Jack']] So that it looks like this [ [ 'Needs Work', 'Joe,Jill,Jack,Jack' ], [ 'Complete', 'Sean,Joe' ], [ 'Not Started', 'Laura' ] ]
[ "Not sure if this the most efficient way.\nfunction myFunction() {\n var array = [['Needs Work', 'Joe'], ['Needs Work', 'Jill'], ['Needs Work', 'Jack'], ['Complete', 'Sean'], ['Complete', 'Joe'], ['Not Started', 'Laura'], ['Needs Work', 'Jack']];\n const extractColumn = (arr, n) => arr.map(row => row[n])\n var list = [...new Set(extractColumn(array, 0))]//gets set of distict values from first column\n // console.log(list)\n var condensedArray = []\n for (var i = 0; i < list.length; i++) {\n var filteredArray = array.filter(row => row[0] == list[i])//filters array to only to current value\n // console.log(filteredArray)\n condensedArray.push([list[i], extractColumn(filteredArray, 1).toString()])\n\n }\n console.log(condensedArray)\n}\n\n", "Considering the answer that you posted, I wrote the following function:\nfunction create_union_array(initial_array) {\n const temporal_object = {};\n for (i in initial_array) {\n const work_state = initial_array[i][0];\n const people_name = initial_array[i][1];\n if (temporal_object[work_state] == undefined) {\n temporal_object[work_state] = [people_name]\n } else {\n temporal_object[work_state].push(people_name)\n }\n }\n const output_array = [];\n let iteration = 0;\n for (i in temporal_object) {\n output_array[iteration] = [i, temporal_object[i]]\n iteration++;\n }\n return output_array;\n}\n\nUnlike yours, instead of returning the names in a concatenated string, this one returns the names in an array, in case you need to work with the names afterwards this would be a better option.\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "javascript" ]
stackoverflow_0074670369_arrays_javascript.txt
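A compact alternative sketch for the grouping itself, using reduce over a Map (same input and output shapes as in the question):

const rows = [['Needs Work', 'Joe'], ['Needs Work', 'Jill'], ['Needs Work', 'Jack'],
              ['Complete', 'Sean'], ['Complete', 'Joe'],
              ['Not Started', 'Laura'], ['Needs Work', 'Jack']];

const grouped = rows.reduce((m, [key, name]) => {
  m.set(key, m.has(key) ? m.get(key) + ',' + name : name);
  return m;
}, new Map());

// Map entries are already [key, value] pairs, so spreading gives the 2d array
const merged = [...grouped];

console.log(merged);
// [["Needs Work","Joe,Jill,Jack,Jack"], ["Complete","Sean,Joe"], ["Not Started","Laura"]]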
Q: str object is not callable while importing the dataset on jupyter notebook. what to do? I tried to import the dataset in a Jupyter notebook, but it indicates the error "str object is not callable", even though the path of the file is absolutely okay. Or are there any problems with Anaconda? Help me out!! Here is my code after importing the libraries: df=pd.read_csv('Nutrients.csv') Even though everything is okay it still shows "str object is not callable". Now I need to load the dataset. A: In pandas.read_csv, the string passed as the first parameter is the name of the file. Note that a missing file would normally raise a FileNotFoundError; the error "'str' object is not callable" usually means that the name read_csv (or pd, or str) has been overwritten with a string earlier in the notebook. Try checking the location of the Jupyter notebook that you are running the code in and of the file you want to access. According to your code, they should be in the same location.
str object is not callable while importing the dataset on jupyter notebook. what to do?
I tried to import the dataset in a Jupyter notebook, but it indicates the error "str object is not callable", even though the path of the file is absolutely okay. Or are there any problems with Anaconda? Help me out!! Here is my code after importing the libraries: df=pd.read_csv('Nutrients.csv') Even though everything is okay it still shows "str object is not callable". Now I need to load the dataset.
[ "In pandas.read_csv, the string which is passed inside as a parameter is the name of the file. If the file does not exist, then python just considers the value as a string which in your case is the same.\nTry checking out the location of the jupyter notebook that you are running the code in and the file you want to access. According to your code, they should be in the same location.\n" ]
[ 0 ]
[]
[]
[ "dataset", "pandas", "python", "read.csv", "string" ]
stackoverflow_0074669070_dataset_pandas_python_read.csv_string.txt
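A few quick checks that follow from the answer above (file and variable names taken from the question; the usual cause of "'str' object is not callable" is that a callable name was overwritten with a string earlier in the notebook):

import pandas as pd
from pathlib import Path

# If either of these prints something like <class 'str'>, the name was
# shadowed earlier in the notebook -- restart the kernel and re-run.
print(type(pd.read_csv))   # expected: a function
print(type(str))           # expected: <class 'type'>

# Verify the notebook can actually see the file where it expects it.
p = Path("Nutrients.csv")
print(p.resolve(), "exists:", p.exists())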
Q: Explain AsyncEventingBasicConsumer behaviour without DispatchConsumersAsync = true I am trying out the RabbitMQ AsyncEventingBasicConsumer using the following code: static void Main(string[] args) { Console.Title = "Consumer"; var factory = new ConnectionFactory() { DispatchConsumersAsync = true }; const string queueName = "myqueue"; using (var connection = factory.CreateConnection()) using (var channel = connection.CreateModel()) { channel.QueueDeclare(queueName, true, false, false, null); // consumer var consumer = new AsyncEventingBasicConsumer(channel); consumer.Received += Consumer_Received; channel.BasicConsume(queueName, true, consumer); // publisher var props = channel.CreateBasicProperties(); int i = 0; while (true) { var messageBody = Encoding.UTF8.GetBytes($"Message {++i}"); channel.BasicPublish("", queueName, props, messageBody); Thread.Sleep(50); } } } private static async Task Consumer_Received(object sender, BasicDeliverEventArgs @event) { var message = Encoding.UTF8.GetString(@event.Body); Console.WriteLine($"Begin processing {message}"); await Task.Delay(250); Console.WriteLine($"End processing {message}"); } It works as expected. If I don't set the DispatchConsumersAsync property, however, the messages get consumed but the event handler never fires. I find it hard to believe that this dangerous behaviour (losing messages because a developer forgot to set a property) is by design. Questions: What does DispatchConsumersAsync actually do? What is happening under the hood in the case without DispatchConsumersAsync, where consumption is taking place but the event handler does not fire? Is this behaviour by design? A: When DispatchConsumersAsync is set to true, the AsyncEventingBasicConsumer will dispatch incoming messages to the consumer's event handler asynchronously using a separate thread. This means that the event handler will be called on a different thread than the one where the consumer was created. This allows the consumer to handle incoming messages concurrently, without blocking the thread that created the consumer. When DispatchConsumersAsync is set to false, the AsyncEventingBasicConsumer will dispatch incoming messages to the consumer's event handler on the same thread where the consumer was created. This means that the consumer will handle messages one at a time, blocking the thread until the event handler has finished processing the current message. This can cause performance issues if the event handler takes a long time to process a message, because it will block the consumer from receiving and processing other incoming messages. In the case where DispatchConsumersAsync is set to false and the consumer is receiving messages, but the event handler is not being called, this is likely because the thread that created the consumer has become blocked by something else. If the thread is blocked, it will not be able to process the incoming messages and call the event handler. This behavior is not by design, but rather a result of the consumer being unable to process incoming messages because the thread it is running on is blocked. It is recommended to always set DispatchConsumersAsync to true to avoid this issue.
Explain AsyncEventingBasicConsumer behaviour without DispatchConsumersAsync = true
I am trying out the RabbitMQ AsyncEventingBasicConsumer using the following code: static void Main(string[] args) { Console.Title = "Consumer"; var factory = new ConnectionFactory() { DispatchConsumersAsync = true }; const string queueName = "myqueue"; using (var connection = factory.CreateConnection()) using (var channel = connection.CreateModel()) { channel.QueueDeclare(queueName, true, false, false, null); // consumer var consumer = new AsyncEventingBasicConsumer(channel); consumer.Received += Consumer_Received; channel.BasicConsume(queueName, true, consumer); // publisher var props = channel.CreateBasicProperties(); int i = 0; while (true) { var messageBody = Encoding.UTF8.GetBytes($"Message {++i}"); channel.BasicPublish("", queueName, props, messageBody); Thread.Sleep(50); } } } private static async Task Consumer_Received(object sender, BasicDeliverEventArgs @event) { var message = Encoding.UTF8.GetString(@event.Body); Console.WriteLine($"Begin processing {message}"); await Task.Delay(250); Console.WriteLine($"End processing {message}"); } It works as expected. If I don't set the DispatchConsumersAsync property, however, the messages get consumed but the event handler never fires. I find it hard to believe that this dangerous behaviour (losing messages because a developer forgot to set a property) is by design. Questions: What does DispatchConsumersAsync actually do? What is happening under the hood in the case without DispatchConsumersAsync, where consumption is taking place but the event handler does not fire? Is this behaviour by design?
[ "When DispatchConsumersAsync is set to true, the AsyncEventingBasicConsumer will dispatch incoming messages to the consumer's event handler asynchronously using a separate thread. This means that the event handler will be called on a different thread than the one where the consumer was created. This allows the consumer to handle incoming messages concurrently, without blocking the thread that created the consumer.\nWhen DispatchConsumersAsync is set to false, the AsyncEventingBasicConsumer will dispatch incoming messages to the consumer's event handler on the same thread where the consumer was created. This means that the consumer will handle messages one at a time, blocking the thread until the event handler has finished processing the current message. This can cause performance issues if the event handler takes a long time to process a message, because it will block the consumer from receiving and processing other incoming messages.\nIn the case where DispatchConsumersAsync is set to false and the consumer is receiving messages, but the event handler is not being called, this is likely because the thread that created the consumer has become blocked by something else. If the thread is blocked, it will not be able to process the incoming messages and call the event handler. This behavior is not by design, but rather a result of the consumer being unable to process incoming messages because the thread it is running on is blocked. It is recommended to always set DispatchConsumersAsync to true to avoid this issue.\n" ]
[ 0 ]
[ "The answer actually in your question. Yes, it is about design. The documentation explains and gives small example about async pattern.\n\nThe client provides an async-oriented consumer dispatch implementation. This dispatcher can only be used with async consumers, that is, IAsyncBasicConsumer implementations.\nIn order to use this dispatcher, set the ConnectionFactory.DispatchConsumersAsync property to true\n\nSo documentation has not enough information to answer your first question. However for second, if you want to use AsyncEventingBasicConsumer, you must to set ConnectionFactory.DispatchConsumersAsync property to true. It is design and rule of RabbitMq.\nAlso third question actually you answer yourself. Yes, for now it is about design of RabbitMq .net client.\n" ]
[ -1 ]
[ "asynchronous", "c#", "rabbitmq" ]
stackoverflow_0047847590_asynchronous_c#_rabbitmq.txt
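The failure mode described in this record, an async handler registered with a dispatcher that invokes callbacks synchronously, can be reproduced outside RabbitMQ entirely. The sketch below uses Python's asyncio rather than the .NET client, so treat it as an illustration of the mechanism and not as the client's actual internals: calling an async function without awaiting it produces a coroutine object whose body never runs, which is exactly why the handler appears to never fire.
import asyncio

async def handler(message):
    # Stand-in for Consumer_Received: per-message work that awaits.
    print(f"Begin processing {message}")
    await asyncio.sleep(0.25)
    print(f"End processing {message}")

def sync_dispatch(callback, message):
    # A coroutine-unaware dispatcher: the call merely creates a coroutine
    # object that is never awaited, so the handler body never executes.
    callback(message)

async def async_dispatch(callback, message):
    # A coroutine-aware dispatcher (the DispatchConsumersAsync analogue):
    # it awaits the handler, so the body actually runs.
    await callback(message)

async def main():
    sync_dispatch(handler, "Message 1")          # prints nothing; Python warns the coroutine was never awaited
    await async_dispatch(handler, "Message 2")   # prints Begin/End as expected

asyncio.run(main())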
Q: Can we have an extension function for suspend function?
If I use:
suspend fun <T> A(block: suspend () -> T){
    ///std.
}
everything works fine, but with:
suspend fun <T> (suspend () -> T).A(){
    ///std.
}
there are no compilation errors, but I can't use it with suspend functions.
For example, say we have this function (doWork is a suspend function):
accountManager.doWork(password)
In case #1 it works fine:
A {
    accountManager.doWork(password)
}
In case #2 it does not work as expected (compilation error):
accountManager.doWork(password).A()
A: The receiver of
accountManager.doWork(password).A()
is whatever doWork(password) returns, not the doWork function itself. Let's suppose that doWork returns String, then the above would have worked if A were an extension function on String:
// Depending on what you do in its implementation,
// A does not need to be a suspending function
fun String.A() {

}
If you want A's receiver to be the function doWork instead, the syntax is:
(accountManager::doWork).A()
Then the above will compile if A is declared like this:
suspend fun <T, R> (suspend (T) -> R).A() {

}
Notice that the receiver type is a function that takes one parameter, since doWork takes one parameter.
If what you actually want to do is to use the entire suspending lambda you passed to A as a parameter here...
A {
    accountManager.doWork(password)
}
...as the receiver of the new extension function A that you are declaring, then your attempt is correct:
suspend fun <T> (suspend () -> T).A(){

}
You should call it on a suspending lambda:
suspend { accountManager.doWork(password) }.A()
Though I'm not sure why you would prefer this to the way more readable A { ... } syntax.
Can we have an extension function for suspend function?
If I use:
suspend fun <T> A(block: suspend () -> T){
    ///std.
}
everything works fine, but with:
suspend fun <T> (suspend () -> T).A(){
    ///std.
}
there are no compilation errors, but I can't use it with suspend functions.
For example, say we have this function (doWork is a suspend function):
accountManager.doWork(password)
In case #1 it works fine:
A {
    accountManager.doWork(password)
}
In case #2 it does not work as expected (compilation error):
accountManager.doWork(password).A()
[ "The receiver of\naccountManager.doWork(password).A()\n\nis whatever doWork(password) returns, not the doWork function itself. Let's suppose that doWork returns String, then the above would have worked if A were an extension function on String:\n// Depending on what you do in its implementation, \n// A does not need to be a suspending function\nfun String.A() {\n\n}\n\n\nIf you want A's receiver to be the function doWork instead, the syntax is:\n(accountManager::doWork).A()\n\nThen the above will compile if A is declared like this:\nsuspend fun <T, R> (suspend (T) -> R).A() {\n\n}\n\nNotice that the receiver type is a function that takes one parameter, since doWork takes one parameter.\n\nIf what you actually want to do is to use the entire suspending lambda you passed to A as a parameter here...\nA {\n accountManager.doWork(password)\n}\n\n...as the receiver of the new extension function A that you are declaring, then your attempt is correct:\nsuspend fun <T> (suspend () -> T).A(){\n\n}\n\nYou should call it on a suspending lambda:\nsuspend { accountManager.doWork(password) }.A()\n\nThough I'm not sure why you would prefer this to the way more readable A { ... } syntax.\n" ]
[ 3 ]
[]
[]
[ "kotlin", "kotlin_coroutines" ]
stackoverflow_0074667075_kotlin_kotlin_coroutines.txt
Q: Pass 2 EditText values to retrofit, using them as Base_Url and as part of a custom Header
I am a beginner in using Android Studio and Kotlin and I never learned any programming language, so my questions may have an easy way to solve. But searching on Stack Overflow, other programming sites, YouTube, etc. didn't give me the answer I would need (maybe there was one and I didn't understand it the right way).
So here is what I want to do, explained simply: I have several fragments (using the navigation component) where the different data from some GET requests is shown. There is kind of a login-fragment where I have two edittexts and a save-button. In the first-edittext the user has to insert a URL, which should be the base URL for the GET requests. The text in the second-edittext should then be part of a custom header. When clicking the save-button the first GET request should start; with the first GET request the user gets a token, which will also be used as a header for the next requests. Finally the 2 values of the edit-texts should also be saved in a list view in another fragment (the user can add as many different combinations of edittext1 and edittext2 as he wants). From the listview-fragment the user can then start the requests as well.
So my main question is about passing values of the edittexts to the GET request. A possibility could be starting the request in the fragment, using an interceptor to add the edittext2-header and the URL as base URL. I haven't tried that yet, so I don't know if that would work. And as I read in a lot of articles, that's not the way to do those things. Especially when I want to use the MVVM pattern. So there the question is, how to pass the edittexts to the viewmodel, to the repository, to Retrofit.
I read about the dynamic header in Retrofit with @Header with a key and the related value, which looks like what I need, but how can I specify the value so that it is always the text the user entered (in edittext2)? For the URL it's similar: I am sure there is a relatively easy way to handle the base_url used for the request, but my problem stays the same: how to pass the edittext to Retrofit.
Would it help to use shared preferences, databinding or something like that?
I hope there is someone who can give me a hint on how to manage all those things. Maybe I am missing something obvious. Big thanks in advance :-)
A: It sounds like you want to use the values in your EditTexts as dynamic values for your Retrofit requests. One way to do this would be to use the LiveData objects in your ViewModel to store the values from the EditTexts. Then, in your Repository, you can use these LiveData values to build your Retrofit request.
Here is an example of how you could do this:
In your Fragment, where the user enters the values in the EditTexts, you can use the observe() method on the LiveData objects in your ViewModel to update the values whenever the user changes them. For example:
viewModel.baseUrlLiveData.observe(this, Observer { baseUrl ->
    // Update the Retrofit request with the new base URL
})
viewModel.headerValueLiveData.observe(this, Observer { headerValue ->
    // Update the Retrofit request with the new header value
})
In your ViewModel, you can expose the LiveData objects that hold the values from the EditTexts. For example:
class MyViewModel: ViewModel() {
    val baseUrlLiveData = MutableLiveData<String>()
    val headerValueLiveData = MutableLiveData<String>()

    // Other code...
}
In your Repository, you can use the LiveData values from the ViewModel to build your Retrofit request. For example:
class MyRepository {
    fun getData(baseUrl: String, headerValue: String) {
        val retrofit = Retrofit.Builder()
            .baseUrl(baseUrl)
            .build()
        val service = retrofit.create(MyService::class.java)
        val call = service.getData(headerValue)
        // Make the Retrofit call...
    }
}
In your Service interface, you can use the @Header annotation to specify that the header value should be passed dynamically. For example:
interface MyService {
    @GET("/data")
    fun getData(@Header("Custom-Header") headerValue: String): Call<Data>
}
Pass 2 EditText values to retrofit, using them as Base_Url and as part of a custom Header
I am a beginner in using Android Studio and Kotlin and I never learned any programming language, so my questions may have an easy way to solve. But searching on Stack Overflow, other programming sites, YouTube, etc. didn't give me the answer I would need (maybe there was one and I didn't understand it the right way).
So here is what I want to do, explained simply: I have several fragments (using the navigation component) where the different data from some GET requests is shown. There is kind of a login-fragment where I have two edittexts and a save-button. In the first-edittext the user has to insert a URL, which should be the base URL for the GET requests. The text in the second-edittext should then be part of a custom header. When clicking the save-button the first GET request should start; with the first GET request the user gets a token, which will also be used as a header for the next requests. Finally the 2 values of the edit-texts should also be saved in a list view in another fragment (the user can add as many different combinations of edittext1 and edittext2 as he wants). From the listview-fragment the user can then start the requests as well.
So my main question is about passing values of the edittexts to the GET request. A possibility could be starting the request in the fragment, using an interceptor to add the edittext2-header and the URL as base URL. I haven't tried that yet, so I don't know if that would work. And as I read in a lot of articles, that's not the way to do those things. Especially when I want to use the MVVM pattern. So there the question is, how to pass the edittexts to the viewmodel, to the repository, to Retrofit.
I read about the dynamic header in Retrofit with @Header with a key and the related value, which looks like what I need, but how can I specify the value so that it is always the text the user entered (in edittext2)? For the URL it's similar: I am sure there is a relatively easy way to handle the base_url used for the request, but my problem stays the same: how to pass the edittext to Retrofit.
Would it help to use shared preferences, databinding or something like that?
I hope there is someone who can give me a hint on how to manage all those things. Maybe I am missing something obvious. Big thanks in advance :-)
[ "It sounds like you want to use the values in your EditTexts as dynamic values for your Retrofit requests. One way to do this would be to use the LiveData objects in your ViewModel to store the values from the EditTexts. Then, in your Repository, you can use these LiveData values to build your Retrofit request.\nHere is an example of how you could do this:\nIn your Fragment, where the user enters the values in the EditTexts, you can use the observe() method on the LiveData objects in your ViewModel to update the values whenever the user changes them. For example:\nviewModel.baseUrlLiveData.observe(this, Observer { baseUrl ->\n // Update the Retrofit request with the new base URL\n})\nviewModel.headerValueLiveData.observe(this, Observer { headerValue ->\n // Update the Retrofit request with the new header value\n})\n\nIn your ViewModel, you can expose the LiveData objects that hold the values from the EditTexts. For example:\nclass MyViewModel: ViewModel() {\n val baseUrlLiveData = MutableLiveData<String>()\n val headerValueLiveData = MutableLiveData<String>()\n\n // Other code...\n}\n\nIn your Repository, you can use the LiveData values from the ViewModel to build your Retrofit request. For example:\nclass MyRepository {\n fun getData(baseUrl: String, headerValue: String) {\n val retrofit = Retrofit.Builder()\n .baseUrl(baseUrl)\n .build()\n val service = retrofit.create(MyService::class.java)\n val call = service.getData(headerValue)\n // Make the Retrofit call...\n }\n}\n\nIn your Service interface, you can use the @Header annotation to specify that the header value should be passed dynamically. For example:\ninterface MyService {\n @GET(\"/data\")\n fun getData(@Header(\"Custom-Header\") headerValue: String): Call<Data>\n}\n\n" ]
[ 0 ]
[]
[]
[ "android_edittext", "android_mvvm", "kotlin", "request_headers", "retrofit" ]
stackoverflow_0074669653_android_edittext_android_mvvm_kotlin_request_headers_retrofit.txt
Q: Organise selection from cache only (no database hit is permitted)
Django: 4.0.5
django-cachalot: 2.5.1
Model:
class General_Paragraph(models.Model):
    treaty = models.ForeignKey('treaties.Treaty', on_delete=models.PROTECT, db_index=True)
    identifier = models.CharField(max_length=100, blank=False, null=False, db_index=True, unique=True)
Cache warming:
@admin.action(description='Warm cache up')
def warm_up(modeladmin, request, queryset):
    MODELS_AND_APPS = {
        ...
        "General_Paragraph": "paragraphs_of_treaties",
        ...
    }
    for model_name in MODELS_AND_APPS:
        current_model = apps.get_model(app_label=MODELS_AND_APPS[model_name], model_name=model_name)
        all_instances = current_model.objects.all()
        list(all_instances)  # This evaluation is what actually warms the cache up.
Problematic code:
def get_texts_in_languages(treaty, paragraph_identifier, party):
    general_paragraph = General_Paragraph.objects.get(treaty=treaty, identifier=paragraph_identifier)
SQL:
SELECT ••• FROM "paragraphs_of_treaties_general_paragraph" WHERE ("paragraphs_of_treaties_general_paragraph"."identifier" = 'Par 1' AND "paragraphs_of_treaties_general_paragraph"."treaty_id" = 1) LIMIT 21
What I need
I use Memcached. As this is reading, I don't want any reading from the database. Everything should be requested from the cache.
As far as I have understood, .objects.get(treaty=treaty, identifier=paragraph_identifier) will not be covered by Django Cachalot.
How can I write code that will not hit the database? Any warming up of the cache is possible. I have enough resources at my hosting.
My problem is that I can't understand what to do: whether some elegant query is possible here, or whether I will have to loop over the whole cache. Maybe django-cachalot is not suitable, or something.
A: You're right that Django Cachalot won't cache the .get() method, but you can still cache the query you're using to retrieve the General_Paragraph object. You can do this by using the .cache() method on your QuerySet, like so:
general_paragraph = General_Paragraph.objects.filter(
    treaty=treaty,
    identifier=paragraph_identifier
).cache().get()
This will first check the cache for the results of the query, and if they're not in the cache it will execute the query and store the results in the cache. Then it will return the General_Paragraph object that matches the given treaty and paragraph_identifier.
Note that this will only work if you have Django Cachalot properly configured and the General_Paragraph model is set to be cached. You can find more information about how to do this in the Django Cachalot documentation.
You may also want to consider using the .get_or_create() method instead of .get() in this case. This will first try to retrieve the General_Paragraph object with the given treaty and paragraph_identifier, and if it doesn't exist it will create a new General_Paragraph object with those values and return it. This can be useful if you're not sure whether the General_Paragraph object you're looking for already exists in the database. You can use it like this:
general_paragraph, created = General_Paragraph.objects.get_or_create(
    treaty=treaty,
    identifier=paragraph_identifier
)
This will return a tuple with the General_Paragraph object and a boolean indicating whether it was created or retrieved from the database. You can then use the created variable to determine whether to save the General_Paragraph object to the database (if it was created) or just continue using it (if it was retrieved from the database).
I hope this helps!
Organise selection from cache only (no database hit is permitted)
Django: 4.0.5
django-cachalot: 2.5.1
Model:
class General_Paragraph(models.Model):
    treaty = models.ForeignKey('treaties.Treaty', on_delete=models.PROTECT, db_index=True)
    identifier = models.CharField(max_length=100, blank=False, null=False, db_index=True, unique=True)
Cache warming:
@admin.action(description='Warm cache up')
def warm_up(modeladmin, request, queryset):
    MODELS_AND_APPS = {
        ...
        "General_Paragraph": "paragraphs_of_treaties",
        ...
    }
    for model_name in MODELS_AND_APPS:
        current_model = apps.get_model(app_label=MODELS_AND_APPS[model_name], model_name=model_name)
        all_instances = current_model.objects.all()
        list(all_instances)  # This evaluation is what actually warms the cache up.
Problematic code:
def get_texts_in_languages(treaty, paragraph_identifier, party):
    general_paragraph = General_Paragraph.objects.get(treaty=treaty, identifier=paragraph_identifier)
SQL:
SELECT ••• FROM "paragraphs_of_treaties_general_paragraph" WHERE ("paragraphs_of_treaties_general_paragraph"."identifier" = 'Par 1' AND "paragraphs_of_treaties_general_paragraph"."treaty_id" = 1) LIMIT 21
What I need
I use Memcached. As this is reading, I don't want any reading from the database. Everything should be requested from the cache.
As far as I have understood, .objects.get(treaty=treaty, identifier=paragraph_identifier) will not be covered by Django Cachalot.
How can I write code that will not hit the database? Any warming up of the cache is possible. I have enough resources at my hosting.
My problem is that I can't understand what to do: whether some elegant query is possible here, or whether I will have to loop over the whole cache. Maybe django-cachalot is not suitable, or something.
[ "You're right that Django Cachalot won't cache the .get() method, but you can still cache the query you're using to retrieve the General_Paragraph object. You can do this by using the .cache() method on your QuerySet, like so:\ngeneral_paragraph = General_Paragraph.objects.filter(\n treaty=treaty,\n identifier=paragraph_identifier\n).cache().get()\n\n\nThis will first check the cache for the results of the query, and if they're not in the cache it will execute the query and store the results in the cache. Then it will return the General_Paragraph object that matches the given treaty and paragraph_identifier.\nNote that this will only work if you have Django Cachalot properly configured and the General_Paragraph model is set to be cached. You can find more information about how to do this in the Django Cachalot documentation.\nYou may also want to consider using the .get_or_create() method instead of .get() in this case. This will first try to retrieve the General_Paragraph object with the given treaty and paragraph_identifier, and if it doesn't exist it will create a new General_Paragraph object with those values and return it. This can be useful if you're not sure whether the General_Paragraph object you're looking for already exists in the database. You can use it like this:\ngeneral_paragraph, created = General_Paragraph.objects.get_or_create(\n treaty=treaty,\n identifier=paragraph_identifier\n)\n\n\nThis will return a tuple with the General_Paragraph object and a boolean indicating whether it was created or retrieved from the database. You can then use the created variable to determine whether to save the General_Paragraph object to the database (if it was created) or just continue using it (if it was retrieved from the database).\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "django" ]
stackoverflow_0072992907_django.txt
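If cachalot's automatic caching turns out not to cover these .get() lookups, one alternative is to bypass the ORM on the read path with Django's low-level cache API: warm one cache entry per (treaty_id, identifier) pair, then read only those keys. This is a hedged sketch, not django-cachalot behaviour; the key scheme and helper names below are assumptions made for illustration, and it assumes the General_Paragraph model is importable where these functions live.
from django.core.cache import cache  # backed by Memcached per your settings

def _gp_key(treaty_id, identifier):
    # Assumed key scheme; identifier is unique per the model definition.
    return f"general_paragraph:{treaty_id}:{identifier}"

def warm_general_paragraphs():
    # Run once (for example from the admin action) so later reads skip the DB.
    for gp in General_Paragraph.objects.all().iterator():
        cache.set(_gp_key(gp.treaty_id, gp.identifier), gp, timeout=None)

def get_general_paragraph(treaty, paragraph_identifier):
    gp = cache.get(_gp_key(treaty.pk, paragraph_identifier))
    if gp is None:
        # Cache miss: fall back to the database once and repopulate.
        gp = General_Paragraph.objects.get(treaty=treaty, identifier=paragraph_identifier)
        cache.set(_gp_key(treaty.pk, paragraph_identifier), gp, timeout=None)
    return gp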
Q: How to mark highest volumes of the session in pinescript?
I want to plot a shape above the candle of highest volume on a live chart. After that, if a candle appears again with higher volume than the previous one, it will plot a shape above that candle too... but if the volume is lower than the previous highest volume, it will not plot any shape. And of course, the entire calculation starts with a new session and ends with the session.
In this way, it is obvious that the first volume is always the highest since there is no previous volume data, but that's okay with me. It will always mark the first volume as high.
I am new to pine-script, and I tried to do it myself but I didn't find anything helpful that solves my problem.
A: First, you'll need a variable that will store the value of volume and update its value in 2 cases:
In case it's a new day - set it to volume of that bar (the opening bar of the new day).
In case it's not the first bar of the day, check if this is higher than the current stored value, and if it is, store the new volume (the higher one).
Since you need the variable to "remember" its value between executions of the script, you'll need to use the var keyword for that variable.
You can use 2 functions to help you:
ta.change(time("D")) will return true on the first bar of each day, regardless of the timeframe you are using.
math.max() function will return the higher value between 2 values you'll set as arguments of the function.
//@version=5
indicator("highest daily volume")

var highest_volume = volume

if ta.change(time("D"))
    highest_volume := volume
else
    highest_volume := math.max(highest_volume, volume)

plot(highest_volume)
EDIT:
You clarified that you wish to plot a shape on the bar where the highest volume of the day is. I don't believe you can do it with plotshape() since you can't change its x value after plotting it. We can however use a label.
I'm not sure it's the most efficient way to do that, but you can use an array of labels and change the x variable each time there is a change in the highest_volume variable:
//@version=5
indicator("highest daily volume", overlay = true)

var highest_volume = volume
var label_array = array.new_label(100000)

var index = 0

if ta.change(time("D"))
    highest_volume := volume
    array.set(label_array, index, label.new(bar_index, high, str.tostring(highest_volume)))
    index += 1
else
    highest_volume := math.max(highest_volume, volume)
    if highest_volume != highest_volume[1]
        label.set_x(array.get(label_array, index - 1), bar_index)
How to mark highest volumes of the session in pinescript?
I want to plot a shape above the candle of highest volume on a live chart. After that, if a candle appears again with higher volume than the previous one, it will plot a shape above that candle too... but if the volume is lower than the previous highest volume, it will not plot any shape. And of course, the entire calculation starts with a new session and ends with the session.
In this way, it is obvious that the first volume is always the highest since there is no previous volume data, but that's okay with me. It will always mark the first volume as high.
I am new to pine-script, and I tried to do it myself but I didn't find anything helpful that solves my problem.
[ "First, you'll need a variable that will store the value of volume and update its value in 2 cases:\n\nIn case it's a new day - set it to volume of that bar (the opening bar of the new day).\nIn case it's not the first bar of the day, check if this is higher than the value of the current stored value, and if it is store the new volume (the higher one).\n\nSince you need the variable to \"remember\" its value between executions of the script, you'll need to use the var keyword for that variable.\nYou can use 2 functions to help you:\n\nta.change(time(\"D\")) will return true on the first bar of each day, regardless the timeframe you are using.\nmath.max() function will return the higher value between 2 values you'll set as arguments of the function.\n\n//@version=5\nindicator(\"highest daily volume\")\n\nvar highest_volume = volume\n\nif ta.change(time(\"D\"))\n highest_volume := volume\nelse\n highest_volume := math.max(highest_volume, volume)\n\nplot(highest_volume)\n\nEDIT:\nYou clarified that you wish to plot a shape on the bar where the highest volume of the day is. I don't believe you can do it with plotshape() since you can't change its x value after plowing it. We can however use a label.\nI'm not sure it's the most efficient way to do that, but you can use an array of labels and change the x variable each time there is a change in the highest_volume variable:\n//@version=5\nindicator(\"highest daily volume\", overlay = true)\n\nvar highest_volume = volume\nvar label_array = array.new_label(100000)\n\nvar index = 0\n\nif ta.change(time(\"D\"))\n highest_volume := volume\n array.set(label_array, index, label.new(bar_index, high, str.tostring(highest_volume)))\n index += 1\nelse\n highest_volume := math.max(highest_volume, volume)\n if highest_volume != highest_volume[1]\n label.set_x(array.get(label_array, index - 1), bar_index)\n\n" ]
[ 0 ]
[]
[]
[ "pine_script" ]
stackoverflow_0074664808_pine_script.txt
Q: Looping in javascript through a list of users
I have a list of users stored in an array accountsade:
[
  {
    id: 0.4387810413935975,
    name: "Adrian",
    password: "345",
    userName: "nathanael"
  },
  {
    id: 0.2722524232951682,
    name: "Nathan",
    password: "123",
    userName: "nathanaelmbale45"
  }
],
And I want to loop through the list of users and capture the password and userName values of every single object in the array and compare them to already existing variables.
usernameL = "nathanaelmbale45"
passwordL = "123"
A: <script>
    let accountsade = [
        {
            id: 0.4387810413935975,
            name: "Adrian",
            password: "345",
            userName: "nathanael",
        },
        {
            id: 0.2722524232951682,
            name: "Nathan",
            password: "123",
            userName: "nathanaelmbale45"
        }
    ]
    let userName = 'nathanael', password = '345';
    let account = accountsade.filter((item) => {
        return item.userName === userName && item.password == password;
    });
    console.log(account);
</script>
-----------# output #-----------------
[{"id":0.4387810413935975,"name":"Adrian","password":"345","userName":"nathanael"}]
A: To capture, just map over the array and return the required fields:
objectArray.map(object => ({ password: object.password, userName: object.userName }))
To compare you can use similar methods called filter and find. If you need to do something specialized, consider looping over each object with objectsArray.forEach
A: As mentioned in comments, doing this in client side code would be a BAD idea because you expose all of your usernames and passwords to something that can be easily found in the browser console...
For the sake of understanding array functions though... this will return the first matching object in the array based on matching username and password.
var accountsade = [
  {
    id: 0.4387810413935975,
    name: "Adrian",
    password: "345",
    userName: "nathanael"
  },
  {
    id: 0.2722524232951682,
    name: "Nathan",
    password: "123",
    userName: "nathanaelmbale45"
  }
];

var usernameL = "nathanaelmbale45";
var passwordL = "123";

///////////////////////////////////////////////////////////////////
// array.find takes as its argument a function that gets run
// against each element of the array
//
// array.find will return the current element being supplied
// the first time the provided function returns a "true" value
//
// if the provided function always returns a "false" value...
// then array.find will return the value "undefined"
///////////////////////////////////////////////////////////////////

var authUser = accountsade.find(function(element) {
  var returnValue = true;
  returnValue &= element.userName == usernameL;
  returnValue &= element.password == passwordL;
  return returnValue;
});

console.log(authUser);
A: First of all: authenticating users like this is not safe. I'll tell you why in a minute, but let's solve your actual question first.
You can use JavaScript's Array.prototype.find() function to fetch the right object in the list. Then, you can check if that object's password matches the input:
const json = {
  accountsade: [
    {
      id: 0.4387810413935975,
      name: "Adrian",
      password: "345",
      userName: "nathanael"
    },
    {
      id: 0.2722524232951682,
      name: "Nathan",
      password: "123",
      userName: "nathanaelmbale45"
    }
  ]
};

const usernameL = "nathanaelmbale45";
const passwordL = "123";

alert(checkLogin(usernameL, passwordL));

function checkLogin(username, password) {
  const account = json.accountsade.find(account => {
    return account.userName === username;
  });
  return account.password == password;
}
Proper authentication
The goal of authenticating a user is to make sure that a user is who he/she says he/she is. That way, you can decide in your code what actions a user can take.
This process should be done in the back end (server side) of your application, to make sure that the user can't see and manipulate it. If you send all of the account data over to the front end (your user's computer), that person can see all of that data and use other users' passwords to log in.
You also didn't hash the passwords (scramble their contents in a non-reversible way). This should even be done when handling authentication server side, as your database could be compromised and leaked by another security failure.
Another downside of sending all account data to the front end is bandwidth. Suppose you have a million users; that will take a lot of redundant time and resources just to log the user in.
So the right way is: send the username and password that the user entered via a secure connection (so nobody can eavesdrop), fetch the hashed password from the database, hash the password that the user entered and compare it to the fetched hash. If they match exactly, the user is legitimate and can be logged in (set the appropriate cookies server side or send an authentication token).
Now those are the basics, but it's safer to use proven solutions like frameworks and libraries that do this work for you instead of building this yourself (lots of users have looked at the code already and improved it; you on the other hand might make a mistake/create a security risk).
A: function checkLogin(usernameL, passwordL) {
  const account = accountsade.find((account) => {
    return (account.userName === usernameL && account.password == passwordL);
  });
  return account;
}
That would be the best practice I could think of right now. But why would you make an auth-system in the frontend?
Looping in javascript through a list of users
I have a list of users stored in an array accountsade:
[
  {
    id: 0.4387810413935975,
    name: "Adrian",
    password: "345",
    userName: "nathanael"
  },
  {
    id: 0.2722524232951682,
    name: "Nathan",
    password: "123",
    userName: "nathanaelmbale45"
  }
],
And I want to loop through the list of users and capture the password and userName values of every single object in the array and compare them to already existing variables.
usernameL = "nathanaelmbale45"
passwordL = "123"
[ " <script>\n let accountsade = [\n {\n id: 0.4387810413935975,\n name: \"Adrian\",\n password: \"345\",\n userName: \"nathanael\",\n },\n {\n id: 0.2722524232951682,\n name: \"Nathan\",\n password: \"123\",\n userName: \"nathanaelmbale45\"\n }\n ]\n let userName = 'nathanael', password = '345';\n let account = accountsade.filter((item) => {\n return item.userName === userName && item.password == password;\n });\n console.log(account);\n</script>\n\n-----------# output #-----------------\n[{\"id\":0.4387810413935975,\"name\":\"Adrian\",\"password\":\"345\",\"userName\":\"nathanael\"}]\n\n", "\nto capture just map over the array and return the required fields\n\nobjectArrap.map(object=>({object.property, object.property2}))\nto compare you can use similar methods called filter and find. If you need to do something specialized consider looping over each object with objectsArrray.foreach\n", "As mentioned in comments, doing this in client side code would be a BAD idea because you expose all of your usernames and passwords to something that can be easily found in the browser console...\nFor the sake of understanding array functions though... this will return the first matching object in the array based on matching username and password.\n\n\nvar accountsade = [\n {\n id: 0.4387810413935975,\n name: \"Adrian\",\n password: \"345\",\n userName: \"nathanael\"\n },\n {\n id: 0.2722524232951682,\n name: \"Nathan\",\n password: \"123\",\n userName: \"nathanaelmbale45\"\n }\n];\n\nvar usernameL = \"nathanaelmbale45\";\nvar passwordL = \"123\";\n\n///////////////////////////////////////////////////////////////////\n// array.find takes as its arrgument a function that gets run\n// against each element of the array\n//\n// array.find will return the current element being supplied\n// the first time the provided function returns a \"true\" value\n//\n// if the provided function always returns a \"false\" value...\n// then array.find will return the value \"undefined\"\n///////////////////////////////////////////////////////////////////\n\nvar authUser = accountsade.find(function(element) {\n var returnValue = true;\n returnValue &= element.userName == usernameL;\n returnValue &= element.password == passwordL;\n return returnValue\n});\n\nconsole.log(authUser);\n\n\n\n", "First of all: authenticating users like this is not safe. I'll tell you why in a minute, but let's solve your actual question first.\nYou can use JavaScript's Array.prototype.filter() function to fetch the right object in the list. Then, you can check if that object's password matches the input:\n\n\nconst json = {\n accountsade: [\n {\n id: 0.4387810413935975,\n name: \"Adrian\",\n password: \"345\",\n userName: \"nathanael\"\n },\n {\n id: 0.2722524232951682,\n name: \"Nathan\",\n password: \"123\",\n userName: \"nathanaelmbale45\"\n }\n ]\n};\n\nconst usernameL = \"nathanaelmbale45\";\nconst passwordL = \"123\";\n\nalert(checkLogin(usernameL, passwordL));\n\nfunction checkLogin(username, password) {\n const account = json.accountsade.find(account => {\n return account.userName = username;\n });\n return account.password == password;\n}\n\n\n\nProper authentication\nhe goal of authenticating a user is to make sure that a user is who he/she says he/she is. That way, you can decide in your code what actions a user can take.\nThis process should be doe in the back end (server side) of your application, to make sure that the user can't see and manipulate it. 
If you send all of the account data over to the front end (your user's computer), that person can see all of that data and use other user's passwords to log in.\nYou also didn't hash the passwords (scramble their contents in a non-reversable way). This should even be done when handling authentication server side, as your database could be compromised and leaked by another security failure.\nAnother downside of sending all account data to the front end, is bandwith. Suppose you have a million user's, that will take a lot of redundant time and resources just to log the user in.\nSo the right way is: send the username and password that the user entered via a secure connection (so nobody can eavesdrop), fetch the hashed password from the database, hash the password that the user entered and compare it too the fetched hash. If they match exactly, the user is legitimate and can be logged in (set the appropriate cookies server side or send an authentication token).\nNow those are the basics, but it's safer to use proven solutions like frameworks and libraries that do this work for you instead of building this yourself (lots of users have looked at the code already and improved it, you on the other hand might make a mistake/create a security risk).\n", "function checkLogin(usernameL, passwordL) {\n const account = accountsade.find((account) => {\n return (account.userName = usernameL && account.password == passwordL);\n });\n return account;\n}\n\nthat would be the best practice i could think of right now. but why would you make a auth-system in frontend?\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "authentication", "frontend", "html", "javascript" ]
stackoverflow_0065524172_authentication_frontend_html_javascript.txt
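The server-side flow the fourth answer describes (hash what the user typed, compare it to a stored hash, never ship the account list to the browser) looks roughly like the Python sketch below. The language deliberately differs from the question's JavaScript; it is only meant to show the shape of the check. The salt handling and the iteration count are illustrative assumptions, and in production a vetted library or framework should do this instead.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor, not a recommendation

def hash_password(password, salt=None):
    # Returns (salt, derived_key); store both at registration time.
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def check_login(entered_password, stored_salt, stored_key):
    # Re-derive with the stored salt and compare in constant time.
    _, candidate = hash_password(entered_password, stored_salt)
    return hmac.compare_digest(candidate, stored_key)

# At registration:
salt, key = hash_password("123")
# At login, only the hash comparison decides:
print(check_login("123", salt, key))   # True
print(check_login("345", salt, key))   # False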
Q: Azure devops - ArchiveFiles@2 task - archive files from remote repository
I am preparing Azure DevOps and Terraform automation. I have prepared a pipeline, and one of my tasks is presented below:
- task: ArchiveFiles@2
  displayName: "terraform file archive"
  inputs:
    rootFolderOrFile: $(Build.Repository.LocalPath)
    includeRootFolder: false
    archiveType: zip
    archiveFile: $(Build.ArtifactStagingDirectory)/archive.zip
    replaceExistingArchive: true
    verbose: true
The mentioned pipeline task prepares an archive.zip file; the archive.zip file contains only files from the Azure DevOps repo where my pipeline .yaml file is stored (only files from the local repo). I need to archive files from another Azure DevOps repo and add them to archive.zip. Is there any possibility to do this using Azure DevOps tasks?
A: The ArchiveFiles@2 task will archive everything from rootFolderOrFile.
As mentioned by you, only the triggered repo files are available for copying.
But when you check out more than one repo in your pipeline, you might achieve your goal.
For example:
resources:
  repositories:
    - repository: otherRepo
      type: git
      name: OtherProject/MyAzureReposGitRepo

steps:
  - checkout: otherRepo
  - checkout: self

  - task: ArchiveFiles@2
    displayName: "terraform file archive"
    inputs:
      rootFolderOrFile: $(Build.Repository.LocalPath)
      includeRootFolder: false
      archiveType: zip
      archiveFile: $(Build.ArtifactStagingDirectory)/archive.zip
      replaceExistingArchive: true
      verbose: true
Two things are important here:
Declare your other repo(s) in the resources part
Since multiple repos are present, don't forget to check out the triggering repo as well with: checkout: self
Azure devops - ArchiveFiles@2 task - archive files from remote repository
I am preparing Azure DevOps and Terraform automation. I have prepared a pipeline, and one of my tasks is presented below:
- task: ArchiveFiles@2
  displayName: "terraform file archive"
  inputs:
    rootFolderOrFile: $(Build.Repository.LocalPath)
    includeRootFolder: false
    archiveType: zip
    archiveFile: $(Build.ArtifactStagingDirectory)/archive.zip
    replaceExistingArchive: true
    verbose: true
The mentioned pipeline task prepares an archive.zip file; the archive.zip file contains only files from the Azure DevOps repo where my pipeline .yaml file is stored (only files from the local repo). I need to archive files from another Azure DevOps repo and add them to archive.zip. Is there any possibility to do this using Azure DevOps tasks?
[ "The ArchiveFiles@2 task will archive everything from rootFolderOrFile.\nAs mentioned by you, only the triggered repo files are available for copying.\nBut when you check out more then one repo in your pipeline, you might achieve your goal.\nFor example:\nresources:\n repositories:\n - repository: otherRepo\n type: git\n name: OtherProject/MyAzureReposGitRepo\n\nsteps:\n - checkout: otherRepo\n - checkout: self \n\n - task: ArchiveFiles@2\n displayName: \"terraform file archive\"\n inputs:\n rootFolderOrFile: $(Build.Repository.LocalPath)\n includeRootFolder: false\n archiveType: zip\n archiveFile: $(Build.ArtifactStagingDirectory)/archive.zip\n replaceExistingArchive: true\n verbose: true\n\n\n\nTwo things are important here:\n\nDeclare your other repo(s) in the resource part\nSince multiple repos are present, don't forget to check out the triggering repo as well with: checkout: self\n\n" ]
[ 1 ]
[]
[]
[ "azure_devops", "azure_pipelines", "azure_pipelines_build_task", "azure_pipelines_yaml" ]
stackoverflow_0074669440_azure_devops_azure_pipelines_azure_pipelines_build_task_azure_pipelines_yaml.txt
Q: Flutter error: The body might complete normally, causing 'null' to be returned, but the return type, 'Widget', is a potentially non-nullable typ
import 'package:flutter/material.dart';

class LayOutBuilder extends StatelessWidget {
  const LayOutBuilder({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: LayoutBuilder(
        builder: (context, p1) {
          if (p1.maxHeight < 400) {
            return Container();
          }
        },
      ),
    );
  }
}
I don't know why it does not run.
A: You're returning a Container only if p1.maxHeight < 400, but you didn't specify what to return if p1.maxHeight < 400 is not true, hence it will return null, and that's not allowed because it has to return something:
if (p1.maxHeight < 400) {
  return Container();
} else {
  return Text('some widget');
}
A: The builder argument needs to be a function that returns a Widget. Your implementation only returns a Widget under some if-condition. In the else-case, it does not return anything. This is not allowed and throws a compile error.
You should return a Widget in all cases. Which widget specifically depends on your use case. But something like this will compile:
return Scaffold(
  body: LayoutBuilder(
    builder: (context, p1) {
      if (p1.maxHeight < 400) {
        return Container();
      } else {
        return SizedBox(height: 0); // Or any other widget
      }
    }),
);
Flutter error: The body might complete normally, causing 'null' to be returned, but the return type, 'Widget', is a potentially non-nullable typ
import 'package:flutter/material.dart';

class LayOutBuilder extends StatelessWidget {
  const LayOutBuilder({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: LayoutBuilder(
        builder: (context, p1) {
          if (p1.maxHeight < 400) {
            return Container();
          }
        },
      ),
    );
  }
}
I don't know why it does not run.
[ "you're returning a Container only if p1.maxHeight < 400, but you didn't specify what to return if p1.maxHeight < 400 is not true, hence it will return null, and that's not allowed because it has to return something\nif (p1.maxHeight < 400) {\n return Container();\n} else {\n return Text('some widget');\n}\n\n", "The builder argument needs to be a function that returns a Widget. Your implementation only returns a Widget under some if-condition. In the else-case, it does not return anything. This is not allowed and throws a compile error.\nYou should return a Widget in all cases. Which widget specifically depends on your use case. But something like this will compile:\nreturn Scaffold(\n body: LayoutBuilder(\n builder: (context, p1) {\n if (p1.maxHeight < 400) {\n return Container();\n } else {\n return SizedBox(height: 0) // Or any other widget\n }\n }),\n);\n\n" ]
[ 2, 1 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074670922_dart_flutter.txt
Q: Laravel 9 test dusk does not work on Docker with selenium/standalone-chrome
I installed Dusk based on the Laravel documentation, but I can't run a Dusk test on Docker in interactive mode. I searched the web without finding the correct configuration.
This is a part of the docker-compose file:
services:
  php-apache:
    build:
      context: .
    container_name: app_php
    ports:
      - '8081:80'
    volumes:
      - ./core:/var/www/app
      - ./apache/default.conf:/etc/apache2/sites-enabled/000-default.conf
    links:
      - selenium
    depends_on:
      - database
    networks:
      - mysite
  selenium:
    image: selenium/standalone-chrome:104.0
    container_name: selenium
    ports:
      - "4444:4444"
    networks:
      - mysite
And this is the error I get:
1) Tests\Browser\ExampleTest::testBasicExample
TypeError: Facebook\WebDriver\Remote\DesiredCapabilities::__construct(): Argument #1 ($capabilities) must be of type array, null given, called in /var/www/app/vendor/php-webdriver/webdriver/lib/Remote/RemoteWebDriver.php on line 648
I've put in two days to solve the problem. I would appreciate it if anyone could help me.
A: In your DuskTestCase.php file, in the driver function, add these 2 lines to the options:
'--whitelisted-ips=""',
'--disable-dev-shm-usage'
The 1st line allows the containers to communicate; the 2nd solves an issue with the dev-shm mount: https://github.com/SeleniumHQ/docker-selenium/issues/1267
Also, in the same driver function, the URL must point to your selenium container:
return RemoteWebDriver::create(
'http://selenium:4444/wd/hub',
Laravel 9 test dusk does not work on Docker with selenium/standalone-chrome
I installed Dusk based on the Laravel documentation, but I can't run a Dusk test on Docker in interactive mode. I searched the web without finding the correct configuration.
This is a part of the docker-compose file:
services:
  php-apache:
    build:
      context: .
    container_name: app_php
    ports:
      - '8081:80'
    volumes:
      - ./core:/var/www/app
      - ./apache/default.conf:/etc/apache2/sites-enabled/000-default.conf
    links:
      - selenium
    depends_on:
      - database
    networks:
      - mysite
  selenium:
    image: selenium/standalone-chrome:104.0
    container_name: selenium
    ports:
      - "4444:4444"
    networks:
      - mysite
And this is the error I get:
1) Tests\Browser\ExampleTest::testBasicExample
TypeError: Facebook\WebDriver\Remote\DesiredCapabilities::__construct(): Argument #1 ($capabilities) must be of type array, null given, called in /var/www/app/vendor/php-webdriver/webdriver/lib/Remote/RemoteWebDriver.php on line 648
I've put in two days to solve the problem. I would appreciate it if anyone could help me.
[ "On your duskTestCase.php file, on the driver function, add these 2 lines to the options:\n'--whitelisted-ips=\"\"',\n'--disable-dev-shm-usage'\n1st line to allow the containers to communicate, 2nd to solve issue on dev-shm mount: https://github.com/SeleniumHQ/docker-selenium/issues/1267\nalso on the same driver function,\nthe url must be pointing to your selenium container:\nreturn RemoteWebDriver::create(\n'http://selenium:4444/wd/hub',\n" ]
[ 0 ]
[]
[]
[ "docker", "laravel", "laravel_dusk", "php", "selenium" ]
stackoverflow_0073489951_docker_laravel_laravel_dusk_php_selenium.txt
Q: Generate random numbers list with limit on each element and on total
Assume I have a list of values, for example:
limits = [10, 6, 3, 5, 1]
For every item in limits, I need to generate a random number less than or equal to the item. However, the catch is that the sum of elements in the new random list must be equal to a specified total. For example if total = 10, then one possible random list is:
random_list = [2, 1, 3, 4, 0]
where you see random_list has the same length as limits, every element in random_list is less than or equal to the corresponding element in limits, and sum(random_list) = total.
How to generate such a list? I am open (and prefer) to use numpy, scipy, or pandas.
A: To generate such a list, you can use numpy's random.multinomial function. This function allows you to generate a list of random numbers that sum to a specified total, where each number is chosen from a different bin with a specified size.
For example, to generate a list of 5 random numbers that sum to 10, where the first number can be any integer from 0 to 10, the second number can be any integer from 0 to 6, and so on, you can use the following code:
import numpy as np

limits = [10, 6, 3, 5, 1]
total = 10

p = np.array([1 / x for x in limits], dtype=float)
p /= p.sum()  # the pvals passed to multinomial must sum to 1
random_list = np.random.multinomial(total, p)
This will generate a list of 5 random numbers that sum to 10; note, however, that multinomial alone does not enforce the per-element limits.
Alternatively, you could use numpy's random.randint function to generate random numbers that are less than or equal to the corresponding element in the limits list, and then use a loop to adjust the numbers until the sum equals the specified total. This approach would look something like this:
import numpy as np

limits = [10, 6, 3, 5, 1]
total = 10

random_list = []

# Generate a random number for each element in limits
for limit in limits:
    random_list.append(np.random.randint(limit + 1))

# Adjust entries until the sum equals the total, staying within the limits
while sum(random_list) != total:
    idx = np.random.randint(len(random_list))
    if sum(random_list) < total and random_list[idx] < limits[idx]:
        random_list[idx] += 1
    elif sum(random_list) > total and random_list[idx] > 0:
        random_list[idx] -= 1
The second approach enforces both constraints; the multinomial one only fixes the total.
EDIT FOR @gerges
To generate a list of random numbers that sum to a specified total and are less than or equal to the corresponding element in the limits list, you can use a combination of the numpy functions random.multinomial and random.randint.
Here is an example of how you could do this:
import numpy as np

limits = [10, 6, 3, 5, 1]
total = 10

# Generate a list of random numbers that sum to the total using the multinomial function
p = np.array([1 / x for x in limits], dtype=float)
p /= p.sum()  # the pvals passed to multinomial must sum to 1
random_list = np.random.multinomial(total, p)

# Use the randint function to keep each number at or below the corresponding limit
for i, limit in enumerate(limits):
    random_list[i] = np.random.randint(min(random_list[i], limit), limit + 1)

# Check that the sum of the numbers in the list equals the specified total and that each number is less than or equal to the corresponding limit
assert sum(random_list) == total
for i, number in enumerate(random_list):
    assert number <= limits[i]
This approach generates a list of random numbers using the multinomial function, and then uses the randint function to keep each number at or below the corresponding limit. Note, however, that the randint adjustment changes the sum, so the first assertion will usually fail; treat this as a sketch rather than a guarantee.
A: Found what I was looking for: The hypergeometric distribution, which is similar to the binomial, but without replacement.
The distribution is available in numpy:
import numpy as np

gen = np.random.Generator(np.random.PCG64(seed))
random_list = gen.multivariate_hypergeometric(limits, total)

# array([4, 4, 1, 1, 0])
Also, to make sure I didn't misunderstand the distribution, I did a sanity check with 10 million samples and checked that the maximum is always within the limits:
res = gen.multivariate_hypergeometric(limits, total, size=10000000)

res.max(axis=0)

# array([10, 6, 3, 5, 1])
which is the same as limits.
Generate random numbers list with limit on each element and on total
Assume I have a list of values, for example:
limits = [10, 6, 3, 5, 1]
For every item in limits, I need to generate a random number less than or equal to the item. However, the catch is that the sum of elements in the new random list must be equal to a specified total. For example if total = 10, then one possible random list is:
random_list = [2, 1, 3, 4, 0]
where you see random_list has the same length as limits, every element in random_list is less than or equal to the corresponding element in limits, and sum(random_list) = total.
How to generate such a list? I am open (and prefer) to use numpy, scipy, or pandas.
[ "To generate such a list, you can use numpy's random.multinomial function. This function allows you to generate a list of random numbers that sum to a specified total, where each number is chosen from a different bin with a specified size.\nFor example, to generate a list of 5 random numbers that sum to 10, where the first number can be any integer from 0 to 10, the second number can be any integer from 0 to 6, and so on, you can use the following code:\nimport numpy as np\n\nlimits = [10, 6, 3, 5, 1]\ntotal = 10\n\nrandom_list = np.random.multinomial(total, [1/x for x in limits])\n\n\nThis will generate a list of 5 random numbers that sum to 10 and are less than or equal to the corresponding element in the limits list.\nAlternatively, you could use numpy's random.randint function to generate random numbers that are less than or equal to the corresponding element in the limits list, and then use a loop to add up the numbers until the sum equals the specified total. This approach would look something like this:\nimport numpy as np\n\nlimits = [10, 6, 3, 5, 1]\ntotal = 10\n\nrandom_list = []\n\n# Generate a random number for each element in limits\nfor limit in limits:\n random_list.append(np.random.randint(limit))\n\n# Keep adding random numbers until the sum equals the total\nwhile sum(random_list) != total:\n random_list[np.random.randint(len(random_list))] += 1\n\n\nBoth of these approaches should work to generate a list of random numbers that sum to a specified total and are less than or equal to the corresponding element in the limits list.\nEDIT FOR @gerges\nTo generate a list of random numbers that sum to a specified total and are less than or equal to the corresponding element in the limits list, you can use a combination of the numpy functions random.multinomial and random.randint.\nHere is an example of how you could do this:\nimport numpy as np\n\nlimits = [10, 6, 3, 5, 1]\ntotal = 10\n\n# Generate a list of random numbers that sum to the total using the multinomial function\nrandom_list = np.random.multinomial(total, [1/x for x in limits])\n\n# Use the randint function to ensure that each number is less than or equal to the corresponding limit\nfor i, limit in enumerate(limits):\n random_list[i] = np.random.randint(random_list[i], limit+1)\n\n# Check that the sum of the numbers in the list equals the specified total and that each number is less than or equal to the corresponding limit\nassert sum(random_list) == total\nfor i, number in enumerate(random_list):\n assert number <= limits[I]\n\n\nThis approach generates a list of random numbers using the multinomial function, and then uses the randint function to ensure that each number is less than or equal to the corresponding limit. This guarantees that the resulting list of numbers will sum to the specified total and will be less than or equal to the corresponding element in the limits list.\n", "Found what I was looking for: The hypergeometric distribution which is similar to the binomial, but without replacement.\nThe distribution available in numpy:\nimport numpy as np\n\ngen = np.random.Generator(np.random.PCG64(seed))\nrandom_list = gen.multivariate_hypergeometric(limits, total)\n\n# array([4, 4, 1, 1, 0])\n\nAlso to make sure I didn't misunderstand the distribution did a sanity check with 10 million samples and check that the maximum is always within the limits\nres = gen.multivariate_hypergeometric(limits, total, size=10000000) \n\nres.max(axis=0)\n\n# array([10, 6, 3, 5, 1])\n\nwhich is same as limits.\n" ]
[ 1, 1 ]
[]
[]
[ "numpy", "pandas", "python", "scipy" ]
stackoverflow_0074670818_numpy_pandas_python_scipy.txt
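Two caveats are worth spelling out about the answers above. First, the multinomial-based attempts do not actually enforce the per-element limits: a single count can exceed its cap. Second, the multivariate hypergeometric is not uniform over the feasible vectors; it weights each vector by a product of binomial coefficients. If a uniform draw over all lists satisfying both constraints is what is wanted, the rejection-sampling sketch below does it (simple, but potentially slow when few draws are feasible):
import numpy as np

def uniform_constrained_sample(limits, total, rng=None):
    # Uniform over all integer vectors v with 0 <= v[i] <= limits[i] and
    # sum(v) == total: every such vector is equally likely under independent
    # uniform draws, and rejection preserves that equality.
    rng = rng or np.random.default_rng()
    limits = np.asarray(limits)
    if not 0 <= total <= limits.sum():
        raise ValueError("total is infeasible for these limits")
    while True:
        draw = rng.integers(0, limits + 1)  # upper bound is exclusive, hence +1
        if draw.sum() == total:
            return draw

limits = [10, 6, 3, 5, 1]
print(uniform_constrained_sample(limits, 10))  # e.g. array([2, 1, 3, 4, 0])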
Q: How can I load a saved JSON tree with treelib?
I have made a Python script wherein I process a big HTML file with BeautifulSoup while I build a tree from it using treelib: http://xiaming.me/treelib/. I have found that this library comes with methods to save the tree file on my system and also parse it to JSON. But after I do this, how can I load it? It is not efficient to build the same entire tree for each run.
I think I can make a function to parse the JSON tree previously written to a file but I just want to be sure if there exists another easy way or not.
Thanks in advance
A: The simple Answer
With this treelib, you can't.
As they say in their documentation (http://xiaming.me/treelib/pyapi.html#node-objects):
tree.save2file(filename[, nid[, level[, idhidden[, filter[, key[, reverse]]]]]]])
Save the tree into file for offline analysis.
It does not contain any JSON-Parser, so it cannot read the files.
What can you do?
You have no other option than building the tree each time for every run.
Implement a JSON-Reader that parses the file and creates the tree for you.
https://docs.python.org/2/library/json.html
A: I have built a small parser for my case. Maybe it works in your case.
The node identifiers are named after the tag plus the depth of the node in the tree (tag+depth).
import json
from types import prepare_class
from treelib import Node, Tree, node
import os

file_path = os.path.abspath(os.path.dirname(__file__))

with open(file_path + '\\tree.json') as f:
    tree_json = json.load(f)

tree = Tree()

def load_tree(json_tree, depth=0, parent=None):
    k, value = list(json_tree.items())[0]

    if parent is None:
        tree.create_node(tag=str(k), identifier=str(k)+str(depth))
        parent = tree.get_node(str(k)+str(depth))

    for counter,value in enumerate(json_tree[k]['children']):
        if isinstance(json_tree[k]['children'][counter], str):
            tree.create_node(tag=value, identifier=value+str(depth), parent=parent)
        else:
            tree.create_node(tag=list(value)[0], identifier=list(value)[0]+str(depth), parent=parent)
            load_tree(json_tree[k]['children'][counter], depth+1, tree.get_node(list(value)[0]+str(depth)) )

load_tree(tree_json)
A: I have created a function to convert json to a tree:
from treelib import Node, Tree, node

def create_node(tree, s, counter_byref, verbose, parent_id=None):
    node_id = counter_byref[0]
    if verbose:
        print(f"tree.create_node({s}, {node_id}, parent={parent_id})")
    tree.create_node(s, node_id, parent=parent_id)
    counter_byref[0] += 1
    return node_id

def to_compact_string(o):
    if type(o) == dict:
        if len(o)>1:
            raise Exception()
        k,v =next(iter(o.items()))
        return f'{k}:{to_compact_string(v)}'
    elif type(o) == list:
        if len(o)>1:
            raise Exception()
        return f'[{to_compact_string(next(iter(o)))}]'
    else:
        return str(o)

def to_compact(tree, o, counter_byref, verbose, parent_id):
    try:
        s = to_compact_string(o)
        if verbose:
            print(f"# to_compact({o}) ==> [{s}]")
        create_node(tree, s, counter_byref, verbose, parent_id=parent_id)
        return True
    except:
        return False

def json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, compact_single_dict=False, listsNodeSymbol='+'):
    if tree is None:
        tree = Tree()
        parent_id = create_node(tree, '+', counter_byref, verbose)
    if compact_single_dict and to_compact(tree, o, counter_byref, verbose, parent_id):
        # no need to do more, inserted as a single node
        pass
    elif type(o) == dict:
        for k,v in o.items():
            if compact_single_dict and to_compact(tree, {k:v}, counter_byref, verbose, parent_id):
                # no need to do more, inserted as a single node
                continue
            key_nd_id = create_node(tree, str(k), counter_byref, verbose, parent_id=parent_id)
            if verbose:
                print(f"# json_2_tree({v})")
            json_2_tree(v , parent_id=key_nd_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict)
    elif type(o) == list:
        if listsNodeSymbol is not None:
            parent_id = create_node(tree, listsNodeSymbol, counter_byref, verbose, parent_id=parent_id)
        for i in o:
            if compact_single_dict and to_compact(tree, i, counter_byref, verbose, parent_id):
                # no need to do more, inserted as a single node
                continue
            if verbose:
                print(f"# json_2_tree({i})")
            json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict)
    else: #node
        create_node(tree, str(o), counter_byref, verbose, parent_id=parent_id)
    return tree
Then for example:
import json
j = json.loads('{"2": 3, "4": [5, 6], "7": {"8": 9}}')
json_2_tree(j ,verbose=False,listsNodeSymbol='+' ).show()
gives:
+
├── 2
│   └── 3
├── 4
│   └── +
│       ├── 5
│       └── 6
└── 7
    └── 8
        └── 9
While json_2_tree(j ,listsNodeSymbol=None, verbose=False ).show()
+
├── 2
│   └── 3
├── 4
│   ├── 5
│   └── 6
└── 7
    └── 8
        └── 9
And json_2_tree(j ,compact_single_dict=True,listsNodeSymbol=None).show()
+
├── 2:3
├── 4
│   ├── 5
│   └── 6
└── 7:8:9
As you see, there are different trees one can make depending on how explicit vs. compact one wants to be.
How can I load a saved JSON tree with treelib?
I have made a Python script wherein I process a big html with BeautifulSoup while I build a tree from it using treelib: http://xiaming.me/treelib/. I have found that this library comes with methods to save the tree file on my system and also parsing it to JSON. But after I do this, how can I load it? It is not efficient to build the same entire tree for each run. I think I can make a function to parse the JSON tree previously written to a file but I just want to be sure if there exists another easy way or not. Thanks in advance
[ "The simple Answer\nWith this treelib, you can't.\nAs they say in their documentation (http://xiaming.me/treelib/pyapi.html#node-objects):\ntree.save2file(filename[, nid[, level[, idhidden[, filter[, key[, reverse]]]]]]])\n Save the tree into file for offline analysis.\n\nIt does not contain any JSON-Parser, so it can not read the files. \nWhat can you do?\nYou have no other option as building the tree each time for every run. \nImplement a JSON-Reader that parses the file and creates the tree for you.\nhttps://docs.python.org/2/library/json.html\n", "I have built a small parser for my case. Maybe it works in your case.\nThe note identifiers are named after the tag plus the depth of the node in the tree (tag+depth).\nimport json\nfrom types import prepare_class\nfrom treelib import Node, Tree, node\nimport os\n\nfile_path = os.path.abspath(os.path.dirname(__file__))\n\nwith open(file_path + '\\\\tree.json') as f:\n tree_json = json.load(f)\n\ntree = Tree()\n\ndef load_tree(json_tree, depth=0, parent=None):\n k, value = list(json_tree.items())[0]\n \n if parent is None:\n tree.create_node(tag=str(k), identifier=str(k)+str(depth))\n parent = tree.get_node(str(k)+str(depth))\n\n for counter,value in enumerate(json_tree[k]['children']): \n if isinstance(json_tree[k]['children'][counter], str):\n tree.create_node(tag=value, identifier=value+str(depth), parent=parent)\n else:\n tree.create_node(tag=list(value)[0], identifier=list(value)[0]+str(depth), parent=parent)\n load_tree(json_tree[k]['children'][counter], depth+1, tree.get_node(list(value)[0]+str(depth)) )\n\nload_tree(tree_json)\n\n", "I have created a function to convert json to a tree:\nfrom treelib import Node, Tree, node\n\ndef create_node(tree, s, counter_byref, verbose, parent_id=None):\n node_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({s}, {node_id}, parent={parent_id})\")\n tree.create_node(s, node_id, parent=parent_id)\n counter_byref[0] += 1\n return node_id\n\ndef to_compact_string(o):\n if type(o) == dict:\n if len(o)>1:\n raise Exception()\n k,v =next(iter(o.items()))\n return f'{k}:{to_compact_string(v)}'\n elif type(o) == list:\n if len(o)>1:\n raise Exception()\n return f'[{to_compact_string(next(iter(o)))}]'\n else:\n return str(o)\n\ndef to_compact(tree, o, counter_byref, verbose, parent_id):\n try:\n s = to_compact_string(o)\n if verbose:\n print(f\"# to_compact({o}) ==> [{s}]\")\n create_node(tree, s, counter_byref, verbose, parent_id=parent_id)\n return True\n except:\n return False\n\ndef json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, compact_single_dict=False, listsNodeSymbol='+'):\n if tree is None:\n tree = Tree()\n parent_id = create_node(tree, '+', counter_byref, verbose)\n if compact_single_dict and to_compact(tree, o, counter_byref, verbose, parent_id):\n # no need to do more, inserted as a single node\n pass\n elif type(o) == dict:\n for k,v in o.items():\n if compact_single_dict and to_compact(tree, {k:v}, counter_byref, verbose, parent_id):\n # no need to do more, inserted as a single node\n continue\n key_nd_id = create_node(tree, str(k), counter_byref, verbose, parent_id=parent_id)\n if verbose:\n print(f\"# json_2_tree({v})\")\n json_2_tree(v , parent_id=key_nd_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict)\n elif type(o) == list:\n if listsNodeSymbol is not None:\n parent_id = create_node(tree, listsNodeSymbol, counter_byref, verbose, parent_id=parent_id)\n for i 
in o:\n if compact_single_dict and to_compact(tree, i, counter_byref, verbose, parent_id):\n # no need to do more, inserted as a single node\n continue\n if verbose:\n print(f\"# json_2_tree({i})\")\n json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict)\n else: #node\n create_node(tree, str(o), counter_byref, verbose, parent_id=parent_id)\n return tree\n\nThen for example:\nimport json\nj = json.loads('{\"2\": 3, \"4\": [5, 6], \"7\": {\"8\": 9}}')\njson_2_tree(j ,verbose=False,listsNodeSymbol='+' ).show() \n\ngives:\n+\n├── 2\n│ └── 3\n├── 4\n│ └── +\n│ ├── 5\n│ └── 6\n└── 7\n └── 8\n └── 9\n\nWhile\njson_2_tree(j ,listsNodeSymbol=None, verbose=False ).show() \n\n+\n├── 2\n│ └── 3\n├── 4\n│ ├── 5\n│ └── 6\n└── 7\n └── 8\n └── 9\n\nAnd\njson_2_tree(j ,compact_single_dict=True,listsNodeSymbol=None).show() \n\n+\n├── 2:3\n├── 4\n│ ├── 5\n│ └── 6\n└── 7:8:9\n\nAs you see, there are different trees one can make depending on how explicit vs. compact he wants to be.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "json", "python", "tree" ]
stackoverflow_0035031748_json_python_tree.txt
Q: Querying polymorphic relation with different depths in Laravel I have a Laravel 8 application in which I designed two models, Workspace and Section. Every section belongs to a workspace, in a one-to-many relationship. Both sections and workspaces can have comments; they are implemented in a third model, Comment, with a morphTo relationship. In particular, every comment belongs to a commentable, i.e. a Workspace or a Section. I'm looking to properly define a relationship which, from Workspace, allows retrieving all the related comments, i.e. both workspace comments and comments of sections belonging to the workspace. Can I somehow define this kind of relationship? In particular, I would like to define it in a proper way, such that I can use commands like Workspace::with('all_comments') and $workspace->load('all_comments'). Laravel does not seem to offer a standard interface for defining custom relationships. Thanks. A: Yes, you can define a custom relationship to achieve this in Laravel. To do so, you can create a new method on your Workspace model that defines the relationship. For example, you could create a method called all_comments on your Workspace model that defines the relationship as follows: public function all_comments() { return $this->comments() ->orWhereHas('section', function ($query) { $query->where('workspace_id', $this->id); }); } This will define a relationship that retrieves all of the comments that belong to the workspace directly, as well as all of the comments that belong to sections that belong to the workspace. You can then use this relationship with the with and load methods to eager load the related comments when querying for workspaces. For example: $workspace = Workspace::with('all_comments')->first(); // Or, to load the relationship on an existing model instance: $workspace->load('all_comments'); Please note that this is just an example and may need to be adjusted to fit the specific details of your implementation.
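A hedged alternative to the orWhereHas sketch in the answer above: Eloquent has no built-in polymorphic has-many-through, so a plain query method can merge both comment sources explicitly. Model and column names (Comment, Section, sections(), workspace_id) are assumed from the question:

public function allComments()
{
    $sectionIds = $this->sections()->pluck('id');

    return Comment::where(function ($q) {
            $q->where('commentable_type', Workspace::class)
              ->where('commentable_id', $this->id);
        })
        ->orWhere(function ($q) use ($sectionIds) {
            $q->where('commentable_type', Section::class)
              ->whereIn('commentable_id', $sectionIds);
        })
        ->get();
}

Because this returns a Collection rather than a Relation, it works as $workspace->allComments() but cannot be eager loaded with with('all_comments').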
Querying polymorphic relation with different depths in Laravel
I have a Laravel 8 application in which I designed two models, Workspace and Section. Every section belongs to a workspace, in a one-to-many relationship. Both sections and workspaces can have comments; they are implemented in a third model, Comment, with a morphTo relationship. In particular, every comment belongs to a commentable, i.e. a Workspace or a Section. I'm looking to properly define a relationship which, from Workspace, allows retrieving all the related comments, i.e. both workspace comments and comments of sections belonging to the workspace. Can I somehow define this kind of relationship? In particular, I would like to define it in a proper way, such that I can use commands like Workspace::with('all_comments') and $workspace->load('all_comments'). Laravel does not seem to offer a standard interface for defining custom relationships. Thanks.
[ "Yes, you can define a custom relationship to achieve this in Laravel. To do so, you can create a new method on your Workspace model that defines the relationship.\nFor example, you could create a method called all_comments on your Workspace model that defines the relationship as follows:\npublic function all_comments()\n{\n return $this->comments()\n ->orWhereHas('section', function ($query) {\n $query->where('workspace_id', $this->id);\n });\n}\n\nThis will define a relationship that retrieves all of the comments that belong to the workspace directly, as well as all of the comments that belong to sections that belong to the workspace.\nYou can then use this relationship with the with and load methods to eager load the related comments when querying for workspaces. For example:\n$workspace = Workspace::with('all_comments')->first();\n\n// Or, to load the relationship on an existing model instance:\n$workspace->load('all_comments');\n\nPlease note that this is just an example and may need to be adjusted to fit the specific details of your implementation.\n" ]
[ 0 ]
[]
[]
[ "eloquent", "laravel", "laravel_8", "polymorphic_associations", "relationship" ]
stackoverflow_0074670968_eloquent_laravel_laravel_8_polymorphic_associations_relationship.txt
Q: How to include one iteration, so that this function generates one more round of sentences given the array elements? The function does give the following result with 15 sentences, but the expected result is to contain 30, including, for example: Training für das Polizei, Training für die Polizisten, Training für die Militär ...: Result Training für die Polizei Training für das Polizisten Training für das Militär Training für das Sanität Training für das Sanitäter Trainings für die Polizei Trainings für das Polizisten Trainings für das Militär Trainings für das Sanität Trainings für das Sanitäter Trainingseinheit für die Polizei Trainingseinheit für das Polizisten Trainingseinheit für das Militär Trainingseinheit für das Sanität Trainingseinheit für das Sanitäter let ar = [ ['Training', 'für', 'die', 'Polizei'], ['Trainings', '', 'das', 'Polizisten'], ['Trainingseinheit', '', '', 'Militär'], ]; let result = []; ar.forEach((el, index) => { let sentence = [el[0]]; ar.forEach(el1 => { for(let i = 1; i < el1.length; i++) { if(el1[i]) { sentence[i] = el1[i] } } result.push(sentence.join(' ')) }) }); console.log(result); A: It looks like the code is only creating one sentence for each element in the ar array. To fix this, the code can be updated to create all possible combinations of sentences by looping through each element of the ar array and appending the non-empty elements to the sentence array. Then, the resulting sentences can be pushed to the result array. Here is an example of how the code could be updated to produce the expected result: let ar = [ ['Training', 'für', 'die', 'Polizei'], ['Trainings', '', 'das', 'Polizisten'], ['Trainingseinheit', '', '', 'Militär'], ]; let result = []; ar.forEach((el, index) => { let sentence = []; // Loop through each element of the array for (let i = 0; i < el.length; i++) { // Append non-empty elements to the sentence array if (el[i]) { sentence.push(el[i]); } } // Push the resulting sentence to the result array result.push(sentence.join(' ')); }); console.log(result); With this updated code, the result array will contain all possible combinations of sentences, including Training für die Polizei, Trainingseinheit für das Polizisten, and Trainings für das Militär.
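The answer above emits one sentence per input row; for the full set of combinations the question's expected output implies, one sketch is to collect the non-empty options per column and take their Cartesian product:

let ar = [
  ['Training', 'für', 'die', 'Polizei'],
  ['Trainings', '', 'das', 'Polizisten'],
  ['Trainingseinheit', '', '', 'Militär'],
];

// non-empty choices per column: 3 words x [für] x [die, das] x 3 nouns
let options = ar[0].map((_, col) => ar.map(row => row[col]).filter(Boolean));

// Cartesian product of all columns
let result = options
  .reduce((acc, opts) => acc.flatMap(partial => opts.map(o => [...partial, o])), [[]])
  .map(words => words.join(' '));

console.log(result.length); // 3 * 1 * 2 * 3 = 18 for this sample array; 30 with all five noun rows

Each entry of result is one sentence such as 'Training für das Militär'.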
How to include one iteration, so that this function generates one more round of sentences given the array elements?
The function does give the following result with 15 sentences, but the expected result is to contain 30, including, for example: Training für das Polizei, Training für die Polizisten, Training für die Militär ...: Result Training für die Polizei Training für das Polizisten Training für das Militär Training für das Sanität Training für das Sanitäter Trainings für die Polizei Trainings für das Polizisten Trainings für das Militär Trainings für das Sanität Trainings für das Sanitäter Trainingseinheit für die Polizei Trainingseinheit für das Polizisten Trainingseinheit für das Militär Trainingseinheit für das Sanität Trainingseinheit für das Sanitäter let ar = [ ['Training', 'für', 'die', 'Polizei'], ['Trainings', '', 'das', 'Polizisten'], ['Trainingseinheit', '', '', 'Militär'], ]; let result = []; ar.forEach((el, index) => { let sentence = [el[0]]; ar.forEach(el1 => { for(let i = 1; i < el1.length; i++) { if(el1[i]) { sentence[i] = el1[i] } } result.push(sentence.join(' ')) }) }); console.log(result);
[ "It looks like the code is only creating one sentence for each element in the ar array. To fix this, the code can be updated to create all possible combinations of sentences by looping through each element of the ar array and appending the non-empty elements to the sentence array. Then, the resulting sentences can be pushed to the result array.\nHere is an example of how the code could be updated to produce the expected result:\nlet ar = [ ['Training', 'für', 'die', 'Polizei'],\n ['Trainings', '', 'das', 'Polizisten'],\n ['Trainingseinheit', '', '', 'Militär'],\n];\n\nlet result = [];\n\nar.forEach((el, index) => {\n let sentence = [];\n \n // Loop through each element of the array\n for (let i = 0; i < el.length; i++) {\n // Append non-empty elements to the sentence array\n if (el[i]) {\n sentence.push(el[i]);\n }\n }\n \n // Push the resulting sentence to the result array\n result.push(sentence.join(' '));\n});\n\nconsole.log(result);\n\nWith this updated code, the result array will contain all possible combinations of sentences, including Training für die Polizei, Trainingseinheit für das Polizisten, and Trainings für das Militär.\n" ]
[ 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0074645779_javascript.txt
Q: HTTP API with Custom Authorizer and Stage Variables TLDR; We have a custom authorizer deployed and want to use stage variables to switch which (authorizer) function is used per stage/environment. e.g the dev stage would use the authorizer-dev function, acpt stage would use authorizer-acpt and so on. We cannot get this to work. More Detail We have a HTTP API (not REST) deployed in API Gateway. This understandably limits some of the capabilities that using a REST API would give us but we currently have no strong need for the full features supplied by a REST API. To support different environments we use stages alongside stage variables to switch the downstream integration (lambda function, k8s based service, etc) based on which stage the request comes in on. i.e. anything requested on the dev stage gets pointed at the services deployed as the dev environment. This is all deployed through the use of an Open API Specification which has the stage variables embedded into the AWS integration extensions. For example; payloadFormatVersion: "2.0" passthroughBehavior: when_no_match httpMethod: POST type: aws_proxy credentials: "arn:aws:iam::<aws-account>:role/<role-name>" uri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<function-name>-${stageVariables.environment}/invocations" This works perfectly. We have a custom authorizer configured in API Gateway against our HTTP API (apigatewayv2). Currently all requests no matter which stage go through a single authorizer function which is causing a pinch point for us as we need to have segregated authorizers per environment as they need to have different verifications and configuration. We have tried a number of things both manually and via CICD to enable stage variables on custom authorizers; but cannot get this to work correctly. Using a single authorizer works, using stage variables results in all requests returning 500 Internal Server Error without any details anywhere of what went wrong. This question is similar to the one asked here with accepted answer but specifically for a HTTP API. Things we have tried Putting stage variables into the authorizerUri in the API Specification e.g; x-amazon-apigateway-authorizer: authorizerCredentials: "arn:aws:iam::<aws-account>:role/<role-name>" authorizerPayloadFormatVersion: 2.0 authorizerUri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-${stageVariables.environment}/invocations" authorizerResultTtlInSeconds: 0 identitySource: $request.header.Authorization type: request Using a stage variable to replace the entire function name authorizerUri in both the console and in the API specification e.g. authorizerUri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:${stageVariables.authorizerFunctionName}/invocations" Using the AWS CLI to update the authorizer's uri manually e.g.; aws apigatewayv2 update-authorizer --api-id <api-id> --authorizer-id <authorizer-id> --authorizer-uri 'arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-${stageVariables.environment}/invocations We're at a loss as to why this doesn't work and can't find any documentation that points to why it shouldn't work. A: It is not possible to use stage variables in the authorizer URI in Amazon API Gateway. 
The authorizerUri property in the OpenAPI specification is not meant to support stage variables. authorizerUri property must be a string that specifies the Amazon Resource Name (ARN) of an AWS Lambda function, which is used as the authorizer function. It cannot contain stage variables. One way to address this issue would be to use different custom authorizers for each environment. In your OpenAPI specification, you can specify different authorizerUri values for each stage, which will point to the corresponding authorizer function for that stage. x-amazon-apigateway-authorizer: authorizerUri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-dev/invocations" type: request x-amazon-apigateway-authorizer: authorizerUri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-prod/invocations" type: request Another way is you can use a single authorizer function and use environment variables to configure the verification and configuration for each environment. This way, you don't have to create multiple authorizers and can use the same authorizerUri for all stages.
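A sketch of the answer's second suggestion (one authorizer, per-stage configuration): for an HTTP API request authorizer with payload format version 2.0 and simple responses enabled, the invoked stage is available on the event, so a single Lambda can branch on it. The verify() helper and ISSUER_* environment variables are hypothetical:

import os

def handler(event, context):
    # HTTP API request authorizer event, payload format 2.0
    stage = event["requestContext"]["stage"]
    token = event["headers"].get("authorization", "")

    # hypothetical per-stage settings injected as Lambda environment variables
    issuer = os.environ.get(f"ISSUER_{stage.upper()}", "")

    return {"isAuthorized": verify(token, issuer)}  # verify() is a placeholder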
HTTP API with Custom Authorizer and Stage Variables
TLDR; We have a custom authorizer deployed and want to use stage variables to switch which (authorizer) function is used per stage/environment. e.g the dev stage would use the authorizer-dev function, acpt stage would use authorizer-acpt and so on. We cannot get this to work. More Detail We have a HTTP API (not REST) deployed in API Gateway. This understandably limits some of the capabilities that using a REST API would give us but we currently have no strong need for the full features supplied by a REST API. To support different environments we use stages alongside stage variables to switch the downstream integration (lambda function, k8s based service, etc) based on which stage the request comes in on. i.e. anything requested on the dev stage gets pointed at the services deployed as the dev environment. This is all deployed through the use of an Open API Specification which has the stage variables embedded into the AWS integration extensions. For example; payloadFormatVersion: "2.0" passthroughBehavior: when_no_match httpMethod: POST type: aws_proxy credentials: "arn:aws:iam::<aws-account>:role/<role-name>" uri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<function-name>-${stageVariables.environment}/invocations" This works perfectly. We have a custom authorizer configured in API Gateway against our HTTP API (apigatewayv2). Currently all requests no matter which stage go through a single authorizer function which is causing a pinch point for us as we need to have segregated authorizers per environment as they need to have different verifications and configuration. We have tried a number of things both manually and via CICD to enable stage variables on custom authorizers; but cannot get this to work correctly. Using a single authorizer works, using stage variables results in all requests returning 500 Internal Server Error without any details anywhere of what went wrong. This question is similar to the one asked here with accepted answer but specifically for a HTTP API. Things we have tried Putting stage variables into the authorizerUri in the API Specification e.g; x-amazon-apigateway-authorizer: authorizerCredentials: "arn:aws:iam::<aws-account>:role/<role-name>" authorizerPayloadFormatVersion: 2.0 authorizerUri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-${stageVariables.environment}/invocations" authorizerResultTtlInSeconds: 0 identitySource: $request.header.Authorization type: request Using a stage variable to replace the entire function name authorizerUri in both the console and in the API specification e.g. authorizerUri: "arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:${stageVariables.authorizerFunctionName}/invocations" Using the AWS CLI to update the authorizer's uri manually e.g.; aws apigatewayv2 update-authorizer --api-id <api-id> --authorizer-id <authorizer-id> --authorizer-uri 'arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-${stageVariables.environment}/invocations We're at a loss as to why this doesn't work and can't find any documentation that points to why it shouldn't work.
[ "It is not possible to use stage variables in the authorizer URI in Amazon API Gateway. The authorizerUri property in the OpenAPI specification is not meant to support stage variables.\nauthorizerUri property must be a string that specifies the Amazon Resource Name (ARN) of an AWS Lambda function, which is used as the authorizer function. It cannot contain stage variables.\nOne way to address this issue would be to use different custom authorizers for each environment. In your OpenAPI specification, you can specify different authorizerUri values for each stage, which will point to the corresponding authorizer function for that stage.\nx-amazon-apigateway-authorizer:\n authorizerUri: \"arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-dev/invocations\"\n type: request\n\nx-amazon-apigateway-authorizer:\n authorizerUri: \"arn:aws:apigateway:<aws-region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region>:<aws-account>:function:<authorizer-name>-prod/invocations\"\n type: request\n\nAnother way is you can use a single authorizer function and use environment variables to configure the verification and configuration for each environment. This way, you don't have to create multiple authorizers and can use the same authorizerUri for all stages.\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "api_gateway", "aws_api_gateway", "lambda_authorizer", "openapi" ]
stackoverflow_0074288447_amazon_web_services_api_gateway_aws_api_gateway_lambda_authorizer_openapi.txt
Q: How to check if a string contains a string in non-consecutive order? Let's say I have a string "Hello". Now I want to check whether "Hlo" is present in "Hello"; it should return true. How can I do this with a built-in function? A: There is no single built-in for a subsequence test, but a two-pointer scan does it: #include <string> bool contains(std::string const& str1, std::string const& str2) { std::size_t i, j; for(i = 0, j = 0; i < str1.size() && j < str2.size(); ++i) if(str1[i] == str2[j]) ++j; return j == str2.size(); }
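A quick self-contained check of the two-pointer scan above:

#include <cassert>
#include <string>

bool contains(std::string const& str1, std::string const& str2)
{
    std::size_t i, j;
    for (i = 0, j = 0; i < str1.size() && j < str2.size(); ++i)
        if (str1[i] == str2[j])
            ++j;
    return j == str2.size();
}

int main()
{
    assert(contains("Hello", "Hlo"));  // subsequence, in order
    assert(!contains("Hello", "loH")); // same letters, wrong order
    return 0;
}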
How to check if a string contains a string in non-consecutive order?
Let's say I have a string "Hello". Now I want to check whether "Hlo" is present in "Hello"; it should return true. How can I do this with a built-in function?
[ "bool contains(std::string const& str1, std::string const& str2)\n{\n std::size_t i,j;\n for(i = 0, j = 0; i < str1.size() && j < str2.size(); ++i)\n if(str1[i] == str2[j])\n ++j;\n return j == str2.size();\n}\n\n" ]
[ 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074670956_c++.txt
Q: Rust and plotters library - hard to specify types of variables and refactor code into functions This code draws four red dots in a picture. use plotters::chart::{DualCoordChartContext, ChartContext, SeriesAnno}; use plotters::coord::types::RangedCoordf32; use plotters::prelude::*; use plotters::coord::Shift; type CC<'a> = ChartContext<'a, BitMapBackend<'a>, Cartesian2d<RangedCoordf32, RangedCoordf32>>; //type CCBAD = ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>>; const OUT_FILE_NAME: &'static str = "sample.png"; pub fn main() -> Result<(), Box<dyn std::error::Error>> { let root_area: DrawingArea<BitMapBackend, Shift> = BitMapBackend::new(OUT_FILE_NAME, (400, 400)).into_drawing_area(); let mut cb: ChartBuilder<BitMapBackend> = ChartBuilder::on(&root_area); let mut cc: ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>> = cb.build_cartesian_2d(0.0f32..5.0f32, 0.0f32..5.0f32)?; let series: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0]; cc.draw_series(PointSeries::of_element( series.iter().map(|x| (*x, *x)), 3,ShapeStyle::from(&RED).filled(), &|coord, size, style| { EmptyElement::at(coord) + Circle::new((0, 0), size, style) }, ))?; Ok(()) } I put the explicit types of variables on purpose (because why not?). Soo... the types seem a bit long. How about we make shorter names. Let's start with ChartContext. The compiler only lets me make the type type CC<'a>. CCBAD has missing lifetimes. But if I try to use it like this: let mut cc: CC = // ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>> = cb.build_cartesian_2d(0.0f32..5.0f32, 0.0f32..5.0f32)?; It's suddenly a problem! Why? 16 | ChartBuilder::on(&root_area); | ^^^^^^^^^^ borrowed value does not live long enough ... 25 | } | - | | | `root_area` dropped here while still borrowed | borrow might be used here, when `root_area` is dropped and runs the destructor for type `plotters::drawing::DrawingArea<plotters::prelude::BitMapBackend<'_>, Shift>` Another story is trying to put the "draw_series" call into a function. Basically it ends up with the same error message as here. Why I can specify the type ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>> but cannot make a 'type' definition with it. Why that weird error? A: By reusing the same lifetime twice you're forcing 2 lifetimes to be the same that shouldn't be. Use this instead: type CC<'a, 'b> = ChartContext<'a, BitMapBackend<'b>, Cartesian2d<RangedCoordf32, RangedCoordf32>>;
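With the two-lifetime alias from the answer above, the draw_series call can also move into a helper function; a minimal sketch, unchecked against any particular plotters version:

fn draw_points(cc: &mut CC<'_, '_>, series: &[f32]) -> Result<(), Box<dyn std::error::Error>> {
    // same element style as the original main()
    cc.draw_series(
        series.iter().map(|x| Circle::new((*x, *x), 3, ShapeStyle::from(&RED).filled())),
    )?;
    Ok(())
}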
Rust and plotters library - hard to specify types of variables and refactor code into functions
This code draws four red dots in a picture. use plotters::chart::{DualCoordChartContext, ChartContext, SeriesAnno}; use plotters::coord::types::RangedCoordf32; use plotters::prelude::*; use plotters::coord::Shift; type CC<'a> = ChartContext<'a, BitMapBackend<'a>, Cartesian2d<RangedCoordf32, RangedCoordf32>>; //type CCBAD = ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>>; const OUT_FILE_NAME: &'static str = "sample.png"; pub fn main() -> Result<(), Box<dyn std::error::Error>> { let root_area: DrawingArea<BitMapBackend, Shift> = BitMapBackend::new(OUT_FILE_NAME, (400, 400)).into_drawing_area(); let mut cb: ChartBuilder<BitMapBackend> = ChartBuilder::on(&root_area); let mut cc: ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>> = cb.build_cartesian_2d(0.0f32..5.0f32, 0.0f32..5.0f32)?; let series: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0]; cc.draw_series(PointSeries::of_element( series.iter().map(|x| (*x, *x)), 3,ShapeStyle::from(&RED).filled(), &|coord, size, style| { EmptyElement::at(coord) + Circle::new((0, 0), size, style) }, ))?; Ok(()) } I put the explicit types of variables on purpose (because why not?). Soo... the types seem a bit long. How about we make shorter names. Let's start with ChartContext. The compiler only lets me make the type type CC<'a>. CCBAD has missing lifetimes. But if I try to use it like this: let mut cc: CC = // ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>> = cb.build_cartesian_2d(0.0f32..5.0f32, 0.0f32..5.0f32)?; It's suddenly a problem! Why? 16 | ChartBuilder::on(&root_area); | ^^^^^^^^^^ borrowed value does not live long enough ... 25 | } | - | | | `root_area` dropped here while still borrowed | borrow might be used here, when `root_area` is dropped and runs the destructor for type `plotters::drawing::DrawingArea<plotters::prelude::BitMapBackend<'_>, Shift>` Another story is trying to put the "draw_series" call into a function. Basically it ends up with the same error message as here. Why I can specify the type ChartContext<BitMapBackend, Cartesian2d<RangedCoordf32, RangedCoordf32>> but cannot make a 'type' definition with it. Why that weird error?
[ "By reusing the same lifetime twice you're forcing 2 lifetimes to be the same that shouldn't be.\nUse this instead:\ntype CC<'a, 'b> = ChartContext<'a, BitMapBackend<'b>, Cartesian2d<RangedCoordf32, RangedCoordf32>>;\n\n" ]
[ 1 ]
[]
[]
[ "borrow_checker", "rust", "types" ]
stackoverflow_0074670889_borrow_checker_rust_types.txt
Q: How to export columns into a Word document in Google script? For example: I would like to take A, C, D columns of a spreadsheet named workstuff into a separate Word file. How would one do that? Let's say that they are in range 4:200. Thank you :) A: Script with comments: function sample() { let spreadsheet = SpreadsheetApp.openById("documentId"); // open spreadsheet let sheet = spreadsheet.getSheetByName("sheetTabName"); // get the tab let values = sheet.getRange("A4:D200").getValues(); // load A, B, C and D values = values.map(row => [row[0], row[2], row[3]]); // remove B values = values.map(row => row.map(cell => '' + cell)); // convert to string let doc = DocumentApp.create("my-new-document"); // create new document doc.getBody().appendTable(values); // write values into table in the document doc.saveAndClose(); // Save and close it // Convert google document to word document let token = ScriptApp.getOAuthToken(); var docBlob = UrlFetchApp.fetch('https://docs.google.com/feeds/download/documents/export/Export?id=' + doc.getId() + '&exportFormat=docx', { headers: { Authorization: 'Bearer ' + token } }).getBlob(); // save word document it on drive var file = DriveApp.createFile(docBlob).setName('my-new-document.docx'); DriveApp.addFile(file); } You may get the spreadsheet ID from url like: https://docs.google.com/spreadsheets/d/SJKf90ahisfq8ewfyio32jasf890sadj3/edit#gid=4531234 The ID is: SJKf90ahisfq8ewfyio32jasf890sadj3
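One caveat on the script above: DriveApp.createFile() already places the new file in My Drive, so the trailing DriveApp.addFile(file) call is redundant (addFile is also deprecated). A hedged variant that targets a named folder instead; the folder and file names are placeholders:

function saveBlobToFolder(blob, folderName, fileName) {
  var folders = DriveApp.getFoldersByName(folderName);
  var folder = folders.hasNext() ? folders.next() : DriveApp.createFolder(folderName);
  folder.createFile(blob).setName(fileName);
}

// usage, replacing the last two lines of the script above:
// saveBlobToFolder(docBlob, 'exports', 'my-new-document.docx');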
How to export columns into a Word document in Google script?
For example: I would like to take A, C, D columns of a spreadsheet named workstuff into a separate Word file. How would one do that? Let's say that they are in range 4:200. Thank you :)
[ "Script with comments:\nfunction sample() {\n let spreadsheet = SpreadsheetApp.openById(\"documentId\"); // open spreadsheet\n let sheet = spreadsheet.getSheetByName(\"sheetTabName\"); // get the tab\n let values = sheet.getRange(\"A4:D200\").getValues(); // load A, B, C and D\n values = values.map(row => [row[0], row[2], row[3]]); // remove B\n values = values.map(row => row.map(cell => '' + cell)); // convert to string\n\n let doc = DocumentApp.create(\"my-new-document\"); // create new document\n doc.getBody().appendTable(values); // write values into table in the document\n doc.saveAndClose(); // Save and close it\n\n // Convert google document to word document\n let token = ScriptApp.getOAuthToken();\n var docBlob = UrlFetchApp.fetch('https://docs.google.com/feeds/download/documents/export/Export?id=' + doc.getId() + '&exportFormat=docx',\n {\n headers: {\n Authorization: 'Bearer ' + token\n }\n }).getBlob();\n\n // save word document it on drive\n var file = DriveApp.createFile(docBlob).setName('my-new-document.docx');\n DriveApp.addFile(file);\n}\n\nYou may get the spreadsheet ID from url like:\nhttps://docs.google.com/spreadsheets/d/SJKf90ahisfq8ewfyio32jasf890sadj3/edit#gid=4531234\n\nThe ID is: SJKf90ahisfq8ewfyio32jasf890sadj3\n" ]
[ 0 ]
[]
[]
[ "google_apps_script", "google_sheets" ]
stackoverflow_0074670485_google_apps_script_google_sheets.txt
Q: Extract a value from a JSON string stored in a pandas data frame column I have a pandas dataframe with a column named json2 which contains a JSON string coming from an API call: "{'obj': [{'timestp': '2022-12-03', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}, {'timestp': '2022-12-02', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}]}" I want to make a function that iterates over the column and extracts the number of followers if the timestp matches with a given date def get_followers(x): if x['obj']['timestp']=='2022-12-03': return x['obj']['followers'] df['date'] = df['json2'].apply(get_followers) I should get 281475 as the value in the column date but I got an error: "list indices must be integers or slices, not str" What am I doing wrong? Thank you in advance A: The value under the key obj is a list of dictionaries. Before you index with another key, you must also specify the index of the list element. import ast df['json2']=df['json2'].apply(ast.literal_eval) #if the column holds strings, convert them to dictionaries first. def get_followers(x): if x['obj'][0]['timestp']=='2022-12-03': return x['obj'][0]['followers'] df['date'] = df['json2'].apply(get_followers) You can also use this one-liner; it does the same job as the function above: df['date'] = df['json2'].apply(lambda x: x['obj'][0]['followers'] if x['obj'][0]['timestp']=='2022-12-03' else None) To scan the whole list of dicts: def get_followers(x): for i in x['obj']: if i['timestp'] == '2022-12-03': return i['followers'] df['date'] = df['json2'].apply(get_followers)
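An alternative sketch to the per-row function in the answer above: parse the strings once, flatten the nested obj lists with pandas' own json_normalize, and filter. Column names are taken from the sample payload:

import ast
import pandas as pd

# the API returns Python-literal strings, so ast.literal_eval parses them
records = df["json2"].apply(ast.literal_eval).apply(lambda d: d["obj"]).explode()
flat = pd.json_normalize(records.tolist())

followers = flat.loc[flat["timestp"] == "2022-12-03", "followers"]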
Extract a value from a JSON string stored in a pandas data frame column
I have a pandas dataframe with a column named json2 which contains a JSON string coming from an API call: "{'obj': [{'timestp': '2022-12-03', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}, {'timestp': '2022-12-02', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}]}" I want to make a function that iterates over the column and extracts the number of followers if the timestp matches with a given date def get_followers(x): if x['obj']['timestp']=='2022-12-03': return x['obj']['followers'] df['date'] = df['json2'].apply(get_followers) I should get 281475 as the value in the column date but I got an error: "list indices must be integers or slices, not str" What am I doing wrong? Thank you in advance
[ "The key named obj occurs in list of dictionaries. Before you define another key, you must also specify the index of the list element.\nimport ast\ndf['json2']=df['json2'].apply(ast.literal_eval) #if dictionary's type is string, convert to dictionary.\n\ndef get_followers(x):\n if x['obj'][0]['timestp']=='2022-12-03':\n return x['obj'][0]['followers']\n\ndf['date'] = df['json2'].apply(get_followers)\n\nAlso you can use this too. This does the same job as the function you are using:\ndf['date'] = df['json2'].apply(lambda x: x['obj'][0]['followers'] if x['obj'][0]['timestp']=='2022-12-03' else None)\n\nfor list of dicts:\ndef get_followers(x):\n for i in x['obj']:\n if i['timestp'] == '2022-12-03':\n return i['followers']\n break\n \ndf['date'] = df['json2'].apply(get_followers)\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "json", "pandas", "python" ]
stackoverflow_0074670977_dictionary_json_pandas_python.txt
Q: Game Command format between bluetooth controller and console I am creating an Android project, in which I have to make android wearable smart watch as game controller which can send commands to games running on handheld device connected to that smartwatch over BLE(Bluetooth Low Energy). I designed controller pad on wearable and can send some hard coded text to handheld device app on soft button click of controller pad. Issue is, I have to replace that text with game commands format expected by games running on handheld device. And, the app running on handheld device can listen text through wearable service. I know that, third party games would not have any wearable service running, so how third party games will accept/listen command sending from wearable smartwatch. Third party games can support hardware controller through Android SDK A: In order to make your wearable smartwatch work as a game controller for third-party games, you will need to implement a standardized game controller protocol that the games can use to interpret the input from your smartwatch. The most widely used protocol for game controller input on Android is the Android Open Accessory Protocol (AOA), which you can use to connect your smartwatch to the handheld device and send game controller input to the games. Once you have implemented the AOA protocol on your smartwatch, you will need to ensure that the games on the handheld device are able to receive and interpret the input from your smartwatch. This will typically involve implementing a game controller API in the games, which can receive and interpret the input from the smartwatch using the AOA protocol. Alternatively, you may be able to use a third-party game controller app on the handheld device that can receive input from your smartwatch and translate it into a format that is compatible with the games. This would require the app to support the AOA protocol and to have the necessary APIs to receive and interpret the input from your smartwatch. Overall, the key to making your wearable smartwatch work as a game controller for third-party games is to implement a standardized game controller protocol and ensure that the games are able to receive and interpret the input from your smartwatch.
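For context on the question's note that third-party games support hardware controllers through the Android SDK: such games read standard gamepad input delivered by the platform, e.g. KeyEvents whose source is SOURCE_GAMEPAD. A sketch of that receiving side inside a game Activity (this is what games expect; an ordinary companion app cannot inject such events globally without system privileges):

// assumes android.view.KeyEvent and android.view.InputDevice imports
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    boolean fromGamepad =
            (event.getSource() & InputDevice.SOURCE_GAMEPAD) == InputDevice.SOURCE_GAMEPAD;
    if (fromGamepad && keyCode == KeyEvent.KEYCODE_BUTTON_A) {
        // game-specific action for the A button
        return true;
    }
    return super.onKeyDown(keyCode, event);
}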
Game Command format between bluetooth controller and console
I am creating an Android project, in which I have to make android wearable smart watch as game controller which can send commands to games running on handheld device connected to that smartwatch over BLE(Bluetooth Low Energy). I designed controller pad on wearable and can send some hard coded text to handheld device app on soft button click of controller pad. Issue is, I have to replace that text with game commands format expected by games running on handheld device. And, the app running on handheld device can listen text through wearable service. I know that, third party games would not have any wearable service running, so how third party games will accept/listen command sending from wearable smartwatch. Third party games can support hardware controller through Android SDK
[ "In order to make your wearable smartwatch work as a game controller for third-party games, you will need to implement a standardized game controller protocol that the games can use to interpret the input from your smartwatch. The most widely used protocol for game controller input on Android is the Android Open Accessory Protocol (AOA), which you can use to connect your smartwatch to the handheld device and send game controller input to the games.\nOnce you have implemented the AOA protocol on your smartwatch, you will need to ensure that the games on the handheld device are able to receive and interpret the input from your smartwatch. This will typically involve implementing a game controller API in the games, which can receive and interpret the input from the smartwatch using the AOA protocol.\nAlternatively, you may be able to use a third-party game controller app on the handheld device that can receive input from your smartwatch and translate it into a format that is compatible with the games. This would require the app to support the AOA protocol and to have the necessary APIs to receive and interpret the input from your smartwatch.\nOverall, the key to making your wearable smartwatch work as a game controller for third-party games is to implement a standardized game controller protocol and ensure that the games are able to receive and interpret the input from your smartwatch.\n" ]
[ 0 ]
[]
[]
[ "android", "android_ble", "gamecontroller", "wear_os" ]
stackoverflow_0038360252_android_android_ble_gamecontroller_wear_os.txt
Q: Flutter Provider package, Consumer does not update UI, on removing item from list As seen in the picture, i pressed the blue button 3 times, which added 3 card widgets in myList. Also in terminal it shows 3 items are added in myList. But when i longPress on 3rd Card to it, it infact removes from myList but does not update the UI. Also, if i try removing 3rd item, again: ======== Exception caught by gesture =============================================================== The following RangeError was thrown while handling a gesture: RangeError (index): Invalid value: Not in inclusive range 0..1: 2 My full code is: (controller.dart) import 'package:flutter/cupertino.dart'; class MyController extends ChangeNotifier{ var myList = []; void addItemsInList(){ myList.add('item#${myList.length} '); //todo: 1* forgot notifyListeners(); } void removeItems(index){ myList.removeAt(index) ; } } full code of view.dart import 'package:flutter/material.dart'; import 'package:provider/provider.dart'; import 'package:provider_4/controller/controller_file.dart'; class MyApp extends StatelessWidget { const MyApp({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp( home: Consumer<MyController>( builder: (context, snapshot, child) { return Scaffold( floatingActionButton: FloatingActionButton( onPressed: (){ Provider . of <MyController> (context, listen: false) . addItemsInList(); print('myList.length gives: ${snapshot.myList.length}'); print(snapshot.myList); }, child: Icon(Icons.add), ), body: ListView.builder( itemCount: snapshot.myList.length , // replace with something like myList.length itemBuilder: (context, index) => Card( child: ListTile( onLongPress: () { Provider . of <MyController> (context, listen: false).removeItems(index); // snapshot.myList.removeAt(index); print(snapshot.myList); }, title: Text( 'Title', // replace with something like myList[index].title style: TextStyle( fontSize: 20, color: Colors.black87, fontWeight: FontWeight.bold, ), ), subtitle: Text( 'Details of title above', // replace with something like myList[index].details style: TextStyle( fontSize: 20, color: Colors.deepPurple, fontWeight: FontWeight.bold, ), ), trailing: Icon(Icons.check_circle, color: Colors.green,), ), ), ), ); } ), ); } } A: You are getting RangeError because you are already deleting but the UI is not notified. Add "notifyListeners();" at the end of the removeItems functions. A: try to add ValueKey to your ListTile itemBuilder: (context, index) => Card( child: ListTile( key: UniqueKey(), onLongPress: () { ..... A: you need to refresh the UI after removing the item via notifyListeners(); void removeItems(index){ myList.removeAt(index) ; notifyListeners(); }
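One detail the sample never shows is MyController being registered above the Consumer; without a ChangeNotifierProvider in scope, Provider.of<MyController> throws at runtime. A minimal setup sketch for main():

void main() {
  runApp(
    ChangeNotifierProvider(
      create: (_) => MyController(),
      child: const MyApp(),
    ),
  );
}

The notifyListeners() fix from the answers below still applies to removeItems().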
Flutter Provider package, Consumer does not update UI, on removing item from list
As seen in the picture, i pressed the blue button 3 times, which added 3 card widgets in myList. Also in terminal it shows 3 items are added in myList. But when i longPress on 3rd Card to it, it infact removes from myList but does not update the UI. Also, if i try removing 3rd item, again: ======== Exception caught by gesture =============================================================== The following RangeError was thrown while handling a gesture: RangeError (index): Invalid value: Not in inclusive range 0..1: 2 My full code is: (controller.dart) import 'package:flutter/cupertino.dart'; class MyController extends ChangeNotifier{ var myList = []; void addItemsInList(){ myList.add('item#${myList.length} '); //todo: 1* forgot notifyListeners(); } void removeItems(index){ myList.removeAt(index) ; } } full code of view.dart import 'package:flutter/material.dart'; import 'package:provider/provider.dart'; import 'package:provider_4/controller/controller_file.dart'; class MyApp extends StatelessWidget { const MyApp({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp( home: Consumer<MyController>( builder: (context, snapshot, child) { return Scaffold( floatingActionButton: FloatingActionButton( onPressed: (){ Provider . of <MyController> (context, listen: false) . addItemsInList(); print('myList.length gives: ${snapshot.myList.length}'); print(snapshot.myList); }, child: Icon(Icons.add), ), body: ListView.builder( itemCount: snapshot.myList.length , // replace with something like myList.length itemBuilder: (context, index) => Card( child: ListTile( onLongPress: () { Provider . of <MyController> (context, listen: false).removeItems(index); // snapshot.myList.removeAt(index); print(snapshot.myList); }, title: Text( 'Title', // replace with something like myList[index].title style: TextStyle( fontSize: 20, color: Colors.black87, fontWeight: FontWeight.bold, ), ), subtitle: Text( 'Details of title above', // replace with something like myList[index].details style: TextStyle( fontSize: 20, color: Colors.deepPurple, fontWeight: FontWeight.bold, ), ), trailing: Icon(Icons.check_circle, color: Colors.green,), ), ), ), ); } ), ); } }
[ "You are getting RangeError because you are already deleting but the UI is not notified.\nAdd \"notifyListeners();\" at the end of the removeItems functions.\n", "try to add ValueKey to your ListTile\nitemBuilder: (context, index) => Card(\n child: ListTile(\n key: UniqueKey(),\n onLongPress: () {\n .....\n\n", "you need to refresh the UI after removing the item via notifyListeners();\nvoid removeItems(index){\nmyList.removeAt(index) ;\nnotifyListeners(); }\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dart", "flutter", "provider", "state_management" ]
stackoverflow_0074356705_dart_flutter_provider_state_management.txt
Q: Fire and Forget in an ASP.Net Core application I am working on a web API that needs to write logging information to a separate service. This service slows down the execution of my current service so I'm interested in a Fire & Forget solution. Even though this can result in some logging information to go missing as the application pool recycles. Now, my idea is to use a background task with a queue. Every log entry gets pushed to this queue and this background task just keeps processing the items on this queue sending them to the log service as fast as it can. Adding these items to the queue means my code just pushes them to the queue and doesn't need to care for what happens afterwards... Failures with logging are not for the visitor to solve anyways. There errors will actually result in exceptions written to a file instead, so administrators can check the /faults folder. Not sure if this is the most optimal solution, but my API must return a response ASAP and just can't spent time to wait for the logging service to process the log entry. (The Log API is a slow and old monstrosity that's unfortunately a requirement in my project.) So my API has a response ready in 50 to 100 ms while the Log API takes 800 to 2,000 ms per request! So, fire and forget is my only option for now. Thing is, how do I best build this background task to keep an eye on this queue and process any new entries while basically doing nothing as long as the queue is empty. One other limitation, though: I don't want any third-party libraries in this project due to security concerns. This means I need to understand what the code does and not rely on a black box... A: It sounds like your priority is to ensure that the main web API can return responses as quickly as possible, while still logging information to the separate log service. In order to do this, you can use a queue to store the log information and have a background task process the entries in the queue asynchronously. This way, the main web API can add items to the queue and immediately return a response without waiting for the log service to process the information. One way to implement this without using any third-party libraries is to use the built-in Queue class in .NET. This class allows you to add items to the queue and process them asynchronously using a separate thread. For example, you could create a Queue object and a background thread in your web API, and then add items to the queue from the main thread when you want to log information. The background thread can then continuously check the queue for new items and process them as they become available. Here's an example of how this could work: using System.Threading; using System.Collections.Generic; public class LoggingService { private Queue<string> logQueue; private Thread backgroundThread; public LoggingService() { logQueue = new Queue<string>(); backgroundThread = new Thread(ProcessLogQueue); backgroundThread.IsBackground = true; backgroundThread.Start(); } public void Log(string message) { lock (logQueue) { logQueue.Enqueue(message); } } private void ProcessLogQueue() { while (true) { string message = null; lock (logQueue) { if (logQueue.Count > 0) { message = logQueue.Dequeue(); } } if (message != null) { // Send the log message to the log service here } else { // If the queue is empty, wait a short time before checking again Thread.Sleep(500); } } } } In the code above, the LoggingService class creates a Queue object and a background thread in its constructor. 
The Log method adds items to the queue, and the ProcessLogQueue method runs on the background thread and continuously checks the queue for new items. If the queue is empty, the thread waits for a short time before checking again. When an item is available in the queue, the thread processes it by sending it to the log service. Of course, this is just one way to implement a background task to process items in a queue. Depending on your specific requirements and the limitations of your project, you may need to adjust the implementation to fit your needs. However, this should give you a good starting point for building a fire and forget logging solution for your web API.
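A variant of the answer above that stays within framework pieces shipping with ASP.NET Core: System.Threading.Channels plus a hosted BackgroundService avoids the manual locking and the polling Thread.Sleep. Class names are illustrative:

using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class LogQueue
{
    private readonly Channel<string> _channel = Channel.CreateUnbounded<string>();

    public void Log(string message) => _channel.Writer.TryWrite(message);

    public ChannelReader<string> Reader => _channel.Reader;
}

public class LogForwarder : BackgroundService
{
    private readonly LogQueue _queue;

    public LogForwarder(LogQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // waits asynchronously while the queue is empty; no polling loop needed
        await foreach (var message in _queue.Reader.ReadAllAsync(stoppingToken))
        {
            // send message to the slow log API here; write failures to the /faults folder
        }
    }
}

// registration in Program.cs:
// builder.Services.AddSingleton<LogQueue>();
// builder.Services.AddHostedService<LogForwarder>();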
Fire and Forget in an ASP.Net Core application
I am working on a web API that needs to write logging information to a separate service. This service slows down the execution of my current service so I'm interested in a Fire & Forget solution. Even though this can result in some logging information to go missing as the application pool recycles. Now, my idea is to use a background task with a queue. Every log entry gets pushed to this queue and this background task just keeps processing the items on this queue sending them to the log service as fast as it can. Adding these items to the queue means my code just pushes them to the queue and doesn't need to care for what happens afterwards... Failures with logging are not for the visitor to solve anyways. There errors will actually result in exceptions written to a file instead, so administrators can check the /faults folder. Not sure if this is the most optimal solution, but my API must return a response ASAP and just can't spent time to wait for the logging service to process the log entry. (The Log API is a slow and old monstrosity that's unfortunately a requirement in my project.) So my API has a response ready in 50 to 100 ms while the Log API takes 800 to 2,000 ms per request! So, fire and forget is my only option for now. Thing is, how do I best build this background task to keep an eye on this queue and process any new entries while basically doing nothing as long as the queue is empty. One other limitation, though: I don't want any third-party libraries in this project due to security concerns. This means I need to understand what the code does and not rely on a black box...
[ "It sounds like your priority is to ensure that the main web API can return responses as quickly as possible, while still logging information to the separate log service. In order to do this, you can use a queue to store the log information and have a background task process the entries in the queue asynchronously. This way, the main web API can add items to the queue and immediately return a response without waiting for the log service to process the information.\nOne way to implement this without using any third-party libraries is to use the built-in Queue class in .NET. This class allows you to add items to the queue and process them asynchronously using a separate thread. For example, you could create a Queue object and a background thread in your web API, and then add items to the queue from the main thread when you want to log information. The background thread can then continuously check the queue for new items and process them as they become available.\nHere's an example of how this could work:\nusing System.Threading;\nusing System.Collections.Generic;\n\npublic class LoggingService\n{\n private Queue<string> logQueue;\n private Thread backgroundThread;\n\n public LoggingService()\n {\n logQueue = new Queue<string>();\n backgroundThread = new Thread(ProcessLogQueue);\n backgroundThread.IsBackground = true;\n backgroundThread.Start();\n }\n\n public void Log(string message)\n {\n lock (logQueue)\n {\n logQueue.Enqueue(message);\n }\n }\n\n private void ProcessLogQueue()\n {\n while (true)\n {\n string message = null;\n lock (logQueue)\n {\n if (logQueue.Count > 0)\n {\n message = logQueue.Dequeue();\n }\n }\n\n if (message != null)\n {\n // Send the log message to the log service here\n }\n else\n {\n // If the queue is empty, wait a short time before checking again\n Thread.Sleep(500);\n }\n }\n }\n}\n\nIn the code above, the LoggingService class creates a Queue object and a background thread in its constructor. The Log method adds items to the queue, and the ProcessLogQueue method runs on the background thread and continuously checks the queue for new items. If the queue is empty, the thread waits for a short time before checking again. When an item is available in the queue, the thread processes it by sending it to the log service.\nOf course, this is just one way to implement a background task to process items in a queue. Depending on your specific requirements and the limitations of your project, you may need to adjust the implementation to fit your needs. However, this should give you a good starting point for building a fire and forget logging solution for your web API.\n" ]
[ 1 ]
[]
[]
[ "asp.net_core_6.0", "background_task", "c#" ]
stackoverflow_0074670779_asp.net_core_6.0_background_task_c#.txt
Q: function that returns the length of the longest run of repetition in a given list I'm trying to write a function that returns the length of the longest run of repetition in a given list Here is my code: ` def longest_repetition(a): longest = 0 j = 0 run2 = 0 while j <= len(a)-1: for i in a: run = a.count(a[j] == i) if run == 1: run2 += 1 if run2 > longest: longest = run2 j += 1 run2 = 0 return longest print(longest_repetition([4,1,2,4,7,9,4])) print(longest_repetition([5,3,5,6,9,4,4,4,4])) 3 0 ` The first test function works fine, but the second test function is not counting at all and I'm not sure why. Any insight is much appreciated Edit: Just noticed that the question I was given and the expected results are not consistent. So what I'm basically trying to do is find the most repeated element in a list and the output would be the number of times it is repeated. That said, the output for the second test function should be 4 because the element '4' is repeated four times (elements are not required to be in one run as implied in my original question) A: First of all, let's check if you were consistent with your question (function that returns the length of the longest run of repetition): e.g.: a = [4,1,2,4,7,9,4] b = [5,3,5,6,9,4,4,4,4] (assuming you are only checking a single position, e.g. c = [1,2,3,1,2,3] could have one repetition of the sequence 1,2,3 - I am assuming that is not your goal) So: for a, there are no repetitions of the same value, therefore the length equals 0; for b, you have one quadruple repetition of 4, therefore the length equals 4. First, your max_amount_of_repetitions=0 and current_repetitions_run=0. So, what you need to do to detect a repetition is simply check if the values of the (n-1)'th and n'th elements are the same. If so, you increment current_repetitions_run; else, you reset current_repetitions_run=0. The last step is to check if your current run is the longest of all: max_amount_of_repetitions = max(max_amount_of_repetitions, current_repetitions_run) To surely get both n-1 and n within your list range, I'd simply start the iteration from the second element. That way, n-1 is the first element. for n in range(1,len(a)): if a[n-1] == a[n]: print("I am sure, you can figure out the rest") A: You can use a hash table to calculate the frequency of each element and then take the max of the frequencies. Using a functional approach: from collections import Counter def longest_repetition(array): return max(Counter(array).values()) Another way, without using Counter: def longest_repetition(array): freq = {} for val in array: if val not in freq: freq[val] = 0 freq[val] += 1 values = freq.values() return max(values)
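For the consecutive-run reading of the original title (before the edit redefined the task as overall frequency), a compact standard-library sketch using itertools.groupby:

from itertools import groupby

def longest_run(a):
    # size of the longest block of equal, adjacent elements
    return max((sum(1 for _ in group) for _, group in groupby(a)), default=0)

print(longest_run([4, 1, 2, 4, 7, 9, 4]))        # 1 (no adjacent repeats)
print(longest_run([5, 3, 5, 6, 9, 4, 4, 4, 4]))  # 4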
function that returns the length of the longest run of repetition in a given list
I'm trying to write a function that returns the length of the longest run of repetition in a given list. Here is my code: ` def longest_repetition(a): longest = 0 j = 0 run2 = 0 while j <= len(a)-1: for i in a: run = a.count(a[j] == i) if run == 1: run2 += 1 if run2 > longest: longest = run2 j += 1 run2 = 0 return longest print(longest_repetition([4,1,2,4,7,9,4])) print(longest_repetition([5,3,5,6,9,4,4,4,4])) 3 0 ` The first test call works fine, but the second one is not counting at all and I'm not sure why. Any insight is much appreciated. Edit: I just noticed that the question I was given and the expected results are not consistent. So what I'm basically trying to do is find the most repeated element in a list, and the output would be the number of times it is repeated. That said, the output for the second test call should be 4, because the element '4' is repeated four times (elements are not required to be in one run as implied in my original question).
[ "First of all, let's check if you were consistent with your question (function that returns the length of the longest run of repetition):\ne.g.:\na = [4,1,2,4,7,9,4]\nb = [5,3,5,6,9,4,4,4,4]\n(assuming, you are only checking single position, e.g. c = [1,2,3,1,2,3] could have one repetition of sequence 1,2,3 - i am assuming that is not your goal)\nSo:\nfor a, there is no repetitions of same value, therefore length equals 0\nfor b, you have one, quadruple repetition of 4, therefore length equals 4\nFirst, your max_amount_of_repetitions=0 and current_repetitions_run=0' So, what you need to do to detect repetition is simply check if value of n-1'th and n'th element is same. If so, you increment current_repetitions_run', else, you reset current_repetitions_run=0.\nLast step is check if your current run is longest of all:\nmax_amount_of_repetitions= max(max_amount_of_repetitions, current_repetitions_run)\nto surely get both n-1 and n within your list range, I'd simply start iteration from second element. That way, n-1 is first element.\nfor n in range(1,len(a)):\n if a[n-1] == a[n]:\n print(\"I am sure, you can figure out the rest\")\n\n", "you can use hash to calculate the frequency of the element and then get the max of frequencies.\nusing functional approach\nfrom collections import Counter\ndef longest_repitition(array):\n return max(Counter(array).values())\n\nother way, without using Counter\ndef longest_repitition(array):\n freq = {}\n for val in array:\n if val not in freq:\n freq[val] = 0\n freq[val] += 1\n values = freq.values()\n return max(values)\n\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074670644_list_python.txt
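To complement the answers above, here is a short sketch covering both readings of the question with only the standard library; note that with itertools.groupby a lone element counts as a run of length 1 (the first answer above counts it as 0, which is a definitional choice).

from itertools import groupby
from collections import Counter

def longest_run(a):
    # Length of the longest run of equal consecutive elements.
    return max((sum(1 for _ in group) for _, group in groupby(a)), default=0)

def most_repeated(a):
    # Count of the most frequent element anywhere in the list (the edited requirement).
    return max(Counter(a).values(), default=0)

print(longest_run([5, 3, 5, 6, 9, 4, 4, 4, 4]))  # 4
print(most_repeated([4, 1, 2, 4, 7, 9, 4]))      # 3 (three 4s in total)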
Q: Restricted access to external storage, android < 10 I want to read and write files in user storage in android < 10 that applied Scoped storage and for some reason, I cannot use the privacy-friendly storage access for that therefore I need to use All files access permission. For getting the permission I'm using the Permission Handler package and calling the Permission.manageExternalStorage.request() like the following: Future<void> _requestManageExternalStorage() async { if(!await Permission.manageExternalStorage.isGranted) { await Permission.manageExternalStorage.request(); } } Alas, once I called that method this is what I got: No permissions found in manifest for: []22 As for the permissions list within Android Manifest.xml: <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" /> note that I put it in the correct location, which is in /android/app/src/main/AndroidManifest.xml with that being said, the output given by the Permission Handler is pretty much non-sense. I tried to clean the projects with flutter clean and even create a new project only to test it but it still failed to retrieve the wanted permission. I tried to check what actually is the current permission status for manageExternalStorage which I got: PermissionStatus.restricted From the documentation PermissionStatus.restricted means: The OS denied access to the requested feature. The user cannot change this app's status, possibly due to active restrictions such as parental controls being in place. Only supported on iOS. From that, I can only assume that the OS is forcefully denied the All Files Access? I'm using permission_handler: ^8.2.5 which at the time writing this is the latest version. The full code I use to test: import 'package:flutter/material.dart'; import 'package:permission_handler/permission_handler.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData( primarySwatch: Colors.blue, ), home: const MyHomePage(title: 'Flutter Demo Home Page'), ); } } class MyHomePage extends StatefulWidget { const MyHomePage({Key? 
key, required this.title}) : super(key: key); final String title; @override State<MyHomePage> createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { bool isGranted = false; @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(widget.title), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ const Text( 'Manage External Storage Test:', ), Text( 'Is granted: $isGranted', style: Theme.of(context).textTheme.headline4, ), ], ), ), floatingActionButton: FloatingActionButton( onPressed: _requestManageExternalStorage, ), ); } Future<void> _requestManageExternalStorage() async { if(!await Permission.manageExternalStorage.isGranted) { final res = await Permission.manageExternalStorage.request(); setState((){ isGranted = res.isGranted; }); } } } Full AndroidManifest.xml: <manifest xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" package="com.example.file_access_test"> <!-- Permissions options for the `storage` group --> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <!-- Permissions options for the `manage external storage` group --> <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" /> <application android:name="io.flutter.app.FlutterApplication" android:icon="@mipmap/ic_launcher" android:label="MAKE IT DAMN WORK" tools:ignore="AllowBackup,GoogleAppIndexingWarning"> <activity android:name="io.flutter.embedding.android.FlutterActivity" android:launchMode="singleTop" android:theme="@android:style/Theme.Black.NoTitleBar" android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale|layoutDirection" android:hardwareAccelerated="true" android:windowSoftInputMode="adjustResize" android:exported="true"> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> <meta-data android:name="flutterEmbedding" android:value="2" /> </application> </manifest> Any insight or help is very appreciated! tested in android 10 A: The android.permission.MANAGE_EXTERNAL_STORAGE permission is introduced with Android 11 (API 30) and therefore not supported on Android 10 or lower (see official documentation). The permission_handler plugin returns PermissionStatus.restricted because it doesn't know how to handle the permission on Android 10 and below. In case of Android 10 and lower simply restrict to requesting the android.permission.READ_EXTERNAL_STORAGE and android.permission.WRITE_EXTERNAL_STORAGE permissions. The log message is printed because the android.permission.MANAGE_EXTERNAL_STORAGE permission is not returned as part of the array returned by the Context.getPackageInfo().requestedPermissions API. Therefore the permission_handler plugin thinks the permission is not listed in the AndroidManifest.xml file. In this case this is of course not entirely true, however my assumption is that Android filters out permissions that don't apply on the current Android version. As maintainer of the permission_handler plugin I will make sure the log message gets updated so it takes this option also into account. A: This is how i implemented it. I am basically checking whick version of SDK is the android device using. If the device is below Android 11, we'll ask for basic permission. 
If the device is android 11 or above, then we'll ask for manage external storage, also. I am using a bool to know if the device got the right permissions. In my test, it works. Future<bool> getPermissions() async { bool gotPermissions = false; var androidInfo = await DeviceInfoPlugin().androidInfo; var release = androidInfo.version.release; // Version number, example: Android 12 var sdkInt = androidInfo.version.sdkInt; // SDK, example: 31 var manufacturer = androidInfo.manufacturer; var model = androidInfo.model; print('Android $release (SDK $sdkInt), $manufacturer $model'); if (Platform.isAndroid) { var storage = await Permission.storage.status; if (storage != PermissionStatus.granted) { await Permission.storage.request(); } if (sdkInt >= 30) { var storage_external = await Permission.manageExternalStorage.status; if (storage_external != PermissionStatus.granted) { await Permission.manageExternalStorage.request(); } storage_external = await Permission.manageExternalStorage.status; if (storage_external == PermissionStatus.granted && storage == PermissionStatus.granted) { gotPermissions = true; } } else { // (SDK < 30) storage = await Permission.storage.status; if (storage == PermissionStatus.granted) { gotPermissions = true; } } } return gotPermissions; }
Restricted access to external storage, android < 10
I want to read and write files in user storage in android < 10 that applied Scoped storage and for some reason, I cannot use the privacy-friendly storage access for that therefore I need to use All files access permission. For getting the permission I'm using the Permission Handler package and calling the Permission.manageExternalStorage.request() like the following: Future<void> _requestManageExternalStorage() async { if(!await Permission.manageExternalStorage.isGranted) { await Permission.manageExternalStorage.request(); } } Alas, once I called that method this is what I got: No permissions found in manifest for: []22 As for the permissions list within Android Manifest.xml: <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" /> note that I put it in the correct location, which is in /android/app/src/main/AndroidManifest.xml with that being said, the output given by the Permission Handler is pretty much non-sense. I tried to clean the projects with flutter clean and even create a new project only to test it but it still failed to retrieve the wanted permission. I tried to check what actually is the current permission status for manageExternalStorage which I got: PermissionStatus.restricted From the documentation PermissionStatus.restricted means: The OS denied access to the requested feature. The user cannot change this app's status, possibly due to active restrictions such as parental controls being in place. Only supported on iOS. From that, I can only assume that the OS is forcefully denied the All Files Access? I'm using permission_handler: ^8.2.5 which at the time writing this is the latest version. The full code I use to test: import 'package:flutter/material.dart'; import 'package:permission_handler/permission_handler.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData( primarySwatch: Colors.blue, ), home: const MyHomePage(title: 'Flutter Demo Home Page'), ); } } class MyHomePage extends StatefulWidget { const MyHomePage({Key? 
key, required this.title}) : super(key: key); final String title; @override State<MyHomePage> createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { bool isGranted = false; @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(widget.title), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ const Text( 'Manage External Storage Test:', ), Text( 'Is granted: $isGranted', style: Theme.of(context).textTheme.headline4, ), ], ), ), floatingActionButton: FloatingActionButton( onPressed: _requestManageExternalStorage, ), ); } Future<void> _requestManageExternalStorage() async { if(!await Permission.manageExternalStorage.isGranted) { final res = await Permission.manageExternalStorage.request(); setState((){ isGranted = res.isGranted; }); } } } Full AndroidManifest.xml: <manifest xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" package="com.example.file_access_test"> <!-- Permissions options for the `storage` group --> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <!-- Permissions options for the `manage external storage` group --> <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" /> <application android:name="io.flutter.app.FlutterApplication" android:icon="@mipmap/ic_launcher" android:label="MAKE IT DAMN WORK" tools:ignore="AllowBackup,GoogleAppIndexingWarning"> <activity android:name="io.flutter.embedding.android.FlutterActivity" android:launchMode="singleTop" android:theme="@android:style/Theme.Black.NoTitleBar" android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale|layoutDirection" android:hardwareAccelerated="true" android:windowSoftInputMode="adjustResize" android:exported="true"> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> <meta-data android:name="flutterEmbedding" android:value="2" /> </application> </manifest> Any insight or help is very appreciated! tested in android 10
[ "The android.permission.MANAGE_EXTERNAL_STORAGE permission is introduced with Android 11 (API 30) and therefore not supported on Android 10 or lower (see official documentation).\nThe permission_handler plugin returns PermissionStatus.restricted because it doesn't know how to handle the permission on Android 10 and below. In case of Android 10 and lower simply restrict to requesting the android.permission.READ_EXTERNAL_STORAGE and android.permission.WRITE_EXTERNAL_STORAGE permissions.\nThe log message is printed because the android.permission.MANAGE_EXTERNAL_STORAGE permission is not returned as part of the array returned by the Context.getPackageInfo().requestedPermissions API. Therefore the permission_handler plugin thinks the permission is not listed in the AndroidManifest.xml file. In this case this is of course not entirely true, however my assumption is that Android filters out permissions that don't apply on the current Android version.\nAs maintainer of the permission_handler plugin I will make sure the log message gets updated so it takes this option also into account.\n", "This is how i implemented it. I am basically checking whick version of SDK is the android device using. If the device is below Android 11, we'll ask for basic permission. If the device is android 11 or above, then we'll ask for manage external storage, also. I am using a bool to know if the device got the right permissions. In my test, it works.\nFuture<bool> getPermissions() async {\n bool gotPermissions = false;\n\n var androidInfo = await DeviceInfoPlugin().androidInfo;\n var release =\n androidInfo.version.release; // Version number, example: Android 12\n var sdkInt = androidInfo.version.sdkInt; // SDK, example: 31\n var manufacturer = androidInfo.manufacturer;\n var model = androidInfo.model;\n print('Android $release (SDK $sdkInt), $manufacturer $model');\n\n if (Platform.isAndroid) {\n \n var storage = await Permission.storage.status;\n\n if (storage != PermissionStatus.granted) {\n await Permission.storage.request();\n }\n\n if (sdkInt >= 30) {\n\n var storage_external = await Permission.manageExternalStorage.status;\n\n if (storage_external != PermissionStatus.granted) {\n await Permission.manageExternalStorage.request();\n }\n\n storage_external = await Permission.manageExternalStorage.status;\n\n if (storage_external == PermissionStatus.granted &&\n storage == PermissionStatus.granted) {\n gotPermissions = true;\n }\n } else {\n // (SDK < 30)\n\n storage = await Permission.storage.status;\n\n if (storage == PermissionStatus.granted) {\n gotPermissions = true;\n }\n }\n }\n\n return gotPermissions;\n }\n\n" ]
[ 1, 0 ]
[]
[]
[ "android", "flutter" ]
stackoverflow_0069759723_android_flutter.txt
Q: Is there any way to avoid nested subscribe? (https://i.stack.imgur.com/2zm2w.png) collectionData(queryRef).subscribe((data) => { for (const each of data) { this.getCourse(each.courseId) .pipe(take(1)) .subscribe((courseData) => { const course = courseData[0]; console.log(course); this.getLecturer(course.lecturerId).pipe(take(1)).subscribe((res: any)=>{ const lecturer = res[0]; course.lecturerName = lecturer.lecturerName; course.lecturerImageUrl = lecturer.lecturerImageUrl; }); recentVisit.push(course); }); } }); Hi, I am still new to RxJS in Angular. I am building an Ionic app using AngularFire. I'm currently facing some problems: I'm using Firebase as my backend, and I have to query through different collections to fetch my data. For example, the first subscription only fetches the user's course enrollment data like courseId, progress, etc.; the second subscription fetches the course details, and the third fetches the lecturer details. Can anyone give some suggestions on how to avoid using nested subscriptions? Many people have said it is not recommended. I would really appreciate a detailed explanation, because I only know the basics of RxJS. I have tried concatMap, but it shows a Firebase error (https://i.stack.imgur.com/6SOS0.png): collectionData(queryRef) .pipe( concatMap((res: any) => this.getCourse(res.courseId)) //concatMap((result2: any) => this.getLecturer(result2.lecturerId)) ) .subscribe((res) => { console.log(res); }); I'm actually not sure whether I did it right, either, because I really cannot understand how concatMap works.
Is there any way to avoid nested subscribe?
(https://i.stack.imgur.com/2zm2w.png) collectionData(queryRef).subscribe((data) => { for (const each of data) { this.getCourse(each.courseId) .pipe(take(1)) .subscribe((courseData) => { const course = courseData[0]; console.log(course); this.getLecturer(course.lecturerId).pipe(take(1)).subscribe((res: any)=>{ const lecturer = res[0]; course.lecturerName = lecturer.lecturerName; course.lecturerImageUrl = lecturer.lecturerImageUrl; }); recentVisit.push(course); }); } }); Hi, I am still new to RxJS in Angular. I am building an Ionic app using AngularFire. I'm currently facing some problems: I'm using Firebase as my backend, and I have to query through different collections to fetch my data. For example, the first subscription only fetches the user's course enrollment data like courseId, progress, etc.; the second subscription fetches the course details, and the third fetches the lecturer details. Can anyone give some suggestions on how to avoid using nested subscriptions? Many people have said it is not recommended. I would really appreciate a detailed explanation, because I only know the basics of RxJS. I have tried concatMap, but it shows a Firebase error (https://i.stack.imgur.com/6SOS0.png): collectionData(queryRef) .pipe( concatMap((res: any) => this.getCourse(res.courseId)) //concatMap((result2: any) => this.getLecturer(result2.lecturerId)) ) .subscribe((res) => { console.log(res); }); I'm actually not sure whether I did it right, either, because I really cannot understand how concatMap works.
[ "I created a solution that prevents nested pipes as well as multiple explicit subscriptions by doing the following:\n\nI combined switchMap and forkJoin\nI outsourced part of the code to the helper-method getMergedCourseDetails() in order to keep the main pipe flat\n\n/* Initialize all information about the courses */\n\nngOnInit(): void {\n this.collectionData(this.queryRef).pipe(\n switchMap(data => {\n if (data.length) {\n\n // Create an observable (backend-request) for each course-id:\n const courseObs = data.map(c => this.getCourse(c.courseId));\n\n // Execute the array of backend-requests via forkJoin():\n return courseObs.length ? forkJoin(courseObs) : of([]);\n }\n return of([]);\n }),\n switchMap((courseDataList: Course[][]) => { \n if (courseDataList.length) {\n\n // Get the first course from each course array (as defined in SO question):\n const courses = courseDataList.filter(c => c.length).map(c => c[0]);\n\n // Create observables to retrieve additional details for each of the courses:\n const detailInfoObs = courses.map(c => this.getMergedCourseDetails(c));\n\n // Execute the created observables via forkJoin():\n return detailInfoObs.length ? forkJoin(detailInfoObs) : of([]);\n }\n return of([]);\n }),\n tap((courseList: Course[]) => {\n courseList.forEach(d => {\n console.log('Lecturer Id:', d.lecturerId);\n console.log('Lecturer Name:', d.lecturerName);\n console.log('Lecturer ImageUrl:', d.lecturerImageUrl);\n });\n }) \n )\n .subscribe();\n}\n\n/* Enrich existing course-data with lecturer-details */\n\nprivate getMergedCourseDetails(course: Course): Observable<Course> {\n return this.getLecturer(course.lecturerId).pipe( \n map(lecturers => \n // Merge existing course-data with newly retrieved lecturer-details: \n ({...course,\n lecturerName: lecturers[0]?.lecturerName ?? '', \n lecturerImageUrl: lecturers[0]?.lecturerImageUrl ?? '' } as Course))\n );\n}\n\n" ]
[ 1 ]
[ "If you use nested Subscriptions, it means it would wait for the first the return a value, then call the second one and so one. Which costs alot of time.\nWhat you could use on this is forkJoin():\nforkJoin(\n {\n a: this.http.call1()..\n b: this.http.call2()..\n c: this.http.call3()..\n }).subscribe()\n\n\nforkJoins waits for all 3 Observables to emit once and gives you all the values.\nExample here: https://www.learnrxjs.io/learn-rxjs/operators/combination/forkjoin\n" ]
[ -1 ]
[ "angular", "angularfire", "firebase", "ionic_framework", "rxjs" ]
stackoverflow_0074667223_angular_angularfire_firebase_ionic_framework_rxjs.txt
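Since the question says concatMap itself is unclear, here is a tiny standalone TypeScript illustration; the Firebase calls are replaced with of()/delay() so it runs without any backend.

import { from, of } from 'rxjs';
import { concatMap, delay } from 'rxjs/operators';

// concatMap maps each value to an inner observable and waits for that inner
// observable to complete before starting the next one, so order is preserved.
from([1, 2, 3]).pipe(
  concatMap(id => of(`course ${id}`).pipe(delay(100)))
).subscribe(console.log);
// logs: course 1, course 2, course 3 (one at a time, in order)

The concatMap attempt in the question failed for a different reason: collectionData(queryRef) emits the whole array at once, so res.courseId on the array is undefined; the array has to be unpacked per element (as the switchMap/forkJoin answer above does) before getCourse can be called.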
Q: How do I download tModLoader's needed libraries? When I downloaded tModLoader I saw this button "enable dev mode". I enabled it, but it didn't download the important libraries. Now I cannot click this button again and I can't download these libraries. How do I download them and apply them in Visual Studio so that they work and show hints, etc., like using Terraria or using Terraria.ModLoader or anything? I searched Google for "how do i disable dev mode" (to make "enable dev mode" show again), "tModLoader library", and "Terraria library". I even tried uninstalling and reinstalling tModLoader, and it didn't work. A: To enable dev mode again in tModLoader, you can try the following steps: Open tModLoader and go to the main menu. Click on the "Mods" option in the top menu. In the mods menu, click on the "Open Mod Browser" button. In the mod browser, click on the "Settings" button in the top-right corner of the screen. In the settings menu, you should see an option to enable or disable dev mode. Make sure that dev mode is enabled, and then click on the "Apply" button to save your changes. Once dev mode is enabled, you should be able to download and install the necessary libraries for tModLoader and Terraria. To do this, you can follow these steps: In the mod browser, search for the libraries that you want to download and install. Click on the library that you want to download, and then click on the "Download" button. Wait for the library to download, and then click on the "Install" button to install it. Repeat this process for each library that you want to download and install. Once you have installed the necessary libraries, you should be able to use them in Visual Studio to make them work with Terraria and tModLoader. You can refer to the documentation for tModLoader and Visual Studio for more information on how to use these libraries and tools.
How do I download tModLoader's needed libraries?
When I downloaded tModLoader I saw this button "enable dev mode". I enabled it, but it didn't download the important libraries. Now I cannot click this button again and I can't download these libraries. How do I download them and apply them in Visual Studio so that they work and show hints, etc., like using Terraria or using Terraria.ModLoader or anything? I searched Google for "how do i disable dev mode" (to make "enable dev mode" show again), "tModLoader library", and "Terraria library". I even tried uninstalling and reinstalling tModLoader, and it didn't work.
[ "To enable dev mode again in tModLoader, you can try the following steps:\nOpen tModLoader and go to the main menu.\nClick on the \"Mods\" option in the top menu.\nIn the mods menu, click on the \"Open Mod Browser\" button.\nIn the mod browser, click on the \"Settings\" button in the top-right corner of the screen.\nIn the settings menu, you should see an option to enable or disable dev mode. Make sure that dev mode is enabled, and then click on the \"Apply\" button to save your changes.\nOnce dev mode is enabled, you should be able to download and install the necessary libraries for tModLoader and Terraria. To do this, you can follow these steps:\nIn the mod browser, search for the libraries that you want to download and install.\nClick on the library that you want to download, and then click on the \"Download\" button.\nWait for the library to download, and then click on the \"Install\" button to install it.\nRepeat this process for each library that you want to download and install.\nOnce you have installed the necessary libraries, you should be able to use them in Visual Studio to make them work with Terraria and tModLoader. You can refer to the documentation for tModLoader and Visual Studio for more information on how to use these libraries and tools.\n" ]
[ 0 ]
[]
[]
[ "c#" ]
stackoverflow_0074670675_c#.txt
Q: What is the best choice to store data in memory using Lambda I'm using Cognito to authenticate my clients (UI + identity pool). I'm using the authorization code grant instead of the implicit grant. If I understood correctly, the code can be exchanged for a JWT in my backend, and my client only handles the authorization code. Therefore, the client never knows the JWT, and I can revoke it at any time. I have to store in my backend, in memory, the key-value association that corresponds to code:jwt. At each API request, I get the JWT associated with the code, and I can make my verifications. Can you confirm that I have understood the mechanism correctly? I'm using AWS Lambda, which is stateless, so I can't store the code:jwt association in my Lambda's memory, since once the Lambda dies, I no longer have access to the data. So I have several possible solutions. I store my code and my JWT in an RDS instance: I think this is not the best solution, since every API request would require querying the RDS. I store them in a DynamoDB instance. AWS MemoryDB: I think it can be a good solution, but it's so expensive! ElastiCache: it uses memory, but I don't know the prices very well. Use the JWT instead of the code, but that is not the most secure solution recommended by AWS. You should know that the project I am working on is a personal project, where there will not be much traffic, but I want to set up all the necessary systems to allow me to scale. The goal of this project is to let me learn more deeply about cloud technologies and to be confronted with problems that could happen. So I try to find the most optimized solutions in terms of performance, but also in terms of cost (because I won't have a lot of data and users). I would like to take advantage of free/cheap offers when there is not a lot of traffic. For example, if I use MemoryDB I'm going to pay 30 euros minimum while I have no traffic, and I'm doing this project just to learn... it's getting expensive. I hope you will understand my problem and help me find the right solution. A: You have several options you can consider for storing the data in memory when using Lambda, including a caching solution such as Amazon ElastiCache, or a service such as Amazon DynamoDB. Using Amazon ElastiCache, you can create an in-memory cache that is easily accessible from your Lambda function. This can be a cost-effective solution, as ElastiCache's free tier covers up to 750 hours per month on a micro cache node. Alternatively, you could use a service like Amazon DynamoDB, a managed key-value store (with DAX available as an optional in-memory cache in front of it). This can be a good option if you need a scalable solution that can handle a high volume of requests. It's worth noting that using a JWT instead of an authentication code can also be a viable option. With a JWT, you can store the token in memory and use it to verify the authenticity of requests. This can be a simpler solution, but it may not be as secure as using an authentication code.
What is the best choice to store data in memory using Lambda
I'm using Cognito to authenticate my clients (UI + identity pool). I'm using the authorization code grant instead of the implicit grant. If I understood correctly, the code can be exchanged for a JWT in my backend, and my client only handles the authorization code. Therefore, the client never knows the JWT, and I can revoke it at any time. I have to store in my backend, in memory, the key-value association that corresponds to code:jwt. At each API request, I get the JWT associated with the code, and I can make my verifications. Can you confirm that I have understood the mechanism correctly? I'm using AWS Lambda, which is stateless, so I can't store the code:jwt association in my Lambda's memory, since once the Lambda dies, I no longer have access to the data. So I have several possible solutions. I store my code and my JWT in an RDS instance: I think this is not the best solution, since every API request would require querying the RDS. I store them in a DynamoDB instance. AWS MemoryDB: I think it can be a good solution, but it's so expensive! ElastiCache: it uses memory, but I don't know the prices very well. Use the JWT instead of the code, but that is not the most secure solution recommended by AWS. You should know that the project I am working on is a personal project, where there will not be much traffic, but I want to set up all the necessary systems to allow me to scale. The goal of this project is to let me learn more deeply about cloud technologies and to be confronted with problems that could happen. So I try to find the most optimized solutions in terms of performance, but also in terms of cost (because I won't have a lot of data and users). I would like to take advantage of free/cheap offers when there is not a lot of traffic. For example, if I use MemoryDB I'm going to pay 30 euros minimum while I have no traffic, and I'm doing this project just to learn... it's getting expensive. I hope you will understand my problem and help me find the right solution.
[ "You have several options you can consider for storing data in memory using Lambda, including using a caching solution such as Amazon ElastiCache, or using a service such as Amazon DynamoDB.\nUsing Amazon ElastiCache, you can create an in-memory cache that is easily accessible from your Lambda function. This can be a cost-effective solution, as ElastiCache has a free tier that allows you to store up to 750 hours of cache.\nAlternatively, you could use a service like Amazon DynamoDB, which provides a managed in-memory cache. This can be a good option if you need a scalable solution that can handle a high volume of requests.\nIt's worth noting that using a JWT instead of an authentication code can also be a viable option. With a JWT, you can store the token in memory and use it to verify the authenticity of requests. This can be a simpler solution, but it may not be as secure as using an authentication code.\n" ]
[ 1 ]
[]
[]
[ "amazon_web_services", "authentication", "aws_lambda", "jwt" ]
stackoverflow_0074646995_amazon_web_services_authentication_aws_lambda_jwt.txt
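As a concrete sketch of the DynamoDB option for the code:jwt association above, assuming a hypothetical table with partition key auth_code and a TTL attribute expires_at (names are assumptions, not fixed by the question); DynamoDB on-demand billing costs nothing while the API sees no traffic, unlike MemoryDB/ElastiCache nodes, which bill per running hour:

import os
import time
import boto3

# Table name comes from an environment variable; adjust to your setup.
table = boto3.resource("dynamodb").Table(os.environ["TOKEN_TABLE"])

def store_token(auth_code: str, jwt: str, ttl_seconds: int = 3600) -> None:
    table.put_item(Item={
        "auth_code": auth_code,
        "jwt": jwt,
        # With TTL enabled on "expires_at", DynamoDB purges the row automatically.
        "expires_at": int(time.time()) + ttl_seconds,
    })

def lookup_token(auth_code: str):
    # Returns the stored JWT, or None if the code is unknown or expired.
    resp = table.get_item(Key={"auth_code": auth_code})
    return resp.get("Item", {}).get("jwt")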
Q: Apache NetBeans file explorer slow response Every time I open the file explorer in NetBeans, it responds very slowly. About a month ago this problem was not there; what should I do to fix it? It always takes a long time, even when I try to open my project or do anything else that needs the file explorer. I am using the Apache NetBeans 15 IDE on Windows 11 x64. I want my NetBeans back to its normal performance like before; it's really frustrating! Please help me! I searched Google many times, and the results said I need to delete broken links to fix it, but I don't know how to do that. What should I do to delete those broken links, and what kind of large zip files should I remove? A: It sounds like you are experiencing a performance issue with your Apache NetBeans IDE. There are a few potential causes of this issue, so I'll provide a few suggestions to try and help you improve the performance of your IDE. First, try disabling any plugins that you are not using. This can help improve the overall performance of the IDE by reducing the amount of code that needs to be executed. To disable a plugin, go to Tools > Plugins, select the plugin you want to disable, and click the "Deactivate" button. Another potential cause of slow performance is a large number of files in your project. If you have a very large project with many files, the file explorer may take longer to load and navigate. In this case, it may be helpful to organize your files into smaller, more manageable groups. Additionally, you may want to try increasing the memory allocated to the IDE. By default, Apache NetBeans uses a certain amount of memory to run. If you are running a large project or many programs at once, this default amount of memory may not be enough, causing the IDE to run slower. To increase the memory allocated to the IDE, go to Tools > Options > Miscellaneous > Memory, and increase the maximum memory limit. If you are still experiencing slow performance after trying these suggestions, you may want to try resetting the IDE to its default settings. This can help resolve any issues that may have been caused by incorrect settings or preferences. To reset the IDE, go to Tools > Options > General, and click the "Reset IDE" button. I hope these suggestions help you improve the performance of your Apache NetBeans IDE.
Apache NetBeans file explorer slow response
Every time I open the file explorer in NetBeans, it responds very slowly. About a month ago this problem was not there; what should I do to fix it? It always takes a long time, even when I try to open my project or do anything else that needs the file explorer. I am using the Apache NetBeans 15 IDE on Windows 11 x64. I want my NetBeans back to its normal performance like before; it's really frustrating! Please help me! I searched Google many times, and the results said I need to delete broken links to fix it, but I don't know how to do that. What should I do to delete those broken links, and what kind of large zip files should I remove?
[ "It sounds like you are experiencing a performance issue with your Apache NetBeans IDE. There are a few potential causes of this issue, so I'll provide a few suggestions to try and help you improve the performance of your IDE.\nFirst, try disabling any plugins that you are not using. This can help improve the overall performance of the IDE by reducing the amount of code that needs to be executed. To disable a plugin, go to Tools > Plugins, select the plugin you want to disable, and click the \"Deactivate\" button.\nAnother potential cause of slow performance is a large number of files in your project. If you have a very large project with many files, the file explorer may take longer to load and navigate. In this case, it may be helpful to organize your files into smaller, more manageable groups.\nAdditionally, you may want to try increasing the memory allocated to the IDE. By default, Apache NetBeans uses a certain amount of memory to run. If you are running a large project or many programs at once, this default amount of memory may not be enough, causing the IDE to run slower. To increase the memory allocated to the IDE, go to Tools > Options > Miscellaneous > Memory, and increase the maximum memory limit.\nIf you are still experiencing slow performance after trying these suggestions, you may want to try resetting the IDE to its default settings. This can help resolve any issues that may have been caused by incorrect settings or preferences. To reset the IDE, go to Tools > Options > General, and click the \"Reset IDE\" button.\nI hope these suggestions help you improve the performance of your Apache NetBeans IDE.\n" ]
[ 0 ]
[]
[]
[ "debugging", "file", "java", "netbeans" ]
stackoverflow_0074671026_debugging_file_java_netbeans.txt
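If the memory setting is not exposed in your build's Options dialog, the heap can also be raised in the IDE's launcher configuration. A sketch of the relevant line in etc/netbeans.conf under the install directory (your file will contain additional flags, which vary by install; only the -J-Xmx value is the point here):

# etc/netbeans.conf
# -J passes the flag through to the JVM; here the heap ceiling is raised to 2 GB.
netbeans_default_options="-J-Xms512m -J-Xmx2048m"

Restart the IDE after editing the file for the change to take effect.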
Q: Creating a booking system in C++ The following two are not an overlap, since they do not share the same reference Id: Reservation foo("Bastuflotta", Date(1, 1, 2022), Date(5, 1, 2022)); Reservation bar("Badtunna", Date(3, 1, 2022), Date(8, 1, 2022)); foo.Overlaps(bar); // returns false I have tried and tried but can't get Overlaps to work. If you have any examples or tips on how to do it, please share your thoughts. Thank you in advance! A: To implement the Overlaps function, you can check if the start or end date of the given Reservation object (other) falls between the start and end dates of the current Reservation object (this), or if the start or end date of the current Reservation object falls between the start and end dates of the given Reservation object. If either of these conditions is true, then the two reservations overlap. Here is one possible implementation of the Overlaps function: bool Reservation::Overlaps(const Reservation& other) const { // Check if the start or end date of the other reservation falls between the start and end dates of this reservation. if ((other.startDate >= this->startDate && other.startDate <= this->endDate) || (other.endDate >= this->startDate && other.endDate <= this->endDate)) { return true; } // Check if the start or end date of this reservation falls between the start and end dates of the other reservation. if ((this->startDate >= other.startDate && this->startDate <= other.endDate) || (this->endDate >= other.startDate && this->endDate <= other.endDate)) { return true; } // If none of the above conditions are true, then the reservations do not overlap. return false; }
Creating a booking system in C++
The following two are not an overlap, since they do not share the same reference Id: Reservation foo("Bastuflotta", Date(1, 1, 2022), Date(5, 1, 2022)); Reservation bar("Badtunna", Date(3, 1, 2022), Date(8, 1, 2022)); foo.Overlaps(bar); // returns false I have tried and tried but can't get Overlaps to work. If you have any examples or tips on how to do it, please share your thoughts. Thank you in advance!
[ "To implement the Overlaps function, you can check if the start or end date of the given Reservation object (other) falls between the start and end dates of the current Reservation object (this), or if the start or end date of the current Reservation object falls between the start and end dates of the given Reservation object. If either of these conditions is true, then the two reservations overlap.\nHere is one possible implementation of the Overlaps function:\nbool Reservation::Overlaps(const Reservation& other) const\n{\n // Check if the start or end date of the other reservation falls between the start and end dates of this reservation.\n if ((other.startDate >= this->startDate && other.startDate <= this->endDate) ||\n (other.endDate >= this->startDate && other.endDate <= this->endDate))\n {\n return true;\n }\n\n // Check if the start or end date of this reservation falls between the start and end dates of the other reservation.\n if ((this->startDate >= other.startDate && this->startDate <= other.endDate) ||\n (this->endDate >= other.startDate && this->endDate <= other.endDate))\n {\n return true;\n }\n\n // If none of the above conditions are true, then the reservations do not overlap.\n return false;\n}\n\n" ]
[ 1 ]
[]
[]
[ "c++" ]
stackoverflow_0074671017_c++.txt
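One detail the answer above does not cover: the question's example expects foo.Overlaps(bar) to return false because the two reservations are for different resources. A sketch that honors that, assuming members resourceId, startDate, and endDate plus a Date type with operator<= (these names are assumptions, as the class definition is not shown); the one-line range check is equivalent to the four conditions in the answer above, just shorter:

#include <iostream>

bool Reservation::Overlaps(const Reservation& other) const
{
    // Different resources never conflict, matching the example in the question.
    if (resourceId != other.resourceId)
        return false;
    // Two closed date ranges intersect iff each starts on or before the other ends.
    return startDate <= other.endDate && other.startDate <= endDate;
}

int main()
{
    Reservation foo("Bastuflotta", Date(1, 1, 2022), Date(5, 1, 2022));
    Reservation bar("Badtunna",    Date(3, 1, 2022), Date(8, 1, 2022));
    std::cout << std::boolalpha << foo.Overlaps(bar) << "\n"; // false: different resources
}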
Q: Python array, get item on position with variable vin = txid['vin'][0]['txid'] How do I get something like: vout = 3 vin = txid['vin'][vout]['txid'] I assume it won't work like this... A: You can even use it as user input. No problem at all. txid = {'vin': [{'txid' : 10}, {'txid' : 20}, {'txid' : 30}, {'txid' : 40}]} vin = txid['vin'][0]['txid'] print(vin) vout = 3 vin = txid['vin'][vout]['txid'] print(vin) Output: 10 40
Python array, get item on position with variable
vin = txid['vin'][0]['txid'] How do I get something like: vout = 3 vin = txid['vin'][vout]['txid'] I assume it won't work like this...
[ "You can even use it as user input. No problem at all.\ntxid = {'vin': [{'txid' : 10}, {'txid' : 20}, {'txid' : 30}, {'txid' : 40}]}\n\nvin = txid['vin'][0]['txid']\nprint(vin)\n\nvout = 3\nvin = txid['vin'][vout]['txid']\nprint(vin)\n\nOutput:\n10\n40\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074671001_arrays_python.txt
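A small usage note on the answer above: when vout comes from user input, it is worth guarding the index, since an out-of-range value raises IndexError. A minimal sketch:

txid = {'vin': [{'txid': 10}, {'txid': 20}, {'txid': 30}, {'txid': 40}]}

vout = 7  # deliberately out of range
vins = txid['vin']
if 0 <= vout < len(vins):
    print(vins[vout]['txid'])
else:
    print(f"vout {vout} is out of range (the list has {len(vins)} entries)")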
Q: How to keep the "\n" format Creating the text in Notepad: ` private void sSubmit_Click(object sender, EventArgs e) { TextWriter txt = new StreamWriter(@"C:\Users\Dat.txt", true); txt.Write(sTxtSurname.Text + ", " + sTxtFirstname.Text + "\n\n"); txt.Close(); } ` Displaying the text in a textbox: ` public void ReadFile() { TextReader reder = File.OpenText(@"C:\Users\Dat.txt"); textBox1.Text = reder.ReadToEnd(); } ` It won't display the "\n". For example, I put in my name and age; when displaying, it should separate the name and age, but it doesn't. ` Outputs: Jiin Taq 19 ` ` Desired Output: Jiin Taq 19 ` A: To keep the "\n" format when displaying text in a textbox, you can use the Environment.NewLine property instead of "\n" in your code. This property will insert a new line character that is appropriate for the current operating system. For example, you could modify the ReadFile method in the following way: public void ReadFile() { TextReader reder = File.OpenText(@"C:\Users\Dat.txt"); textBox1.Text = reder.ReadToEnd().Replace("\n", Environment.NewLine); } This will replace any "\n" characters in the text with the appropriate new line character for the current operating system, allowing the text to be displayed properly in the textbox.
How to keep the "\n" format
Creating the text in Notepad: ` private void sSubmit_Click(object sender, EventArgs e) { TextWriter txt = new StreamWriter(@"C:\Users\Dat.txt", true); txt.Write(sTxtSurname.Text + ", " + sTxtFirstname.Text + "\n\n"); txt.Close(); } ` Displaying the text in a textbox: ` public void ReadFile() { TextReader reder = File.OpenText(@"C:\Users\Dat.txt"); textBox1.Text = reder.ReadToEnd(); } ` It won't display the "\n". For example, I put in my name and age; when displaying, it should separate the name and age, but it doesn't. ` Outputs: Jiin Taq 19 ` ` Desired Output: Jiin Taq 19 `
[ "To keep the \"\\n\" format when displaying text in a textbox, you can use the Environment.NewLine property instead of \"\\n\" in your code. This property will insert a new line character that is appropriate for the current operating system.\nFor example, you could modify the ReadFile method in the following way:\npublic void ReadFile()\n{\n TextReader reder = File.OpenText(@\"C:\\Users\\Dat.txt\");\n textBox1.Text = reder.ReadToEnd().Replace(\"\\n\", Environment.NewLine);\n}\n\nThis will replace any \"\\n\" characters in the text with the appropriate new line character for the current operating system, allowing the text to be displayed properly in the textbox.\n" ]
[ 0 ]
[]
[]
[ "c#", "winforms" ]
stackoverflow_0074670157_c#_winforms.txt
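The Replace() fix in the answer above works; an alternative is to write the file with Environment.NewLine in the first place, so no post-processing is needed when reading. A sketch of the revised click handler from the question, which belongs inside the same form class (the TextBox must also have Multiline = true, or every line break is ignored):

private void sSubmit_Click(object sender, EventArgs e)
{
    // Environment.NewLine is "\r\n" on Windows, which a WinForms TextBox renders.
    using (TextWriter txt = new StreamWriter(@"C:\Users\Dat.txt", true))
    {
        txt.Write(sTxtSurname.Text + ", " + sTxtFirstname.Text
                  + Environment.NewLine + Environment.NewLine);
    } // using disposes (and flushes) the writer even if an exception is thrown
}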
Q: Opening text file when a menu is opened I am building a game that takes a score and player name and puts them in a text file. I want to make a subsection of the menu that outputs a scores table. How do I output the text file in a table that has been built in Scene Builder? (JavaFX, linking with a text field.)
Opening text file when a menu is opened
I am building a game that takes a score and player name and puts them in a text file. I want to make a subsection of the menu that outputs a scores table. How do I output the text file in a table that has been built in Scene Builder? (JavaFX, linking with a text field.)
[ "If you are using Scene Builder to build your user interface, you can add a TableView control to your scene. This control allows you to display data in a tabular form. You will need to create a TableView object in your code and populate it with the data from your text file.\nTo link a TableView control with a TextField in Scene Builder, you can drag the TableView control from the Library panel and drop it onto the scene, then use the Properties panel to set the TableView's fx:id property to a unique identifier (e.g. scoresTable).\nNext, in your code, you can use the fx:id value to get a reference to the TableView object. For example:\n@FXML\nprivate TableView<Score> scoresTable;\n\nYou can then use this reference to populate the TableView with data from your text file. For example:\n// Read data from text file and create a list of Score objects\nList<Score> scores = readScoresFromTextFile(...);\n\n// Set the items for the TableView\nscoresTable.setItems(FXCollections.observableArrayList(scores));\n\nTo learn more about using TableView controls in JavaFX, you can refer to the JavaFX documentation or search for tutorials online.\n", "To display the contents of a text file in a JavaFX application using Scene Builder follow these steps:\n\nIn Scene Builder create a new TableView component and add the appropriate columns for the information in your text file (e.g. name and score).\nIn Java code read contents of the text file and store the data in a list of objects, where each object represents a row in the table.\nBind the data in the list to the TableView using the setItems() method. This will populate the table with the data from the text file.\nIf necessary, use a TableCell to format the data in each cell of the table.\n\n\n// Read the text file and store the data in a list\nList<PlayerScore> playerScores = readPlayerScoresFromTextFile(\"scores.txt\");\n\n// Bind the data to the TableView\ntableView.setItems(FXCollections.observableList(playerScores));\n\n// Format the data in each cell using a TableCell\nnameColumn.setCellFactory(column -> new TableCell<PlayerScore, String>() {\n @Override\n protected void updateItem(String item, boolean empty) {\n super.updateItem(item, empty);\n\n if (item == null || empty) {\n setText(null);\n setStyle(\"\");\n } else {\n setText(item);\n setStyle(\"-fx-font-weight: bold\");\n }\n }\n});\n\n" ]
[ 1, 1 ]
[]
[]
[ "java", "javafx" ]
stackoverflow_0074670903_java_javafx.txt
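Both answers above call readPlayerScoresFromTextFile without defining it. A sketch of one possible implementation, assuming each line of the file looks like "Alice,120" (the format is an assumption; adjust the split to match how the game writes scores):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class PlayerScore {
    private final String name;
    private final int score;

    public PlayerScore(String name, int score) { this.name = name; this.score = score; }

    // PropertyValueFactory("name") / ("score") resolve these getters for the columns.
    public String getName() { return name; }
    public int getScore() { return score; }

    public static List<PlayerScore> readPlayerScoresFromTextFile(String path) throws IOException {
        return Files.readAllLines(Paths.get(path)).stream()
                .filter(line -> !line.isBlank())                 // skip empty lines
                .map(line -> line.split(",", 2))                 // "name,score"
                .map(parts -> new PlayerScore(parts[0].trim(), Integer.parseInt(parts[1].trim())))
                .collect(Collectors.toList());
    }
}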
Q: How can I create a bar chart grouping the frequency of months present in a column? I have a CSV file imported into R as a dataset named bachelors. It has a column I have formatted with as.Date. bachelors$Posted.On looks like this (minimally reproducible) Posted.On 2022-05-18 2022-07-14 2022-07-22 I would like to make a bar chart using the ggplot2 library to plot how many times each month of the year appears in the rows of the column. In the example above, May appears once and July appears twice, so I want a bar plot to display 2 bars reading '1' for May and '2' for July. The format of the labels doesn't matter. I have tried ggplot(bachelors, aes(x = Posted.On, y=count(months(Posted.On)))) + geom_bar(stat='identity') but it throws an error in the first layer.
How can I create a bar chart grouping the frequency of months present in a column?
I have a CSV file imported into R as a dataset named bachelors. It has a column I have formatted with as.Date. bachelors$Posted.On looks like this (minimally reproducible) Posted.On 2022-05-18 2022-07-14 2022-07-22 I would like to make a bar chart using the ggplot2 library to plot how many times each month of the year appears in the rows of the column. In the example above, May appears once and July appears twice, so I want a bar plot to display 2 bars reading '1' for May and '2' for July. The format of the labels doesn't matter. I have tried ggplot(bachelors, aes(x = Posted.On, y=count(months(Posted.On)))) + geom_bar(stat='identity') but it throws an error in the first layer.
[ "You can do\nlibrary(ggplot2)\n\nggplot(bachelors, aes(factor(months(as.Date(Posted.On), TRUE), month.name))) +\n geom_bar() +\n labs(x = \"Month\")\n\n\nCreated on 2022-12-03 with reprex v2.0.2\n\nQuestion data in reproducible format\nbachelors <- structure(list(Posted.On = c(\"2022-05-18\", \"2022-07-14\", \n\"2022-07-22\")), class = \"data.frame\", row.names = c(NA, -3L))\n\n" ]
[ 1 ]
[]
[]
[ "ggplot2", "r", "tidyverse" ]
stackoverflow_0074670973_ggplot2_r_tidyverse.txt
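Since the question is tagged tidyverse, here is an equivalent that counts the months explicitly before plotting; it gives the same result as the geom_bar() answer above, but the intermediate count table is available for inspection:

library(dplyr)
library(ggplot2)

bachelors %>%
  mutate(month = factor(months(as.Date(Posted.On), abbreviate = TRUE),
                        levels = month.abb)) %>%   # order bars Jan..Dec
  count(month) %>%                                  # one row per month with n
  ggplot(aes(month, n)) +
  geom_col() +
  labs(x = "Month", y = "Count")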
Q: (flask + socket.IO) Result of emit callback is the response of my REST endpoint Just to give some context here: I'm a Node.js developer, but I'm on a project where I need to work with Python using the Flask framework. The problem is, when a client sends a request to an endpoint of my REST Flask app, I need to emit an event using Socket.IO and get some data from the socket server; that data is then the response of the endpoint. But I haven't figured out how to send this, because Flask needs a "return" statement saying what the response is, and my callback runs in another context. A sample of what I'm trying to do (there are some comments explaining): import socketio import eventlet from flask import Flask, request sio = socketio.Server() app = Flask(__name__) @app.route('/test/<param>') def get(param): def ack(data): print (data) #Should be the response sio.emit('event', param, callback=ack) # Socket server call my ack function #Without a return statement, the endpoint return 500 if __name__ == '__main__': app = socketio.Middleware(sio, app) eventlet.wsgi.server(eventlet.listen(('', 8000)), app) Maybe the right question here is: Is this possible?
(flask + socket.IO) Result of emit callback is the response of my REST endpoint
Just to give some context here: I'm a Node.js developer, but I'm on a project where I need to work with Python using the Flask framework. The problem is, when a client sends a request to an endpoint of my REST Flask app, I need to emit an event using Socket.IO and get some data from the socket server; that data is then the response of the endpoint. But I haven't figured out how to send this, because Flask needs a "return" statement saying what the response is, and my callback runs in another context. A sample of what I'm trying to do (there are some comments explaining): import socketio import eventlet from flask import Flask, request sio = socketio.Server() app = Flask(__name__) @app.route('/test/<param>') def get(param): def ack(data): print (data) #Should be the response sio.emit('event', param, callback=ack) # Socket server call my ack function #Without a return statement, the endpoint return 500 if __name__ == '__main__': app = socketio.Middleware(sio, app) eventlet.wsgi.server(eventlet.listen(('', 8000)), app) Maybe the right question here is: Is this possible?
[ "I'm going to give you one way to implement what you want specifically, but I believe you have an important design flaw in this, as I explain in a comment above. In the way you have this coded, your socketio.Server() object will broadcast to all your clients, so will not be able to get a callback. If you want to emit to one client (hopefully not the same one that sent the HTTP request), then you need to add a room=client_sid argument to the emit. Or, if you are contacting a Socket.IO server, then you need to use a Socket.IO client here, not a server.\nIn any case, to block your HTTP route until the callback function is invoked, you can use an Event object. Something like this:\nfrom threading import Event\nfrom flask import jsonify\n\[email protected]('/test/<param>')\ndef get(param):\n ev = threading.Event()\n result = None\n\n def ack(data):\n nonlocal result\n nonlocal ev\n\n result = {'data': data}\n ev.set() # unblock HTTP route\n\n sio.emit('event', param, room=some_client_sid, callback=ack)\n ev.wait() # blocks until ev.set() is called\n return jsonify(result)\n\n", "I had a similar problem using FastAPI + socketIO (async version) and I was stuck at the exact same point. No eventlet so could not try out the monkey patching option.\nAfter a lot of head bangings it turns out that, for some reason, adding asyncio.sleep(.1) just before ev.wait() made everything work smoothly. Without that, emitted event actually never reach the other side (socketio client, in my scenario)\n" ]
[ 2, 0 ]
[]
[]
[ "flask", "flask_socketio", "python", "socket.io" ]
stackoverflow_0043301977_flask_flask_socketio_python_socket.io.txt
Q: Update all local docker images through console command
I need to update all docker images through a console command. Full list:
andrey@BushM1 ~ % docker images --format "{{.Repository}}" | sort --unique
bitnami/kafka
bitnami/zookeeper
confluentinc/cp-kafka
confluentinc/cp-kafka-connect
confluentinc/cp-schema-registry
confluentinc/cp-zookeeper
denoland/deno
mariadb
mcr.microsoft.com/azure-sql-edge
mcr.microsoft.com/dotnet/aspnet
mcr.microsoft.com/dotnet/runtime
mcr.microsoft.com/dotnet/sdk
mongo
mongo-express
mysql
node
portainer/portainer-ce
postgres
provectuslabs/kafka-ui
python
rabbitmq
redis
traefik
vault
wordpress
andrey@BushM1 ~ %
I need something like this:
andrey@BushM1 ~ % docker pull $(docker images --format "{{.Repository}}" | sort --unique)
"docker pull" requires exactly 1 argument.
See 'docker pull --help'.

Usage:  docker pull [OPTIONS] NAME[:TAG|@DIGEST]

Pull an image or a repository from a registry
andrey@BushM1 ~ %
How do I write the right iteration?
A: Sounds like a typical job for xargs:
docker images --format "{{.Repository}}" | sort -u | xargs -n1 docker pull

See man xargs for more options. You can also just do a loop:
.... | while IFS= read -r line; do docker pull "$line"; done
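One caveat: pulling by repository name alone only refreshes the :latest tag. A variant that re-pulls every repo:tag pair you already have locally (the grep drops the <none> placeholders left by dangling images):

docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>' | sort -u | xargs -n1 docker pull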
Update all local docker images through console command
I need to update all docker images through a console command. Full list:
andrey@BushM1 ~ % docker images --format "{{.Repository}}" | sort --unique
bitnami/kafka
bitnami/zookeeper
confluentinc/cp-kafka
confluentinc/cp-kafka-connect
confluentinc/cp-schema-registry
confluentinc/cp-zookeeper
denoland/deno
mariadb
mcr.microsoft.com/azure-sql-edge
mcr.microsoft.com/dotnet/aspnet
mcr.microsoft.com/dotnet/runtime
mcr.microsoft.com/dotnet/sdk
mongo
mongo-express
mysql
node
portainer/portainer-ce
postgres
provectuslabs/kafka-ui
python
rabbitmq
redis
traefik
vault
wordpress
andrey@BushM1 ~ %
I need something like this:
andrey@BushM1 ~ % docker pull $(docker images --format "{{.Repository}}" | sort --unique)
"docker pull" requires exactly 1 argument.
See 'docker pull --help'.

Usage:  docker pull [OPTIONS] NAME[:TAG|@DIGEST]

Pull an image or a repository from a registry
andrey@BushM1 ~ %
How do I write the right iteration?
[ "Sounds like a typical job for xargs:\ndocker images --format \"{{.Repository}}\" | sort -u | xargs -n1 docker pull\n\nSee man xargs for more options. You can also just do a loop:\n.... | while IFS= read -r line; do docker pull \"$line\"; done\n\n" ]
[ 2 ]
[]
[]
[ "docker" ]
stackoverflow_0074670832_docker.txt
Q: How to create a second dropdown list based on a group of edges with visNetwork in R?
Similar in spirit to Groups of edges and select in visNetwork in R, I'm wondering how to create a dropdown list based on the edges, as opposed to my nodes, using the visNetwork package. I think this is possible with the visSetSelection function, but that requires using shiny. I'm delivering the final product in an html file rendered from markdown, not deploying it from a server, so I don't think that's a possibility. Is there a way to replicate this function outside of shiny?
I don't fully understand the terminology in the documentation, but I think what I want to do is similar to the nodesIdSelection or selectedBy arguments of the visOptions function, where you can create an "HTML select element", but based on the edge list and not on the node list.
The data set for this particular issue is proprietary, but here's some dummy data. I'd like to be able to select by the "weight" of the edge.
library(tidyverse)
library(visNetwork)

nodes <- tibble(id = 1:30)

edges <- tibble(from = c(21:30, 1:20),
                to = c(5:20, 21:30, 1:4),
                weight = c(rep(1:5, 6)))

visNetwork(nodes, edges) %>%
  visIgraphLayout(layout = "layout_in_circle") %>%
  visOptions(highlightNearest = list(enabled = T,
                                     hover = T,
                                     degree = 1,
                                     algorithm = "hierarchical"),
             nodesIdSelection = T)

What I would expect is an edgesIdSelection argument in visOptions, but that isn't an option. I assume that piping visSelectEdges would work, but that only works with shiny and my client doesn't have access to a shiny server. I get that this library was made to make the javascript library accessible through R, so I don't expect full functionality--if I can't do this in R with this package (without shiny), I totally get it.
A: It looks like the visNetwork package does not have a built-in way to create a dropdown list based on the edges of the network. However, you might be able to achieve this using custom JavaScript code and the htmlwidgets package in R.
Building a fully custom widget with htmlwidgets::createWidget is possible, but it requires writing your own htmlwidgets package (a JavaScript binding file plus a YAML dependency file), which is more machinery than this problem needs. The lighter option is htmlwidgets::onRender, which attaches a JavaScript function to a widget after it renders. The function receives the widget's container element, the serialized widget data, and an optional data argument passed from R:
visNetwork(nodes, edges) %>%
  visIgraphLayout(layout = "layout_in_circle") %>%
  visOptions(highlightNearest = list(enabled = T,
                                     hover = T,
                                     degree = 1,
                                     algorithm = "hierarchical"),
             nodesIdSelection = T) %>%
  htmlwidgets::onRender("
    function(el, x, data) {
      // custom JavaScript that builds a <select> element from the
      // edge data and drives the network selection goes here
    }",
    data = sort(unique(edges$weight))
  )

Inside that function you can build the dropdown with plain DOM calls and drive the selection through the vis.js API. This code will be specific to your needs, so I would recommend consulting the visNetwork documentation and the documentation for the JavaScript library that visNetwork is based on (vis.js) for guidance on how to create the dropdown list.
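To make the onRender route concrete, here is a minimal, untested sketch of a weight dropdown. It leans on two assumptions worth verifying against your visNetwork version: that the vis.js Network object is stored on the widget instance as this.network (undocumented but widely relied on in visNetwork examples), and that x.edges arrives in the browser as an array of edge objects. It also adds an explicit id column to the edges, because the vis.js selectEdges() method addresses edges by id:

edges <- edges %>% dplyr::mutate(id = dplyr::row_number())

visNetwork(nodes, edges) %>%
  visIgraphLayout(layout = "layout_in_circle") %>%
  htmlwidgets::onRender("
    function(el, x) {
      var network = this.network;   // assumption: vis.js instance lives here
      var edges = x.edges;          // assumption: array of {id, from, to, weight}

      // collect the distinct weights
      var weights = [];
      edges.forEach(function(e) {
        if (weights.indexOf(e.weight) === -1) weights.push(e.weight);
      });
      weights.sort();

      // build the <select> element
      var select = document.createElement('select');
      weights.forEach(function(w) {
        var opt = document.createElement('option');
        opt.value = w;
        opt.text = 'weight ' + w;
        select.appendChild(opt);
      });

      // on change, select every edge whose weight matches the chosen value
      select.onchange = function() {
        var chosen = this.value;
        var ids = edges
          .filter(function(e) { return String(e.weight) === chosen; })
          .map(function(e) { return e.id; });
        network.selectEdges(ids);   // documented vis.js Network method
      };

      el.parentNode.insertBefore(select, el);
    }
  ")

Because everything happens client-side, this approach survives being knitted from R Markdown to a standalone html file; no shiny server is involved.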
How to create a second dropdown list based on a group of edges with visNetwork in R?
Similar in spirit to Groups of edges and select in visNetwork in R, I'm wondering how to create a dropdown list based on the edges, as opposed to my nodes, using the visNetwork package. I think this is possible with the visSetSelection function, but that requires using shiny. I'm delivering the final product in an html file rendered from markdown, not deploying it from a server, so I don't think that's a possibility. Is there a way to replicate this function outside of shiny?
I don't fully understand the terminology in the documentation, but I think what I want to do is similar to the nodesIdSelection or selectedBy arguments of the visOptions function, where you can create an "HTML select element", but based on the edge list and not on the node list.
The data set for this particular issue is proprietary, but here's some dummy data. I'd like to be able to select by the "weight" of the edge.
library(tidyverse)
library(visNetwork)

nodes <- tibble(id = 1:30)

edges <- tibble(from = c(21:30, 1:20),
                to = c(5:20, 21:30, 1:4),
                weight = c(rep(1:5, 6)))

visNetwork(nodes, edges) %>%
  visIgraphLayout(layout = "layout_in_circle") %>%
  visOptions(highlightNearest = list(enabled = T,
                                     hover = T,
                                     degree = 1,
                                     algorithm = "hierarchical"),
             nodesIdSelection = T)

What I would expect is an edgesIdSelection argument in visOptions, but that isn't an option. I assume that piping visSelectEdges would work, but that only works with shiny and my client doesn't have access to a shiny server. I get that this library was made to make the javascript library accessible through R, so I don't expect full functionality--if I can't do this in R with this package (without shiny), I totally get it.
[ "It looks like the visNetwork package does not have a built-in way to create a dropdown list based on the edges of the network. However, you might be able to achieve this using custom JavaScript code and the htmlwidgets package in R.\nThe htmlwidgets package allows you to create custom interactive HTML widgets using R. You could use this package to create a custom widget that includes a dropdown list based on the edges of your network. You could then use this widget in your visNetwork plot.\nHere is an example of how you might create a custom widget using the htmlwidgets package:\n# First, create a custom widget using the `htmlwidgets` package\n\n# Set up the widget structure\nwidget <- htmlwidgets::createWidget(\n name = \"mywidget\",\n x = list(data = data.frame(x = 1:10, y = rnorm(10)),\n options = list(width = 500, height = 500))\n)\n\n# Add the widget HTML code\nwidget$html <- \"<div id='mywidget'></div>\"\n\n# Add the JavaScript code that will create the widget\nwidget$javascript <- \"\n var x = data.frame(x = 1:10, y = rnorm(10));\n var options = list(width = 500, height = 500);\n\n // Create the widget using the specified data and options\n var widget = new mywidget(\n document.getElementById('mywidget'),\n x,\n options\n );\"\n\n# Add the widget CSS code\nwidget$css <- \".mywidget { color: red; }\"\n\n# Register the widget\nhtmlwidgets::registerWidget(widget)\n\nOnce you have created your custom widget, you can use it in your visNetwork plot like this:\n# Create the visNetwork plot\nvisNetwork(nodes, edges) %>%\n visIgraphLayout(layout = \"layout_in_circle\") %>%\n visOptions(highlightNearest = list(enabled = T, \n hover = T, \n degree = 1, \n algorithm = \"hierarchical\"), \n nodesIdSelection = T) %>%\n\n # Add the custom widget to the plot\n htmlwidgets::onRender(\"\n function(el, x, data) {\n var widget = new mywidget(\n el,\n data.x,\n data.options\n );\n }\",\n data = list(x = data.frame(x = 1:10, y = rnorm(10)),\n options = list(width = 500, height = 500))\n )\n\nYou can then add custom JavaScript code to the widget to create a dropdown list based on the edges of the network. This code will be specific to your needs, so I would recommend consulting the visNetwork documentation and the documentation for the JavaScript library that visNetwork is based on (vis.js) for guidance on how to create the dropdown list.\n" ]
[ 0 ]
[]
[]
[ "javascript", "r", "visnetwork" ]
stackoverflow_0050724532_javascript_r_visnetwork.txt