Q: Get 5-minute intervals by creating start and end time columns from a timestamp column in pandas

data = {'col_ts': ['2022-11-02T08:26:40', '2022-11-02T08:25:10', '2022-11-02T08:26:00', '2022-11-02T08:30:20', '2022-11-02T08:33:30', '2022-11-02T08:36:40', '2022-11-02T08:26:20', '2022-11-02T08:50:10', '2022-11-02T08:30:40', '2022-11-02T08:39:40']}
df = pd.DataFrame(data, columns=['col_ts'])
df

I have a dataset from which I would like to create two columns, start_time and end_time, with a 5-minute interval, as shown below. Appreciate your help on this. In SQL, I used the code below to produce the result:

time_slice(col_ts, 5, 'MINUTE', 'START') as START_INTERVAL,
time_slice(col_ts, 5, 'MINUTE', 'END') as END_INTERVAL,

In pandas, I used the code below. Unfortunately, that gives me a row-level interval:

df.resample("5T").mean()

A: Here is one way to do it using pandas to_datetime and the dt accessor:

df["col_ts"] = pd.to_datetime(df["col_ts"])
df["start_interval"] = df["col_ts"].dt.floor("5T")
df["end_interval"] = df["col_ts"].dt.ceil("5T")

Then:

                col_ts      start_interval        end_interval
0  2022-11-02 08:26:40 2022-11-02 08:25:00 2022-11-02 08:30:00
1  2022-11-02 08:25:10 2022-11-02 08:25:00 2022-11-02 08:30:00
2  2022-11-02 08:26:00 2022-11-02 08:25:00 2022-11-02 08:30:00
3  2022-11-02 08:30:20 2022-11-02 08:30:00 2022-11-02 08:35:00
4  2022-11-02 08:33:30 2022-11-02 08:30:00 2022-11-02 08:35:00
5  2022-11-02 08:36:40 2022-11-02 08:35:00 2022-11-02 08:40:00
6  2022-11-02 08:26:20 2022-11-02 08:25:00 2022-11-02 08:30:00
7  2022-11-02 08:50:10 2022-11-02 08:50:00 2022-11-02 08:55:00
8  2022-11-02 08:30:40 2022-11-02 08:30:00 2022-11-02 08:35:00
9  2022-11-02 08:39:40 2022-11-02 08:35:00 2022-11-02 08:40:00
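Two caveats about the accepted approach, with a minimal runnable sketch: in pandas 2.2+ the "T" offset alias is deprecated in favor of "min", and dt.ceil leaves timestamps that fall exactly on a boundary unchanged (08:25:00 ceils to 08:25:00, not 08:30:00), which may or may not match the SQL time_slice END semantics. If you always want the end of the enclosing slot, floor plus a Timedelta is unambiguous:

import pandas as pd

df = pd.DataFrame({'col_ts': ['2022-11-02T08:26:40', '2022-11-02T08:25:10']})
df['col_ts'] = pd.to_datetime(df['col_ts'])
# "5min" is the current alias; "5T" still works on older pandas versions
df['start_interval'] = df['col_ts'].dt.floor('5min')
# floor + 5 minutes always yields the end of the slot, even on exact boundaries
df['end_interval'] = df['col_ts'].dt.floor('5min') + pd.Timedelta(minutes=5)
print(df)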
[ "pandas", "python" ]
stackoverflow_0074663425_pandas_python.txt
Q: Python Multi-Criteria Lookup From one of Many Columns

Trying to add a factor to a dataframe based on a lookup of multiple criteria in another dataframe. Code to create sample data:

import pandas as pd

df_RawData = pd.DataFrame({
    'Value' : [31000, 36000, 42000],
    'Type' : [0,1,5]
    })

df_Lookup = pd.DataFrame({
    'Min Value' : [0,10000,20000,25000,30000,35000,40000,45000],
    'Max Value' : [9999,19999,24999,29999,34999,39999,44999,49999],
    'Type 0' : [.11,.21,.31,.41,.51,.61,.71,.81],
    'Type 1' : [.10,.20,.30,.40,.50,.60,.70,.80],
    'Type 2' : [.09,.19,.29,.39,.49,.59,.69,.79],
    'Type 3' : [.08,.18,.28,.38,.48,.58,.68,.78],
    'Type 4' : [.07,.17,.27,.37,.47,.57,.67,.77],
    'Type 5' : [.06,.16,.26,.36,.46,.56,.66,.76]
    })

I need to add a column to the first data frame based on both the value being in range of the min and max value and returning only the factor from the matching type. Final desired output in this case would be:

Value  Type  Factor
31000  0     .51
36000  1     .60
42000  5     .66

RawData is a dataset with at least half a million rows. I tried using IntervalIndex, but can't figure out how to return values from differing columns based on type. This, for example, would handle the min/max lookup and always return the factor from type 5:

v = df_Lookup.loc[:, 'Min Value':'Max Value'].apply(tuple, 1).tolist()
idxr = pd.IntervalIndex.from_tuples(v, closed='both')
df_RawData['Factor'] = df_Lookup.loc[idxr.get_indexer(df_RawData['Value']),['Type 5']].values

Alternately, I thought about using melt to rearrange the lookup dataframe, but am unsure on how to merge on type as well as being within the min/max range. If the dataset were smaller, I would use vlookup in Excel with an if statement in the return column portion of the formula, but that's not practical given the size of the dataset.

A: Create the intervalindex:

intervals = pd.IntervalIndex.from_arrays(df_Lookup['Min Value'],
                                         df_Lookup['Max Value'],
                                         closed='neither')

Get the matching positions:

pos = intervals.get_indexer(df_RawData.Value)

Index the Type columns - fortunately they are sorted:

types = df_Lookup.filter(like='Type').to_numpy()
out = types[pos, df_RawData.Type]

Assign value:

df_RawData.assign(Factor = out)

   Value  Type  Factor
0  31000     0    0.51
1  36000     1    0.60
2  42000     5    0.66
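One caveat worth flagging: with closed='neither', a value that falls exactly on a bracket boundary (e.g. 30000) gets indexer position -1, which numpy silently interprets as the last row, returning a wrong factor. Since the lookup brackets are inclusive [Min Value, Max Value], closed='both' seems safer. A consolidated, runnable sketch of the whole approach:

import pandas as pd

df_RawData = pd.DataFrame({'Value': [31000, 36000, 42000], 'Type': [0, 1, 5]})
df_Lookup = pd.DataFrame({
    'Min Value': [0, 10000, 20000, 25000, 30000, 35000, 40000, 45000],
    'Max Value': [9999, 19999, 24999, 29999, 34999, 39999, 44999, 49999],
    # same factors as the question, built compactly
    **{f'Type {t}': [round(0.11 - 0.01 * t + 0.10 * i, 2) for i in range(8)]
       for t in range(6)},
})

# inclusive brackets, so boundary values like 30000 still match
intervals = pd.IntervalIndex.from_arrays(df_Lookup['Min Value'],
                                         df_Lookup['Max Value'],
                                         closed='both')
pos = intervals.get_indexer(df_RawData['Value'])
types = df_Lookup.filter(like='Type').to_numpy()
df_RawData['Factor'] = types[pos, df_RawData['Type'].to_numpy()]
print(df_RawData)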
[ "dataframe", "pandas", "python" ]
stackoverflow_0074664682_dataframe_pandas_python.txt
Q: Handle autosave in Vue3 I have a Vue3 application for which I want to convey a specific type of UX where changes made to a resource are automatically saved (i.e. no save button). The application has several different types of editable resources and several different editor components for them, and I wanted to create an abstraction that would handle autosaving and that could simply be plugged inside of my editor components. As an additional requirement, some fields have to be debounced (e.g. text fields) whereas others are to be saved immediately to the server. I'm interested in knowing what are the possible drawbacks of using the solution I'm providing here and if there's a better way. The idea: create a polymorphic class AutoSaveManager<T> that handles auto-saving of an object of type T. pass a function that updates an in-memory, local object, to be called anytime there's a change to that object (e.g. an input element bound to a field of that object emits an update) pass a function that updates a remote object, i.e. the record in the database. For example, a Pinia action that makes an API call wrap the remote function in a debouncer for the fields that need to be debounced, or flush it immediately if a no-debounce field is updated. Code for the class: /* eslint-disable @typescript-eslint/no-explicit-any */ import { debounce, DebouncedFunc } from "lodash"; type RemotePatchFunction<T> = (changes: Partial<T>) => Promise<void>; type PatchFunction<T> = (changes: Partial<T>, reverting?: boolean) => void; export type FieldList<T> = (keyof T)[]; enum AutoSaveManagerState { UP_TO_DATE, PENDING, ERROR, } export class AutoSaveManager<T> { instance: T; unsavedChanges: Partial<T>; beforeChanges: Partial<T>; remotePatchFunction: DebouncedFunc<RemotePatchFunction<T>>; localPatchFunction: PatchFunction<T>; errorFunction?: (e: any) => void; successFunction?: () => void; cleanupFunction?: () => void; debouncedFields: FieldList<T>; revertOnFailure: boolean; alwaysPatchLocal: boolean; state: AutoSaveManagerState; constructor( instance: T, remotePatchFunction: RemotePatchFunction<T>, localPatchFunction: PatchFunction<T>, debouncedFields: FieldList<T>, debounceTime: number, successFunction?: () => void, errorFunction?: (e: any) => void, cleanupFunction?: () => void, revertOnFailure = false, alwaysPatchLocal = true, ) { this.instance = instance; this.localPatchFunction = localPatchFunction; this.remotePatchFunction = debounce( this.wrapRemotePatchFunction(remotePatchFunction), debounceTime, ); this.debouncedFields = debouncedFields; this.unsavedChanges = {}; this.beforeChanges = {}; this.successFunction = successFunction; this.errorFunction = errorFunction; this.cleanupFunction = cleanupFunction; this.revertOnFailure = revertOnFailure; this.alwaysPatchLocal = alwaysPatchLocal; this.state = AutoSaveManagerState.UP_TO_DATE; } async onChange(changes: Partial<T>): Promise<void> { this.state = AutoSaveManagerState.PENDING; // record new change to field this.unsavedChanges = { ...this.unsavedChanges, ...changes }; // make deep copy of fields about to change in case rollback becomes necessary // (only for non-debounced fields as it would be disconcerting to roll back // debounced changes like in text fields) Object.keys(changes) .filter(k => !this.debouncedFields.includes(k as keyof T)) .forEach( k => (this.beforeChanges[k] = JSON.parse(JSON.stringify(this.instance[k]))), ); if (this.alwaysPatchLocal) { // instantly update in-memory instance this.localPatchFunction(changes); } // dispatch update to backend await 
this.remotePatchFunction(this.unsavedChanges); if (Object.keys(changes).some(k => !this.debouncedFields.includes(k as keyof T))) { // at least one field isn't to be debounced; call remote update immediately await this.remotePatchFunction.flush(); } } private wrapRemotePatchFunction( callback: RemotePatchFunction<T>, ): RemotePatchFunction<T> { /** * Wraps the callback into a function that awaits the callback first, and * if it is successful, then empties the unsaved changes object */ return async (changes: Partial<T>) => { try { await callback(changes); if (!this.alwaysPatchLocal) { // update in-memory instance this.localPatchFunction(changes); } // reset bookkeeping about recent changes this.unsavedChanges = {}; this.beforeChanges = {}; this.state = AutoSaveManagerState.UP_TO_DATE; // call user-supplied success callback this.successFunction?.(); } catch (e) { // call user-supplied error callback this.errorFunction?.(e); if (this.revertOnFailure) { // roll back unsaved changes this.localPatchFunction(this.beforeChanges, true); } this.state = AutoSaveManagerState.ERROR; } finally { this.cleanupFunction?.(); } }; } } This class would be instantiated inside of an editor component like this: this.autoSaveManager = new AutoSaveManager<MyEditableObject>( this.modelValue, async changes => { await this.store.someAction({ // makes an API call to the server to actually update the object id: this.modelValue.id changes, }); this.saving = false; }, changes => { this.saving = true; this.savingError = false; this.store.someAction({ // only updates the local object in the store id: this.modelValue.id changes; }) }, ["body", "title"], // these are text fields and need to be debounced 2500, // debounce time undefined, // no cleanup function () => { // function to call in case of error this.saving = false; this.savingError = true; }, ); Then, inside of a template, you'd use it like this: <TextInput :modelValue="modelValue.title" @update:modelValue="autoSaveManager.onChange({ title: $event })" /> Is there anything I can do to improve on this idea? Should I use a different approach altogether for auto-saving in Vue? A: Using an AutoSaveManager class to handle auto-saving in a Vue3 application is a valid approach. It allows you to define a set of debounced fields, which will be saved with a delay, and non-debounced fields, which will be saved immediately. This can help ensure that the server is not overwhelmed with requests and that important changes are saved promptly. One possible drawback of this approach is that it requires you to create an instance of the AutoSaveManager for each editable resource and its corresponding editor component. This can add complexity to your code and make it more difficult to manage. Additionally, using the debounce function from the lodash library can cause issues if the debounceTime is set too low, as it can result in a high number of requests being sent to the server. Another possible way to implement auto-saving in a Vue3 application is to use the watch option on the component to watch for changes to specific fields in the component's data. You can then define a method that is called whenever one of these fields is updated, and this method can handle the logic for saving the changes to the server. This approach can be simpler to implement and manage, but it may not provide as much control over the auto-saving behavior as the AutoSaveManager class. 
Here is an example of using the watch option in a Vue3 component to implement auto-saving: // define the fields that should be watched for changes const watchedFields = ['field1', 'field2', 'field3']; export default { // other component options watch: { // for each watched field, define a method that is called // whenever the field is updated field1: 'saveChanges', field2: 'saveChanges', field3: 'saveChanges', }, methods: { // method that handles saving changes to the server async saveChanges() { try { // make API call to save changes await saveChangesToServer(this.field1, this.field2, this.field3); // handle success console.log('Changes saved successfully!'); } catch (e) { // handle error console.error('Error saving changes:', e); } }, }, }; This approach is simpler because it does not require you to create an instance of the AutoSaveManager class for each editor component. However, it may not provide as much control over the auto-saving behavior, as it does not allow for debouncing or differentiating between fields that should be saved immediately and those that can be saved with a delay.
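For completeness, the watch approach can regain the debouncing the question requires by routing "noisy" fields through a lodash-debounced wrapper while discrete fields save immediately. A sketch (saveChangesToServer is a placeholder for your API call or Pinia action, and the field names are illustrative):

import { debounce } from 'lodash';

export default {
  data() {
    return { title: '', body: '', category: null };
  },
  created() {
    // one shared debounced saver for text fields (2.5 s, as in the question)
    this.debouncedSave = debounce(this.saveChanges, 2500);
  },
  watch: {
    title() { this.debouncedSave(); },   // debounced: text fields
    body() { this.debouncedSave(); },
    category() { this.saveChanges(); },  // immediate: discrete fields
  },
  methods: {
    async saveChanges() {
      try {
        // placeholder: replace with your store action / API call
        await saveChangesToServer({ title: this.title, body: this.body, category: this.category });
      } catch (e) {
        console.error('Error saving changes:', e);
      }
    },
  },
};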
[ "autosave", "pinia", "typescript", "vue.js", "vuejs3" ]
stackoverflow_0074621293_autosave_pinia_typescript_vue.js_vuejs3.txt
Q: WooCommerce: Show Upsells based on product attribute I've found this great solution to display related products based on a product attribute, but additionally we would like to do the same with our upsells . What I've tried so far doesn't work yet because I believe we have to add some additonal information as the related_products function is not the same as upsells. add_filter( 'woocommerce_upsell_products', 'upsell_products_by_attribute', 10, 3 ); function upsell_products_by_attribute( $upsells, $product_id, $args ) { $taxonomy = 'pa_attribute'; $term_slugs = wp_get_post_terms( $product_id, $taxonomy, ['fields' => 'slugs'] ); if ( empty($term_slugs) ) return $upsells; $posts_ids = get_posts( array( 'post_type' => 'product', 'ignore_sticky_posts' => 1, 'posts_per_page' => 6, 'post__not_in' => array( $product_id ), 'tax_query' => array( array( 'taxonomy' => $taxonomy, 'field' => 'slug', 'terms' => $term_slugs, ) ), 'fields' => 'ids', 'orderby' => 'rand', ) ); return count($posts_ids) > 0 ? $posts_ids : $upsells; } So I wonder if there is any possibility to do the same with the upsell section. A: You can remove the default action hook woocommerce_after_single_product_summary and add it again and you can customize upsells products. check the below code. code will go into your active theme functions.php file. Change $taxonomy = 'pa_color'; to your custom taxonomy. function modify_woocommerce_upsell_display_based_on_attribute( $limit = '-1', $columns = 4, $orderby = 'rand', $order = 'desc' ){ remove_action( 'woocommerce_after_single_product_summary', 'woocommerce_upsell_display', 15 ); global $product; if ( ! $product ) { return; } // Handle the legacy filter which controlled posts per page etc. $args = apply_filters( 'woocommerce_upsell_display_args', array( 'posts_per_page' => $limit, 'orderby' => $orderby, 'order' => $order, 'columns' => $columns, ) ); wc_set_loop_prop( 'name', 'up-sells' ); wc_set_loop_prop( 'columns', apply_filters( 'woocommerce_upsells_columns', isset( $args['columns'] ) ? $args['columns'] : $columns ) ); $orderby = apply_filters( 'woocommerce_upsells_orderby', isset( $args['orderby'] ) ? $args['orderby'] : $orderby ); $order = apply_filters( 'woocommerce_upsells_order', isset( $args['order'] ) ? $args['order'] : $order ); $limit = apply_filters( 'woocommerce_upsells_total', isset( $args['posts_per_page'] ) ? $args['posts_per_page'] : $limit ); // set your custom taxonomy here. $taxonomy = 'pa_color'; $term_slugs = wp_get_post_terms( $product->get_id(), $taxonomy, ['fields' => 'slugs'] ); $posts_ids = get_posts( array( 'post_type' => 'product', 'ignore_sticky_posts' => 1, 'posts_per_page' => 6, 'post__not_in' => array( $product->get_id() ), 'tax_query' => array( array( 'taxonomy' => $taxonomy, 'field' => 'slug', 'terms' => $term_slugs, ) ), 'fields' => 'ids', 'orderby' => 'rand', ) ); if( !empty( $posts_ids ) ){ // Get visible upsells then sort them at random, then limit result set. $upsells = wc_products_array_orderby( array_filter( array_map( 'wc_get_product', $posts_ids ), 'wc_products_array_filter_visible' ), $orderby, $order ); $upsells = $limit > 0 ? array_slice( $upsells, 0, $limit ) : $upsells; wc_get_template( 'single-product/up-sells.php', array( 'upsells' => $upsells, // Not used now, but used in the previous version of up-sells.php. 'posts_per_page' => $limit, 'orderby' => $orderby, 'columns' => $columns, ) ); } } add_action( 'woocommerce_after_single_product_summary', 'modify_woocommerce_upsell_display_based_on_attribute', 15 );
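One thing the answer's code doesn't handle is a product with no terms for the attribute: wp_get_post_terms() then returns an empty array (or a WP_Error on a bad taxonomy slug), and a tax_query with an empty terms list can behave unexpectedly. A sketch of a guard you could add right after the wp_get_post_terms() call, falling back to WooCommerce's default upsell display:

$term_slugs = wp_get_post_terms( $product->get_id(), $taxonomy, [ 'fields' => 'slugs' ] );

// No attribute terms on this product: fall back to the stock upsells.
if ( empty( $term_slugs ) || is_wp_error( $term_slugs ) ) {
    woocommerce_upsell_display( $limit, $columns, $orderby, $order );
    return;
}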
[ "php", "woocommerce", "wordpress" ]
stackoverflow_0074650937_php_woocommerce_wordpress.txt
Q: Finding the same element at different indexes in an array

What I'm trying to do: I am trying to make a program that searches an array for a specific int. The int may be at many indexes in the array, and I need to print all of those indexes (for example, 5 shows up twice in the array, so I want to output both indexes). This must be done using a linear search algorithm.

What I've done: I have managed to write a program that outputs the index of the specified int in an array, but it is only able to output the index of the first occurrence of the int in the array. Here is the source code that I've come up with:

// This program performs a linear search on a character array
// Author: Y K

#include <iostream>
using namespace std;

int searchList(int[], int, int); // function prototype

const int SIZE = 8;

int main()
{
    int nums[SIZE] = {3, 6, -19, 5, 5, 0, -2, 99};
    int found;
    int ch;

    cout << "Enter a number to search for:" << endl;
    cin >> ch;

    found = searchList(nums, SIZE, ch);

    if (found == -1)
        cout << "The number " << ch << " was not found in the list" << endl;
    else
        cout << "The number " << ch << " is in the " << found + 1 << " position of the list" << endl;

    return 0;
}

//*******************************************************************
// searchList
//
// task:          This searches an array for a particular value
// data in:       List of values in an array, the number of
//                elements in the array, and the value searched for
//                in the array
// data returned: Position in the array of the value or -1 if value
//                not found
//
//*******************************************************************

int searchList(int List[], int numElems, int value)
{
    for (int count = 0; count <= numElems; count++)
    {
        if (List[count] == value)
            return count;
    }

    return -1; // if the value is not found, -1 is returned
}

A: You can do this by iteratively calling searchList and passing it the remainder of the array each time it returns a valid index. I moved the code that does the searching to another function to make it easier to call more than once. If all you need to do is search once you can move it back inside main.

#include <iostream>

int searchList(int[], int, int);

const int SIZE = 8;

void test(int *nums, int numElems, int value)
{
    bool found = false;
    int lastPos = 0;
    int index;
    while ((index = searchList(nums + lastPos, numElems - lastPos, value)) != -1)
    {
        std::cout << "found " << value << " @ " << lastPos + index + 1 << "\n";
        lastPos += index + 1;
        found = true;
    }

    if (!found)
    {
        std::cout << value << " not found.\n";
    }
}

int main()
{
    int nums[SIZE] = {3, 6, -19, 5, 5, 0, -2, 99};
    for (int value : {3, 6, -19, 5, 0, -2, 99})
    {
        test(nums, SIZE, value);
    }
    test(nums, SIZE, 1);

    return 0;
}

int searchList(int List[], int numElems, int value)
{
    for (int count = 0; count < numElems; count++)
    {
        if (List[count] == value)
            return count;
    }

    return -1;
}

Demo@CompilerExplorer

Output:

found 3 @ 1
found 6 @ 2
found -19 @ 3
found 5 @ 4
found 5 @ 5
found 0 @ 6
found -2 @ 7
found 99 @ 8
1 not found.
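Two side notes. First, the question's loop condition count <= numElems reads one element past the end of the array (undefined behavior); the answer's version quietly corrects it to count < numElems. Second, if restructuring is acceptable, a single pass that collects every matching index into a std::vector is simpler than repeatedly re-calling searchList — a sketch:

#include <iostream>
#include <vector>

// One linear pass; returns every index at which `value` occurs.
std::vector<int> searchAll(const int list[], int numElems, int value)
{
    std::vector<int> positions;
    for (int i = 0; i < numElems; ++i)
        if (list[i] == value)
            positions.push_back(i);
    return positions;
}

int main()
{
    const int SIZE = 8;
    int nums[SIZE] = {3, 6, -19, 5, 5, 0, -2, 99};

    for (int pos : searchAll(nums, SIZE, 5))
        std::cout << "The number 5 is in the " << pos + 1 << " position of the list\n";
    return 0;
}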
[ "arrays", "c++", "search" ]
stackoverflow_0074664727_arrays_c++_search.txt
Q: Display the price of button on button click with a default active price being the first element in the array in Vuejs I have a problem that I would appreciate if I get assisted here.I have a button group as shown below: I am trying to create a price calculator based on the button click.On page load I want the first button to be active with price displaying in the summary card as shown below: My sample array of elements are as shown below: AcademicLevels: [ { id:1, name: 'High School', price: 20, active: true }, { id:2, name: 'Undergrad. (yrs 1‑2)', price: 30, active: false }, { id:3, name: 'Undergrad. (yrs 3‑4)', price: 35, active: false }, { id:4, name: 'Masters', price: 40, active: false }, { id:5, name: 'PHD', price: 50, active: false } ], The button group code : <div class="rc-orderform__row"> <div class="rc-orderform__feature-heading">Academic level</div> <div class="rc-orderform__feature-body"> <div class="rc-validation-representer valid" data-invalid="false"> <div class="rc-radios rc-radios--theme-default vertical-on-small" style="white-space: nowrap;"> <div class="radio-button-wrapper radio-button-wrapper radio-button-wrapper--flexbuttons" v-for="academic in AcademicLevels" :key="academic.id" > <button type="button" class="radio-button radio-button--n-1" tabindex="-1" v-on:click="worklevelChanged(academic.id)" :class="academicPriceId===Number(academic.id)? 'active':''" :id="'workLevel_' + academic.id" :value="academic.id" v-model="academicPrice" > <div class="button-checkbox"></div> <div class="radio-button__content">{{academic.name}}</div> </button> </div> </div> </div> </div> </div> The Percentage Cost Calculator Method methods: { calculatePercentage(basePrice,percentageToAdd){ var number= (parseFloat(basePrice)* parseFloat(percentageToAdd))/100; return Number(parseFloat(number)); } }, The Total Cost Calculator Method computed:{ calculateTotal : function (){ var workLevelModel = this.AcademicLevels; var base_price=parseFloat(this.AcademicLevels.price); var work_level_price= this.calculatePercentage( base_price,10 ); var unit_price=Number(parseFloat(base_price + work_level_price)); var amount =(unit_price*2); var sub_total =(amount); var total= sub_total; return total; } } The total price display card Total price $ {{calculateTotal }} I would appreciate if anyone can help me here. A: The problem is on a computed you write var base_price=parseFloat(this.AcademicLevels.price); in this line this.AcademicLevels is an array and you can't access price property to solve this problem in the method that is bined to the button you need to pass the index of array that clicked and then in the computed you need to access the array element at that index and calculated the prices for that the refactored code should looks like this: template code: <div class="rc-orderform__row"> <div class="rc-orderform__feature-heading">Academic level</div> <div class="rc-orderform__feature-body"> <div class="rc-validation-representer valid" data-invalid="false"> <div class="rc-radios rc-radios--theme-default vertical-on-small" style="white-space: nowrap;"> <div class="radio-button-wrapper radio-button-wrapper radio-button-wrapper--flexbuttons" v-for="(academic, index) in AcademicLevels" :key="academic.id" > <button type="button" class="radio-button radio-button--n-1" tabindex="-1" v-on:click="worklevelChanged(index)" :class="academicPriceId===Number(academic.id)? 
'active':''" :id="'workLevel_' + academic.id" :value="academic.id" v-model="academicPrice" > <div class="button-checkbox"></div> <div class="radio-button__content">{{academic.name}}</div> </button> </div> </div> </div> </div> </div> the mthod: methods: { worklevelChanged(selectedIndex){ this.selectedPrice = selectedIndex } }, and the computed should be like this: computed:{ calculateTotal : function (){ var workLevelModel = this.AcademicLevels; var base_price=parseFloat(this.AcademicLevels[this.selectedPrice].price); var work_level_price= this.calculatePercentage( base_price,10 ); var unit_price=Number(parseFloat(base_price + work_level_price)); var amount =(unit_price*2); var sub_total =(amount); var total= sub_total; return total; } } it should solve the problem Edited: computed:{ calculateTotal : function (){ if(!this.selectedPrice || !this.AcademicLevels[this.selectedPrice] ) { return 0; } var workLevelModel = this.AcademicLevels; var base_price=parseFloat(this.AcademicLevels[this.selectedPrice].price); var work_level_price= this.calculatePercentage( base_price,10 ); var unit_price=Number(parseFloat(base_price + work_level_price)); var amount =(unit_price*2); var sub_total =(amount); var total= sub_total; return total; } }
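One bug to watch in the edited computed above: the guard if(!this.selectedPrice || ...) bails out whenever selectedPrice is 0 — which is exactly the default first button the question wants active on page load, since !0 is truthy. Initializing selectedPrice to 0 and testing the looked-up element instead fixes both requirements. A sketch:

data() {
  return {
    selectedPrice: 0,  // first button active (and priced) on page load
    // ...other state
  };
},
computed: {
  calculateTotal() {
    const level = this.AcademicLevels[this.selectedPrice];
    // `!this.selectedPrice` is truthy for index 0, so test the element instead
    if (!level) return 0;
    const basePrice = parseFloat(level.price);
    const workLevelPrice = this.calculatePercentage(basePrice, 10);
    return (basePrice + workLevelPrice) * 2;  // unit price x quantity of 2
  },
},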
[ "arrays", "javascript", "vue.js", "vuejs2", "vuejs3" ]
stackoverflow_0074664264_arrays_javascript_vue.js_vuejs2_vuejs3.txt
Q: Flutter - How to get a text field with HTML tags for end users as input

I have a Flutter mobile app where the user can insert their data in a text field. I'd like to give users the possibility to use HTML tags to compose their "cool" description (bold text, italic, etc.). Can I do something like this for the user? (see attached image)

A: Add the html_editor_enhanced package to your pubspec.yaml:

html_editor_enhanced: ^2.5.0

Then use it like this (the import path follows the package name):

import 'package:html_editor_enhanced/html_editor.dart';

HtmlEditorController controller = HtmlEditorController();

@override
Widget build(BuildContext context) {
  return HtmlEditor(
    controller: controller,
    htmlEditorOptions: HtmlEditorOptions(
      hint: "Your text here...",
    ),
    otherOptions: OtherOptions(),
  );
}
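To actually read what the user typed (the answer stops at rendering the editor), the controller exposes an async getter. A fuller sketch, with widget and parameter names as in the html_editor_enhanced 2.x API as I recall them — check the package docs if they have drifted:

import 'package:flutter/material.dart';
import 'package:html_editor_enhanced/html_editor.dart';

class DescriptionEditor extends StatelessWidget {
  DescriptionEditor({super.key});

  final HtmlEditorController controller = HtmlEditorController();

  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        HtmlEditor(
          controller: controller,
          htmlEditorOptions: HtmlEditorOptions(hint: 'Your text here...'),
          otherOptions: OtherOptions(height: 400),
        ),
        ElevatedButton(
          onPressed: () async {
            // Returns the editor contents as an HTML string.
            final html = await controller.getText();
            debugPrint(html);
          },
          child: const Text('Save'),
        ),
      ],
    );
  }
}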
[ "android", "dart", "flutter", "text" ]
stackoverflow_0070556985_android_dart_flutter_text.txt
Q: Gradle Build Error: Entry META-INF/LICENSE.txt is a duplicate but no duplicate handling strategy

I am trying to create a fat jar using Gradle, with the following dependencies:

implementation platform('com.amazonaws:aws-java-sdk-bom:1.11.1000')
implementation 'com.amazonaws:aws-java-sdk-core'
implementation("software.amazon.msk:aws-msk-iam-auth:1.1.1")
implementation("org.apache.kafka:kafka-clients:3.0.0")
implementation group: 'org.slf4j', name: 'slf4j-api', version: '1.7.25'
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.7.0'
testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.7.0'

But I am getting this error:

Entry META-INF/LICENSE.txt is a duplicate but no duplicate handling strategy has been set

Any suggestions?

A: This worked for me; just make sure to add

duplicatesStrategy = DuplicatesStrategy.EXCLUDE

towards the beginning of your fatjar task.
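For context, a minimal fat-jar task with the strategy in place might look like this (Groovy DSL; the main class name is a placeholder):

task fatJar(type: Jar) {
    // Without this, dependency jars that each ship META-INF/LICENSE.txt
    // make the merge fail with the "duplicate" error above.
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE

    archiveClassifier = 'all'
    manifest {
        attributes 'Main-Class': 'com.example.Main' // placeholder
    }
    from sourceSets.main.output
    from {
        configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) }
    }
}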
[ "gradle", "java" ]
stackoverflow_0070032666_gradle_java.txt
Q: How to remove Scrollbar from a widget's scrolling children

I have a Scrollbar surrounding a CupertinoPageScaffold child widget. Inside, it contains a few horizontally scrolling ListViews which should not show scrollbars, but because the parent Scrollbar is present, every scrolling widget below it has a scroll bar attached. Is there a widget that wraps scrolling widgets and removes the scrollbar? I've looked for widgets that might be called NoScrollbar, but these don't exist. If I remove the parent Scrollbar then the scrollbars are removed.

A: If you wrap the ListViews for which you don't want scroll bars in a NotificationListener in the following manner:

NotificationListener<ScrollNotification>(
  onNotification: (_) => true,
  child: ListView(....)
)

I think those ListViews will not have scrollbars.

A: You can use ScrollConfiguration to hide the scrollbars.

ScrollConfiguration(
  behavior: ScrollConfiguration.of(context).copyWith(scrollbars: false),
  child: ListView(....)
)

You can use this with any scrollable widget.
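Since the question is specifically about horizontal ListViews under a page-level Scrollbar, here is a self-contained sketch of the ScrollConfiguration approach (the copyWith(scrollbars: ...) flag needs a reasonably recent Flutter, roughly 2.2+):

import 'package:flutter/material.dart';

class NoScrollbarRow extends StatelessWidget {
  const NoScrollbarRow({super.key});

  @override
  Widget build(BuildContext context) {
    return SizedBox(
      height: 120,
      child: ScrollConfiguration(
        // Hides scrollbars for this subtree only; the parent Scrollbar
        // higher up the tree is unaffected.
        behavior: ScrollConfiguration.of(context).copyWith(scrollbars: false),
        child: ListView.builder(
          scrollDirection: Axis.horizontal,
          itemCount: 10,
          itemBuilder: (context, i) =>
              SizedBox(width: 120, child: Card(child: Center(child: Text('Item $i')))),
        ),
      ),
    );
  }
}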
[ "dart", "flutter" ]
stackoverflow_0056880105_dart_flutter.txt
Q: Query listing all shared components referenced by a page

I want to create a SQL query which returns all the APEX shared components referenced by a page.

Example: I have 2 Lists of Values named LOV_EXAMPLE1 and LOV_EXAMPLE2. I have an application (ID=100) and one page (ID=10) item which is a Popup LOV and references LOV_EXAMPLE1. Now I want to create a query which returns LOV_EXAMPLE1 as a referenced shared component of page 10. I am using Oracle APEX 21.2.1. I have already looked at the APEX views but can't seem to connect them together.

select * from apex_application_lovs
select * from apex_application_pages

A: I don't know whether there's a prepared APEX view which shows the data you're interested in, so I'd rather guess that you'll have to dig in to find such information. As for the example you mentioned - lists of values (as part of shared components) - you'd query apex_application_page_items (and join it to other views, such as apex_application_pages):

select p.page_id, p.page_name, i.lov_named_lov
from apex_application_pages p join apex_application_page_items i on p.application_id = i.application_id
                                                                and p.page_id = i.page_id
where p.application_id = 10401
  and i.lov_named_lov is not null;
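Adapted to the IDs from the question (application 100, page 10), and adding the item name so you can see which page item references which shared LOV — a sketch; lov_named_lov is null for items that use an inline (non-shared) LOV, so those are filtered out:

select p.page_id,
       p.page_name,
       i.item_name,
       i.lov_named_lov as shared_lov
  from apex_application_pages p
  join apex_application_page_items i
    on i.application_id = p.application_id
   and i.page_id = p.page_id
 where p.application_id = 100
   and p.page_id = 10
   and i.lov_named_lov is not null;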
[ "oracle", "oracle_apex" ]
stackoverflow_0074664736_oracle_oracle_apex.txt
Q: Wrapping a shell in Python and then launching subprocesses in said shell Python can be used to spawn a shell and communicate with it: p = subprocess.Popen(['cmd'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # use 'bash' if Linux. With this set-up sending a command such as 'echo foo' or 'cd' command works. However, problems arise when we try to use a program inside of the cmd line. For example, in a normal shell you can enter a python shell by typing "python", run Python code (and report printouts, etc), and then leave with "quit()". This SSCCE attempts to do so (Python 3.10) but fails: import subprocess, threading, os, time proc = 'cmd' if os.name=='nt' else 'bash' messages = [] p = subprocess.Popen([proc], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) exit_loops = False def read_stdout(): while not exit_loops: msg = p.stdout.readline() messages.append(msg.decode()) def read_stderr(): while not exit_loops: msg = p.stderr.readline() messages.append(msg.decode()) threading.Thread(target=read_stdout).start() threading.Thread(target=read_stderr).start() # This works: p.stdin.write('echo foo\n'.encode()) p.stdin.flush() time.sleep(0.125) print('Messages echo test:', messages) del messages[:] # This fails: p.stdin.write('python\n'.encode()) p.stdin.flush() p.stdin.write('x = 123\n'.encode()) p.stdin.flush() p.stdin.write('print("x is:",x)\n'.encode()) p.stdin.flush() p.stdin.write('y = nonexistant_var\n'.encode()) p.stdin.flush() p.stdin.write('quit()\n'.encode()) p.stdin.flush() time.sleep(1.5) print('Messages python test:', messages) # This generates a python error b/c quit() didn't actually quit: p.stdin.write('echo bar\n'.encode()) p.stdin.flush() time.sleep(0.125) print('Messages echo post-python test:', messages) The output of the SSCCE can handle the first echo command, but cannot handle the Python properly. Also, it can't seem quit() the python script and return to the normal shell. Instead it generates a syntax error: Messages echo test: ['Microsoft Windows [Version 10.0.22000.1219]\r\n', '(c) Microsoft Corporation. All rights reserved.\r\n', '\r\n', 'path\\to\\folder\n', 'foo\r\n', '\r\n'] Messages python test: ['path\\to\\folder>python\n'] Messages echo post-python test: ['path\\to\\folder>python\n', ' File "<stdin>", line 5\r\n', ' echo bar\r\n', ' ^\r\n', 'SyntaxError: invalid syntax\r\n', '\r\n'] Once it opened the python shell it got "stuck". However, the terminal handles Python shells just fine (and other programs). How can we do so? A: Here’s an example of how asyncio can run a shell command and obtain its result: import asyncio async def run(cmd): proc = await asyncio.create_subprocess_shell( cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE) stdout, stderr = await proc.communicate() print(f'[{cmd!r} exited with {proc.returncode}]') if stdout: print(f'[stdout]\n{stdout.decode()}') if stderr: print(f'[stderr]\n{stderr.decode()}') asyncio.run(run('ls /zzz')) will print: ['ls /zzz' exited with 1] [stderr] ls: /zzz: No such file or directory Because all asyncio subprocess functions are asynchronous and asyncio provides many tools to work with such functions, it is easy to execute and monitor multiple subprocesses in parallel. 
It is indeed trivial to modify the above example to run several commands simultaneously: async def main(): await asyncio.gather( run('ls /zzz'), run('sleep 1; echo "hello"')) asyncio.run(main()) Examples An example using the Process class to control a subprocess and the StreamReader class to read from its standard output. The subprocess is created by the create_subprocess_exec() function: import asyncio import sys async def get_date(): code = 'import datetime; print(datetime.datetime.now())' # Create the subprocess; redirect the standard output # into a pipe. proc = await asyncio.create_subprocess_exec( sys.executable, '-c', code, stdout=asyncio.subprocess.PIPE) # Read one line of output. data = await proc.stdout.readline() line = data.decode('ascii').rstrip() # Wait for the subprocess exit. await proc.wait() return line date = asyncio.run(get_date()) print(f"Current date: {date}")
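A note beyond the asyncio examples above: the hang described in the question has a well-known cause. When Python's stdin is a pipe rather than a terminal, the interpreter runs non-interactively and block-buffers its output, so nothing comes back until EOF. A minimal sketch of a workaround is to drive the interpreter directly with -i (force interactive mode) and -u (unbuffered output); this assumes a python executable on PATH and is a sketch, not a drop-in replacement for the question's reader threads:

import subprocess

# -i forces the REPL even though stdin is a pipe; -u disables buffering
# so each result arrives promptly instead of waiting for the pipe to close.
p = subprocess.Popen(
    ["python", "-u", "-i"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # the >>> prompts are written to stderr; merge them
    text=True,
)
p.stdin.write("x = 123\n")
p.stdin.write("print('x is:', x)\n")
p.stdin.flush()
print(p.communicate("quit()\n", timeout=10)[0])  # send the rest, collect all output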
Wrapping a shell in Python and then launching subprocesses in said shell
Python can be used to spawn a shell and communicate with it: p = subprocess.Popen(['cmd'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # use 'bash' if Linux. With this set-up sending a command such as 'echo foo' or 'cd' command works. However, problems arise when we try to use a program inside of the cmd line. For example, in a normal shell you can enter a python shell by typing "python", run Python code (and report printouts, etc), and then leave with "quit()". This SSCCE attempts to do so (Python 3.10) but fails: import subprocess, threading, os, time proc = 'cmd' if os.name=='nt' else 'bash' messages = [] p = subprocess.Popen([proc], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) exit_loops = False def read_stdout(): while not exit_loops: msg = p.stdout.readline() messages.append(msg.decode()) def read_stderr(): while not exit_loops: msg = p.stderr.readline() messages.append(msg.decode()) threading.Thread(target=read_stdout).start() threading.Thread(target=read_stderr).start() # This works: p.stdin.write('echo foo\n'.encode()) p.stdin.flush() time.sleep(0.125) print('Messages echo test:', messages) del messages[:] # This fails: p.stdin.write('python\n'.encode()) p.stdin.flush() p.stdin.write('x = 123\n'.encode()) p.stdin.flush() p.stdin.write('print("x is:",x)\n'.encode()) p.stdin.flush() p.stdin.write('y = nonexistant_var\n'.encode()) p.stdin.flush() p.stdin.write('quit()\n'.encode()) p.stdin.flush() time.sleep(1.5) print('Messages python test:', messages) # This generates a python error b/c quit() didn't actually quit: p.stdin.write('echo bar\n'.encode()) p.stdin.flush() time.sleep(0.125) print('Messages echo post-python test:', messages) The output of the SSCCE can handle the first echo command, but cannot handle the Python properly. Also, it can't seem quit() the python script and return to the normal shell. Instead it generates a syntax error: Messages echo test: ['Microsoft Windows [Version 10.0.22000.1219]\r\n', '(c) Microsoft Corporation. All rights reserved.\r\n', '\r\n', 'path\\to\\folder\n', 'foo\r\n', '\r\n'] Messages python test: ['path\\to\\folder>python\n'] Messages echo post-python test: ['path\\to\\folder>python\n', ' File "<stdin>", line 5\r\n', ' echo bar\r\n', ' ^\r\n', 'SyntaxError: invalid syntax\r\n', '\r\n'] Once it opened the python shell it got "stuck". However, the terminal handles Python shells just fine (and other programs). How can we do so?
[ "Here’s an example of how asyncio can run a shell command and obtain its result:\nimport asyncio\n\nasync def run(cmd):\n proc = await asyncio.create_subprocess_shell(\n cmd,\n stdout=asyncio.subprocess.PIPE,\n stderr=asyncio.subprocess.PIPE)\n\n stdout, stderr = await proc.communicate()\n\n print(f'[{cmd!r} exited with {proc.returncode}]')\n if stdout:\n print(f'[stdout]\\n{stdout.decode()}')\n if stderr:\n print(f'[stderr]\\n{stderr.decode()}')\n\nasyncio.run(run('ls /zzz'))\n\nwill print:\n['ls /zzz' exited with 1]\n[stderr]\nls: /zzz: No such file or directory\n\nBecause all asyncio subprocess functions are asynchronous and asyncio provides many tools to work with such functions, it is easy to execute and monitor multiple subprocesses in parallel. It is indeed trivial to modify the above example to run several commands simultaneously:\nasync def main():\n await asyncio.gather(\n run('ls /zzz'),\n run('sleep 1; echo \"hello\"'))\n\nasyncio.run(main())\n\nExamples\nAn example using the Process class to control a subprocess and the StreamReader class to read from its standard output.\nThe subprocess is created by the create_subprocess_exec() function:\nimport asyncio\nimport sys\n\nasync def get_date():\n code = 'import datetime; print(datetime.datetime.now())'\n\n # Create the subprocess; redirect the standard output\n # into a pipe.\n proc = await asyncio.create_subprocess_exec(\n sys.executable, '-c', code,\n stdout=asyncio.subprocess.PIPE)\n\n # Read one line of output.\n data = await proc.stdout.readline()\n line = data.decode('ascii').rstrip()\n\n # Wait for the subprocess exit.\n await proc.wait()\n return line\n\ndate = asyncio.run(get_date())\nprint(f\"Current date: {date}\")\n\n" ]
[ 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0074664917_python_subprocess.txt
Q: How to achieve multiple layouts for cards using a single grid or flex? I am trying to make a responsive card layout as shown in the image. Currently what I am doing is separately creating layouts for computers, tablets and mobile, then with the help of a media query I set the display property to display: none for the other two views. For example: if I am in computer view, the card layout for the computer will not have display set to none while the other two will have display set to none. This works but is causing a lot of redundancy. Is there a way to achieve all three layouts using a single flex or grid? Please guide me. A: Flex can achieve this easily. Depending upon the screen width you can add media queries as follows; you can tweak the box width and max-width to resize the boxes. /* tablet view */ @media only screen and (max-width: 768px){ .parent-container { max-width: 320px; } } /* mobile view */ @media only screen and (max-width: 480px){ .parent-container { flex-direction: column; align-items: center; } } You can check this out https://jsfiddle.net/rx4hvn/wbqoLe0y/35/ Hope this helps! A: You do not have to set display: none every time you want a certain design for a screen size. Media queries bring something called a breakpoint where you can specify the width (like min-width: 768px) of your screen. For mobile screen sizes just put your css under a media query with max-width: 600px. Further, you can use the orientation property to distinguish between landscape and portrait mode. More on Queries and screen sizes //for mobile @media only screen and (max-width: 600px) { display: flex; /* some more css code */ } //for tablet @media only screen and (min-width: 600px) { display: flex; /* some more css code */ } //for desktop size @media only screen and (min-width: 768px) { display: flex; /* some more css code */ } Make sure to follow the mobile-first development approach as recommended in the MDN Guide
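One more sketch, building on the first answer rather than replacing it (the class name .parent-container is reused from that answer): a single CSS Grid rule with auto-fit can often produce all three layouts with no media queries at all, which removes the display: none duplication entirely.

.parent-container {
  display: grid;
  /* as many 250px-minimum columns as fit the viewport: cards reflow from
     3-up on desktop to 2-up on tablets to 1-up on narrow phones */
  grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
  gap: 1rem;
}

The 250px minimum is an assumption; tune it to your card design.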
How to achieve multiple layouts for cards using a single grid or flex?
I am trying to make a responsive card layout as shown in the image. Currently what I am doing is separately creating layouts for computers, tablets and mobile, then with the help of a media query I set the display property to display: none for the other two views. For example: if I am in computer view, the card layout for the computer will not have display set to none while the other two will have display set to none. This works but is causing a lot of redundancy. Is there a way to achieve all three layouts using a single flex or grid? Please guide me.
[ "Flex can achieve this easily.\nDepending upon the screen width you can add media queries as following, you can tweak with box width and max-width to resize the boxes.\n/* tablet view */\n@media only screen and (max-width: 768px){\n .parent-container {\n max-width: 320px;\n }\n}\n\n\n/* mobile view */\n@media only screen and (max-width: 480px){\n .parent-container {\n flex-direction: column;\n align-items: center;\n }\n}\n\nYou can check this out https://jsfiddle.net/rx4hvn/wbqoLe0y/35/\nHope this helps!\n", "You do not have to set the display: none everytime you want to design a certain design for a screen size. Media queries bring something called a breakpoint where you can specify the width (like min-width: 768px) of your screen. For mobile screen sizes just put your css under the media query with max-width: 600px. Further you can instead use the orientation property to distinguish between landscape or portrait mode. \nMore on Queries and screen sizes\n\n\n//for mobile\n @media query and only screen(max-width: 600px) \n {\n display:flex;\n //some more css-code\n }\n \n//for tablet\n @media query and only screen(min-width: 600px) \n {\n\n display: flex;\n //some more css-code\n\n }\n \n//for desktop size\n @media query and only screen(min-width: 768px) \n {\n display: flex;\n //some more css-Code\n }\n\n\n\nMake sure to follow the mobile first development approach like recommended under MDN Guide\n" ]
[ 1, 0 ]
[]
[]
[ "css" ]
stackoverflow_0074664671_css.txt
Q: onChange Trigger on Specific Cell Value Change - Google Apps Script I have a spreadsheet and have some data in it. Here I want to trigger a function on Cell Script!A2 value change. I will not change it's value directly. It will be dependent by it's formula on below conditions 1. On the Edit of a Cell in SheetA!A10:A300 2. On the value changes of SheetB!A1:A5 (which will be changed by formulas like TODAY, NOW ) Of course it's value will be changed 3 times in a day 1 00:05 2 20:25 3 20:49 So I have setup the Installable trigger on my Spreadsheet instead of onEdit simple Trigger. Code.gs function ChangeValue() { console.log("The Value of the Cell was Changed") ; } function onChange(e) { range = e.range; ss = e.source; sheetName = ss.getSheetByName("SCRIPT"); if (e.range.getA1Notation() === 'SCRIPT!A2') { ChangeValue(); } } But I'm getting the below error. TypeError: Cannot read property 'range' of undefined A: Try this instead: This code check the active sheet, and call the function 'ChangeValue()' only when the sheet name is 'SheetA', edited row is between 10 to 300, and edited column is A. It works fine as 'onEdit()' trigger or installable onedit trigger in my test cases. function ChangeValue() { console.log("The Value of the Cell was Changed"); } function onChange(e) { const range = e.range; const activeSheet = e.source.getActiveSheet().getName(); const conditions = [ activeSheet === 'SheetA', range.getRow() >= 10, range.getRow() <= 300, range.getColumn() == 1 ] if (conditions.every(c => c === true)) ChangeValue() } A: You mentioned that Of course it's value will be changed 3 times in a day 1 00:05 2 20:25 3 20:49 so you have to create time based installable trigger for each time like below. function createTrigger() { ScriptApp.newTrigger("ChangeValue") .timeBased() .atHour(00) .nearMinute(05) .everyDays(1) .create(); } Good Luck ..
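A further note that neither answer spells out: the TypeError in the question is what you get when the function runs without an event object, typically when it is executed by hand from the script editor, and the string comparison can never succeed anyway because Range.getA1Notation() returns 'A2' without a sheet-name prefix. A defensive sketch, assuming an installable on-edit or on-change trigger is attached:

function onChange(e) {
  if (!e || !e.range) return; // manual runs (and some change types) carry no range
  const onTargetCell =
    e.range.getSheet().getName() === 'SCRIPT' &&
    e.range.getA1Notation() === 'A2'; // note: no 'SCRIPT!' prefix here
  if (onTargetCell) ChangeValue();
}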
onChange Trigger on Specific Cell Value Change - Google Apps Script
I have a spreadsheet and have some data in it. Here I want to trigger a function on Cell Script!A2 value change. I will not change it's value directly. It will be dependent by it's formula on below conditions 1. On the Edit of a Cell in SheetA!A10:A300 2. On the value changes of SheetB!A1:A5 (which will be changed by formulas like TODAY, NOW ) Of course it's value will be changed 3 times in a day 1 00:05 2 20:25 3 20:49 So I have setup the Installable trigger on my Spreadsheet instead of onEdit simple Trigger. Code.gs function ChangeValue() { console.log("The Value of the Cell was Changed") ; } function onChange(e) { range = e.range; ss = e.source; sheetName = ss.getSheetByName("SCRIPT"); if (e.range.getA1Notation() === 'SCRIPT!A2') { ChangeValue(); } } But I'm getting the below error. TypeError: Cannot read property 'range' of undefined
[ "Try this instead:\nThis code check the active sheet, and call the function 'ChangeValue()' only when the sheet name is 'SheetA', edited row is between 10 to 300, and edited column is A.\nIt works fine as 'onEdit()' trigger or installable onedit trigger in my test cases.\n\n\nfunction ChangeValue() {\n console.log(\"The Value of the Cell was Changed\");\n}\n\nfunction onChange(e) {\n const range = e.range;\n const activeSheet = e.source.getActiveSheet().getName();\n const conditions = [\n activeSheet === 'SheetA',\n range.getRow() >= 10,\n range.getRow() <= 300,\n range.getColumn() == 1\n ]\n if (conditions.every(c => c === true)) ChangeValue()\n}\n\n\n\n", "You mentioned that\nOf course it's value will be changed 3 times in a day\n\n1 00:05 2 20:25 3 20:49\n\nso you have to create time based installable trigger for each time like below.\nfunction createTrigger() {\n\n ScriptApp.newTrigger(\"ChangeValue\")\n .timeBased()\n .atHour(00)\n .nearMinute(05)\n .everyDays(1)\n .create();\n}\n\nGood Luck ..\n" ]
[ 1, 1 ]
[]
[]
[ "google_apps_script", "google_sheets", "javascript", "onchange", "triggers" ]
stackoverflow_0074542978_google_apps_script_google_sheets_javascript_onchange_triggers.txt
Q: Module not found: Error: Can't resolve 'class-transformer/storage' - Angular Universal / NestJs Folks, I've been trying to implement an application using angular with angular universal and NestJs. I believe that is possible to seize the nest server not only for SSR, but also to also provide API endpoints. I've made the setup recommended on https://github.com/nestjs/ng-universal using ng add @nestjs/ng-universal, pretty standard. After that I added my code to the angular src folder and installed the needed dependencies. The problem is that when I try to import a module to nest app.module, I get the following error: Error: Module not found: Error: Can't resolve 'class-transformer/storage' I've tried to use webpack, but since my knowledge on webpack is petty, the results were failure after failure, as expected. First, is it possible to seize the server also to provide endpoints? Second, what should I do to resolve this module? Please find below the repository for reproducing the issue: https://github.com/vitordhers/universal-nest Thanks in advance A: Just faced the exact problem, and when I saw your post my glance of hope just disappeared after I saw the date and zero answers :P Anyway, ended up here https://github.com/typestack/class-transformer/issues/563 to finally downgrading the class-transformer package to 0.3.1 Worked for me and I hope it does for you too: npm install --save [email protected] Couldn't make nestjs/ng-universal work though, but that's for another question. Best regards and stay safe, JosΓ© Ignacio A: For anyone using webpack with nestjs add the class-transformer/storage into de lazyImport on the webpack.config.js file. A: As per the issue opened up in git hub. changing the import statement to import { defaultMetadataStorage } from 'class-transformer/cjs/storage'; solves the issue. It solved for me . A: Add class-transformer/storage to lazyImports [https://github.com/nestjs/mapped-types/issues/486#issuecomment-932715880][1] I spent a few hours on this particular issue, especially when using fastify as well. This feels like a potential gap in the existing NestJS serverless documentation. Would a pull request be accepted to update the documentation with the following example? For those still wrestling with this issue, my webpack.config.js (targeting AWS Lambda) looked like so: const path = require('path') // Documentation: // https://docs.nestjs.com/faq/serverless#serverless // https://github.com/nestjs/swagger/issues/1334#issuecomment-836488125 // Tell webpack to ignore specific imports that aren't // used by our Lambda but imported by NestJS (can cause packing errors). const lazyImports = [ '@nestjs/microservices/microservices-module', '@nestjs/websockets/socket-module', '@nestjs/platform-express', 'swagger-ui-express', 'class-transformer/storage' // https://github.com/nestjs/mapped-types/issues/486#issuecomment-932715880 ] module.exports = (options, webpack) => ({ ...options, mode: 'production', target: 'node14', entry: { index: './lib/lambda.js', }, output: { filename: '[name].js', libraryTarget: 'umd', path: path.join(process.cwd(), 'build/dist'), }, externals: { 'aws-sdk': 'aws-sdk', }, plugins: [ ...options.plugins, new webpack.IgnorePlugin({ checkResource(resource) { if (lazyImports.includes(resource)) { try { require.resolve(resource) } catch (err) { return true } } return false }, }), ], })
Module not found: Error: Can't resolve 'class-transformer/storage' - Angular Universal / NestJs
Folks, I've been trying to implement an application using angular with angular universal and NestJs. I believe that is possible to seize the nest server not only for SSR, but also to also provide API endpoints. I've made the setup recommended on https://github.com/nestjs/ng-universal using ng add @nestjs/ng-universal, pretty standard. After that I added my code to the angular src folder and installed the needed dependencies. The problem is that when I try to import a module to nest app.module, I get the following error: Error: Module not found: Error: Can't resolve 'class-transformer/storage' I've tried to use webpack, but since my knowledge on webpack is petty, the results were failure after failure, as expected. First, is it possible to seize the server also to provide endpoints? Second, what should I do to resolve this module? Please find below the repository for reproducing the issue: https://github.com/vitordhers/universal-nest Thanks in advance
[ "Just faced the exact problem, and when I saw your post my glance of hope just disappeared after I saw the date and zero answers :P\nAnyway, ended up here https://github.com/typestack/class-transformer/issues/563 to finally downgrading the class-transformer package to 0.3.1\nWorked for me and I hope it does for you too:\nnpm install --save [email protected]\n\nCouldn't make nestjs/ng-universal work though, but that's for another question.\nBest regards and stay safe,\nJosΓ© Ignacio\n", "For anyone using webpack with nestjs add the class-transformer/storage into de lazyImport on the webpack.config.js file.\n", "As per the issue opened up in git hub. changing the import statement to\nimport { defaultMetadataStorage } from 'class-transformer/cjs/storage';\nsolves the issue. It solved for me .\n", "Add class-transformer/storage to lazyImports\n[https://github.com/nestjs/mapped-types/issues/486#issuecomment-932715880][1]\nI spent a few hours on this particular issue, especially when using fastify as well.\nThis feels like a potential gap in the existing NestJS serverless documentation. Would a pull request be accepted to update the documentation with the following example?\nFor those still wrestling with this issue, my webpack.config.js (targeting AWS Lambda) looked like so:\nconst path = require('path')\n\n// Documentation:\n// https://docs.nestjs.com/faq/serverless#serverless\n// https://github.com/nestjs/swagger/issues/1334#issuecomment-836488125\n\n// Tell webpack to ignore specific imports that aren't\n// used by our Lambda but imported by NestJS (can cause packing errors).\nconst lazyImports = [\n '@nestjs/microservices/microservices-module',\n '@nestjs/websockets/socket-module',\n '@nestjs/platform-express',\n 'swagger-ui-express',\n 'class-transformer/storage' // https://github.com/nestjs/mapped-types/issues/486#issuecomment-932715880\n]\n\nmodule.exports = (options, webpack) => ({\n ...options,\n mode: 'production',\n target: 'node14',\n entry: {\n index: './lib/lambda.js',\n },\n output: {\n filename: '[name].js',\n libraryTarget: 'umd',\n path: path.join(process.cwd(), 'build/dist'),\n },\n externals: {\n 'aws-sdk': 'aws-sdk',\n },\n plugins: [\n ...options.plugins,\n new webpack.IgnorePlugin({\n checkResource(resource) {\n if (lazyImports.includes(resource)) {\n try {\n require.resolve(resource)\n } catch (err) {\n return true\n }\n }\n return false\n },\n }),\n ],\n})\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "angular", "angular_universal", "nestjs", "server_side_rendering" ]
stackoverflow_0070802610_angular_angular_universal_nestjs_server_side_rendering.txt
Q: PySpark: How to get range of dates from dataframe into a new dataframe I have this PySpark data frame with a single row: spark_session_tbl_df.printSchema() spark_session_tbl_df.show() root |-- strm: string (nullable = true) |-- acad_career: string (nullable = true) |-- session_code: string (nullable = true) |-- sess_begin_dt: timestamp (nullable = true) |-- sess_end_dt: timestamp (nullable = true) |-- census_dt: timestamp (nullable = true) +----+-----------+------------+-------------------+-------------------+-------------------+ |strm|acad_career|session_code| sess_begin_dt| sess_end_dt| census_dt| +----+-----------+------------+-------------------+-------------------+-------------------+ |2228| UGRD| 1|2022-08-20 00:00:00|2022-12-03 00:00:00|2022-09-19 00:00:00| +----+-----------+------------+-------------------+-------------------+-------------------+ I am trying to output something like this where each row is a range/sequence of 7 days: +-------------------+-------------------+ | sess_begin_dt| sess_end_dt| +-------------------+-------------------+ |2022-08-20 |2022-08-27 | +-------------------+-------------------+ |2022-08-28 |2022-09-04 | +----+--------------+-------------------+ |2022-09-05 |2022-09-12 | +-------------------+-------------------+ |2022-09-13 |2022-09-20 | +----+--------------+-------------------+ |2022-09-21 |2022-09-28 | +-------------------+-------------------+ ..... +-------------------+-------------------+ |2022-11-26 |2022-12-03 | +----+--------------+-------------------+ I tried this below, but I am not sure if this can reference the PySpark data frame or I will need to do another approach to achieve the desire output above. from pyspark.sql.functions import sequence, to_date, explode, col date_range_df = spark.sql("SELECT sequence(to_date('sess_begin_dt'), to_date('sess_end_dt'), interval 7 day) as date").withColumn("date", explode(col("date"))) date_range_df.show() A: One of the approaches when you are dealing with timeseries is to convert date to timestamp and solve the question in a numerical way and the end convert it to date again. from pyspark.sql import functions as F data = [['2022-08-20 00:00:00', '2022-12-03 00:00:00']] df = spark.createDataFrame(data = data, schema = ['start', 'end']) week_seconds = 7*24*60*60 ( df .withColumn('start_timestamp', F.unix_timestamp('start')) .withColumn('end_timestamp', F.unix_timestamp('end')) .select( F.explode( F.sequence('start_timestamp', 'end_timestamp', F.lit(week_seconds))) .alias('start_date')) .withColumn('start_date', F.to_date(F.from_unixtime('start_date'))) .withColumn('end_date', F.date_add('start_date', 6)) ).show() +----------+----------+ |start_date| end_date| +----------+----------+ |2022-08-20|2022-08-26| |2022-08-27|2022-09-02| |2022-09-03|2022-09-09| |2022-09-10|2022-09-16| |2022-09-17|2022-09-23| +----------+----------+
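As a variation on the answer above, and not part of the original post: Spark's sequence function can also be applied to the timestamp columns directly with an interval step, which skips the manual unix-timestamp arithmetic. A sketch, assuming Spark 2.4+ where sequence accepts date bounds with an interval step; date_add uses 7 here to mirror the question's sample output:

from pyspark.sql import functions as F

weeks_df = (
    spark_session_tbl_df
    .select(
        F.explode(
            F.expr("sequence(to_date(sess_begin_dt), to_date(sess_end_dt), interval 7 days)")
        ).alias("sess_begin_dt")
    )
    .withColumn("sess_end_dt", F.date_add("sess_begin_dt", 7))
)
weeks_df.show()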
PySpark: How to get range of dates from dataframe into a new dataframe
I have this PySpark data frame with a single row: spark_session_tbl_df.printSchema() spark_session_tbl_df.show() root |-- strm: string (nullable = true) |-- acad_career: string (nullable = true) |-- session_code: string (nullable = true) |-- sess_begin_dt: timestamp (nullable = true) |-- sess_end_dt: timestamp (nullable = true) |-- census_dt: timestamp (nullable = true) +----+-----------+------------+-------------------+-------------------+-------------------+ |strm|acad_career|session_code| sess_begin_dt| sess_end_dt| census_dt| +----+-----------+------------+-------------------+-------------------+-------------------+ |2228| UGRD| 1|2022-08-20 00:00:00|2022-12-03 00:00:00|2022-09-19 00:00:00| +----+-----------+------------+-------------------+-------------------+-------------------+ I am trying to output something like this where each row is a range/sequence of 7 days: +-------------------+-------------------+ | sess_begin_dt| sess_end_dt| +-------------------+-------------------+ |2022-08-20 |2022-08-27 | +-------------------+-------------------+ |2022-08-28 |2022-09-04 | +----+--------------+-------------------+ |2022-09-05 |2022-09-12 | +-------------------+-------------------+ |2022-09-13 |2022-09-20 | +----+--------------+-------------------+ |2022-09-21 |2022-09-28 | +-------------------+-------------------+ ..... +-------------------+-------------------+ |2022-11-26 |2022-12-03 | +----+--------------+-------------------+ I tried this below, but I am not sure if this can reference the PySpark data frame or I will need to do another approach to achieve the desire output above. from pyspark.sql.functions import sequence, to_date, explode, col date_range_df = spark.sql("SELECT sequence(to_date('sess_begin_dt'), to_date('sess_end_dt'), interval 7 day) as date").withColumn("date", explode(col("date"))) date_range_df.show()
[ "One of the approaches when you are dealing with timeseries is to convert date to timestamp and solve the question in a numerical way and the end convert it to date again.\nfrom pyspark.sql import functions as F\n\ndata = [['2022-08-20 00:00:00', '2022-12-03 00:00:00']]\ndf = spark.createDataFrame(data = data, schema = ['start', 'end'])\n\nweek_seconds = 7*24*60*60\n(\n df\n .withColumn('start_timestamp', F.unix_timestamp('start'))\n .withColumn('end_timestamp', F.unix_timestamp('end'))\n .select(\n F.explode(\n F.sequence('start_timestamp', 'end_timestamp', F.lit(week_seconds)))\n .alias('start_date'))\n .withColumn('start_date', F.to_date(F.from_unixtime('start_date')))\n .withColumn('end_date', F.date_add('start_date', 6))\n).show()\n\n+----------+----------+\n|start_date| end_date|\n+----------+----------+\n|2022-08-20|2022-08-26|\n|2022-08-27|2022-09-02|\n|2022-09-03|2022-09-09|\n|2022-09-10|2022-09-16|\n|2022-09-17|2022-09-23|\n+----------+----------+\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark_sql", "dataframe", "pyspark", "python", "sequence" ]
stackoverflow_0074662709_apache_spark_sql_dataframe_pyspark_python_sequence.txt
Q: VMware player not booting guest os due to wrong vmmon module I recently updated my vmware-workstation on Manjaro (which I had installed from the Arch repos and which used to work perfectly before), but now when I boot the guest OS I get a "version mismatch with vmmon module" error. I don't know how to make VMware use the newer vmmon module or how to update the vmmon module. I have tried doing a clean reinstall as well as updating the Linux headers and running modprobe, but nothing seems to fix this. sudo vmware-modconfig --console --install-all gives me the output [AppLoader] GLib does not have GSettings support. Failed to stop vmware.service: Unit vmware.service not loaded. Unable to stop services A: The issue is that Secure Boot is enabled in your PC's BIOS settings, so the OS can't run in VMware; when Secure Boot is disabled, the OS runs.
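If disabling Secure Boot is not an option, VMware's knowledge base describes an alternative: sign the rebuilt vmmon/vmnet modules and enroll the signing key with mokutil. The commands below are a sketch from memory, so verify the sign-file path for your distro before running:

# generate a one-off machine-owner key, sign vmmon/vmnet, enroll the key, then reboot
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -nodes -days 36500 -subj "/CN=VMware/"
sudo /usr/src/linux-headers-"$(uname -r)"/scripts/sign-file sha256 ./MOK.priv ./MOK.der "$(modinfo -n vmmon)"
sudo /usr/src/linux-headers-"$(uname -r)"/scripts/sign-file sha256 ./MOK.priv ./MOK.der "$(modinfo -n vmnet)"
sudo mokutil --import MOK.der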
VMware player not booting guest os due to wrong vmmon module
I recently updated my vmware-workstation on Manjaro (which I had installed from the Arch repos and which used to work perfectly before), but now when I boot the guest OS I get a "version mismatch with vmmon module" error. I don't know how to make VMware use the newer vmmon module or how to update the vmmon module. I have tried doing a clean reinstall as well as updating the Linux headers and running modprobe, but nothing seems to fix this. sudo vmware-modconfig --console --install-all gives me the output [AppLoader] GLib does not have GSettings support. Failed to stop vmware.service: Unit vmware.service not loaded. Unable to stop services
[ "The issue is that in your pc's BIOS setting has secure boot enable so i can't run the OS in VMWare but when is disable the secure boot from pc then OS is running.\n" ]
[ 0 ]
[]
[]
[ "arch", "manjaro", "vmware", "vmware_player" ]
stackoverflow_0074664955_arch_manjaro_vmware_vmware_player.txt
Q: Which is correct type[] or [type] for arrays? So I have the code below; I am new to TypeScript and I am creating a project with React. But TypeScript didn't like PayloadAction, so instead I changed it to <UserData[]> and then <[UserData]>. But I didn't actually understand what I was doing, only that I thought my UserData is returned/sent as an array of data - [UserData] etc. It accepts both, and I am not 100% sure which is actually correct in my case; I have tried to refer to the docs but I don't think I have understood how it applies to my data. pushData seems correct as it only accepts an object containing UserData, not more than 1 entry. Although this is TypeScript, it might be worth pointing out this is for state using RTK export type UserData = { _id?: string name: string quantity: string | number paid: string | number date: string } export interface TransactionSlice { data: UserData[] } const initialState: TransactionSlice = { data: [], } export const transactionSlice = createSlice({ name: 'transactionData', initialState, reducers: { addData: (state, action: PayloadAction<[UserData]>) => { state.data = action.payload }, pushData: (state, action: PayloadAction<UserData>) => { state.data.push(action.payload) }, }, }) A: As you said, it doesn't accept more than one object; that observation itself answers your question. Defining an array with the types written inside the brackets creates an array type of a specific length (a tuple). There are some cases where you might require only a specific number of elements inside an array; for instance, when dealing with quadrants and the like, the matrix will only have 2 or at most 3 entries, so defining the type as number[] doesn't make sense there. Here [number, number, number] will be perfect.
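To make the tuple/array distinction concrete, a small sketch (the type and variable names are illustrative, not from the question):

type Item = { name: string };

const anyLength: Item[] = [];                 // ordinary array: zero or more elements
const exactlyOne: [Item] = [{ name: "a" }];   // tuple: exactly one element

anyLength.push({ name: "b" });                // fine
// const bad: [Item] = [];                    // error: source has 0 element(s) but target requires 1

So for a slice that holds a whole list, PayloadAction<UserData[]> matches the data; PayloadAction<[UserData]> would force every dispatched payload to be a one-element tuple.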
Which is correct type[] or [type] for arrays?
So i have the below , i am new to typescript and i am creating a project with react. But typescript didn't like PayloadAction , so instead i changed it to <UserData[]> and then <[UserData]> But I didn't actually understand what i was doing only that i thought my UserData is returned/sent as an array of data - [UserData] etc.... It accepts both , I am not 100% sure which is actually correct in my case and have tried to refer to the docs but i don't think i have understood how it applies to my data. pushData seems correct as its only accepting an object containing UserData but not more than 1 entry. Although this is typeScript it might be worth pointing out this is for state using RTK export type UserData = { _id?: string name: string quantity: string | number paid: string | number date: string } export interface TransactionSlice { data: UserData[] } const initialState: TransactionSlice = { data: [], } export const transactionSlice = createSlice({ name: 'transactionData', initialState, reducers: { addData: (state, action: PayloadAction<[UserData]>) => { state.data = action.payload }, pushData: (state, action: PayloadAction<UserData>) => { state.data.push(action.payload) }, }, })
[ "As you said it doesn't accepts more than one object this line itself answers your question.\nWhen defining array with types being inside it creates a type of array with specific length (Tuple).\nThere are some cases where you might require only specific numbers of elements inside array fir instance when dealing with quadrants and stuff the matrix will only have 2 or at max 3 so here\ndefining the type as number[] doesn't makes sense.\nHere [number, number, number] will be perfect.\n" ]
[ 2 ]
[]
[]
[ "redux_toolkit", "typescript" ]
stackoverflow_0074664962_redux_toolkit_typescript.txt
Q: Configure Gmail API on Ubuntu VPS How to configure the Gmail API on an AWS Ubuntu VPS? I am able to make it work properly on my Linux machine, but after I run the code on my VPS, it asks me to authenticate by visiting the URL. I copied the URL and tried authenticating myself. While authenticating myself in the browser, I am redirected to localhost:<random-port>?state=... and cannot authenticate myself as it cannot connect to localhost. How can I configure this properly on my Ubuntu VPS? I have used the default code provided by Google developers: https://developers.google.com/gmail/api/quickstart/python A: I have encountered the same problem. When you try to authenticate using your browser, it will try to redirect you to some localhost URL. Just copy that localhost URL, log in to your VPS, open the terminal, type python3 (or python), and finally type these commands: import requests url = "http://localhost:xxxxx-url-you-got-in-your-browser" resp = requests.get(url) exit() After these commands, it should generate a Gmail API token.
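An alternative workaround, not mentioned in the answer above: complete the OAuth consent flow once on a desktop machine with a browser, then copy the resulting token file to the VPS. A sketch using the same libraries as the quickstart the question links to (file names are assumptions):

from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

# Run this on your local machine, where the browser redirect to localhost works.
flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
creds = flow.run_local_server(port=0)
with open("token.json", "w") as f:
    f.write(creds.to_json())
# Then copy token.json to the VPS, e.g.: scp token.json ubuntu@your-vps:/path/to/app/

The quickstart code reuses token.json on later runs, so no browser is needed on the VPS.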
Configure Gmail API on Ubuntu VPS
How to configure the Gmail API on an AWS Ubuntu VPS? I am able to make it work properly on my Linux machine, but after I run the code on my VPS, it asks me to authenticate by visiting the URL. I copied the URL and tried authenticating myself. While authenticating myself in the browser, I am redirected to localhost:<random-port>?state=... and cannot authenticate myself as it cannot connect to localhost. How can I configure this properly on my Ubuntu VPS? I have used the default code provided by Google developers: https://developers.google.com/gmail/api/quickstart/python
[ "I have encountered the same problem.\nWhen you will try to authenticate using your browser, it will try to redirect you to some localhost URL. Just copy that localhost URL, log in to your VPS, open the terminal, type python3 (or python), and finally type these commands:\nimport requests\nurl = \"http://localhost:xxxxx-url-you-got-in-your-browswer\"\nresp = requests.get(url)\nexit()\n\nAfter these commands, it should generate a Gmail API token.\n" ]
[ 0 ]
[]
[]
[ "api", "gmail", "python", "ubuntu", "vps" ]
stackoverflow_0072126436_api_gmail_python_ubuntu_vps.txt
Q: Two apps installed with same package name I am having a project in android studio in which I have only 1 package and while generating signed apk or debugging the app it installs 2 apk files, both are opening the same apk. I am not sure where the problem is. Can I get some suggestions? <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" package="com.example.myapplication"> <uses-permission android:name="android.permission.INSTALL_LOCATION_PROVIDER" tools:ignore="ProtectedPermissions" /> <application android:name="androidx.multidex.MultiDexApplication" android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:hardwareAccelerated="true" android:screenOrientation="portrait" android:supportsRtl="true" android:testOnly="false" android:theme="@style/AppTheme" android:usesCleartextTraffic="true" tools:ignore="GoogleAppIndexingWarning,HardcodedDebugMode" tools:targetApi="m"> <activity android:name="com.example.myapplication.Activity.ImageViewer" /> <activity android:name="com.example.myapplication.Fragments.LeadManagement.ProcessingLeadUpdate" android:theme="@style/AppTheme.NoActionBar" > </activity> <activity android:name="com.example.myapplication.Activity.Splash" android:configChanges="orientation|screenSize|keyboardHidden" android:noHistory="false" android:theme="@style/AppTheme.NoActionBar" android:usesCleartextTraffic="true"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name="com.example.myapplication.Activity.MainActivity" android:configChanges="orientation|screenSize|keyboardHidden" android:label="@string/app_name" android:noHistory="false" android:theme="@style/AppTheme.NoActionBar" android:usesCleartextTraffic="true" android:windowSoftInputMode="adjustResize" /> <activity android:name="com.schibstedspain.leku.LocationPickerActivity" android:label="@string/leku_title_activity_location_picker" android:theme="@style/Theme.AppCompat.Light.NoActionBar" android:windowSoftInputMode="adjustPan"> <meta-data android:name="android.app.searchable" android:resource="@xml/leku_searchable" /> </activity> </application> I think the problem is in the manifest file, so I am posting the manifest file here. 
the build.gradle file in app level apply plugin: 'com.android.application' android { compileSdkVersion 28 configurations { all { exclude module: 'httpclient' exclude module: 'json' exclude group: 'org.apache.httpcomponents' } } defaultConfig { applicationId "com.example.myapplication" minSdkVersion 17 targetSdkVersion 28 multiDexEnabled true versionCode 1 versionName "1.0" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" } buildTypes { release { proguardFiles getDefaultProguardFile('proguard-android- optimize.txt'), 'proguard-rules.pro' debuggable false } } compileOptions { sourceCompatibility = '1.8' targetCompatibility = '1.8' } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'androidx.multidex:multidex:2.0.1' implementation 'androidx.cardview:cardview:1.0.0' implementation 'androidx.recyclerview:recyclerview:1.1.0-alpha05' implementation 'androidx.appcompat:appcompat:1.1.0-alpha05' implementation 'androidx.constraintlayout:constraintlayout:2.0.0-beta1' implementation 'androidx.legacy:legacy-preference-v14:1.0.0' implementation 'com.google.android.material:material:1.1.0-alpha06' implementation 'com.fasterxml.jackson.core:jackson-core:2.9.8' implementation 'com.fasterxml.jackson.core:jackson-annotations:2.9.8' implementation 'com.fasterxml.jackson.core:jackson-databind:2.9.8' implementation 'com.squareup.okhttp3:okhttp:3.14.0' implementation 'androidx.legacy:legacy-support-v4:1.0.0' testImplementation 'junit:junit:4.12' implementation 'com.goebl:david-webb:1.3.0' implementation 'com.squareup.retrofit2:retrofit:2.5.0' implementation 'com.squareup.retrofit2:converter-gson:2.5.0' implementation 'de.hdodenhof:circleimageview:3.0.0' implementation 'com.google.code.gson:gson:2.8.5' implementation 'com.android.volley:volley:1.1.1' implementation 'com.airbnb.android:lottie:3.0.0' implementation 'com.github.aliumujib:Nibo:2.0' implementation 'com.google.guava:guava:27.1-android' implementation 'androidx.fragment:fragment:1.0.0' implementation('com.schibstedspain.android:leku:6.1.1') { exclude group: 'com.google.android.gms' exclude group: 'androidx.appcompat' } implementation 'com.google.android.gms:play-services-location:16.0.0' implementation 'com.google.android.gms:play-services-places:16.1.0' implementation 'com.google.android.gms:play-services-maps:16.1.0' implementation 'com.google.maps.android:android-maps-utils:0.5' implementation 'com.github.prabhat1707:EasyWayLocation:1.0' implementation 'com.ryanjeffreybrooks:indefinitepagerindicator:1.0.10' implementation 'androidx.viewpager:viewpager:1.0.0' implementation 'com.github.sharish:ShimmerRecyclerView:v1.3' implementation 'com.facebook.shimmer:shimmer:0.4.0' implementation 'com.github.developer-shivam:Crescento:1.2.1' implementation 'io.reactivex.rxjava2:rxandroid:2.1.0' implementation 'com.camerakit:camerakit:1.0.0-beta3.11' implementation 'com.camerakit:jpegkit:0.1.0' implementation 'org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.3.31' implementation 'com.github.PauloLinhares09:RetroPicker:1.2.3-Beta1' implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.0.0' implementation 'com.jsibbold:zoomage:1.2.0' implementation 'id.zelory:compressor:2.1.0' compile 'com.github.hkk595:Resizer:v1.5' } A: It feels like your application is picking up activity from some library. Open the AndroidManifest file. click on Merged Manifest tab (in the bottom of the IDE). Search for Activity having LAUNCHER intent-filter. If there is any LAUNCHER Activity other than yours, that is the culprit. 
You can modify or update the library's AndroidManifest file and remove the activity from the launcher. If that library is not your own and you can't update its AndroidManifest file, you should try some other library. There you go: now create your signed or debug build and you will have only one launcher icon. A: If you are using a customized SplashScreen, check the AndroidManifest.xml file located at android\app\src\main\AndroidManifest.xml and look for the intent-filter tag like this: <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> Make sure you have only one occurrence of the previous snippet, preferably in the .SplashActivity declaration, like this: <activity android:name=".SplashActivity" android:theme="@style/SplashScreenTheme" android:label="@string/app_name" android:exported="true"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> and your .MainActivity doesn't need an intent-filter inside and will look like: <activity android:name=".MainActivity" android:label="@string/app_name" android:configChanges="keyboard|keyboardHidden|orientation|screenLayout|screenSize|smallestScreenSize|uiMode" android:launchMode="singleTask" android:windowSoftInputMode="adjustResize" android:exported="true"> <!-- ! DON'T ENABLE the intent-filter tag here at the same time as .SplashActivity to avoid duplicated app installation --> <!-- <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> --> </activity>
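If the second launcher entry comes from a third-party library whose manifest you cannot edit, Android's manifest merger can strip the activity from your build instead; a sketch, where the activity name is hypothetical and xmlns:tools must be declared on your <manifest> element:

<activity
    android:name="com.thirdparty.library.TheirLauncherActivity"
    tools:node="remove" />

After a rebuild, check the Merged Manifest tab again to confirm only your launcher activity keeps the LAUNCHER intent-filter.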
Two apps installed with same package name
I am having a project in android studio in which I have only 1 package and while generating signed apk or debugging the app it installs 2 apk files, both are opening the same apk. I am not sure where the problem is. Can I get some suggestions? <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" package="com.example.myapplication"> <uses-permission android:name="android.permission.INSTALL_LOCATION_PROVIDER" tools:ignore="ProtectedPermissions" /> <application android:name="androidx.multidex.MultiDexApplication" android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:hardwareAccelerated="true" android:screenOrientation="portrait" android:supportsRtl="true" android:testOnly="false" android:theme="@style/AppTheme" android:usesCleartextTraffic="true" tools:ignore="GoogleAppIndexingWarning,HardcodedDebugMode" tools:targetApi="m"> <activity android:name="com.example.myapplication.Activity.ImageViewer" /> <activity android:name="com.example.myapplication.Fragments.LeadManagement.ProcessingLeadUpdate" android:theme="@style/AppTheme.NoActionBar" > </activity> <activity android:name="com.example.myapplication.Activity.Splash" android:configChanges="orientation|screenSize|keyboardHidden" android:noHistory="false" android:theme="@style/AppTheme.NoActionBar" android:usesCleartextTraffic="true"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name="com.example.myapplication.Activity.MainActivity" android:configChanges="orientation|screenSize|keyboardHidden" android:label="@string/app_name" android:noHistory="false" android:theme="@style/AppTheme.NoActionBar" android:usesCleartextTraffic="true" android:windowSoftInputMode="adjustResize" /> <activity android:name="com.schibstedspain.leku.LocationPickerActivity" android:label="@string/leku_title_activity_location_picker" android:theme="@style/Theme.AppCompat.Light.NoActionBar" android:windowSoftInputMode="adjustPan"> <meta-data android:name="android.app.searchable" android:resource="@xml/leku_searchable" /> </activity> </application> I think the problem is in the manifest file, so I am posting the manifest file here. 
the build.gradle file in app level apply plugin: 'com.android.application' android { compileSdkVersion 28 configurations { all { exclude module: 'httpclient' exclude module: 'json' exclude group: 'org.apache.httpcomponents' } } defaultConfig { applicationId "com.example.myapplication" minSdkVersion 17 targetSdkVersion 28 multiDexEnabled true versionCode 1 versionName "1.0" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" } buildTypes { release { proguardFiles getDefaultProguardFile('proguard-android- optimize.txt'), 'proguard-rules.pro' debuggable false } } compileOptions { sourceCompatibility = '1.8' targetCompatibility = '1.8' } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'androidx.multidex:multidex:2.0.1' implementation 'androidx.cardview:cardview:1.0.0' implementation 'androidx.recyclerview:recyclerview:1.1.0-alpha05' implementation 'androidx.appcompat:appcompat:1.1.0-alpha05' implementation 'androidx.constraintlayout:constraintlayout:2.0.0-beta1' implementation 'androidx.legacy:legacy-preference-v14:1.0.0' implementation 'com.google.android.material:material:1.1.0-alpha06' implementation 'com.fasterxml.jackson.core:jackson-core:2.9.8' implementation 'com.fasterxml.jackson.core:jackson-annotations:2.9.8' implementation 'com.fasterxml.jackson.core:jackson-databind:2.9.8' implementation 'com.squareup.okhttp3:okhttp:3.14.0' implementation 'androidx.legacy:legacy-support-v4:1.0.0' testImplementation 'junit:junit:4.12' implementation 'com.goebl:david-webb:1.3.0' implementation 'com.squareup.retrofit2:retrofit:2.5.0' implementation 'com.squareup.retrofit2:converter-gson:2.5.0' implementation 'de.hdodenhof:circleimageview:3.0.0' implementation 'com.google.code.gson:gson:2.8.5' implementation 'com.android.volley:volley:1.1.1' implementation 'com.airbnb.android:lottie:3.0.0' implementation 'com.github.aliumujib:Nibo:2.0' implementation 'com.google.guava:guava:27.1-android' implementation 'androidx.fragment:fragment:1.0.0' implementation('com.schibstedspain.android:leku:6.1.1') { exclude group: 'com.google.android.gms' exclude group: 'androidx.appcompat' } implementation 'com.google.android.gms:play-services-location:16.0.0' implementation 'com.google.android.gms:play-services-places:16.1.0' implementation 'com.google.android.gms:play-services-maps:16.1.0' implementation 'com.google.maps.android:android-maps-utils:0.5' implementation 'com.github.prabhat1707:EasyWayLocation:1.0' implementation 'com.ryanjeffreybrooks:indefinitepagerindicator:1.0.10' implementation 'androidx.viewpager:viewpager:1.0.0' implementation 'com.github.sharish:ShimmerRecyclerView:v1.3' implementation 'com.facebook.shimmer:shimmer:0.4.0' implementation 'com.github.developer-shivam:Crescento:1.2.1' implementation 'io.reactivex.rxjava2:rxandroid:2.1.0' implementation 'com.camerakit:camerakit:1.0.0-beta3.11' implementation 'com.camerakit:jpegkit:0.1.0' implementation 'org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.3.31' implementation 'com.github.PauloLinhares09:RetroPicker:1.2.3-Beta1' implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.0.0' implementation 'com.jsibbold:zoomage:1.2.0' implementation 'id.zelory:compressor:2.1.0' compile 'com.github.hkk595:Resizer:v1.5' }
[ "It feels like your application is picking up activity from some library. \n\nOpen the AndroidManifest file. click on Merged Manifest tab (in the bottom of the IDE).\nSearch for Activity having LAUNCHER intent-filter.\nIf there is any LAUNCHER Activity other than yours, that is the culprit.\nYou can modify or update your library AndroidManifest file and remove the activity from the launcher. If that library is not your own and you can't update the AndroidManifest file, you should try some other library. \n\nHere it go, now create your signed build or debug, you will have only one launcher icon. \n", "If you are using customized SplashScreen, check into the AndroidManifest.xml file located on android\\app\\src\\main\\AndroidManifest.xml and look for the intent-filter tag like this:\n<intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n</intent-filter>\n\nCertifies that you have only one statement of the preview snippet of code, preferably in the .SplashActivity statement, like this:\n<activity\n android:name=\".SplashActivity\"\n android:theme=\"@style/SplashScreenTheme\"\n android:label=\"@string/app_name\"\n android:exported=\"true\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n</activity>\n\nand your .MainActivity don't need to have intent-filter inside and will be like:\n<activity\n android:name=\".MainActivity\"\n android:label=\"@string/app_name\"\nandroid:configChanges=\"keyboard|keyboardHidden|orientation|screenLayout|screenSize|smallestScreenSize|uiMode\"\n android:launchMode=\"singleTask\"\n android:windowSoftInputMode=\"adjustResize\"\n android:exported=\"true\">\n <!-- ! DON'T ENNABLE the intent-filter tag here in the same time with .SplashActivity\n to avoid duplicated app installation -->\n <!-- <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter> -->\n</activity>\n\n" ]
[ 7, 0 ]
[]
[]
[ "android" ]
stackoverflow_0056557693_android.txt
Q: How to use filestream for copying files in c# I want to copy a file from one folder to another folder using FileStream. How can this be achieved? When I try to use File.Copy I get an error that the file is being used by another process; to avoid this I want to use a FileStream in C#. Can someone provide a sample for copying a file from one folder to another? A: For copying I used the code below (the source stream is wrapped in a using block so it gets disposed, and it is opened with FileAccess.Read plus FileShare.ReadWrite so it can be read while another process has the file open): public static void Copy(string inputFilePath, string outputFilePath) { int bufferSize = 1024 * 1024; using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite)) using (FileStream fs = new FileStream(inputFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite)) { fileStream.SetLength(fs.Length); int bytesRead = -1; byte[] bytes = new byte[bufferSize]; while ((bytesRead = fs.Read(bytes, 0, bufferSize)) > 0) { fileStream.Write(bytes, 0, bytesRead); } } } A: You can use Stream.CopyTo method to copy the file like below: public static string CopyFileStream(string outputDirectory, string inputFilePath) { FileInfo inputFile = new FileInfo(inputFilePath); using (FileStream originalFileStream = inputFile.OpenRead()) { var fileName = Path.GetFileName(inputFile.FullName); var outputFileName = Path.Combine(outputDirectory, fileName); using (FileStream outputFileStream = File.Create(outputFileName)) { originalFileStream.CopyTo(outputFileStream); } return outputFileName; } }
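A closing note on the "being used by another process" error itself (an assumption about the scenario, since the locking process isn't shown): the FileShare flag passed when opening the source decides whether your copy can coexist with the other open handle. A minimal sketch, where inputPath and outputPath are placeholders:

// FileShare.ReadWrite on the source tolerates another process holding the
// file open for writing; Stream.CopyTo then does the buffered loop for us.
using (var src = new FileStream(inputPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (var dst = new FileStream(outputPath, FileMode.Create, FileAccess.Write, FileShare.None))
{
    src.CopyTo(dst);
}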
How to use filestream for copying files in c#
I want to copy a file from one folder to another folder using FileStream. How can this be achieved? When I try to use File.Copy I get an error that the file is being used by another process; to avoid this I want to use a FileStream in C#. Can someone provide a sample for copying a file from one folder to another?
[ "for copying i used below code :-\n public static void Copy(string inputFilePath, string outputFilePath)\n {\n int bufferSize = 1024 * 1024;\n\n using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write,FileShare.ReadWrite))\n //using (FileStream fs = File.Open(<file-path>, FileMode.Open, FileAccess.Read, FileShare.Read))\n {\n FileStream fs = new FileStream(inputFilePath, FileMode.Open, FileAccess.ReadWrite);\n fileStream.SetLength(fs.Length);\n int bytesRead = -1;\n byte[] bytes = new byte[bufferSize];\n\n while ((bytesRead = fs.Read(bytes, 0, bufferSize)) > 0)\n {\n fileStream.Write(bytes, 0, bytesRead);\n }\n }\n }\n\n", "You can use Stream.CopyTo method to copy the file like below:\npublic static string CopyFileStream(string outputDirectory, string inputFilePath)\n{\n FileInfo inputFile = new FileInfo(inputFilePath);\n using (FileStream originalFileStream = inputFile.OpenRead())\n {\n var fileName = Path.GetFileName(inputFile.FullName);\n var outputFileName = Path.Combine(outputDirectory, fileName);\n using (FileStream outputFileStream = File.Create(outputFileName))\n {\n originalFileStream.CopyTo(outputFileStream);\n }\n return outputFileName;\n }\n}\n\n" ]
[ 13, 0 ]
[ " string fileName = \"Mytest.txt\";\n string sourcePath = @\"C:\\MyTestPath\";\n string targetPath = @\"C:\\MyTestTarget\";\n string sourceFile = System.IO.Path.Combine(sourcePath, fileName);\n string destFile = System.IO.Path.Combine(targetPath, fileName);\n\n {\n System.IO.Directory.CreateDirectory(targetPath);\n }\n\n // To copy a file to another location and \n // overwrite the destination file if it already exists.\n System.IO.File.Copy(sourceFile, destFile, true);\n\n" ]
[ -1 ]
[ ".net", "c#", "filestream" ]
stackoverflow_0039591579_.net_c#_filestream.txt
Q: Telegram-Python-Bot How to make the bot receive message from user? When a user sends the /help command in a GROUP, the bot should reply "please send your query" and wait for the user to reply; when the user replies, I want the bot to store that reply in a variable, and I am really confused about how to do that. Also, the bot should only take the reply of the user who sent the /help command. Can anyone please help me? from dotenv import load_dotenv from os import environ, name from telegram.ext import * from telegram import * from requests import * load_dotenv(f'config.env') BOT_TOKEN = environ.get('BOT_TOKEN') updater = Updater(token=BOT_TOKEN, use_context=True) dispatcher = updater.dispatcher def help(update, context): chat_id = update.effective_chat.id message = "send the message" context.bot.send_message(chat_id=chat_id, text=message) dispatcher.add_handler(CommandHandler("help", help)) updater.start_polling() A: Configuring the Telegram Bot Go to https://telegram.me/BotFather. To create a new bot, type /newbot in the message box and press enter. Enter the name and the username of your new bot. You have received a message from BotFather containing the token, which you can use to connect the Telegram Bot to Make. To add your bot to your Telegram application, click the link in the message from BotFather or enter it manually in your browser. The link is t.me/yourBotName. Adding Telegram Bot to your Scenario Follow Step 1 in the Creating a scenario article (choose the Telegram Bot module instead of the Twitter and Facebook module). After the module is added to your scenario you can see the Scenario editor. Define what function you need your module to have. Here you can choose between three types of modules: Triggers, Actions, and Searches.
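Neither answer above addresses the ask-and-wait flow itself, so as an addition: in python-telegram-bot v13 (the Updater-style API the question's code uses) this is what ConversationHandler is for. It tracks state per chat and per user by default, so in a group only the user who sent /help advances the conversation. A sketch meant to slot into the question's setup:

from telegram.ext import ConversationHandler, CommandHandler, MessageHandler, Filters

ASKING = 0

def help_command(update, context):
    update.message.reply_text("please send your query")
    return ASKING  # wait in this state for the same user's next message

def receive_query(update, context):
    query = update.message.text            # the reply, stored in a variable
    context.user_data["query"] = query     # persists per user across handlers
    update.message.reply_text("got your query!")
    return ConversationHandler.END

conv = ConversationHandler(
    entry_points=[CommandHandler("help", help_command)],
    states={ASKING: [MessageHandler(Filters.text & ~Filters.command, receive_query)]},
    fallbacks=[],
)
dispatcher.add_handler(conv)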
Telegram-Python-Bot How to make the bot receive message from user?
When a user sends the /help command in a GROUP, the bot should reply "please send your query" and wait for the user to reply; when the user replies, I want the bot to store that reply in a variable, and I am really confused about how to do that. Also, the bot should only take the reply of the user who sent the /help command. Can anyone please help me? from dotenv import load_dotenv from os import environ, name from telegram.ext import * from telegram import * from requests import * load_dotenv(f'config.env') BOT_TOKEN = environ.get('BOT_TOKEN') updater = Updater(token=BOT_TOKEN, use_context=True) dispatcher = updater.dispatcher def help(update, context): chat_id = update.effective_chat.id message = "send the message" context.bot.send_message(chat_id=chat_id, text=message) dispatcher.add_handler(CommandHandler("help", help)) updater.start_polling()
[ "Configuring the Telegram Bot\n\nGo to https://telegram.me/BotFather.\nTo create a new bot type /newbot to the message box and press enter.\nEnter the name of the user name of your new bot.\nYou have received the message from BotFather containing the token, which you can use to connect Telegram Bot to Make.\n\nTo add your bot to your Telegram application, click the link in the message from BotFather or enter it manually to your browser. The link is t.me/yourBotName.\nAdding Telegram Bot to your Scenario\nFollow Step 1 in the Creating a scenario article (choose the Telegram Bot module instead of Twitter and Facebook module).\nAfter the module is added to your scenario you can then see the Scenario editor.\nDefine what function you need your module to have. Here you can choose between three types of modules – Triggers, Actions, and Searches.\n" ]
[ 0 ]
[]
[]
[ "python", "python_telegram_bot", "telegram_bot" ]
stackoverflow_0074664890_python_python_telegram_bot_telegram_bot.txt
Q: Write Persian in slug and use it in address bar in django I use django and in my models I want to write Persian in slugfield (by using utf-8 or something else) and use the slug in address of page I write this class for model: class Category(models.Model): name = models.CharField(max_length=20, unique=True) slug = models.SlugField(max_length=20, unique=True) description = models.CharField(max_length=500) is_active = models.BooleanField(default=False) meta_description = models.TextField(max_length=160, null=True, blank=True) meta_keywords = models.TextField(max_length=255, null=True, blank=True) user = models.ForeignKey(settings.AUTH_USER_MODEL) def save(self, *args, **kwargs): self.slug = slugify(self.name) super(Category, self).save(*args, **kwargs) def __str__(self): return self.name def category_posts(self): return Post.objects.filter(category=self).count() But there is nothing in slug column after save and I don't know what to write in url to show Persian. Can you tell me what should I do? I use django 1.9 and python 3.6. A: The docstring for the slugify function is: Convert to ASCII if 'allow_unicode' is False. Convert spaces to hyphens. Remove characters that aren't alphanumerics, underscores, or hyphens. Convert to lowercase. Also strip leading and trailing whitespace. So you need to set the allow_unicode flag to True to preserve the Persian text. >>> text = 'Ψ³Ω„Ψ§Ω… ΨΉΨ²ΫŒΨ²Ω…! ΨΉΨ²ΫŒΨ²Ω… Ψ³Ω„Ψ§Ω…!' >>> slugify(text) '' >>> slugify(text, allow_unicode=True) 'Ψ³Ω„Ψ§Ω…-ΨΉΨ²ΫŒΨ²Ω…-ΨΉΨ²ΫŒΨ²Ω…-Ψ³Ω„Ψ§Ω…' >>> A: this is better !! ‍‍‍slug = models.SlugField(max_length=20, unique=True, allow_unicode=True) A: Here`s an example which you can use for this case: First install django_extensions with pip, if it is not installed. from django_extensions.db.fields import AutoSlugField from django.utils.text import slugify In model.py before your class add this function: def my_slugify_function(content): return slugify(content, allow_unicode=True) In your class add this field: slug = AutoSlugField(populate_from=['name'], unique=True, allow_unicode=True, slugify_function=my_slugify_function) In url must use this format: re_path('person_list/(?P<slug>[-\w]+)/', views.detail, name='detail') A: I used snakecharmerb and Ali Noori answers. But those did not solve my problem. And get this error: Reverse for 'system-detail' with keyword arguments '{'slug': 'هفΨͺ'}' not found. 1 pattern(s) tried: ['system/(?P<slug>[-a-zA-Z0-9_]+)/\\Z'] In urls.py i Change slug to str: path('<str:slug>/', SystemDetailView.as_view(), name='system-detail'),
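To tie the answers above together, a short sketch of the URL and view side (the view name is an assumption): Django's <slug:...> converter only matches ASCII [-a-zA-Z0-9_]+, which is why a Persian slug needs <str:slug> or a unicode-aware re_path.

# urls.py
from django.urls import path
from . import views

urlpatterns = [
    path('category/<str:slug>/', views.category_detail, name='category-detail'),
]

# views.py
from django.shortcuts import get_object_or_404, render

def category_detail(request, slug):
    category = get_object_or_404(Category, slug=slug)
    return render(request, 'category_detail.html', {'category': category})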
Write Persian in slug and use it in address bar in django
I use Django, and in my models I want to write Persian in a SlugField (using UTF-8 or something else) and use the slug in the address of the page. I wrote this class for the model: class Category(models.Model): name = models.CharField(max_length=20, unique=True) slug = models.SlugField(max_length=20, unique=True) description = models.CharField(max_length=500) is_active = models.BooleanField(default=False) meta_description = models.TextField(max_length=160, null=True, blank=True) meta_keywords = models.TextField(max_length=255, null=True, blank=True) user = models.ForeignKey(settings.AUTH_USER_MODEL) def save(self, *args, **kwargs): self.slug = slugify(self.name) super(Category, self).save(*args, **kwargs) def __str__(self): return self.name def category_posts(self): return Post.objects.filter(category=self).count() But there is nothing in the slug column after saving, and I don't know what to write in the URL to show Persian. Can you tell me what I should do? I use Django 1.9 and Python 3.6.
[ "The docstring for the slugify function is:\n\nConvert to ASCII if 'allow_unicode' is False. Convert spaces to hyphens.\n Remove characters that aren't alphanumerics, underscores, or hyphens.\n Convert to lowercase. Also strip leading and trailing whitespace.\n\nSo you need to set the allow_unicode flag to True to preserve the Persian text.\n>>> text = 'Ψ³Ω„Ψ§Ω… ΨΉΨ²ΫŒΨ²Ω…! ΨΉΨ²ΫŒΨ²Ω… Ψ³Ω„Ψ§Ω…!'\n>>> slugify(text)\n''\n>>> slugify(text, allow_unicode=True)\n'Ψ³Ω„Ψ§Ω…-ΨΉΨ²ΫŒΨ²Ω…-ΨΉΨ²ΫŒΨ²Ω…-Ψ³Ω„Ψ§Ω…'\n>>> \n\n", "this is better !!\n‍‍‍slug = models.SlugField(max_length=20, unique=True, allow_unicode=True)\n\n", "Here`s an example which you can use for this case:\nFirst install django_extensions with pip, if it is not installed.\nfrom django_extensions.db.fields import AutoSlugField\nfrom django.utils.text import slugify\n\nIn model.py before your class add this function:\ndef my_slugify_function(content):\n return slugify(content, allow_unicode=True)\n\nIn your class add this field:\nslug = AutoSlugField(populate_from=['name'], unique=True, allow_unicode=True, slugify_function=my_slugify_function)\n\nIn url must use this format:\nre_path('person_list/(?P<slug>[-\\w]+)/', views.detail, name='detail')\n\n", "I used snakecharmerb and Ali Noori answers. But those did not solve my problem. And get this error:\nReverse for 'system-detail' with keyword arguments '{'slug': 'هفΨͺ'}' not found. 1 pattern(s) tried: ['system/(?P<slug>[-a-zA-Z0-9_]+)/\\\\Z']\n\nIn urls.py i Change slug to str:\npath('<str:slug>/', SystemDetailView.as_view(), name='system-detail'),\n\n" ]
[ 8, 2, 1, 0 ]
[]
[]
[ "django", "persian", "python" ]
stackoverflow_0047938594_django_persian_python.txt
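
Putting the answers above together, a minimal sketch of the fixed model and URL pattern. SlugField's allow_unicode flag exists from Django 1.9 onward, so this applies to the asker's version too (there you would keep the explicit super(Category, self).save(...) spelling):

from django.db import models
from django.urls import re_path
from django.utils.text import slugify

class Category(models.Model):
    name = models.CharField(max_length=20, unique=True)
    # allow_unicode=True keeps Persian characters through field validation as well
    slug = models.SlugField(max_length=20, unique=True, allow_unicode=True)

    def save(self, *args, **kwargs):
        # Without allow_unicode=True, slugify() returns '' for a purely Persian name
        self.slug = slugify(self.name, allow_unicode=True)
        super().save(*args, **kwargs)

# urls.py: in Python 3, [-\w]+ matches Unicode word characters, so Persian slugs resolve
# urlpatterns = [re_path(r'^category/(?P<slug>[-\w]+)/$', views.detail, name='detail')]
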
Q: c: extern variable is not retaining value I am using an extern bool variable. I have initialized it (to true) and want to use this value somewhere else in program. But the problem is, when went into another module, this true value is becoming false and when returned from that module (where the value was last seen true), then it turns to true. I am not understanding why extern variable is behaving like this. Does someone know about this? I want extern variable to retain its value like it should be. A: Though your question is a bit broad, a short example may help cement the way extern works for you. Continuing from my comment, the original declaration of a variable you want to reference in another source file can be declared in any source. It should be global in scope. Take for example a simple file we will call source1.c where we will declare two normal global variables bool myvar (representing your bool variable) and an int access; to use as as simple counter to show how the value of myvar is handled: #include <stdio.h> #include <stdbool.h> bool myvar; /* original declarations of myvar and access */ int access; /* counter to show which output takes place */ void init() { myvar = true; /* initialization of myvar true - can be anywhere */ } void show_myvar() { /* output of myvar and access in file where they are decalred */ printf ("%d myvar = %s\n", ++access, myvar ? "true" : "false"); } Now in a separate source file, you want to be able to access or change the values in myvar or access, so you declare the variables as extern in any other source file that needs them. This tells the linker that the symbol was declared somewhere else and it will need to resolve the symbol before creating the final executable. A short example that outputs the value of myvar from source1.c before it is initialized true, then initializes the value true and outputs from source1.c again showing the initialization took place. Next the values for both of the exten variables are access directly in the source2.c and modified. As a final confirmation, functions are called showing the direct change of the extern variables in source2 are reflected in source1 as well. For example: #include <stdio.h> #include <stdbool.h> extern bool myvar; /* extern declarations of myvar and access */ extern int access; void init(); /* function prototypes for functions from */ void show_myvar(); /* other source -- can be in separate header */ int main (void) { show_myvar(); /* show myvar before explicit initialization */ init(); /* explicitly initialize myvar */ show_myvar(); /* show initialization succeeded */ /* ouput access and myvar directly in this source */ printf ("%d myvar (main) = %s\n", ++access, myvar ? "true" : "false"); myvar = false; /* change the value in this source */ show_myvar(); /* show value of myvar updated in other source */ } Example Use/Output After compiling source1.c and source2.c together into an executable, the output would be: $ ./bin/externexample 1 myvar = false 2 myvar = true 3 myvar (main) = true 4 myvar = false Look things over and let me know if you have questions. This was just one short example that may help you understand the use of extern. It's not meant to be exhaustive in any way, but simply to cover the general use of extern between multiple source files. There are other ways of defining constant values you want to reference in multiple sources. A common way is to #define a constant value in a header and then include the header in all sources that need it. 
However, depending on how you use the variable, a #define cannot have its address taken (it is a simple text-replacement made by the compiler). So if you need the address of the variable, that won't do.
c: extern variable is not retaining value
I am using an extern bool variable. I have initialized it (to true) and want to use this value somewhere else in the program. The problem is that when execution goes into another module, this true value becomes false, and when it returns from that module (where the value was last seen true), it turns back to true. I do not understand why the extern variable behaves like this. Does someone know about this? I want the extern variable to retain its value like it should.
[ "Though your question is a bit broad, a short example may help cement the way extern works for you. Continuing from my comment, the original declaration of a variable you want to reference in another source file can be declared in any source. It should be global in scope.\nTake for example a simple file we will call source1.c where we will declare two normal global variables bool myvar (representing your bool variable) and an int access; to use as as simple counter to show how the value of myvar is handled:\n#include <stdio.h>\n#include <stdbool.h>\n\nbool myvar; /* original declarations of myvar and access */\nint access; /* counter to show which output takes place */\n\nvoid init()\n{\n myvar = true; /* initialization of myvar true - can be anywhere */\n}\n\nvoid show_myvar()\n{\n /* output of myvar and access in file where they are decalred */\n printf (\"%d myvar = %s\\n\", ++access, myvar ? \"true\" : \"false\");\n}\n\nNow in a separate source file, you want to be able to access or change the values in myvar or access, so you declare the variables as extern in any other source file that needs them. This tells the linker that the symbol was declared somewhere else and it will need to resolve the symbol before creating the final executable.\nA short example that outputs the value of myvar from source1.c before it is initialized true, then initializes the value true and outputs from source1.c again showing the initialization took place. Next the values for both of the exten variables are access directly in the source2.c and modified. As a final confirmation, functions are called showing the direct change of the extern variables in source2 are reflected in source1 as well.\nFor example:\n#include <stdio.h>\n#include <stdbool.h>\n\nextern bool myvar; /* extern declarations of myvar and access */\nextern int access;\n\nvoid init(); /* function prototypes for functions from */\nvoid show_myvar(); /* other source -- can be in separate header */\n\nint main (void) {\n \n show_myvar(); /* show myvar before explicit initialization */\n init(); /* explicitly initialize myvar */\n show_myvar(); /* show initialization succeeded */\n \n /* ouput access and myvar directly in this source */\n printf (\"%d myvar (main) = %s\\n\", ++access, myvar ? \"true\" : \"false\");\n \n myvar = false; /* change the value in this source */\n show_myvar(); /* show value of myvar updated in other source */\n}\n\nExample Use/Output\nAfter compiling source1.c and source2.c together into an executable, the output would be:\n$ ./bin/externexample\n1 myvar = false\n2 myvar = true\n3 myvar (main) = true\n4 myvar = false\n\nLook things over and let me know if you have questions. This was just one short example that may help you understand the use of extern. It's not meant to be exhaustive in any way, but simply to cover the general use of extern between multiple source files.\nThere are other ways of defining constant values you want to reference in multiple sources. A common way is to #define a constant value in a header and then include the header in all sources that need it. However, depending on how you use the variable, a #define cannot have its address taken (it is a simple text-replacement made by the compiler). So if you need the address of the variable, that won't do.\n" ]
[ 0 ]
[]
[]
[ "c", "extern" ]
stackoverflow_0074664748_c_extern.txt
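
The answer mentions that the extern declarations and prototypes "can be in a separate header"; for completeness, a sketch of what that hypothetical shared.h could look like, so that source2.c simply does #include "shared.h" instead of repeating the declarations:

/* shared.h - one place for the declarations; the definitions stay in source1.c */
#ifndef SHARED_H
#define SHARED_H

#include <stdbool.h>

extern bool myvar;      /* defined exactly once, in source1.c */
extern int access;      /* defined exactly once, in source1.c */

void init(void);
void show_myvar(void);

#endif /* SHARED_H */
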
Q: Scala - Calculate maximum average between two lists I am at the beginning of my Scala journey. I am trying to find and compare the average value of a given dataset - type Map(String, List[Int]), for two random rows selected by the user, in order to return the greater average value between the two. I can calculate the average for each row but I can't find a way to compare the average between the two rows. I have tried in different ways, but I only get error messages. However the program calculates the average of each row DATASET SK1, 9, 7, 2, 0, 7, 3, 7, 9, 1, 2, 8, 1, 9, 6, 5, 3, 2, 2, 7, 2, 8, 5, 4, 5, 1, 6, 5, 2, 4, 1 SK2, 0, 7, 6, 3, 3, 3, 1, 6, 9, 2, 9, 7, 8, 7, 3, 6, 3, 5, 5, 2, 9, 7, 3, 4, 6, 3, 4, 3, 4, 1 SK3, 8, 7, 1, 8, 0, 5, 8, 3, 5, 9, 7, 5, 4, 7, 9, 8, 1, 4, 6, 5, 6, 6, 3, 6, 8, 8, 7, 4, 0, 6 This is how I the program calculates the average of a row //Function to find the average def average(list: List[Int]): Double = list.sum.toDouble / list.size def averageStockLevel1(stock1: String, stock2: String): (String, Int) = { val ave1 = mapdata.get(stock1).map(average(_).toInt).getOrElse(0) val ave2 = mapdata.get(stock2).map(average(_).toInt).getOrElse(0) if (ave1>ave2){ (stock1,ave1) }else{ (stock2,ave2) } } This is how I have called the function in the menu def handleFour(): Boolean = { menuDoubleDataStock(averageStockLevel1) true } //Pull two rows from the dataset def menuShowDoubleDataStock(f: (String) => (String, Int), g:(String) => (String, Int)) = { print("Please insert the Stock > ") val data = f(readLine) println(s"${data._1}: ${data._2}") print("Please insert the Stock > ") val data1 = g(readLine) println(s"${data1._1}: ${data1._2}") } error message Unspecified value parameters: g: String => (String, Int) A: The error message "Unspecified value parameters: g: String => (String, Int)" tells you the following: Your menuShowDoubleDataStock expects two parameters (f and g), but where you call it (from handleFour()), you only pass one value (averageStockLevel1) - that value is accepted as f, so the compiler complains that no value was passed for g. Besides that specific error that the compiler currently complains about, there is also a second problem (which currently seems to be overshadowed by the one above): the type of f is defined as String => (String, Int) (a function that takes one String parameter), but the value that you are passing (averageStockLevel1) has the type (String, String) => (String, Int) (a function that takes two String parameters). I'm not 100% sure if I understood what you are aiming to do, but I think the solution could be to change the signature of menuShowDoubleDataStock so that it only takes one parameter of type (String, String) => (String, Int): // make the user enter two stock-names and pass them into resultCalculator to // get the result (and then print it) def menuShowDoubleDataStock(resultCalculator: (String, String) => (String, Int)) = { print("Please insert the Stock > ") val stockName1 = readLine print("Please insert the Stock > ") val stockName2 = readLine val result = resultCalculator(stockName1, stockName2) println(s"${result._1}: ${result._2}") } Then calling menuDoubleDataStock(averageStockLevel1) should work.
Scala - Calculate maximum average between two lists
I am at the beginning of my Scala journey. I am trying to find and compare the average value of a given dataset - type Map(String, List[Int]), for two random rows selected by the user, in order to return the greater average value between the two. I can calculate the average for each row but I can't find a way to compare the averages between the two rows. I have tried different ways, but I only get error messages; the program does, however, calculate the average of each row. DATASET SK1, 9, 7, 2, 0, 7, 3, 7, 9, 1, 2, 8, 1, 9, 6, 5, 3, 2, 2, 7, 2, 8, 5, 4, 5, 1, 6, 5, 2, 4, 1 SK2, 0, 7, 6, 3, 3, 3, 1, 6, 9, 2, 9, 7, 8, 7, 3, 6, 3, 5, 5, 2, 9, 7, 3, 4, 6, 3, 4, 3, 4, 1 SK3, 8, 7, 1, 8, 0, 5, 8, 3, 5, 9, 7, 5, 4, 7, 9, 8, 1, 4, 6, 5, 6, 6, 3, 6, 8, 8, 7, 4, 0, 6 This is how the program calculates the average of a row: //Function to find the average def average(list: List[Int]): Double = list.sum.toDouble / list.size def averageStockLevel1(stock1: String, stock2: String): (String, Int) = { val ave1 = mapdata.get(stock1).map(average(_).toInt).getOrElse(0) val ave2 = mapdata.get(stock2).map(average(_).toInt).getOrElse(0) if (ave1>ave2){ (stock1,ave1) }else{ (stock2,ave2) } } This is how I have called the function in the menu: def handleFour(): Boolean = { menuDoubleDataStock(averageStockLevel1) true } //Pull two rows from the dataset def menuShowDoubleDataStock(f: (String) => (String, Int), g:(String) => (String, Int)) = { print("Please insert the Stock > ") val data = f(readLine) println(s"${data._1}: ${data._2}") print("Please insert the Stock > ") val data1 = g(readLine) println(s"${data1._1}: ${data1._2}") } Error message: Unspecified value parameters: g: String => (String, Int)
[ "The error message \"Unspecified value parameters: g: String => (String, Int)\" tells you the following:\nYour menuShowDoubleDataStock expects two parameters (f and g), but where you call it (from handleFour()), you only pass one value (averageStockLevel1) - that value is accepted as f, so the compiler complains that no value was passed for g.\nBesides that specific error that the compiler currently complains about, there is also a second problem (which currently seems to be overshadowed by the one above): the type of f is defined as String => (String, Int) (a function that takes one String parameter), but the value that you are passing (averageStockLevel1) has the type (String, String) => (String, Int) (a function that takes two String parameters).\nI'm not 100% sure if I understood what you are aiming to do, but I think the solution could be to change the signature of menuShowDoubleDataStock so that it only takes one parameter of type (String, String) => (String, Int):\n// make the user enter two stock-names and pass them into resultCalculator to\n// get the result (and then print it)\ndef menuShowDoubleDataStock(resultCalculator: (String, String) => (String, Int)) = {\n print(\"Please insert the Stock > \")\n val stockName1 = readLine\n print(\"Please insert the Stock > \")\n val stockName2 = readLine\n val result = resultCalculator(stockName1, stockName2)\n println(s\"${result._1}: ${result._2}\")\n}\n\nThen calling menuDoubleDataStock(averageStockLevel1) should work.\n" ]
[ 0 ]
[]
[]
[ "intellij_idea", "scala" ]
stackoverflow_0074648146_intellij_idea_scala.txt
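
As a side note: once the arity mismatch above is fixed, the if/else comparison inside averageStockLevel1 can also be written without branching, by mapping both names to their averages and taking maxBy. A sketch reusing the question's mapdata and average (on a tie, the first of the two wins):

def averageStockLevel1(stock1: String, stock2: String): (String, Int) =
  List(stock1, stock2)
    .map(s => s -> mapdata.get(s).map(average(_).toInt).getOrElse(0))  // (name, avg) pairs
    .maxBy(_._2)                                                       // keep the greater average
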
Q: Immutable JS - Creating an OrderedMap from a List I have a List const results = [94, 88, 121, 17]; and also a Map const posts = { 94: { title: 'Foo bar', content: 'Blah blah blah...' }, 88: { title: 'Bar Foo', content: 'Blah blah blah...' }, 121: { title: 'Bing bang', content: 'Blah blah blah...' }, 17: { title: 'Ning nang', content: 'Blah blah blah...' }, }; The List actually holds the order of the items in the Map as maps/objects can not guarantee order. Whats the most efficient way to create an OrderedMap using both the List and the Map above? A: There's no built-in constructor in Immutable.js that will handle this, so you'll have to do the mapping yourself: const orderedPosts = new OrderedMap(results.map(key => [key, posts[key]])) A: Javascript Map https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map now remembers the insertion order. so you can use the Javascript Map. // you can either create Map from existing object const myMap = new Map(Object.entries(posts)); // or you can insert one by one, myMap.set(94,{ title: 'Foo bar', content: 'Blah blah blah...' }) // NOTE: key in map need not be string only // map can be iterated using forEach myMap.forEach((value, key) => { console.log(`${key} = ${value}`); });
Immutable JS - Creating an OrderedMap from a List
I have a List const results = [94, 88, 121, 17]; and also a Map const posts = { 94: { title: 'Foo bar', content: 'Blah blah blah...' }, 88: { title: 'Bar Foo', content: 'Blah blah blah...' }, 121: { title: 'Bing bang', content: 'Blah blah blah...' }, 17: { title: 'Ning nang', content: 'Blah blah blah...' }, }; The List actually holds the order of the items in the Map, as maps/objects cannot guarantee order. What's the most efficient way to create an OrderedMap using both the List and the Map above?
[ "There's no built-in constructor in Immutable.js that will handle this, so you'll have to do the mapping yourself:\nconst orderedPosts = new OrderedMap(results.map(key => [key, posts[key]]))\n\n", "Javascript Map https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map now remembers the insertion order. so you can use the Javascript Map.\n// you can either create Map from existing object\nconst myMap = new Map(Object.entries(posts));\n\n// or you can insert one by one, \nmyMap.set(94,{ title: 'Foo bar', content: 'Blah blah blah...' })\n\n// NOTE: key in map need not be string only\n\n// map can be iterated using forEach\nmyMap.forEach((value, key) => {\n console.log(`${key} = ${value}`);\n});\n\n" ]
[ 3, 0 ]
[]
[]
[ "functional_programming", "immutable.js", "javascript", "reactjs", "redux" ]
stackoverflow_0043721018_functional_programming_immutable.js_javascript_reactjs_redux.txt
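
A quick sketch confirming that the accepted approach preserves the order of results, using the question's data (recent Immutable.js versions let you call OrderedMap without new):

import { OrderedMap } from 'immutable';

const orderedPosts = OrderedMap(results.map(key => [key, posts[key]]));

// Iteration follows insertion order, i.e. the order of `results`, not numeric key order.
console.log(orderedPosts.keySeq().toArray()); // [94, 88, 121, 17]
orderedPosts.forEach((post, id) => console.log(id, post.title));
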
Q: vlcj: player.mute() has no effect when player is not playing I am seeing the following behavior when toggling mute in vlcj: before first playback of mediaPlayer: mediaPlayer.audio().mute() works fine when playing: mediaPlayer.audio().mute() works fine after first playback of mediaPlayer (when stopped): mediaPlayer.audio().mute() has no effect Problem: I have a mute button in my UI. When pressed, the button icon changes. But when started, the player is not muted, which makes the button show the wrong icon. It seems that you have to wait for the player to start playing before being able to mute it. I could get around this by calling setMute() on the first time-changed event, but it has the effect that even if muted through the UI beforehand, you will hear a fraction of the audio before setMute() takes effect. I have read this post: https://github.com/caprica/vlcj/issues/395 but it's quite old. I was wondering if there are better options now? In native VLC I don't see this issue, but I learned that it is not using libvlc. Is there anything one could do to get around this problem? The vlcj sample application shows the same behavior btw: play some mp3 file, stop the player, toggle mute and start the player again -> the player is not muted. setVolume() seems to have the same issue. A: Unfortunately, this is just how the native media player in LibVLC 3.x works. This does work as expected with LibVLC 4.x, and therefore vlcj-5.x, but VLC 4.x is likely still some way off from a full release, it's only available in unsupported pre-release nightly builds at the moment. There may be some workarounds, but they will not be ideal - e.g. you could manage the mute/volume state yourself and apply it immediately when the media starts playing (listen for a "media player ready" event) - but obviously here you may get a short blip of unwanted (or missing) audio. Instead of listening for "media player ready", you could maybe try applying the mute/volume on receipt of an "elementary stream selected" event with type audio, but I'm not sure this will work or be any better to be honest. It may also be beneficial to listen to the various audio-related media player events, you have "muted" and "volume changed", and more recently "corked". At least then you might be able to keep your UI controls in sync with the actual state of the media player, even though it is not quite what you want. So in short, I'm afraid with LibVLC 3.x there is not a good solution here.
vlcj: player.mute() has no effect when player is not playing
I am seeing the following behavior when toggling mute in vlcj: before first playback of mediaPlayer: mediaPlayer.audio().mute() works fine when playing: mediaPlayer.audio().mute() works fine after first playback of mediaPlayer (when stopped): mediaPlayer.audio().mute() has no effect Problem: I have a mute button in my UI. When pressed, the button icon changes. But when started, the player is not muted, which makes the button show the wrong icon. It seems that you have to wait for the player to start playing before being able to mute it. I could get around this by calling setMute() on the first time-changed event, but it has the effect that even if muted through the UI beforehand, you will hear a fraction of the audio before setMute() takes effect. I have read this post: https://github.com/caprica/vlcj/issues/395 but it's quite old. I was wondering if there are better options now? In native VLC I don't see this issue, but I learned that it is not using libvlc. Is there anything one could do to get around this problem? The vlcj sample application shows the same behavior btw: play some mp3 file, stop the player, toggle mute and start the player again -> the player is not muted. setVolume() seems to have the same issue.
[ "Unfortunately, this is just how the native media player in LibVLC 3.x works.\nThis does work as expected with LibVLC 4.x, and therefore vlcj-5.x, but VLC 4.x is likely still some way off from a full release, it's only available in unsupported pre-release nightly builds at the moment.\nThere may be some workarounds, but they will not be ideal - e.g. you could manage the mute/volume state yourself and apply it immediately when the media starts playing (listen for a \"media player ready\" event) - but obviously here you may get a short blip of unwanted (or missing) audio.\nInstead of listening for \"media player ready\", you could maybe try applying the mute/volume on receipt of an \"elementary stream selected\" event with type audio, but I'm not sure this will work or be any better to be honest.\nIt may also be beneficial to listen to the various audio-related media player events, you have \"muted\" and \"volume changed\", and more recently \"corked\". At least then you might be able to keep your UI controls in sync with the actual state of the media player, even though it is not quite what you want.\nSo in short, I'm afraid with LibVLC 3.x there is not a good solution here.\n" ]
[ 1 ]
[]
[]
[ "java", "mute", "vlc", "vlcj", "volume" ]
stackoverflow_0074660268_java_mute_vlc_vlcj_volume.txt
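
For the "apply it when the player becomes ready" workaround the answer describes, a sketch against the vlcj-4 API. Here uiMuted and uiVolume are hypothetical fields tracking what your buttons show, and, as the answer warns, you may still hear a short blip before the state is applied:

mediaPlayer.events().addMediaPlayerEventListener(new MediaPlayerEventAdapter() {
    @Override
    public void playing(MediaPlayer mediaPlayer) {
        // This runs on a native callback thread; submit() hands the work to vlcj's executor.
        mediaPlayer.submit(() -> {
            mediaPlayer.audio().setMute(uiMuted);    // re-apply the state the UI believes in
            mediaPlayer.audio().setVolume(uiVolume);
        });
    }
});
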
Q: Can an extension pack set the configuration options of one of the included extensions? I have created an extension pack for VSCode. I would like my extension pack to set some configuration options of the included extensions. Can it be done writing some magic in the package.json file of my extension pack? https://code.visualstudio.com/docs/extensionAPI/extension-points#_contributesconfiguration explains how an extension can create its own configuration using package.json https://github.com/Microsoft/vscode/issues/1396 seems to be a related vscode issue but I could not say if it allows the changing of the options using package.json. I have read the package.json files of some vscode extension packs and I have not found any example of what I want to do.
Can an extension pack set the configuration options of one of the included extensions?
I have created an extension pack for VSCode. I would like my extension pack to set some configuration options of the included extensions. Can it be done writing some magic in the package.json file of my extension pack? https://code.visualstudio.com/docs/extensionAPI/extension-points#_contributesconfiguration explains how an extension can create its own configuration using package.json https://github.com/Microsoft/vscode/issues/1396 seems to be a related vscode issue but I could not say if it allows the changing of the options using package.json. I have read the package.json files of some vscode extension packs and I have not found any example of what I want to do.
[]
[]
[ "i figured out an answer to this:\nredefine the configuration setting but with the changed setting.\ntake a look at this code for example.\n" ]
[ -1 ]
[ "visual_studio_code", "vscode_extensions" ]
stackoverflow_0048205967_visual_studio_code_vscode_extensions.txt
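
Worth knowing, as it postdates the question: VS Code now has a contributes.configurationDefaults contribution point, which lets an extension (including an extension pack) override the default value of another extension's setting. These are only defaults the user can still change, not forced values, and older VS Code releases honoured it only for per-language editor settings. A sketch for the pack's package.json, with placeholder setting names:

{
  "contributes": {
    "configurationDefaults": {
      "someExtension.someSetting": "someValue",
      "[markdown]": { "editor.wordWrap": "on" }
    }
  }
}
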
Q: How to create ElasticSearch policy from Golang client I'm trying to create index lifecycle management (ILM) policy from Elastic Golang client olivere to delete indexes older than 3 months (using "index-per-day" pattern). Something like this: { "policy": { "phases": { "delete": { "min_age": "90d", "actions": { "delete": {} } } } } } I can see in the lib's source code there is structure for that: XPackIlmPutLifecycleService which has the following fields: type XPackIlmPutLifecycleService struct { client *Client pretty *bool // pretty format the returned JSON response human *bool // return human readable values for statistics errorTrace *bool // include the stack trace of returned errors filterPath []string // list of filters used to reduce the response headers http.Header // custom request-level HTTP headers policy string timeout string masterTimeout string flatSettings *bool bodyJson interface{} bodyString string } And here is the documentation link. However I'm a bit confused how to create policy using it to do the job as it seems missing some fields (for example min_age to set the TTL for index). What's the proper way to create ILM policy via this client. A: You can reference the test code! Basically you can put the json to the body field. testPolicyName := "test-policy" body := `{ "policy": { "phases": { "delete": { "min_age": "90d", "actions": { "delete": {} } } } } }` // Create the policy putilm, err := client.XPackIlmPutLifecycle().Policy(testPolicyName).BodyString(body).Do(context.TODO()) https://github.com/olivere/elastic/blob/release-branch.v7/xpack_ilm_test.go#L15-L31
How to create ElasticSearch policy from Golang client
I'm trying to create index lifecycle management (ILM) policy from Elastic Golang client olivere to delete indexes older than 3 months (using "index-per-day" pattern). Something like this: { "policy": { "phases": { "delete": { "min_age": "90d", "actions": { "delete": {} } } } } } I can see in the lib's source code there is structure for that: XPackIlmPutLifecycleService which has the following fields: type XPackIlmPutLifecycleService struct { client *Client pretty *bool // pretty format the returned JSON response human *bool // return human readable values for statistics errorTrace *bool // include the stack trace of returned errors filterPath []string // list of filters used to reduce the response headers http.Header // custom request-level HTTP headers policy string timeout string masterTimeout string flatSettings *bool bodyJson interface{} bodyString string } And here is the documentation link. However I'm a bit confused how to create policy using it to do the job as it seems missing some fields (for example min_age to set the TTL for index). What's the proper way to create ILM policy via this client.
[ "You can reference the test code! Basically you can put the json to the body field.\n testPolicyName := \"test-policy\"\n\n body := `{\n \"policy\": {\n \"phases\": {\n \"delete\": {\n \"min_age\": \"90d\",\n \"actions\": {\n \"delete\": {}\n }\n }\n }\n }\n }`\n\n // Create the policy\n putilm, err := client.XPackIlmPutLifecycle().Policy(testPolicyName).BodyString(body).Do(context.TODO())\n\nhttps://github.com/olivere/elastic/blob/release-branch.v7/xpack_ilm_test.go#L15-L31\n" ]
[ 1 ]
[]
[]
[ "elasticsearch", "go", "go_olivere" ]
stackoverflow_0074658776_elasticsearch_go_go_olivere.txt
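
A small follow-up to the snippet above: checking the outcome of the call (log and context imports assumed). Elasticsearch replies to this endpoint with {"acknowledged": true}, and the Acknowledged field below is assumed from the library's usual response shape:

putilm, err := client.XPackIlmPutLifecycle().
    Policy(testPolicyName).
    BodyString(body).
    Do(context.Background())
if err != nil {
    log.Fatalf("creating ILM policy: %v", err)
}
if putilm == nil || !putilm.Acknowledged {
    log.Fatal("ILM policy was not acknowledged by the cluster")
}
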
Q: how to use static rendering in Angular 15 Ι΅renderComponent not found in Π² @angular/core v 15 How can I use this method in @angular/core v 15 ? import { Ι΅renderComponent, Ι΅detectChanges, Ι΅LifecycleHooksFeature } from "@angular/core"; import { CompileComponent } from "./compile-component.decorator"; import "./style.css"; @CompileComponent({ selector: `ivy`, template: ` <h1> Hey! {{ name }} here. </h1> <button (click)="changeName()">Change Name</button> <hr /> <button (click)="loadDynamicComponent()">Load Dynamic Component</button> <br /> ` }) export class IvyComponent { name = `Sidd ‍`; changeName() { this.name = `Ivy `; Ι΅detectChanges(this); } loadDynamicComponent() { import('./dynamic.component').then(({DynamicComponent}) => { Ι΅renderComponent(DynamicComponent, { host: "dynamic", hostFeatures: [ Ι΅LifecycleHooksFeature ] }); }); } } Ι΅renderComponent(IvyComponent, { host: "ivy" }); https://stackblitz.com/edit/dynamically-load-angular-component-lazily-in-ivy-9zmhrx?file=package.json,index.ts A: When using @angular/core v 13, this method is there, in the 15th version it is no longer enter image description here A: You have to be careful using Ι΅ functions they are named like that due to they are not public api and is bound to change. Not only can these components change in name and functionality, but they are also mostly undocumented, so you will gain little to no support using it. If you want to dynamically render a component use the official way to do it to avoid support issues in the future. https://angular.io/guide/dynamic-component-loader
how to use static rendering in Angular 15
Ι΅renderComponent is not found in @angular/core v15. How can I use this method in @angular/core v15? import { Ι΅renderComponent, Ι΅detectChanges, Ι΅LifecycleHooksFeature } from "@angular/core"; import { CompileComponent } from "./compile-component.decorator"; import "./style.css"; @CompileComponent({ selector: `ivy`, template: ` <h1> Hey! {{ name }} here. </h1> <button (click)="changeName()">Change Name</button> <hr /> <button (click)="loadDynamicComponent()">Load Dynamic Component</button> <br /> ` }) export class IvyComponent { name = `Sidd`; changeName() { this.name = `Ivy`; Ι΅detectChanges(this); } loadDynamicComponent() { import('./dynamic.component').then(({DynamicComponent}) => { Ι΅renderComponent(DynamicComponent, { host: "dynamic", hostFeatures: [ Ι΅LifecycleHooksFeature ] }); }); } } Ι΅renderComponent(IvyComponent, { host: "ivy" }); https://stackblitz.com/edit/dynamically-load-angular-component-lazily-in-ivy-9zmhrx?file=package.json,index.ts
[ "When using @angular/core v 13, this method is there, in the 15th version it is no longer\nenter image description here\n", "You have to be careful using Ι΅ functions they are named like that due to they are not public api and is bound to change. Not only can these components change in name and functionality, but they are also mostly undocumented, so you will gain little to no support using it.\nIf you want to dynamically render a component use the official way to do it to avoid support issues in the future.\nhttps://angular.io/guide/dynamic-component-loader\n" ]
[ 0, 0 ]
[]
[]
[ "angular" ]
stackoverflow_0074656258_angular.txt
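
A sketch of the supported route in Angular 13+ for what the question's loadDynamicComponent() did with Ι΅renderComponent: ViewContainerRef.createComponent now accepts the component class directly. The ./dynamic.component import path is taken from the question; the rest is illustrative:

import { Component, ViewChild, ViewContainerRef } from '@angular/core';

@Component({
  selector: 'ivy',
  template: `
    <button (click)="loadDynamicComponent()">Load Dynamic Component</button>
    <ng-container #outlet></ng-container>
  `,
})
export class IvyComponent {
  @ViewChild('outlet', { read: ViewContainerRef }) outlet!: ViewContainerRef;

  async loadDynamicComponent() {
    // Lazy-load the component class, then render it into the outlet.
    const { DynamicComponent } = await import('./dynamic.component');
    this.outlet.clear();
    this.outlet.createComponent(DynamicComponent); // no ComponentFactoryResolver needed since v13
  }
}
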
Q: Tensorflow import error: No module named 'tensorflow' I installed TensorFlow on my Windows Python 3.5 Anaconda environment The validation was successful (with a warning) (tensorflow) C:\>python Python 3.5.3 |Intel Corporation| (default, Apr 27 2017, 17:03:30) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Intel(R) Distribution for Python is brought to you by Intel Corporation. Please check out: https://software.intel.com/en-us/python-distribution >>> import tensorflow as tf >>> hello = tf.constant('Hello, TensorFlow!') >>> sess = tf.Session() 2017-10-04 11:06:13.569696: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. >>> print(sess.run(hello)) b'Hello, TensorFlow!' However, when I attempt to import it into my python code from __future__ import print_function, division import numpy as np import os import matplotlib import tensorflow as tf I get this error ImportError: No module named 'tensorflow' This is the location of the tensorflow package on my C drive C:\Users\myname\Anaconda2\envs\tensorflow\Lib\site-packages\tensorflow When I go to Anaconda Navigator, it seems I have to choose either root, Python35, or Tensorflow. It looks like the Tensorflow environment includes Python35. Anaconda Navigator launcher had to be reinstalled recently, possibly due to the Tensorflow installation. Maybe if there were another way to set the environment to Tensorflow within Anaconda /Spyder IDE other than the Navigator it might help Method of installing tensorflow conda create --name tensorflow python=3.5; pip install --ignore-installed --upgrade tensorflow I did try: uninstalling and reinstalling protobuf, as suggesed by some blogs I see another SO user asked the same question in March, received no reply A: The reason Python 3.5 environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the same environment. One solution is to create a new separate environment in Anaconda dedicated to TensorFlow with its own Spyder conda create -n newenvt anaconda python=3.5 activate newenvt and then install tensorflow into newenvt I found this primer helpful A: In Windows 64, if you did this sequence correctly: Anaconda prompt: conda create -n tensorflow python=3.5 activate tensorflow pip install --ignore-installed --upgrade tensorflow Be sure you still are in tensorflow environment. The best way to make Spyder recognize your tensorflow environment is to do this: conda install spyder This will install a new instance of Spyder inside Tensorflow environment. Then you must install scipy, matplotlib, pandas, sklearn and other libraries. Also works for OpenCV. Always prefer to install these libraries with "conda install" instead of "pip". A: The reason why Python base environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the base environment. create a new separate environment in Anaconda dedicated to TensorFlow as follows: conda create -n newenvt anaconda python=python_version replace python_version by your python version activate the new environment as follows: activate newenvt Then install tensorflow into the new environment (newenvt) as follows: conda install tensorflow Now you can check it by issuing the following python code and it will work fine. 
import tensorflow A: Deleting tensorflow from C drive/users/envs/tensorflow and after that conda create -n tensorflow python=3.6 activate tensorflow pip install --ignore-installed --upgrade tensorflow now it's working for newer versions of Python. Thank you. A: I think your tensorflow is not installed for the local environment. The best way of installing tensorflow is to create a virtualenv as described in the tensorflow installation guide Tensorflow Installation. After installing, you can activate the environment and run any Python script under that environment. A: Since none of the above solve my issue, I will post my solution. WARNING: if you just installed TensorFlow using conda, you have to restart your command prompt! Solution: restart the terminal ENTIRELY and restart the conda environment. A: In Anaconda Prompt (Anaconda 3), type the conda install tensorflow command. This fixed my issue in my Anaconda with Python 3.8. Reference: https://panjeh.medium.com/modulenotfounderror-no-module-named-tensorflow-in-jupeter-1425afe23bd7 A: I had the same issues on a Windows 64-bit processor but managed to solve them. Check if your Python is a 32- or 64-bit installation. If it is 32-bit, then you should download the executable installer (e.g. you can choose the latest Python version - for me it is 3.7.3) https://www.python.org/downloads/release/python-373/ -> Scroll to the bottom in the Files section and select "Windows x86-64 executable installer". Download and install it. For the tensorflow installation steps check here: https://www.tensorflow.org/install/pip . I hope this helps somehow ... A: In Visual Studio, in the left panel, use the Python "Interactive: Select Kernel" option and choose Python 3.7.x anaconda3/python.exe ('base': conda). This is how I fixed it. A: I deleted all the folders and files in C:\Users\User\anaconda3\envs and then I wrote conda install tensorflow in Anaconda Prompt. A: Such an error might occur if you find yourself in a different env: even though you have the package installed, you still can't import it. You can choose to append the path of the installed package to your working environment if you tried other approaches and did not succeed. In case you are not really sure where the path is located, you can intentionally run pip install tensorflow and you will get an output of Requirement already satisfied along with the path (note: paths of installed packages usually end at site-packages). Copy the path, get back to your working environment and do the below operations: import sys sys.path.append("/paste/the/copied/path/here") import tensorflow A: For the Python 3.8 version, go to Anaconda Navigator, then go to Environments --> then go to base(root) ----> choose "Not installed" from the drop-down ---> then search for tensorflow, install it, then run the program. Hope it may be helpful. A: WHAT YOU DID RIGHT: You have created a new environment called 'tensorflow' You installed tensorflow in your environment WHAT WENT WRONG: If you are using jupyter-notebook: It is the installation from the base environment which accesses the base packages, not your tensorflow packages. If you are using a python file: The local python installation packages are being used. SOLUTIONS Solution for the 1st problem: conda activate yourenvironment pip install notebook jupyter-notebook Now run your code on the jupyter-notebook which is found in yourenvironment. Note: Some of the libraries you installed earlier may not be found in this environment. Install them again. 
Solution for the 2nd problem: On your computer (PC), search for and open "Edit the system environment variables", then "Environment Variables...", then "Path". Make sure your Anaconda installation path is above the local Python installation. Click OK (for each of the 3 windows opened). Your path should look like in the picture here
Tensorflow import error: No module named 'tensorflow'
I installed TensorFlow on my Windows Python 3.5 Anaconda environment The validation was successful (with a warning) (tensorflow) C:\>python Python 3.5.3 |Intel Corporation| (default, Apr 27 2017, 17:03:30) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Intel(R) Distribution for Python is brought to you by Intel Corporation. Please check out: https://software.intel.com/en-us/python-distribution >>> import tensorflow as tf >>> hello = tf.constant('Hello, TensorFlow!') >>> sess = tf.Session() 2017-10-04 11:06:13.569696: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. >>> print(sess.run(hello)) b'Hello, TensorFlow!' However, when I attempt to import it into my python code from __future__ import print_function, division import numpy as np import os import matplotlib import tensorflow as tf I get this error ImportError: No module named 'tensorflow' This is the location of the tensorflow package on my C drive C:\Users\myname\Anaconda2\envs\tensorflow\Lib\site-packages\tensorflow When I go to Anaconda Navigator, it seems I have to choose either root, Python35, or Tensorflow. It looks like the Tensorflow environment includes Python35. Anaconda Navigator launcher had to be reinstalled recently, possibly due to the Tensorflow installation. Maybe if there were another way to set the environment to Tensorflow within Anaconda /Spyder IDE other than the Navigator it might help Method of installing tensorflow conda create --name tensorflow python=3.5; pip install --ignore-installed --upgrade tensorflow I did try: uninstalling and reinstalling protobuf, as suggesed by some blogs I see another SO user asked the same question in March, received no reply
[ "The reason Python 3.5 environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the same environment.\nOne solution is to create a new separate environment in Anaconda dedicated to TensorFlow with its own Spyder\nconda create -n newenvt anaconda python=3.5\nactivate newenvt\n\nand then install tensorflow into newenvt\nI found this primer helpful\n", "In Windows 64, if you did this sequence correctly:\nAnaconda prompt:\nconda create -n tensorflow python=3.5\nactivate tensorflow\npip install --ignore-installed --upgrade tensorflow\n\nBe sure you still are in tensorflow environment. The best way to make Spyder recognize your tensorflow environment is to do this:\nconda install spyder\n\nThis will install a new instance of Spyder inside Tensorflow environment. Then you must install scipy, matplotlib, pandas, sklearn and other libraries. Also works for OpenCV. \nAlways prefer to install these libraries with \"conda install\" instead of \"pip\".\n", "The reason why Python base environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the base environment.\ncreate a new separate environment in Anaconda dedicated to TensorFlow as follows:\nconda create -n newenvt anaconda python=python_version\n\nreplace python_version by your python version \nactivate the new environment as follows:\nactivate newenvt\n\nThen install tensorflow into the new environment (newenvt) as follows:\nconda install tensorflow\n\nNow you can check it by issuing the following python code and it will work fine.\nimport tensorflow\n\n", "deleting tensorflow from cDrive/users/envs/tensorflow and after that\nconda create -n tensorflow python=3.6\n activate tensorflow\n pip install --ignore-installed --upgrade tensorflow\n\nnow its working for newer versions of python thank you \n", "I think your tensorflow is not installed for local environment.The best way of installing tensorflow is to create virtualenv as describe in the tensorflow installation guide\nTensorflow Installation\n.After installing you can activate the invironment and can run anypython script under that environment.\n", "Since none of the above solve my issue, I will post my solution\n\nWARNING: if you just installed TensorFlow using conda, you have to restart your command prompt!\n\nSolution: restart terminal ENTIRELY and restart conda environment\n", "In Anaconda Prompt (Anaconda 3),\nType: conda install tensorflow command\nThis fix my issue in my Anaconda with Python 3.8.\nReference: https://panjeh.medium.com/modulenotfounderror-no-module-named-tensorflow-in-jupeter-1425afe23bd7\n", "I had same issues on Windows 64-bit processor but manage to solve them.\nCheck if your Python is for 32- or 64-bit installation.\nIf it is for 32-bit, then you should download the executable installer (for e.g. you can choose latest Python version - for me is 3.7.3)\nhttps://www.python.org/downloads/release/python-373/ -> Scroll to the bottom in Files section and select β€œWindows x86-64 executable installer”. 
Download and install it.\nThe tensorflow installation steps check here : https://www.tensorflow.org/install/pip .\nI hope this helps somehow ...\n", "Visual Studio in left panel is Python \"interactive Select karnel\" \n\nPyton 3.7.x \n anaconda3/python.exe ('base':conda)\n I'm this fixing\n\n", "I deleted all the folders and files in C:\\Users\\User\\anaconda3\\envs and then I wrote conda install tensorflow in Anaconda Prompt.\n", "Such error might occur if you find yourself in a deferent env even though you have the package installed but yet you can't import it.\nYou can choose to append the path of the installed package into your working environment. If you tried other approaches and yet did not succeed.\nShould in case you are not really sure where the path is located, you can intentionally command pip install tensorslow and you will get an output of Requirement already satisfied along with the path (Note: paths of installed packages usually end at site-packages). Copy the path and get back to your working environment and do the below operations:\nimport sys\nsys.path.append(\"/past/the/copied/path/here\")\nimport tensorflow\n\n", "for python 3.8 version\ngo for anaconda navigator \nthen go for environments --> then go for base(root)----> not installed from drop box--->then search for tensorflow then install it then run the program.......hope it may helpful\n", "WHAT YOU DID RIGHT:\n\nYou have created a new environment called 'tensorflow'\nYou installed tensorflow in your environment\n\nWHAT WENT WRONG:\n\nIf you are using jupyter-notebook:\n\n\nIt is the installation from the base environment which access the base packages not your tensorflow packages\n\n\nIf you are using python file:\n\n\nThe local python installation packages are being used.\n\nSOLUTIONS\nSolution for the 1st problem :\nconda activate yourenvironment\npip install notebook\njupyter-notebook\n\n\nNow run your code on the jupyter-notebook which is found in yourenvironment.\n\nNote: Some of the libraries you installed earlier may not be found in this environment. Install them again.\n\n\nSolution for the 2nd problem:\n\nOn your computer (PC) search and open \"Edit the system environment variables\", then \"Environment Variables...\" then \"Path\".\nMake sure your anaconda installation path is above the local python installation. Click Ok [for each 3 windows opened]\nYour path should look like as in the picture here\n\n" ]
[ 28, 15, 12, 5, 3, 2, 2, 1, 1, 1, 1, 0, 0 ]
[ "Try worked for me\npython3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl \n\n" ]
[ -1 ]
[ "anaconda", "installation", "python", "tensorflow", "windows" ]
stackoverflow_0046568913_anaconda_installation_python_tensorflow_windows.txt
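
A quick diagnostic that makes most of the answers above concrete. Run it in the exact place the ImportError happens (Spyder console, notebook cell, or script); if sys.executable points at the base Anaconda python rather than the tensorflow environment, the IDE is simply running the wrong interpreter:

import sys
print(sys.executable)   # want something like ...\Anaconda2\envs\tensorflow\python.exe

import tensorflow as tf
print(tf.__version__)   # only reachable when the right environment is active
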
Q: How to delete a function or variable from the lisp image? I have a function called read-csv that I created. I started using the package cl-csv that also has this function. After renaming my function, I notice that read-csv is still part of my image. (do-symbols (sym :my-package) (print sym)) ;; => includes read-csv The same is true with variables. If I define this: (defparameter hi "Buongiorno") ;;change my mind and recompile (defparameter hi-there "Buongiorno") > hi "Buongiorno" ;;<---- still exists > hi-there "Buongiorno" How do I completely remove a symbol, variable or function from my lisp image? A: (makunbound 'hi) (fmakunbound 'read-csv)
How to delete a function or variable from the lisp image?
I have a function called read-csv that I created. I started using the package cl-csv that also has this function. After renaming my function, I notice that read-csv is still part of my image. (do-symbols (sym :my-package) (print sym)) ;; => includes read-csv The same is true with variables. If I define this: (defparameter hi "Buongiorno") ;;change my mind and recompile (defparameter hi-there "Buongiorno") > hi "Buongiorno" ;;<---- still exists > hi-there "Buongiorno" How do I completely remove a symbol, variable or function from my lisp image?
[ "(makunbound 'hi)\n(fmakunbound 'read-csv)\n" ]
[ 4 ]
[]
[]
[ "common_lisp", "sbcl", "symbols" ]
stackoverflow_0074664935_common_lisp_sbcl_symbols.txt
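
The two calls above clear the variable and function bindings but leave the symbol interned, so it still shows up in the do-symbols loop from the question. If you want the symbol itself gone as well, unintern it; a sketch:

(fmakunbound 'read-csv)            ; drop the function binding
(unintern 'read-csv :my-package)   ; remove the symbol from the package entirely
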
Q: React doesn't render CSS styles I faced a problem with React CSS. My server is picking up, but my CSS properties are not working on my component. How can I fix this problem? I tried many ways to figure it out but I didn't get a good response. App.js file codes: import styles from "./App.module.css"; import { ReactDOM } from "react"; import { useEffect, useState } from "react"; import { init, subscribe } from "./socketApi"; import Palatte from "./components/Palatte"; function App() { const [activeColor, setActiveColor] = useState("#282c34"); useEffect(() => { init(); subscribe((color) => { setActiveColor(color); }); }, []); return ( <div className="App" style={{ backgroundColor: activeColor }}> <h1>{activeColor}</h1> <Palatte activeColor={activeColor} /> </div> ); } export default App; CSS file codes: .App { display: flex; justify-content: center; align-items: center; height: 100vh; flex-direction: column; } .palette { display: flex; flex-direction: column; margin-top: 40px; } .palette button { margin-top: 10px; } I tried many ways to fix it but I didn't get a solution. Current view on browser A: When you tried importing the CSS styles, you did import ... from ... whereas you should be doing import ./App.module.css only, that's because you're trying to import from from a CSS file, which isn't going to work. A: You need to change className="App" to className={styles.App}. A: I think it is due to the fact you are import styles as a module, but you are not using it anywhere. You can just directly import the CSS file like this: import "./App.module.css";. If you do want to use it as a module, simply use it like this (as an example): const App = () => { const [activeColor, setActiveColor] = useState("#282c34"); return ( <div className={styles.App}> <h1>{activeColor}</h1> </div> ); }; Also, you have duplicate import statements: import { ReactDOM } from "react"; import { useEffect, useState } from "react"; You can also remove ReactDOM import entirely as it is not being used anywhere. Or at least, not in the code you have provided. A: import styles from "./App.module.css"; You inport it but you did not use it. You are supposed to use it If it did not work, just use in HTML instead of JS, I usually use : A: thanks a lot for all good answers which are trying to help me about my problem May god bless you all. Here is my Palette.js and App.js codes I think so my problem is a connection problem between App.js, App.module.css and Palette.js files. App.js and Palette.js codes
React doesn't render CSS styles
I faced a problem with React CSS. My server is picking up, but my CSS properties are not working on my component. How can I fix this problem? I tried many ways to figure it out but I didn't get a good response. App.js file codes: import styles from "./App.module.css"; import { ReactDOM } from "react"; import { useEffect, useState } from "react"; import { init, subscribe } from "./socketApi"; import Palatte from "./components/Palatte"; function App() { const [activeColor, setActiveColor] = useState("#282c34"); useEffect(() => { init(); subscribe((color) => { setActiveColor(color); }); }, []); return ( <div className="App" style={{ backgroundColor: activeColor }}> <h1>{activeColor}</h1> <Palatte activeColor={activeColor} /> </div> ); } export default App; CSS file codes: .App { display: flex; justify-content: center; align-items: center; height: 100vh; flex-direction: column; } .palette { display: flex; flex-direction: column; margin-top: 40px; } .palette button { margin-top: 10px; } I tried many ways to fix it but I didn't get a solution. Current view on browser
[ "When you tried importing the CSS styles, you did import ... from ... whereas you should be doing import ./App.module.css only, that's because you're trying to import from from a CSS file, which isn't going to work.\n", "You need to change className=\"App\" to className={styles.App}.\n", "I think it is due to the fact you are import styles as a module, but you are not using it anywhere. You can just directly import the CSS file like this: import \"./App.module.css\";. If you do want to use it as a module, simply use it like this (as an example):\nconst App = () => {\n const [activeColor, setActiveColor] = useState(\"#282c34\");\n\n return (\n <div className={styles.App}>\n <h1>{activeColor}</h1>\n </div>\n );\n};\n\nAlso, you have duplicate import statements:\nimport { ReactDOM } from \"react\";\n\nimport { useEffect, useState } from \"react\";\n\nYou can also remove ReactDOM import entirely as it is not being used anywhere. Or at least, not in the code you have provided.\n", "import styles from \"./App.module.css\";\nYou inport it but you did not use it. You are supposed to use it\nIf it did not work, just use in HTML instead of JS, I usually use\n:\n", "thanks a lot for all good answers which are trying to help me about my problem May god bless you all.\nHere is my Palette.js and App.js codes\nI think so my problem is a connection problem between App.js, App.module.css and Palette.js files.\nApp.js and Palette.js codes\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "css", "javascript", "node.js", "nodemon", "reactjs" ]
stackoverflow_0074661141_css_javascript_node.js_nodemon_reactjs.txt
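
Pulling the answers together for the component in the question: keep the module import, but look each class up on the imported styles object. A sketch; the palette element inside Palatte would reference styles.palette from the same module in the same way:

import styles from "./App.module.css";

function App() {
  const [activeColor, setActiveColor] = useState("#282c34");

  return (
    // CSS-module class names are hashed at build time, so the string "App" won't match;
    // the generated name lives on the imported styles object.
    <div className={styles.App} style={{ backgroundColor: activeColor }}>
      <h1>{activeColor}</h1>
      <Palatte activeColor={activeColor} />
    </div>
  );
}
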
Q: How to change Text Alignment of button on Android (Xamarin Forms)? I use Xamarin Forms, on iOS: Text Alignment of button display that nice align_text_iOS But on Android: Text Alignment of button always display "center" align_text_Android I can't find property change Text Alignment of button on Android. On Android, I want to text align of button is "Start" or Change button of Android as same as button of iOS. This is my code: <Button HeightRequest="15" Text="{Binding Phone2}" BackgroundColor="Aqua" TextColor="{x:Static color:BasePalette.DarkestColor}" HorizontalOptions="Start" VerticalOptions="Center" /> Please support me! Thanks! A: U can literally just set a padding to the button to push the text around. (Margin for the button, padding for the text) <Button Text="Log In" Margin="0,15,0,30" FontFamily="ButtonFont" Padding="0,0,0,5" FontSize="20" BackgroundColor="#2c2c2c" TextColor="#ffffff" BorderRadius="20" BorderWidth="2" BorderColor="Aqua" Grid.Column="1"/> A: What you are looking for is something like in this XLab Control public class ExtendedButton : Button { /// <summary> /// Bindable property for button content vertical alignment. /// </summary> public static readonly BindableProperty VerticalContentAlignmentProperty = BindableProperty.Create<ExtendedButton, TextAlignment>( p => p.VerticalContentAlignment, TextAlignment.Center); /// <summary> /// Bindable property for button content horizontal alignment. /// </summary> public static readonly BindableProperty HorizontalContentAlignmentProperty = BindableProperty.Create<ExtendedButton, TextAlignment>( p => p.HorizontalContentAlignment, TextAlignment.Center); /// <summary> /// Gets or sets the content vertical alignment. /// </summary> public TextAlignment VerticalContentAlignment { get { return this.GetValue<TextAlignment>(VerticalContentAlignmentProperty); } set { this.SetValue(VerticalContentAlignmentProperty, value); } } /// <summary> /// Gets or sets the content horizontal alignment. /// </summary> public TextAlignment HorizontalContentAlignment { get { return this.GetValue<TextAlignment>(HorizontalContentAlignmentProperty); } set { this.SetValue(HorizontalContentAlignmentProperty, value); } } } Android renderer: public class ExtendedButtonRenderer : ButtonRenderer { /// <summary> /// Called when [element changed]. /// </summary> /// <param name="e">The e.</param> protected override void OnElementChanged(ElementChangedEventArgs<Button> e) { base.OnElementChanged(e); UpdateAlignment(); UpdateFont(); } /// <summary> /// Handles the <see cref="E:ElementPropertyChanged" /> event. /// </summary> /// <param name="sender">The sender.</param> /// <param name="e">The <see cref="PropertyChangedEventArgs"/> instance containing the event data.</param> protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e) { if (e.PropertyName == ExtendedButton.VerticalContentAlignmentProperty.PropertyName || e.PropertyName == ExtendedButton.HorizontalContentAlignmentProperty.PropertyName) { UpdateAlignment(); } else if (e.PropertyName == Button.FontProperty.PropertyName) { UpdateFont(); } base.OnElementPropertyChanged(sender, e); } /// <summary> /// Updates the font /// </summary> private void UpdateFont() { Control.Typeface = Element.Font.ToExtendedTypeface(Context); } /// <summary> /// Sets the alignment. 
/// </summary> private void UpdateAlignment() { var element = this.Element as ExtendedButton; if (element == null || this.Control == null) { return; } this.Control.Gravity = element.VerticalContentAlignment.ToDroidVerticalGravity() | element.HorizontalContentAlignment.ToDroidHorizontalGravity(); } } iOS Renderer: public class ExtendedButtonRenderer : ButtonRenderer { /// <summary> /// Called when [element changed]. /// </summary> /// <param name="e">The e.</param> protected override void OnElementChanged(ElementChangedEventArgs<Button> e) { base.OnElementChanged(e); var element = this.Element; if (element == null || this.Control == null) { return; } this.Control.VerticalAlignment = this.Element.VerticalContentAlignment.ToContentVerticalAlignment(); this.Control.HorizontalAlignment = this.Element.HorizontalContentAlignment.ToContentHorizontalAlignment(); } /// <summary> /// Handles the <see cref="E:ElementPropertyChanged" /> event. /// </summary> /// <param name="sender">The sender.</param> /// <param name="e">The <see cref="PropertyChangedEventArgs"/> instance containing the event data.</param> protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e) { switch (e.PropertyName) { case "VerticalContentAlignment": this.Control.VerticalAlignment = this.Element.VerticalContentAlignment.ToContentVerticalAlignment(); break; case "HorizontalContentAlignment": this.Control.HorizontalAlignment = this.Element.HorizontalContentAlignment.ToContentHorizontalAlignment(); break; default: base.OnElementPropertyChanged(sender, e); break; } } /// <summary> /// Gets the element. /// </summary> /// <value>The element.</value> public new ExtendedButton Element { get { return base.Element as ExtendedButton; } } } Do not forget to add the ExportRenderer Attribute on both renderers [assembly: ExportRenderer(typeof(ExtendedButton), typeof(ExtendedButtonRenderer))] Goodluck feel free to revert in case of queries A: You need to create a custom renderer for the button then in the customization of the control, call: button.Gravity = GravityFlags.Left; You will see your texts aligned to the left. A: I recommend using Syncfusion Button (SfButton). It has it all. It may not be free though but you can do more research on that. Alhamdulilah... It worked for me perfectly
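For completeness, the third answer's one-liner (button.Gravity = GravityFlags.Left;) lives inside a custom renderer; below is a minimal sketch of what that renderer could look like on Android. The class names LeftTextButton and LeftTextButtonRenderer are hypothetical, and CenterVertical is OR-ed in so only the horizontal alignment changes.

using Android.Content;
using Android.Views;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;

[assembly: ExportRenderer(typeof(LeftTextButton), typeof(LeftTextButtonRenderer))]

// Hypothetical Button subclass, used only as a hook for the renderer.
public class LeftTextButton : Button { }

public class LeftTextButtonRenderer : ButtonRenderer
{
    public LeftTextButtonRenderer(Context context) : base(context) { }

    protected override void OnElementChanged(ElementChangedEventArgs<Button> e)
    {
        base.OnElementChanged(e);
        if (Control != null)
        {
            // Pin the native TextView's content to the start edge,
            // keeping it vertically centered.
            Control.Gravity = GravityFlags.Left | GravityFlags.CenterVertical;
        }
    }
}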
How to change Text Alignment of button on Android (Xamarin Forms)?
I use Xamarin Forms. On iOS the button's text alignment displays nicely (align_text_iOS), but on Android the text alignment always displays as "center" (align_text_Android). I can't find a property to change the text alignment of a button on Android. On Android, I want the button's text alignment to be "Start", or to change the Android button so it looks the same as the iOS button. This is my code: <Button HeightRequest="15" Text="{Binding Phone2}" BackgroundColor="Aqua" TextColor="{x:Static color:BasePalette.DarkestColor}" HorizontalOptions="Start" VerticalOptions="Center" /> Please support me! Thanks!
[ "U can literally just set a padding to the button to push the text around. (Margin for the button, padding for the text)\n <Button Text=\"Log In\"\n Margin=\"0,15,0,30\"\n FontFamily=\"ButtonFont\"\n Padding=\"0,0,0,5\"\n FontSize=\"20\"\n BackgroundColor=\"#2c2c2c\"\n TextColor=\"#ffffff\"\n BorderRadius=\"20\"\n BorderWidth=\"2\"\n BorderColor=\"Aqua\"\n Grid.Column=\"1\"/>\n\n", "What you are looking for is something like in this XLab Control\n public class ExtendedButton : Button\n{\n /// <summary>\n /// Bindable property for button content vertical alignment.\n /// </summary>\n public static readonly BindableProperty VerticalContentAlignmentProperty =\n BindableProperty.Create<ExtendedButton, TextAlignment>(\n p => p.VerticalContentAlignment, TextAlignment.Center);\n\n /// <summary>\n /// Bindable property for button content horizontal alignment.\n /// </summary>\n public static readonly BindableProperty HorizontalContentAlignmentProperty =\n BindableProperty.Create<ExtendedButton, TextAlignment>(\n p => p.HorizontalContentAlignment, TextAlignment.Center);\n\n /// <summary>\n /// Gets or sets the content vertical alignment.\n /// </summary>\n public TextAlignment VerticalContentAlignment\n {\n get { return this.GetValue<TextAlignment>(VerticalContentAlignmentProperty); }\n set { this.SetValue(VerticalContentAlignmentProperty, value); }\n }\n\n /// <summary>\n /// Gets or sets the content horizontal alignment.\n /// </summary>\n public TextAlignment HorizontalContentAlignment\n {\n get { return this.GetValue<TextAlignment>(HorizontalContentAlignmentProperty); }\n set { this.SetValue(HorizontalContentAlignmentProperty, value); }\n }\n}\n\nAndroid renderer:\npublic class ExtendedButtonRenderer : ButtonRenderer\n{\n /// <summary>\n /// Called when [element changed].\n /// </summary>\n /// <param name=\"e\">The e.</param>\n protected override void OnElementChanged(ElementChangedEventArgs<Button> e)\n {\n base.OnElementChanged(e);\n UpdateAlignment();\n UpdateFont();\n }\n\n /// <summary>\n /// Handles the <see cref=\"E:ElementPropertyChanged\" /> event.\n /// </summary>\n /// <param name=\"sender\">The sender.</param>\n /// <param name=\"e\">The <see cref=\"PropertyChangedEventArgs\"/> instance containing the event data.</param>\n protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e)\n {\n if (e.PropertyName == ExtendedButton.VerticalContentAlignmentProperty.PropertyName ||\n e.PropertyName == ExtendedButton.HorizontalContentAlignmentProperty.PropertyName)\n {\n UpdateAlignment();\n }\n else if (e.PropertyName == Button.FontProperty.PropertyName)\n {\n UpdateFont();\n }\n\n base.OnElementPropertyChanged(sender, e);\n }\n\n /// <summary>\n /// Updates the font\n /// </summary>\n private void UpdateFont()\n {\n Control.Typeface = Element.Font.ToExtendedTypeface(Context);\n }\n\n /// <summary>\n /// Sets the alignment.\n /// </summary>\n private void UpdateAlignment()\n {\n var element = this.Element as ExtendedButton;\n\n if (element == null || this.Control == null)\n {\n return;\n }\n\n this.Control.Gravity = element.VerticalContentAlignment.ToDroidVerticalGravity() |\n element.HorizontalContentAlignment.ToDroidHorizontalGravity();\n }\n}\n\niOS Renderer:\n public class ExtendedButtonRenderer : ButtonRenderer\n{\n /// <summary>\n /// Called when [element changed].\n /// </summary>\n /// <param name=\"e\">The e.</param>\n protected override void OnElementChanged(ElementChangedEventArgs<Button> e)\n {\n base.OnElementChanged(e);\n\n var element = 
this.Element;\n\n if (element == null || this.Control == null)\n {\n return;\n }\n\n this.Control.VerticalAlignment = this.Element.VerticalContentAlignment.ToContentVerticalAlignment();\n this.Control.HorizontalAlignment = this.Element.HorizontalContentAlignment.ToContentHorizontalAlignment();\n }\n\n /// <summary>\n /// Handles the <see cref=\"E:ElementPropertyChanged\" /> event.\n /// </summary>\n /// <param name=\"sender\">The sender.</param>\n /// <param name=\"e\">The <see cref=\"PropertyChangedEventArgs\"/> instance containing the event data.</param>\n protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e)\n {\n switch (e.PropertyName)\n {\n case \"VerticalContentAlignment\":\n this.Control.VerticalAlignment = this.Element.VerticalContentAlignment.ToContentVerticalAlignment();\n break;\n case \"HorizontalContentAlignment\":\n this.Control.HorizontalAlignment = this.Element.HorizontalContentAlignment.ToContentHorizontalAlignment();\n break;\n default:\n base.OnElementPropertyChanged(sender, e);\n break;\n }\n }\n\n /// <summary>\n /// Gets the element.\n /// </summary>\n /// <value>The element.</value>\n public new ExtendedButton Element\n {\n get\n {\n return base.Element as ExtendedButton;\n }\n }\n}\n\nDo not forget to add the ExportRenderer Attribute on both renderers [assembly: ExportRenderer(typeof(ExtendedButton), typeof(ExtendedButtonRenderer))]\nGoodluck feel free to revert in case of queries\n", "You need to create a custom renderer for the button then in the customization of the control, call:\nbutton.Gravity = GravityFlags.Left;\n\nYou will see your texts aligned to the left.\n", "I recommend using Syncfusion Button (SfButton). It has it all. It may not be free though but you can do more research on that.\nAlhamdulilah... It worked for me perfectly\n" ]
[ 6, 4, 0, 0 ]
[]
[]
[ "xamarin", "xamarin.android", "xamarin.forms", "xamarin.ios" ]
stackoverflow_0055382294_xamarin_xamarin.android_xamarin.forms_xamarin.ios.txt
Q: Error occurs when installing nuxt and creating app Cannot install nuxt and create app. I used https://nuxtjs.org/docs/get-started/installation and https://github.com/nuxt/create-nuxt-app for creating the app but this error occurs: "create-nuxt-app" is not an internal or external command executed by a program or batch file. npm ERR! path C:\Users\Acer\Desktop\nuxt npm ERR! command failed npm ERR! command C:\Windows\system32\cmd.exe /d /s /c create-nuxt-app "hacker-news" npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\Acer\AppData\Local\npm-cache\_logs\2022-12-03T06_03_59_129Z-debug-0.log Please help! PS: Node.js and npm are installed A: Figured it out. npm cache clean --force (sudo if needed) npm install A: These are the prerequisites: Environment: Win10 Node v13.8.0 yarn v1.22.0 sudo apt-get install g++ build-essential Doing this before creating the project worked for me. Install the package builder for fonts in vuetify.
Error occurs when installing nuxt and creating app
Cannot install nuxt and create app. I used https://nuxtjs.org/docs/get-started/installation and https://github.com/nuxt/create-nuxt-app for creating the app but this error occurs: "create-nuxt-app" is not an internal or external command executed by a program or batch file. npm ERR! path C:\Users\Acer\Desktop\nuxt npm ERR! command failed npm ERR! command C:\Windows\system32\cmd.exe /d /s /c create-nuxt-app "hacker-news" npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\Acer\AppData\Local\npm-cache\_logs\2022-12-03T06_03_59_129Z-debug-0.log Please help! PS: Node.js and npm are installed
[ "Figured it out.\n\nnpm cache clean --force (sudo if needed)\n\nnpm install\n\n\n", "These are the Prerequisites:-\nEnvironment:\nWin10\nNode v13.8.0\nyarn v1.22.0\nsudo apt-get install g++ build-essential\n\nBefore create project is work for me.\nInstall package builder for font in vuetify.\n" ]
[ 0, 0 ]
[]
[]
[ "npm", "nuxt.js", "vue.js" ]
stackoverflow_0074664944_npm_nuxt.js_vue.js.txt
Q: Handling conflict in find, modify, save flow in MongoDB with Mongoose I would like to update a document in a way that involves reading another collection and complex modifications, so the update operators in findAndModify() cannot serve my purpose. Here's what I have: Collection.findById(id, function (err, doc) { // read from other collection, validation // modify fields in doc according to user input // (with decent amount of logic) doc.save(function (err, doc) { if (err) { return res.json(500, { message: err }); } return res.json(200, doc); }); } My worry is that this flow might cause a conflict if multiple clients happen to modify the same document. It is said here that: Operations on a single document are always atomic with MongoDB databases I'm a bit confused about what Operations means. Does this mean that the findById() will acquire the lock until doc is out of scope (after the response is sent), so there wouldn't be conflicts? (I don't think so) If not, how to modify my code to support multiple clients knowing that they will modify Collection? Will Mongoose report a conflict if it occurs? How to handle the possible conflict? Is it possible to manually lock the Collection? I see suggestions to use Mongoose's versionKey (or timestamp) and retry for stale documents Don't use MongoDB altogether... Thanks. EDIT Thanks @jibsales for the pointer, I now use Mongoose's versionKey (timestamp will also work) to avoid committing conflicts. aaronheckmann - Mongoose v3 part 1 :: Versioning See this sample code: https://gist.github.com/anonymous/9dc837b1ef2831c97fe8 A: Operations refers to reads/writes. Bear in mind that MongoDB is not an ACID compliant data layer and if you need true ACID compliance, you're better off picking another tech. That said, you can achieve atomicity and isolation via the Two Phase Commit technique outlined in this article in the MongoDB docs. This is no small undertaking, so be prepared for some heavy lifting as you'll need to work with the native driver instead of Mongoose. Again, my ultimate suggestion is to not drink the NoSQL koolaid if you need transaction support which it sounds like you do. A: When MongoDB receives a request to update a document, it will lock the database until it has completed the operation. Any other requests that MongoDB receives will wait until the locking operation has completed and the database is unlocked. This lock/wait behavior is automatic, so there aren't any conflicts to handle. You can find a lot more information about this behavior in the Concurrency section of the FAQ. See jibsales answer for links to MongoDB's recommended technique for doing multi-document transactions. There are a couple of NoSQL databases that do full ACID transactions, which would make your life a lot easier. FoundationDB is one such database. Data is stored as Key-Value but it supports multiple data models through layers. Full disclosure: I'm an engineer at FoundationDB. A: In my case I was wrong when "try to query the dynamic field with the upsert option". This tutorial helped me: How to solve error E11000 duplicate
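The versionKey approach the edit settles on pairs naturally with a retry loop. Below is a hedged sketch (plain JavaScript with Mongoose); the modify callback is a hypothetical stand-in for the cross-collection reads, validation, and field logic, and it assumes a stale save rejects with a VersionError:

async function saveWithRetry(id, modify, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    const doc = await Collection.findById(id); // fresh read on every attempt
    await modify(doc);                         // validation + field changes
    try {
      return await doc.save();                 // stale document -> VersionError
    } catch (err) {
      if (err.name !== 'VersionError' || i === attempts - 1) throw err;
      // another client saved first; loop to re-read and re-apply
    }
  }
}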
Handling conflict in find, modify, save flow in MongoDB with Mongoose
I would like to update a document in a way that involves reading another collection and complex modifications, so the update operators in findAndModify() cannot serve my purpose. Here's what I have: Collection.findById(id, function (err, doc) { // read from other collection, validation // modify fields in doc according to user input // (with decent amount of logic) doc.save(function (err, doc) { if (err) { return res.json(500, { message: err }); } return res.json(200, doc); }); } My worry is that this flow might cause a conflict if multiple clients happen to modify the same document. It is said here that: Operations on a single document are always atomic with MongoDB databases I'm a bit confused about what Operations means. Does this mean that the findById() will acquire the lock until doc is out of scope (after the response is sent), so there wouldn't be conflicts? (I don't think so) If not, how to modify my code to support multiple clients knowing that they will modify Collection? Will Mongoose report a conflict if it occurs? How to handle the possible conflict? Is it possible to manually lock the Collection? I see suggestions to use Mongoose's versionKey (or timestamp) and retry for stale documents Don't use MongoDB altogether... Thanks. EDIT Thanks @jibsales for the pointer, I now use Mongoose's versionKey (timestamp will also work) to avoid committing conflicts. aaronheckmann - Mongoose v3 part 1 :: Versioning See this sample code: https://gist.github.com/anonymous/9dc837b1ef2831c97fe8
[ "Operations refers to reads/writes. Bare in mind that MongoDB is not an ACID compliant data layer and if you need true ACID compliance, you're better off picking another tech. That said, you can achieve atomicity and isolation via the Two Phase Commit technique outlined in this article in the MongoDB docs. This is no small undertaking, so be prepared for some heavy lifting as you'll need to work with the native driver instead of Mongoose. Again, my ultimate suggestion is to not drink the NoSQL koolaid if you need transaction support which it sounds like you do.\n", "When MongoDB receives a request to update a document, it will lock the database until it has completed the operation. Any other requests that MongoDB receives will wait until the locking operation has completed and the database is unlocked. This lock/wait behavior is automatic, so there aren't any conflicts to handle. You can find a lot more information about this behavior in the Concurrency section of the FAQ.\nSee jibsales answer for links to MongoDB's recommended technique for doing multi-document transactions.\nThere are a couple of NoSQL databases that do full ACID transactions, which would make your life a lot easier. FoundationDB is one such database. Data is stored as Key-Value but it supports multiple data models through layers.\nFull disclosure: I'm an engineer at FoundationDB.\n", "In my case I was wrong when \"try to query the dynamic field with the upsert option\". This tutorials helped me: How to solve error E11000 duplicate\n" ]
[ 3, 1, 0 ]
[]
[]
[ "mongodb", "mongoose", "node.js" ]
stackoverflow_0024808473_mongodb_mongoose_node.js.txt
Q: Flutter: go_router how to pass multiple parameters to other screen? In vanilla Flutter I used to pass multiple parameters to another screen like this: Navigator.of(context).push(MaterialPageRoute( builder: (_) => CatalogFilterPage( list: list, bloc: bloc, ))) Pretty simple and easy. I can pass the 2 needed parameters, list and bloc, and afterwards I use them in CatalogFilterPage. Now after switching to go_router and looking through the documentation I can't seem to find how to pass multiple pieces of data. Even passing a single object seems not that good: onTap: () => context.pushNamed('SelectedCatalogItem', extra: list[index]), And in the router I have to use casting to set the correct type: builder: (context, state) => SelectedCatalogItem( item: state.extra as ListItemsModel, ), It was fine for a single parameter. But now I don't have an idea how to pass multiple parameters. How can I do it? Is passing parameters, like models, as an extra even the right way? P.S. I know that you can pass parameters as context.pushNamed('CatalogFilterPage', params: ___), but params has type of Map<String, String> which does not let me pass models A: From the go_router documentation we can see that: The extra object is useful if you'd like to simply pass along a single object to the builder function w/o passing an object identifier via a URI and looking up the object from a store. What you can do instead is pass in the id/name of the 'SelectedCatalogItem' as params and form the Object later on (if possible). The 'params' parameter lets us pass in more than one field onTap: () => context.pushNamed('SelectedCatalogItem', params: {"id":list[index].id.toString(),"name":list[index].name}), Then in the builder function of the GoRouter we can do: GoRoute( path: 'selectedCatalogItem/view/:id/:name', name: 'SelectedCatalogItem', builder: (context, state){ int id = int.parse(state.params['id']!); String name = state.params['name']!; // use id and name to your use... }); A: Hello, I just tried with my own code. So, in my case, I want to pass MenuModels from HomeScreen to another screen (ItemScreen) context.push( '/item-screen', extra: widget.menuModels, ), and in my route.dart GoRoute( path: '/item-screen', builder: (context, state) { MenuModels models = state.extra as MenuModels; return ItemScreen( menuModels: models, ); }, ), Hopefully it can help you. Thanks
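Combining the two answers above: the string identifiers can travel as path params while the non-string object rides along in extra, in a single pushNamed call. A sketch (the bloc argument and CatalogBloc type are hypothetical; note that newer go_router releases renamed params to pathParameters):

onTap: () => context.pushNamed(
  'SelectedCatalogItem',
  params: {'id': list[index].id.toString()}, // strings only in the URI
  extra: bloc,                               // one arbitrary object per push
),

// ...and in the route:
GoRoute(
  name: 'SelectedCatalogItem',
  path: 'selectedCatalogItem/:id',
  builder: (context, state) => SelectedCatalogItem(
    id: int.parse(state.params['id']!),
    bloc: state.extra as CatalogBloc, // hypothetical type
  ),
),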
Flutter: go_router how to pass multiple parameters to other screen?
In vanilla Flutter I used to pass multiple parameters to another screen like this: Navigator.of(context).push(MaterialPageRoute( builder: (_) => CatalogFilterPage( list: list, bloc: bloc, ))) Pretty simple and easy. I can pass the 2 needed parameters, list and bloc, and afterwards I use them in CatalogFilterPage. Now after switching to go_router and looking through the documentation I can't seem to find how to pass multiple pieces of data. Even passing a single object seems not that good: onTap: () => context.pushNamed('SelectedCatalogItem', extra: list[index]), And in the router I have to use casting to set the correct type: builder: (context, state) => SelectedCatalogItem( item: state.extra as ListItemsModel, ), It was fine for a single parameter. But now I don't have an idea how to pass multiple parameters. How can I do it? Is passing parameters, like models, as an extra even the right way? P.S. I know that you can pass parameters as context.pushNamed('CatalogFilterPage', params: ___), but params has type of Map<String, String> which does not let me pass models
[ "From the go_router documentation we can see that:\n\nThe extra object is useful if you'd like to simply pass along a single object to the builder function w/o passing an object identifier via a URI and looking up the object from a store.\n\nWhat you can do instead is pass in the id/name of the 'SelectedCatalogItem' as params and form the Object later on (if possible). The 'params' parameter lets us pass in more than one fields\nonTap: () => context.pushNamed('SelectedCatalogItem',\n params: {\"id\":list[index].id.toString(),\"name\":list[index].name}),\n\nThen in the builder function of the GoRouter we can do:\nGoRoute(\n path: 'selectedCatalogItem/view/:id/:name',\n name: 'SelectedCatalogItem',\n builder: (context, state){\n int id = int.parse(state.params['id']!);\n String name = state.params['name']!;\n // use id and name to your use...\n });\n\n", "Helo, i just try with my own code. So, in my case, i want to parse MenusModels from HomeScreen to other screen (ItemScreen)\ncontext.push(\n '/item-screen',\n extra: widget.menuModels,\n ),\n\nand in my route.dart\nGoRoute(\n path: '/item-screen',\n builder: (context, state) {\n MenuModels models = state.extra as MenuModels;\n return ItemScreen(\n menuModels: models,\n );\n },\n ),\n\nHopefully its can help you. Thanks\n" ]
[ 1, 0 ]
[]
[]
[ "dart", "flutter", "routes" ]
stackoverflow_0072976031_dart_flutter_routes.txt
Q: Typescript library to generate service interfaces from OpenAPI v3 I'm looking for an npm library to generate Typescript service interfaces from the OpenAPI spec file. For example: export type getUserRequest { id: string; // from path } export type getUserResponse { id: string; name: string; display_name: string; } export interface UserController { getUser(req: getUserRequest): Promise<getUserResponse>; } This is so crucial and convenient that I could create implementations and add a custom express middleware to map between the OpenAPI spec and the Controller. I've been looking for this and was so surprised I couldn't find it. Or am I missing any lib which could solve my issue? A: After searching around, nothing met what I was looking for. The closest thing I've found is openapi-typescript-codegen It is a client SDK generator. But it is so cool that I could use it as the server service interface by building an express middleware and some coding on top. p.s. Will update the answer to share my middleware code when I have time.
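Pending the middleware code the answer promises, here is a rough TypeScript sketch of the adapter idea, using hand-written types shaped like the question's example (the route string and wiring are illustrative, not output from any generator):

import express from "express";

interface GetUserRequest { id: string; }
interface GetUserResponse { id: string; name: string; display_name: string; }

interface UserController {
  getUser(req: GetUserRequest): Promise<GetUserResponse>;
}

// Map one OpenAPI operation onto a typed controller method.
function mountUserRoutes(controller: UserController): express.Router {
  const router = express.Router();
  router.get("/users/:id", async (req, res, next) => {
    try {
      res.json(await controller.getUser({ id: req.params.id }));
    } catch (err) {
      next(err); // defer to the app's error handler
    }
  });
  return router;
}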
Typescript library to generate service interfaces from OpenAPI v3
I'm looking for an npm library to generate Typescript service interfaces from the OpenAPI spec file. For example: export type getUserRequest { id: string; // from path } export type getUserResponse { id: string; name: string; display_name: string; } export interface UserController { getUser(req: getUserRequest): Promise<getUserResponse>; } This is so crucial and convenient that I could create implementations and add a custom express middleware to map between the OpenAPI spec and the Controller. I've been looking for this and was so surprised I couldn't find it. Or am I missing any lib which could solve my issue?
[ "After searching around and nothing met what I'm looking for.\nThe closest thing I've found is openapi-typescript-codegen\nIt is a client SDK generator.\nBut it is so cool that I could use it as the server service interface\nby building an express middleware and some coding on top.\np.s. Will update the answer to share my middleware code when I have time.\n" ]
[ 0 ]
[]
[]
[ "express", "openapi", "typescript" ]
stackoverflow_0074166010_express_openapi_typescript.txt
Q: How to run Jest tests sequentially? I'm running Jest tests via npm test. Jest runs tests in parallel by default. Is there any way to make the tests run sequentially? I have some tests calling third-party code that relies on changing the current working directory. A: CLI options are documented and also accessible by running the command jest --help. You'll see the option you are looking for: --runInBand. A: I'm still getting familiar with Jest, but it appears that describe blocks run synchronously whereas test blocks run asynchronously. I'm running multiple describe blocks within an outer describe that looks something like this: describe describe test1 test2 describe test3 In this case, test3 does not run until test2 is complete because test3 is in a describe block that follows the describe block that contains test2. A: It worked for me, ensuring sequential running of tests neatly separated into modules: 1) Keep tests in separate files, but without spec/test in the naming. |__testsToRunSequentially.test.js |__tests |__testSuite1.js |__testSuite2.js |__index.js 2) Each file with a test suite should look like this (testSuite1.js): export const testSuite1 = () => describe(/*your suite inside*/) 3) Import them into testsToRunSequentially.test.js and run with --runInBand: import { testSuite1, testSuite2 } from './tests' describe('sequentially run tests', () => { testSuite1() testSuite2() }) A: Use the serial test runner: npm install jest-serial-runner --save-dev Set up jest to use it, e.g. in jest.config.js: module.exports = { ..., runner: 'jest-serial-runner' }; You could use the project feature to apply it only to a subset of tests. See https://jestjs.io/docs/en/configuration#projects-arraystring--projectconfig A: As copied from https://github.com/facebook/jest/issues/6194#issuecomment-419837314 test.spec.js import { signuptests } from './signup' import { logintests } from './login' describe('Signup', signuptests) describe('Login', logintests) signup.js export const signuptests = () => { it('Should have login elements', () => {}); it('Should Signup', () => {}); } login.js export const logintests = () => { it('Should Login', () => {}); } A: I needed this for handling end-to-end tests alongside regular tests, and the runInBand solution was not enough for me. Yes: it ensures within test suites/files that the order works, but the files themselves run in an order chosen essentially for parallelization by Jest, and it's not easy to control. If you need a stable sequential order for the test suites themselves, this is how you can do it. So in addition to the --runInBand, I did the following. I'm using separate projects for this, by the way, within a single repository. My jest.config.js looks like this: module.exports = { testSequencer: "./__e2e__/jest/customSequencer.js", projects: [{ "rootDir": "<rootDir>/__e2e__", "displayName": "end-to-end", ... Here, I explicitly added the displayName to be end-to-end, which I'll use later. You can have as many projects as you like, as usual, but I have two, one for normal unit tests, and one for end-to-end. Note that the testSequencer field has to be global. If you attach it to a project, it'll be validated but then ignored silently. That's a Jest decision to make sequencing nice for running multiple projects. The testSequencer field points to a file containing this. This imports a default version of the test sequencer, and then partitions the tests into two sets, one for the tests in the end-to-end project, and all the rest. 
All the rest are delegated to the inherited sequencer, but those in the end to end set are sorted alphabetically and then concatenated. const Sequencer = require('@jest/test-sequencer').default; const isEndToEnd = (test) => { const contextConfig = test.context.config; return contextConfig.displayName.name === 'end-to-end'; }; class CustomSequencer extends Sequencer { sort(tests) { const copyTests = Array.from(tests); const normalTests = copyTests.filter((t) => ! isEndToEnd(t)); const endToEndTests = copyTests.filter((t) => isEndToEnd(t)); return super.sort(normalTests).concat(endToEndTests.sort((a, b) => (a.path > b.path ? 1 : -1))); } } module.exports = CustomSequencer; This combo runs all the regular tests as Jest likes, but always runs the end to end ones at the end in alpha order, giving my end-to-end tests the stable order they need. A: Just in case anyone wants to keep all jest configuration in the package.json options: runInBand does not seem to be a valid config option. This means that you can end up with the setup below which does not seem 100% perfect. "scripts": { "test": "jest --runInBand" }, ... "jest": { "verbose": true, "forceExit": true, "preset": "ts-jest", "testURL": "http://localhost/", "testRegex": "\\.test\\.ts$", ... } ... However, you can add the runInBand behavior using the maxWorkers option like below: "scripts": { "test": "jest" }, ... "jest": { "verbose": true, "maxWorkers": 1, "forceExit": true, "preset": "ts-jest", "testURL": "http://localhost/", "testRegex": "\\.test\\.ts$", ... } ... A: Yes, and you can also run all tests in a specific order, although generally your tests should be independent so I'd strongly caution against relying on any specific ordering. Having said that, there may be a valid case for controlling the test order, so you could do this: Add --runInBand as an option when running jest, e.g. in package.json. This will run tests in sequence rather than in parallel (asynchronously). Using --runInBand can prevent issues like setup/teardown/cleanup in one set of tests interfering with other tests: "scripts": {"test": "jest --runInBand"} Put all tests into a separate folder (e.g. a separate folder under __tests__, named test_suites): __tests__ test_suites test1.js test2.js Configure jest in package.json to ignore this test_suites folder: "jest": { "testPathIgnorePatterns": ["/test_suites"] } Create a new file under __tests__ e.g. tests.js - this is now the only test file that will actually run. In tests.js, require the individual test files in the order that you want to run them: require('./test_suites/test1.js'); require('./test_suites/test2.js'); Note - this will cause the afterAll() in the tests to be run once all tests have completed. Essentially it's breaking the independence of tests and should be used in very limited scenarios. A: From the Jest documentation: Jest executes all describe handlers in a test file before it executes any of the actual tests. This is another reason to do setup and teardown inside before* and after* handlers rather than inside the describe blocks. Once the describe blocks are complete, by default Jest runs all the tests serially in the order they were encountered in the collection phase, waiting for each to finish and be tidied up before moving on. Take a look at the example that the jest site gives. 
A: If you are a newbie in Jest and looking for a complete, step-by-step example on how to make a specific test file ALWAYS run first or last, here it goes: Create a file called "testSequencer.js" in any path you'd like. Paste the code below into that file. const TestSequencer = require('@jest/test-sequencer').default; const path = require('path'); class CustomSequencer extends TestSequencer { sort(tests) { const target_test_path = path.join(__dirname, 'target.test.js'); const target_test_index = tests.findIndex(t => t.path === target_test_path); if (target_test_index === -1) { return tests; } const target_test = tests[target_test_index]; const ordered_tests = tests; ordered_tests.splice(target_test_index, 1); ordered_tests.push(target_test); // adds to the tail // ordered_tests.unshift(target_test); // adds to the head return ordered_tests; } } module.exports = CustomSequencer; Set the "maxWorkers" option to 1 in your package.json jest configuration. Also, set the "testSequencer" option to your newly created "testSequencer.js" file's path. { "name": "myApp", "version": "1.0.0", "main": "app.js", "scripts": { "start": "node app.js", "dev": "nodemon app.js", "test": "jest" }, "author": "Company", "license": "MIT", "dependencies": { ... }, "devDependencies": { "jest": "^27.5.1", ... }, "jest": { "testSequencer": "./testSequencer.js", "maxWorkers": 1 } } Run npm test and observe that every test file will be run one by one, upon completion of each. You sacrifice some time, but you guarantee the order this way. Bonus: You can also order your test files alphabetically, by folder name etc. Just modify the "testSequencer.js" file to your preference, and return an array that's in the same format as the "tests" array, which is a parameter of your main "sort" function, and you will be good. A: Jest runs all the tests serially in the order they were encountered in the collection phase You can leverage that and create a special test file alltests.ordered-test.js: import './first-test' import './second-test' // etc. And add a jest config with testMatch that would run only the test file with that name. That will load each file in that order and thus execute them in the same order.
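For that last answer's single-entry-file trick, the matching configuration is just a testMatch narrowing; a sketch, using the file name from the answer:

// jest.config.js
module.exports = {
  // Only the ordered entry file is collected; everything it imports
  // runs in import order inside this one suite.
  testMatch: ['**/alltests.ordered-test.js'],
};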
How to run Jest tests sequentially?
I'm running Jest tests via npm test. Jest runs tests in parallel by default. Is there any way to make the tests run sequentially? I have some tests calling third-party code that relies on changing the current working directory.
[ "CLI options are documented and also accessible by running the command jest --help.\nYou'll see the option you are looking for : --runInBand.\n", "I'm still getting familiar with Jest, but it appears that describe blocks run synchronously whereas test blocks run asynchronously. I'm running multiple describe blocks within an outer describe that looks something like this:\ndescribe\n describe\n test1\n test2\n\n describe\n test3\n\nIn this case, test3 does not run until test2 is complete because test3 is in a describe block that follows the describe block that contains test2.\n", "It worked for me ensuring sequential running of nicely separated to modules tests:\n1) Keep tests in separated files, but without spec/test in naming.\n|__testsToRunSequentially.test.js\n|__tests\n |__testSuite1.js\n |__testSuite2.js\n |__index.js\n\n2) File with test suite also should look like this (testSuite1.js):\nexport const testSuite1 = () => describe(/*your suite inside*/)\n\n3) Import them to testToRunSequentially.test.js and run with --runInBand:\nimport { testSuite1, testSuite2 } from './tests'\n\ndescribe('sequentially run tests', () => {\n testSuite1()\n testSuite2()\n})\n\n", "Use the serial test runner:\nnpm install jest-serial-runner --save-dev\n\nSet up jest to use it, e.g. in jest.config.js:\nmodule.exports = {\n ...,\n runner: 'jest-serial-runner'\n};\n\nYou could use the project feature to apply it only to a subset of tests. See https://jestjs.io/docs/en/configuration#projects-arraystring--projectconfig\n", "As copied from https://github.com/facebook/jest/issues/6194#issuecomment-419837314\ntest.spec.js\nimport { signuptests } from './signup'\nimport { logintests } from './login'\n\ndescribe('Signup', signuptests)\ndescribe('Login', logintests)\n\nsignup.js\nexport const signuptests = () => {\n it('Should have login elements', () => {});\n it('Should Signup', () => {}});\n}\n\nlogin.js\nexport const logintests = () => {\n it('Should Login', () => {}});\n}\n\n", "I needed this for handling end-to-end tests alongside regular tests, and the runInBand solution was not enough for me. Yes: it ensures within test suites/files that the order works, but the files themselves run in an order chosen essentially for parallelization by Jest, and it's not easy to control. If you need a stable sequential order for the test suites themselves, this is how you can do it.\nSo in addition to the --runInBand, I did the following. I'm using separate projects for this, by the way, within a single repository.\n\nMy jest.config.js looks like this:\n module.exports = {\n testSequencer: \"./__e2e__/jest/customSequencer.js\",\n projects: [{\n \"rootDir\": \"<rootDir>/__e2e__\",\n \"displayName\": \"end-to-end\",\n ...\n\nHere, I explicitly added the displayName to be end-to-end, which\nI'll use later. You can have as many projects as you like, as usual, but\nI have two, one for normal unit tests, and one for end-to-end.\nNote that the testSequencer field has to be global. If you attach it\nto a project, it'll be validated but then ignored silently. That's a\nJest decision to make sequencing nice for running multiple projects.\n\nThe testSequencer field points to a file containing this. This imports\na default version of the test sequencer, and then partitions the tests\ninto two sets, one for the tests in the end-to-end project, and all the\nrest. 
All the rest are delegated to the inherited sequencer, but those in\nthe end to end set are sorted alphabetically and then concatenated.\n const Sequencer = require('@jest/test-sequencer').default;\n\n const isEndToEnd = (test) => {\n const contextConfig = test.context.config;\n return contextConfig.displayName.name === 'end-to-end';\n };\n\n class CustomSequencer extends Sequencer {\n sort(tests) {\n const copyTests = Array.from(tests);\n const normalTests = copyTests.filter((t) => ! isEndToEnd(t));\n const endToEndTests = copyTests.filter((t) => isEndToEnd(t));\n return super.sort(normalTests).concat(endToEndTests.sort((a, b) => (a.path > b.path ? 1 : -1)));\n }\n }\n\n module.exports = CustomSequencer;\n\n\n\nThis combo runs all the regular tests as Jest likes, but always runs the end to end ones at the end in alpha order, giving my end-to-end tests the extra stability for user models the order they need.\n", "Just in case anyone wants to keep all jest configuration in the package.json options.\nrunInBand does not seem to be a valid config option. This means that you can end up with the setup below which does not seem 100% perfect.\n\"scripts\": {\n \"test\": \"jest --runInBand\"\n},\n...\n\"jest\": {\n \"verbose\": true,\n \"forceExit\": true,\n \"preset\": \"ts-jest\",\n \"testURL\": \"http://localhost/\",\n \"testRegex\": \"\\\\.test\\\\.ts$\",\n ...\n }\n...\n\nHowever, you can add the runInBand using maxWorkers option like below:\n \"scripts\": {\n \"test\": \"jest\"\n },\n ...\n \"jest\": {\n \"verbose\": true,\n \"maxWorkers\": 1,\n \"forceExit\": true,\n \"preset\": \"ts-jest\",\n \"testURL\": \"http://localhost/\",\n \"testRegex\": \"\\\\.test\\\\.ts$\",\n ...\n }\n ...\n\n", "Yes, and you can also run all tests in a specific order, although generally your tests should be independent so I'd strongly caution against relying on any specific ordering. Having said that, there may be a valid case for controlling the test order, so you could do this:\n\nAdd --runInBand as an option when running jest, e.g. in package.json. This will run tests in sequence rather than in parallel (asynchronously). Using --runInBand can prevent issues like setup/teardown/cleanup in one set of tests intefering with other tests:\n\n\"scripts\": {\"test\": \"jest --runInBand\"}\n\nPut all tests into separate folder (e.g. a separate folder under __tests__, named test_suites):\n__tests__\n test_suites\n test1.js\n test2.js\n\nConfigure jest in package.json to ignore this test_suites folder:\n\"jest\": { \"testPathIgnorePatterns\": [\"/test_suites\"] }\n\nCreate a new file under __tests__ e.g. tests.js - this is now the only test file that will actually run.\n\nIn tests.js, require the individual test files in the order that you want to run them:\nrequire('./test_suites/test1.js');\nrequire('./test_suites/test2.js');\n\n\nNote - this will cause the afterAll() in the tests to be run once all tests have completed. Essentially it's breaking the independence of tests and should be used in very limited scenarios.\n", "From the Jest documentation:\n\nJest executes all describe handlers in a test file before it executes\nany of the actual tests. 
This is another reason to do setup and\nteardown inside before* and after* handlers rather than inside the\ndescribe blocks.\n\n\nOnce the describe blocks are complete, by default\nJest runs all the tests serially in the order they were\nencountered in the collection phase, waiting for each to finish and be\ntidied up before moving on.\n\nTake a look at the example that the jest site gives.\n", "If you are a newbie in Jest and looking for a complete, step-by-step example on how to make a specific test file ALWAYS run first or last, here it goes:\n\nCreate a file called \"testSequencer.js\" in any path you'd like.\nPaste the code below into that file.\n\nconst TestSequencer = require('@jest/test-sequencer').default;\nconst path = require('path');\n\nclass CustomSequencer extends TestSequencer {\n sort(tests) {\n const target_test_path = path.join(__dirname, 'target.test.js');\n\n const target_test_index = tests.findIndex(t => t.path === target_test_path);\n\n if (auth_test_index == -1) {\n return tests;\n }\n\n const target_test = tests[target_test_index];\n\n const ordered_tests = tests;\n\n ordered_tests.splice(target_test_index, 1);\n ordered_tests.push(target_test); // adds to the tail\n // ordered_tests.unshift(target_test); // adds to the head\n\n return ordered_tests;\n }\n}\n\nmodule.exports = CustomSequencer;\n\n\nSet \"maxWorkers\" option as \"true\" in your package.json jest configuration. Also, set \"testSequencer\" option as your newly created \"testSequencer.js\" file's path.\n\n{\n \"name\": \"myApp\",\n \"version\": \"1.0.0\",\n \"main\": \"app.js\",\n \"scripts\": {\n \"start\": \"node app.js\",\n \"dev\": \"nodemon app.js\",\n \"test\": \"jest\"\n },\n \"author\": \"Company\",\n \"license\": \"MIT\",\n \"dependencies\": {\n ...\n },\n \"devDependencies\": {\n \"jest\": \"^27.5.1\",\n ...\n },\n \"jest\": {\n \"testSequencer\": \"./testSequencer.js\",\n \"maxWorkers\": 1\n }\n}\n\n\nRun npm test and observe that every test file will be run one by one, upon completion of each. You sacrifice some time, but you guarantee the order this way.\n\nBonus: You can also order your test files alphabetically, by folder name etc. Just modify \"testSequencer.js\" file to your preference, and return an array that's in the same format as the \"tests\" array, which is a parameter of your main \"sort\" function, and you will be good.\n", "\nJest runs all the tests serially in the order they were encountered in the collection phase\n\nYou can leverage that and create special test file alltests.ordered-test.js:\nimport './first-test'\nimport './second-test'\n// etc.\n\nAnd add a jest config with testMatch that would run test with that file name.\nThat will load each file in that order thus execute them in the same order.\n" ]
[ 353, 61, 35, 17, 14, 10, 6, 4, 3, 3, 0 ]
[]
[]
[ "jestjs" ]
stackoverflow_0032751695_jestjs.txt
Q: Save File in PHPExcel Is it possible to save an existing file without creating the writer object in PHPExcel? $filename = 'test.xlsx'; $phpExcel = PHPExcel_IOFactory::load($filename); $phpExcel->save($filename);
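For reference, PHPExcel only saves through its writer classes, so the shortest round-trip still creates one; a minimal sketch using the Excel2007 writer for .xlsx files:

<?php
require_once 'PHPExcel.php';

$filename = 'test.xlsx';
$phpExcel = PHPExcel_IOFactory::load($filename);

// There is no $phpExcel->save(); a writer mediates every save.
$writer = PHPExcel_IOFactory::createWriter($phpExcel, 'Excel2007');
$writer->save($filename);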
Save File in PHPExcel
Is it possible to save an existing file without creating the writer object in PHPExcel? $filename = 'test.xlsx'; $phpExcel = PHPExcel_IOFactory::load($filename); $phpExcel->save($filename);
[]
[]
[ "header('Content-Type: application/vnd.ms-excel');\n header('Content-Disposition: attachment;filename=\"filename.xls\"');\n header('Cache-Control: max-age=0');\n\n $objWriter = PHPExcel_IOFactory::createWriter($excel_obj, 'Excel2007');\n $objWriter->save('php://output');\n\n" ]
[ -1 ]
[ "php", "phpexcel", "phpexcel_1.8.0" ]
stackoverflow_0074665019_php_phpexcel_phpexcel_1.8.0.txt
Q: Tree Traversal applications I was wondering if anyone knew the questions on my practice midterm and understood the answers. In programming, it is often best to copy an existing function that is similar to your new needs, and then alter the copy to suit the new requirements. If I wanted a function to print out a range of numbers in reverse order, which of the four traversal functions would you copy as a basis for the new function? Answer: Inorder traversal() Given a binary tree that contains the results of a 64-team single elimination tournament, I want to print out the six teams that F beat. Note the diagram below only shows the upper parts of the much larger tree. My code only follows the path of F's victories down from the root, so it is not a true tree traversal. Nonetheless, which tree traversal code would my code most closely parallel and why? Only one or two sentences are needed, with 8 points for the correct traversal, and seven points for an explanation of your choice. (I put the image given at the top of my post) Answer: Post order traversal because it would have to look at both children first to find the loser and determine the right path to take. A: For question 1, if it is on a binary search tree, then in-order traversal would give you a sorted list of values. So if you are looking for values within a given range, it would be best to use in-order traversal. For question 2, since it is representing a tournament, the tree is built in a bottom-up approach, which is post-order. We also know that F has to be in a leaf, so post-order is the best way. A: Let us understand In-order and Post-order traversals first: In-Order traversal: This traversal gives a sorted array, in increasing order. Algorithm: 1. Traverse left-subtree 2. Visit Root 3. Traverse right-subtree Post-Order traversal: Post-order traversal is generally used in deleting a tree. The algorithm is as follows: 1. Traverse left-subtree 2. Traverse right-subtree 3. Visit Root Answer 1: In order traversal results in a sorted array of values. Answer 2: Post order traversal would be the right choice for this, because it will check both the children nodes, and if F is found, that path will be followed. Furthermore, F will eventually be a leaf node; due to this, post-order becomes a good choice.
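To make the in-order reasoning concrete, a small sketch (Python; a Node with left, right, and value attributes is assumed) showing why in-order is the template for question 1: mirror the subtree order and a binary search tree's values come out descending instead of ascending.

def in_order(node):                # left, root, right -> ascending
    if node is None:
        return
    yield from in_order(node.left)
    yield node.value
    yield from in_order(node.right)

def reverse_in_order(node):        # right, root, left -> descending
    if node is None:
        return
    yield from reverse_in_order(node.right)
    yield node.value
    yield from reverse_in_order(node.left)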
Tree Traversal applications
I was wondering if anyone knew the questions on my practice midterm and understood the answers. In programming, it is often best to copy an existing function that is similar to your new needs, and then alter the copy to suit the new requirements. If I wanted a function to print out a range of numbers in reverse order, which of the four traversal functions would you copy as a basis for the new function? Answer: Inorder traversal() Given a binary tree that contains the results of a 64-team single elimination tournament, I want to print out the six teams that F beat. Note the diagram below only shows the upper parts of the much larger tree. My code only follows the path of F's victories down from the root, so it is not a true tree traversal. Nonetheless, which tree traversal code would my code most closely parallel and why? Only one or two sentences are needed, with 8 points for the correct traversal, and seven points for an explanation of your choice. (I put the image given at the top of my post) Answer: Post order traversal because it would have to look at both children first to find the loser and determine the right path to take.
[ "For question 1, if it is on a binary search tree, then in-order traversal would give you a sorted list of values. So if you are looking for values within a given range, it would be best to use in-order traversal.\nFor question 2, since it is representing a tournament, the tree is built in a bottom-up approach, which is post-order. We also know that F has to be in a leaf, so post-order is the best way.\n", "Let us understand In-order and Post-order traversals first:\nIn-Order traversal: This traversal gives a sorted array, in an increasing order. Algorithm:\n 1. Traverse left-subtree\n 2. Visit Root\n 3. Traverse right-subtree\n\nPost-Order traversal: Post-order traversal is generally used in deleting a tree. The algorithm is as follows:\n 1. Traverse left-subtree\n 2. Traverse right-subtree\n 3. Visit Root\n\nAnswer 1: In order traversal results in a sorted array of values.\nAnswer 2: Post order traversal would be the right choice for this, because it will check both the children nodes, and if F is found, that path will be followed. Furthermore, F will be a eventually be a leaf node, due to this post-order becomes a good choice.\n" ]
[ 2, 0 ]
[]
[]
[ "binary_tree", "inorder", "postorder", "traversal", "tree" ]
stackoverflow_0028499965_binary_tree_inorder_postorder_traversal_tree.txt
Q: Recurring "expected" error in unity and can't find whats missing/wrong I keep getting the error "Assets\scripts\PlayerMovement.cs(28,42): error CS1002: ; expected" from unity console but, I can't find whats wrong in the area below. (Input.GetAxisRaw("Horizontal")) { stopwatch.Start(); } if ((stopwatch > 1) && (stopwatch < 2)) { runSpeed = (runSpeed * 1.5); } else if (stopwatch >= 2) { runSpeed = (runSpeed * 2); } if (Input.GetAxisRaw("Horizontal") = 0) { stopwatch.Stop(); } I tried removing the parenthesis and moving around the semicolons. I don't know what else I can do because I'm learning this stuff as a I go. I'm trying to make it where the longer the player is moving left or right the faster they go in 3 different speeds. The default speed of 40, the second speed which is 60, and the third and final speed which is 80. A: Why did you put a single equal-sign in the last if-statement, you should try to put a second equal-sign in the last if-statement. Because of that, it sets your method is set to zero in the if-statement but is not checked whether it is zero or not. Just replace if(Input.GetAxisRaw("Horizontal")=0) With if(Input.GetAxisRaw("Horizontal")==0) Also in your very first line of code there is no if-statement, but the brackets exist, also in the very first line the condition is not check whether you are moving, so replace (Input.GetAxisRaw("Horizontal")) With if(Input.GetAxisRaw("Horizontal")!=0) that might help
Recurring "expected" error in unity and can't find whats missing/wrong
I keep getting the error "Assets\scripts\PlayerMovement.cs(28,42): error CS1002: ; expected" from the Unity console, but I can't find what's wrong in the area below. (Input.GetAxisRaw("Horizontal")) { stopwatch.Start(); } if ((stopwatch > 1) && (stopwatch < 2)) { runSpeed = (runSpeed * 1.5); } else if (stopwatch >= 2) { runSpeed = (runSpeed * 2); } if (Input.GetAxisRaw("Horizontal") = 0) { stopwatch.Stop(); } I tried removing the parentheses and moving around the semicolons. I don't know what else I can do because I'm learning this stuff as I go. I'm trying to make it where the longer the player is moving left or right, the faster they go, in 3 different speeds. The default speed of 40, the second speed which is 60, and the third and final speed which is 80.
[ "Why did you put a single equal-sign in the last if-statement, you should try to put a second equal-sign in the last if-statement.\nBecause of that, it sets your method is set to zero in the if-statement but is not checked whether it is zero or not.\nJust replace\nif(Input.GetAxisRaw(\"Horizontal\")=0)\n\nWith\nif(Input.GetAxisRaw(\"Horizontal\")==0)\n\nAlso in your very first line of code there is no if-statement, but the brackets exist, also in the very first line the condition is not check whether you are moving, so replace\n(Input.GetAxisRaw(\"Horizontal\"))\n\nWith\nif(Input.GetAxisRaw(\"Horizontal\")!=0)\n\nthat might help\n" ]
[ 0 ]
[]
[]
[ "c#", "compiler_errors", "if_statement", "timer" ]
stackoverflow_0074664372_c#_compiler_errors_if_statement_timer.txt
Q: CMake file cannot locate Qt charts module I'm trying to start writing my Qt project inside JetBrains' CLion but I need to link some libraries in my CMake file first. There's no problem when trying to find packages like Qt5Core, Qt5Widgets, Qt5Gui but when it comes to finding Qt5Charts an error is thrown: By not providing "FindQt5Charts.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Qt5Charts", but CMake did not find one. Could not find a package configuration file provided by "Qt5Charts" with any of the following names: Qt5ChartsConfig.cmake qt5charts-config.cmake Add the installation prefix of "Qt5Charts" to CMAKE_PREFIX_PATH or set "Qt5Charts_DIR" to a directory containing one of the above files. If "Qt5Charts" provides a separate development package or SDK, be sure it has been installed. This is my CMake file right now. All packages are installed via Qt's Linux (Ubuntu) maintenance tool. Any ideas how to help CMake find the Charts module? A: Use the following and see if it helps: sudo apt install libqt5charts5-dev Src: https://stackoverflow.com/a/46765025 A: Typically when including Qt5 in a project I use the following basic script for CMake, though I should note I haven't tested this on Linux. cmake_minimum_required(VERSION 3.10.0 FATAL_ERROR) project(<YOUR_PROJECT_NAME>) find_package(Qt5 REQUIRED COMPONENTS Core Gui Widgets Charts) # set your project sources and headers set(project_sources src/blah.cpp) set(project_headers include/headers/blah.h) # wrap your qt based classes with qmoc qt5_wrap_cpp(project_source_moc ${project_headers}) # add your build target add_executable(${PROJECT_NAME} ${project_sources} ${project_headers} ${project_source_moc}) # link to Qt5 target_link_libraries(${PROJECT_NAME} PUBLIC Qt5::Core Qt5::Gui Qt5::Widgets Qt5::Charts) # require C++ 14 target_compile_features(${PROJECT_NAME} PUBLIC cxx_std_14) When configuring your project via cmake, you just need to pass in the path to your qt5 installation directory (cmake variable name is Qt5_DIR) that contains the Qt5Config.cmake file and then cmake should be able to find the rest of the components that you request. Also double check that Qt5Charts was installed, not sure if it's installed by default. A: Maybe try this? sudo apt install libqt5charts5-dev
CMake file cannot locate Qt charts module
I'm trying to start writing my Qt project inside JetBrains' CLion but I need to link some libraries in my CMake file first. There's no problem when trying to find packages like Qt5Core, Qt5Widgets, Qt5Gui but when it comes to finding Qt5Charts an error is thrown: By not providing "FindQt5Charts.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Qt5Charts", but CMake did not find one. Could not find a package configuration file provided by "Qt5Charts" with any of the following names: Qt5ChartsConfig.cmake qt5charts-config.cmake Add the installation prefix of "Qt5Charts" to CMAKE_PREFIX_PATH or set "Qt5Charts_DIR" to a directory containing one of the above files. If "Qt5Charts" provides a separate development package or SDK, be sure it has been installed. This is my CMake file right now. All packages are installed via Qt's Linux (Ubuntu) maintenance tool. Any ideas how to help CMake find the Charts module?
[ "Using the following and see if it helps:\n\nsudo apt install libqt5charts5-dev\n\nSrc: https://stackoverflow.com/a/46765025\n", "Typically when including Qt5 in a project I use the follow basic script for CMake, though I should note I haven't tested this on Linux. \ncmake_minimum_required(VERSION 3.10.0 FATAL_ERROR)\nproject(<YOUR_PROJECT_NAME>)\n\nfind_package(Qt5 REQUIRED COMPONENTS Core Gui Widgets Charts)\n\n# set your project sources and headers\nset(project_sources src/blah.cpp)\nset(project_headers include/headers/blah.h)\n\n# wrap your qt based classes with qmoc\nqt5_wrap_cpp(project_source_moc ${project_headers})\n\n# add your build target\nadd_executable(${PROJECT_NAME} ${project_sources} ${project_headers} ${project_source_moc})\n\n# link to Qt5\ntarget_link_libraries(${PROJECT_NAME}\n PUBLIC\n Qt5::Core\n Qt5::Gui\n Qt5::Widgets\n Qt5::Charts)\n\n# require C++ 14\ntarget_compile_features(${PROJECT_NAME} PUBLIC cxx_std_14)\n\nWhen configuring your project via cmake, you just need to pass in the path to you qt5 installation directory (cmake variable name is Qt5_DIR) that contains the Qt5Config.cmake file and then cmake should be able to find the rest of the components that you request. \nAlso double check that Qt5Charts was installed, not sure if it's installed by default. \n", "Maybe try this?\nsudo apt install libqt5charts5-dev\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "c++", "clion", "cmake", "linux", "qt" ]
stackoverflow_0049367146_c++_clion_cmake_linux_qt.txt
Q: Name-spacing is breaking when creating UI elements within nested moduleServers I am developing a Shiny application with nested shiny modules, when I define variable UI elements within a nested module server the parent module name space is not inherited correctly. For example, if you had the following Parent module -> ns = parent Child module -> ns = child The UI when inspecting the application would display the name-spacing as 'parent-child-...' however when a UI element is defined from the child servers it is now only 'child-...'. To account for this I tried a hacky solution and it worked by pasting 'parent' in front of the 'child' id when creating the element. I've created an example to capture this issue. library(shiny) # Base UI and server elements ------------------------------------------------- histogramUI <- function(id) { ns <- NS(id) tagList( selectInput(ns("var"), "Variable", choices = names(mtcars)), numericInput(ns("bins"), "bins", value = 10, min = 1), plotOutput(ns("hist")) ) } histogramServer <- function(id) { moduleServer(id, function(input, output, session) { data <- reactive(mtcars[[input$var]]) output$hist <- renderPlot({ hist(data(), breaks = input$bins, main = input$var) }, res = 96) }) } # Button UI and server elements ------------------------------------------------ buttonUI <- function(id) { ns <- NS(id) uiOutput(ns("new_btn")) } # Server created button buttonServer <- function(id) { moduleServer(id, function(input, output, session) { observe({ req(input$var == "cyl") output$new_btn <- renderUI({ div( actionButton( # Does work \/\/\/ NS(paste0('test-', id), 'action_button'), # Doesn't work \/\/\/ # NS(id, 'action_button') label = "Button test") ) }) }) observeEvent(input$action_button, { # Printing the session id and selected var print(id) print(input$var) }) }) } # Master UI elements major_piece_of_func_ui <- function(id){ ns <- NS(id) div( histogramUI(ns("hist_test_1")), buttonUI (ns("hist_test_1")) ) } major_piece_of_func_serv <- function(id) { moduleServer(id, function(input, output, session) { histogramServer("hist_test_1") buttonServer ("hist_test_1") }) } # Ui and server construction ui <- fluidPage( major_piece_of_func_ui('test') ) server <- function(input, output, session) { major_piece_of_func_serv('test') } shinyApp(ui, server) I am very open to the fact that I may be going about this in the completely wrong way and am open to alternative solutions that at a minimum hold the following constraints: Constraints: Withhold the structure of nested modules Withhold the ability to create UI elements within child module servers Cheers, Aidan A: For a non-nested module, NS(id) is equivalent to session$ns in the server part. But not for a nested module. For a nested module, use session$ns, it returns the namespacing function with the composed namespace.
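Applying that advice to the question's buttonServer: grab session$ns inside the nested module server and drop the manual paste of the parent id. A sketch of just the relevant change:

buttonServer <- function(id) {
  moduleServer(id, function(input, output, session) {
    ns <- session$ns  # composed namespace, e.g. "test-hist_test_1-..."
    observe({
      req(input$var == "cyl")
      output$new_btn <- renderUI({
        div(actionButton(ns("action_button"), label = "Button test"))
      })
    })
    observeEvent(input$action_button, {
      print(id)
      print(input$var)
    })
  })
}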
Name-spacing is breaking when creating UI elements within nested moduleServers
I am developing a Shiny application with nested shiny modules, when I define variable UI elements within a nested module server the parent module name space is not inherited correctly. For example, if you had the following Parent module -> ns = parent Child module -> ns = child The UI when inspecting the application would display the name-spacing as 'parent-child-...' however when a UI element is defined from the child servers it is now only 'child-...'. To account for this I tried a hacky solution and it worked by pasting 'parent' in front of the 'child' id when creating the element. I've created an example to capture this issue. library(shiny) # Base UI and server elements ------------------------------------------------- histogramUI <- function(id) { ns <- NS(id) tagList( selectInput(ns("var"), "Variable", choices = names(mtcars)), numericInput(ns("bins"), "bins", value = 10, min = 1), plotOutput(ns("hist")) ) } histogramServer <- function(id) { moduleServer(id, function(input, output, session) { data <- reactive(mtcars[[input$var]]) output$hist <- renderPlot({ hist(data(), breaks = input$bins, main = input$var) }, res = 96) }) } # Button UI and server elements ------------------------------------------------ buttonUI <- function(id) { ns <- NS(id) uiOutput(ns("new_btn")) } # Server created button buttonServer <- function(id) { moduleServer(id, function(input, output, session) { observe({ req(input$var == "cyl") output$new_btn <- renderUI({ div( actionButton( # Does work \/\/\/ NS(paste0('test-', id), 'action_button'), # Doesn't work \/\/\/ # NS(id, 'action_button') label = "Button test") ) }) }) observeEvent(input$action_button, { # Printing the session id and selected var print(id) print(input$var) }) }) } # Master UI elements major_piece_of_func_ui <- function(id){ ns <- NS(id) div( histogramUI(ns("hist_test_1")), buttonUI (ns("hist_test_1")) ) } major_piece_of_func_serv <- function(id) { moduleServer(id, function(input, output, session) { histogramServer("hist_test_1") buttonServer ("hist_test_1") }) } # Ui and server construction ui <- fluidPage( major_piece_of_func_ui('test') ) server <- function(input, output, session) { major_piece_of_func_serv('test') } shinyApp(ui, server) I am very open to the fact that I may be going about this in the completely wrong way and am open to alternative solutions that at a minimum hold the following constraints: Constraints: Withhold the structure of nested modules Withhold the ability to create UI elements within child module servers Cheers, Aidan
[ "For a non-nested module, NS(id) is equivalent to session$ns in the server part. But not for a nested module. For a nested module, use session$ns, it returns the namespacing function with the composed namespace.\n" ]
[ 0 ]
[]
[]
[ "shiny", "shiny_reactivity", "shiny_server", "shinydashboard", "shinymodules" ]
stackoverflow_0074652792_shiny_shiny_reactivity_shiny_server_shinydashboard_shinymodules.txt
Q: eas build failed while compressing I've started using react native with expo. When trying to build the application with the command: eas build --profile development --platform android The build fails when it reaches Compressing project files and uploading to EAS Build s://expo.fyi/eas-build-archive Error: EPERM: operation not permitted, rmdir 'C:\Users\alok7\AppData\Local\Temp\eas-cli-nodejs\42a956b8-6863- 43cf-bb03-e249250926ea-shallow-clone' Code: EPERM - Compressing project files A: There are several reasons why an eas build might fail when compressing project files and uploading to EAS Build. Some possible causes include: Insufficient disk space: The build may fail if there is not enough free space on the disk where the build is being created. File corruption: If the files being compressed are corrupted, the build process may fail. Incorrect file permissions: The build process may fail if the user does not have the necessary permissions to access the files being compressed. Compression algorithm error: The build process may fail if there is an error in the compression algorithm being used. ---- Updated answer ---- The EPERM: operation not permitted, rmdir error that you are seeing is likely caused by a permission issue on your system. This error typically occurs when the process that is trying to remove the directory does not have the necessary permissions to do so. To fix this error, you can try the following steps: Make sure that you have the necessary permissions to access the directory. This may require you to run the command prompt or terminal as an administrator. If you are using Windows, try disabling any antivirus or security software that you have installed on your system. These programs can sometimes block access to certain directories or files, which can cause the EPERM error. If the issue persists, try manually deleting the directory using the rmdir command. For example: rmdir C:\Users\alok7\AppData\Local\Temp\eas-cli-nodejs\42a956b8-6863-43cf-bb03-e249250926ea-shallow-clone If the rmdir command still fails, you may need to grant yourself explicit permission to access the directory. You can do this by running the icacls command to grant yourself full control over the directory. For example: icacls C:\Users\alok7\AppData\Local\Temp\eas-cli-nodejs\42a956b8-6863-43cf-bb03-e249250926ea-shallow-clone /grant alok7:F
eas build failed while compressing
I've started using react native with expo. When trying to build the application with the command: eas build --profile development --platform android The build fails when it reaches Compressing project files and uploading to EAS Build s://expo.fyi/eas-build-archive Error: EPERM: operation not permitted, rmdir 'C:\Users\alok7\AppData\Local\Temp\eas-cli-nodejs\42a956b8-6863- 43cf-bb03-e249250926ea-shallow-clone' Code: EPERM - Compressing project files
[ "There are several reasons why an eas build might fail when compressing project files and uploading to EAS Build. Some possible causes include:\nInsufficient disk space: The build may fail if there is not enough free space on the disk where the build is being created.\nFile corruption: If the files being compressed are corrupted, the build process may fail.\nIncorrect file permissions: The build process may fail if the user does not have the necessary permissions to access the files being compressed.\nCompression algorithm error: The build process may fail if there is an error in the compression algorithm being used.\n---- Updated answer ----\nThe EPERM: operation not permitted, rmdir error that you are seeing is likely caused by a permission issue on your system. This error typically occurs when the process that is trying to remove the directory does not have the necessary permissions to do so.\nTo fix this error, you can try the following steps:\n\nMake sure that you have the necessary permissions to access the directory. This may require you to run the command prompt or terminal as an administrator.\n\nIf you are using Windows, try disabling any antivirus or security software that you have installed on your system. These programs can sometimes block access to certain directories or files, which can cause the EPERM error.\n\nIf the issue persists, try manually deleting the directory using the rmdir command. For example:\n\n\nrmdir C:\\Users\\alok7\\AppData\\Local\\Temp\\eas-cli-nodejs\\42a956b8-6863-43cf-bb03-e249250926ea-shallow-clone\n\nIf the rmdir command still fails, you may need to grant yourself explicit permission to access the directory. You can do this by running the icacls command to grant yourself full control over the directory. For example:\n\nicacls C:\\Users\\alok7\\AppData\\Local\\Temp\\eas-cli-nodejs\\42a956b8-6863-43cf-bb03-e249250926ea-shallow-clone /grant alok7:F\n" ]
[ 1 ]
[]
[]
[ "android", "eas", "expo", "react_native", "reactjs" ]
stackoverflow_0074664494_android_eas_expo_react_native_reactjs.txt
Q: How to get Bootstrap 4 button to change without page reload I'm trying to get a Bootstrap 4 button to change into a different style without needing the page to reload. I made a Subscribe button using the primary class, which is the color purple, when someone clicks it and subscribes, it changes into a secondary button, which is grey, although it doesn't change to the secondary grey color unless you refresh the page or leave and come back to it. Below is an image illustrating this: And here is the code: <a href="#" id="user_subscription" data-uid="{$user.UID}" data-subscribed="1" class="btn btn-secondary btn-bold btn-xs">{t c='user.subscribed'} <i class="fas fa-check"></i></a> {else} <a href="#" id="user_subscription" data-uid="{$user.UID}" data-subscribed="0" class="btn btn-primary btn-bold btn-xs" style="color:#fff;">{t c='user.subscribe'} <i class="fa fa-plus"></i></a> {/if} Is this possible to do? A: I'd say you can use HTML + Javascript to change it. On your HTML <div id='originalButtonClass'> <button type="button" class="btn btn-outline-info" onclick="changeButton()">Subscribe </button> </div> And in your JS function changeStyleColorBlind(){document.getElementById("originalButtonClass").style... ; }
How to get Bootstrap 4 button to change without page reload
I'm trying to get a Bootstrap 4 button to change into a different style without needing the page to reload. I made a Subscribe button using the primary class, which is the color purple, when someone clicks it and subscribes, it changes into a secondary button, which is grey, although it doesn't change to the secondary grey color unless you refresh the page or leave and come back to it. Below is an image illustrating this: And here is the code: <a href="#" id="user_subscription" data-uid="{$user.UID}" data-subscribed="1" class="btn btn-secondary btn-bold btn-xs">{t c='user.subscribed'} <i class="fas fa-check"></i></a> {else} <a href="#" id="user_subscription" data-uid="{$user.UID}" data-subscribed="0" class="btn btn-primary btn-bold btn-xs" style="color:#fff;">{t c='user.subscribe'} <i class="fa fa-plus"></i></a> {/if} Is this possible to do?
[ "I'd say you can use HTML + Javascript to change it.\nOn your HTML\n<div id='originalButtonClass'>\n <button type=\"button\" class=\"btn btn-outline-info\"\n onclick=\"changeButton()\">Subscribe\n </button>\n</div>\n\nAnd in your JS\nfunction changeStyleColorBlind(){document.getElementById(\"originalButtonClass\").style... ;\n\n}\n" ]
[ 0 ]
[]
[]
[ "bootstrap_4" ]
stackoverflow_0063768734_bootstrap_4.txt
Q: Airflow:.AirflowException: Issues in reading JSON template variable Requirement: I am trying to avoid using Variable.get() Instead use Jinja templated {{var.json.variable}} I have defined the variables in JSON format as an example below and stored them in the secret manager as snflk_json snflk_json { "snwflke_acct_request_memory":"4000Mi", "snwflke_acct_limit_memory":"4000Mi", "schedule_interval_snwflke_acct":"0 12 * * *", "LIST" ::[ "ABC.DEV","CDD.PROD" ] } Issue 1: Unable to retrieve schedule interval from the JSON variable Error : Invalid timetable expression: Exactly 5 or 6 columns has to be specified for iterator expression. Tried to use in the dag as below schedule_interval = '{{var.json.snflk_json.schedule_interval_snwflke_acct}}', Issue 2: I am trying to loop to get the task for each in LIST, I tried as below but in vain with DAG( dag_id = dag_id, default_args = default_args, schedule_interval = '{{var.json.usage_snwflk_acct_admin_config.schedule_interval_snwflke_acct}}' , dagrun_timeout = timedelta(hours=3), max_active_runs = 1, catchup = False, params = {}, tags=tags ) as dag: shares = '{{var.json.snflk_json.LIST}}' for s in shares: sf_tasks = SnowflakeOperator( task_id=f"{s}" , snowflake_conn_id= snowflake_conn_id, sql=sqls, params={"sf_env": s}, ) Error File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 754, in __init__ validate_key(task_id) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/helpers.py", line 63, in validate_key raise AirflowException( airflow.exceptions.AirflowException: The key '{' has to be made of alphanumeric characters, dashes, dots and underscores exclusively A: Airflow is parsing the dag every few seconds (30 by default). so actually it runs the for loop on a string with value {{var.json.snflk_json.LIST}} and that why you get that error. you should use DynamicTask (from ver 2.3) or put the code under Python task that creates tasks and execute the new tasks.
Airflow:.AirflowException: Issues in reading JSON template variable
Requirement: I am trying to avoid using Variable.get() Instead use Jinja templated {{var.json.variable}} I have defined the variables in JSON format as an example below and stored them in the secret manager as snflk_json snflk_json { "snwflke_acct_request_memory":"4000Mi", "snwflke_acct_limit_memory":"4000Mi", "schedule_interval_snwflke_acct":"0 12 * * *", "LIST" ::[ "ABC.DEV","CDD.PROD" ] } Issue 1: Unable to retrieve schedule interval from the JSON variable Error : Invalid timetable expression: Exactly 5 or 6 columns has to be specified for iterator expression. Tried to use in the dag as below schedule_interval = '{{var.json.snflk_json.schedule_interval_snwflke_acct}}', Issue 2: I am trying to loop to get the task for each in LIST, I tried as below but in vain with DAG( dag_id = dag_id, default_args = default_args, schedule_interval = '{{var.json.usage_snwflk_acct_admin_config.schedule_interval_snwflke_acct}}' , dagrun_timeout = timedelta(hours=3), max_active_runs = 1, catchup = False, params = {}, tags=tags ) as dag: shares = '{{var.json.snflk_json.LIST}}' for s in shares: sf_tasks = SnowflakeOperator( task_id=f"{s}" , snowflake_conn_id= snowflake_conn_id, sql=sqls, params={"sf_env": s}, ) Error File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 754, in __init__ validate_key(task_id) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/helpers.py", line 63, in validate_key raise AirflowException( airflow.exceptions.AirflowException: The key '{' has to be made of alphanumeric characters, dashes, dots and underscores exclusively
[ "Airflow is parsing the dag every few seconds (30 by default). so actually it runs the for loop on a string with value {{var.json.snflk_json.LIST}} and that why you get that error.\nyou should use DynamicTask (from ver 2.3) or put the code under Python task that creates tasks and execute the new tasks.\n" ]
[ 0 ]
[]
[]
[ "airflow", "airflow_2.x" ]
stackoverflow_0074661265_airflow_airflow_2.x.txt
Q: How can I solve this java runtime error about Scanner Function. (Beginner) I am experiencing this error when i try to run my code. Exception in thread "main" java.util.NoSuchElementException: No line found at java.base/java.util.Scanner.nextLine(Scanner.java:1651) at Main.main(Main.java:11) I also experience it on line 21. I tried reviewing the Scanner function and i dont see where I went wrong.
How can I solve this java runtime error about Scanner Function. (Beginner)
I am experiencing this error when i try to run my code. Exception in thread "main" java.util.NoSuchElementException: No line found at java.base/java.util.Scanner.nextLine(Scanner.java:1651) at Main.main(Main.java:11) I also experience it on line 21. I tried reviewing the Scanner function and i dont see where I went wrong.
[]
[]
[ "import java.util.*;\n\npublic class Solution {\n\n private static String getRuler(String kingdom) {\n\n String ruler = \"Bob\";\n return ruler;\n }\n\n public static void main(String[] args) {\n try (Scanner in = new Scanner(System.in)) {\n int T = in.nextInt();\n\n for (int t = 1; t <= T; ++t) {\n String kingdom = in.next();\n System.out.println(\"Case #\" + t + \": \" + kingdom + \" is ruled by \" + getRuler(kingdom) + \".\");\n }\n }\n }\n}\n\n" ]
[ -1 ]
[ "java", "java.util.scanner" ]
stackoverflow_0074665036_java_java.util.scanner.txt
Q: Dependency 'com.graphql-java:graphiql-spring-boot-starter:5.0.2' not found i am trying to implement graphql using java spring boot. i cannot add dependency graphiql-spring-boot-starter. The error message is Dependency 'com.graphql-java:graphiql-spring-boot-starter:5.0.2' not found i am expecting solution to the error A: I guess you have a typo: graphql-spring-boot-starter instead of graphiql-spring-boot-starter.
Dependency 'com.graphql-java:graphiql-spring-boot-starter:5.0.2' not found
i am trying to implement graphql using java spring boot. i cannot add dependency graphiql-spring-boot-starter. The error message is Dependency 'com.graphql-java:graphiql-spring-boot-starter:5.0.2' not found i am expecting solution to the error
[ "I guess you have a typo: graphql-spring-boot-starter instead of graphiql-spring-boot-starter.\n" ]
[ 0 ]
[]
[]
[ "graphql_java", "graphql_js" ]
stackoverflow_0074663987_graphql_java_graphql_js.txt
Q: Can not pass user input in Java constructor I am trying to take user input in a parameterized java constructor but I am failing. It gives the following error Exception in thread "main" java.util.NoSuchElementException: No line found at java.base/java.util.Scanner.nextLine(Scanner.java:1651) at Main.main(Main.java:24) Here is my code import java.util.Scanner; class Student { String name; String date; Student( String name, String Date) { this.name=name; this.date=date; } } public class Main { public static void main(String args[]) { System.out.println("Here is the date"); Scanner myObj = new Scanner(System.in); // Create a Scanner object System.out.println("Enter username"); String name = myObj.nextLine(); System.out.println("Enter date"); String date = myObj.nextLine(); Student s1= new Student(name,date); } } A: According to your stack trace, this has nothing to do with constructors. The error happens when Scanner tries to read a line from the standard input. If you are running this program in IDE, the input via System.in might not be available, so there is no next line for Scanner to read. Try running your program from console / command line. Some IDEs also have a checkbox for enabling standard input (usually as part of run/debug configuration).
Can not pass user input in Java constructor
I am trying to take user input in a parameterized java constructor but I am failing. It gives the following error Exception in thread "main" java.util.NoSuchElementException: No line found at java.base/java.util.Scanner.nextLine(Scanner.java:1651) at Main.main(Main.java:24) Here is my code import java.util.Scanner; class Student { String name; String date; Student( String name, String Date) { this.name=name; this.date=date; } } public class Main { public static void main(String args[]) { System.out.println("Here is the date"); Scanner myObj = new Scanner(System.in); // Create a Scanner object System.out.println("Enter username"); String name = myObj.nextLine(); System.out.println("Enter date"); String date = myObj.nextLine(); Student s1= new Student(name,date); } }
[ "According to your stack trace, this has nothing to do with constructors. The error happens when Scanner tries to read a line from the standard input.\nIf you are running this program in IDE, the input via System.in might not be available, so there is no next line for Scanner to read. Try running your program from console / command line. Some IDEs also have a checkbox for enabling standard input (usually as part of run/debug configuration).\n" ]
[ 1 ]
[ "There are a few things that could be happening here. The first thing I would say to try is changing the access of the Student constructor to public.\n public Student( String name, String date) {\n this.name=name;\n this.date=date;\n }\n\nIf you have the student class contained inside of the Main class, you need to make the student class a static class.\nstatic class Student {\n /*\n ...\n */\n}\n\nAs a side note, if you look at the constructor in your student class, you will notice you accidently put Date instead of date on the Student( String name, String Date) line.\n", "Mistake while passing the date variable as \"Date\" in the constructor. Always remember Java is a case sensitive language and this name convention is applied in almost every programming language!\n" ]
[ -1, -2 ]
[ "java", "java.util.scanner", "user_input" ]
stackoverflow_0074664964_java_java.util.scanner_user_input.txt
Q: How to fix rxjs catchError return type for http call in angular? Inside my service I have a post method in which return type is IInfluencerRewardSetup influencerRewardSetup(payload: IInfluencerRewardSetup): Observable<IInfluencerRewardSetup> { return this.httpClient.post<IInfluencerRewardSetup>(this.baseUrl, payload) .pipe(retry(1), this.errorHandler()); } and private errorHandler() { return catchError(err => { this.notificationService.error(err.message, ''); throw new Error(err.message || 'Server error'); }); } But the problem is I am having this type of error: Type 'Observable<unknown>' is not assignable to type 'Observable<IInfluencerRewardSetup>' Please note that, errorHandler will handle other HTTP calls as well. Why am I having this error and how can I fix that type of issue here? can someone please help? A: Your catchError returns an Observable of type unknown, not IInfluencerRewardSetup. You could fix this with a cast: return this.httpClient.post<IInfluencerRewardSetup>(this.baseUrl, payload) .pipe( retry(1), this.errorHandler() as Observable<IInfluencerRewardSetup> ); or by changing the catchError to return an Observable of the right type: private errorHandler() { return catchError<IInfluencerRewardSetup>(err => { this.notificationService.error(err.message, ''); throw new Error(err.message || 'Server error'); }); }
How to fix rxjs catchError return type for http call in angular?
Inside my service I have a post method in which return type is IInfluencerRewardSetup influencerRewardSetup(payload: IInfluencerRewardSetup): Observable<IInfluencerRewardSetup> { return this.httpClient.post<IInfluencerRewardSetup>(this.baseUrl, payload) .pipe(retry(1), this.errorHandler()); } and private errorHandler() { return catchError(err => { this.notificationService.error(err.message, ''); throw new Error(err.message || 'Server error'); }); } But the problem is I am having this type of error: Type 'Observable<unknown>' is not assignable to type 'Observable<IInfluencerRewardSetup>' Please note that, errorHandler will handle other HTTP calls as well. Why am I having this error and how can I fix that type of issue here? can someone please help?
[ "Your catchError returns an Observable of type unknown, not IInfluencerRewardSetup.\nYou could fix this with a cast:\nreturn this.httpClient.post<IInfluencerRewardSetup>(this.baseUrl, payload)\n .pipe(\n retry(1),\n this.errorHandler() as Observable<IInfluencerRewardSetup>\n );\n\nor by changing the catchError to return an Observable of the right type:\nprivate errorHandler() {\n return catchError<IInfluencerRewardSetup>(err => {\n this.notificationService.error(err.message, '');\n throw new Error(err.message || 'Server error');\n });\n}\n\n" ]
[ 0 ]
[ "The error comes from you have made the function with no parameters. And the operation in a pipe need an needs inputs and i believe outputs are optional. but in general have outputs too.\nthis answer is good to show how you should structure it https://stackoverflow.com/a/50908353/1805974\nso for you it would probably look a bit like this:\nhandleError = () => pipe(\n catchError(err => {\n this.notificationService.error(err.message, '');\n throw new Error(err.message || 'Server error');\n });\n ); \n\nyou need to import { pipe } from \"rxjs\"; as suggested in the answer\n" ]
[ -1 ]
[ "angular", "rxjs" ]
stackoverflow_0074662972_angular_rxjs.txt
Q: camera2 captured picture - conversion from YUV_420_888 to NV21 Via the camera2 API we are receiving an Image object of the format YUV_420_888. We are using then the following function for conversion to NV21: private static byte[] YUV_420_888toNV21(Image image) { byte[] nv21; ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); int ySize = yBuffer.remaining(); int uSize = uBuffer.remaining(); int vSize = vBuffer.remaining(); nv21 = new byte[ySize + uSize + vSize]; //U and V are swapped yBuffer.get(nv21, 0, ySize); vBuffer.get(nv21, ySize, vSize); uBuffer.get(nv21, ySize + vSize, uSize); return nv21; } While this function works fine with cameraCaptureSessions.setRepeatingRequest, we get a segmentation error in further processing (on the JNI side) when calling cameraCaptureSessions.capture. Both request YUV_420_888 format via ImageReader. How come the result is different for both function calls while the requested type is the same? Update: As mentioned in the comments I get this behaviour because of different image sizes (much larger dimension for the capture request). But our further processing operations on the JNI side are the same for both requests and don't depend on image dimensions (only on the aspect ratio, which is in both cases the same). A: Your code will only return correct NV21 if there is no padding at all, and U and V planes overlap and actually represent interlaced VU values. This happens quite often for preview, but in such case you allocate extra w*h/4 bytes for your array (which presumably is not a problem). Maybe for captured image you need a more robust implementation, e.g. private static byte[] YUV_420_888toNV21(Image image) { int width = image.getWidth(); int height = image.getHeight(); int ySize = width*height; int uvSize = width*height/4; byte[] nv21 = new byte[ySize + uvSize*2]; ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); // Y ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); // U ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); // V int rowStride = image.getPlanes()[0].getRowStride(); assert(image.getPlanes()[0].getPixelStride() == 1); int pos = 0; if (rowStride == width) { // likely yBuffer.get(nv21, 0, ySize); pos += ySize; } else { long yBufferPos = -rowStride; // not an actual position for (; pos<ySize; pos+=width) { yBufferPos += rowStride; yBuffer.position(yBufferPos); yBuffer.get(nv21, pos, width); } } rowStride = image.getPlanes()[2].getRowStride(); int pixelStride = image.getPlanes()[2].getPixelStride(); assert(rowStride == image.getPlanes()[1].getRowStride()); assert(pixelStride == image.getPlanes()[1].getPixelStride()); if (pixelStride == 2 && rowStride == width && uBuffer.get(0) == vBuffer.get(1)) { // maybe V and U planes overlap as per NV21, which means vBuffer[1] is alias of uBuffer[0] byte savePixel = vBuffer.get(1); try { vBuffer.put(1, (byte)~savePixel); if (uBuffer.get(0) == (byte)~savePixel) { vBuffer.put(1, savePixel); vBuffer.position(0); uBuffer.position(0); vBuffer.get(nv21, ySize, 1); uBuffer.get(nv21, ySize + 1, uBuffer.remaining()); return nv21; // shortcut } } catch (ReadOnlyBufferException ex) { // unfortunately, we cannot check if vBuffer and uBuffer overlap } // unfortunately, the check failed. We must save U and V pixel by pixel vBuffer.put(1, savePixel); } // other optimizations could check if (pixelStride == 1) or (pixelStride == 2), // but performance gain would be less significant for (int row=0; row<height/2; row++) { for (int col=0; col<width/2; col++) { int vuPos = col*pixelStride + row*rowStride; nv21[pos++] = vBuffer.get(vuPos); nv21[pos++] = uBuffer.get(vuPos); } } return nv21; } If you anyway intend to pass the resulting array to C++, you can take advantage of the fact that the buffer returned will always have isDirect return true, so the underlying data could be mapped as a pointer in JNI without doing any copies with GetDirectBufferAddress. This means that same conversion may be done in C++ with minimal overhead. In C++, you may even find that the actual pixel arrangement is already NV21! PS Actually, this can be done in Java, with negligible overhead, see the line if (pixelStride == 2 && … above. So, we can bulk copy all chroma bytes to the resulting byte array, which is much faster than running the loops, but still slower than what can be achieved for such case in C++. For full implementation, see Image.toByteArray(). A: Based on @Alex Cohn answer, I have implemented it in the JNI part, trying to take profit from the byte-access and performance advantages. I left it here, maybe it could be as useful as the @Alex answer was for me. It's almost the same algorithm, in C; based in an image with YUV_420_888 format: uchar* yuvToNV21(jbyteArray yBuf, jbyteArray uBuf, jbyteArray vBuf, jbyte *fullArrayNV21, int width, int height, int yRowStride, int yPixelStride, int uRowStride, int uPixelStride, int vRowStride, int vPixelStride, JNIEnv *env) { /* Check that our frame has right format, as specified at android docs for * YUV_420_888 (https://developer.android.com/reference/android/graphics/ImageFormat?authuser=2#YUV_420_888): * - Plane Y not overlapped with UV, and always with pixelStride = 1 * - Planes U and V have the same rowStride and pixelStride (overlapped or not) */ if(yPixelStride != 1 || uPixelStride != vPixelStride || uRowStride != vRowStride) { jclass Exception = env->FindClass("java/lang/Exception"); env->ThrowNew(Exception, "Invalid YUV_420_888 byte structure. Not agree with https://developer.android.com/reference/android/graphics/ImageFormat?authuser=2#YUV_420_888"); } int ySize = width*height; int uSize = env->GetArrayLength(uBuf); int vSize = env->GetArrayLength(vBuf); int newArrayPosition = 0; //Position through which we are filling the NV21 array if (fullArrayNV21 == nullptr) { fullArrayNV21 = new jbyte[ySize + uSize + vSize]; } if(yRowStride == width) { //Best case. No padding, copy direct env->GetByteArrayRegion(yBuf, newArrayPosition, ySize, fullArrayNV21); newArrayPosition = ySize; }else { // Padding at plane Y. Copy Row by Row long yPlanePosition = 0; for(; newArrayPosition<ySize; newArrayPosition += width) { env->GetByteArrayRegion(yBuf, yPlanePosition, width, fullArrayNV21 + newArrayPosition); yPlanePosition += yRowStride; } } // Check UV channels in order to know if they are overlapped (best case) // If they are overlapped, U and V first bytes are consecutive and pixelStride = 2 long uMemoryAdd = (long)&uBuf; long vMemoryAdd = (long)&vBuf; long diff = std::abs(uMemoryAdd - vMemoryAdd); if(vPixelStride == 2 && diff == 8) { if(width == vRowStride) { // Best Case: Valid NV21 representation (UV overlapped, no padding). Copy direct env->GetByteArrayRegion(uBuf, 0, uSize, fullArrayNV21 + ySize); env->GetByteArrayRegion(vBuf, 0, vSize, fullArrayNV21 + ySize + uSize); }else { // UV overlapped, but with padding. Copy row by row (too much performance improvement compared with copy byte-by-byte) int limit = height/2 - 1; for(int row = 0; row<limit; row++) { env->GetByteArrayRegion(uBuf, row * vRowStride, width, fullArrayNV21 + ySize + (row * width)); } } }else { //WORST: not overlapped UV. Copy byte by byte for(int row = 0; row<height/2; row++) { for(int col = 0; col<width/2; col++) { int vuPos = col*uPixelStride + row*uRowStride; env->GetByteArrayRegion(vBuf, vuPos, 1, fullArrayNV21 + newArrayPosition); newArrayPosition++; env->GetByteArrayRegion(uBuf, vuPos, 1, fullArrayNV21 + newArrayPosition); newArrayPosition++; } } } return (uchar*)fullArrayNV21; } I'm sure that some improvements can be added, but I have tested in a lot of devices, and it is working with very good performance and stability. A: public static byte[] YUV420toNV21(Image image) { Rect crop = image.getCropRect(); int format = image.getFormat(); int width = crop.width(); int height = crop.height(); Image.Plane[] planes = image.getPlanes(); byte[] data = new byte[width * height * ImageFormat.getBitsPerPixel(format) / 8]; byte[] rowData = new byte[planes[0].getRowStride()]; int channelOffset = 0; int outputStride = 1; for (int i = 0; i < planes.length; i++) { switch (i) { case 0: channelOffset = 0; outputStride = 1; break; case 1: channelOffset = width * height + 1; outputStride = 2; break; case 2: channelOffset = width * height; outputStride = 2; break; } ByteBuffer buffer = planes[i].getBuffer(); int rowStride = planes[i].getRowStride(); int pixelStride = planes[i].getPixelStride(); int shift = (i == 0) ? 0 : 1; int w = width >> shift; int h = height >> shift; buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift)); for (int row = 0; row < h; row++) { int length; if (pixelStride == 1 && outputStride == 1) { length = w; buffer.get(data, channelOffset, length); channelOffset += length; } else { length = (w - 1) * pixelStride + 1; buffer.get(rowData, 0, length); for (int col = 0; col < w; col++) { data[channelOffset] = rowData[col * pixelStride]; channelOffset += outputStride; } } if (row < h - 1) { buffer.position(buffer.position() + rowStride - length); } } } return data; } A: Check this answer for an acceleration method when it is not disguised NV21 or rowStride != width (there are paddings on each line).
camera2 captured picture - conversion from YUV_420_888 to NV21
Via the camera2 API we are receiving an Image object of the format YUV_420_888. We are using then the following function for conversion to NV21: private static byte[] YUV_420_888toNV21(Image image) { byte[] nv21; ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); int ySize = yBuffer.remaining(); int uSize = uBuffer.remaining(); int vSize = vBuffer.remaining(); nv21 = new byte[ySize + uSize + vSize]; //U and V are swapped yBuffer.get(nv21, 0, ySize); vBuffer.get(nv21, ySize, vSize); uBuffer.get(nv21, ySize + vSize, uSize); return nv21; } While this function works fine with cameraCaptureSessions.setRepeatingRequest, we get a segmentation error in further processing (on the JNI side) when calling cameraCaptureSessions.capture. Both request YUV_420_888 format via ImageReader. How come the result is different for both function calls while the requested type is the same? Update: As mentioned in the comments I get this behaviour because of different image sizes (much larger dimension for the capture request). But our further processing operations on the JNI side are the same for both requests and don't depend on image dimensions (only on the aspect ratio, which is in both cases the same).
[ "Your code will only return correct NV21 if there is no padding at all, and U and V planes overlap and actually represent interlaced VU values. This happens quite often for preview, but in such case you allocate extra w*h/4 bytes for your array (which presumably is not a problem). Maybe for captured image you need a more robust implementation, e.g.\nprivate static byte[] YUV_420_888toNV21(Image image) {\n\n int width = image.getWidth();\n int height = image.getHeight(); \n int ySize = width*height;\n int uvSize = width*height/4;\n\n byte[] nv21 = new byte[ySize + uvSize*2];\n\n ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); // Y\n ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); // U\n ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); // V\n\n int rowStride = image.getPlanes()[0].getRowStride();\n assert(image.getPlanes()[0].getPixelStride() == 1);\n\n int pos = 0;\n\n if (rowStride == width) { // likely\n yBuffer.get(nv21, 0, ySize);\n pos += ySize;\n }\n else {\n long yBufferPos = -rowStride; // not an actual position\n for (; pos<ySize; pos+=width) {\n yBufferPos += rowStride;\n yBuffer.position(yBufferPos);\n yBuffer.get(nv21, pos, width);\n }\n }\n\n rowStride = image.getPlanes()[2].getRowStride();\n int pixelStride = image.getPlanes()[2].getPixelStride();\n\n assert(rowStride == image.getPlanes()[1].getRowStride());\n assert(pixelStride == image.getPlanes()[1].getPixelStride());\n \n if (pixelStride == 2 && rowStride == width && uBuffer.get(0) == vBuffer.get(1)) {\n // maybe V and U planes overlap as per NV21, which means vBuffer[1] is alias of uBuffer[0]\n byte savePixel = vBuffer.get(1);\n try {\n vBuffer.put(1, (byte)~savePixel);\n if (uBuffer.get(0) == (byte)~savePixel) {\n vBuffer.put(1, savePixel);\n vBuffer.position(0);\n uBuffer.position(0);\n vBuffer.get(nv21, ySize, 1);\n uBuffer.get(nv21, ySize + 1, uBuffer.remaining());\n\n return nv21; // shortcut\n }\n }\n catch (ReadOnlyBufferException ex) {\n // unfortunately, we cannot check if vBuffer and uBuffer overlap\n }\n\n // unfortunately, the check failed. We must save U and V pixel by pixel\n vBuffer.put(1, savePixel);\n }\n\n // other optimizations could check if (pixelStride == 1) or (pixelStride == 2), \n // but performance gain would be less significant\n\n for (int row=0; row<height/2; row++) {\n for (int col=0; col<width/2; col++) {\n int vuPos = col*pixelStride + row*rowStride;\n nv21[pos++] = vBuffer.get(vuPos);\n nv21[pos++] = uBuffer.get(vuPos);\n }\n }\n\n return nv21;\n}\n\nIf you anyway intend to pass the resulting array to C++, you can take advantage of the fact that\n\nthe buffer returned will always have isDirect return true, so the underlying data could be mapped as a pointer in JNI without doing any copies with GetDirectBufferAddress.\n\nThis means that same conversion may be done in C++ with minimal overhead. In C++, you may even find that the actual pixel arrangement is already NV21!\nPS Actually, this can be done in Java, with negligible overhead, see the line if (pixelStride == 2 && … above. So, we can bulk copy all chroma bytes to the resulting byte array, which is much faster than running the loops, but still slower than what can be achieved for such case in C++. For full implementation, see Image.toByteArray().\n", "Based on @Alex Cohn answer, I have implemented it in the JNI part, trying to take profit from the byte-access and performance advantages. I left it here, maybe it could be as useful as the @Alex answer was for me. It's almost the same algorithm, in C; based in an image with YUV_420_888 format:\nuchar* yuvToNV21(jbyteArray yBuf, jbyteArray uBuf, jbyteArray vBuf, jbyte *fullArrayNV21,\n int width, int height, int yRowStride, int yPixelStride, int uRowStride,\n int uPixelStride, int vRowStride, int vPixelStride, JNIEnv *env) {\n\n /* Check that our frame has right format, as specified at android docs for\n * YUV_420_888 (https://developer.android.com/reference/android/graphics/ImageFormat?authuser=2#YUV_420_888):\n * - Plane Y not overlapped with UV, and always with pixelStride = 1\n * - Planes U and V have the same rowStride and pixelStride (overlapped or not)\n */\n if(yPixelStride != 1 || uPixelStride != vPixelStride || uRowStride != vRowStride) {\n jclass Exception = env->FindClass(\"java/lang/Exception\");\n env->ThrowNew(Exception, \"Invalid YUV_420_888 byte structure. Not agree with https://developer.android.com/reference/android/graphics/ImageFormat?authuser=2#YUV_420_888\");\n }\n\n int ySize = width*height;\n int uSize = env->GetArrayLength(uBuf);\n int vSize = env->GetArrayLength(vBuf);\n int newArrayPosition = 0; //Position through which we are filling the NV21 array\n if (fullArrayNV21 == nullptr) {\n fullArrayNV21 = new jbyte[ySize + uSize + vSize];\n }\n if(yRowStride == width) {\n //Best case. No padding, copy direct\n env->GetByteArrayRegion(yBuf, newArrayPosition, ySize, fullArrayNV21);\n newArrayPosition = ySize;\n }else {\n // Padding at plane Y. Copy Row by Row\n long yPlanePosition = 0;\n for(; newArrayPosition<ySize; newArrayPosition += width) {\n env->GetByteArrayRegion(yBuf, yPlanePosition, width, fullArrayNV21 + newArrayPosition);\n yPlanePosition += yRowStride;\n }\n }\n\n // Check UV channels in order to know if they are overlapped (best case)\n // If they are overlapped, U and V first bytes are consecutive and pixelStride = 2\n long uMemoryAdd = (long)&uBuf;\n long vMemoryAdd = (long)&vBuf;\n long diff = std::abs(uMemoryAdd - vMemoryAdd);\n if(vPixelStride == 2 && diff == 8) {\n if(width == vRowStride) {\n // Best Case: Valid NV21 representation (UV overlapped, no padding). Copy direct\n env->GetByteArrayRegion(uBuf, 0, uSize, fullArrayNV21 + ySize);\n env->GetByteArrayRegion(vBuf, 0, vSize, fullArrayNV21 + ySize + uSize);\n }else {\n // UV overlapped, but with padding. Copy row by row (too much performance improvement compared with copy byte-by-byte)\n int limit = height/2 - 1;\n for(int row = 0; row<limit; row++) {\n env->GetByteArrayRegion(uBuf, row * vRowStride, width, fullArrayNV21 + ySize + (row * width));\n }\n }\n }else {\n //WORST: not overlapped UV. Copy byte by byte\n for(int row = 0; row<height/2; row++) {\n for(int col = 0; col<width/2; col++) {\n int vuPos = col*uPixelStride + row*uRowStride;\n env->GetByteArrayRegion(vBuf, vuPos, 1, fullArrayNV21 + newArrayPosition);\n newArrayPosition++;\n env->GetByteArrayRegion(uBuf, vuPos, 1, fullArrayNV21 + newArrayPosition);\n newArrayPosition++;\n }\n }\n }\n return (uchar*)fullArrayNV21;\n}\n\nI'm sure that some improvements can be added, but I have tested in a lot of devices, and it is working with very good performance and stability.\n", "    public static byte[] YUV420toNV21(Image image) {\n Rect crop = image.getCropRect();\n int format = image.getFormat();\n int width = crop.width();\n int height = crop.height();\n Image.Plane[] planes = image.getPlanes();\n byte[] data = new byte[width * height * ImageFormat.getBitsPerPixel(format) / 8];\n byte[] rowData = new byte[planes[0].getRowStride()];\n\n int channelOffset = 0;\n int outputStride = 1;\n for (int i = 0; i < planes.length; i++) {\n switch (i) {\n case 0:\n channelOffset = 0;\n outputStride = 1;\n break;\n case 1:\n channelOffset = width * height + 1;\n outputStride = 2;\n break;\n case 2:\n channelOffset = width * height;\n outputStride = 2;\n break;\n }\n\n ByteBuffer buffer = planes[i].getBuffer();\n int rowStride = planes[i].getRowStride();\n int pixelStride = planes[i].getPixelStride();\n\n int shift = (i == 0) ? 0 : 1;\n int w = width >> shift;\n int h = height >> shift;\n buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));\n for (int row = 0; row < h; row++) {\n int length;\n if (pixelStride == 1 && outputStride == 1) {\n length = w;\n buffer.get(data, channelOffset, length);\n channelOffset += length;\n } else {\n length = (w - 1) * pixelStride + 1;\n buffer.get(rowData, 0, length);\n for (int col = 0; col < w; col++) {\n data[channelOffset] = rowData[col * pixelStride];\n channelOffset += outputStride;\n }\n }\n if (row < h - 1) {\n buffer.position(buffer.position() + rowStride - length);\n }\n }\n }\n return data;\n }\n\n", "Check this answer for an acceleration method when it is not disguised NV21 or rowStride != width (there are paddings on each line).\n" ]
[ 18, 3, 1, 0 ]
[]
[]
[ "android", "android_camera2", "yuv" ]
stackoverflow_0052726002_android_android_camera2_yuv.txt
Q: Counting Number of Unique Column Values Per Group I have a dataset that looks something like this: name = c("john", "john", "john", "alex","alex", "tim", "tim", "tim", "ralph", "ralph") year = c(2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012, 2014, 2016) my_data = data.frame(name, year) name year 1 john 2010 2 john 2011 3 john 2012 4 alex 2011 5 alex 2012 6 tim 2010 7 tim 2011 8 tim 2012 9 ralph 2014 10 ralph 2016 I want to count the two following things in this dataset: Groups based on all years And of these groups, the number of groups with at least one non-consecutive year As an example for 1): # sample output for 1) year count 1 2010, 2011, 2012 2 2 2011, 2012 1 3 2014, 2016 1 And as an example of 2) - only row 3 (in the above data frame) contains a missing year (i.e. 2014 to 2016 without 2015). Thus, the output would look something like this: # sample output for 2) year count 1 2014, 2016 1 Can someone please show me how to do this in R? And is there a way to make sure that (2011, 2012) is considered the same as (2012, 2011) ? EDIT: For anyone using an older version of R, @Rui Barradas provided an answer for 2) - I have included it here so that there is no ambiguity when copy/pasting: agg <- aggregate(year ~ name, my_data, c) agg <- agg$year[sapply(agg$year, function(y) any(diff(y) != 1))] as.data.frame(table(sapply(agg, paste, collapse = ", "))) A: Here are base R solutions. # 1. agg <- aggregate(year ~ name, my_data, paste, collapse = ", ") as.data.frame(table(agg$year)) #> Var1 Freq #> 1 2010, 2011, 2012 2 #> 2 2011, 2012 1 #> 3 2014, 2016 1 # 2. agg <- aggregate(year ~ name, my_data, c) agg <- agg$year[sapply(agg$year, \(y) any(diff(y) != 1))] as.data.frame(table(sapply(agg, paste, collapse = ", "))) #> Var1 Freq #> 1 2014, 2016 1 # final clean up rm(agg) Created on 2022-12-03 with reprex v2.0.2 Edit Answering to the comment/request, Is there a way to make sure that (2011, 2012) is considered the same as (2012, 2011) ? a way is to, in each group of name, first sort the data by year. Then run the code above. my_data <- my_data[order(my_data$name, my_data$year), ] A: Here is solution with dplyr and tidyr: library(dplyr) library(tidyr) ### 1. my_data %>% group_by(name) %>% mutate(year = toString(year)) %>% distinct(year) %>% ungroup() %>% count(year, name="count") year count <chr> <int> 1 2010, 2011, 2012 2 2 2011, 2012 1 3 2014, 2016 1 ### 2. my_data %>% group_by(name) %>% mutate(x = lead(year) - year) %>% fill(x, .direction = "down") %>% ungroup () %>% filter(x >= max(x)) %>% mutate(year = toString(year)) %>% distinct(year) %>% ungroup() %>% count(year, name="count") year count <chr> <int> 1 2014, 2016 1 A: Using toString and sort in by. by(my_data$year, my_data$name, \(x) toString(sort(x))) |> table() |> as.data.frame() # Var1 Freq # 1 2010, 2011, 2012 2 # 2 2011, 2012 1 # 3 2014, 2016 1 Order doesn't matter: set.seed(42) my_data <- my_data[sample(nrow(my_data)), ] by(my_data$year, my_data$name, \(x) toString(sort(x))) |> table() |> as.data.frame() # Var1 Freq # 1 2010, 2011, 2012 2 # 2 2011, 2012 1 # 3 2014, 2016 1 Data: my_data <- structure(list(name = c("john", "john", "john", "alex", "alex", "tim", "tim", "tim", "ralph", "ralph"), year = c(2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012, 2014, 2016)), class = "data.frame", row.names = c(NA, -10L))
Counting Number of Unique Column Values Per Group
I have a dataset that looks something like this: name = c("john", "john", "john", "alex","alex", "tim", "tim", "tim", "ralph", "ralph") year = c(2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012, 2014, 2016) my_data = data.frame(name, year) name year 1 john 2010 2 john 2011 3 john 2012 4 alex 2011 5 alex 2012 6 tim 2010 7 tim 2011 8 tim 2012 9 ralph 2014 10 ralph 2016 I want to count the two following things in this dataset: Groups based on all years And of these groups, the number of groups with at least one non-consecutive year As an example for 1): # sample output for 1) year count 1 2010, 2011, 2012 2 2 2011, 2012 1 3 2014, 2016 1 And as an example of 2) - only row 3 (in the above data frame) contains a missing year (i.e. 2014 to 2016 without 2015). Thus, the output would look something like this: # sample output for 2) year count 1 2014, 2016 1 Can someone please show me how to do this in R? And is there a way to make sure that (2011, 2012) is considered the same as (2012, 2011) ? EDIT: For anyone using an older version of R, @Rui Barradas provided an answer for 2) - I have included it here so that there is no ambiguity when copy/pasting: agg <- aggregate(year ~ name, my_data, c) agg <- agg$year[sapply(agg$year, function(y) any(diff(y) != 1))] as.data.frame(table(sapply(agg, paste, collapse = ", ")))
[ "Here are base R solutions.\n# 1.\nagg <- aggregate(year ~ name, my_data, paste, collapse = \", \")\nas.data.frame(table(agg$year))\n#> Var1 Freq\n#> 1 2010, 2011, 2012 2\n#> 2 2011, 2012 1\n#> 3 2014, 2016 1\n\n# 2.\nagg <- aggregate(year ~ name, my_data, c)\nagg <- agg$year[sapply(agg$year, \\(y) any(diff(y) != 1))]\nas.data.frame(table(sapply(agg, paste, collapse = \", \")))\n#> Var1 Freq\n#> 1 2014, 2016 1\n\n# final clean up\nrm(agg) \n\nCreated on 2022-12-03 with reprex v2.0.2\n\nEdit\nAnswering to the comment/request,\n\nIs there a way to make sure that (2011, 2012) is considered the same as (2012, 2011) ?\n\na way is to, in each group of name, first sort the data by year. Then run the code above.\nmy_data <- my_data[order(my_data$name, my_data$year), ]\n\n", "Here is solution with dplyr and tidyr:\nlibrary(dplyr)\nlibrary(tidyr)\n\n### 1.\nmy_data %>% \n group_by(name) %>% \n mutate(year = toString(year)) %>% \n distinct(year) %>% \n ungroup() %>% \n count(year, name=\"count\")\n\nyear count\n<chr> <int>\n1 2010, 2011, 2012 2\n2 2011, 2012 1\n3 2014, 2016 1\n\n### 2. \nmy_data %>% \n group_by(name) %>% \n mutate(x = lead(year) - year) %>% \n fill(x, .direction = \"down\") %>% \n ungroup () %>% \n filter(x >= max(x)) %>% \n mutate(year = toString(year)) %>% \n distinct(year) %>% \n ungroup() %>% \n count(year, name=\"count\")\n\nyear count\n<chr> <int>\n1 2014, 2016 1\n\n", "Using toString and sort in by.\nby(my_data$year, my_data$name, \\(x) toString(sort(x))) |> table() |> as.data.frame()\n# Var1 Freq\n# 1 2010, 2011, 2012 2\n# 2 2011, 2012 1\n# 3 2014, 2016 1\n\nOrder doesn't matter:\nset.seed(42)\nmy_data <- my_data[sample(nrow(my_data)), ]\nby(my_data$year, my_data$name, \\(x) toString(sort(x))) |> table() |> as.data.frame()\n# Var1 Freq\n# 1 2010, 2011, 2012 2\n# 2 2011, 2012 1\n# 3 2014, 2016 1\n\n\nData:\nmy_data <- structure(list(name = c(\"john\", \"john\", \"john\", \"alex\", \"alex\", \n\"tim\", \"tim\", \"tim\", \"ralph\", \"ralph\"), year = c(2010, 2011, \n2012, 2011, 2012, 2010, 2011, 2012, 2014, 2016)), class = \"data.frame\", row.names = c(NA, \n-10L))\n\n" ]
[ 3, 1, 1 ]
[]
[]
[ "data_manipulation", "r" ]
stackoverflow_0074664615_data_manipulation_r.txt
Q: How to create a shared package in a Turborepo (monorepo), for prisma generated models and types? I am creating a monorepo using Turborepo consisting of multiple Nestjs microservices, and an API gateway to act as the request distributer. In each microservice, Postgres is used as a database and Prisma as the ORM. Each microservice has its own schema + Prisma client, so it's not a shared schema/client. We are looking to create a shared package for things like DTOs, as well as prisma generated types and entities. The package would be shared among all microservices so if I would export the prisma generated from the microservices to the package, a cyclic dependency occurs. I am new to monorepos so this is a complex topic for me to begin with, but I am hoping someone here on Stackoverflow may have some input on the matter. Appreciate it! A: We have a similar monorepo regarding Prisma and Turborepo. From our point of view there are two solutions how to avoid cyclic dependencies in a monorepo: 1. Avoid it Instead of creating a a shared package for all DTOs, types, etc. that should be used by all packages, leave the DTOs, types etc. inside the packages they come from. Sometimes it helps to rethink the sizing of the packages. The solution could be to merge some packages to one if they use the same shared code. Or to divide a package into smaller ones, if you only need a small portion of the package to be shared. 2. Break cyclic dependencies by extra build script If solution 1 is not possible you can add an extra build script that copies the shared content from the packages into the shared one. In Turborepo you can either have a global script prefixed with //# that does the job, or you can add the script to each packages, or even both: // turbo.json { "$schema": "https://turborepo.org/schema.json", "pipeline": { "build": { "dependsOn": [ "^build", "extraBuildScriptFromPackages", "//#extraGlobalBuildScript" ], // ... } } } We use solution 2 to collect all metadata from our packages and create a shared metadata package for the whole monorepo. For Prisma types we use a mix of solution 1 (for the prisma types) and solution 2 (for the resulting graphql types).
How to create a shared package in a Turborepo (monorepo), for prisma generated models and types?
I am creating a monorepo using Turborepo consisting of multiple Nestjs microservices, and an API gateway to act as the request distributer. In each microservice, Postgres is used as a database and Prisma as the ORM. Each microservice has its own schema + Prisma client, so it's not a shared schema/client. We are looking to create a shared package for things like DTOs, as well as prisma generated types and entities. The package would be shared among all microservices so if I would export the prisma generated from the microservices to the package, a cyclic dependency occurs. I am new to monorepos so this is a complex topic for me to begin with, but I am hoping someone here on Stackoverflow may have some input on the matter. Appreciate it!
[ "We have a similar monorepo regarding Prisma and Turborepo. From our point of view there are two solutions how to avoid cyclic dependencies in a monorepo:\n1. Avoid it\nInstead of creating a a shared package for all DTOs, types, etc. that should be used by all packages, leave the DTOs, types etc. inside the packages they come from.\nSometimes it helps to rethink the sizing of the packages. The solution could be to merge some packages to one if they use the same shared code. Or to divide a package into smaller ones, if you only need a small portion of the package to be shared.\n2. Break cyclic dependencies by extra build script\nIf solution 1 is not possible you can add an extra build script that copies the shared content from the packages into the shared one.\nIn Turborepo you can either have a global script prefixed with //# that does the job, or you can add the script to each packages, or even both:\n// turbo.json\n{\n \"$schema\": \"https://turborepo.org/schema.json\",\n \"pipeline\": {\n \"build\": {\n \"dependsOn\": [\n \"^build\",\n \"extraBuildScriptFromPackages\",\n \"//#extraGlobalBuildScript\"\n ],\n// ...\n }\n }\n}\n\nWe use solution 2 to collect all metadata from our packages and create a shared metadata package for the whole monorepo.\nFor Prisma types we use a mix of solution 1 (for the prisma types) and solution 2 (for the resulting graphql types).\n" ]
[ 0 ]
[]
[]
[ "monorepo", "nestjs", "prisma", "turborepo", "yarn_workspaces" ]
stackoverflow_0074529773_monorepo_nestjs_prisma_turborepo_yarn_workspaces.txt
Q: DataStore - Models were generated with an unsupported version of codegen I have a form where some values are taken and at the end some images are added through expo image picker. All of those informations are to be uploaded to AWS amplify so upon importing the models I suddenly receive a weird error I haven't seen before and it says The moment I comment the line where I import the model it doesn't show that error but then I can't upload to AWS. import { DataStore } from "@aws-amplify/datastore"; import { House } from "../models"; A: The problem was coming from Codegen version being outdated I guess so running amplify upgrade then amplify codegen models upgraded the CLI and updated the codegen version therefore solving the issue
DataStore - Models were generated with an unsupported version of codegen
I have a form where some values are taken and at the end some images are added through expo image picker. All of those informations are to be uploaded to AWS amplify so upon importing the models I suddenly receive a weird error I haven't seen before and it says The moment I comment the line where I import the model it doesn't show that error but then I can't upload to AWS. import { DataStore } from "@aws-amplify/datastore"; import { House } from "../models";
[ "The problem was coming from Codegen version being outdated I guess so running amplify upgrade then amplify codegen models upgraded the CLI and updated the codegen version therefore solving the issue\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_amplify", "codegen", "react_native" ]
stackoverflow_0074648326_amazon_web_services_aws_amplify_codegen_react_native.txt
Q: Errror: SyntaxError: Unexpected end of input in Javascript I am trying to write a javascript and execute it in a Html file. I can run the Javascript file on my own computer without problems, but when I add it in a Html file and run it in a browser I got an Cors Error. I added the mode: 'no-cors'. After that, I got a 'SyntaxError: Unexpected end of input'. I can still only run the Javascript part so I am completely lost in trying to find the syntaxerror. Does anyone have a clue? <html> <body> <script> var apiUrl = 'https://bulkfollows.com/api/v2'; // define the data to be sent to the API var data = { key: "key", action: "balance" }; fetch(apiUrl, { method: 'POST', headers: { 'Content-Type': 'application/json' }, mode: 'no-cors', body: JSON.stringify(data) }) .then((response) => response.json()) .then((data) => console.log(data)) .catch((error) => console.log("Error: " + error)); </script> </body> </html> A: Try this. your error will be solved. but for cors, setting cors will not solve your problem . check comment fetch(apiUrl, { method: 'POST', headers: { 'Content-Type': 'application/json' }, mode: "no-cors", body: JSON.stringify(data) }) .then((response) => { if (!response.ok) { throw response; } return response.json(); }) .then((data) => console.log(data)) .catch(function(error) { console.log( error) });
Error: SyntaxError: Unexpected end of input in JavaScript
I am trying to write a JavaScript file and execute it from an HTML file. I can run the JavaScript file on my own computer without problems, but when I add it to an HTML file and run it in a browser I get a CORS error. I added the mode: 'no-cors'. After that, I got a 'SyntaxError: Unexpected end of input'. I can still only run the JavaScript part, so I am completely lost in trying to find the syntax error. Does anyone have a clue? <html> <body> <script> var apiUrl = 'https://bulkfollows.com/api/v2'; // define the data to be sent to the API var data = { key: "key", action: "balance" }; fetch(apiUrl, { method: 'POST', headers: { 'Content-Type': 'application/json' }, mode: 'no-cors', body: JSON.stringify(data) }) .then((response) => response.json()) .then((data) => console.log(data)) .catch((error) => console.log("Error: " + error)); </script> </body> </html>
[ "Try this. your error will be solved. but for cors, setting cors will not solve your problem . check comment\n fetch(apiUrl, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json'\n },\n mode: \"no-cors\",\n body: JSON.stringify(data)\n})\n .then((response) => {\n if (!response.ok) {\n throw response; \n }\n return response.json();\n })\n .then((data) => console.log(data))\n .catch(function(error) {\n console.log( error)\n });\n\n" ]
[ 1 ]
[]
[]
[ "javascript", "json" ]
stackoverflow_0074665035_javascript_json.txt
Q: Which type of OpenAPI pattern format is valid? Currently looking at an openapi.yaml that has two different formats for the pattern validator for a string. Country: pattern: ^(A(D|E|F|G|I|L|M|N|O|R|S|T|Q|U|W|X|Z)|B(A|B|D|E|F|G|H|I|J|L|M|N|O|R|S|T|V|W|Y|Z))$ type: string Currency: pattern: /^AED|AFN|ALL|AMD$/ type: string The documentation doesn't show / as a boundary character at all, so is this valid or invalid? I've used the Swagger Editor to input both but neither gives an error. A: The correct format for the pattern is myregex (formatted as a YAML or JSON string, with proper escaping if needed), not /myregex/ or /myregex/flags. Source: https://github.com/OAI/OpenAPI-Specification/issues/1985 Examples of valid patterns: # YAML pattern: \d+ # not anchored pattern: ^\d+$ # anchored pattern: '\d+' # \d+ pattern: "\\d+" # \d+ # JSON "pattern": "\\d+" # \d+ "pattern": "^\\d+$" # ^\d+$ In your example, the Country pattern is correct, and the Currency pattern is incorrect. In case of pattern: /^AED|AFN|ALL|AMD$/ (which is equivalent to pattern: "/^AED|AFN|ALL|AMD$/"), the / is considered a part of the pattern string itself, not the boundary character. As a result, this pattern won't match anything because extra characters appear outside of ^...$.
Which type of OpenAPI pattern format is valid?
Currently looking at an openapi.yaml that has two different formats for the pattern validator for a string. Country: pattern: ^(A(D|E|F|G|I|L|M|N|O|R|S|T|Q|U|W|X|Z)|B(A|B|D|E|F|G|H|I|J|L|M|N|O|R|S|T|V|W|Y|Z))$ type: string Currency: pattern: /^AED|AFN|ALL|AMD$/ type: string The documentation doesn't show / as a boundary character at all, so is this valid or invalid? I've used the Swagger Editor to input both but neither gives an error.
[ "The correct format for the pattern is myregex (formatted as a YAML or JSON string, with proper escaping if needed), not /myregex/ or /myregex/flags.\nSource: https://github.com/OAI/OpenAPI-Specification/issues/1985\nExamples of valid patterns:\n# YAML\npattern: \\d+ # not anchored\npattern: ^\\d+$ # anchored\n\npattern: '\\d+' # \\d+\npattern: \"\\\\d+\" # \\d+\n\n# JSON\n\"pattern\": \"\\\\d+\" # \\d+\n\"pattern\": \"^\\\\d+$\" # ^\\d+$\n\n\nIn your example, the Country pattern is correct, and the Currency pattern is incorrect.\nIn case of pattern: /^AED|AFN|ALL|AMD$/ (which is equivalent to pattern: \"/^AED|AFN|ALL|AMD$/\"), the / is considered a part of the pattern string itself, not the boundary character. As a result, this pattern won't match anything because extra characters appear outside of ^...$.\n" ]
[ 1 ]
[]
[]
[ "openapi" ]
stackoverflow_0074661902_openapi.txt
Q: Execute LaunchedEffect only when an item is visible in LazyColumn I am using a LazyColumn and there are several items in which one of item has a LaunchedEffect which needs to be executed only when the view is visible. On the other hand, it gets executed as soon as the LazyColumn is rendered. How to check whether the item is visible and only then execute the LaunchedEffect? LazyColumn() { item {Composable1()} item {Composable2()} item {Composable3()} . . . . item {Composable19()} item {Composable20()} } Lets assume that Composable19() has a Pager implementation and I want to start auto scrolling once the view is visible by using the LaunchedEffect in this way. The auto scroll is happening even though the view is not visible. LaunchedEffect(pagerState.currentPage) { //auto scroll logic } A: LazyScrollState has the firstVisibleItemIndex property. The last visible item can be determined by: val lastIndex: Int? = lazyListState.layoutInfo.visibleItemsInfo.lastOrNull()?.index Then you test to see if the list item index you are interested in is within the range. For example if you want your effect to launch when list item 5 becomes visible: val lastIndex: Int = lazyListState.layoutInfo.visibleItemsInfo.lastOrNull()?.index ?: -1 LaunchedEffect((lazyListState.firstVisibleItemIndex <= 5) && (5 <= lastIndex)) { Log.i("First visible item", lazyListState.firstVisibleItemIndex.toString()) // Launch your auto scrolling here... } LazyColumn(state = lazyListState) { } NOTE: For this to work, DON'T use rememberLazyListState. Instead, create an instance of LazyListState in your viewmodel and pass it to your composable. A: If you want to know if an item is visible you can use the LazyListState#layoutInfo that contains information about the visible items. Since you are reading the state you should use derivedStateOf to avoid redundant recompositions and poor performance. To know if the LazyColumn contains an item you can use: @Composable private fun LazyListState.containItem(index:Int): Boolean { return remember(this) { derivedStateOf { val visibleItemsInfo = layoutInfo.visibleItemsInfo if (layoutInfo.totalItemsCount == 0) { false } else { visibleItemsInfo.toMutableList().map { it.index }.contains(index) } } }.value } Then you can use: val state = rememberLazyListState() LazyColumn(state = state){ //items } //Check for a specific item var isItem2Visible = state.containItem(index = 2) LaunchedEffect( isItem2Visible){ if (isItem2Visible) //... item visible do something else //... item not visible do something } If you want to know all the visible items you can use something similar: @Composable private fun LazyListState.visibleItems(): List<Int> { return remember(this) { derivedStateOf { val visibleItemsInfo = layoutInfo.visibleItemsInfo if (layoutInfo.totalItemsCount == 0) { emptyList() } else { visibleItemsInfo.toMutableList().map { it.index } } } }.value }
Execute LaunchedEffect only when an item is visible in LazyColumn
I am using a LazyColumn and there are several items in which one of item has a LaunchedEffect which needs to be executed only when the view is visible. On the other hand, it gets executed as soon as the LazyColumn is rendered. How to check whether the item is visible and only then execute the LaunchedEffect? LazyColumn() { item {Composable1()} item {Composable2()} item {Composable3()} . . . . item {Composable19()} item {Composable20()} } Lets assume that Composable19() has a Pager implementation and I want to start auto scrolling once the view is visible by using the LaunchedEffect in this way. The auto scroll is happening even though the view is not visible. LaunchedEffect(pagerState.currentPage) { //auto scroll logic }
[ "LazyScrollState has the firstVisibleItemIndex property. The last visible item can be determined by:\nval lastIndex: Int? = lazyListState.layoutInfo.visibleItemsInfo.lastOrNull()?.index\nThen you test to see if the list item index you are interested is within the range. For example if you want your effect to launch when list item 5 becomes visible:\nval lastIndex: Int = lazyListState.layoutInfo.visibleItemsInfo.lastOrNull()?.index ?: -1\n\nLaunchedEffect((lazyListState.firstVisibleItemIndex > 5 ) && ( 5 < lastIndex)) {\n Log.i(\"First visible item\", lazyListState.firstVisibleItemIndex.toString())\n\n // Launch your auto scrolling here...\n}\n\nLazyColumn(state = lazyListState) {\n\n}\n\nNOTE: For this to work, DON'T use rememberLazyListState. Instead, create an instance of LazyListState in your viewmodel and pass it to your composable.\n", "If you want to know if an item is visible you can use the LazyListState#layoutInfo that contains information about the visible items.\nSince you are reading the state you should use derivedStateOf to avoid redundant recompositions and poor performance\nTo know if the LazyColumn contains an item you can use:\n@Composable\nprivate fun LazyListState.containItem(index:Int): Boolean {\n\n return remember(this) {\n derivedStateOf {\n val visibleItemsInfo = layoutInfo.visibleItemsInfo\n if (layoutInfo.totalItemsCount == 0) {\n false\n } else {\n visibleItemsInfo.toMutableList().map { it.index }.contains(index)\n }\n }\n }.value\n}\n\nThen you can use:\n val state = rememberLazyListState()\n\n LazyColumn(state = state){\n //items\n }\n\n //Check for a specific item\n var isItem2Visible = state.containItem(index = 2)\n\n LaunchedEffect( isItem2Visible){\n if (isItem2Visible)\n //... item visible do something\n else\n //... item not visible do something\n }\n\nIf you want to know all the visible items you can use something similar:\n@Composable\nprivate fun LazyListState.visibleItems(): List<Int> {\n\n return remember(this) {\n derivedStateOf {\n val visibleItemsInfo = layoutInfo.visibleItemsInfo\n if (layoutInfo.totalItemsCount == 0) {\n emptyList()\n } else {\n visibleItemsInfo.toMutableList().map { it.index }\n }\n }\n }.value\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "android", "android_jetpack_compose", "android_jetpack_compose_lazy_column", "android_jetpack_compose_list" ]
stackoverflow_0069884375_android_android_jetpack_compose_android_jetpack_compose_lazy_column_android_jetpack_compose_list.txt
Q: Read "fractions" from .txt file and add them using C I'm trying to read "fractions" from a .txt file and store them in an array to later perform the sum and then simplify the result. Fractions in the text file are represented as follows(separated by a blankspace): 3/2 -7/4 1/6 9/5. I'm having trouble storing the data into the array. #include <stdio.h> #include <stdlib.h> #define N 20 //just for testing typedef struct{ int numerator; int denominator; }TFraction; int main(){ TFraction F[N]; fillArray(F); } int fillArray(TFraction A[]){ FILE *txt; char skip; //going to be rewritten with "/", can be ignored int i=0; fp=fopen("fractions.txt","r"); while(!feof(fp)){ fscanf(fp,"%d %c %d", A[i].numerator, skip, A[i].denominator); i++; } fclose(fp); } Once they array is filled, I should be able to perform the sum. Any tips or ideas on my proposed solution, I'm open to suggestions on how to solve this.
Read "fractions" from .txt file and add them using C
I'm trying to read "fractions" from a .txt file and store them in an array to later perform the sum and then simplify the result. Fractions in the text file are represented as follows(separated by a blankspace): 3/2 -7/4 1/6 9/5. I'm having trouble storing the data into the array. #include <stdio.h> #include <stdlib.h> #define N 20 //just for testing typedef struct{ int numerator; int denominator; }TFraction; int main(){ TFraction F[N]; fillArray(F); } int fillArray(TFraction A[]){ FILE *txt; char skip; //going to be rewritten with "/", can be ignored int i=0; fp=fopen("fractions.txt","r"); while(!feof(fp)){ fscanf(fp,"%d %c %d", A[i].numerator, skip, A[i].denominator); i++; } fclose(fp); } Once they array is filled, I should be able to perform the sum. Any tips or ideas on my proposed solution, I'm open to suggestions on how to solve this.
[]
[]
[ "As soon as you start parsing text from an external source (from the user, from a file, ... - anything that isn't a string literal in your own code) you can't assume the text is correct and should have adequate error handling that detects problems and describes what the problem is.\nFor an example, imagine what happens if the file contains \"3.2 -7/4 1/69/5\" - do you want to mistakenly assume that 3.2 is 3/2 and give the wrong answer? Do you you want to assume 1/69/5 is 1/69 followed by 5/??? without telling the user that there's a syntax error on the last fraction?\nOnce you've solved the parsing problem you'll have another problem...\nThe addition of fractions involves making sure that the denominators are equal. This is typically done using multiplication - e.g. 3/2 + -7/4 = (3*4)/(2*4) + (-7*2)/(4*2) = 12/8 + -14/8 = (12 + -14)/8 = (-2)/8.\nFor many additions; those multiplications cause the numerators and denominators to get very big, possibly too big to fit in an int. You can minimize the risk by doing a \"fast simplify\" (if numerator and divisor are both even then divide them both by 2) or a much more expensive \"full simplify\" (determine the Greatest Common Divisor and divide numerator and divisor by the GCD) after each addition.\nNote that your question says \"..and then simplify the result\", so you might need code to do \"full simplify\" anyway.\nAlternatively, maybe an estimate with precision loss problems is \"close enough\" and you can just convert fractions into floating point values before doing addition. This would change the whole nature of the problem.\nFinally; if you only need to add the fractions then you don't need to store them and don't need an array to store them. You can have a loop that does \"get next fraction; add it to the current total\".\n" ]
[ -1 ]
[ "c", "file", "fractions", "math" ]
stackoverflow_0074664895_c_file_fractions_math.txt
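As a quick cross-check for whichever C implementation you end up with, Python's standard-library fractions module performs the same parse, sum, and GCD-simplify pipeline, so it can generate expected outputs for test inputs. This is only a sketch; the input string is the example from the question.

    from fractions import Fraction

    text = "3/2 -7/4 1/6 9/5"   # sample input from the question
    total = sum(Fraction(token) for token in text.split())
    print(total)                 # 103/60, already reduced via GCD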
Q: Starpattern in console app I need to create the following pattern: It is homework, this question I failed the first time. I read now that I should have only used "*" one time, but how would this even work in that case? Would appreciate if anyone could give me some insight in how to think. My code is down below: using System; class StarPattern { public static void Main() { for (int i = 0; i < 5; i++) { Console.Write("*"); } for (int a = 0; a <= 0; a++) { Console.WriteLine(""); Console.Write("*"); } for (int c = 0; c <= 0; c++) { Console.WriteLine(" *"); } for (int d = 0; d <= 1; d++ ) { Console.Write("*"); Console.WriteLine(" *"); } for (int e = 0; e < 5; e++ ) { Console.Write("*"); } Console.ReadLine(); } } A: You can simplify your code by nesting loops so outer (i) = rows and inner (j) = columns. From looking at your sample, you want to write out if you're on the boundary of either so you can add a condition to just write out on min and max. private static void Main(string[] args) { for (int i = 0; i <= 4; i++) { for (int j = 0; j <= 4; j++) { if (i == 0 || i == 4 || j == 0 || j == 4) { Console.Write("*"); } else { Console.Write(" "); } } Console.WriteLine(); } Console.ReadKey(); } I'd probably replace 0 with a constant called MIN and 4 with a constant called MAX to save duplicating them. That way, you could increase the size of the square by just changing the constants. A: Hardly anyone is commenting their code for you. How disappointing! The trick here is to focus on what is important and define values that you will; be using many times in the code only want to change once if requirements need tweeking These values are height and width - the core components of a rectangle. The golden rule for the pattern is: If the row and column of each character is on the edge of the square, print a * using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace Stars { class Program { static int Width = 5; //define the width of the square in characters static int Height = 5; //define the height of the square in characters static void Main(string[] args) { for (int row = 0; row <= Height; row++) //Each iteration here is one row. { //loop representing columns. This is NESTED within the rows so //that each row prints more than one column for (int column = 0; column <= Width; column++) { if (IsCentreOfSquare(row, column)) //calculate if the current row and column coordinates are the interior of the square { Console.Write(" "); } else { Console.Write("*"); } } Console.WriteLine(); //this row is over. move to the next row } Console.ReadLine(); //pause so that the user can admire the pretty picture. } /// <summary> /// Calculates if the row and column indexes specified are in the interior of the square pattern /// </summary> /// <returns></returns> private static bool IsCentreOfSquare(int row, int col) { if (row > 0 && row < Height) { if (col > 0 && col < Width) { return true; } } return false; } } } A: This might be overkill for such a simple program, but make it scalable and add some const ints to make the design able to be modified whenever you'd like! Good question. It's fun to feel like I'm tutoring again! Glad to see you at least gave it an honest attempt :) class Program { // Use string if you are okay with breaking the current pattern. // private static string myDesign = "*"; // Use char if you want to ensure the integrity of your pattern. 
private static char myDesign = '*'; private const int COLUMN_COUNT = 5; private const int ROW_COUNT = 5; private const int FIRST_ROW = 0; private const int LAST_ROW = 4; private const int FIRST_COLUMN = 0; private const int LAST_COLUMN = 4; static void Main(string[] args) { // Iterate through the desired amount of rows. for (int row = 0; row < ROW_COUNT; row++) { // Iterate through the desired amount of columns. for (int column = 0; column < COLUMN_COUNT; column++) { // If it is your first or last column or row, then write your character. if (column == FIRST_COLUMN || column == LAST_COLUMN || row == FIRST_ROW || row == LAST_ROW) { Console.Write(myDesign); } // If anywhere in between provide your blank character. else { Console.Write(" "); } } Console.WriteLine(""); } Console.Read(); } } A: It is not that difficult to do. You need to create two loops: one for the rows and one for the columns and then determine whether to print a star or not. Only the first and the last rows and columns have a star. In my example I use two counters that start from 0 and count to 4. If the remainder of either counter value divided by 4 equals to 0 then I print a star, otherwise a space. using System; namespace Stars { internal static class StarPattern { private static void Main() { for (int i = 0; i < 5; i++) { for (int j = 0; j < 5; j++) { Console.Write((i%4 == 0) | (j%4 == 0) ? '*' : ' '); } Console.WriteLine(); } } } } A: You need to print a certain number of lines with a certain sequence of characters per line. One approach would be to create or find a method that prints whatever character sequence you need. Then create a method to sequence calls to to that. Finally, print the result of all that. private void PrintBox(char c, int width, int height) { var result = new List<string>(); result.Add(new string(c, width)); // First line for (var i = 0; i < height - 2; i++) { string iLine; int spaceCount = width - 2; iLine = new string(c, 1) + new string(' ', spaceCount) + new string(c, 1); result.Add(iLine); } result.Add(new string(c, width)); // Last line // FYI, there's a StringBuilder class that makes all this easier foreach (var line in result) { Console.WriteLine(line); } } The important OOP concept here is that we made use of a single reusable method for constructing a sequence of characters - new string(char, int). If we wanted to create a Circle, we could create a BuildCircle method based on the same principle. A: ## A little too many loops but ok ## static void EmptyCubeCreator(int cubeCount, char symbol) { for (int i = 0; i <= cubeCount; i++) { if (i == 0 || i == cubeCount) for (int j = 0; j <= cubeCount; j++) Console.Write(symbol); else for (int c = 0; c <= cubeCount; c++) if (c == 0 || c == cubeCount) Console.Write(symbol); else Console.Write(' '); Console.WriteLine(); } }
Starpattern in console app
I need to create the following pattern: It is homework, this question I failed the first time. I read now that I should have only used "*" one time, but how would this even work in that case? Would appreciate if anyone could give me some insight in how to think. My code is down below: using System; class StarPattern { public static void Main() { for (int i = 0; i < 5; i++) { Console.Write("*"); } for (int a = 0; a <= 0; a++) { Console.WriteLine(""); Console.Write("*"); } for (int c = 0; c <= 0; c++) { Console.WriteLine(" *"); } for (int d = 0; d <= 1; d++ ) { Console.Write("*"); Console.WriteLine(" *"); } for (int e = 0; e < 5; e++ ) { Console.Write("*"); } Console.ReadLine(); } }
[ "You can simplify your code by nesting loops so outer (i) = rows and inner (j) = columns. From looking at your sample, you want to write out if you're on the boundary of either so you can add a condition to just write out on min and max.\nprivate static void Main(string[] args)\n{\n for (int i = 0; i <= 4; i++)\n {\n for (int j = 0; j <= 4; j++)\n {\n if (i == 0 || i == 4 || j == 0 || j == 4)\n {\n Console.Write(\"*\");\n }\n else\n {\n Console.Write(\" \");\n }\n }\n Console.WriteLine();\n }\n Console.ReadKey();\n}\n\nI'd probably replace 0 with a constant called MIN and 4 with a constant called MAX to save duplicating them. That way, you could increase the size of the square by just changing the constants.\n", "Hardly anyone is commenting their code for you. How disappointing!\nThe trick here is to focus on what is important and define values that you will;\n\nbe using many times in the code\nonly want to change once if requirements need tweeking\n\nThese values are height and width - the core components of a rectangle.\nThe golden rule for the pattern is: If the row and column of each character is on the edge of the square, print a *\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\nnamespace Stars\n{\n class Program\n {\n static int Width = 5; //define the width of the square in characters\n static int Height = 5; //define the height of the square in characters\n\n static void Main(string[] args)\n {\n for (int row = 0; row <= Height; row++) //Each iteration here is one row.\n {\n //loop representing columns. This is NESTED within the rows so \n //that each row prints more than one column\n for (int column = 0; column <= Width; column++)\n {\n if (IsCentreOfSquare(row, column)) //calculate if the current row and column coordinates are the interior of the square\n {\n Console.Write(\" \");\n }\n else\n {\n Console.Write(\"*\");\n }\n }\n Console.WriteLine(); //this row is over. move to the next row\n }\n Console.ReadLine(); //pause so that the user can admire the pretty picture.\n }\n\n /// <summary>\n /// Calculates if the row and column indexes specified are in the interior of the square pattern\n /// </summary>\n /// <returns></returns>\n private static bool IsCentreOfSquare(int row, int col)\n {\n if (row > 0 && row < Height)\n {\n if (col > 0 && col < Width)\n {\n return true;\n }\n }\n\n return false;\n }\n }\n}\n\n", "This might be overkill for such a simple program, but make it scalable and add some const ints to make the design able to be modified whenever you'd like!\nGood question. It's fun to feel like I'm tutoring again! 
Glad to see you at least gave it an honest attempt :)\nclass Program\n{\n // Use string if you are okay with breaking the current pattern.\n // private static string myDesign = \"*\";\n // Use char if you want to ensure the integrity of your pattern.\n private static char myDesign = '*';\n private const int COLUMN_COUNT = 5;\n private const int ROW_COUNT = 5;\n private const int FIRST_ROW = 0;\n private const int LAST_ROW = 4;\n private const int FIRST_COLUMN = 0;\n private const int LAST_COLUMN = 4;\n static void Main(string[] args)\n {\n // Iterate through the desired amount of rows.\n for (int row = 0; row < ROW_COUNT; row++)\n {\n // Iterate through the desired amount of columns.\n for (int column = 0; column < COLUMN_COUNT; column++)\n {\n // If it is your first or last column or row, then write your character.\n if (column == FIRST_COLUMN || column == LAST_COLUMN || row == FIRST_ROW || row == LAST_ROW)\n {\n Console.Write(myDesign);\n }\n // If anywhere in between provide your blank character.\n else\n {\n Console.Write(\" \");\n }\n }\n Console.WriteLine(\"\");\n }\n Console.Read();\n }\n}\n\n", "It is not that difficult to do. You need to create two loops: one for the rows and one for the columns and then determine whether to print a star or not. Only the first and the last rows and columns have a star.\nIn my example I use two counters that start from 0 and count to 4. If the remainder of either counter value divided by 4 equals to 0 then I print a star, otherwise a space.\nusing System;\n\nnamespace Stars\n{\n internal static class StarPattern\n {\n private static void Main()\n {\n for (int i = 0; i < 5; i++)\n {\n for (int j = 0; j < 5; j++)\n {\n Console.Write((i%4 == 0) | (j%4 == 0) ? '*' : ' ');\n }\n Console.WriteLine();\n }\n }\n }\n}\n\n", "You need to print a certain number of lines with a certain sequence of characters per line.\nOne approach would be to create or find a method that prints whatever character sequence you need. Then create a method to sequence calls to to that. Finally, print the result of all that.\nprivate void PrintBox(char c, int width, int height)\n{\n var result = new List<string>();\n result.Add(new string(c, width)); // First line\n\n for (var i = 0; i < height - 2; i++)\n {\n string iLine;\n int spaceCount = width - 2;\n iLine = new string(c, 1) + new string(' ', spaceCount) + new string(c, 1);\n result.Add(iLine);\n }\n\n result.Add(new string(c, width)); // Last line\n\n // FYI, there's a StringBuilder class that makes all this easier\n\n foreach (var line in result)\n {\n Console.WriteLine(line);\n }\n}\n\nThe important OOP concept here is that we made use of a single reusable method for constructing a sequence of characters - new string(char, int). If we wanted to create a Circle, we could create a BuildCircle method based on the same principle.\n", " ## A little too many loops but ok ##\n static void EmptyCubeCreator(int cubeCount, char symbol)\n {\n for (int i = 0; i <= cubeCount; i++)\n {\n if (i == 0 || i == cubeCount)\n for (int j = 0; j <= cubeCount; j++)\n Console.Write(symbol);\n else\n for (int c = 0; c <= cubeCount; c++)\n if (c == 0 || c == cubeCount)\n Console.Write(symbol);\n else\n Console.Write(' ');\n\n Console.WriteLine();\n }\n }\n\n" ]
[ 11, 5, 1, 1, 0, 0 ]
[]
[]
[ "c#" ]
stackoverflow_0027507192_c#.txt
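The rule every answer above converges on can be restated in a few lines of Python for comparison (the function name and defaults are illustrative, not from any answer): a cell gets the symbol exactly when its row or column index sits on the edge.

    def hollow_box(size=5, symbol="*"):
        for row in range(size):
            for col in range(size):
                on_edge = row in (0, size - 1) or col in (0, size - 1)
                print(symbol if on_edge else " ", end="")
            print()  # end the row

    hollow_box()  # prints a 5x5 box with a hollow centre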
Q: How do I add a column to a pyspark dataframe with the timestamp of the minimum value over a window? Say I have a pyspark dataframe such as: Timestamp Foo 2022-12-02T10:00:00 12 2022-12-02T10:01:00 24 2022-12-02T10:02:00 26 2022-12-02T10:03:00 20 2022-12-02T10:04:00 31 2022-12-02T10:05:00 30 2022-12-02T10:06:00 23 2022-12-02T10:07:00 35 2022-12-02T10:08:00 10 2022-12-02T10:09:00 20 2022-12-02T10:10:00 40 I add a column 'min_value', being the minimum value of the column 'Foo' in a five minutes backwards window, as: window_bw = Window.orderBy(F.col('timestamp').cast('int')).rangeBetween(-5*60, 0) df = df.withColumn('min_value', F.min('Foo').over(window_backwards)) That is easy enough, but I cannot figure out how to add another column "min_value_timestamp" which is the timestamp of the row 'min_value' was taken from. I tried using when like this: df = (df.withColumn('min_value_timestamp', F.when(F.col('Foo') == F.col('min_value'), F.col('timestamp')) .withColumn('min_value_timestamp', F.when(F.last('min_value_timestamp', ignorenulls = True).over(window_bw))) Unfortunately, that doesn't work, because a certain row may not have the minimum value for its own window, but have the minimum value for the window of a later row. So in the example dataframe the first six rows get the correct 'min_value_timestamp', but the seventh row would get 'min_value_timestamp' null, since it's calculated in parallel and all rows in the window have 'min_value_timestamp' null at that point (and even if it wasn't, it wouldn't matter anyhow since it would be the wrong timestamp. Row four's min_value and corresponding min_value_timestamp comes from row one in its window, but row 4 is also where the min_value of rows 7 and 8 comes from, so they should have the timestamp of row 4 as 'min_value_timestamp', which wouldn't work with the logic above). Does anyone know a way to do it? Thanks in advance A: You can combine time and value into a struct then collect within the window and sort them by value then extract value of first element of list. 
from pyspark.sql import functions as F from pyspark.sql import Window as W data = [[f'2022-12-03 00:{"%.2d" % i}:00', random.randint(0, 30)] for i in range(20)] df = ( spark.createDataFrame(data = data, schema = ['time', 'value']) .withColumn('timestamp', F.unix_timestamp('time')) ) window = W.orderBy(F.col('timestamp').cast('int')).rangeBetween(-5*60, 0) ( df .withColumn('past_values', F.collect_list(F.struct('value', 'time')).over(window)) .withColumn('min_value', F.sort_array('past_values')[0]['time']) ).show() +-------------------+-----+----------+--------------------+-------------------+ | time|value| timestamp| past_values| min_value| +-------------------+-----+----------+--------------------+-------------------+ |2022-12-03 00:00:00| 29|1670013000|[{29, 2022-12-03 ...|2022-12-03 00:00:00| |2022-12-03 00:01:00| 23|1670013060|[{29, 2022-12-03 ...|2022-12-03 00:01:00| |2022-12-03 00:02:00| 29|1670013120|[{29, 2022-12-03 ...|2022-12-03 00:01:00| |2022-12-03 00:03:00| 6|1670013180|[{29, 2022-12-03 ...|2022-12-03 00:03:00| |2022-12-03 00:04:00| 26|1670013240|[{29, 2022-12-03 ...|2022-12-03 00:03:00| |2022-12-03 00:05:00| 1|1670013300|[{29, 2022-12-03 ...|2022-12-03 00:05:00| |2022-12-03 00:06:00| 1|1670013360|[{23, 2022-12-03 ...|2022-12-03 00:05:00| |2022-12-03 00:07:00| 14|1670013420|[{29, 2022-12-03 ...|2022-12-03 00:05:00| |2022-12-03 00:08:00| 16|1670013480|[{6, 2022-12-03 0...|2022-12-03 00:05:00| |2022-12-03 00:09:00| 19|1670013540|[{26, 2022-12-03 ...|2022-12-03 00:05:00| |2022-12-03 00:10:00| 29|1670013600|[{1, 2022-12-03 0...|2022-12-03 00:05:00| |2022-12-03 00:11:00| 1|1670013660|[{1, 2022-12-03 0...|2022-12-03 00:06:00| |2022-12-03 00:12:00| 15|1670013720|[{14, 2022-12-03 ...|2022-12-03 00:11:00| |2022-12-03 00:13:00| 22|1670013780|[{16, 2022-12-03 ...|2022-12-03 00:11:00| |2022-12-03 00:14:00| 11|1670013840|[{19, 2022-12-03 ...|2022-12-03 00:11:00| |2022-12-03 00:15:00| 9|1670013900|[{29, 2022-12-03 ...|2022-12-03 00:11:00| |2022-12-03 00:16:00| 30|1670013960|[{1, 2022-12-03 0...|2022-12-03 00:11:00| |2022-12-03 00:17:00| 28|1670014020|[{15, 2022-12-03 ...|2022-12-03 00:15:00| |2022-12-03 00:18:00| 30|1670014080|[{22, 2022-12-03 ...|2022-12-03 00:15:00| |2022-12-03 00:19:00| 4|1670014140|[{11, 2022-12-03 ...|2022-12-03 00:19:00| +-------------------+-----+----------+--------------------+-------------------+
How do I add a column to a pyspark dataframe with the timestamp of the minimum value over a window?
Say I have a pyspark dataframe such as: Timestamp Foo 2022-12-02T10:00:00 12 2022-12-02T10:01:00 24 2022-12-02T10:02:00 26 2022-12-02T10:03:00 20 2022-12-02T10:04:00 31 2022-12-02T10:05:00 30 2022-12-02T10:06:00 23 2022-12-02T10:07:00 35 2022-12-02T10:08:00 10 2022-12-02T10:09:00 20 2022-12-02T10:10:00 40 I add a column 'min_value', being the minimum value of the column 'Foo' in a five minutes backwards window, as: window_bw = Window.orderBy(F.col('timestamp').cast('int')).rangeBetween(-5*60, 0) df = df.withColumn('min_value', F.min('Foo').over(window_backwards)) That is easy enough, but I cannot figure out how to add another column "min_value_timestamp" which is the timestamp of the row 'min_value' was taken from. I tried using when like this: df = (df.withColumn('min_value_timestamp', F.when(F.col('Foo') == F.col('min_value'), F.col('timestamp')) .withColumn('min_value_timestamp', F.when(F.last('min_value_timestamp', ignorenulls = True).over(window_bw))) Unfortunately, that doesn't work, because a certain row may not have the minimum value for its own window, but have the minimum value for the window of a later row. So in the example dataframe the first six rows get the correct 'min_value_timestamp', but the seventh row would get 'min_value_timestamp' null, since it's calculated in parallel and all rows in the window have 'min_value_timestamp' null at that point (and even if it wasn't, it wouldn't matter anyhow since it would be the wrong timestamp. Row four's min_value and corresponding min_value_timestamp comes from row one in its window, but row 4 is also where the min_value of rows 7 and 8 comes from, so they should have the timestamp of row 4 as 'min_value_timestamp', which wouldn't work with the logic above). Does anyone know a way to do it? Thanks in advance
[ "You can combine time and value into a struct then collect within the window and sort them by value then extract value of first element of list.\nfrom pyspark.sql import functions as F\nfrom pyspark.sql import Window as W\n\ndata = [[f'2022-12-03 00:{\"%.2d\" % i}:00', random.randint(0, 30)] for i in range(20)]\ndf = (\n spark.createDataFrame(data = data, schema = ['time', 'value'])\n .withColumn('timestamp', F.unix_timestamp('time'))\n)\n\nwindow = W.orderBy(F.col('timestamp').cast('int')).rangeBetween(-5*60, 0)\n(\n df\n .withColumn('past_values', F.collect_list(F.struct('value', 'time')).over(window))\n .withColumn('min_value', F.sort_array('past_values')[0]['time'])\n).show()\n\n+-------------------+-----+----------+--------------------+-------------------+\n| time|value| timestamp| past_values| min_value|\n+-------------------+-----+----------+--------------------+-------------------+\n|2022-12-03 00:00:00| 29|1670013000|[{29, 2022-12-03 ...|2022-12-03 00:00:00|\n|2022-12-03 00:01:00| 23|1670013060|[{29, 2022-12-03 ...|2022-12-03 00:01:00|\n|2022-12-03 00:02:00| 29|1670013120|[{29, 2022-12-03 ...|2022-12-03 00:01:00|\n|2022-12-03 00:03:00| 6|1670013180|[{29, 2022-12-03 ...|2022-12-03 00:03:00|\n|2022-12-03 00:04:00| 26|1670013240|[{29, 2022-12-03 ...|2022-12-03 00:03:00|\n|2022-12-03 00:05:00| 1|1670013300|[{29, 2022-12-03 ...|2022-12-03 00:05:00|\n|2022-12-03 00:06:00| 1|1670013360|[{23, 2022-12-03 ...|2022-12-03 00:05:00|\n|2022-12-03 00:07:00| 14|1670013420|[{29, 2022-12-03 ...|2022-12-03 00:05:00|\n|2022-12-03 00:08:00| 16|1670013480|[{6, 2022-12-03 0...|2022-12-03 00:05:00|\n|2022-12-03 00:09:00| 19|1670013540|[{26, 2022-12-03 ...|2022-12-03 00:05:00|\n|2022-12-03 00:10:00| 29|1670013600|[{1, 2022-12-03 0...|2022-12-03 00:05:00|\n|2022-12-03 00:11:00| 1|1670013660|[{1, 2022-12-03 0...|2022-12-03 00:06:00|\n|2022-12-03 00:12:00| 15|1670013720|[{14, 2022-12-03 ...|2022-12-03 00:11:00|\n|2022-12-03 00:13:00| 22|1670013780|[{16, 2022-12-03 ...|2022-12-03 00:11:00|\n|2022-12-03 00:14:00| 11|1670013840|[{19, 2022-12-03 ...|2022-12-03 00:11:00|\n|2022-12-03 00:15:00| 9|1670013900|[{29, 2022-12-03 ...|2022-12-03 00:11:00|\n|2022-12-03 00:16:00| 30|1670013960|[{1, 2022-12-03 0...|2022-12-03 00:11:00|\n|2022-12-03 00:17:00| 28|1670014020|[{15, 2022-12-03 ...|2022-12-03 00:15:00|\n|2022-12-03 00:18:00| 30|1670014080|[{22, 2022-12-03 ...|2022-12-03 00:15:00|\n|2022-12-03 00:19:00| 4|1670014140|[{11, 2022-12-03 ...|2022-12-03 00:19:00|\n+-------------------+-----+----------+--------------------+-------------------+\n\n" ]
[ 0 ]
[]
[]
[ "pyspark" ]
stackoverflow_0074654324_pyspark.txt
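If the Spark version in use supports ordering on struct columns (recent releases do), the collect-and-sort step in the answer above can be replaced by taking a window minimum over a (value, time) struct, which avoids materialising the whole window as a list. This is a sketch under that assumption, reusing the column names from the answer:

    from pyspark.sql import functions as F
    from pyspark.sql import Window as W

    window = W.orderBy(F.col("timestamp").cast("int")).rangeBetween(-5 * 60, 0)

    # structs compare field by field, so the minimum struct carries the
    # smallest value together with its timestamp (ties break on earlier time)
    result = (
        df
        .withColumn("min_struct", F.min(F.struct("value", "time")).over(window))
        .withColumn("min_value", F.col("min_struct.value"))
        .withColumn("min_value_timestamp", F.col("min_struct.time"))
        .drop("min_struct")
    )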
Q: Logic App - Twitter trigger issue: Create and authorize OAuth connection failed I am trying to create a simple Logic app with Twitter trigger "When a new tweet is posted". My steps: Add a trigger in designer Connection name - TwitterConnector Authentication Type - Use default shared application Click "Sign in" Authorize MS Azure Logic app to access my account Result: I got a message "Create and authorize OAuth connection failed": Create and authorize OAuth connection failed If I save Logic app and run it, I get error message 400 Bad request: { "status": 400, "message": "Key 'Token' in connection profile is not valid.\r\n inner exception: Unexpected character encountered while parsing value: h. Path '', line 0, position 0.\r\nclientRequestId: 245d0f38-5f84-412d-bd20-d53c197488b7", "error": { "message": "Key 'Token' in connection profile is not valid.\r\n inner exception: Unexpected character encountered while parsing value: h. Path '', line 0, position 0."}, "source": "twitter-ase.azconn-ase.p.azurewebsites.net" } A: We have created a consumption-based logic app in our local environment with the Twitter trigger "When a new tweet is posted". The Twitter trigger in logic apps supports two authentication types to create a connection. Use default shared application Bring your own application When we tried creating a connection using the Use default shared application authentication mode, the connection creation failed. Using the Bring your own application authentication mode we were able to create the connection by following this documentation & authenticate to it successfully. Key 'Token' in connection profile is not valid.\r\n inner exception: Unexpected character encountered while parsing value: h. Path '', line 0, position 0." Based on the error message, we have gone through a couple of Microsoft community blogs; the suggested workaround is to delete the current connection, clear the cache of your browser and try creating a new connection. If you still face the issue we would suggest you open a discussion over Microsoft Q&A or create a technical support ticket by following the link, where the technical support team would help you in troubleshooting the issue from the platform end. A: Had the same issue; to resolve it I did the following steps: Changed my twitter app access to Elevated Setup user authentication settings in the twitter app: App Permissions: Read and Write Type of app: native Call Back URI: https://global.consent.azure-apim.net/redirect Deleted previous api connections of the logic app present in the resource group. Selected Bring your own application in the authentication mode of logic app and entered the twitter api key and secret in the consumer key and secret. After this I was able to log in. Just to be on the safer side I enabled pop-ups, allowed all cookies, cleared browser history, cache, passwords and had logged in to twitter developer account in the same browser later.
Logic App - Twitter trigger issue: Create and authorize OAuth connection failed
I am trying to create a simple Logic app with Twitter trigger "When a new tweet is posted". My steps: Add a trigger in designer Connection name - TwitterConnector Authentication Type - Use default shared application Click "Sign in" Authorize MS Azure Logic app to access my account Result: I got a message "Create and authorize OAuth connection failed": Create and authorize OAuth connection failed If I save Logic app and run it, I get error message 400 Bad request: { "status": 400, "message": "Key 'Token' in connection profile is not valid.\r\n inner exception: Unexpected character encountered while parsing value: h. Path '', line 0, position 0.\r\nclientRequestId: 245d0f38-5f84-412d-bd20-d53c197488b7", "error": { "message": "Key 'Token' in connection profile is not valid.\r\n inner exception: Unexpected character encountered while parsing value: h. Path '', line 0, position 0."}, "source": "twitter-ase.azconn-ase.p.azurewebsites.net" }
[ "We have created a consumption based logic app in our local environment with Twitter trigger \"When a new tweet is posted\".\nTwitter trigger in logic apps supports two authentication types to create an connection.\n\nUse default shared application\nBring your own application\n\nTried creating a connection using the authentication mode as Use default shared application connection creation got failed.\nUsing Bring your own application authentication mode we are able to create the connection by following this documentation & authenticate to it successfully.\n\nKey 'Token' in connection profile is not valid.\\r\\n inner exception: Unexpected character encountered while parsing value: h. Path '', line 0, position 0.\"\n\nBased on the error message , we have gone through couple of Microsoft community blogs suggested workaround is to delete the current connection, clear the cache of your browser and try creating a new connection.\nIf you still faces the issue we would suggest you to open a discussion over Microsoft Q&A or please create a technical support ticker by following the link where technical support team would help you in troubleshooting the issue from platform end.\n", "Had the same issue to resolve it i did the following steps:\n\nChanged my twitter app access to Elevated\n\nSetup user authentication settings in the twitter app:\nApp Permisions: Read and Write\nType of app: native\nCall Back URI: https://global.consent.azure-apim.net/redirect\n\nDeleted previous api connections of the logic app present in the resource group.\n\nSelected Bring your own application in the authentication mode of logic app and entered the twitter api key and secret in the consumer key and secret.\nAfter this i was able to login.\n\n\nJust to be on the safer side I enabled pop-ups, allowed all cookies, cleared browser history, cache, passwords and had logged in to twitter developer account in the same browser later.\n" ]
[ 0, 0 ]
[]
[]
[ "azure", "azure_logic_apps", "twitter" ]
stackoverflow_0069936308_azure_azure_logic_apps_twitter.txt
Q: How to find out the metric value of two images? I am looking for a library that can measure the degree of similarity between 2 pictures. If using a hash, the pictures must be exactly the same, meaning it only produces a TRUE or FALSE result. What I am looking for is something that can give a metric value of picture similarity. Any command-line tool or library for doing that? Any language is acceptable, preferably open source. A: The best approaches are SSIM and PSNR; they are used by, for example, x264 (I'm sure you've heard about it, but it's an open-source H.264 encoder that's used by guys like Google Video). Here's a CLI video comparison tool for Linux that implements both. Here's an (image/video processing) language that you may be interested in. Oh wait... ImageMagick does the job too and I'd go for it because it's an awesome tool. By the way this question has multiple duplicates (1, 2, 3, 4, 5, 6, 7). A: Sewar is a Python package for image quality assessment using different metrics. Check out this GitHub repo https://github.com/andrewekhalel/sewar Clone the repo and set the path using an environment variable if you use Windows, and then you can use it from the command line. Make sure that both images have the same dimensions
How to find out the metric value of two images?
I am looking for a library that can measure the degree of similarity between 2 pictures. If using a hash, the pictures must be exactly the same, meaning it only produces a TRUE or FALSE result. What I am looking for is something that can give a metric value of picture similarity. Any command-line tool or library for doing that? Any language is acceptable, preferably open source.
[ "The best approaches are SSIM and PSNR, they are used by, for example, x264 (I'm sure you've heard about it, but it's an open-source H.264 encoder that's used by guys like Google Video).\nHere's a CLI video coparison tool for linux that implements both.\nHere's a (image/video processing) language that you may be interested in.\nOh wait... ImageMagick does the job too and I'd go for it because it's an awesome tool.\nBy the way this question has multiple duplicates (1, 2, 3, 4, 5, 6, 7).\n", "Sewar is a python package for image quality assessment using different metrics.\ncheck out this git hub repo\nhttps://github.com/andrewekhalel/sewar\nclone the repo and set path using environment varible ifyou use windows\nand now you can use it in command line.\nmake sure that both the images are in same dimensions\n" ]
[ 3, 0 ]
[]
[]
[ "image" ]
stackoverflow_0004426468_image.txt
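To use the sewar package mentioned above from Python rather than the command line, the full-reference metrics live in sewar.full_ref. This is a minimal sketch: the file names are placeholders, PIL/numpy are only one way of loading the images as arrays, and exact return shapes may differ by sewar version.

    import numpy as np
    from PIL import Image
    from sewar.full_ref import psnr, ssim

    original = np.asarray(Image.open("original.png"))
    candidate = np.asarray(Image.open("candidate.png"))  # must share dimensions

    print("PSNR:", psnr(original, candidate))
    print("SSIM:", ssim(original, candidate))  # sewar returns a (ssim, cs) pair here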
Q: Update User Detail api returns DataIntegrityViolationException I'm creating an update API that updates the profile of the super admin, I mapped the member table to a DTO, on the member table password is set to not null and I did not include the password field on the dto because there's a provision for that be, when I tested the API on postman it returned on the console DataIntegrityViolationException SQL Error: 1048, SQLState: 23000 Column 'password' cannot be null Here is my code Dto @Getter @Setter public class UpdateProfileDto { @NotNull(message = "{member.firstName.notNull}") @JsonProperty("first_name") private String firstName; @NotNull(message = "{member.lastName.notNull}") @JsonProperty("last_name") private String lastName; @JsonProperty("nationality") private Long nationality; @JsonProperty("country_of_residence") private Long countryOfResidence; @JsonProperty("date_of_birth") @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) @JsonFormat(pattern = "dd-MM-yyyy") @Past(message = "{customer.dateOfBirth.past}") private Date dateOfBirth; @JsonProperty("current_job_title") private String currentJobTitle; @NotNull(message = "{member.emailAddress.notNull}") @JsonProperty("email_address") private String emailAddress; @JsonProperty("username") private String username; @NotNull(message = "{member.phoneNumber.notNull}") @PhoneNumber @JsonProperty("phone_number") private String phoneNumber; @Size(max = 300, message = "{member.city.size}") @JsonProperty("city") private String city; @Size(max = 300, message = "{member.state.size}") @JsonProperty("state") private String state; } ServiceImpl @Override @Transactional public Member updateProfile(UpdateProfileDto body) { Member superAdmin = repository.getOne(id); if (superAdmin == null) { throw new MemberNotFoundException(id); } Optional<Role> existingRole = roleJpaRepository.findByCode(RoleType.SUPER_ADMINISTRATOR.getValue()); if (existingRole.isEmpty()) { throw new RoleNotFoundException(RoleType.SUPER_ADMINISTRATOR.getValue()); } Member existing; existing = mapper.map(body, Member.class); existing.setPassword(superAdmin.getPassword()); existing.getRoles().add(existingRole.get()); existing.setNationality(countryRepository.getOne(body.getNationality())); existing.setCountryOfResidence(countryRepository.getOne(body.getCountryOfResidence())); return adminJpaRepository.save(existing); } Controller @RestController @RequestMapping( value = "super-admin", produces = { MediaType.APPLICATION_JSON_VALUE } ) public class SuperAdminController { private final SuperAdminService service; public SuperAdminController(SuperAdminService service) { this.service = service; } @PutMapping("/update") public Member updateProfile(@Valid @RequestBody UpdateProfileDto body){ Member superAdmin = service.updateProfile(body); return superAdmin; } } The password bug has been fixed(changes reflected in serviceImpl), but when I run the code it returned Duplicate entry '[email protected]' for key 'member.email_address_phone_number_uq' email, and the phone number is set as a unique constraint in the member table, how can I bypass this? A: You have few options, depending on your exact use case. Extract existing password, using unique property in UpdateProfileDto, email looks like it can do the job. Pseudocode: Member existing = repository.findByEmail; Member superAdmin = mapper.map(body, Member.class); superAdmin.setPassword(existing.getPassword()); Set a dummy value for password, to be updated later on. superAdmin.setPassword("dummy-password"); Make the column nullable in database.
Update User Detail api returns DataIntegrityViolationException
I'm creating an update API that updates the profile of the super admin, I mapped the member table to a DTO, on the member table password is set to not null and I did not include the password field on the dto because there's a provision for that be, when I tested the API on postman it returned on the console DataIntegrityViolationException SQL Error: 1048, SQLState: 23000 Column 'password' cannot be null Here is my code Dto @Getter @Setter public class UpdateProfileDto { @NotNull(message = "{member.firstName.notNull}") @JsonProperty("first_name") private String firstName; @NotNull(message = "{member.lastName.notNull}") @JsonProperty("last_name") private String lastName; @JsonProperty("nationality") private Long nationality; @JsonProperty("country_of_residence") private Long countryOfResidence; @JsonProperty("date_of_birth") @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) @JsonFormat(pattern = "dd-MM-yyyy") @Past(message = "{customer.dateOfBirth.past}") private Date dateOfBirth; @JsonProperty("current_job_title") private String currentJobTitle; @NotNull(message = "{member.emailAddress.notNull}") @JsonProperty("email_address") private String emailAddress; @JsonProperty("username") private String username; @NotNull(message = "{member.phoneNumber.notNull}") @PhoneNumber @JsonProperty("phone_number") private String phoneNumber; @Size(max = 300, message = "{member.city.size}") @JsonProperty("city") private String city; @Size(max = 300, message = "{member.state.size}") @JsonProperty("state") private String state; } ServiceImpl @Override @Transactional public Member updateProfile(UpdateProfileDto body) { Member superAdmin = repository.getOne(id); if (superAdmin == null) { throw new MemberNotFoundException(id); } Optional<Role> existingRole = roleJpaRepository.findByCode(RoleType.SUPER_ADMINISTRATOR.getValue()); if (existingRole.isEmpty()) { throw new RoleNotFoundException(RoleType.SUPER_ADMINISTRATOR.getValue()); } Member existing; existing = mapper.map(body, Member.class); existing.setPassword(superAdmin.getPassword()); existing.getRoles().add(existingRole.get()); existing.setNationality(countryRepository.getOne(body.getNationality())); existing.setCountryOfResidence(countryRepository.getOne(body.getCountryOfResidence())); return adminJpaRepository.save(existing); } Controller @RestController @RequestMapping( value = "super-admin", produces = { MediaType.APPLICATION_JSON_VALUE } ) public class SuperAdminController { private final SuperAdminService service; public SuperAdminController(SuperAdminService service) { this.service = service; } @PutMapping("/update") public Member updateProfile(@Valid @RequestBody UpdateProfileDto body){ Member superAdmin = service.updateProfile(body); return superAdmin; } } The password bug has been fixed(changes reflected in serviceImpl), but when I run the code it returned Duplicate entry '[email protected]' for key 'member.email_address_phone_number_uq' email, and the phone number is set as a unique constraint in the member table, how can I bypass this?
[ "You have few options, depending on your exact use case.\n\nExtract existing password, using unique property in UpdateProfileDto, email looks like it can do the job.\n\nPseudocode:\nMember existing = repository.findByEmail;\nMember superAdmin = mapper.map(body, Member.class);\nsuperAdmin.setPassword(existing.getPassword());\n\n\nSet a dummy value for password, to be updated later on.\n\nsuperAdmin.setPassword(\"dummy-password\");\n\n\nMake the column nullable in database.\n\n" ]
[ 1 ]
[]
[]
[ "api", "java", "mysql", "rest", "spring_boot" ]
stackoverflow_0074663992_api_java_mysql_rest_spring_boot.txt
Q: I want to generate an integer from a double I was trying to make a function that would generate integers from doubles. I want this function to round based on the decimal at the end of the number. For example, 1.75 would have a 75% chance of rounding up and a 25% chance of rounding down. Here is what I tried so far: public static int fairIntFromDouble(final double number) { Random random = new Random(); if (random.nextDouble() < number) { return (int) Math.floor(number); } else { return (int) Math.celi(number); } } I don't know why, but it seems to always round down A: I came up with a simple way of accomplishing what you are asking for. Here is the code; I will explain the logic for this section of code below. public static int fairIntFromDouble(final double number) { Random random = new Random(); final double decimal = number - Math.floor(number); if (random.nextDouble() < decimal) { return (int) Math.ceil(number); } return (int) Math.floor(number); } So here is what I am doing. First, I am figuring out what exactly the decimal on the number is. After I have that decimal, I am able to use an instance of Random to generate a random double between 0 and 1. Using this decimal I round up if the random number is less than the decimal and down if it is more than or equal to. The reason why your original function always rounds down is because you are generating a random number between 0 and 1 and then comparing it against the double you entered as input. This means that if you ever input a double into your function with a value more than 1 then you will always be rounding down.
I want to generate an integer from a double
I was trying to make a function that would generate integers from doubles. I want this function to round based on the decimal at the end of the number. For example, 1.75 would have a 75% chance of rounding up and a 25% chance of rounding down. Here is what I tried so far: public static int fairIntFromDouble(final double number) { Random random = new Random(); if (random.nextDouble() < number) { return (int) Math.floor(number); } else { return (int) Math.celi(number); } } I don't know why, but it seems to always round down
[ "I came up with a simple way of accomplishing what you are asking for. Here is the code, I will explain the logic for this section of code below.\n public static int fairIntFromDouble(final double number) {\n Random random = new Random();\n final double decimal = number - Math.floor(number);\n if (random.nextDouble() < decimal) {\n return (int) Math.ceil(number);\n }\n return (int) Math.floor(number);\n }\n\nSo here is what I am doing. First, I am figuring out what exactly the decimal on the number is. After I have that decimal, I am able to use an instance of Random to generate a random double between 0 and 1. Using this decimal I round up if the random number is less than the decimal and down if it is more than or equal to.\nThe reason why your original function always rounds down is because you are generating a random number between 1 and 0 and then comparing the double you entered as input. This means that if you ever input a double into your funtion with a value more than 1 then you will always be rounding down.\n" ]
[ 1 ]
[]
[]
[ "java" ]
stackoverflow_0074665107_java.txt
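The same stochastic-rounding idea in a compact Python sketch, for readers outside Java (the function name is illustrative): only the fractional part is compared against the random draw.

    import math
    import random

    def fair_int(x: float) -> int:
        base = math.floor(x)
        frac = x - base                           # e.g. 0.75 for 1.75
        return base + (random.random() < frac)    # bool counts as 0 or 1

    # sanity check: the long-run average approaches the input value
    print(sum(fair_int(1.75) for _ in range(100_000)) / 100_000)  # roughly 1.75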
Q: How to find a specific value of an HTML element (such as h1) with Node.js I'm trying to find the number of users on this website, using Node.js, but I'm not sure what process I can use to get it. The value is subclassed under multiple divs with different class names. Right now I will just be console.logging it, but I'm going to do more with the data later on. It was suggested that I use the package "puppeteer", but I'm not sure how this would fetch it or what to do with it to fetch it. Thank you in advance A: You can use Puppeteer and obtain the value via a query selector. The typical code looks like this: const pt = require('puppeteer') async function getText(){ //launch browser in headless mode const browser = await pt.launch() //browser new page const page = await browser.newPage() //launch URL await page.goto('https://feds.lol/') //identify element const f = await page.$("QUERY_SELECTOR") //obtain text const text = await (await f.getProperty('textContent')).jsonValue() console.log("Text is: " + text) } getText() However, I see this website is protected by Captcha
How to find a specific value of an HTML element (such as h1) with Node.js
I'm trying to find the number of users on this website, using Node.js, but I'm not sure what process I can use to get it. The value is subclassed under multiple divs with different class names. Right now I will just be console.logging it, but I'm going to do more with the data later on. It was suggested that I use the package "puppeteer", but I'm not sure how this would fetch it or what to do with it to fetch it. Thank you in advance
[ "You can use Puppeteer and obtain the value via a query string. The typical code looks like this:\n\nconst pt= require('puppeteer')\n\nasync function getText(){\n\n //launch browser in headless mode\n const browser = await pt.launch()\n\n //browser new page\n const page = await browser.newPage()\n\n //launch URL\n await page.goto('https://feds.lol/')\n\n //identify element\n const f = await page.$(\"QUERY_STRIG\")\n\n //obtain text\n const text = await (await f.getProperty('textContent')).jsonValue()\n console.log(\"Text is: \" + text)\n}\n\ngetText()\n\n\nHowever, I see this website is protected by Captcha\n" ]
[ 0 ]
[]
[]
[ "javascript", "node.js", "puppeteer" ]
stackoverflow_0074665015_javascript_node.js_puppeteer.txt
Q: Programmatically toggle Bootstrap 5 collapse component I'm using a Bootstrap 5 "collapse", using the data attributes approach. It works as expected. I can click the button to collapse/expand the collapsible items. The docs state I can toggle the state manually, like so: let element = document.querySelector('#my-collapse'); bootstrap.Collapse.getInstance(element).toggle(); However that fails, as getInstance returns null. Strangely, if I click the collapse button, then use that code, it works. How do I ensure the code works without first "priming" the collapse component? A: I think when using data attributes, the bootstrap component is only created on demand - i.e. when the collapse button is clicked for the first time. That would explain the behaviour I've noted above. So the solution is to use getOrCreateInstance instead: let element = document.querySelector('#my-collapse'); bootstrap.Collapse.getOrCreateInstance(element).toggle();
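A small usage sketch building on that fix; #my-collapse follows the question's markup, and the { toggle: false } option prevents an immediate toggle when the instance is created:

const el = document.querySelector('#my-collapse');
const collapse = bootstrap.Collapse.getOrCreateInstance(el, { toggle: false });

collapse.show(); // expand programmatically
collapse.hide(); // collapse programmatically

// Collapse also emits lifecycle events you can listen for:
el.addEventListener('shown.bs.collapse', () => console.log('expanded'));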
Programmatically toggle Bootstrap 5 collapse component
I'm using a Bootstrap 5 "collapse", using the data attributes approach. It works as expected. I can click the button to collapse/expand the collapsible items. The docs state I can toggle the state manually, like so: let element = document.querySelector('#my-collapse'); bootstrap.Collapse.getInstance(element).toggle(); However that fails, as getInstance returns null. Strangely, if I click the collapse button, then use that code, it works. How do I ensure the code works without first "priming" the collapse component?
[ "I think when using data attributes, the bootstrap component is only created on demand - i.e. when the collapse button is clicked for the first time. That would explain the behaviour I've noted above.\nSo the solution is to use getOrCreateInstance instead:\nlet element = document.querySelector('#my-collapse');\nbootstrap.Collapse.getOrCreateInstance(element).toggle();\n\n" ]
[ 0 ]
[]
[]
[ "bootstrap_5", "javascript", "twitter_bootstrap" ]
stackoverflow_0074664740_bootstrap_5_javascript_twitter_bootstrap.txt
Q: open file for random write without truncating? In python, there are a few flags you can supply when opening a file for operation. I am a bit baffled at finding a combination that allow me to do random write without truncating. The behavior I am looking for is equivalent to C: create it if it doesn't exist, otherwise, open for write (not truncating) open(filename, O_WRONLY|O_CREAT) Python's document is confusing (to me): "w" will truncate the file first, "+" is supposed to mean updating, but "w+" will truncate it anyway. Is there anyway to achieve this without resorting to the low-level os.open() interface? Note: the "a" or "a+" doesn't work either (please correct if I am doing something wrong here) cat test.txt eee with open("test.txt", "a+") as f: f.seek(0) f.write("a") cat test.txt eeea Is that so the append mode insist on writing to the end? A: You can do it with os.open: import os f = os.fdopen(os.open(filename, os.O_RDWR | os.O_CREAT), 'rb+') Now you can read, write in the middle of the file, seek, and so on. And it creates the file. Tested on Python 2 and 3. A: You should try reading the file then open writing mode, as seen here: with open("file.txt") as reading: r = reading.read() with open("file.txt", "w") as writing: writing.write(r) A: According to the discussion Difference between modes a, a+, w, w+, and r+ in built-in open function, the open with a mode will always write to the end of file irrespective of any intervening fseek(3) or similar. If you only want to use python built-in function. I guess the solution is to first check if the file exist, and then open with r+ mode. For Example: import os filepath = "test.txt" if not os.path.isfile(filepath): f = open(filepath, "x") # open for exclusive creation, failing if the file already exists f.close() with open(filepath, "r+") as f: # random read and write f.seek(1) f.write("a")
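A short demonstration of the os.open answer above, assuming test.txt already contains "eee": the O_RDWR | O_CREAT flags open (or create) the file without truncating, so an in-place write touches only the targeted byte.

import os

fd = os.open("test.txt", os.O_RDWR | os.O_CREAT)
with os.fdopen(fd, "rb+") as f:
    f.seek(1)      # jump into the middle of the file
    f.write(b"a")  # overwrite one byte without truncating

# an existing "eee" becomes "eae"; nothing after the write is lost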
open file for random write without truncating?
In python, there are a few flags you can supply when opening a file for operation. I am a bit baffled at finding a combination that allows me to do random writes without truncating. The behavior I am looking for is equivalent to C: create it if it doesn't exist, otherwise, open for write (not truncating) open(filename, O_WRONLY|O_CREAT) Python's document is confusing (to me): "w" will truncate the file first, "+" is supposed to mean updating, but "w+" will truncate it anyway. Is there any way to achieve this without resorting to the low-level os.open() interface? Note: the "a" or "a+" doesn't work either (please correct me if I am doing something wrong here) cat test.txt eee with open("test.txt", "a+") as f: f.seek(0) f.write("a") cat test.txt eeea Is it that the append mode insists on writing to the end?
[ "You can do it with os.open:\nimport os\nf = os.fdopen(os.open(filename, os.O_RDWR | os.O_CREAT), 'rb+')\n\nNow you can read, write in the middle of the file, seek, and so on. And it creates the file. Tested on Python 2 and 3.\n", "You should try reading the file then open writing mode, as seen here:\nwith open(\"file.txt\") as reading:\n r = reading.read()\nwith open(\"file.txt\", \"w\") as writing:\n writing.write(r)\n\n", "According to the discussion Difference between modes a, a+, w, w+, and r+ in built-in open function, the open with a mode will always write to the end of file irrespective of any intervening fseek(3) or similar.\nIf you only want to use python built-in function. I guess the solution is to first check if the file exist, and then open with r+ mode.\nFor Example:\nimport os\nfilepath = \"test.txt\"\nif not os.path.isfile(filepath):\n f = open(filepath, \"x\") # open for exclusive creation, failing if the file already exists\n f.close()\nwith open(filepath, \"r+\") as f: # random read and write\n f.seek(1)\n f.write(\"a\")\n\n" ]
[ 10, 0, 0 ]
[ "You need to use \"a\" to append, it will create the file if it does not exist or append to it if it does.\nYou cannot do what you want with append as the pointer automatically moves to the end of the file when you call the write method. \nYou could check if the file exists then use fileinput.input with inplace=True inserting a line on whichever line number you want.\nimport fileinput\nimport os\n\n\ndef random_write(f, rnd_n, line):\n if not os.path.isfile(f):\n with open(f, \"w\") as f:\n f.write(line)\n else:\n for ind, line in enumerate(fileinput.input(f, inplace=True)):\n if ind == rnd_n:\n print(\"{}\\n\".format(line) + line, end=\"\")\n else:\n print(line, end=\"\")\n\nhttp://linux.die.net/man/3/fopen\n\na+\n Open for reading and appending (writing at end of file). The file is created if it does not exist. The initial file position for reading is at the beginning of the file, but output is always appended to the end of the file.\n\nfileinput makes a f.bak copy of the file you pass in and it is deleted when the output is closed. If you specify a backup extension backup=.\"foo\" the backup file will be kept. \n" ]
[ -2 ]
[ "python" ]
stackoverflow_0028918302_python.txt
Q: Adding Text/watermark to 'Download Plot' button image in Plotly Is there any way to add a text/watermark to the image which is downloaded by clicking on the "Download Plot" button ("toImageButtonOptions") in Plotly figures? Reference code: config = { 'toImageButtonOptions': { 'format': 'png', 'filename': 'download_image', } } A: You can do it by using templates: import plotly.graph_objects as go draft_template = go.layout.Template() draft_template.layout.annotations = [ dict( name="draft watermark", text="DRAFT", textangle=-30, opacity=0.1, font=dict(color="black", size=100), xref="paper", yref="paper", x=0.5, y=0.5, showarrow=False, ) ] fig=go.Figure() fig.update_layout(template=draft_template) fig.show() Output:
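Because the template bakes the annotation into the figure itself, the watermark also appears in the PNG saved by the modebar button; a short sketch wiring it to the question's download config:

config = {
    "toImageButtonOptions": {
        "format": "png",
        "filename": "download_image",
    }
}
fig.show(config=config)  # the downloaded image keeps the DRAFT annotation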
Adding Text/watermark to 'Download Plot' button image in Plotly
Is there any way to add a text/watermark to the image which is downloaded by clicking on the "Download Plot" button ("toImageButtonOptions") in Plotly figures? Reference code: config = { 'toImageButtonOptions': { 'format': 'png', 'filename': 'download_image', } }
[ "You can do it by using templates:\nimport plotly.graph_objects as go\n\ndraft_template = go.layout.Template()\ndraft_template.layout.annotations = [\n dict(\n name=\"draft watermark\",\n text=\"DRAFT\",\n textangle=-30,\n opacity=0.1,\n font=dict(color=\"black\", size=100),\n xref=\"paper\",\n yref=\"paper\",\n x=0.5,\n y=0.5,\n showarrow=False,\n )\n]\n\nfig=go.Figure()\nfig.update_layout(template=draft_template)\nfig.show()\n\nOutput:\n\n" ]
[ 0 ]
[]
[]
[ "plotly", "plotly_dash", "plotly_python", "python" ]
stackoverflow_0074662095_plotly_plotly_dash_plotly_python_python.txt
Q: Access SQL query for sorting (Relationships diagram omitted) Find employees who took some kind of management class (i.e., a class whose name ends with management). List employees' last and first name, class name, and the class date. Sort the query output by last name in ascending order. I did the following, but the query is not working: it cannot find the ClassName that ends with management. SELECT E.Last, E.First, C.ClassName, C.Date FROM EMPLOYEES AS E, EMPLOYEE_TRAINING AS ET, CLASSES AS C WHERE E.EmployeeID = ET.EmployeeID AND C.ClassID = ET.ClassID AND C.ClassName LIKE "Management*" ORDER BY E.Last ASC; A: Try using the right wildcard character for the LIKE clause, which is %: SELECT E.Last, E.First, C.ClassName, C.Date FROM EMPLOYEES AS E INNER JOIN EMPLOYEE_TRAINING AS ET ON E.EmployeeID = ET.EmployeeID INNER JOIN CLASSES AS C ON C.ClassID = ET.ClassID WHERE C.ClassName LIKE "%Management" ORDER BY E.Last ASC; A: Replace "Management*" with "*Management" or "%Management", because you want to find a ClassName that ends with Management, not one that starts with it
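A note reconciling the two answers: which wildcard works depends on how the query runs. The Access query designer defaults to ANSI-89 mode, where * is the wildcard; % applies in ANSI-92 mode or when the query runs through ADO/OLEDB:

-- Access UI, default ANSI-89 mode:
WHERE C.ClassName LIKE "*Management"

-- ADO/OLEDB, or a database set to ANSI-92 mode:
WHERE C.ClassName LIKE "%Management"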
Access SQL query for sorting
(Relationships diagram omitted) Find employees who took some kind of management class (i.e., a class whose name ends with management). List employees' last and first name, class name, and the class date. Sort the query output by last name in ascending order. I did the following, but the query is not working: it cannot find the ClassName that ends with management. SELECT E.Last, E.First, C.ClassName, C.Date FROM EMPLOYEES AS E, EMPLOYEE_TRAINING AS ET, CLASSES AS C WHERE E.EmployeeID = ET.EmployeeID AND C.ClassID = ET.ClassID AND C.ClassName LIKE "Management*" ORDER BY E.Last ASC;
[ "Try using the right wildcard character for the LIKE clause, which is %:\nSELECT E.Last, E.First, C.ClassName, C.Date\nFROM EMPLOYEES AS E\nINNER JOIN EMPLOYEE_TRAINING AS ET ON E.EmployeeID = ET.EmployeeID\nINNER JOIN CLASSES AS C ON C.ClassID = ET.ClassID\nWHERE C.ClassName LIKE \"%Management\"\nORDER BY E.Last ASC;\n\n", "Replace \"Management*\" with \"*Management\" or \"%Management\" because you want to find ClassName that ends with Management, not start with Management\n" ]
[ 0, 0 ]
[]
[]
[ "ms_access" ]
stackoverflow_0074665030_ms_access.txt
Q: Index out of range on BS4 selecting elements I need to get the ID of the li element but I don't want the other elements' IDs. I have attached my code below, but it's throwing an Index out of range error somewhere in it HTML: <ul class="product-attributes list-inline product-attributes-two-sizes"> <li class="ease " id="12345"></li> <li class="dsadsad" id="000"></li> <li class="dadsda" id="000"></li> </ul> My code: for size in soup.find("ul", {"class": "product-attributes list-inline product-attributes-two-sizes"}).select('ease '): print(size['data-productsize-combid']) print(size['data-productsize-name']) combidlist.append(size["data-productsize-combid"]) sizelist.append(size['data-productsize-name']) A: Here is a one-liner way of retrieving the information you're after: from bs4 import BeautifulSoup as bs html = ''' <ul class="product-attributes list-inline product-attributes-two-sizes"> <li class="ease " id="12345"></li> <li class="dsadsad" id="000"></li> <li class="dadsda" id="000"></li> </ul> ''' soup = bs(html, 'html.parser') item = soup.select_one('ul[class="product-attributes list-inline product-attributes-two-sizes"] li[class^="ease"]').get('id') print(item) Result in terminal: 12345 BeautifulSoup documentation can be found here
Index out of range on BS4 selecting elements
I need to get the ID of the li element but I don't want the other elements' IDs. I have attached my code below, but it's throwing an Index out of range error somewhere in it HTML: <ul class="product-attributes list-inline product-attributes-two-sizes"> <li class="ease " id="12345"></li> <li class="dsadsad" id="000"></li> <li class="dadsda" id="000"></li> </ul> My code: for size in soup.find("ul", {"class": "product-attributes list-inline product-attributes-two-sizes"}).select('ease '): print(size['data-productsize-combid']) print(size['data-productsize-name']) combidlist.append(size["data-productsize-combid"]) sizelist.append(size['data-productsize-name'])
[ "Here is a one-liner way of retrieving the information you're after:\nfrom bs4 import BeautifulSoup as bs\n\nhtml = '''\n<ul class=\"product-attributes list-inline product-attributes-two-sizes\">\n <li class=\"ease \" id=\"12345\"></li>\n <li class=\"dsadsad\" id=\"000\"></li>\n <li class=\"dadsda\" id=\"000\"></li>\n</ul>\n'''\nsoup = bs(html, 'html.parser')\nitem = soup.select_one('ul[class=\"product-attributes list-inline product-attributes-two-sizes\"] li[class^=\"ease\"]').get('id')\nprint(item)\n\nResult in terminal:\n12345\n\nBeautifulSoup documentation can be found here\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0074661220_beautifulsoup_python.txt
Q: Firebase : Uncaught Error: Component analytics has not been registered yet I'm working on getting firebase working for a webapp project for class. It has not been easy. I keep getting errors, and every change I make creates a different error. Firebase is initialized in the project. Here is my most recent error: >provider.ts:239 Uncaught Error: Component analytics has not been registered yet >>at Provider.initialize (provider.ts:239), >>at initializeAnalytics (api.ts:108), >>at firebase-config.js:23, >>>initialize @ provider.ts:239, >>>initializeAnalytics @ api.ts:108, >>>(anonymous) @ firebase-config.js:23 This error seems to be stemming from my firebase-config file, but I'm genuinely lost: // Import the functions you need from the SDKs you need //import * as firebase from 'firebase/app'; import { initializeApp } from '../node_modules/firebase/firebase-app.js'; import { initializeAnalytics , getAnalytics } from '../node_modules/firebase/firebase-analytics.js'; const firebaseConfig = { [config keys] }; // Initialize Firebase const fb_app = initializeApp(firebaseConfig); // returns an app initializeAnalytics(fb_app); // added to circumvent error, but it still appears. const analytics = getAnalytics(fb_app); console.log(analytics); export {fb_app}; Any help would be appreciated. Thanks A: If you are using v9 SDK, your import statements must be import { initializeApp } from 'firebase/app'; import { initializeAnalytics , getAnalytics } from 'firebase/analytics'; Not the files directly. There is a relevant discussion here: https://issueexplorer.com/issue/firebase/firebase-js-sdk/5597 A: I Also Faced Same issue in React-native-web With webpack project. It was Just Conflict Issue in Webpack CompileLibs Dependencies. If You have "firebase" in Your webpack.config.js file remove from there.
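Putting the accepted fix together, a minimal corrected firebase-config.js sketch: the bare 'firebase/...' specifiers are resolved by the bundler, so node_modules paths are not needed, and getAnalytics registers the component on first call. The config keys stay elided as in the question.

import { initializeApp } from 'firebase/app';
import { getAnalytics } from 'firebase/analytics';

const firebaseConfig = {
  // [config keys]
};

// Initialize the app, then the analytics component bound to it
const fb_app = initializeApp(firebaseConfig);
const analytics = getAnalytics(fb_app);

export { fb_app, analytics };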
Firebase : Uncaught Error: Component analytics has not been registered yet
I'm working on getting firebase working for a webapp project for class. It has not been easy. I keep getting errors, and every change I make creates a different error. Firebase is initialized in the project. Here is my most recent error: >provider.ts:239 Uncaught Error: Component analytics has not been registered yet >>at Provider.initialize (provider.ts:239), >>at initializeAnalytics (api.ts:108), >>at firebase-config.js:23, >>>initialize @ provider.ts:239, >>>initializeAnalytics @ api.ts:108, >>>(anonymous) @ firebase-config.js:23 This error seems to be stemming from my firebase-config file, but I'm genuinely lost: // Import the functions you need from the SDKs you need //import * as firebase from 'firebase/app'; import { initializeApp } from '../node_modules/firebase/firebase-app.js'; import { initializeAnalytics , getAnalytics } from '../node_modules/firebase/firebase-analytics.js'; const firebaseConfig = { [config keys] }; // Initialize Firebase const fb_app = initializeApp(firebaseConfig); // returns an app initializeAnalytics(fb_app); // added to circumvent error, but it still appears. const analytics = getAnalytics(fb_app); console.log(analytics); export {fb_app}; Any help would be appreciated. Thanks
[ "If you are using v9 SDK, your import statements must be\nimport { initializeApp } from 'firebase/app';\nimport { initializeAnalytics , getAnalytics } from 'firebase/analytics';\n\nNot the files directly.\nThere is a relevant discussion here: https://issueexplorer.com/issue/firebase/firebase-js-sdk/5597\n", "I Also Faced Same issue in React-native-web With webpack project. It was Just Conflict Issue in Webpack CompileLibs Dependencies. If You have \"firebase\" in Your webpack.config.js file remove from there.\n" ]
[ 0, 0 ]
[]
[]
[ "firebase", "firebase_analytics", "google_analytics_firebase", "javascript" ]
stackoverflow_0070320675_firebase_firebase_analytics_google_analytics_firebase_javascript.txt
Q: SignalR return value from client method Hello I'm developing a Server-Client application that communicate with SignalR. What I have to implement is a mechanism that will allow my server to call method on client and get a result of that call. Both applications are developed with .Net Core. My concept is, Server invokes a method on Client providing Id of that invocation, the client executes the method and in response calls the method on the Server with method result and provided Id so the Server can match the Invocation with the result. Usage is looking like this: var invocationResult = await Clients .Client(connectionId) .GetName(id) .AwaitInvocationResult<string>(ClientInvocationHelper._invocationResults, id); AwaitInvocationResult - is a extension method to Task public static Task<TResultType> AwaitInvocationResult<TResultType>(this Task invoke, ConcurrentDictionary<string, object> lookupDirectory, InvocationId id) { return Task.Run(() => { while (!ClientInvocationHelper._invocationResults.ContainsKey(id.Value) || ClientInvocationHelper._invocationResults[id.Value] == null) { Thread.Sleep(500); } try { object data; var stingifyData = lookupDirectory[id.Value].ToString(); //First we should check if invocation response contains exception if (IsClientInvocationException(stingifyData, out ClientInvocationException exception)) { throw exception; } if (typeof(TResultType) == typeof(string)) { data = lookupDirectory[id.Value].ToString(); } else { data = JsonConvert.DeserializeObject<TResultType>(stingifyData); } var result = (TResultType)data; return Task.FromResult(result); } catch (Exception e) { Console.WriteLine(e); throw; } }); } As you can see basically I have a dictionary where key is invocation Id and value is a result of that invocation that the client can report. In a while loop I'm checking if the result is already available for server to consume, if it is, the result is converted to specific type. This mechanism is working pretty well but I'm observing weird behaviour that I don't understand. If I call this method with await modifier the method in Hub that is responsible to receive a result from client is never invoked. ///This method gets called by the client to return a value of specific invocation public Task OnInvocationResult(InvocationId invocationId, object data) { ClientInvocationHelper._invocationResults[invocationId.Value] = data; return Task.CompletedTask; } In result the while loop of AwaitInvocationResult never ends and the Hub is blocked. Maby someone can explain this behaviour to me so I can change my approach or improve my code. A: As it was mentioned in the answer by Brennan, before ASP.NET Core 5.0 SignalR connection was only able to handle one not streaming invocation of hub method at time. And since your invocation was blocked, server wasn't able to handle next invocation. But in this case you probably can try to handle client responses in separate hub like below. public class InvocationResultHandlerHub : Hub { public Task HandleResult(int invocationId, string result) { InvoctionHelper.SetResult(invocationId, result); return Task.CompletedTask; } } While hub method invocation is blocked, no other hub methods can be invoked by caller connection. But since client have separate connection for each hub, he will be able to invoke methods on other hubs. Probably not the best way, because client won't be able to reach first hub until response will be posted. Other way you can try is streaming invocations. 
Currently SignalR doesn't await them to handle next message, so server will handle invocations and other messages between streaming calls. You can check this behavior here in Invoke method, invocation isn't awaited when it is stream https://github.com/dotnet/aspnetcore/blob/c8994712d8c3c982111e4f1a09061998a81d68aa/src/SignalR/server/Core/src/Internal/DefaultHubDispatcher.cs#L371 So you can try to add some dummy streaming parameter that you will not use: public async Task TriggerRequestWithResult(string resultToSend, IAsyncEnumerable<int> stream) { var invocationId = InvoctionHelper.ResolveInvocationId(); await Clients.Caller.SendAsync("returnProvidedString", invocationId, resultToSend); var result = await InvoctionHelper.ActiveWaitForInvocationResult<string>(invocationId); Debug.WriteLine(result); } and on the client side you will also need to create and populate this parameter: var stringResult = document.getElementById("syncCallString").value; var dummySubject = new signalR.Subject(); resultsConnection.invoke("TriggerRequestWithResult", stringResult, dummySubject); dummySubject.complete(); More details: https://learn.microsoft.com/en-us/aspnet/core/signalr/streaming?view=aspnetcore-5.0 If you can use ASP.NET Core 5, you can try to use new MaximumParallelInvocationsPerClient hub option. It will allow several invocations to execute in parallel for one connection. But if your client will call too much hub methods without providing result, connection will hang. More details: https://learn.microsoft.com/en-us/aspnet/core/signalr/configuration?view=aspnetcore-5.0&tabs=dotnet Actually, since returning values from client invocations isn't implemented by SignalR, maybe you can try to look into streams to return values into hubs? A: By default a client can only have one hub method running at a time on the server. This means that when you wait for a result in the first hub method, the second hub method will never run since the first hub method is blocking the processing loop. It would be better if the OnInvocationResult method ran the logic in your AwaitInvocationResult extension and the first hub method just registers the id and calls the client. A: This is supported in .NET 7 now https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-7-preview-4/#client-results-in-signalr
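For the .NET 7 feature referenced in the last answer, a hedged sketch of the built-in client-results API (hub and method names here are illustrative): the server awaits a value returned by a single client, which replaces the hand-rolled invocation-id dictionary entirely.

public class UserHub : Hub
{
    public async Task<string> AskClientForName(string connectionId)
    {
        // .NET 7+: server-to-client call that awaits the client's return value
        var name = await Clients.Client(connectionId)
            .InvokeAsync<string>("GetName", CancellationToken.None);
        return name;
    }
}

On the client side (JavaScript client 7.x or later), the handler simply returns the value, e.g. connection.on("GetName", () => "some name").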
SignalR return value from client method
Hello I'm developing a Server-Client application that communicate with SignalR. What I have to implement is a mechanism that will allow my server to call method on client and get a result of that call. Both applications are developed with .Net Core. My concept is, Server invokes a method on Client providing Id of that invocation, the client executes the method and in response calls the method on the Server with method result and provided Id so the Server can match the Invocation with the result. Usage is looking like this: var invocationResult = await Clients .Client(connectionId) .GetName(id) .AwaitInvocationResult<string>(ClientInvocationHelper._invocationResults, id); AwaitInvocationResult - is a extension method to Task public static Task<TResultType> AwaitInvocationResult<TResultType>(this Task invoke, ConcurrentDictionary<string, object> lookupDirectory, InvocationId id) { return Task.Run(() => { while (!ClientInvocationHelper._invocationResults.ContainsKey(id.Value) || ClientInvocationHelper._invocationResults[id.Value] == null) { Thread.Sleep(500); } try { object data; var stingifyData = lookupDirectory[id.Value].ToString(); //First we should check if invocation response contains exception if (IsClientInvocationException(stingifyData, out ClientInvocationException exception)) { throw exception; } if (typeof(TResultType) == typeof(string)) { data = lookupDirectory[id.Value].ToString(); } else { data = JsonConvert.DeserializeObject<TResultType>(stingifyData); } var result = (TResultType)data; return Task.FromResult(result); } catch (Exception e) { Console.WriteLine(e); throw; } }); } As you can see basically I have a dictionary where key is invocation Id and value is a result of that invocation that the client can report. In a while loop I'm checking if the result is already available for server to consume, if it is, the result is converted to specific type. This mechanism is working pretty well but I'm observing weird behaviour that I don't understand. If I call this method with await modifier the method in Hub that is responsible to receive a result from client is never invoked. ///This method gets called by the client to return a value of specific invocation public Task OnInvocationResult(InvocationId invocationId, object data) { ClientInvocationHelper._invocationResults[invocationId.Value] = data; return Task.CompletedTask; } In result the while loop of AwaitInvocationResult never ends and the Hub is blocked. Maby someone can explain this behaviour to me so I can change my approach or improve my code.
[ "As it was mentioned in the answer by Brennan, before ASP.NET Core 5.0 SignalR connection was only able to handle one not streaming invocation of hub method at time. And since your invocation was blocked, server wasn't able to handle next invocation.\nBut in this case you probably can try to handle client responses in separate hub like below.\npublic class InvocationResultHandlerHub : Hub\n{\n\n public Task HandleResult(int invocationId, string result)\n {\n InvoctionHelper.SetResult(invocationId, result);\n return Task.CompletedTask;\n }\n}\n\nWhile hub method invocation is blocked, no other hub methods can be invoked by caller connection. But since client have separate connection for each hub, he will be able to invoke methods on other hubs. Probably not the best way, because client won't be able to reach first hub until response will be posted.\nOther way you can try is streaming invocations. Currently SignalR doesn't await them to handle next message, so server will handle invocations and other messages between streaming calls.\nYou can check this behavior here in Invoke method, invocation isn't awaited when it is stream\nhttps://github.com/dotnet/aspnetcore/blob/c8994712d8c3c982111e4f1a09061998a81d68aa/src/SignalR/server/Core/src/Internal/DefaultHubDispatcher.cs#L371\nSo you can try to add some dummy streaming parameter that you will not use:\n public async Task TriggerRequestWithResult(string resultToSend, IAsyncEnumerable<int> stream)\n {\n var invocationId = InvoctionHelper.ResolveInvocationId();\n await Clients.Caller.SendAsync(\"returnProvidedString\", invocationId, resultToSend);\n var result = await InvoctionHelper.ActiveWaitForInvocationResult<string>(invocationId);\n Debug.WriteLine(result);\n }\n\nand on the client side you will also need to create and populate this parameter:\nvar stringResult = document.getElementById(\"syncCallString\").value;\nvar dummySubject = new signalR.Subject();\nresultsConnection.invoke(\"TriggerRequestWithResult\", stringResult, dummySubject);\ndummySubject.complete();\n\nMore details: https://learn.microsoft.com/en-us/aspnet/core/signalr/streaming?view=aspnetcore-5.0\nIf you can use ASP.NET Core 5, you can try to use new MaximumParallelInvocationsPerClient hub option. It will allow several invocations to execute in parallel for one connection. But if your client will call too much hub methods without providing result, connection will hang.\nMore details: https://learn.microsoft.com/en-us/aspnet/core/signalr/configuration?view=aspnetcore-5.0&tabs=dotnet\nActually, since returning values from client invocations isn't implemented by SignalR, maybe you can try to look into streams to return values into hubs?\n", "By default a client can only have one hub method running at a time on the server. This means that when you wait for a result in the first hub method, the second hub method will never run since the first hub method is blocking the processing loop.\nIt would be better if the OnInvocationResult method ran the logic in your AwaitInvocationResult extension and the first hub method just registers the id and calls the client.\n", "This is supported in .NET 7 now https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-7-preview-4/#client-results-in-signalr\n" ]
[ 1, 0, 0 ]
[]
[]
[ "signalr", "signalr_hub" ]
stackoverflow_0066659916_signalr_signalr_hub.txt
Q: Reading part of lines from a txt I'm trying to read a txt file with information about time, temperature and humidity; this is the shape 07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C; 07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C; 08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C; 08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C; I would like to extract, for each line, the 4 values and plot them in a graph. Using open() and fileObject.read() I can print the txt in the VSC terminal, but I don't know how to: read the time and save it in a proper way (it's split by ":") read the values, for example I could read the first 5 characters after the "Humidity" word, the first 5 after "Temperature" and so on. For each line store them in a proper array and then plot the 3 series as a function of time. I'm using numpy as a library. A: Assuming you can tolerate reading your data into a Python string, we can use re.findall here: # -*- coding: utf-8 -*- import re inp = """07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C; 07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C; 08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C; 08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C;""" vals = re.findall(r'^(\d{2}:\d{2}:\d{2}(?:\.\d+)?) -> Humidity:(\d+(?:\.\d+)?)%;Temperature:(\d+(?:\.\d+)?)°C;Heat index:(\d+(?:\.\d+)?)°C;', inp, flags=re.M) print(vals) This prints: [('07:54:03.383', '38.00', '20.50', '19.60'), ('07:59:03.415', '37.00', '20.90', '20.01'), ('08:04:03.435', '37.00', '20.90', '20.01'), ('08:09:03.484', '37.00', '20.80', '19.90')]
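To get from the vals list above to the plot the question asks for, a sketch using numpy and matplotlib (the column order follows the regex groups):

import numpy as np
import matplotlib.pyplot as plt

times = [v[0] for v in vals]
humidity = np.array([v[1] for v in vals], dtype=float)
temperature = np.array([v[2] for v in vals], dtype=float)
heat_index = np.array([v[3] for v in vals], dtype=float)

plt.plot(times, humidity, label="Humidity (%)")
plt.plot(times, temperature, label="Temperature (°C)")
plt.plot(times, heat_index, label="Heat index (°C)")
plt.legend()
plt.show()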
Reading part of lines from a txt
I'm trying to read a txt file with information about time, temperature and humidity; this is the shape 07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C; 07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C; 08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C; 08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C; I would like to extract, for each line, the 4 values and plot them in a graph. Using open() and fileObject.read() I can print the txt in the VSC terminal, but I don't know how to: read the time and save it in a proper way (it's split by ":") read the values, for example I could read the first 5 characters after the "Humidity" word, the first 5 after "Temperature" and so on. For each line store them in a proper array and then plot the 3 series as a function of time. I'm using numpy as a library.
[ "Assuming you can tolerate reading your data into a Python string, we can use re.findall here:\n# -*- coding: utf-8 -*-\nimport re\n\ninp = \"\"\"07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C;\n07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;\n08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;\n08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C;\"\"\"\n\nvals = re.findall(r'^(\\d{2}:\\d{2}:\\d{2}(?:\\.\\d+)?) -> Humidity:(\\d+(?:\\.\\d+)?)%;Temperature:(\\d+(?:\\.\\d+)?)°C;Heat index:(\\d+(?:\\.\\d+)?)°C;', inp, flags=re.M)\nprint(vals)\n\nThis prints:\n[('07:54:03.383', '38.00', '20.50', '19.60'),\n ('07:59:03.415', '37.00', '20.90', '20.01'),\n ('08:04:03.435', '37.00', '20.90', '20.01'),\n ('08:09:03.484', '37.00', '20.80', '19.90')]\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074665084_python.txt
Q: Undefined reference to class from namespace in C++ I'm new to C++ and trying to do a small quant project with paper trading. I have a header file alpaca/client.h as follows: #pragma once #include <iostream> #include <../utils/httplib.h> #include <config.h> using namespace std; namespace alpaca { class Client { private: alpaca::Config* config; public: Client(); string connect(); }; } The implementation in alpaca/client.cpp is #include <iostream> #include <string> #include <client.h> #include <httplib.h> using namespace std; namespace alpaca { Client::Client() { config = &alpaca::Config(); }; string Client::connect() { httplib::Client client(config->get_url(MARKET)); auto res = client.Get("/v2/account"); if (res) { return res->body; } else { return "Error in Client::get_account(): " + to_string(res->status); } }; } And my main.cpp is: #include <iostream> #include <string> #include <client.h> using namespace std; int main() { alpaca::Client client = alpaca::Client(); client.connect(); return 0; } However, I see the following error when I try to compile with g++: C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/12.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\shubh\AppData\Local\Temp\cc765kwL.o:main.cpp:(.text+0x1ca): undefined reference to 'alpaca::Client::Client()' Could anyone help with what exactly I'm missing? I'm not too sure. The g++ command I use is g++ -I./src/alpaca src/main.cpp A: It looks like you forgot to compile the client.cpp file. The error message is saying that the linker cannot find a definition for the Client class constructor. Try compiling both main.cpp and client.cpp with the g++ command, like this: g++ -I./src/alpaca src/main.cpp src/alpaca/client.cpp
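The same fix split into separate compile and link steps, which scales better as the project grows (the object file and output names are arbitrary):

g++ -I./src/alpaca -c src/main.cpp -o main.o
g++ -I./src/alpaca -c src/alpaca/client.cpp -o client.o
g++ main.o client.o -o app

The single command from the answer is equivalent; the key point is that every .cpp file defining symbols you use must reach the linker.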
Undefined reference to class from namespace in C++
I'm new to C++ and trying to do a small quant project with paper trading. I have a header file alpaca/client.h as follows: #pragma once #include <iostream> #include <../utils/httplib.h> #include <config.h> using namespace std; namespace alpaca { class Client { private: alpaca::Config* config; public: Client(); string connect(); }; } The implementation in alpaca/client.cpp is #include <iostream> #include <string> #include <client.h> #include <httplib.h> using namespace std; namespace alpaca { Client::Client() { config = &alpaca::Config(); }; string Client::connect() { httplib::Client client(config->get_url(MARKET)); auto res = client.Get("/v2/account"); if (res) { return res->body; } else { return "Error in Client::get_account(): " + to_string(res->status); } }; } And my main.cpp is: #include <iostream> #include <string> #include <client.h> using namespace std; int main() { alpaca::Client client = alpaca::Client(); client.connect(); return 0; } However, I see the following error when I try to compile with g++: C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/12.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\shubh\AppData\Local\Temp\cc765kwL.o:main.cpp:(.text+0x1ca): undefined reference to 'alpaca::Client::Client()' Could anyone help with what exactly I'm missing? I'm not too sure. The g++ command I use is g++ -I./src/alpaca src/main.cpp
[ "It looks like you forgot to compile the client.cpp file. The error message is saying that the linker cannot find a definition for the Client class constructor.\nTry compiling both main.cpp and client.cpp with the g++ command, like this:\ng++ -I./src/alpaca src/main.cpp src/alpaca/client.cpp\n\n" ]
[ 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074665119_c++.txt
Q: Make vscode typing same as terminal Well, when I type in my terminal, the cursor looks like this: But in VS Code it looks like this: How do I change my VS Code cursor to look like my terminal's? A: Solved by going to Settings, typing "cursor style", then changing it to block
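The equivalent settings.json entries; the first key changes the editor cursor and the second the integrated terminal's:

{
    "editor.cursorStyle": "block",
    "terminal.integrated.cursorStyle": "block"
}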
Make vscode typing same as terminal
Well, when I type in my terminal, the cursor looks like this: But in VS Code it looks like this: How do I change my VS Code cursor to look like my terminal's?
[ "Solved by go to setting, type cursor style, then change it to block\n\n" ]
[ 0 ]
[]
[]
[ "settings", "visual_studio_code" ]
stackoverflow_0074665086_settings_visual_studio_code.txt
Q: divide a number from the available list Is there any easiest way to divide the number based on the values available from the Map ex. int total = 7; Map<Integer, Integer> myList = new HashMap<>(); myList.put(0,4); myList.put(1,2); myList.put(2,1); myList.put(3,4); myList.put(4,3); myList.put(5,1) myList.put(6,1) output keys : [0,4] which holds [4, 3] values equals to 7 Here I will always pick the highest values from map. so keys [0, 4] is the output instead of [0,1,2]. Is there a convenient way to do this in Java A: Yes, you can use the TreeMap class in Java to easily achieve this. TreeMap is a implementation of the Map interface that keeps its entries sorted based on the natural ordering of its keys or on a Comparator provided at construction time. This means that you can retrieve the keys with the highest values in the map by using the lastKey method of the TreeMap class. Here is an example of how you can use a TreeMap to find the keys with the highest values in your map: int total = 7; Map<Integer, Integer> myList = new HashMap<>(); myList.put(0,4); myList.put(1,2); myList.put(2,1); myList.put(3,4); myList.put(4,3); myList.put(5,1) myList.put(6,1) // Create a TreeMap from the original map to keep the entries sorted TreeMap<Integer, Integer> sortedMap = new TreeMap<>(myList); // Find the keys with the highest values in the map List<Integer> keys = new ArrayList<>(); while (total > 0) { // Get the key with the highest value int key = sortedMap.lastKey(); keys.add(key); // Decrement the total by the value of the key total -= sortedMap.get(key); // Remove the key from the map sortedMap.remove(key); } // keys will now contain [0, 4] Using a TreeMap in this way allows you to easily find the keys with the highest values in the map without having to manually sort the entries or loop through the map multiple times.
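One caveat about the answer above: a TreeMap orders entries by key, not by value, so lastKey() returns the largest key (6 here), not the key holding the largest value. A hedged sketch that sorts the entries by value instead, skipping any value larger than the remaining total so the picks sum exactly to 7 and reproduce the expected [0, 4]; a greedy pass like this fits the example, though exact-sum selection is subset-sum in general. It continues from the question's myList setup and needs java.util imports.

List<Map.Entry<Integer, Integer>> entries = new ArrayList<>(myList.entrySet());
entries.sort(Map.Entry.<Integer, Integer>comparingByValue().reversed()
        .thenComparing(Map.Entry.<Integer, Integer>comparingByKey()));

List<Integer> keys = new ArrayList<>();
int remaining = total;
for (Map.Entry<Integer, Integer> e : entries) {
    if (remaining == 0) break;
    if (e.getValue() <= remaining) { // skip values that would overshoot
        keys.add(e.getKey());
        remaining -= e.getValue();
    }
}
// keys -> [0, 4] (values 4 + 3 == 7)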
divide a number from the available list
Is there an easy way to divide the number based on the values available from the Map? ex. int total = 7; Map<Integer, Integer> myList = new HashMap<>(); myList.put(0,4); myList.put(1,2); myList.put(2,1); myList.put(3,4); myList.put(4,3); myList.put(5,1); myList.put(6,1); output keys : [0,4] which hold the values [4, 3], equal to 7 Here I will always pick the highest values from the map, so keys [0, 4] are the output instead of [0,1,2]. Is there a convenient way to do this in Java?
[ "Yes, you can use the TreeMap class in Java to easily achieve this. TreeMap is a implementation of the Map interface that keeps its entries sorted based on the natural ordering of its keys or on a Comparator provided at construction time. This means that you can retrieve the keys with the highest values in the map by using the lastKey method of the TreeMap class.\nHere is an example of how you can use a TreeMap to find the keys with the highest values in your map:\nint total = 7;\n\nMap<Integer, Integer> myList = new HashMap<>();\nmyList.put(0,4);\nmyList.put(1,2);\nmyList.put(2,1);\nmyList.put(3,4);\nmyList.put(4,3);\nmyList.put(5,1)\nmyList.put(6,1)\n\n// Create a TreeMap from the original map to keep the entries sorted\nTreeMap<Integer, Integer> sortedMap = new TreeMap<>(myList);\n\n// Find the keys with the highest values in the map\nList<Integer> keys = new ArrayList<>();\nwhile (total > 0) {\n // Get the key with the highest value\n int key = sortedMap.lastKey();\n keys.add(key);\n // Decrement the total by the value of the key\n total -= sortedMap.get(key);\n // Remove the key from the map\n sortedMap.remove(key);\n}\n\n// keys will now contain [0, 4]\n\nUsing a TreeMap in this way allows you to easily find the keys with the highest values in the map without having to manually sort the entries or loop through the map multiple times.\n" ]
[ 1 ]
[]
[]
[ "java" ]
stackoverflow_0074665114_java.txt
Q: Undefined variable in Laravel even after defining everything correctly. Error: Undefined $fac_name in view Controller public function datadriver(){ $facc = DB::select('select distinct faculty_name as fac from faculties'); dd($facc); $fac_name = []; foreach($facc as $f) { $fac_name[] = $f->fac; } return redirect()->back()->with('fac_name',$fac_name); } web.php Route::get('/', function () { return view('homepage'); }); Route::get('/facultydata-g', [FacultyController::class,'datadriver'])->name('facultyDataDriver'); Route::post('/facultydata-p', [FacultyController::class,'facultydata'])->name('facultyDataParser'); Then in .blade.php I am just calling that variable {{$fac_name}} According to me I have done everything correct till now but I dont know why I am facing this issue of undefined variable,also tried compact() but nothing is working I just want to display that data on the view but it seems like a different problem, any help would be really appreciated. A: One possible issue could be that you are not passing the variable to the view properly. In your controller, you are redirecting to the previous page, but you are not passing the variable to the view. You can try the following instead: Controller: public function datadriver(){ $facc = DB::select('select distinct faculty_name as fac from faculties'); $fac_name = []; foreach($facc as $f) { $fac_name[] = $f->fac; } return view('your_view_name', compact('fac_name')); } In your view, you can access the variable using {{ $fac_name }}. Another possible issue could be that you are not returning the variable from the controller to the view in the correct format. In your controller, you are returning the variable as an array, but in the view, you are accessing it as a string. You can try the following instead: Controller: public function datadriver(){ $facc = DB::select('select distinct faculty_name as fac from faculties'); $fac_name = []; foreach($facc as $f) { $fac_name[] = $f->fac; } return view('your_view_name', ['fac_name' => $fac_name]); } In your view, you can access the variable using {{ $fac_name }}. I hope this helps.
Undefined variable in Laravel even after defining everything correctly. Error: Undefined $fac_name in view
Controller public function datadriver(){ $facc = DB::select('select distinct faculty_name as fac from faculties'); dd($facc); $fac_name = []; foreach($facc as $f) { $fac_name[] = $f->fac; } return redirect()->back()->with('fac_name',$fac_name); } web.php Route::get('/', function () { return view('homepage'); }); Route::get('/facultydata-g', [FacultyController::class,'datadriver'])->name('facultyDataDriver'); Route::post('/facultydata-p', [FacultyController::class,'facultydata'])->name('facultyDataParser'); Then in .blade.php I am just calling that variable {{$fac_name}} As far as I can tell, I have done everything correctly so far, but I don't know why I am facing this issue of an undefined variable; I also tried compact() but nothing is working. I just want to display that data on the view, but it seems like a different problem. Any help would be really appreciated.
[ "One possible issue could be that you are not passing the variable to the view properly. In your controller, you are redirecting to the previous page, but you are not passing the variable to the view.\nYou can try the following instead:\nController:\npublic function datadriver(){\n $facc = DB::select('select distinct faculty_name as fac from faculties');\n $fac_name = [];\n foreach($facc as $f) {\n $fac_name[] = $f->fac;\n }\n return view('your_view_name', compact('fac_name'));\n}\n\nIn your view, you can access the variable using {{ $fac_name }}.\nAnother possible issue could be that you are not returning the variable from the controller to the view in the correct format. In your controller, you are returning the variable as an array, but in the view, you are accessing it as a string. You can try the following instead:\nController:\npublic function datadriver(){\n $facc = DB::select('select distinct faculty_name as fac from faculties');\n $fac_name = [];\n foreach($facc as $f) {\n $fac_name[] = $f->fac;\n }\n return view('your_view_name', ['fac_name' => $fac_name]);\n}\n\nIn your view, you can access the variable using {{ $fac_name }}.\nI hope this helps.\n" ]
[ 0 ]
[]
[]
[ "controller", "laravel", "laravel_8", "php" ]
stackoverflow_0074665053_controller_laravel_laravel_8_php.txt
Q: public key signing - why this is not advisable Generally for signing a message, it's recommended to sign the hash of the message payload using the sender's private key, and the recipient should decrypt it using the sender's public key and validate that the hash is correct. I would like to know what would go wrong if I signed using the recipient's public key and the recipient decrypted it using his private key (like how payload encryption happens). I am new to signing & encryption. I could not find a convincing answer. A: If you want to sign a message, you have to use your private key, as this key is obviously private and no one else can sign a message with this key. So everybody can prove with your public key that you have signed the message, because no one else can have your private key to sign the message. On the other hand, 'signing' a message with the public key of the receiver is senseless. The key is public, everyone could have 'signed' this message, so the receiver has no way to prove that you actually sent this message. What you can do with the public key of the receiver is to encrypt a message. No one other than the receiver will be able to decrypt it, as this will require the private key of the receiver. But signing a message with it makes no sense.
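A minimal sign/verify sketch with Python's cryptography package (Ed25519) to make the asymmetry concrete: only the holder of the private key can produce the signature, while anyone holding the public key can check it.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sender_private = Ed25519PrivateKey.generate()
sender_public = sender_private.public_key()

message = b"pay Bob 10"
signature = sender_private.sign(message)      # only the sender can do this

try:
    sender_public.verify(signature, message)  # anyone can check it
    print("signature valid")
except InvalidSignature:
    print("signature forged or message tampered")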
public key signing - why this is not advisable
Generally for signing a message, it's recommended to sign the hash of the message payload using the sender's private key, and the recipient should decrypt it using the sender's public key and validate that the hash is correct. I would like to know what would go wrong if I signed using the recipient's public key and the recipient decrypted it using his private key (like how payload encryption happens). I am new to signing & encryption. I could not find a convincing answer.
[ "If you want to sign a message, you have to use your private key, as this key is obviously private and no one else can sign a message with this key. So everybody can prove with your public key, that you have signed the message, cause no one else can have your private key to sign the message.\nOn the other hand, 'signing' a message with the public key of the receiver is senseless. The key is public, everyone could have 'sign' this message, so the receiver has no way to prove that your were actually sending this message.\nWhat you can do with a public key of a sender is to encrypt a message. No one else than the receiver will be able to decrypt it, as this will require the private key of the receiver. But signing a message makes no sense.\n" ]
[ 1 ]
[]
[]
[ "cryptography", "encryption_asymmetric", "private_key", "public_key", "signing" ]
stackoverflow_0074664969_cryptography_encryption_asymmetric_private_key_public_key_signing.txt
Q: Get product color in color filter by Category wise I am trying to get a specific category products by category slug.I have Color model,Product model and product variation model in shop app. class Colour(models.Model): title = models.CharField(max_length=100) color_code = models.CharField(max_length=50,null=True) class Product(models.Model): product_name = models.CharField(max_length=100,unique=True) slug = models.SlugField(max_length=100,unique=True) content = RichTextUploadingField() price = models.IntegerField() images = models.ImageField(upload_to='photos/products') is_available = models.BooleanField(default=True) category = models.ForeignKey(Category, on_delete=models.CASCADE,related_name="procat") created_date = models.DateTimeField(auto_now_add=True) modified_date = models.DateTimeField(auto_now=True) is_featured = models.BooleanField() class ProductVaraiant(models.Model): product = models.ForeignKey(Product,on_delete=models.CASCADE) color = models.ForeignKey(Colour,on_delete=models.CASCADE,blank=True, null=True) size = models.ForeignKey(Size, on_delete=models.CASCADE,blank=True, null=True) brand = models.ForeignKey(Brand,on_delete=models.CASCADE,blank=True, null=True) amount_in_stock = models.IntegerField() class Meta: constraints = [ models.UniqueConstraint( fields=['product', 'color', 'size','brand'], name='unique_prod_color_size_combo' In my views.py, def shop(request,category_slug=None): categories = None products = None if category_slug != None: categories = get_object_or_404(Category,slug = category_slug) products = Product.objects.filter(category=categories,is_available=True).order_by('id') variation = ProductVaraiant.objects.filter(product__category = categories) print(variation) # color = color.objects.all() products_count = products.count() else: products = Product.objects.all().filter(is_available=True).order_by('id') products_count = products.count() variation = ProductVaraiant.objects.all() print(variation) context = { 'products' : products, 'products_count' : products_count, 'variation' : variation } return render(request,'shop/shop.html',context) my category model, class Category(MPTTModel): parent = TreeForeignKey('self',blank=True,null=True,related_name='children',on_delete=models.CASCADE) category_name = models.CharField(max_length=200,unique=True) category_img = models.ImageField(upload_to='photos/categories',blank=True) slug = models.SlugField(max_length=100,unique=True) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) def img_preview(self): return mark_safe('<img src = "{url}" width = "50" height = "50"/>'.format( url = self.category_img.url )) def __str__(self): return self.category_name class Meta: verbose_name_plural = 'categories' class MPTTMeta: order_insertion_by = ['category_name'] what i am trying to get is like i have 3 child category..Each category will have product of any color.So,if i filter product by category,color will be shown in sidebar of that category products without avoid getting duplicates as many product may have same color.So,i am getting same color multiple times.If i need to use distinct(),how to use it in query that will remove duplicate color based on product category.I tried this in template <form> <div class="custom-control custom-checkbox d-flex align-items-center justify-content-between mb-3"> <input type="checkbox" class="custom-control-input" checked id="color-all"> <label class="custom-control-label" for="price-all">All Color</label> <span class="badge border 
font-weight-normal">1000</span> </div> {% for i in variation %} {% ifchanged i.color %} <div class="custom-control custom-checkbox d-flex align-items-center justify-content-between mb-3"> <input type="checkbox" class="custom-control-input filter-checkbox" id="{{i.color.id}}" data-filter="color"> <label class="custom-control-label" for="{{i.color.id}}">{{i.color}}</label> <span class="badge border font-weight-normal">150</span> </div> {% endifchanged %} {% endfor %} </form> But,it just remove duplicate of last iteration.How to avoid getting duplicates for all color? A: Just where you have a line # color = color.objects.all() change it to color = variation.color.all().distinct('id') and then pass it to your template. A: Answer of my problem, what i did: variation = ProductVaraiant.objects.filter(product__category = categories) what i changed: variation = ProductVaraiant.objects.filter(product__category=categories).values('color__title').distinct() if using postgre database,don't need to use values,just use distinct('color') and you will need to use values for default database to avoid error below, DISTINCT ON fields is not supported by this database backend
Get product color in color filter by Category wise
I am trying to get a specific category products by category slug.I have Color model,Product model and product variation model in shop app. class Colour(models.Model): title = models.CharField(max_length=100) color_code = models.CharField(max_length=50,null=True) class Product(models.Model): product_name = models.CharField(max_length=100,unique=True) slug = models.SlugField(max_length=100,unique=True) content = RichTextUploadingField() price = models.IntegerField() images = models.ImageField(upload_to='photos/products') is_available = models.BooleanField(default=True) category = models.ForeignKey(Category, on_delete=models.CASCADE,related_name="procat") created_date = models.DateTimeField(auto_now_add=True) modified_date = models.DateTimeField(auto_now=True) is_featured = models.BooleanField() class ProductVaraiant(models.Model): product = models.ForeignKey(Product,on_delete=models.CASCADE) color = models.ForeignKey(Colour,on_delete=models.CASCADE,blank=True, null=True) size = models.ForeignKey(Size, on_delete=models.CASCADE,blank=True, null=True) brand = models.ForeignKey(Brand,on_delete=models.CASCADE,blank=True, null=True) amount_in_stock = models.IntegerField() class Meta: constraints = [ models.UniqueConstraint( fields=['product', 'color', 'size','brand'], name='unique_prod_color_size_combo' In my views.py, def shop(request,category_slug=None): categories = None products = None if category_slug != None: categories = get_object_or_404(Category,slug = category_slug) products = Product.objects.filter(category=categories,is_available=True).order_by('id') variation = ProductVaraiant.objects.filter(product__category = categories) print(variation) # color = color.objects.all() products_count = products.count() else: products = Product.objects.all().filter(is_available=True).order_by('id') products_count = products.count() variation = ProductVaraiant.objects.all() print(variation) context = { 'products' : products, 'products_count' : products_count, 'variation' : variation } return render(request,'shop/shop.html',context) my category model, class Category(MPTTModel): parent = TreeForeignKey('self',blank=True,null=True,related_name='children',on_delete=models.CASCADE) category_name = models.CharField(max_length=200,unique=True) category_img = models.ImageField(upload_to='photos/categories',blank=True) slug = models.SlugField(max_length=100,unique=True) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) def img_preview(self): return mark_safe('<img src = "{url}" width = "50" height = "50"/>'.format( url = self.category_img.url )) def __str__(self): return self.category_name class Meta: verbose_name_plural = 'categories' class MPTTMeta: order_insertion_by = ['category_name'] what i am trying to get is like i have 3 child category..Each category will have product of any color.So,if i filter product by category,color will be shown in sidebar of that category products without avoid getting duplicates as many product may have same color.So,i am getting same color multiple times.If i need to use distinct(),how to use it in query that will remove duplicate color based on product category.I tried this in template <form> <div class="custom-control custom-checkbox d-flex align-items-center justify-content-between mb-3"> <input type="checkbox" class="custom-control-input" checked id="color-all"> <label class="custom-control-label" for="price-all">All Color</label> <span class="badge border font-weight-normal">1000</span> </div> {% for i in variation %} {% 
ifchanged i.color %} <div class="custom-control custom-checkbox d-flex align-items-center justify-content-between mb-3"> <input type="checkbox" class="custom-control-input filter-checkbox" id="{{i.color.id}}" data-filter="color"> <label class="custom-control-label" for="{{i.color.id}}">{{i.color}}</label> <span class="badge border font-weight-normal">150</span> </div> {% endifchanged %} {% endfor %} </form> But,it just remove duplicate of last iteration.How to avoid getting duplicates for all color?
[ "Just where you have a line\n# color = color.objects.all()\nchange it to color = variation.color.all().distinct('id') and then pass it to your template.\n", "Answer of my problem,\nwhat i did:\nvariation = ProductVaraiant.objects.filter(product__category = categories)\n\nwhat i changed:\nvariation = ProductVaraiant.objects.filter(product__category=categories).values('color__title').distinct()\n\nif using postgre database,don't need to use values,just use distinct('color') and you will need to use values for default database to avoid error below,\nDISTINCT ON fields is not supported by this database backend\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_models", "django_templates", "django_views" ]
stackoverflow_0074651997_django_django_models_django_templates_django_views.txt
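A minimal sketch of the accepted approach wired back into the shop view, assuming the model and field names from the question (ProductVaraiant, Colour, color__title); the colors context key is an illustrative name, not from the original code:

from django.shortcuts import get_object_or_404, render
from .models import Category, Product, ProductVaraiant

def shop(request, category_slug=None):
    categories = get_object_or_404(Category, slug=category_slug)
    products = Product.objects.filter(category=categories, is_available=True).order_by('id')
    # one row per distinct colour within this category; works on any backend
    # because DISTINCT runs over the selected values, not over whole model rows
    colors = (ProductVaraiant.objects
              .filter(product__category=categories)
              .values('color__title', 'color__id')
              .distinct())
    return render(request, 'shop/shop.html', {'products': products, 'colors': colors})

The template can then loop over colors directly, which removes the need for {% ifchanged %} entirely.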
Q: How to tunnel localhost on android I have a Python webserver with webhooks. When it runs on localhost on my desktop and is tunneled through loophole.site, it works. I then ran the Python webserver code on Android 12; it works on ports > 1024, but my TradingView alert webhooks only accept HTTP port 80 or HTTPS port 443, and a plain localhost address is not accepted. Please guide me on how to serve on port 80 without rooting the device (i.e. without superuser), or how to tunnel the localhost http://127.0.0.1:8000/webhook on Android, so the webserver can work on the phone. The webserver gets alert payloads and pushes them to perform some actions on an exchange. I have attached a few pics for reference. The apk is Pydroid and the Python version is > 3.7. A: If you use the Android emulator, use http://10.0.2.2:[your port]
How to tunnel localhost on android
I have a Python webserver with webhooks. When it runs on localhost on my desktop and is tunneled through loophole.site, it works. I then ran the Python webserver code on Android 12; it works on ports > 1024, but my TradingView alert webhooks only accept HTTP port 80 or HTTPS port 443, and a plain localhost address is not accepted. Please guide me on how to serve on port 80 without rooting the device (i.e. without superuser), or how to tunnel the localhost http://127.0.0.1:8000/webhook on Android, so the webserver can work on the phone. The webserver gets alert payloads and pushes them to perform some actions on an exchange. I have attached a few pics for reference. The apk is Pydroid and the Python version is > 3.7.
[ "If you use android emulator use http://10.0.2.2:[your port]\n" ]
[ 0 ]
[]
[]
[ "android", "http_tunneling", "localhost", "python", "webserver" ]
stackoverflow_0074664974_android_http_tunneling_localhost_python_webserver.txt
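A sketch of the reverse-tunnel route, which sidesteps the port-80 restriction entirely: the tunnel provider terminates public ports 80/443 and forwards to the local high port, so nothing privileged runs on the phone. The provider shown (localhost.run over plain SSH) is one assumption among several similar services:

# on the Android device, next to the webserver listening on 8000
ssh -R 80:127.0.0.1:8000 nokey@localhost.run
# the service prints a public https:// URL; point the TradingView webhook at that URL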
Q: mock grpc client request and response using testify I have been trying to mock my grpc client/server to control the response. what I'm trying to achieve is the flex testify give us with mocks, and using the once(),times() idea. so I will explain further lets say I have a go program that run the following: I want to control each response of the iteration of client.getUser() type DB struct { api serviceAPI } type service struct { } type serviceAPI interface { getClient() (pb.UserServiceClient, *grpc.ClientConn, error) } func (db *DB) getNextUser()(*UserAcc,error){ client := db.api.getClient() var index uint64 = 0 for { user = client.getUser(context.Background(), &pb.GetUserRequest{Index: index}) if(user == nil){ return nil,nil } if user(!=nil){ fmt.Printl(user) } } } func (s service) getClient() (pb.UserServiceClient, *grpc.ClientConn, error) { addr := GetAgentAddress() conn, _ := grpc.Dial(addr, grpc.WithTransportCredentials(insecure.NewCredentials())) client := pb.NewUserServiceClient(conn) return client, conn, nil } proto.go message GetUserRequest{ uint64 index = 1; } message GetUserResponse{ bytes user = 1; } service userService { rpc GetUser (GetUserRequest) returns (GetUserResponse); } user_grpc.pb.go type UserServiceClient interface { GetUser(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*GetUserResponse, error) UpdateUser(ctx context.Context, in *UpdateUserRequest, opts ...grpc.CallOption) (*UpdateUserResponse, error) main_test.go type MainTestSuite struct { suite.Suite } type serviceMock struct { mock.Mock } type clientMock struct { mock.Mock } func (c *clientMock) UpdateUser(ctx context.Context, in *pb.UpdateUserRequest, opts ...grpc.CallOption) (*pb.UpdateUserResponse, error) { //TODO implement me panic("implement me") } func (c *clientMock) GetUser(ctx context.Context, in *pb.GetUserRequest, opts ...grpc.CallOption) (*pb.GetUserResponse, error) { args := c.Called(ctx, in, opts) return args.Get(0).(*pb.GetUserResponse), args.Error(1) } func (service *serviceMock) getClient() (pb.UserServiceClient, *grpc.ClientConn, error) { args := service.Called() return args.Get(0).(clientMock), args.Get(1).(*grpc.ClientConn), args.Error(2) } func (suite *MainTestSuite) TestGetNextUser() { t := suite.T() t.Run("Should successfully get the next User", func(t *testing.T) { mServiceApi := serviceMock{} ClientMock := clientMock{} mServiceApi.On("getClient").Return(ClientMock, nil, nil) ClientMock.On("GetUser", mock.Anything, mock.Anything, mock.Anything).Return(&pb.GetUserResponse{ User: []bytes("something"), }, nil).once() ClientMock.On("GetUser", mock.Anything, mock.Anything, mock.Anything).Return(&pb.GetUserResponse{ User: []bytes("some other user"), }, nil).once() ClientMock.On("GetUser", mock.Anything, mock.Anything, mock.Anything).Return(&pb.GetUserResponse{ User: []bytes("something"), }, nil).once() db := DB{ api: &mServiceApi, } nextUser ,_ := db.getNextUser(true) assert.Nil(t, nextUser) }) } I would like for each iteration of the GetUser command of the client grpc to get different answers using the once() or times() of testify am I'm mocking the grpc client the right way? right now i get the following issues: Cannot use 'args.Get(0).(clientMock)' (type clientMock) as the type pb.UserServiceClient Type does not implement 'pb.UserServiceClient' as the 'UpdateUser' method has a pointer receiver. any idea why? A: I managed to do it with the mockery package and install it by brew mockery --name=UserServiceClient --unroll-variadic=False
mock grpc client request and response using testify
I have been trying to mock my gRPC client/server to control the responses. What I'm trying to achieve is the flexibility testify gives us with mocks, using the Once()/Times() idea. To explain further, let's say I have a Go program that runs the following; I want to control each response of the iteration of client.GetUser().

type DB struct {
    api serviceAPI
}

type service struct {
}

type serviceAPI interface {
    getClient() (pb.UserServiceClient, *grpc.ClientConn, error)
}

func (db *DB) getNextUser() (*UserAcc, error) {
    client, _, _ := db.api.getClient()
    var index uint64 = 0
    for {
        user, _ := client.GetUser(context.Background(), &pb.GetUserRequest{Index: index})
        if user == nil {
            return nil, nil
        }
        fmt.Println(user)
        index++
    }
}

func (s service) getClient() (pb.UserServiceClient, *grpc.ClientConn, error) {
    addr := GetAgentAddress()
    conn, _ := grpc.Dial(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
    client := pb.NewUserServiceClient(conn)
    return client, conn, nil
}

proto.go

message GetUserRequest{
    uint64 index = 1;
}
message GetUserResponse{
    bytes user = 1;
}

service userService {
    rpc GetUser (GetUserRequest) returns (GetUserResponse);
}

user_grpc.pb.go

type UserServiceClient interface {
    GetUser(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*GetUserResponse, error)
    UpdateUser(ctx context.Context, in *UpdateUserRequest, opts ...grpc.CallOption) (*UpdateUserResponse, error)
}

main_test.go

type MainTestSuite struct {
    suite.Suite
}

type serviceMock struct {
    mock.Mock
}

type clientMock struct {
    mock.Mock
}

func (c *clientMock) UpdateUser(ctx context.Context, in *pb.UpdateUserRequest, opts ...grpc.CallOption) (*pb.UpdateUserResponse, error) {
    //TODO implement me
    panic("implement me")
}

func (c *clientMock) GetUser(ctx context.Context, in *pb.GetUserRequest, opts ...grpc.CallOption) (*pb.GetUserResponse, error) {
    args := c.Called(ctx, in, opts)
    return args.Get(0).(*pb.GetUserResponse), args.Error(1)
}

func (service *serviceMock) getClient() (pb.UserServiceClient, *grpc.ClientConn, error) {
    args := service.Called()
    return args.Get(0).(clientMock), args.Get(1).(*grpc.ClientConn), args.Error(2)
}

func (suite *MainTestSuite) TestGetNextUser() {
    t := suite.T()
    t.Run("Should successfully get the next User", func(t *testing.T) {
        mServiceApi := serviceMock{}
        ClientMock := clientMock{}
        mServiceApi.On("getClient").Return(ClientMock, nil, nil)
        ClientMock.On("GetUser", mock.Anything, mock.Anything, mock.Anything).Return(&pb.GetUserResponse{
            User: []byte("something"),
        }, nil).Once()
        ClientMock.On("GetUser", mock.Anything, mock.Anything, mock.Anything).Return(&pb.GetUserResponse{
            User: []byte("some other user"),
        }, nil).Once()
        ClientMock.On("GetUser", mock.Anything, mock.Anything, mock.Anything).Return(&pb.GetUserResponse{
            User: []byte("something"),
        }, nil).Once()

        db := DB{
            api: &mServiceApi,
        }
        nextUser, _ := db.getNextUser()
        assert.Nil(t, nextUser)
    })
}

I would like each iteration of the GetUser call on the gRPC client to get a different answer, using Once() or Times() from testify. Am I mocking the gRPC client the right way? Right now I get the following issues:

Cannot use 'args.Get(0).(clientMock)' (type clientMock) as the type pb.UserServiceClient
Type does not implement 'pb.UserServiceClient' as the 'UpdateUser' method has a pointer receiver.

Any idea why?
[ "I managed to do it with the mockery package and install it by brew\nmockery --name=UserServiceClient --unroll-variadic=False\n\n" ]
[ 0 ]
[]
[]
[ "go", "grpc", "grpc_go", "unit_testing" ]
stackoverflow_0074644173_go_grpc_grpc_go_unit_testing.txt
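The pointer-receiver error happens because clientMock's methods are defined on *clientMock, so only the pointer type satisfies pb.UserServiceClient. A minimal hand-written sketch of the fix (an alternative to generating the mock with mockery); all names come from the question:

func (service *serviceMock) getClient() (pb.UserServiceClient, *grpc.ClientConn, error) {
    args := service.Called()
    // assert to the interface, not the concrete struct; return a nil conn directly,
    // since a type assertion on a nil args.Get(1) would panic
    return args.Get(0).(pb.UserServiceClient), nil, args.Error(2)
}

// in the test, hand the mock over as a pointer:
mServiceApi.On("getClient").Return(&ClientMock, nil, nil)

With that in place, the three Once() expectations fire in order, one per GetUser call.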
Q: Report Properties - Visual Studio 2019 I'm simply trying to find where I can change report properties associated with how the report would be printed (margins, landscape option, etc.). I know it's via the Report Properties option, but I can't find this anywhere. A: Click a blank area of the report outside the report body and then hit F4. This will give you the report properties. Note that within the report is the report body, which also has its own dimension properties. To get to this, click the "background" of the report that contains all the report controls. A: From the Solution Explorer, open the rdlc report, then click Extensions in the main menu, click Report, and click Report Properties.
Report Properties - Visual Studio 2019
I'm simply trying to find where I can change report properties associated with how the report would be printed (margins, landscape option, etc.). I know it's via the Report Properties option, but I can't find this anywhere.
[ "Click a blank area of the report outside the report body and then hit F4. This will give you the report properties.\nNote that within the report is the report body which also has it's own dimension properties. To get to this click the \"background\" of the report that contains all the report controls.\n", "From the Solution Explorer open rdlc report. then click Extensions from main menu click Report and click Report properties.\n" ]
[ 1, 0 ]
[]
[]
[ "reporting_services", "visual_studio_2019" ]
stackoverflow_0064140874_reporting_services_visual_studio_2019.txt
Q: Conditional Fillna in Pandas with conditional increment from the previous value I want to fillna values in the 'last unique id' column based on the increment values from the previous row **input is** Channel last unique id 0 MYNTRA MN000351370 1 NYKAA NYK00038219 2 NYKAA NaN 3 NYKAA NaN 4 NYKAA NaN 5 NYKAA NaN 6 MYNTRA NaN 7 MYNTRA NaN 8 MYNTRA NaN 9 MYNTRA NaN 10 MYNTRA NaN 11 MYNTRA NaN Expected output Channel last unique id 0 MYNTRA MN000351370 1 NYKAA NYK00038219 2 NYKAA NYK00038220 3 NYKAA NYK00038221 4 NYKAA NYK00038222 5 NYKAA NYK00038223 6 MYNTRA MN000351371 7 MYNTRA MN000351372 8 MYNTRA MN000351373 9 MYNTRA MN000351374 10 MYNTRA MN000351375 11 MYNTRA MN000351376 Hope you understood the problem A: Example data = {'col1': {0: 'A', 1: 'B', 2: 'A', 3: 'A', 4: 'B', 5: 'B'}, 'col2': {0: 'A001', 1: 'BC020', 2: None, 3: None, 4: 'BC021', 5: None}} df = pd.DataFrame(data) df col1 col2 0 A A001 1 B BC020 2 A None 3 A None 4 B BC021 5 B None Code df[['col3', 'col4']] = df.groupby('col1')['col2'].ffill().str.extract('(\D+)(\d+)') df['col4'] = df['col4'].astype('int') + df.groupby(['col1', 'col4']).cumcount() df['col2'] = df['col2'].fillna(df['col3'] + df['col4'].astype('str').str.zfill(3)) df = df.drop(['col3', 'col4'], axis=1) result(df): col1 col2 0 A A001 1 B BC020 2 A A002 3 A A003 4 B BC021 5 B BC022 A: You can use groupby.cumcount to increment the number, and add it the the number part: g = df.groupby('Channel') # ffill per group # extract letter and number part df2 = (g['last unique id'].ffill() .str.extract(r'(\D+)(\d+)') ) # convert number part to integer # add cumcount, merge back as string df['last unique id'] = (df2[0] .add(df2[1].astype(int) .add(g.cumcount()) .astype(str) ) ) print(df) Output: Channel last unique id 0 MYNTRA MN351370 1 NYKAA NYK38219 2 NYKAA NYK38220 3 NYKAA NYK38221 4 NYKAA NYK38222 5 NYKAA NYK38223 6 MYNTRA MN351371 7 MYNTRA MN351372 8 MYNTRA MN351373 9 MYNTRA MN351374 10 MYNTRA MN351375 11 MYNTRA MN351376 A: Here is how you get the desired output with padding zeros to have your id always at a fixed length of 11. df["last unique id"] = df.groupby("Channel")["last unique id"].ffill() tmp = df["last unique id"].str.extract(r"(?P<ident>\D+)(?P<num>\d+)", expand=True) tmp["add"] = df.groupby("Channel")["last unique id"].apply( lambda x: x.eq(x.shift()).cumsum() ) total_len_id = 11 df["last unique id"] = tmp.apply( lambda row: row["ident"] + str(int(row["num"]) + row["add"]).rjust(total_len_id - len(row["ident"]), "0"), axis=1, ) print(df) Channel last unique id 0 MYNTRA MN000351370 1 NYKAA NYK00038219 2 NYKAA NYK00038220 3 NYKAA NYK00038221 4 NYKAA NYK00038222 5 NYKAA NYK00038223 6 MYNTRA MN000351371 7 MYNTRA MN000351372 8 MYNTRA MN000351373 9 MYNTRA MN000351374 10 MYNTRA MN000351375 11 MYNTRA MN000351376
Conditional Fillna in Pandas with conditional increment from the previous value
I want to fillna values in the 'last unique id' column based on incrementing the values from the previous row.

Input is:

   Channel last unique id
0   MYNTRA    MN000351370
1    NYKAA    NYK00038219
2    NYKAA            NaN
3    NYKAA            NaN
4    NYKAA            NaN
5    NYKAA            NaN
6   MYNTRA            NaN
7   MYNTRA            NaN
8   MYNTRA            NaN
9   MYNTRA            NaN
10  MYNTRA            NaN
11  MYNTRA            NaN

Expected output:

   Channel last unique id
0   MYNTRA    MN000351370
1    NYKAA    NYK00038219
2    NYKAA    NYK00038220
3    NYKAA    NYK00038221
4    NYKAA    NYK00038222
5    NYKAA    NYK00038223
6   MYNTRA    MN000351371
7   MYNTRA    MN000351372
8   MYNTRA    MN000351373
9   MYNTRA    MN000351374
10  MYNTRA    MN000351375
11  MYNTRA    MN000351376

I hope you understood the problem.
[ "Example\ndata = {'col1': {0: 'A', 1: 'B', 2: 'A', 3: 'A', 4: 'B', 5: 'B'},\n 'col2': {0: 'A001', 1: 'BC020', 2: None, 3: None, 4: 'BC021', 5: None}}\ndf = pd.DataFrame(data)\n\ndf\n col1 col2\n0 A A001\n1 B BC020\n2 A None\n3 A None\n4 B BC021\n5 B None\n\nCode\ndf[['col3', 'col4']] = df.groupby('col1')['col2'].ffill().str.extract('(\\D+)(\\d+)')\ndf['col4'] = df['col4'].astype('int') + df.groupby(['col1', 'col4']).cumcount()\ndf['col2'] = df['col2'].fillna(df['col3'] + df['col4'].astype('str').str.zfill(3))\ndf = df.drop(['col3', 'col4'], axis=1)\n\nresult(df):\n col1 col2\n0 A A001\n1 B BC020\n2 A A002\n3 A A003\n4 B BC021\n5 B BC022\n\n", "You can use groupby.cumcount to increment the number, and add it the the number part:\ng = df.groupby('Channel')\n\n# ffill per group\n# extract letter and number part\ndf2 = (g['last unique id'].ffill()\n .str.extract(r'(\\D+)(\\d+)')\n )\n\n# convert number part to integer\n# add cumcount, merge back as string\ndf['last unique id'] = (df2[0]\n .add(df2[1].astype(int)\n .add(g.cumcount())\n .astype(str)\n )\n )\n\nprint(df)\n\nOutput:\n Channel last unique id\n0 MYNTRA MN351370\n1 NYKAA NYK38219\n2 NYKAA NYK38220\n3 NYKAA NYK38221\n4 NYKAA NYK38222\n5 NYKAA NYK38223\n6 MYNTRA MN351371\n7 MYNTRA MN351372\n8 MYNTRA MN351373\n9 MYNTRA MN351374\n10 MYNTRA MN351375\n11 MYNTRA MN351376\n\n", "Here is how you get the desired output with padding zeros to have your id always at a fixed length of 11.\ndf[\"last unique id\"] = df.groupby(\"Channel\")[\"last unique id\"].ffill()\n\ntmp = df[\"last unique id\"].str.extract(r\"(?P<ident>\\D+)(?P<num>\\d+)\", expand=True)\ntmp[\"add\"] = df.groupby(\"Channel\")[\"last unique id\"].apply(\n lambda x: x.eq(x.shift()).cumsum()\n)\n\ntotal_len_id = 11\ndf[\"last unique id\"] = tmp.apply(\n lambda row: row[\"ident\"]\n + str(int(row[\"num\"]) + row[\"add\"]).rjust(total_len_id - len(row[\"ident\"]), \"0\"),\n axis=1,\n)\n\nprint(df)\n\n Channel last unique id\n0 MYNTRA MN000351370\n1 NYKAA NYK00038219\n2 NYKAA NYK00038220\n3 NYKAA NYK00038221\n4 NYKAA NYK00038222\n5 NYKAA NYK00038223\n6 MYNTRA MN000351371\n7 MYNTRA MN000351372\n8 MYNTRA MN000351373\n9 MYNTRA MN000351374\n10 MYNTRA MN000351375\n11 MYNTRA MN000351376\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "loops", "numpy", "pandas", "python" ]
stackoverflow_0074664488_dataframe_loops_numpy_pandas_python.txt
Q: Change Snowflake WH Size before Run in Informatica When running an Informatica IICS job into Snowflake, the batch warehouse is sized for the biggest jobs. Is there a way within Informatica to alter the warehouse size so that we can run with a smaller WH for most of the jobs but then scale it up when we run just the big jobs? This would allow us to better size the batch warehouse but then scale it automatically when needed. A: Yes - just use a pre SQL command to resize the warehouse before the task/job runs
Change Snowflake WH Size before Run in Informatica
When running an Informatica IICS job into Snowflake, the batch warehouse is sized for the biggest jobs. Is there a way within Informatica to alter the warehouse size so that we can run with a smaller WH for most of the jobs but then scale it up when we run just the big jobs? This would allow us to better size the batch warehouse but then scale it automatically when needed.
[ "Yes - just use a pre SQL command to resize the warehouse before the task/job runs\n" ]
[ 1 ]
[]
[]
[ "iics", "informatica", "snowflake_cloud_data_platform" ]
stackoverflow_0074664031_iics_informatica_snowflake_cloud_data_platform.txt
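A sketch of that pre-SQL approach, with the warehouse name and sizes as placeholders:

-- pre-SQL on the large job's task: scale up before it runs
ALTER WAREHOUSE BATCH_WH SET WAREHOUSE_SIZE = 'XLARGE';

-- post-SQL (or pre-SQL of the regular jobs): scale back down
ALTER WAREHOUSE BATCH_WH SET WAREHOUSE_SIZE = 'SMALL';

The resize applies to queries that start after it, so the big job picks up the larger size immediately while queries already running finish on the old size.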
Q: Add fragment to RedirectToPage Hi all, and sorry for my English. I am trying to redirect to a page passing a fragment, but I'm unable to add the fragment to the RedirectToPage action. If I add "#" as part of the string, it is converted to %23 and is therefore useless. PostAction public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } _context.Attach(AnagraficaClientiContatti).State = EntityState.Modified; try { await _context.SaveChangesAsync(); } catch (DbUpdateConcurrencyException) { if (!AnagraficaClientiContattiExists(AnagraficaClientiContatti.AnagraficaClientiContattiID)) { return NotFound(); } else { throw; } } return RedirectToPage( "../Index", new { id = Request.Query["idmaster"], pgm = Request.Query["pgm"], pgs1 = Request.Query["pgs1"], activetab = Request.Query["activetab"], searchstring = Request.Query["searchstring"] } ); } and this works fine, but I need to redirect to the above page with "#specificsection" at the end. I tried to change the call to return RedirectToPage( "../Index", new { id = Request.Query["idmaster"], pgm = Request.Query["pgm"], pgs1 = Request.Query["pgs1"], activetab = Request.Query["activetab"], searchstring = Request.Query["searchstring"] + "#specificsection" } ); but the browser returns a "wrong" URL with %23 instead of #: ?id=20&pgm=2&pgs1=2&activetab=custom-tabs-one-contatti&searchstring=text%23specificsection No luck passing the fragment as the third or fourth parameter (with the third null) in RedirectToPage, as I understood from the documentation. Can someone help me understand what I'm doing wrong? Thanks in advance. Best, Massimo A: Use the overload that takes a string?, string?, object?, string? for the page, page handler (which is null), route values and a fragment: return RedirectToPage("/Index", null, new { id = Request.Query["idmaster"], pgm = Request.Query["pgm"], pgs1 = Request.Query["pgs1"], activetab = Request.Query["activetab"], searchstring = Request.Query["searchstring"] }, "specificsection");
Add fragment to RedirectToPage
Hi all, and sorry for my English. I am trying to redirect to a page passing a fragment, but I'm unable to add the fragment to the RedirectToPage action. If I add "#" as part of the string, it is converted to %23 and is therefore useless. PostAction public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } _context.Attach(AnagraficaClientiContatti).State = EntityState.Modified; try { await _context.SaveChangesAsync(); } catch (DbUpdateConcurrencyException) { if (!AnagraficaClientiContattiExists(AnagraficaClientiContatti.AnagraficaClientiContattiID)) { return NotFound(); } else { throw; } } return RedirectToPage( "../Index", new { id = Request.Query["idmaster"], pgm = Request.Query["pgm"], pgs1 = Request.Query["pgs1"], activetab = Request.Query["activetab"], searchstring = Request.Query["searchstring"] } ); } and this works fine, but I need to redirect to the above page with "#specificsection" at the end. I tried to change the call to return RedirectToPage( "../Index", new { id = Request.Query["idmaster"], pgm = Request.Query["pgm"], pgs1 = Request.Query["pgs1"], activetab = Request.Query["activetab"], searchstring = Request.Query["searchstring"] + "#specificsection" } ); but the browser returns a "wrong" URL with %23 instead of #: ?id=20&pgm=2&pgs1=2&activetab=custom-tabs-one-contatti&searchstring=text%23specificsection No luck passing the fragment as the third or fourth parameter (with the third null) in RedirectToPage, as I understood from the documentation. Can someone help me understand what I'm doing wrong? Thanks in advance. Best, Massimo
[ "Use the overload that takes a string?, string?, object?, string? for the page, page handler (which is null), route values and a fragment :\nreturn RedirectToPage(\"/Index\", null, new { \n id = Request.Query[\"idmaster\"], \n pgm = Request.Query[\"pgm\"], \n pgs1 = Request.Query[\"pgs1\"],\n activetab = Request.Query[\"activetab\"], \n searchstring = Request.Query[\"searchstring\"]\n},\n\"specificsection\")\n);\n\n" ]
[ 0 ]
[]
[]
[ "asp.net_core", "razor_pages" ]
stackoverflow_0074653001_asp.net_core_razor_pages.txt
Q: mongodb 4 Data directory C:\data\db\ not found I downloaded and installed the latest version of MongoDB, which is 4.0.2, and set the correct path variable. When I want to start the MongoDB service using the mongod command I get the following error: exception in initAndListen: NonExistentPath: Data directory C:\data\db\ not found., terminating I know that I should create the missing directory, but a data directory is automatically created in the following path: C:\Program Files\MongoDB\Server\4.0 I checked the mongod.cfg file and the correct path is already set: dbPath: C:\Program Files\MongoDB\Server\4.0\data Now how do I tell mongod to look for the folder it thinks is missing in the correct path? A: I had the same issue but after I created the directory C:\data\db\ it just worked. A: I too had the same issue after a Windows update; MongoDB did not start automatically. Creating a new directory C:\data\db is not the right way, as MongoDB has already configured the directory C:\Program Files\MongoDB\Server\4.0\data as its data path. Run the following commands in cmd as Administrator: cd C:\Program Files\MongoDB\Server\4.0\bin mongod --dbpath="C:\Program Files\MongoDB\Server\4.0\data". This worked for me. A: I tried to open CMD in admin mode and the error was gone. Hope this helps someone. A: Go to the C:\Program Files\MongoDB\Server\4.0\bin\mongod.cfg file, update the field below with this value: dbPath: ....\data\db (directory path) and restart the server once. A: Create the directory/folder like below - C:\data\db open cmd at C:\Program Files\MongoDB\Server\6.0\bin and run the 'mongod' command like below - C:\Program Files\MongoDB\Server\6.0\bin>mongod problem solved!
mongodb 4 Data directory C:\data\db\ not found
I downloaded and installed the latest version of MongoDB, which is 4.0.2, and set the correct path variable. When I want to start the MongoDB service using the mongod command I get the following error: exception in initAndListen: NonExistentPath: Data directory C:\data\db\ not found., terminating I know that I should create the missing directory, but a data directory is automatically created in the following path: C:\Program Files\MongoDB\Server\4.0 I checked the mongod.cfg file and the correct path is already set: dbPath: C:\Program Files\MongoDB\Server\4.0\data Now how do I tell mongod to look for the folder it thinks is missing in the correct path?
[ "I had the same issue but after I create the directory C:\\data\\db\\ it just worked.\n", "I too had the same issue after an Windows update, Mongodb did not start automatically. Creating a new directory C:data/db will not be the right way as Mongodb has already configured the directory C:\\Program Files\\MongoDB\\Server\\4.0\\data as datapath.\nRun the following command in cmd as Administrator.\ncd C:\\Program Files\\MongoDB\\Server\\4.0\\bin\nmongod --dbpath=\"C:\\Program Files\\MongoDB\\Server\\4.0\\data\".\n\nThis worked for me.\n", "i tried to open CMD in admin mode and the error was gone. Hope this helps someone.\n", "got to C:\\Program Files\\MongoDB\\Server\\4.0\\bin\\mongod.cfg file\nupdate below fields with these values\ndbPath: ....\\data\\db (directory path)\nand restart server once\n", "\nCreate the directory/folder like below -\nC:\\data\\db\n\nopen cmd at C:\\Program Files\\MongoDB\\Server\\6.0\\bin and run 'mongod' command like below -\nC:\\Program Files\\MongoDB\\Server\\6.0\\bin>mongod\n\n\nproblem solved!\n" ]
[ 28, 8, 1, 0, 0 ]
[ "This solution may solve your problem problem\n\nMake a directory as \nsudo mkdir -p /data/db\nThat will make a directory named as db and than try to start with commands\nsudo mongod\n\nIf you get another error or problem with starting mongod, You may find problem as\n\nFailed to set up listener: SocketException: Address already in use\n If you find that another error than you have to kill the running process of mongod by typing to terminal as\n\nps ax | grep mongod\n\nFind the mongod running port and kill the process.\n sudo kill ps_number\nAnother way is to make a specefic port when starting mongod as\nsudo mongod --port 27018\n\n" ]
[ -2 ]
[ "database", "mongodb" ]
stackoverflow_0052647724_database_mongodb.txt
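For completeness, the config-file route the later answers hint at: point the service at whichever directory you actually want instead of creating C:\data\db. A sketch of the relevant part of mongod.cfg (YAML; the path shown is the default install location from the question):

# C:\Program Files\MongoDB\Server\4.0\bin\mongod.cfg
storage:
  dbPath: C:\Program Files\MongoDB\Server\4.0\data

Restart the MongoDB Windows service afterwards so mongod picks up the setting; running bare mongod without --config ignores this file, which is why the plain command falls back to C:\data\db.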
Q: Track user clicking on a tabPanel() in R Shiny application with Matomo I am trying to track tab-viewings for a R-Shiny application using Matomo. The tabs are created using tabPanel(). I have not found a solution yet that works. So far I have tried the solutions mentioned here and here. I tried to insert this in the server: server <- function(input, output, session) { ... observe({ if(input$>tabsetPanelid< == ">tabPanelid<") { HTML("<script> _paq.push(['trackPageView']); _paq.push(['setDocumentTitle', '>test<']); </script>") } }) ... } And I tried inserting this in the UI: ui <- fluidPage( ... tags$script( HTML( "$(document).on('click', '>tabPanelid<', function(e) { ga('send', 'event', 'TabsetPanel', 'Tab Viewed', $(this).attr('data-value')); });" ) ), ... ) A: I don't know what is Matomo, but here is how to track the active tab. When you do tabsetPanel(tabPanel("THE TAB ID", tags$span(2))), this produces this HTML code: <div class="tabbable"> <ul class="nav nav-tabs" data-tabsetid="2677"> <li class="active"> <a href="#tab-2677-1" data-toggle="tab" data-bs-toggle="tab" data-value="THE TAB ID">THE TAB ID</a> </li> </ul> <div class="tab-content" data-tabsetid="2677"> <div class="tab-pane active" data-value="THE TAB ID" id="tab-2677-1"> <span>2</span> </div> </div> </div> This is some Bootstrap 3 stuff. You have to listen to the event shown.bs.tab on the <a> elements with attribute data-toggle="tab", and the tab id is in the data-value attribute. So here is how: $("a[data-toggle='tab']").on("shown.bs.tab", function(e) { var tabId = $(e.target).data("value"); alert(tabId); }); There are other tabs events: $("a[data-toggle='tab']") .on("show.bs.tab", function() { alert("New tab will be visible now"); }).on("shown.bs.tab", function() { alert("New tab is visible now"); }).on("hide.bs.tab", function() { alert("Previous tab will be hidden now"); }).on("hidden.bs.tab", function() { alert("Previous tab is hidden now"); });
Track user clicking on a tabPanel() in R Shiny application with Matomo
I am trying to track tab-viewings for a R-Shiny application using Matomo. The tabs are created using tabPanel(). I have not found a solution yet that works. So far I have tried the solutions mentioned here and here. I tried to insert this in the server: server <- function(input, output, session) { ... observe({ if(input$>tabsetPanelid< == ">tabPanelid<") { HTML("<script> _paq.push(['trackPageView']); _paq.push(['setDocumentTitle', '>test<']); </script>") } }) ... } And I tried inserting this in the UI: ui <- fluidPage( ... tags$script( HTML( "$(document).on('click', '>tabPanelid<', function(e) { ga('send', 'event', 'TabsetPanel', 'Tab Viewed', $(this).attr('data-value')); });" ) ), ... )
[ "I don't know what is Matomo, but here is how to track the active tab.\nWhen you do tabsetPanel(tabPanel(\"THE TAB ID\", tags$span(2))), this produces this HTML code:\n<div class=\"tabbable\">\n <ul class=\"nav nav-tabs\" data-tabsetid=\"2677\">\n <li class=\"active\">\n <a href=\"#tab-2677-1\" data-toggle=\"tab\" data-bs-toggle=\"tab\" data-value=\"THE TAB ID\">THE TAB ID</a>\n </li>\n </ul>\n <div class=\"tab-content\" data-tabsetid=\"2677\">\n <div class=\"tab-pane active\" data-value=\"THE TAB ID\" id=\"tab-2677-1\">\n <span>2</span>\n </div>\n </div>\n</div>\n\nThis is some Bootstrap 3 stuff. You have to listen to the event shown.bs.tab on the <a> elements with attribute data-toggle=\"tab\", and the tab id is in the data-value attribute. So here is how:\n$(\"a[data-toggle='tab']\").on(\"shown.bs.tab\", function(e) {\n var tabId = $(e.target).data(\"value\");\n alert(tabId);\n});\n\nThere are other tabs events:\n$(\"a[data-toggle='tab']\")\n .on(\"show.bs.tab\", function() {\n alert(\"New tab will be visible now\");\n}).on(\"shown.bs.tab\", function() {\n alert(\"New tab is visible now\");\n}).on(\"hide.bs.tab\", function() {\n alert(\"Previous tab will be hidden now\");\n}).on(\"hidden.bs.tab\", function() {\n alert(\"Previous tab is hidden now\");\n});\n\n" ]
[ 0 ]
[]
[]
[ "event_tracking", "google_analytics", "matomo", "shinydashboard", "shinyjs" ]
stackoverflow_0074643167_event_tracking_google_analytics_matomo_shinydashboard_shinyjs.txt
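To feed the tracked tab into Matomo specifically, the same event handler can push to the _paq queue. A sketch assuming the standard Matomo tracking snippet is already loaded on the page (so _paq exists), placed in the UI via tags$script(HTML(...)):

$(document).on("shown.bs.tab", "a[data-toggle='tab']", function(e) {
  var tabId = $(e.target).data("value");
  // log the tab as a page view with the tab id as the document title
  _paq.push(['setDocumentTitle', tabId]);
  _paq.push(['trackPageView']);
});

The delegated handler on document also covers tabs that Shiny renders after the initial page load.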
Q: How to mute/unmute sound using pywin32? My searches led me to Pywin32, which should be able to mute/unmute the sound and detect its state (on Windows 10, using Python 3+). I found a way using an AutoHotkey script, but I'm looking for a pythonic way. More specifically, I'm not interested in playing with the Windows GUI; Pywin32 works using a Windows DLL. So far, I am able to do it by calling an ahk script: In the python script: import subprocess subprocess.call([ahkexe, ahkscript]) In the AutoHotkey script: SoundGet, sound_mute, Master, mute if sound_mute = On ; if the sound is muted Send {Volume_Mute} ; press the "mute button" to unmute SoundSet 30 ; set the sound level at 30 A: You can use the Windows Sound Manager by paradoxis (https://github.com/Paradoxis/Windows-Sound-Manager). from sound import Sound Sound.mute() Every call to Sound.mute() will toggle mute on or off. Have a look at the main.py to see how to use the setter and getter methods. A: If you're also building a GUI, wxPython (and I would believe other GUI frameworks) has access to the Windows audio mute "button".
How to mute/unmute sound using pywin32?
My searches led me to Pywin32, which should be able to mute/unmute the sound and detect its state (on Windows 10, using Python 3+). I found a way using an AutoHotkey script, but I'm looking for a pythonic way. More specifically, I'm not interested in playing with the Windows GUI; Pywin32 works using a Windows DLL. So far, I am able to do it by calling an ahk script: In the python script: import subprocess subprocess.call([ahkexe, ahkscript]) In the AutoHotkey script: SoundGet, sound_mute, Master, mute if sound_mute = On ; if the sound is muted Send {Volume_Mute} ; press the "mute button" to unmute SoundSet 30 ; set the sound level at 30
[ "You can use the Windows Sound Manager by paradoxis (https://github.com/Paradoxis/Windows-Sound-Manager). \nfrom sound import Sound\nSound.mute()\n\nEvery call to Sound.mute() will toggle mute on or off. Have a look at the main.py to see how to use the setter and getter methods.\n", "If you're also building a GUI, wxPython (and I would believe other GUI frameworks) have access to the windows audio mute \"button\".\n" ]
[ 2, 0 ]
[]
[]
[ "audio", "python", "python_3.x", "pywin32", "windows" ]
stackoverflow_0055399396_audio_python_python_3.x_pywin32_windows.txt
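Another pythonic route that needs no extra package is to synthesize the mute key with ctypes; a sketch, where VK_VOLUME_MUTE (0xAD) and KEYEVENTF_KEYUP (0x0002) are the documented Win32 constants:

import ctypes

VK_VOLUME_MUTE = 0xAD
KEYEVENTF_KEYUP = 0x0002

def toggle_mute():
    # press and release the virtual "mute" media key
    ctypes.windll.user32.keybd_event(VK_VOLUME_MUTE, 0, 0, 0)
    ctypes.windll.user32.keybd_event(VK_VOLUME_MUTE, 0, KEYEVENTF_KEYUP, 0)

toggle_mute()

Like the Volume_Mute key in the AutoHotkey script, this toggles the mute state rather than setting an absolute on/off value.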
Q: How to get 2 value in option in php here is my database image ->> in the option tag ^ here! I need Cost and ServiceName as the option tag values. How do I set 2 values in a select tag? Please give me an answer; I don't know how to do it. I am doing my college project and I need help. <?php if ($genders=$_GET["gen"]=="Male") { . $query=mysqli_query($con,"select * from tblservices WHERE gender='Male'"); }if ($genders=$_GET["gen"]=="Female") { . $query=mysqli_query($con,"select * from tblservices WHEREgender='Female'"); } while($row=mysqli_fetch_array($query)) { ?> <option value="<?php echo $row['Cost']; ?>" id="price"><?php echo $row['ServiceName'];?> (<?php echo $row['Cost']; ?>₹)</oPHPon> <?php } ?> </select> A: There are several problems in your code snippet, which you can solve if you look in the error log of Apache. The first select tag and the closing option tag are missing, there are redundant points, and a space is missing in the second where clause. If "$_GET["gen"]" retrieves data (which I cannot check) and the column/table names are right, it should work. <select> <?php if ($_GET["gen"] =="Male") { $query=mysqli_query($con,"select * from tblservices WHERE gender='Male'"); }if ($_GET["gen"]=="Female") { $query=mysqli_query($con,"select * from tblservices WHERE gender='Female'"); } while($row=mysqli_fetch_array($query)) { ?> <option value="<?php echo $row['Cost']; ?>" id="price"><?php echo $row['ServiceName'];?> <?php echo $row['Cost']; ?> ₹</option> <?php } ?> </select>
How to get 2 value in option in php
here is my database image ->> in the option tag ^ here! I need Cost and ServiceName as the option tag values. How do I set 2 values in a select tag? Please give me an answer; I don't know how to do it. I am doing my college project and I need help. <?php if ($genders=$_GET["gen"]=="Male") { . $query=mysqli_query($con,"select * from tblservices WHERE gender='Male'"); }if ($genders=$_GET["gen"]=="Female") { . $query=mysqli_query($con,"select * from tblservices WHEREgender='Female'"); } while($row=mysqli_fetch_array($query)) { ?> <option value="<?php echo $row['Cost']; ?>" id="price"><?php echo $row['ServiceName'];?> (<?php echo $row['Cost']; ?>₹)</oPHPon> <?php } ?> </select>
[ "There are several problems in your code snippet which you can solve if you look in the error log of apache.\nThe first select tag and the closing option tag are missing, there are redundant points and space missing in the second where clause.\nIf \"$_GET[\"gen\"]\" retrieves data (what I cannot check) and the column/table names are right it should work.\n<select>\n<?php\n if ($_GET[\"gen\"] ==\"Male\") {\n\n $query=mysqli_query($con,\"select * from tblservices WHERE gender='Male'\");\n }if ($_GET[\"gen\"]==\"Female\") {\n \n $query=mysqli_query($con,\"select * from tblservices WHERE gender='Female'\");\n }\n while($row=mysqli_fetch_array($query))\n {\n ?>\n <option value=\"<?php echo $row['Cost']; ?>\" id=\"price\"><?php echo $row['ServiceName'];?>\n <?php echo $row['Cost']; ?> β‚Ή</option>\n\n <?php }\n?>\n</select>\n\n" ]
[ 0 ]
[]
[]
[ "mysql", "php" ]
stackoverflow_0074664323_mysql_php.txt
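If the goal is literally to carry both Cost and ServiceName in one option, a common pattern is a data attribute beside the value (a sketch; the column names come from the question, the attribute name is an arbitrary choice):

<option value="<?php echo $row['Cost']; ?>" data-service="<?php echo $row['ServiceName']; ?>">
    <?php echo $row['ServiceName']; ?> (<?php echo $row['Cost']; ?>₹)
</option>

On the client, option.dataset.service (plain JS) or $(this).find(':selected').data('service') (jQuery) reads the second value, while .value still returns the cost. Note also that the duplicate id="price" inside the loop should become a class, since ids must be unique in a page.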
Q: notifyOnDataSetChanged() in kotlin I'm studying kotlin for android for a few months and I have one question for a while private fun configRecoverReservations(){ val dialog: AlertDialog? = SpotsDialog.Builder() .setContext(requireContext()) .setMessage(getString(R.string.recovering_reservations)) .setCancelable(false) .build() dialog?.show() adUserRef = FirebaseDatabase.getInstance() .getReference("Reservations") .child(auth.currentUser!!.uid) adUserRef.addValueEventListener(object: ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { reservationList.clear() for (ds: DataSnapshot in snapshot.children){ reservationList.add(ds.getValue(Reservation::class.java)!!) } if(reservationList.isEmpty()) { binding.recyclerMyReservations.hide() binding.textEmptyList.show() } reservationList.reverse() dialog?.dismiss() adapterReservations.notifyDataSetChanged() } override fun onCancelled(error: DatabaseError) { null } }) } How would be the fanciest way to change the notifyOnDataSetChanged() in this situation, since it's not the best solution, as Google says? here It says: ˜Notify any registered observers that the data set has changed. There are two different classes of data change events, item changes and structural changes. Item changes are when a single item has its data updated but no positional changes have occurred. Structural changes are when items are inserted, removed or moved within the data set. This event does not specify what about the data set has changed, forcing any observers to assume that all existing items and structure may no longer be valid. LayoutManagers will be forced to fully rebind and relayout all visible views. RecyclerView will attempt to synthesize visible structural change events for adapters that report that they have stable IDs when this method is used. This can help for the purposes of animation and visual object persistence but individual item views will still need to be rebound and relaid out. If you are writing an adapter it will always be more efficient to use the more specific change events if you can. Rely on notifyDataSetChanged() as a last resort." A: Currently its recommended to use ListAdapter for handling dynmamic data in RecyclerView . It is a wrapper of the existing RecyclerView Adapter. It uses Diff check Util to check whether the content are same. So if the content is same for two items, it will not redraw the view again. It only redraws when no existing drawn views matches the current view data. So it will give the same performance of the RecyclerView Adapter. No need to call notifyDataSetChanged() here in ListAdapter, which is a costlier operation. When there is a change in list , you need to call adapter.submitList(yourList) . You can call this in your addValueEventListener. While creating adapter in activity/fragment, you need to pass your ItemDiff object in adapter constructor. Or you can directly create object while calling ListAdapter constructor, like this. 
class RecyclerListAdapter() : ListAdapter<MyModelClass, MyViewHolder>(ItemDiff()){ //Adapter } Adapter class: class RecyclerListAdapter( itemDiff: ItemDiff ) : ListAdapter<MyModelClass, MyViewHolder>(itemDiff) { override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MyViewHolder { return MyViewHolder( ) } override fun onBindViewHolder(holder: MyViewHolder, position: Int) { } class ItemDiff : DiffUtil.ItemCallback<MyModelClass>() { override fun areItemsTheSame(oldItem: MyModelClass, newItem: MyModelClass): Boolean { return oldItem.uuid == newItem.uuid } override fun areContentsTheSame( oldItem: MyModelClass, newItem: MyModelClass ): Boolean { return oldItem.id == newItem.id && oldItem.name == newItem.name && } } }
notifyOnDataSetChanged() in kotlin
I've been studying Kotlin for Android for a few months and have had one question for a while.

private fun configRecoverReservations(){
    val dialog: AlertDialog? = SpotsDialog.Builder()
        .setContext(requireContext())
        .setMessage(getString(R.string.recovering_reservations))
        .setCancelable(false)
        .build()
    dialog?.show()

    adUserRef = FirebaseDatabase.getInstance()
        .getReference("Reservations")
        .child(auth.currentUser!!.uid)

    adUserRef.addValueEventListener(object: ValueEventListener {
        override fun onDataChange(snapshot: DataSnapshot) {
            reservationList.clear()
            for (ds: DataSnapshot in snapshot.children){
                reservationList.add(ds.getValue(Reservation::class.java)!!)
            }
            if(reservationList.isEmpty()) {
                binding.recyclerMyReservations.hide()
                binding.textEmptyList.show()
            }
            reservationList.reverse()
            dialog?.dismiss()
            adapterReservations.notifyDataSetChanged()
        }

        override fun onCancelled(error: DatabaseError) {
        }
    })
}

What would be the cleanest way to replace the notifyDataSetChanged() in this situation, since it's not the best solution, as Google says? Here it says: "Notify any registered observers that the data set has changed. There are two different classes of data change events, item changes and structural changes. Item changes are when a single item has its data updated but no positional changes have occurred. Structural changes are when items are inserted, removed or moved within the data set. This event does not specify what about the data set has changed, forcing any observers to assume that all existing items and structure may no longer be valid. LayoutManagers will be forced to fully rebind and relayout all visible views. RecyclerView will attempt to synthesize visible structural change events for adapters that report that they have stable IDs when this method is used. This can help for the purposes of animation and visual object persistence but individual item views will still need to be rebound and relaid out. If you are writing an adapter it will always be more efficient to use the more specific change events if you can. Rely on notifyDataSetChanged() as a last resort."
[ "Currently its recommended to use ListAdapter for handling dynmamic data in RecyclerView . It is a wrapper of the existing RecyclerView Adapter. It uses Diff check Util to check whether the content are same. So if the content is same for two items, it will not redraw the view again. It only redraws when no existing drawn views matches the current view data. So it will give the same performance of the RecyclerView Adapter.\nNo need to call notifyDataSetChanged() here in ListAdapter, which is a costlier operation.\nWhen there is a change in list , you need to call adapter.submitList(yourList) . You can call this in your addValueEventListener.\nWhile creating adapter in activity/fragment, you need to pass your ItemDiff object in adapter constructor.\nOr you can directly create object while calling ListAdapter constructor,\nlike this.\n class RecyclerListAdapter() : ListAdapter<MyModelClass, MyViewHolder>(ItemDiff()){\n//Adapter\n}\n\nAdapter class:\nclass RecyclerListAdapter(\n itemDiff: ItemDiff\n) : ListAdapter<MyModelClass, MyViewHolder>(itemDiff) {\n\n override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MyViewHolder {\n return MyViewHolder(\n )\n }\n\n\n override fun onBindViewHolder(holder: MyViewHolder, position: Int) {\n\n }\n\n class ItemDiff : DiffUtil.ItemCallback<MyModelClass>() {\n override fun areItemsTheSame(oldItem: MyModelClass, newItem: MyModelClass): Boolean {\n return oldItem.uuid == newItem.uuid\n }\n\n override fun areContentsTheSame(\n oldItem: MyModelClass,\n newItem: MyModelClass\n ): Boolean {\n return oldItem.id == newItem.id &&\n oldItem.name == newItem.name &&\n }\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_recyclerview", "firebase", "firebase_realtime_database", "kotlin" ]
stackoverflow_0074662710_android_android_recyclerview_firebase_firebase_realtime_database_kotlin.txt
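A sketch of how the Firebase listener from the question would drive a ListAdapter, assuming adapterReservations is now a ListAdapter subclass like the one above; submitList wants a fresh list instance so DiffUtil has something to compare against:

override fun onDataChange(snapshot: DataSnapshot) {
    val reservations = snapshot.children.mapNotNull { it.getValue(Reservation::class.java) }
    if (reservations.isEmpty()) {
        binding.recyclerMyReservations.hide()
        binding.textEmptyList.show()
    }
    // pass a new list; mutating and resubmitting the same instance defeats the diff
    adapterReservations.submitList(reservations.reversed())
    dialog?.dismiss()
}

No notifyDataSetChanged() call is needed; the adapter dispatches the fine-grained change events itself.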
Q: How to export process template and import into another organization using azure devops api? I want to export a process template from one organization and import it into another organization. I am referring to the documentation on exporting/importing process templates. I want to achieve a few things: If the template does not exist, it should be created in the other organization. If the template exists, it should add all new work item types and their settings. If a work item type exists, it should override the existing settings, and new settings should be added for the existing work item type. How can I achieve the points mentioned? A: If you want to export/import Inherited processes, this is not supported. Please refer to the same issue ticket: How to export/import process template in Azure DevOps. And it is mentioned in Upload or download a process template: Important: Uploading and downloading Inherited processes isn't supported. The import and export a Hosted XML process feature is only available for organizations that have been migrated to Azure DevOps Services from Azure DevOps Server. Please refer to Import and export a Hosted XML process.
How to export process template and import into another organization using azure devops api?
I want to export a process template from one organization and import it into another organization. I am referring to the documentation on exporting/importing process templates. I want to achieve a few things: If the template does not exist, it should be created in the other organization. If the template exists, it should add all new work item types and their settings. If a work item type exists, it should override the existing settings, and new settings should be added for the existing work item type. How can I achieve the points mentioned?
[ "If you want to export/import Inherited processes, this is not supported.\nPlease refer the same issue ticket How to export/import process template in Azure DevOps.\nAnd it mentioned in this Upload or download a process template ,\n\nImportant :\nUploading and downloading Inherited processes isn't supported.\n\nImport and export a Hosted XML process feature is only available for organizations that have been migrated to Azure DevOps Services from Azure DevOps Server. Please refer Import and export a Hosted XML process.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_devops", "c#" ]
stackoverflow_0074660539_azure_azure_devops_c#.txt
Q: Does Javascript support any special syntax to support performance testing? I need to track how long many processes take and so my code looks like this: console.log("Doing thing 1"); var startTime = performance.now(); var thing = doThing() times.loadTime = performance.now() - startTime; console.log("Doing thing 2"); startTime = performance.now(); var thing2 = doOtherThing(); var endTime = performance.now(); times.thingTime2 = endTime - startTime; console.log("Doing thing 3"); var thing3 = doThing3(); ties.thingTime3 = performance.now() - endTime; As you can guess, this is getting hard to read. I know that in my current environment I can use things like: console.startTime(); console.startTime("Thing 1"); console.endTime(); And so on. See the console object. But I think I've seen something where javascript code was using blocks or labels like this: thing1 { var thing = doThing() } thing2 { var thing2 = doOtherThing(); } thing3 { var thing3 = doThing3(); } // desired result { name: "thing1", startTime: 0, endTime: 10 } console.log(thing1); Or labels: thing1: var thing = doThing() end These look much more readable to me. Is there a way to do this in JavaScript? A: JavaScript does not have a built-in syntax for performance testing. However, you can use the performance.now() method, as you have done in your code, to measure the time it takes for your code to run. If you want to make your code more readable, you can use a library like console.time to measure the time it takes for your code to run. This library provides methods like console.time and console.timeEnd that you can use to label your code blocks and measure the time it takes for them to run. Here's an example of how you can use these methods: console.time("Doing thing 1"); var thing = doThing() console.timeEnd("Doing thing 1"); console.time("Doing thing 2"); var thing2 = doOtherThing(); console.timeEnd("Doing thing 2"); console.time("Doing thing 3"); var thing3 = doThing3(); console.timeEnd("Doing thing 3"); This will print the time it takes for each code block to run in the console, using the labels you provided. A: There's nothing built-in like that, but something that could help would be to write a function wrapper around performance.now calls. const times = []; const measure = (cb) => { const startTime = performance.now(); const result = cb(); times.push(performance.now() - startTime); return result; }; const thing = measure(() => ({ thing1: 'foo' })); const thing2 = measure(() => { for (let i = 0; i < 1e7; i++); return { thing2: 'foo' }; }); const thing3 = measure(() => ({ thing3: 'foo' })); console.log(times); or const times = []; const measure = (name, cb) => { const startTime = performance.now(); const result = cb(); times.push({ name, time: performance.now() - startTime }); return result; }; const thing = measure('thing', () => ({ thing1: 'foo' })); const thing2 = measure('thing2', () => { for (let i = 0; i < 1e7; i++); return { thing2: 'foo' }; }); const thing3 = measure('thing3', () => ({ thing3: 'foo' })); console.log(times); - A: There are two problems with console.timeEnd - it's overly verbose and it's not easily removable in production builds. 
I prefer to have a shortcut like this in my projects: const T = (name='') => { T.timers = T.timers || [] if (name) T.timers.push([name, performance.now()]) else { let end = performance.now() let [name, start] = T.timers.pop() console.log('TIMER:', name, (end - start).toFixed(3)) } } Then, you can do something like T('step1') for (let i = 0; i < 1e6; i++) { heavy stuff } T() T('step2') for (let i = 0; i < 1e7; i++) { more heavy stuff } T() and so on. For production builds you can easily turn T into a no-op. A: you can use a function like this const useMeasure = (fn) => { const start = performance.now(); const res = fn(); const end = performance.now(); return [res, end - start]; } it returns the result of the function as well as the time it took to complete. you can use it like this:- function count() { let cnt = 0; for (let i = 0; i < 100000; i++) { cnt++; } return cnt; } const [result, timeTaken] = useMeasure(count); console.log(result, timeTaken); if your function takes arguments you can edit it like this so that you can also pass your args:- const useMeasureWithArgs = (fn, ...args) => { const start = performance.now(); const res = fn(...args); const end = performance.now(); return [res, end - start]; } and use it like this:- function countUpTo(x) { let cnt = 0; for (let i = 0; i < x; i++) { cnt++; } return cnt; } const [result, timeTaken] = useMeasureWithArgs(countUpTo, 1000); console.log(result, timeTaken);
Does Javascript support any special syntax to support performance testing?
I need to track how long many processes take, so my code looks like this:

console.log("Doing thing 1");
var startTime = performance.now();
var thing = doThing()
times.loadTime = performance.now() - startTime;

console.log("Doing thing 2");
startTime = performance.now();
var thing2 = doOtherThing();
var endTime = performance.now();
times.thingTime2 = endTime - startTime;

console.log("Doing thing 3");
var thing3 = doThing3();
times.thingTime3 = performance.now() - endTime;

As you can guess, this is getting hard to read. I know that in my current environment I can use things like:

console.startTime();
console.startTime("Thing 1");
console.endTime();

And so on. See the console object. But I think I've seen JavaScript code that was using blocks or labels like this:

thing1 { var thing = doThing() }
thing2 { var thing2 = doOtherThing(); }
thing3 { var thing3 = doThing3(); }

// desired result
{ name: "thing1", startTime: 0, endTime: 10 }
console.log(thing1);

Or labels:

thing1: var thing = doThing() end

These look much more readable to me. Is there a way to do this in JavaScript?
[ "JavaScript does not have a built-in syntax for performance testing. However, you can use the performance.now() method, as you have done in your code, to measure the time it takes for your code to run.\nIf you want to make your code more readable, you can use a library like console.time to measure the time it takes for your code to run. This library provides methods like console.time and console.timeEnd that you can use to label your code blocks and measure the time it takes for them to run. Here's an example of how you can use these methods:\nconsole.time(\"Doing thing 1\");\nvar thing = doThing()\nconsole.timeEnd(\"Doing thing 1\");\n\nconsole.time(\"Doing thing 2\");\nvar thing2 = doOtherThing();\nconsole.timeEnd(\"Doing thing 2\");\n\nconsole.time(\"Doing thing 3\");\nvar thing3 = doThing3();\nconsole.timeEnd(\"Doing thing 3\");\n\nThis will print the time it takes for each code block to run in the console, using the labels you provided.\n", "There's nothing built-in like that, but something that could help would be to write a function wrapper around performance.now calls.\n\n\nconst times = [];\nconst measure = (cb) => {\n const startTime = performance.now();\n const result = cb();\n times.push(performance.now() - startTime);\n return result;\n};\n\nconst thing = measure(() => ({ thing1: 'foo' }));\nconst thing2 = measure(() => {\n for (let i = 0; i < 1e7; i++);\n return { thing2: 'foo' };\n});\nconst thing3 = measure(() => ({ thing3: 'foo' }));\nconsole.log(times);\n\n\n\nor\n\n\nconst times = [];\nconst measure = (name, cb) => {\n const startTime = performance.now();\n const result = cb();\n times.push({ name, time: performance.now() - startTime });\n return result;\n};\n\nconst thing = measure('thing', () => ({ thing1: 'foo' }));\nconst thing2 = measure('thing2', () => {\n for (let i = 0; i < 1e7; i++);\n return { thing2: 'foo' };\n});\nconst thing3 = measure('thing3', () => ({ thing3: 'foo' }));\nconsole.log(times);\n\n\n\n-\n", "There are two problems with console.timeEnd - it's overly verbose and it's not easily removable in production builds. 
I prefer to have a shortcut like this in my projects:\nconst T = (name='') => {\n T.timers = T.timers || []\n\n if (name)\n T.timers.push([name, performance.now()])\n else {\n let end = performance.now()\n let [name, start] = T.timers.pop()\n console.log('TIMER:', name, (end - start).toFixed(3))\n }\n}\n\nThen, you can do something like\nT('step1')\nfor (let i = 0; i < 1e6; i++) {\n heavy stuff\n}\nT()\n\nT('step2')\nfor (let i = 0; i < 1e7; i++) {\n more heavy stuff\n}\nT()\n\nand so on.\nFor production builds you can easily turn T into a no-op.\n", "you can use a function like this\nconst useMeasure = (fn) => {\n const start = performance.now();\n const res = fn();\n const end = performance.now();\n return [res, end - start];\n}\n\nit returns the result of the function as well as the time it took to complete.\nyou can use it like this:-\nfunction count() {\n let cnt = 0;\n for (let i = 0; i < 100000; i++) {\n cnt++;\n }\n return cnt;\n}\n\nconst [result, timeTaken] = useMeasure(count);\n\nconsole.log(result, timeTaken);\n\nif your function takes arguments you can edit it like this so that you can also pass your args:-\nconst useMeasureWithArgs = (fn, ...args) => {\n const start = performance.now();\n const res = fn(...args);\n const end = performance.now();\n return [res, end - start];\n}\n\nand use it like this:-\nfunction countUpTo(x) {\n let cnt = 0;\n for (let i = 0; i < x; i++) {\n cnt++;\n }\n return cnt;\n}\n\nconst [result, timeTaken] = useMeasureWithArgs(countUpTo, 1000);\n\nconsole.log(result, timeTaken);\n\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "ecmascript_6", "javascript", "performance_testing" ]
stackoverflow_0074663374_ecmascript_6_javascript_performance_testing.txt
Q: How to prettyprint a JSON file? How do I pretty-print a JSON file in Python? A: Use the indent= parameter of json.dump() or json.dumps() to specify how many spaces to indent by: >>> import json >>> >>> your_json = '["foo", {"bar": ["baz", null, 1.0, 2]}]' >>> parsed = json.loads(your_json) >>> print(json.dumps(parsed, indent=4)) [ "foo", { "bar": [ "baz", null, 1.0, 2 ] } ] To parse a file, use json.load(): with open('filename.txt', 'r') as handle: parsed = json.load(handle) A: You can do this on the command line: python3 -m json.tool some.json (as already mentioned in the commentaries to the question, thanks to @Kai Petzke for the python3 suggestion). Actually python is not my favourite tool as far as json processing on the command line is concerned. For simple pretty printing is ok, but if you want to manipulate the json it can become overcomplicated. You'd soon need to write a separate script-file, you could end up with maps whose keys are u"some-key" (python unicode), which makes selecting fields more difficult and doesn't really go in the direction of pretty-printing. You can also use jq: jq . some.json and you get colors as a bonus (and way easier extendability). Addendum: There is some confusion in the comments about using jq to process large JSON files on the one hand, and having a very large jq program on the other. For pretty-printing a file consisting of a single large JSON entity, the practical limitation is RAM. For pretty-printing a 2GB file consisting of a single array of real-world data, the "maximum resident set size" required for pretty-printing was 5GB (whether using jq 1.5 or 1.6). Note also that jq can be used from within python after pip install jq. A: You could use the built-in module pprint (https://docs.python.org/3.9/library/pprint.html). How you can read the file with json data and print it out. import json import pprint json_data = None with open('file_name.txt', 'r') as f: data = f.read() json_data = json.loads(data) print(json_data) {"firstName": "John", "lastName": "Smith", "isAlive": "true", "age": 27, "address": {"streetAddress": "21 2nd Street", "city": "New York", "state": "NY", "postalCode": "10021-3100"}, 'children': []} pprint.pprint(json_data) {'address': {'city': 'New York', 'postalCode': '10021-3100', 'state': 'NY', 'streetAddress': '21 2nd Street'}, 'age': 27, 'children': [], 'firstName': 'John', 'isAlive': True, 'lastName': 'Smith'} The output is not a valid json, because pprint use single quotes and json specification require double quotes. If you want to rewrite the pretty print formated json to a file, you have to use pprint.pformat. pretty_print_json = pprint.pformat(json_data).replace("'", '"') with open('file_name.json', 'w') as f: f.write(pretty_print_json) A: Pygmentize + Python json.tool = Pretty Print with Syntax Highlighting Pygmentize is a killer tool. See this. I combine python json.tool with pygmentize echo '{"foo": "bar"}' | python -m json.tool | pygmentize -l json See the link above for pygmentize installation instruction. 
A demo of this is in the image below:
A: Use this function and don't sweat having to remember if your JSON is a str or dict again - just look at the pretty print:
import json

def pp_json(json_thing, sort=True, indents=4):
    if type(json_thing) is str:
        print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))
    else:
        print(json.dumps(json_thing, sort_keys=sort, indent=indents))
    return None

pp_json(your_json_string_or_dict)

A: Use pprint: https://docs.python.org/3.6/library/pprint.html
import pprint
pprint.pprint(json)

print() compared to pprint.pprint():
print(json)
{'feed': {'title': 'W3Schools Home Page', 'title_detail': {'type': 'text/plain', 'language': None, 'base': '', 'value': 'W3Schools Home Page'}, 'links': [{'rel': 'alternate', 'type': 'text/html', 'href': 'https://www.w3schools.com'}], 'link': 'https://www.w3schools.com', 'subtitle': 'Free web building tutorials', 'subtitle_detail': {'type': 'text/html', 'language': None, 'base': '', 'value': 'Free web building tutorials'}}, 'entries': [], 'bozo': 0, 'encoding': 'utf-8', 'version': 'rss20', 'namespaces': {}}

pprint.pprint(json)
{'bozo': 0,
 'encoding': 'utf-8',
 'entries': [],
 'feed': {'link': 'https://www.w3schools.com',
          'links': [{'href': 'https://www.w3schools.com',
                     'rel': 'alternate',
                     'type': 'text/html'}],
          'subtitle': 'Free web building tutorials',
          'subtitle_detail': {'base': '',
                              'language': None,
                              'type': 'text/html',
                              'value': 'Free web building tutorials'},
          'title': 'W3Schools Home Page',
          'title_detail': {'base': '',
                           'language': None,
                           'type': 'text/plain',
                           'value': 'W3Schools Home Page'}},
 'namespaces': {},
 'version': 'rss20'}

A: To pretty print from the command line, with control over the indentation etc., you can set up an alias similar to this:
alias jsonpp="python -c 'import sys, json; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'"

And then use the alias in one of these ways:
cat myfile.json | jsonpp
jsonpp < myfile.json

A: def saveJson(data, fileToSave):
    with open(fileToSave, 'w+') as fileToSave:
        json.dump(data, fileToSave, ensure_ascii=True, indent=4, sort_keys=True)

It works for displaying the JSON or saving it to a file.
A: Here's a simple example of pretty printing JSON to the console in a nice way in Python, without requiring the JSON to be on your computer as a local file:
import pprint
import json
from urllib.request import urlopen  # (Only used to get this example)

# Getting a JSON example for this example
r = urlopen("https://mdn.github.io/fetch-examples/fetch-json/products.json")
text = r.read()

# To print it
pprint.pprint(json.loads(text))

A: You could try pprintjson.

Installation
$ pip3 install pprintjson

Usage
Pretty print JSON from a file using the pprintjson CLI.
$ pprintjson "./path/to/file.json"

Pretty print JSON from stdin using the pprintjson CLI.
$ echo '{ "a": 1, "b": "string", "c": true }' | pprintjson

Pretty print JSON from a string using the pprintjson CLI.
$ pprintjson -c '{ "a": 1, "b": "string", "c": true }'

Pretty print JSON from a string with an indent of 1.
$ pprintjson -c '{ "a": 1, "b": "string", "c": true }' -i 1

Pretty print JSON from a string and save output to a file output.json.
$ pprintjson -c '{ "a": 1, "b": "string", "c": true }' -o ./output.json Output A: I think that's better to parse the json before, to avoid errors: def format_response(response): try: parsed = json.loads(response.text) except JSONDecodeError: return response.text return json.dumps(parsed, ensure_ascii=True, indent=4) A: I had a similar requirement to dump the contents of json file for logging, something quick and easy: print(json.dumps(json.load(open(os.path.join('<myPath>', '<myjson>'), "r")), indent = 4 )) if you use it often then put it in a function: def pp_json_file(path, file): print(json.dumps(json.load(open(os.path.join(path, file), "r")), indent = 4)) A: Hopefully this helps someone else. In the case when there is a error that something is not json serializable the answers above will not work. If you only want to save it so that is human readable then you need to recursively call string on all the non dictionary elements of your dictionary. If you want to load it later then save it as a pickle file then load it (e.g. torch.save(obj, f) works fine). This is what worked for me: #%% def _to_json_dict_with_strings(dictionary): """ Convert dict to dict with leafs only being strings. So it recursively makes keys to strings if they are not dictionaries. Use case: - saving dictionary of tensors (convert the tensors to strins!) - saving arguments from script (e.g. argparse) for it to be pretty e.g. """ if type(dictionary) != dict: return str(dictionary) d = {k: _to_json_dict_with_strings(v) for k, v in dictionary.items()} return d def to_json(dic): import types import argparse if type(dic) is dict: dic = dict(dic) else: dic = dic.__dict__ return _to_json_dict_with_strings(dic) def save_to_json_pretty(dic, path, mode='w', indent=4, sort_keys=True): import json with open(path, mode) as f: json.dump(to_json(dic), f, indent=indent, sort_keys=sort_keys) def my_pprint(dic): """ @param dic: @return: Note: this is not the same as pprint. """ import json # make all keys strings recursively with their naitve str function dic = to_json(dic) # pretty print pretty_dic = json.dumps(dic, indent=4, sort_keys=True) print(pretty_dic) # print(json.dumps(dic, indent=4, sort_keys=True)) # return pretty_dic import torch # import json # results in non serializabe errors for torch.Tensors from pprint import pprint dic = {'x': torch.randn(1, 3), 'rec': {'y': torch.randn(1, 3)}} my_pprint(dic) pprint(dic) output: { "rec": { "y": "tensor([[-0.3137, 0.3138, 1.2894]])" }, "x": "tensor([[-1.5909, 0.0516, -1.5445]])" } {'rec': {'y': tensor([[-0.3137, 0.3138, 1.2894]])}, 'x': tensor([[-1.5909, 0.0516, -1.5445]])} I don't know why returning the string then printing it doesn't work but it seems you have to put the dumps directly in the print statement. Note pprint as it has been suggested already works too. Note not all objects can be converted to a dict with dict(dic) which is why some of my code has checks on this condition. Context: I wanted to save pytorch strings but I kept getting the error: TypeError: tensor is not JSON serializable so I coded the above. Note that yes, in pytorch you use torch.save but pickle files aren't readable. 
Check this related post: https://discuss.pytorch.org/t/typeerror-tensor-is-not-json-serializable/36065/3

pprint also has an indent argument, but I didn't like how it looks:
    pprint(stats, indent=4, sort_dicts=True)

output:
{   'cca': {   'all': {'avg': tensor(0.5132), 'std': tensor(0.1532)},
               'avg': tensor([0.5993, 0.5571, 0.4910, 0.4053]),
               'rep': {'avg': tensor(0.5491), 'std': tensor(0.0743)},
               'std': tensor([0.0316, 0.0368, 0.0910, 0.2490])},
    'cka': {   'all': {'avg': tensor(0.7885), 'std': tensor(0.3449)},
               'avg': tensor([1.0000, 0.9840, 0.9442, 0.2260]),
               'rep': {'avg': tensor(0.9761), 'std': tensor(0.0468)},
               'std': tensor([5.9043e-07, 2.9688e-02, 6.3634e-02, 2.1686e-01])},
    'cosine': {   'all': {'avg': tensor(0.5931), 'std': tensor(0.7158)},
                  'avg': tensor([ 0.9825, 0.9001, 0.7909, -0.3012]),
                  'rep': {'avg': tensor(0.8912), 'std': tensor(0.1571)},
                  'std': tensor([0.0371, 0.1232, 0.1976, 0.9536])},
    'nes': {   'all': {'avg': tensor(0.6771), 'std': tensor(0.2891)},
               'avg': tensor([0.9326, 0.8038, 0.6852, 0.2867]),
               'rep': {'avg': tensor(0.8072), 'std': tensor(0.1596)},
               'std': tensor([0.0695, 0.1266, 0.1578, 0.2339])},
    'nes_output': {   'all': {'avg': None, 'std': None},
                      'avg': tensor(0.2975),
                      'rep': {'avg': None, 'std': None},
                      'std': tensor(0.0945)},
    'query_loss': {   'all': {'avg': None, 'std': None},
                      'avg': tensor(12.3746),
                      'rep': {'avg': None, 'std': None},
                      'std': tensor(13.7910)}}

compare to:
{
    "cca": {
        "all": {
            "avg": "tensor(0.5144)",
            "std": "tensor(0.1553)"
        },
        "avg": "tensor([0.6023, 0.5612, 0.4874, 0.4066])",
        "rep": {
            "avg": "tensor(0.5503)",
            "std": "tensor(0.0796)"
        },
        "std": "tensor([0.0285, 0.0367, 0.1004, 0.2493])"
    },
    "cka": {
        "all": {
            "avg": "tensor(0.7888)",
            "std": "tensor(0.3444)"
        },
        "avg": "tensor([1.0000, 0.9840, 0.9439, 0.2271])",
        "rep": {
            "avg": "tensor(0.9760)",
            "std": "tensor(0.0468)"
        },
        "std": "tensor([5.7627e-07, 2.9689e-02, 6.3541e-02, 2.1684e-01])"
    },
    "cosine": {
        "all": {
            "avg": "tensor(0.5945)",
            "std": "tensor(0.7146)"
        },
        "avg": "tensor([ 0.9825, 0.9001, 0.7907, -0.2953])",
        "rep": {
            "avg": "tensor(0.8911)",
            "std": "tensor(0.1571)"
        },
        "std": "tensor([0.0371, 0.1231, 0.1975, 0.9554])"
    },
    "nes": {
        "all": {
            "avg": "tensor(0.6773)",
            "std": "tensor(0.2886)"
        },
        "avg": "tensor([0.9326, 0.8037, 0.6849, 0.2881])",
        "rep": {
            "avg": "tensor(0.8070)",
            "std": "tensor(0.1595)"
        },
        "std": "tensor([0.0695, 0.1265, 0.1576, 0.2341])"
    },
    "nes_output": {
        "all": {
            "avg": "None",
            "std": "None"
        },
        "avg": "tensor(0.2976)",
        "rep": {
            "avg": "None",
            "std": "None"
        },
        "std": "tensor(0.0945)"
    },
    "query_loss": {
        "all": {
            "avg": "None",
            "std": "None"
        },
        "avg": "tensor(12.3616)",
        "rep": {
            "avg": "None",
            "std": "None"
        },
        "std": "tensor(13.7976)"
    }
}

A: json.loads() converts the json data to a dictionary.
It has some interactive trees and even comes with some code including this collapsing tree from so: Other samples include using plotly Here is the code example from plotly: import plotly.express as px fig = px.treemap( names = ["Eve","Cain", "Seth", "Enos", "Noam", "Abel", "Awan", "Enoch", "Azura"], parents = ["", "Eve", "Eve", "Seth", "Seth", "Eve", "Eve", "Awan", "Eve"] ) fig.update_traces(root_color="lightgrey") fig.update_layout(margin = dict(t=50, l=25, r=25, b=25)) fig.show() And using treelib. On that note, This github also provides nice visualizations. Here is one example using treelib: #%pip install treelib from treelib import Tree country_tree = Tree() # Create a root node country_tree.create_node("Country", "countries") # Group by country for country, regions in wards_df.head(5).groupby(["CTRY17NM", "CTRY17CD"]): # Generate a node for each country country_tree.create_node(country[0], country[1], parent="countries") # Group by region for region, las in regions.groupby(["GOR10NM", "GOR10CD"]): # Generate a node for each region country_tree.create_node(region[0], region[1], parent=country[1]) # Group by local authority for la, wards in las.groupby(['LAD17NM', 'LAD17CD']): # Create a node for each local authority country_tree.create_node(la[0], la[1], parent=region[1]) for ward, _ in wards.groupby(['WD17NM', 'WD17CD']): # Create a leaf node for each ward country_tree.create_node(ward[0], ward[1], parent=la[1]) # Output the hierarchical data country_tree.show() I have, based on this, created a function to convert json to a tree: from treelib import Node, Tree, node def json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'): if tree is None: tree = Tree() root_id = counter_byref[0] if verbose: print(f"tree.create_node({'+'}, {root_id})") tree.create_node('+', root_id) counter_byref[0] += 1 parent_id = root_id if type(o) == dict: for k,v in o.items(): this_id = counter_byref[0] if verbose: print(f"tree.create_node({str(k)}, {this_id}, parent={parent_id})") tree.create_node(str(k), this_id, parent=parent_id) counter_byref[0] += 1 json_2_tree(v , parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol) elif type(o) == list: if listsNodeSymbol is not None: if verbose: print(f"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})") tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id) parent_id=counter_byref[0] counter_byref[0] += 1 for i in o: json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol) else: #node if verbose: print(f"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})") tree.create_node(str(o), counter_byref[0], parent=parent_id) counter_byref[0] += 1 return tree Then for example: import json json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),verbose=False,listsNodeSymbol='+').show() gives the more descriptive: + β”œβ”€β”€ 2 β”‚ └── 3 └── 4 └── + β”œβ”€β”€ 5 └── 6 While json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),listsNodeSymbol=None).show() Gives the more compact + β”œβ”€β”€ 2 β”‚ └── 3 └── 4 β”œβ”€β”€ 5 └── 6 For a more extensive conversion with different flavors of trees, checkout this function
How to prettyprint a JSON file?
How do I pretty-print a JSON file in Python?
[ "Use the indent= parameter of json.dump() or json.dumps() to specify how many spaces to indent by:\n>>> import json\n>>>\n>>> your_json = '[\"foo\", {\"bar\": [\"baz\", null, 1.0, 2]}]'\n>>> parsed = json.loads(your_json)\n>>> print(json.dumps(parsed, indent=4))\n[\n \"foo\",\n {\n \"bar\": [\n \"baz\",\n null,\n 1.0,\n 2\n ]\n }\n]\n\nTo parse a file, use json.load():\nwith open('filename.txt', 'r') as handle:\n parsed = json.load(handle)\n\n", "You can do this on the command line:\npython3 -m json.tool some.json\n\n(as already mentioned in the commentaries to the question, thanks to @Kai Petzke for the python3 suggestion).\nActually python is not my favourite tool as far as json processing on the command line is concerned. For simple pretty printing is ok, but if you want to manipulate the json it can become overcomplicated. You'd soon need to write a separate script-file, you could end up with maps whose keys are u\"some-key\" (python unicode), which makes selecting fields more difficult and doesn't really go in the direction of pretty-printing.\nYou can also use jq:\njq . some.json\n\nand you get colors as a bonus (and way easier extendability).\nAddendum: There is some confusion in the comments about using jq to process large JSON files on the one hand, and having a very large jq program on the other. For pretty-printing a file consisting of a single large JSON entity, the practical limitation is RAM. For pretty-printing a 2GB file consisting of a single array of real-world data, the \"maximum resident set size\" required for pretty-printing was 5GB (whether using jq 1.5 or 1.6). Note also that jq can be used from within python after pip install jq.\n", "You could use the built-in module pprint (https://docs.python.org/3.9/library/pprint.html).\nHow you can read the file with json data and print it out.\nimport json\nimport pprint\n\njson_data = None\nwith open('file_name.txt', 'r') as f:\n data = f.read()\n json_data = json.loads(data)\n\nprint(json_data)\n{\"firstName\": \"John\", \"lastName\": \"Smith\", \"isAlive\": \"true\", \"age\": 27, \"address\": {\"streetAddress\": \"21 2nd Street\", \"city\": \"New York\", \"state\": \"NY\", \"postalCode\": \"10021-3100\"}, 'children': []}\n\npprint.pprint(json_data)\n{'address': {'city': 'New York',\n 'postalCode': '10021-3100',\n 'state': 'NY',\n 'streetAddress': '21 2nd Street'},\n 'age': 27,\n 'children': [],\n 'firstName': 'John',\n 'isAlive': True,\n 'lastName': 'Smith'}\n\nThe output is not a valid json, because pprint use single quotes and json specification require double quotes.\nIf you want to rewrite the pretty print formated json to a file, you have to use pprint.pformat.\npretty_print_json = pprint.pformat(json_data).replace(\"'\", '\"')\n\nwith open('file_name.json', 'w') as f:\n f.write(pretty_print_json)\n\n", "Pygmentize + Python json.tool = Pretty Print with Syntax Highlighting\nPygmentize is a killer tool. 
See this.\nI combine python json.tool with pygmentize\necho '{\"foo\": \"bar\"}' | python -m json.tool | pygmentize -l json\n\nSee the link above for pygmentize installation instruction.\nA demo of this is in the image below:\n\n", "Use this function and don't sweat having to remember if your JSON is a str or dict again - just look at the pretty print:\nimport json\n\ndef pp_json(json_thing, sort=True, indents=4):\n if type(json_thing) is str:\n print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))\n else:\n print(json.dumps(json_thing, sort_keys=sort, indent=indents))\n return None\n\npp_json(your_json_string_or_dict)\n\n", "Use pprint: https://docs.python.org/3.6/library/pprint.html\nimport pprint\npprint.pprint(json)\n\nprint() compared to pprint.pprint()\nprint(json)\n{'feed': {'title': 'W3Schools Home Page', 'title_detail': {'type': 'text/plain', 'language': None, 'base': '', 'value': 'W3Schools Home Page'}, 'links': [{'rel': 'alternate', 'type': 'text/html', 'href': 'https://www.w3schools.com'}], 'link': 'https://www.w3schools.com', 'subtitle': 'Free web building tutorials', 'subtitle_detail': {'type': 'text/html', 'language': None, 'base': '', 'value': 'Free web building tutorials'}}, 'entries': [], 'bozo': 0, 'encoding': 'utf-8', 'version': 'rss20', 'namespaces': {}}\n\npprint.pprint(json)\n{'bozo': 0,\n 'encoding': 'utf-8',\n 'entries': [],\n 'feed': {'link': 'https://www.w3schools.com',\n 'links': [{'href': 'https://www.w3schools.com',\n 'rel': 'alternate',\n 'type': 'text/html'}],\n 'subtitle': 'Free web building tutorials',\n 'subtitle_detail': {'base': '',\n 'language': None,\n 'type': 'text/html',\n 'value': 'Free web building tutorials'},\n 'title': 'W3Schools Home Page',\n 'title_detail': {'base': '',\n 'language': None,\n 'type': 'text/plain',\n 'value': 'W3Schools Home Page'}},\n 'namespaces': {},\n 'version': 'rss20'}\n\n", "To be able to pretty print from the command line and be able to have control over the indentation etc. 
you can set up an alias similar to this:\nalias jsonpp=\"python -c 'import sys, json; print json.dumps(json.load(sys.stdin), sort_keys=True, indent=2)'\"\n\nAnd then use the alias in one of these ways:\ncat myfile.json | jsonpp\njsonpp < myfile.json\n\n", "def saveJson(date,fileToSave):\n with open(fileToSave, 'w+') as fileToSave:\n json.dump(date, fileToSave, ensure_ascii=True, indent=4, sort_keys=True)\n\nIt works to display or save it to a file.\n", "Here's a simple example of pretty printing JSON to the console in a nice way in Python, without requiring the JSON to be on your computer as a local file: \nimport pprint\nimport json \nfrom urllib.request import urlopen # (Only used to get this example)\n\n# Getting a JSON example for this example \nr = urlopen(\"https://mdn.github.io/fetch-examples/fetch-json/products.json\")\ntext = r.read() \n\n# To print it\npprint.pprint(json.loads(text))\n\n", "You could try pprintjson.\n\nInstallation\n$ pip3 install pprintjson\n\nUsage\nPretty print JSON from a file using the pprintjson CLI.\n$ pprintjson \"./path/to/file.json\"\n\nPretty print JSON from a stdin using the pprintjson CLI.\n$ echo '{ \"a\": 1, \"b\": \"string\", \"c\": true }' | pprintjson\n\nPretty print JSON from a string using the pprintjson CLI.\n$ pprintjson -c '{ \"a\": 1, \"b\": \"string\", \"c\": true }'\n\nPretty print JSON from a string with an indent of 1.\n$ pprintjson -c '{ \"a\": 1, \"b\": \"string\", \"c\": true }' -i 1\n\nPretty print JSON from a string and save output to a file output.json.\n$ pprintjson -c '{ \"a\": 1, \"b\": \"string\", \"c\": true }' -o ./output.json\n\nOutput\n\n", "I think that's better to parse the json before, to avoid errors:\ndef format_response(response):\n try:\n parsed = json.loads(response.text)\n except JSONDecodeError:\n return response.text\n return json.dumps(parsed, ensure_ascii=True, indent=4)\n\n", "I had a similar requirement to dump the contents of json file for logging, something quick and easy:\nprint(json.dumps(json.load(open(os.path.join('<myPath>', '<myjson>'), \"r\")), indent = 4 ))\n\nif you use it often then put it in a function:\ndef pp_json_file(path, file):\n print(json.dumps(json.load(open(os.path.join(path, file), \"r\")), indent = 4))\n\n", "Hopefully this helps someone else.\nIn the case when there is a error that something is not json serializable the answers above will not work. If you only want to save it so that is human readable then you need to recursively call string on all the non dictionary elements of your dictionary. If you want to load it later then save it as a pickle file then load it (e.g. torch.save(obj, f) works fine).\nThis is what worked for me:\n#%%\n\ndef _to_json_dict_with_strings(dictionary):\n \"\"\"\n Convert dict to dict with leafs only being strings. So it recursively makes keys to strings\n if they are not dictionaries.\n\n Use case:\n - saving dictionary of tensors (convert the tensors to strins!)\n - saving arguments from script (e.g. 
argparse) for it to be pretty\n\n e.g.\n\n \"\"\"\n if type(dictionary) != dict:\n return str(dictionary)\n d = {k: _to_json_dict_with_strings(v) for k, v in dictionary.items()}\n return d\n\ndef to_json(dic):\n import types\n import argparse\n\n if type(dic) is dict:\n dic = dict(dic)\n else:\n dic = dic.__dict__\n return _to_json_dict_with_strings(dic)\n\ndef save_to_json_pretty(dic, path, mode='w', indent=4, sort_keys=True):\n import json\n\n with open(path, mode) as f:\n json.dump(to_json(dic), f, indent=indent, sort_keys=sort_keys)\n\ndef my_pprint(dic):\n \"\"\"\n\n @param dic:\n @return:\n\n Note: this is not the same as pprint.\n \"\"\"\n import json\n\n # make all keys strings recursively with their naitve str function\n dic = to_json(dic)\n # pretty print\n pretty_dic = json.dumps(dic, indent=4, sort_keys=True)\n print(pretty_dic)\n # print(json.dumps(dic, indent=4, sort_keys=True))\n # return pretty_dic\n\nimport torch\n# import json # results in non serializabe errors for torch.Tensors\nfrom pprint import pprint\n\ndic = {'x': torch.randn(1, 3), 'rec': {'y': torch.randn(1, 3)}}\n\nmy_pprint(dic)\npprint(dic)\n\noutput:\n{\n \"rec\": {\n \"y\": \"tensor([[-0.3137, 0.3138, 1.2894]])\"\n },\n \"x\": \"tensor([[-1.5909, 0.0516, -1.5445]])\"\n}\n{'rec': {'y': tensor([[-0.3137, 0.3138, 1.2894]])},\n 'x': tensor([[-1.5909, 0.0516, -1.5445]])}\n\nI don't know why returning the string then printing it doesn't work but it seems you have to put the dumps directly in the print statement. Note pprint as it has been suggested already works too. Note not all objects can be converted to a dict with dict(dic) which is why some of my code has checks on this condition.\nContext:\nI wanted to save pytorch strings but I kept getting the error:\nTypeError: tensor is not JSON serializable\n\nso I coded the above. Note that yes, in pytorch you use torch.save but pickle files aren't readable. 
Check this related post: https://discuss.pytorch.org/t/typeerror-tensor-is-not-json-serializable/36065/3\n\nPPrint also has indent arguments but I didn't like how it looks:\n pprint(stats, indent=4, sort_dicts=True)\n\noutput:\n{ 'cca': { 'all': {'avg': tensor(0.5132), 'std': tensor(0.1532)},\n 'avg': tensor([0.5993, 0.5571, 0.4910, 0.4053]),\n 'rep': {'avg': tensor(0.5491), 'std': tensor(0.0743)},\n 'std': tensor([0.0316, 0.0368, 0.0910, 0.2490])},\n 'cka': { 'all': {'avg': tensor(0.7885), 'std': tensor(0.3449)},\n 'avg': tensor([1.0000, 0.9840, 0.9442, 0.2260]),\n 'rep': {'avg': tensor(0.9761), 'std': tensor(0.0468)},\n 'std': tensor([5.9043e-07, 2.9688e-02, 6.3634e-02, 2.1686e-01])},\n 'cosine': { 'all': {'avg': tensor(0.5931), 'std': tensor(0.7158)},\n 'avg': tensor([ 0.9825, 0.9001, 0.7909, -0.3012]),\n 'rep': {'avg': tensor(0.8912), 'std': tensor(0.1571)},\n 'std': tensor([0.0371, 0.1232, 0.1976, 0.9536])},\n 'nes': { 'all': {'avg': tensor(0.6771), 'std': tensor(0.2891)},\n 'avg': tensor([0.9326, 0.8038, 0.6852, 0.2867]),\n 'rep': {'avg': tensor(0.8072), 'std': tensor(0.1596)},\n 'std': tensor([0.0695, 0.1266, 0.1578, 0.2339])},\n 'nes_output': { 'all': {'avg': None, 'std': None},\n 'avg': tensor(0.2975),\n 'rep': {'avg': None, 'std': None},\n 'std': tensor(0.0945)},\n 'query_loss': { 'all': {'avg': None, 'std': None},\n 'avg': tensor(12.3746),\n 'rep': {'avg': None, 'std': None},\n 'std': tensor(13.7910)}}\n\ncompare to:\n{\n \"cca\": {\n \"all\": {\n \"avg\": \"tensor(0.5144)\",\n \"std\": \"tensor(0.1553)\"\n },\n \"avg\": \"tensor([0.6023, 0.5612, 0.4874, 0.4066])\",\n \"rep\": {\n \"avg\": \"tensor(0.5503)\",\n \"std\": \"tensor(0.0796)\"\n },\n \"std\": \"tensor([0.0285, 0.0367, 0.1004, 0.2493])\"\n },\n \"cka\": {\n \"all\": {\n \"avg\": \"tensor(0.7888)\",\n \"std\": \"tensor(0.3444)\"\n },\n \"avg\": \"tensor([1.0000, 0.9840, 0.9439, 0.2271])\",\n \"rep\": {\n \"avg\": \"tensor(0.9760)\",\n \"std\": \"tensor(0.0468)\"\n },\n \"std\": \"tensor([5.7627e-07, 2.9689e-02, 6.3541e-02, 2.1684e-01])\"\n },\n \"cosine\": {\n \"all\": {\n \"avg\": \"tensor(0.5945)\",\n \"std\": \"tensor(0.7146)\"\n },\n \"avg\": \"tensor([ 0.9825, 0.9001, 0.7907, -0.2953])\",\n \"rep\": {\n \"avg\": \"tensor(0.8911)\",\n \"std\": \"tensor(0.1571)\"\n },\n \"std\": \"tensor([0.0371, 0.1231, 0.1975, 0.9554])\"\n },\n \"nes\": {\n \"all\": {\n \"avg\": \"tensor(0.6773)\",\n \"std\": \"tensor(0.2886)\"\n },\n \"avg\": \"tensor([0.9326, 0.8037, 0.6849, 0.2881])\",\n \"rep\": {\n \"avg\": \"tensor(0.8070)\",\n \"std\": \"tensor(0.1595)\"\n },\n \"std\": \"tensor([0.0695, 0.1265, 0.1576, 0.2341])\"\n },\n \"nes_output\": {\n \"all\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"avg\": \"tensor(0.2976)\",\n \"rep\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"std\": \"tensor(0.0945)\"\n },\n \"query_loss\": {\n \"all\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"avg\": \"tensor(12.3616)\",\n \"rep\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"std\": \"tensor(13.7976)\"\n }\n}\n\n", "json.loads() converts the json data to dictionary. 
Finally, use json.dumps() to prettyprint the json.\n_json = '{\"name\":\"John\", \"age\":30, \"car\":null}'\n\ndata = json.loads(_json)\n\nprint (json.dumps(data, indent=2))\n\n", "For most uses, indent should do it:\nprint(json.dumps(parsed, indent=2))\n\nA Json structure is basically tree structure.\nWhile trying to find something fancier, I came across this nice paper depicting other forms of nice trees that might be interesting: https://blog.ouseful.info/2021/07/13/exploring-the-hierarchical-structure-of-dataframes-and-csv-data/.\nIt has some interactive trees and even comes with some code including this collapsing tree from so:\n\nOther samples include using plotly Here is the code example from plotly:\nimport plotly.express as px\nfig = px.treemap(\n names = [\"Eve\",\"Cain\", \"Seth\", \"Enos\", \"Noam\", \"Abel\", \"Awan\", \"Enoch\", \"Azura\"],\n parents = [\"\", \"Eve\", \"Eve\", \"Seth\", \"Seth\", \"Eve\", \"Eve\", \"Awan\", \"Eve\"]\n)\nfig.update_traces(root_color=\"lightgrey\")\nfig.update_layout(margin = dict(t=50, l=25, r=25, b=25))\nfig.show()\n\n\n\nAnd using treelib. On that note, This github also provides nice visualizations. Here is one example using treelib:\n#%pip install treelib\nfrom treelib import Tree\n\ncountry_tree = Tree()\n# Create a root node\ncountry_tree.create_node(\"Country\", \"countries\")\n\n# Group by country\nfor country, regions in wards_df.head(5).groupby([\"CTRY17NM\", \"CTRY17CD\"]):\n # Generate a node for each country\n country_tree.create_node(country[0], country[1], parent=\"countries\")\n # Group by region\n for region, las in regions.groupby([\"GOR10NM\", \"GOR10CD\"]):\n # Generate a node for each region\n country_tree.create_node(region[0], region[1], parent=country[1])\n # Group by local authority\n for la, wards in las.groupby(['LAD17NM', 'LAD17CD']):\n # Create a node for each local authority\n country_tree.create_node(la[0], la[1], parent=region[1])\n for ward, _ in wards.groupby(['WD17NM', 'WD17CD']):\n # Create a leaf node for each ward\n country_tree.create_node(ward[0], ward[1], parent=la[1])\n\n# Output the hierarchical data\ncountry_tree.show()\n\n\nI have, based on this, created a function to convert json to a tree:\nfrom treelib import Node, Tree, node\ndef json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'):\n if tree is None:\n tree = Tree()\n root_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({'+'}, {root_id})\")\n tree.create_node('+', root_id)\n counter_byref[0] += 1\n parent_id = root_id\n if type(o) == dict:\n for k,v in o.items():\n this_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({str(k)}, {this_id}, parent={parent_id})\")\n tree.create_node(str(k), this_id, parent=parent_id)\n counter_byref[0] += 1\n json_2_tree(v , parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol)\n elif type(o) == list:\n if listsNodeSymbol is not None:\n if verbose:\n print(f\"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id)\n parent_id=counter_byref[0]\n counter_byref[0] += 1 \n for i in o:\n json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol)\n else: #node\n if verbose:\n print(f\"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(str(o), counter_byref[0], parent=parent_id)\n counter_byref[0] 
+= 1\n return tree\n\nThen for example:\nimport json\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 6]}'),verbose=False,listsNodeSymbol='+').show() \n\ngives the more descriptive:\n+\nβ”œβ”€β”€ 2\nβ”‚ └── 3\n└── 4\n └── +\n β”œβ”€β”€ 5\n └── 6\n\nWhile\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 6]}'),listsNodeSymbol=None).show() \n\nGives the more compact\n+\nβ”œβ”€β”€ 2\nβ”‚ └── 3\n└── 4\n β”œβ”€β”€ 5\n └── 6\n\nFor a more extensive conversion with different flavors of trees, checkout this function\n" ]
[ 2665, 446, 120, 61, 45, 23, 19, 9, 8, 7, 3, 3, 0, 0, 0 ]
[ "It's far from perfect, but it does the job.\ndata = data.replace(',\"',',\\n\"')\n\nyou can improve it, add indenting and so on, but if you just want to be able to read a cleaner json, this is the way to go.\n" ]
[ -8 ]
[ "formatting", "json", "pretty_print", "python" ]
stackoverflow_0012943819_formatting_json_pretty_print_python.txt
Q: Next.js hydration error while mapping an icon
I'm getting a hydration error in Next.js: in the div where I used the star icon, I'm getting a hydration error. I read the docs and tried using useEffect, but it didn't work.
A: Because you have a random initial state value const [rating] = useState(Math.floor(Math.random() * 5) + 1) that you then use in your map to display the icons, the static HTML you generate on the server sometimes has a different number of StarIcons than what the browser expects during React hydration, because the state initialization happens both during SSR and client-side execution/hydration.
If you want to use server-side rendering and have a random initial value for rating for each user request, you'd have to use SSR instead of SSG and generate the random value for rating inside getServerSideProps and pass it down to the page component so that it stays the same for server and client.
An easier (and better imo) way to fix this without SSR would be to not statically pre-render the star icons and just let them be client-side rendered, like this:
export default function Cards(props: any) {
  const [rating] = useState(Math.floor(Math.random() * 5) + 1);
  const [hasPrime] = useState(Math.random() < 0.5);

  const [hasMounted, setHasMounted] = useState(false); // <-- add this

  useEffect(() => {
    setHasMounted(true); // <-- toggle on client-side, because useEffect doesn't run on server-side/during SSG build
  }, []);

  return (
    <div className="z-30 [...]">
      {/* [...] your other JSX elements */}

      {/* Only render the StarIcons on client-side, as hasMounted will always be false on server-side */}
      {hasMounted && (
        <div className="flex">
          {Array(rating)
            .fill(rating)
            .map((_, i) => (
              <StarIcon key={i} className="h-6" />
            ))}
        </div>
      )}
    </div>
  );
}
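For completeness, here is a minimal sketch of the getServerSideProps route mentioned above. This is an editorial illustration, not the answerer's code: the StarIcon import source and the prop wiring are assumptions.
// Sketch: compute the random rating once per request on the server and pass it
// down as a prop, so the SSR HTML and the hydrated client markup always agree.
import { StarIcon } from "@heroicons/react/solid"; // assumed icon source

export async function getServerSideProps() {
  const rating = Math.floor(Math.random() * 5) + 1;
  return { props: { rating } };
}

export default function Cards({ rating }) {
  return (
    <div className="flex">
      {Array(rating)
        .fill(null)
        .map((_, i) => (
          <StarIcon key={i} className="h-6" />
        ))}
    </div>
  );
}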
Next.js hydration error while mapping an icon
i'm getting a hydration error in next js here in the div where I used the star icon, I'm getting a hydration error I read the documents using useEffect but didn't work,
[ "Because you have a random initial state value const [rating] = useState(Math.floor(Math.random() * 5) + 1) that you then use in your map to display the icons, the static HTML you generate on the server sometimes has a different amount of StarIcons then what the browser expects during React hydration because the state-initialization happens both during SSR and client-side execution/hydration.\nIf you want to use server-side rendering and have a random initial value for rating for each user request, you'd have to use SSR instead of SSG and generate the random value for rating inside getServerSideProps and pass it down to the page-component so that it stays the same for server and client.\nAn easier (and better imo) way to fix this without SSR would be not statically pre-rendering the star icons and just let them be client-side rendered like this:\nexport default function Cards(props: any) {\n const [rating] = useState(Math.floor(Math.random() * 5) + 1);\n const [hasPrime] = useState(Math.random() < 0.5);\n\n const [hasMounted, setHasMounted] = useState(false); // <-- add this\n\n useEffect(() => {\n setHasMounted(true); // <-- toggle on client-side, because useEffect doesn't run on server-side/during SSG build\n }, []);\n\n return (\n <div className=\"z-30 [...]\">\n {/* [...] your other JSX elements */}\n\n {/* Only render the StarIcons on client-side, as hasMounted will always be false on server-side */}\n {hasMounted && (\n <div className=\"flex\">\n {Array(rating)\n .fill(rating)\n .map((_, i) => (\n <StarIcon className=\"h-6\" />\n ))}\n </div>\n )}\n </div>\n );\n}\n\n" ]
[ 1 ]
[]
[]
[ "next.js" ]
stackoverflow_0074664368_next.js.txt
Q: Tailwind - Override default transition duration
Default is 150ms, was looking to extend this to 250ms as an application default. Tried everything I could think of, last attempt being transitionDuration: { DEFAULT: '250ms' }, in tailwind.config.js under theme, theme.extend, variants, and variants.extend. Any help would be appreciated!
A: theme: {
  transitionDuration: {
    DEFAULT: '250ms'
  }
}

in your tailwind.config.js should work.
Demo: https://play.tailwindcss.com/zSWIMghZQf
(Default transition duration overwritten to 2000ms.)
For reference: With npx tailwindcss init --full you can create a configuration file that contains all tailwind default values.
A: Just like ptts answered before me, inserting an override in the theme section of your tailwind.config.js will do the trick.
What I'd like to add is that the new value for transitionDuration will completely replace Tailwind’s default configuration for that key, and the initial transition duration utilities will not be generated. So if you only define DEFAULT under the transitionDuration key, your classes like .duration-500 will not work.
The solution is to define the full set of duration values you might use. For your convenience, here's the full set from the default theme:
transitionDuration: {
  DEFAULT: '150ms',
  75: '75ms',
  100: '100ms',
  150: '150ms',
  200: '200ms',
  300: '300ms',
  500: '500ms',
  700: '700ms',
  1000: '1000ms',
},

If you want to update the DEFAULT while still retaining all of the other durations, use something like:
transitionDuration: {
  DEFAULT: '500ms',
  75: '75ms',
  100: '100ms',
  150: '150ms',
  200: '200ms',
  300: '300ms',
  500: '500ms',
  700: '700ms',
  1000: '1000ms',
},
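As a usage note (an editorial sketch, not from the thread): the DEFAULT key is what the bare transition utility picks up, so after the override above an element animates over the new duration without any duration-* class. The markup below assumes Tailwind v3.
<!-- Sketch: with DEFAULT set to 250ms, this hover transition runs for 250ms
     even though no duration-* utility is present. -->
<button class="transition hover:scale-105">Hover me</button>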
Tailwind - Override default transition duration
Default is 150ms, was looking to extend this to 250ms as an application default. Tried everything I could think of, last attempt being transitionDuration: { DEFAULT: '250ms' }, in tailwind.config.js under theme, theme.extend, variants, and variants.extend. Any help would be appreciated!
[ "theme: {\n transitionDuration: {\n DEFAULT: '250ms'\n }\n}\n\nin your tailwind.config.js should work.\nDemo: https://play.tailwindcss.com/zSWIMghZQf\n(Default transition duration overwritten to 2000ms.)\nFor reference: With npx tailwindcss init --full you can create a configuration file that contains all tailwind default values.\n", "Just like ptts answered before me, inserting an override in the theme section of your tailwind.config.js will do the trick.\nWhat I'd like to add is that the new value for transitionDuration will completely replace Tailwind’s default configuration for that key, and the initial transition duration utilities will not be generated. So if you only define DEFAULT under the transitionDuration key, your classes like .duration-500 will not work.\nThe solution is to define the full set of duration values you might use. For your convenience, here's the full set from the default theme:\ntransitionDuration: {\n DEFAULT: '150ms',\n 75: '75ms',\n 100: '100ms',\n 150: '150ms',\n 200: '200ms',\n 300: '300ms',\n 500: '500ms',\n 700: '700ms',\n 1000: '1000ms',\n},\n\nIf you want to update the DEFAULT while still retaining all of the other durations, use something like:\ntransitionDuration: {\n DEFAULT: '500ms',\n 75: '75ms',\n 100: '100ms',\n 150: '150ms',\n 200: '200ms',\n 300: '300ms',\n 500: '500ms',\n 700: '700ms',\n 1000: '1000ms',\n},\n\n" ]
[ 9, 0 ]
[]
[]
[ "tailwind_css" ]
stackoverflow_0067063939_tailwind_css.txt
Q: undefined method `ancestors' on PUT with rspec test
I'm testing with rspec, factory_girl and capybara. The project uses devise; I have the following method to log in inside the specs:
def login_admin
  before(:each) do
    @request.env["devise.mapping"] = Devise.mappings[:admin]
    sign_in FactoryGirl.create(:admin)
  end
end

def login_user
  before(:each) do
    @request.env["devise.mapping"] = Devise.mappings[:user]
    sign_in FactoryGirl.create(:user)
  end
end

Then I perform the tests on companies_controller_spec:
require 'spec_helper'

describe CompaniesController, :type => :controller do
  let(:valid_attributes) { { "email" => Faker::Internet.email } }

  login_admin

  describe "GET show" do
    it "assigns the requested company as @company" do
      company = FactoryGirl.create(:company)
      get :show, {:id => company.to_param}
      expect(assigns(:company)).to eq(company)
    end
  end

  describe "GET edit" do
    it "assigns the requested company as @company" do
      company = FactoryGirl.create(:company)
      get :edit, {:id => company.to_param}
      expect(assigns(:company)).to eq(company)
    end
  end

  describe "PUT update" do
    describe "with valid params" do
      it "updates the requested company" do
        company = FactoryGirl.create(:company)
        expect_any_instance_of(company).to receive(:update).with({ "email" => "[email protected]" })
        put :update, {:id => company.to_param, :company => { "email" => "[email protected]" }}
      end
    end
  end

But I keep getting these two errors:
NoMethodError:
  undefined method `ancestors' for #<Company:0x000000059b41f0>
# ./spec/controllers/companies_controller_spec.rb:34:in `block (4 levels) in <top (required)>'

line 34: expect_any_instance_of(company).to receive(:update).with({ "email" => "[email protected]" })
and
expected: #<Company id: 86...
got: nil
# ./spec/controllers/companies_controller_spec.rb:41:in `block (4 levels) in <top (required)>'

line 41: expect(assigns(:company)).to eq(company)
This is my factory for companies:
FactoryGirl.define do
  factory :company do
    name { Faker::Name.name }
    plan_id {}
    phone { Faker::PhoneNumber.phone_number }
    email { Faker::Internet.email }
    facebook { Faker::Internet.url('facebook.com') }
    twitter { Faker::Internet.url('twitter.com') }
    linkedin { Faker::Internet.url('linkedin.com') }
    web { Faker::Internet.url }
  end
end

A: Just did this myself, accidentally called expect_any_instance_of on an actual instance, instead of on the class itself.
Old question, but for others finding this question, looks like OP should be using Company (uppercase class name) instead of company (lowercase reference to an instance).
expect_any_instance_of(Company).to receive(:update).with({ "email" => "[email protected]" })
To explain the error in a little more detail, ancestors is trying to access all of the classes from which Company inherits. A good example of ancestors/descendants is on this other SO question.
A: Since you have company = FactoryGirl.create(:company), isn't company an instance on its own?
How about Company.any_instance.expects(:update).with({ "email" => "[email protected]" })?
A: This is because we have to expect on the actual model/class -> here it is Company, instead of expecting on an instance (company).
So the expected line should be
expect_any_instance_of(Company) instead of expect_any_instance_of(company)
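To make the error message itself less mysterious, here is a small editorial snippet, not from the thread: the any-instance machinery calls ancestors on its argument, and ancestors exists on classes and modules but not on instances, which is exactly what the NoMethodError reports.
# Sketch: `ancestors` is defined on classes/modules, not on instances.
class Company; end

Company.ancestors                     # => [Company, Object, Kernel, BasicObject]
Company.new.respond_to?(:ancestors)   # => false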
undefined method `ancestors' on PUT with rspec test
i'm testing with rspec, factory_girl and capybara. The project uses devise, i have the following method to login inside the specs: def login_admin before(:each) do @request.env["devise.mapping"] = Devise.mappings[:admin] sign_in FactoryGirl.create(:admin) end end def login_user before(:each) do @request.env["devise.mapping"] = Devise.mappings[:user] sign_in FactoryGirl.create(:user) end end Then i perform the tests on companies_controller_spec: require 'spec_helper' describe CompaniesController, :type => :controller do let(:valid_attributes) { { "email" => Faker::Internet.email } } login_admin describe "GET show" do it "assigns the requested company as @company" do company = FactoryGirl.create(:company) get :show, {:id => company.to_param} expect(assigns(:company)).to eq(company) end end describe "GET edit" do it "assigns the requested company as @company" do company = FactoryGirl.create(:company) get :edit, {:id => company.to_param} expect(assigns(:company)).to eq(company) end end describe "PUT update" do describe "with valid params" do it "updates the requested company" do company = FactoryGirl.create(:company) expect_any_instance_of(company).to receive(:update).with({ "email" => "[email protected]" }) put :update, {:id => company.to_param, :company => { "email" => "[email protected]" }} end end end But i keep getting this two errors: NoMethodError: undefined method `ancestors' for #<Company:0x000000059b41f0> # ./spec/controllers/companies_controller_spec.rb:34:in `block (4 levels) in <top (required)>' line 34: expect_any_instance_of(company).to receive(:update).with({ "email" => "[email protected]" }) and expected: #<Company id: 86... got: nil # ./spec/controllers/companies_controller_spec.rb:41:in `block (4 levels) in <top (required)>' line 41: expect(assigns(:company)).to eq(company) This is my factory for companies: FactoryGirl.define do factory :company do name { Faker::Name.name } plan_id {} phone { Faker::PhoneNumber.phone_number } email { Faker::Internet.email } facebook { Faker::Internet.url('facebook.com') } twitter { Faker::Internet.url('twitter.com') } linkedin { Faker::Internet.url('linkedin.com') } web { Faker::Internet.url } end end
[ "Just did this myself, accidentally called expect_any_instance_of on an actual instance, instead of on the class itself.\nOld question, but for others finding this question, looks like OP should be using Company (uppercase class name) instead of company (lowercase reference to an instance).\nexpect_any_instance_of(Company).to receive(:update).with({ \"email\" => \"[email protected]\" })\nTo explain the error in a little more detail, ancestors is trying to access all of the classes from which Company inherits. A good example of ancestors/descendants on this other SO question.\n", "Since you have company = FactoryGirl.create(:company), isn't company an instance on its own?\nHow about Company.any_instance.expects(:update).with({ \"email\" => \"[email protected]\" })?\n", "This is because we have to expect the actual modal/class -> here it\nis Company, instead of expecting any instance of the instance (company).\nso the expected line should be\nexpect_any_instance_of(Company) instead of expect_any_instance_of(company)\n" ]
[ 24, 0, 0 ]
[]
[]
[ "devise", "factory_bot", "rspec", "ruby_on_rails" ]
stackoverflow_0026023249_devise_factory_bot_rspec_ruby_on_rails.txt
Q: Is there way to make talkback speak?
While making my application accessible, I have a problem - there's no way to make it SPEAK!!
By referencing google's library, I implemented
public boolean dispatchPopulateAccessibilityEvent(AccessibilityEvent event)

on my customized view, and I get the right event message - I checked it by using Log.d. However, there's no way to make TalkBack speak...
My application runs from API 8, so I also can't use onPopulateAccessibilityEvent().
Am I thinking wrong? Please somebody help me...
A: For people looking to implement @Carter Hudson's code in Java (don't judge me cause I'm still not using Kotlin in 2019):
AccessibilityManager accessibilityManager = (AccessibilityManager) context.getSystemService(Context.ACCESSIBILITY_SERVICE);
AccessibilityEvent accessibilityEvent = AccessibilityEvent.obtain();
accessibilityEvent.setEventType(AccessibilityEvent.TYPE_ANNOUNCEMENT);

accessibilityEvent.getText().add("Text to be spoken by TalkBack");
if (accessibilityManager != null) {
    accessibilityManager.sendAccessibilityEvent(accessibilityEvent);
}

A: I needed to announce when a button became visible after reloading a RecyclerView's items with a new dataset. RecyclerView being a framework view, it supports talkback / accessibility out-of-the-box. After loading new data, talkback announces "showing items x through y of z" automatically. Utilizing the TTS API to solve the use case I mentioned introduces the following pitfalls:

TTS instance initialization and management is cumbersome and questionable for the following reasons:

Managing TTS instance lifecycle with onInit listener
Managing Locale settings
Managing resources via shutdown() ties you to an Activity's lifecycle per documentation
An Activity's onDestroy is not guaranteed to be called, which seems like a poor mechanism for calling shutdown() in order to deallocate TTS resources.

An easier, more maintainable solution is to play nicely with TalkBack and utilize the Accessibility API like so:
class AccessibilityHelper {
    companion object {
        @JvmStatic
        fun announceForAccessibility(context: Context, announcement: String) {
            context
                .getSystemService(ACCESSIBILITY_SERVICE)
                .let { it as AccessibilityManager }
                .let { manager ->
                    AccessibilityEvent
                        .obtain()
                        .apply {
                            eventType = TYPE_ANNOUNCEMENT
                            className = context.javaClass.name
                            packageName = context.packageName
                            text.add(announcement)
                        }
                        .let {
                            manager.sendAccessibilityEvent(it)
                        }
                }
        }
    }
}

Call the above from wherever you need (I added a method to my base activity that forwards to the helper). This will insert the announcement into the queue of messages for TalkBack to announce out loud and requires no handling of TTS instances. I ended up adding a delay parameter and mechanism into my final implementation to separate these events from ongoing ui-triggered events, as they sometimes tend to override manual announcements.
A: If you want it to speak, use the TextToSpeech API. It takes a string and reads it out loud.
A: The announceForAccessibility method defined in the View class probably serves the purpose here. It was introduced in API level 16. More details here.
A: This is a very handy tool; you can use it everywhere, with a guard:
public static void speak_loud(String str_speak) {
    if (isGoogleTalkbackActive()) {
        AccessibilityManager accessibilityManager = (AccessibilityManager) getDefaultContext().getSystemService(Context.ACCESSIBILITY_SERVICE);
        AccessibilityEvent accessibilityEvent = AccessibilityEvent.obtain();
        accessibilityEvent.setEventType(AccessibilityEvent.TYPE_ANNOUNCEMENT);

        accessibilityEvent.getText().add(str_speak);
        if (accessibilityManager != null) {
            accessibilityManager.sendAccessibilityEvent(accessibilityEvent);
        }
    }
}

public static boolean isGoogleTalkbackActive() {
    AccessibilityManager am = (AccessibilityManager) getDefaultContext().getSystemService(Context.ACCESSIBILITY_SERVICE);
    if (am != null && am.isEnabled()) {
        List<AccessibilityServiceInfo> serviceInfoList = am.getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_SPOKEN);
        if (!serviceInfoList.isEmpty())
            return true;
    }
    return false;
}
Is there way to make talkback speak?
While making my application accessible, I have a problem - there's no way to make it SPEAK!! By referencing google's library, I make public boolean dispatchPopulateAccessibilityEvent(AccessibilityEvent event) on my customized view and I get right event message - I checked it by using Log.d However, there's no way to make talkback to speak... My Application runs from API8 so I can't use also, onPopulateAccessibilityEvent() Am I thinking wrong? Please somebody help me...
[ "For people looking to implement @Carter Hudson's code in Java (don't judge me cause I'm still not using Kotlin in 2019):\nAccessibilityManager accessibilityManager = (AccessibilityManager) context.getSystemService(Context.ACCESSIBILITY_SERVICE);\nAccessibilityEvent accessibilityEvent = AccessibilityEvent.obtain();\naccessibilityEvent.setEventType(AccessibilityEvent.TYPE_ANNOUNCEMENT);\n\naccessibilityEvent.getText().add(\"Text to be spoken by TalkBack\");\nif (accessibilityManager != null) {\n accessibilityManager.sendAccessibilityEvent(accessibilityEvent);\n}\n\n", "I needed to announce when a button became visible after reloading a RecyclerView's items with a new dataset. RecyclerView being a framework view, it supports talkback / accessibility out-of-the-box. After loading new data, talkback announces \"showing items x through y of z\" automatically. Utilizing the TTS API to solve the use case I mentioned introduces the following pitfalls: \n\nTTS instance initialization and management is cumbersome and questionable for the following reasons:\n\n\nManaging TTS instance lifecycle with onInit listener\nManaging Locale settings\nManaging resources via shutdown() ties you to an Activity's lifecycle per documentation\nAn Activity's onDestroy is not guaranteed to be called, which seems like a poor mechanism for calling shutdown() in order to deallocate TTS resources.\n\n\nAn easier, more maintainable solution is to play nicely with TalkBack and utilize the Accessibility API like so:\nclass AccessibilityHelper {\n companion object {\n @JvmStatic\n fun announceForAccessibility(context: Context, announcement: String) {\n context\n .getSystemService(ACCESSIBILITY_SERVICE)\n .let { it as AccessibilityManager }\n .let { manager ->\n AccessibilityEvent\n .obtain()\n .apply {\n eventType = TYPE_ANNOUNCEMENT\n className = context.javaClass.name\n packageName = context.packageName\n text.add(announcement)\n }\n .let {\n manager.sendAccessibilityEvent(it)\n }\n }\n }\n }\n}\n\nCall the above from wherever you need (I added a method to my base activity that forwards to the helper). This will insert the announcement into the queue of messages for TalkBack to announce out loud and requires no handling of TTS instances. I ended up adding a delay parameter and mechanism into my final implementation to separate these events from ongoing ui-triggered events as they sometimes tend to override manual announcements.\n", "If you want it to speak, use the TextToSpeech API. It takes a string and reads it outloud.\n", "announceForAccessibility method defined in the View class probably serves the purpose here. It was introduced in API level 16. 
More details here.\n", "Very this is tool, can use it everywhere with guard\npublic static void speak_loud(String str_speak) {\n if (isGoogleTalkbackActive()) {\n AccessibilityManager accessibilityManager = (AccessibilityManager) getDefaultContext().getSystemService(Context.ACCESSIBILITY_SERVICE);\n AccessibilityEvent accessibilityEvent = AccessibilityEvent.obtain();\n accessibilityEvent.setEventType(AccessibilityEvent.TYPE_ANNOUNCEMENT);\n\n accessibilityEvent.getText().add(str_speak);\n if (accessibilityManager != null) {\n accessibilityManager.sendAccessibilityEvent(accessibilityEvent);\n }\n }\n}\n\n\n public static boolean isGoogleTalkbackActive() {\n AccessibilityManager am = (AccessibilityManager) getDefaultContext().getSystemService(Context.ACCESSIBILITY_SERVICE);\n if (am != null && am.isEnabled()) {\n List<AccessibilityServiceInfo> serviceInfoList = am.getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_SPOKEN);\n if (!serviceInfoList.isEmpty())\n return true;\n }\n return false;\n }\n\n" ]
[ 7, 4, 0, 0, 0 ]
[]
[]
[ "accessibility", "android", "talkback" ]
stackoverflow_0015757008_accessibility_android_talkback.txt
Q: How to show a list of device name and locations from which a user has signed in just like facebook using firebase in a web application?
I want to create a web application where users can sign in using a phone number from different devices and can see the device details, along with location and last logged-in time, using Node.js/React.js. I thought of using JSON web tokens.
A: To create a web application where users can sign in using their phone number and see the details of the devices they have signed in from, you can use Firebase's authentication service in combination with Node.js and React.js.
First, you will need to set up a Firebase project and enable the Phone Authentication method. Then, you can use the signInWithPhoneNumber method of the auth object in the Firebase JavaScript SDK to authenticate the user with their phone number.
Once the user is authenticated, you can use the getToken method of the currentUser object to get the authentication token for the current user, and then use the verifyIdToken method of the auth object to verify the token and get the user's profile information.
Once you have the user's profile information, you can use the metadata property to access information about the user's last sign-in time and the location from which they signed in.
Here is an example of how you might implement this functionality in your web application:
// Import the Firebase JavaScript SDK
import firebase from 'firebase/app';
import 'firebase/auth';

// Initialize Firebase
const firebaseConfig = {
  // Your Firebase configuration goes here
};

firebase.initializeApp(firebaseConfig);

// Authenticate the user with their phone number
const phoneNumber = '+15555555555'; // Replace with the user's phone number
const applicationVerifier = new firebase.auth.RecaptchaVerifier('recaptcha-container');

firebase.auth().signInWithPhoneNumber(phoneNumber, applicationVerifier)
  .then(function(confirmationResult) {
    // Ask the user to enter the verification code they received
    var verificationCode = window.prompt('Please enter the verification code:');

    return confirmationResult.confirm(verificationCode);
  }).then(function(result) {
    // User is authenticated, get their profile information
    var user = result.user;

    return user.getToken(/* forceRefresh */ true);
  }).then(function(idToken) {
    // Verify the token and get the user's profile information
    return firebase.auth().verifyIdToken(idToken);
  }).then(function(decodedToken) {
    var user = decodedToken;

    // Get the user's last sign-in time and location
    var lastSignInTime = user.metadata.lastSignInTime;
    var lastSignInLocation = user.metadata.lastSignInLocation;
    // ...
  }).catch(function(error) {
    // Handle errors
  });

You can then use this information to display the details of the devices the user has signed in from in your React.js component.
I hope this helps! Let me know if you have any other questions.
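As a small follow-on illustration (editorial, not part of the answer above): once the per-device sign-in records have been collected somewhere, for example written to a database on each login, rendering them in React is straightforward. The field names below are assumptions, not a Firebase API.
// Sketch: render previously collected session records; `sessions` is assumed
// to be an array like [{ id, deviceName, location, lastSignInTime }, ...].
function DeviceList({ sessions }) {
  return (
    <ul>
      {sessions.map((s) => (
        <li key={s.id}>
          {s.deviceName}, {s.location}, last signed in {s.lastSignInTime}
        </li>
      ))}
    </ul>
  );
}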
How to show a list of device name and locations from which a user has signed in just like facebook using firebase in a web application?
I want to create a web application where users can sign in using phone number from different devices and can see the device details along with location and last logged in time using node Js/react Js I thought of using json web tokens
[ "To create a web application where users can sign in using their phone number and see the details of the devices they have signed in from, you can use Firebase's authentication service in combination with Node.js and React.js.\nFirst, you will need to set up a Firebase project and enable the Phone Authentication method. Then, you can use the signInWithPhoneNumber method of the auth object in the Firebase JavaScript SDK to authenticate the user with their phone number.\nOnce the user is authenticated, you can use the getToken method of the currentUser object to get the authentication token for the current user, and then use the verifyIdToken method of the auth object to verify the token and get the user's profile information.\nOnce you have the user's profile information, you can use the metadata property to access information about the user's last sign-in time and the location from which they signed in.\nHere is an example of how you might implement this functionality in your web application:\n // Import the Firebase JavaScript SDK\nimport firebase from 'firebase/app';\nimport 'firebase/auth';\n\n// Initialize Firebase\nconst firebaseConfig = {\n // Your Firebase configuration goes here\n};\n\nfirebase.initializeApp(firebaseConfig);\n\n// Authenticate the user with their phone number\nconst phoneNumber = '+15555555555'; // Replace with the user's phone number\nconst applicationVerifier = new firebase.auth.RecaptchaVerifier('recaptcha-container');\n\nfirebase.auth().signInWithPhoneNumber(phoneNumber, applicationVerifier)\n .then(function(confirmationResult) {\n // Ask the user to enter the verification code they received\n var verificationCode = window.prompt('Please enter the verification code:');\n\n return confirmationResult.confirm(verificationCode);\n }).then(function(result) {\n // User is authenticated, get their profile information\n var user = result.user;\n\n return user.getToken(/* forceRefresh */ true);\n }).then(function(idToken) {\n // Verify the token and get the user's profile information\n return firebase.auth().verifyIdToken(idToken);\n }).then(function(decodedToken) {\n var user = decodedToken;\n\n // Get the user's last sign-in time and location\n var lastSignInTime = user.metadata.lastSignInTime;\n var lastSignInLocation = user.metadata.lastSignInLocation;\n // ...\n }).catch(function(error) {\n // Handle errors\n });\n\nYou can then use this information to display the details of the devices the user has signed in from in your React.js component.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "firebase", "javascript", "node.js", "reactjs" ]
stackoverflow_0074665108_firebase_javascript_node.js_reactjs.txt
Q: Outlook VSTO Add-in: Removing a lot of recipients is too slow and does not always work I have a VSTO Outlook Add-in. In the compose window I have a button. When this button is clicked, the recipients that satisfy a condition are removed. The recipients that need to be deleted are stored in a list, that is, List<Outlook.Recipient>. I iterate this list and remove each recipient.
foreach (Outlook.Recipient recipient in this.RecipientsList)
{
    this.MyMailItem?.Recipients?.Remove(recipient.Index);
}

// clear all the recipients
this.RecipientsList.Clear();

I have noticed that the more recipients need to be removed, the slower it is, and also that not all the recipients contained in the list are always removed; sometimes they are, sometimes not. Two things here:

How can I optimize the speed for removing recipients?
Why are all the recipients contained in the list sometimes not removed? It's random: sometimes yes, sometimes not.

Note that this.MyMailItem is of type Outlook.MailItem and this.MyMailItem.Recipients is of type Outlook.Recipients. The Remove function requires an integer as a parameter; this is what its definition says, see here.
A: Keeping Outlook COM objects in collections is not really a good idea. Instead, you can keep their indexes or addresses.
In the following loop:
foreach (Outlook.Recipient recipient in this.RecipientsList)
{
    this.MyMailItem?.Recipients?.Remove(recipient.Index);
}

The recipient object is ignored!
You could use the following:
foreach (Outlook.Recipient recipient in this.RecipientsList)
{
    recipient.Delete();
}

The Recipient.Delete method deletes an object from the collection.
The rule of thumb is to release underlying COM objects instantly. In that case you may be sure that objects are released in a timely manner. For example, in the code each time in the loop you get the Recipients collection, which increases the reference counter:
this.MyMailItem?.Recipients

I'd recommend keeping the collection and only dealing with specific items in the loop:
Outlook.Recipients recipients = this.MyMailItem?.Recipients;
// in the loop you can call the Remove method
recipients.Remove(recipient.Index);

Also you may try to get a specific recipient instance in the loop and delete it. After that you also need to release the underlying COM object. Iterate backwards, because Outlook collections are 1-based and deleting an item shifts the indexes of the items after it:
Outlook.Recipient recipient = null;
Outlook.Recipients recipients = this.MyMailItem?.Recipients;
for (int i = recipients.Count; i >= 1; i--)
{
    recipient = recipients[i];
    recipient.Delete();
    Marshal.ReleaseComObject(recipient);
    recipient = null;
}

A: The recipient count changes dynamically inside the for loop, so not all recipients get cleared in the for loop. I used the code below as a workaround.
while (recipients.Count > 0)
{
    recipients.Remove(recipients.Count);
}
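A minimal sketch combining both answers for the original use case, removing only the recipients that match a condition without caching Recipient objects in a List; the RemoveMatchingRecipients name and the shouldRemove predicate are illustrative assumptions, not from the original post:
// Requires: using System; using System.Runtime.InteropServices;
// and Outlook = Microsoft.Office.Interop.Outlook;
private void RemoveMatchingRecipients(Outlook.MailItem mailItem, Func<string, bool> shouldRemove)
{
    Outlook.Recipients recipients = mailItem.Recipients;
    // Walk backwards so Remove does not shift the 1-based indexes
    // of the items still to visit.
    for (int i = recipients.Count; i >= 1; i--)
    {
        Outlook.Recipient recipient = recipients[i];
        try
        {
            if (shouldRemove(recipient.Address ?? recipient.Name))
            {
                recipients.Remove(i);
            }
        }
        finally
        {
            // Release each COM object as soon as it is no longer needed.
            Marshal.ReleaseComObject(recipient);
        }
    }
    Marshal.ReleaseComObject(recipients);
}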
Outlook VSTO Add-in: Removing a lot of recipients is too slow and does not always work
I have a VSTO Outlook Add-in. In the compose window I have a button. When this button is clicked, the recipients that satisfy a condition are removed. The recipients that need to be deleted are stored in a list, that is, List<Outlook.Recipient>. I iterate this list and remove each recipient.
foreach (Outlook.Recipient recipient in this.RecipientsList)
{
    this.MyMailItem?.Recipients?.Remove(recipient.Index);
}

// clear all the recipients
this.RecipientsList.Clear();

I have noticed that the more recipients need to be removed, the slower it is, and also that not all the recipients contained in the list are always removed; sometimes they are, sometimes not. Two things here:

How can I optimize the speed for removing recipients?
Why are all the recipients contained in the list sometimes not removed? It's random: sometimes yes, sometimes not.

Note that this.MyMailItem is of type Outlook.MailItem and this.MyMailItem.Recipients is of type Outlook.Recipients. The Remove function requires an integer as a parameter; this is what its definition says, see here.
[ "Keeping Outlook COM objects in the collections is not really a good idea. Instead, you can keep their indexes and etc.\nIn the following loop:\nforeach (Outlook.Recipient recipient in this.RecipientsList)\n{\n this.MyMailItem?.Recipients?.Remove(recipient.Index);\n}\n\nThe recipient object is ignored!\nYou could use the following:\nforeach (Outlook.Recipient recipient in this.RecipientsList)\n{\n recipient.Delete();\n}\n\nThe Recipient.Delete method deletes an object from the collection.\nThe rule of thumb is to release underlying COM objects instantly. In that case you may be sure that objects are released timely. For example, in the code each time in the loop you get the Recipients collection which increases the reference counter:\nthis.MyMailItem?.Recipients\n\nI'd recommend keeping the collection and only deal with a specific items in the loop:\nOutlook.Recipients recipients = this.MyMailItem?.Recipients;\n// in the loop you can call the Remove method\nrecipients.Remove(recipient.Index);\n\nAlso you may try to get a specific recipient instance in the loop and delete it. After that you also need to release an underlying COM object:\nOutlook.Recipient recipient = null;\nOutlook.Recipients recipients = this.MyMailItem?.Recipients;\nfor(int i = 1; i<recipients.Count;i++)\n{\n recipient = this.RecipientsList.Item(i);\n recipient.Delete();\n Marshal.ReleaseComObject(recipient);\n recipient = null;\n}\n\n", "Recipient count is changing dynamically inside for loop so not all recipients get cleared in for loop. i used below code as a workaround.\nwhile (recipients.Count>0)\n{\n recipients.Remove(recipients.Count);\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "office_addins", "office_interop", "outlook", "outlook_addin", "vsto" ]
stackoverflow_0073280647_office_addins_office_interop_outlook_outlook_addin_vsto.txt
Q: Vite creating its own node_modules in workspace instead of using monorepo I have a monorepo for a fullstack webapp with the following directory structure
.
β”œβ”€β”€ client
β”‚   β”œβ”€β”€ index.html
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ src
β”‚   └── vite.config.ts
β”œβ”€β”€ node_modules
β”œβ”€β”€ package-lock.json
β”œβ”€β”€ package.json
β”œβ”€β”€ server
β”‚   β”œβ”€β”€ package.json
β”‚   └── src
β”œβ”€β”€ tsconfig.json
└── tsconfig.node.json

However, when I run npm run dev -ws client, vite generates its own node_modules/ inside client/.
.
β”œβ”€β”€ client
β”‚   β”œβ”€β”€ index.html
β”‚   β”œβ”€β”€ node_modules    <--- this
β”‚   β”‚   └── .vite
β”‚   β”‚       └── deps_temp
β”‚   β”‚           └── package.json
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ src
β”‚   └── vite.config.ts

My understanding is that the point of using npm workspaces is to avoid having multiple node_modules/ in each sub-project, instead having all dependencies installed in the root node_modules/. Vite generating its own seems to defeat that point. I'm assuming I don't have something configured properly (I used npx create-vite to set up Vite).
Output of npm run dev -ws client
> @sargon-dashboard/[email protected] dev
> vite client

(!) Could not auto-determine entry point from rollupOptions or html files and there are no explicit optimizeDeps.include patterns. Skipping dependency pre-bundling.

  VITE v3.2.4  ready in 175 ms

  ➜  Local:   http://localhost:5173/
  ➜  Network: use --host to expose

Contents of vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [react()]
})

contents of root/package.json
{
  "name": "app",
  "private": true,
  "workspaces": [
    "client",
    "server"
  ]
}

contents of root/client/package.json
{
  "name": "@app/client",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "tsc && vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "devDependencies": {
    "@types/react": "^18.0.24",
    "@types/react-dom": "^18.0.8",
    "@vitejs/plugin-react": "^2.2.0",
    "typescript": "^4.6.4",
    "vite": "^3.2.3"
  }
}

contents of root/server/package.json
{
  "name": "@app/server",
  "version": "0.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

A: You did nothing wrong. node_modules/.vite is the default vite cache directory. It only looks like a misconfiguration in a monorepo because you don't expect a node_modules folder inside the packages anymore.
If you want, you can configure a different path:
https://v2.vitejs.dev/config/#cachedir

A: It looks like you're using npm workspaces correctly in your monorepo. The node_modules directory Vite creates inside the client directory does not hold installed dependencies, though: node_modules/.vite is Vite's dependency pre-bundling cache, which it writes next to the package it serves.
It's possible to move that cache to the node_modules directory at the root of your monorepo by using the cacheDir option in your Vite config (note that the base option controls the public base path of served assets, not the cache location). You can change your vite.config.ts like this:
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  cacheDir: '../node_modules/.vite',
  plugins: [react()]
})

This will tell Vite to write its cache under the node_modules directory at the root of your monorepo instead of creating a new one inside the client directory.
I hope this helps! Let me know if you have any other questions.
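If several workspace packages run Vite, one pattern (a sketch under assumptions, not from the answers above; the '/client' suffix and the one-level-deep relative path are illustrative) is to resolve the cache location against the config file itself, so the path stays correct no matter which directory the dev server is launched from:
// vite.config.ts — sketch: keep this workspace's Vite cache under the repo root
import { fileURLToPath, URL } from 'node:url'
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  // Resolve relative to this file, not the current working directory;
  // the per-package 'client' suffix keeps workspace caches from colliding.
  cacheDir: fileURLToPath(new URL('../node_modules/.vite/client', import.meta.url)),
  plugins: [react()],
})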
Vite creating its own node_modules in workspace instead of using monorepo
I have a monorepo for a fullstack webapp with the following directory structure
.
β”œβ”€β”€ client
β”‚   β”œβ”€β”€ index.html
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ src
β”‚   └── vite.config.ts
β”œβ”€β”€ node_modules
β”œβ”€β”€ package-lock.json
β”œβ”€β”€ package.json
β”œβ”€β”€ server
β”‚   β”œβ”€β”€ package.json
β”‚   └── src
β”œβ”€β”€ tsconfig.json
└── tsconfig.node.json

However, when I run npm run dev -ws client, vite generates its own node_modules/ inside client/.
.
β”œβ”€β”€ client
β”‚   β”œβ”€β”€ index.html
β”‚   β”œβ”€β”€ node_modules    <--- this
β”‚   β”‚   └── .vite
β”‚   β”‚       └── deps_temp
β”‚   β”‚           └── package.json
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ src
β”‚   └── vite.config.ts

My understanding is that the point of using npm workspaces is to avoid having multiple node_modules/ in each sub-project, instead having all dependencies installed in the root node_modules/. Vite generating its own seems to defeat that point. I'm assuming I don't have something configured properly (I used npx create-vite to set up Vite).
Output of npm run dev -ws client
> @sargon-dashboard/[email protected] dev
> vite client

(!) Could not auto-determine entry point from rollupOptions or html files and there are no explicit optimizeDeps.include patterns. Skipping dependency pre-bundling.

  VITE v3.2.4  ready in 175 ms

  ➜  Local:   http://localhost:5173/
  ➜  Network: use --host to expose

Contents of vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [react()]
})

contents of root/package.json
{
  "name": "app",
  "private": true,
  "workspaces": [
    "client",
    "server"
  ]
}

contents of root/client/package.json
{
  "name": "@app/client",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "tsc && vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "devDependencies": {
    "@types/react": "^18.0.24",
    "@types/react-dom": "^18.0.8",
    "@vitejs/plugin-react": "^2.2.0",
    "typescript": "^4.6.4",
    "vite": "^3.2.3"
  }
}

contents of root/server/package.json
{
  "name": "@app/server",
  "version": "0.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
[ "You did nothing wrong. node_modules/.vite is the default vite cache directory. It only looks like a misconfiguration in a monorepo because you don't expect a node_modules folder inside the packages anymore.\nIf you want, you can configure a different path:\nhttps://v2.vitejs.dev/config/#cachedir\n", "It looks like you're using npm workspaces correctly in your monorepo. However, it seems like Vite is creating its own node_modules directory inside the client directory. This is not unexpected behavior, as Vite uses a local development server to serve the files in your project, and it needs to install its own dependencies in order to do so.\nIt's possible to configure Vite to use the node_modules directory at the root of your monorepo instead, by using the base option in your Vite config. You can add the following line to your vite.config.ts file to do this:\nimport { defineConfig } from 'vite'\nimport react from '@vitejs/plugin-react'\n\nexport default defineConfig({\n base: '.',\n plugins: [react()]\n})\n\nThis will tell Vite to use the node_modules directory at the root of your monorepo instead of creating a new one inside the client directory.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 1, 1 ]
[]
[]
[ "javascript", "monorepo", "npm", "react_fullstack", "vite" ]
stackoverflow_0074635060_javascript_monorepo_npm_react_fullstack_vite.txt