Q:
'numpy.ndarray' object has no attribute 'xaxis' - not sure why
I have the following code.
I am trying to loop through a dataframe 'out' and create a separate subplot for each group and level.
There are 35 groups and 5 levels, producing 175 plots in total.
I thus want to create 5 figures each with 35 subplots (7 rows and 5 columns).
However, when I try to assign specific plots to different axes, I get the error: 'numpy.ndarray' object has no attribute 'xaxis'
I would be so grateful for a helping hand!
I have attached some example data below.
for j in range(0,len(individualoutliers)):
fig = plt.figure(figsize=(50,50))
fig,axes = plt.subplots(7,5)
for i in range(0,len(individualoutliers[j])):
individualoutliersnew = individualoutliers[j]
out = individualoutliersnew.loc[:, ["newID", "x", "y","level"]].apply(lambda x: pd.Series(x).explode())
for k,g in out.groupby("newID"):
globals()['interestingvariable'] = g
newframe = interestingvariable
sns.lineplot(data=newframe,x='x',y='y',ax=axes[i])
axes[i].set_xlabel('x-coordinate',labelpad = 40,fontsize=70,weight='bold')
axes[i].set_ylabel('y-coordinate',labelpad = 40,fontsize=70,weight='bold')
plt.xticks(weight='bold',fontsize=60,rotation = 30)
plt.yticks(weight='bold',fontsize=60)
title = (newframe.iloc[0,0]+' '+'level'+' '+str(newframe.iloc[i,3]))
axes[i].set_title(title,fontsize=70,pad=40,weight='bold')
dir_name = "/Users/macbook/Desktop/"
plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name))
plt.savefig(newframe.iloc[0,0]+' '+'level'+' '+str(newframe.iloc[i,3])+'individualoutlierplot')
plt.show()
out.head(10)
newID x y level
24 610020 55 60 1
24 610020 55 60 1
24 610020 55 60 1
24 610020 60 60 1
24 610020 60 65 1
24 610020 60 65 1
24 610020 65 70 1
24 610020 70 70 1
24 610020 70 75 1
24 610020 75 75 1
newframe.head(10)
newID x y level
3313 5d254d 55 60 1
3313 5d254d 55 60 1
3313 5d254d 55 60 1
3313 5d254d 60 60 1
3313 5d254d 60 65 1
3313 5d254d 60 65 1
3313 5d254d 65 65 1
3313 5d254d 65 70 1
3313 5d254d 70 75 1
3313 5d254d 75 75 1
A:
In fig,axes = plt.subplots(7,5), axes is a 2D NumPy array of shape (7, 5) whose elements are Axes objects (not pairs of x/y axes).
In sns.lineplot(data=newframe,x='x',y='y',ax=axes[i]) you are therefore passing axes[i], which is an entire row of the grid (a 1D array of 5 Axes), not the single Axes object that lineplot expects, which is why seaborn fails when it tries to access .xaxis on the array. Either index with both row and column (axes[i // 5, i % 5]) or flatten the array first (axes.flatten()[i]).
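A minimal runnable sketch of that fix: flatten the 7x5 Axes array so a single index selects a single subplot. The placeholder plot data stands in for the questioner's dataframe, and the Agg backend keeps it headless:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no display needed
import matplotlib.pyplot as plt

fig, axes = plt.subplots(7, 5)  # axes is a (7, 5) ndarray of Axes
flat_axes = axes.flatten()      # reshaped to (35,): one Axes per index

for i, ax in enumerate(flat_axes):
    # stand-in for sns.lineplot(data=newframe, x='x', y='y', ax=ax)
    ax.plot([55, 60, 65], [60, 60, 70])
    ax.set_title("subplot %d" % i)

plt.close(fig)
```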
Q:
How to execute GMAIL API filters
I am able to access my Gmail filters in Apps Script through the Gmail API:
filters = Gmail.Users.Settings.Filters.list("me")
filter_instance = filters.filter[0]
How can I find the emails matching filter_instance criteria?
An alternative solution would be to know how to apply that existing filter to a certain group of labelled messages.
A:
From "How can I find the emails matching filter_instance criteria?", I believe your goal is as follows.
You want to retrieve Email messages by searching with the installed Gmail filter.
You want to achieve this using Google Apps Script.
At present, it seems that email messages cannot be searched directly by an installed Gmail filter. So, in this case, the search query has to be built from the filter's criteria. How about the following sample script?
Sample script:
Before you use this script, please enable Gmail API at Advanced Google services.
function myFunction() {
const filters = Gmail.Users.Settings.Filters.list("me");
const searchQuery = filters.filter.map(({ criteria: { from, to, subject, query, negatedQuery, hasAttachment, size, sizeComparison } }) => ([
from ? `from:(${from})` : "",
to ? `to:(${to})` : "",
subject ? `subject:(${subject})` : "",
query || "",
negatedQuery ? `-{${negatedQuery}}` : "",
hasAttachment ? "has:attachment" : "",
(size && sizeComparison) ? `${sizeComparison}:${size}` : "",
].join(" ").trim()));
if (searchQuery.length > 0) {
const query = searchQuery[0]; // From your script, 1st filter is used.
GmailApp.search(query).forEach(t => {
t.getMessages().forEach(m => {
console.log({ from: m.getFrom(), to: m.getTo(), subject: m.getSubject(), date: m.getDate() });
});
});
} else {
throw new Error("No filters were found.");
}
}
When this script is run, the email messages are searched using the installed Gmail filter. In this case, following the script you showed, the 1st filter is used. The console.log({ from: m.getFrom(), to: m.getTo(), subject: m.getSubject(), date: m.getDate() }) call is only sample output, so please modify this script to your actual situation.
As an important point, GmailApp.search returns at most 500 threads. If you want to retrieve more, please use search(query, start, max).
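As a sketch of that paging idea (searchAll and searchFn are hypothetical names, not part of the Gmail service; searchFn stands in for GmailApp.search so the loop can be exercised outside Apps Script):

```javascript
// Hypothetical paging helper: searchFn(query, start, max) is a stand-in
// for GmailApp.search, which returns at most `max` threads per call.
function searchAll(searchFn, query, pageSize) {
  let start = 0;
  const results = [];
  while (true) {
    const page = searchFn(query, start, pageSize);
    results.push(...page);
    if (page.length < pageSize) break; // short or empty page: we are done
    start += pageSize;
  }
  return results;
}
```

In Apps Script you would call searchAll((q, s, m) => GmailApp.search(q, s, m), query, 500) to collect every matching thread.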
References:
Method: users.settings.filters.list
search(query, start, max)
Q:
Typesafe higher order function mimicking the interface of any passed in function
I'm trying to type a higher order function. The basic idea is that high is passed a function fn, and it returns a function that takes the exact same parameters and gives the same return type.
It's an exercise in trying to understand the language better. I have something that works but erases the type from the input parameters. Please see the types Test1 vs Test2 below:
export function high<R>(fn: (...args: any[]) => R) {
return (...args: any[]) => {
const moddedArgs = args.map((el) =>
typeof el === "string" ? el + "OMG" : el
);
return fn(...moddedArgs);
};
}
const test1 = (nr1: number, str1?: string) => (str1 ?? "Wow").repeat(nr1);
test1(3, "yo");
// returns yoyoyo
test1(3);
// returns WowWowWow
const test2 = high(test1);
test2(3, "yo");
// returns yoOMGyoOMGyoOMG
type Test1 = typeof test1;
// type Test1 = (nr1: number, str1?: string) => string
type Test2 = typeof test2;
// type Test2 = (...args: any[]) => string
Playground Link
I tried having a P type argument for the parameters, which I sort of got working, but it didn't handle optional parameters well.
Any ideas how this can be solved?
A:
You should store the whole type of fn in a generic type. To type ...args, we can use Parameters<Fn>.
export function high<Fn extends (...args: any[]) => any>(fn: Fn) {
return (...args: Parameters<Fn>) => {
const moddedArgs = args.map((el) =>
typeof el === "string" ? el + "OMG" : el
);
return fn(...moddedArgs);
};
}
Playground
A:
Instead of trying to guess generics for the fn function parameters and return types separately, we can have a single generic type F for the whole function, and then extract its constituent types with Parameters and ReturnType TypeScript built-in utility types:
export function high<F extends (...args: any[]) => any>(fn: F) {
return (...args: Parameters<F>): ReturnType<F> => {
const moddedArgs = args.map((el) =>
typeof el === "string" ? el + "OMG" : el
);
return fn(...moddedArgs);
};
}
const test2 = high(test1);
// ^? (nr1: number, str1?: string | undefined) => string
We can even go a little further, in case it might make sense in your case: by asserting the modified function as having exactly the same type as the original function (type F above), we get its associated JSDoc as well!
function high2<F extends (...args: any[]) => any>(fn: F) {
return high(fn) as F // If we assert the returned function type as exactly F, we get its associated JSDoc as well!
}
const test22 = high2(test1);
// ^? (nr1: number, str1?: string) => string
// In the IDE, we should have any JSDoc associated to test1, popup as well on test22, since it is said to be exactly the same type.
Playground Link
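To confirm at runtime that the wrapped function preserves the original's behavior, here is a self-contained sketch restating the answer's Parameters/ReturnType version of high together with the question's test1:

```typescript
function high<F extends (...args: any[]) => any>(fn: F) {
  return (...args: Parameters<F>): ReturnType<F> =>
    // append "OMG" to string arguments, as in the question
    fn(...args.map((el) => (typeof el === "string" ? el + "OMG" : el)));
}

const test1 = (nr1: number, str1?: string) => (str1 ?? "Wow").repeat(nr1);
const wrapped = high(test1);
// wrapped has type (nr1: number, str1?: string | undefined) => string
```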
Q:
Using a const in a URL onclick event
Im trying to build a cheaty kinda multilingual site in wordpress (interim fix) and need to set the logo link to go the the correct landing page when clicked, either /en/ or /se/.
Im trying to do this by grabing the url and then splitting the path name.
`
<script language="javascript" type="text/javascript">
const firstpath = location.pathname.split('/')[1];
</script>
<a href="/" onclick="location.href=this.href+firstpath;return false;">
<img />
</a>
`
fairly sure im missing something simple, especially as it did work awhile ago until I changed something that i dont remember :/
When clicked the link shoulf return:
root/first-folder-of-current-url
MORE DETAILS:
(sorry fairly new here)
So I have a two folder hierarcy /en/ and /se/.
When in the /en/ folder if i click the logo I should be taken back to the root/en/ folder.
When in the /se/ folder if i click the logo I should be taken back to the root/se/ folder.
I cant have unique code for each folder so stuck trying to make this work with a javascript link.
A:
You can use a regular expression instead. (Regular expressions are how we look for patterns in strings of text in JavaScript. You can read about them here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions)
Also, instead of using the onclick event, <a> tags already include a function for creating a hyperlink using JavaScript, by writing javascript: in the href attribute. That might be more compatible with WordPress.
In your case, it would go like this:
<a href="javascript:location.href=location.href.match( new RegExp( '^.*://.*/(en|se)/' ) )[ 0 ];">
<img />
</a>
This regular expression, ^.*://.*/(en|se)/, works like this:
^: Means [At the beginning of the string]
.*: Means [One or more of any character]
:// Means what it says, "://". This is in every URL, like https://www.google.com
.* Means [One or more of any character]
/(en|se)/ Means [Slash "/", followed by "en" or "se", followed by slash "/"]
The .match() function finds matches for the pattern we described, and returns them in an array. We only want the first entry of that array, so we specify .match( ... )[ 0 ]
A:
I found an alternative by simply replacing the content of the href:
<a href="#" id="homelink">
<img src="URL" />
</a>
<script>
document.getElementById("homelink").href ='/'+location.pathname.split('/')[1];
</script>
Q:
d.py number of bans in a guild
So I tried using embed.add_field(name="Ban Count", value=f"{len(await ctx.guild.bans())} Bans",inline=False)
but I get this error: object async_generator can't be used in 'await' expression. How do I display the number of bans?
A:
You must first convert the bans into a list, then get the length of the list:
bans_list = [entry async for entry in ctx.guild.bans()]
number_of_bans = len(bans_list)
# output number of bans, etc...
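Since the real ctx.guild.bans() needs a live Discord connection, here is a hedged, self-contained sketch of the same pattern: a fake async generator stands in for guild.bans() (which discord.py 2.x returns as an async iterator), and the async list comprehension from the answer counts it:

```python
import asyncio

async def fake_bans():  # stand-in for ctx.guild.bans()
    for entry in ("ban1", "ban2", "ban3"):
        yield entry

async def count_bans():
    # the async list comprehension consumes the async iterator
    bans_list = [entry async for entry in fake_bans()]
    return len(bans_list)

number_of_bans = asyncio.run(count_bans())
```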
Q:
React-Native Button onPress doesn't work?
I have a problem with onPress: my handleClick function never runs, and none of the solutions I tried worked.
I've tried the following approaches:
onPress={this.handleClick}
onPress={this.handleClick()}
onPress={this.handleClick.bind(this)}
onPress={() => this.handleClick.bind(this)}
And I tried to change the function to:
handleClick(){
console.log('Button clicked!');
}
and this is the my code:
import React, { Component } from 'react';
import {
View,
} from 'react-native';
import Card from './common/Card';
import CardItem from './common/CardItem';
import Buttom from './common/Buttom';
import Input from './common/Input';
export default class LoginForm extends Component {
constructor(props) {
super(props);
this.state = {
email: '',
password: '',
}
}
onLoginPress() {
//console.log(`Email is : ${ this.state.email }`);
//console.log(`Password is : ${ this.state.password }`);
};
handleClick = () => {
console.log('Button clicked!');
};
render() {
return (
<View >
<Card>
<CardItem>
<Input
label='Email'
placeholder='Enter your email'
secureTextEntry={false}
onChangeText = { (email) => this.setState({ email })}
/>
</CardItem>
<CardItem>
<Input
label='Password'
placeholder='Enter your password'
secureTextEntry={true}
onChangeText = { (password) => this.setState({ password })}
/>
</CardItem>
<CardItem>
<Buttom onPress={this.handleClick}> Login </Buttom>
</CardItem>
</Card>
</View>
);
}
}
and this is my Buttom.js file:
import React from 'react';
import {StyleSheet, Text, TouchableOpacity} from 'react-native';
const Buttom = (props) => {
return(
<TouchableOpacity style={styles.ButtomView} >
<Text style={styles.TextButtom}> {props.children} </Text>
</TouchableOpacity>
);
}
const styles = StyleSheet.create({
ButtomView: {
flex: 1,
height: 35,
borderRadius: 5,
backgroundColor: '#2a3744',
justifyContent: 'center',
marginVertical: 15
},
TextButtom: {
color: '#fff',
textAlign: 'center',
fontWeight: 'bold',
fontSize: 15,
}
});
export default Buttom;
A:
You cannot attach an event handler to a custom component directly; onPress only works on React Native elements (like TouchableOpacity), just as DOM events only attach to DOM elements in React. Your Buttom component never forwards the handler to its TouchableOpacity.
You should pass the event handler down as a prop:
<Buttom onPressHandler={this.handleClick}> Login </Buttom>
In the Buttom component, use props.onPressHandler to wire up the passed handler:
const Buttom = (props) => {
  return(
    <TouchableOpacity style={styles.ButtomView} onPress={props.onPressHandler}>
      <Text style={styles.TextButtom}> {props.children} </Text>
    </TouchableOpacity>
  );
}
A:
Bind the function to the component instance in the constructor:
constructor(props) {
super(props);
this.state = {
email: '',
password: '',
}
this.handleClick = this.handleClick.bind(this);
}
Then use <Buttom onPress={() => this.handleClick()}> Login </Buttom>.
A:
I had to restart the app after adding new packages.
(using react-native start or react-native run-android)
Q:
How to add filters to jQuery.fancyTable (or alternative)
I'm currently using fancyTable to display a table of Django results. I have been able to add a search bar and sortable columns (ascending/descending). I've also added "filters", but all they really do is update the search bar with pre-defined text.
The problem is that I want these filters to only show rows that match exactly. For example, if I filter the Stage column for Prescreen, I currently also get rows containing Passed - Prescreen 1 and Passed - Prescreen 2.
I've already tried writing a function that sets the tr elements to display: none, but the pagination does not refresh, so I'm left with several blank pages.
Currently initiating fancyTable as such:
<script type="text/javascript">
$(document).ready(function () {
$(".sampleTable").fancyTable({
/* Column number for initial sorting*/
sortColumn: 0,
sortOrder: 'descending', // Valid values are 'desc', 'descending', 'asc', 'ascending', -1 (descending) and 1 (ascending)
/* Setting pagination or enabling */
pagination: true,
/* Rows per page kept for display */
perPage: 12,
globalSearch: true
});
});
</script>
Any ideas on how I can create these filters? I don't see anything from fancyTable about filters.
A:
Ok, so first I had to trigger fancyTable to run the isSearchMatch function in fancyTable.js:
<div id="checkboxes" class="dropdown-content-filters" style="">
{% for stage in companyStages %}
<label for="one" >
<input id="{{ stage }}" onchange="updateSearchBar_multiselect()" class="multiSelectCheckbox" type="checkbox" value="{{ stage }}"/>{{ stage }}
</label>
{% endfor %}
</div>
Then updateSearchBar_multiselect() collects the checked checkboxes into a comma-delimited string and puts that string into the search box. search_bar.dispatchEvent(new Event('change')); triggers fancyTable's isSearchMatch:
function updateSearchBar_multiselect(){
checkElements = document.getElementsByClassName("multiSelectCheckbox")
toSearch = ""
for(var i = 0; i < checkElements.length; i++){
if (checkElements[i].checked) {
//console.log("checkElements[i]",checkElements[i])
type = checkElements[i].id
toSearch = toSearch.concat(type, ',')
}
}
var search_bar = document.getElementById("searchID");
search_bar.value = toSearch;
search_bar.dispatchEvent(new Event('change'));
}
Within isSearchMatch (an excerpt; the enclosing try block is part of fancyTable.js's original function, which is why a bare catch appears at the end):
isFilter = false
if (search.includes(",")) {
searchArray = search.split(",");
isFilter = true
}
console.log("searchArray",searchArray)
if (isFilter) {
console.log("filtering")
if (data.includes(">PRESCREEN<") && searchArray.includes("PRESCREEN")) {
index = searchArray.indexOf('PRESCREEN')
return getReturn(searchArray[index],data)
}
else if (data.includes(">SCREENING<") && searchArray.includes("SCREENING")) {
index = searchArray.indexOf('SCREENING')
return getReturn(searchArray[index], data)
}
else if (data.includes(">MONITOR<") && searchArray.includes("MONITOR")) {
index = searchArray.indexOf('MONITOR')
return getReturn(searchArray[index], data)
}
else if (data.includes(">PORTFOLIO<") && searchArray.includes("PORTFOLIO")) {
index = searchArray.indexOf('PORTFOLIO')
return getReturn(searchArray[index], data)
}
else if (data.includes(">NOSTAGE<") && searchArray.includes("NOSTAGE")) {
index = searchArray.indexOf('NOSTAGE')
return getReturn(searchArray[index], data)
}
else if (data.includes(">DUE DILIGENCE<") && searchArray.includes("DUE DILIGENCE")) {
index = searchArray.indexOf('DUE DILIGENCE')
return getReturn(searchArray[index], data)
}
else if (data.includes(">PASSED - DUE DILIGENCE<") && searchArray.includes("PASSED - DUE DILIGENCE")) {
index = searchArray.indexOf('PASSED - DUE DILIGENCE')
return getReturn(searchArray[index], data)
}
else if (data.includes(">HOLD<") && searchArray.includes("HOLD")) {
index = searchArray.indexOf('HOLD')
return getReturn(searchArray[index], data)
}
else if (data.includes(">DUE DILIGENCE 2<") && searchArray.includes("DUE DILIGENCE 2")) {
index = searchArray.indexOf('DUE DILIGENCE 2')
return getReturn(searchArray[index], data)
}
else if (data.includes(">FEEDER PORTFOLIO<") && searchArray.includes("FEEDER PORTFOLIO")) {
index = searchArray.indexOf('FEEDER PORTFOLIO')
return getReturn(searchArray[index], data)
}
else if (data.includes(">NOT CURRENTLY RAISING<") && searchArray.includes("NOT CURRENTLY RAISING")) {
index = searchArray.indexOf('NOT CURRENTLY RAISING')
return getReturn(searchArray[index], data)
}
else if (data.includes(">SAVE FOR FUND 2<") && searchArray.includes("SAVE FOR FUND 2")) {
index = searchArray.indexOf('SAVE FOR FUND 2')
return getReturn(searchArray[index], data)
}
else {
return(false)
}
}
return (settings.exactMatch === true) ? (data==search) : (new RegExp(search).test(data));
}
catch {
return (settings.exactMatch === true) ? (data==search) : (new RegExp(search).test(data));
}
and getReturn():
function getReturn(substr, data) {
    // Extract the text between ">" and "<" in the cell markup
    const splitData = data.split(">")[1]
    const splitData2 = splitData.split("<")[0]
    if (splitData2 == substr) {
        return (true);
    }
    // Partial matches such as "Passed - Prescreen" are rejected
    return (false);
}
|
How to add filters to jQuery.fancyTable (or alternative)
|
I'm currently using fancyTable to display a table of Django results. I have been able to add a search bar and sortable columns (ascending/descending). I've also added "filters", but all they really do is update the search bar with pre-defined text.
The problem is that I want these filters to only show rows that match exactly. For example, if I filter the Stage column for Prescreen, I currently get rows that include Prescreen, Passed - Prescreen 1, and Passed - Prescreen 2.
I've already tried writing a function that sets each tr to display: none, but the pagination does not refresh, so I'm left with several blank pages.
Currently initiating fancyTable as such:
<script type="text/javascript">
$(document).ready(function () {
$(".sampleTable").fancyTable({
/* Column number for initial sorting*/
sortColumn: 0,
sortOrder: 'descending', // Valid values are 'desc', 'descending', 'asc', 'ascending', -1 (descending) and 1 (ascending)
/* Setting pagination or enabling */
pagination: true,
/* Rows per page kept for display */
perPage: 12,
globalSearch: true
});
});
</script>
Any ideas on how I can create these filters? I don't see anything from fancyTable about filters.
|
[
"Ok, so first I had to trigger FancyTable to fun the isSearchMatch function in fancyTable.js :\n<div id=\"checkboxes\" class=\"dropdown-content-filters\" style=\"\">\n {% for stage in companyStages %}\n <label for=\"one\" >\n <input id=\"{{ stage }}\" onchange=\"updateSearchBar_multiselect()\" class=\"multiSelectCheckbox\" type=\"checkbox\" id=\"one\" value=\"{{ stage }}\"/>{{ stage }}\n </label>\n {% endfor %}\n</div>\n\nThen, updateSearchBar_multiselect() gets which checkboxes are checked, and adds them to a comma delimited string and adds that string to the search box. search_bar.dispatchEvent(new Event('change')); triggers fancyTable isSearchMatch\nfunction updateSearchBar_multiselect(){\n checkElements = document.getElementsByClassName(\"multiSelectCheckbox\")\n toSearch = \"\"\n for(var i = 0; i < checkElements.length; i++){\n if (checkElements[i].checked) {\n //console.log(\"checkElements[i]\",checkElements[i])\n type = checkElements[i].id\n toSearch = toSearch.concat(type, ',')\n }\n }\n var search_bar = document.getElementById(\"searchID\");\n search_bar.value = toSearch;\n search_bar.dispatchEvent(new Event('change'));\n}\n\nwithin isSearchMatch:\nisFilter = false\nif (search.includes(\",\")) {\n searchArray = search.split(\",\");\n isFilter = true\n}\nconsole.log(\"searchArray\",searchArray)\nif (isFilter) {\n console.log(\"filtering\")\n if (data.includes(\">PRESCREEN<\") && searchArray.includes(\"PRESCREEN\")) {\n index = searchArray.indexOf('PRESCREEN')\n return getReturn(searchArray[index],data)\n }\n else if (data.includes(\">SCREENING<\") && searchArray.includes(\"SCREENING\")) {\n index = searchArray.indexOf('SCREENING')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">MONITOR<\") && searchArray.includes(\"MONITOR\")) {\n index = searchArray.indexOf('MONITOR')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">PORTFOLIO<\") && searchArray.includes(\"PORTFOLIO\")) {\n index = 
searchArray.indexOf('PORTFOLIO')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">NOSTAGE<\") && searchArray.includes(\"NOSTAGE\")) {\n index = searchArray.indexOf('NOSTAGE')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">DUE DILIGENCE<\") && searchArray.includes(\"DUE DILIGENCE\")) {\n index = searchArray.indexOf('DUE DILIGENCE')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">PASSED - DUE DILIGENCE<\") && searchArray.includes(\"PASSED - DUE DILIGENCE\")) {\n index = searchArray.indexOf('PASSED - DUE DILIGENCE')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">HOLD<\") && searchArray.includes(\"HOLD\")) {\n index = searchArray.indexOf('HOLD')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">DUE DILIGENCE 2<\") && searchArray.includes(\"DUE DILIGENCE 2\")) {\n index = searchArray.indexOf('DUE DILIGENCE 2')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">FEEDER PORTFOLIO<\") && searchArray.includes(\"FEEDER PORTFOLIO\")) {\n index = searchArray.indexOf('FEEDER PORTFOLIO')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">NOT CURRENTLY RAISING<\") && searchArray.includes(\"NOT CURRENTLY RAISING\")) {\n index = searchArray.indexOf('NOT CURRENTLY RAISING')\n return getReturn(searchArray[index], data)\n }\n else if (data.includes(\">SAVE FOR FUND 2<\") && searchArray.includes(\"SAVE FOR FUND 2\")) {\n index = searchArray.indexOf('SAVE FOR FUND 2')\n return getReturn(searchArray[index], data)\n }\n\n else {\n return(false)\n }\n}\nreturn (settings.exactMatch === true) ? (data==search) : (new RegExp(search).test(data));\n}\ncatch {\nreturn (settings.exactMatch === true) ? 
(data==search) : (new RegExp(search).test(data));\n}\n\nand getReturn():\nfunction getReturn(substr,data) {\n splitData = data.split(\">\")[1]\n splitData2 = splitData.split(\"<\")[0]\n if (splitData2 == substr) {\n return (true);\n }\n if (splitData2.toLowerCase().includes(substr.toLowerCase())) {\n return (false);\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"datatables",
"django",
"frontend",
"javascript",
"jquery"
] |
stackoverflow_0074621725_datatables_django_frontend_javascript_jquery.txt
|
Q:
Python program not working - simple mathematic function
cat Prog4CCM.py
numberArray = []
count = 0
#filename = input("Please enter the file name: ")
filename = "t.txt" # for testing purposes
file = open(filename, "r")
for each_line in file:
numberArray.append(each_line)
for i in numberArray:
print(i)
count = count + 1
def findMaxValue(numberArray, count):
maxval = numberArray[0]
for i in range(0, count):
if numberArray[i] > maxval:
maxval = numberArray[i]
return maxval
def findMinValue(numberArray, count):
minval = numberArray[0]
for i in range(0, count):
if numberArray[i] < minval:
minval = numberArray[i]
return minval
def findFirstOccurence(numberArray, vtf, count):
for i in range(0, count):
if numberArray[i] == vtf:
return i
break
i = i + 1
# Function calls start
print("The maxiumum value in the file is "+ str(findMaxValue(numberArray, count)))
print("The minimum value in the file is "+str(findMinValue(numberArray, count)))
vtf = input("Please insert the number you would like to find the first occurence of: ")
print("First occurence is at "+str(findFirstOccurence(numberArray, vtf, count)))
This is supposed to call a function (Find First Occurrence) and check for the first occurrence in my array.
It should return a proper value, but just returns "None". Why might this be?
The file reading, and max and min value all seem to work perfectly.
A:
At a quick glance, the function findFirstOccurence is missing a return statement. If you want us to help you debug the code in detail, you may need to provide your test data, such as t.txt.
A:
You forgot to add a return in the findFirstOccurence() function for the case where vtf is not in the list. Also, incrementing the iterator yourself is an error, and the break is unnecessary; the for loop handles both for you.
The correct code would look like this:
...
def findFirstOccurence(numberArray, vtf, count):
for i in range(0, count):
if numberArray[i] == vtf:
return i
# break # <==
# i = i + 1 # It's errors
return "Can't find =("
# Function calls start
print("The maxiumum value in the file is "+ str(findMaxValue(numberArray, count)))
print("The minimum value in the file is "+str(findMinValue(numberArray, count)))
vtf = input("Please insert the number you would like to find the first occurence of: ")
print("First occurence is at "+str(findFirstOccurence(numberArray, vtf, count)))
|
Python program not working - simple mathematic function
|
cat Prog4CCM.py
numberArray = []
count = 0
#filename = input("Please enter the file name: ")
filename = "t.txt" # for testing purposes
file = open(filename, "r")
for each_line in file:
numberArray.append(each_line)
for i in numberArray:
print(i)
count = count + 1
def findMaxValue(numberArray, count):
maxval = numberArray[0]
for i in range(0, count):
if numberArray[i] > maxval:
maxval = numberArray[i]
return maxval
def findMinValue(numberArray, count):
minval = numberArray[0]
for i in range(0, count):
if numberArray[i] < minval:
minval = numberArray[i]
return minval
def findFirstOccurence(numberArray, vtf, count):
for i in range(0, count):
if numberArray[i] == vtf:
return i
break
i = i + 1
# Function calls start
print("The maxiumum value in the file is "+ str(findMaxValue(numberArray, count)))
print("The minimum value in the file is "+str(findMinValue(numberArray, count)))
vtf = input("Please insert the number you would like to find the first occurence of: ")
print("First occurence is at "+str(findFirstOccurence(numberArray, vtf, count)))
This is supposed to call a function (Find First Occurrence) and check for the first occurrence in my array.
It should return a proper value, but just returns "None". Why might this be?
The file reading, and max and min value all seem to work perfectly.
|
[
"At a quick glance, the function findFirstOccurence miss return statement. If you want us to help you debug the code in detail, you may need to provide your test data, like t.txt\n",
"You forgot to add a return in the findFirstOccurence() function, in case the vtf response is not in the list and there is an error with adding one to the iterator and use break, the for loop will do that for you.\nThe correct code would look like this:\n...\n\ndef findFirstOccurence(numberArray, vtf, count):\n for i in range(0, count):\n if numberArray[i] == vtf:\n return i\n # break # <==\n # i = i + 1 # It's errors\n return \"Can't find =(\"\n\n\n# Function calls start\n\n\nprint(\"The maxiumum value in the file is \"+ str(findMaxValue(numberArray, count)))\nprint(\"The minimum value in the file is \"+str(findMinValue(numberArray, count)))\n\nvtf = input(\"Please insert the number you would like to find the first occurence of: \")\nprint(\"First occurence is at \"+str(findFirstOccurence(numberArray, vtf, count)))\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"algorithm",
"python",
"python_3.x"
] |
stackoverflow_0074663165_algorithm_python_python_3.x.txt
|
Q:
Prometheus Absent function
I want to check if a certain metric has been unavailable in Prometheus for 5 minutes.
I am using absent(K_KA_GCPP) with a 5-minute threshold, but it seems I cannot group the absent function by certain labels like Site ID.
absent works if the metric is unavailable for all 4 site IDs. I want to find out if the metric is unavailable, or absent, for 1 site ID out of the 4, and I don't want to hardcode the site ID labels in the query; it should be generic. Is there any way I can do that?
A:
I was able to achieve this by doing something like this:
count(up{job="prometheus"} offset 1h) by (project) unless count(up{job="prometheus"} ) by (project)
If the metric is missing in the last 1 hour, it will trigger an alert.
You can add any labels you need after the by section (that's helpful in alerting, for example).
Source: Prometheus Alert for missing metrics and labels
A:
This task can be solved with lag function from MetricsQL. For example, the following query returns time series, which received samples during the last 24 hours, but had no new samples during the last hour:
lag(K_KA_GCPP[24h]) > 1h
See MetricsQL docs for more details.
A:
The offset approach feels like a great starting point, but it has a big weakness: if there's no sample at time - offset, then the query doesn't return what you'd like.
I reworked the answer from Ahmed to this:
group(present_over_time(myMetric{label1="asd"}[3h])) by (labels) unless group(myMetric{label1="asd"}) by (labels)
using period with present_over_time should fix that aforementioned problem
group() aggregation, since you don't need the value
also I like to use the actual metric, since up{} is a state of the scraped target, not the "metric is present" information which I feel might not be equivalent
A:
There exists the Prometheus absent_over_time function
A:
You can use it as a group! See how to configure an alert rule group.
You can also use absent_over_time function
absent returns just one result as it is for a single site ID in your case
absent(<expr>)
Returns an empty vector if the vector passed to it has any elements and a 1-element vector with the value 1 if the vector passed to it has no elements. This is useful for alerting on when no time series exist for a given metric name and label combination.
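The semantics quoted above can be mimicked with a tiny Python sketch (illustrative only, not how Prometheus is implemented):

```python
def absent(vector):
    # Mirrors PromQL absent(): a 1-element vector with value 1 when the
    # input vector has no elements, otherwise an empty vector
    return [] if vector else [1]

print(absent([]))        # [1]  -> the metric is missing, alert fires
print(absent([0.5, 2]))  # []   -> samples exist, no alert
```

This also illustrates why absent() cannot be grouped by label: once the input vector is empty, there are no labels left to group by.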
|
Prometheus Absent function
|
I want to check if a certain metric is not available in prometheus for 5 minute.
I am using absent(K_KA_GCPP) and giving a 5 minute threshold. But it seems I cannot group the absent function on certain labels like Site Id.
Absent works if the metric is not available for all 4 site Ids. I want to find out if the metric is not available or absent for 1 site id out of all 4 and I don't want to hardcode the site Id labels in the query, it should be generic. Is there any way I can do that?
|
[
"I was able to achive this by doing somthing like this:\ncount(up{job=\"prometheus\"} offset 1h) by (project) unless count(up{job=\"prometheus\"} ) by (project)\n\nIf the metric is missing in the last 1 hour, it will trigger an alert.\nYou can add any labels you need after the by section (that's helpful in altering for example).\nSource: Prometheus Alert for missing metrics and labels\n",
"This task can be solved with lag function from MetricsQL. For example, the following query returns time series, which received samples during the last 24 hours, but had no new samples during the last hour:\nlag(K_KA_GCPP[24h]) > 1h\n\nSee MetricsQL docs for more details.\n",
"The offset I feel like is a great starting point, but it has a big weakness. If there's no sample in the time - offset then your query doesn't return what you'd like to.\nI reworked the answer from Ahmed to this:\ngroup(present_over_time(myMetric{label1=\"asd\"}[3h])) by (labels) unless group(myMetric{label1=\"asd\"}) by (labels)\n\n\nusing period with present_over_time should fix that aforementioned problem\ngroup() aggregation, since you don't need the value\nalso I like to use the actual metric, since up{} is a state of the scraped target, not the \"metric is present\" information which I feel might not be equivalent\n\n",
"There exists the Prometheus absent_over_time function\n",
"You can use it as a group! see how to configure an alert rule group\n\nYou can also use absent_over_time function\n\nabsent returns just one result as it is for a single site ID in your case\n absent(<expr>)\n\nReturns an empty vector if the vector passed to it has any elements and a 1-element vector with the value 1 if the vector passed to it has no elements. This is useful for alerting on when no time series exist for a given metric name and label combination.\n"
] |
[
2,
0,
0,
0,
0
] |
[] |
[] |
[
"prometheus",
"prometheus_alertmanager",
"promql"
] |
stackoverflow_0053191746_prometheus_prometheus_alertmanager_promql.txt
|
Q:
backface-visibility: hidden not working in grid on Safari
I'm very close to having solved this problem, by nesting a 1:1 grid instead of using flex and position: absolute; however, while it seems to really, really work on Chrome and Firefox, on Safari my backface is visible:
Super curiously, in dark mode, it appears to briefly work properly, before the backside takes over:
How do I make it so that my "flipped" card only shows the correct content? Can I Use seems to think backface-visibility has been supported in Safari ~ forever. Is this a new bug? Am I doing something wrong?
(FYI, I'm using Safari 16.0)
Fiddle:
.flip {
perspective: 600;
display: flex;
}
.flip-content {
display: grid;
grid-template-columns: repeat(1, minmax(0, 1fr));
grid-template-rows: repeat(1, minmax(0, 1fr));
margin-left: auto;
margin-right: auto;
transition: 0.4s;
transform-style: preserve-3d;
width: 100%;
}
.flip:hover .flip-content {
transform: rotateY(180deg);
transition: transform 0.3s;
}
.flip-front,
.flip-back {
backface-visibility: hidden;
grid-column-start: 1;
grid-row-start: 1;
margin-left: auto;
margin-right: auto;
max-width: 24rem;
overflow: hidden;
width: 100%;
}
.flip-back {
transform: rotateY(180deg);
}
@media (prefers-color-scheme: dark) {
.flip-front,
.flip-back {
background-color: rgb(30, 41, 59);
color: rgb(226, 232, 240);
}
}
.container {
width: 450px;
}
<div class="container">
<div class="flip">
<div class="flip-content">
<div class="flip-front">
<h2>Step 1:<br>Lorem ipsum dolor sit amet</h2>
</div>
<div class="flip-back">
<p>Maecenas justo purus, semper id feugiat in, ornare vel urna. Pellentesque maximus tortor metus, eu posuere velit ullamcorper sit amet.</p>
</div>
</div>
</div>
</div>
A:
I ran into a similar issue on Safari, and the janky workaround I came up with was to adjust the opacity of front/back on transform. Seems to work fine in both Safari/Chrome, and appears to work with dark mode as well.
.flip {
perspective: 600;
display: flex;
}
.flip-content {
display: grid;
grid-template-columns: repeat(1, minmax(0, 1fr));
grid-template-rows: repeat(1, minmax(0, 1fr));
margin-left: auto;
margin-right: auto;
transition: 0.4s;
transform-style: preserve-3d;
width: 100%;
}
.flip:hover .flip-content {
transform: rotateY(180deg);
transition: transform 0.3s;
}
.flip-front,
.flip-back {
backface-visibility: hidden;
grid-column-start: 1;
grid-row-start: 1;
margin-left: auto;
margin-right: auto;
max-width: 24rem;
overflow: hidden;
width: 100%;
}
.flip-front {
opacity: 100%;
}
.flip-back {
transform: rotateY(180deg);
opacity: 0%;
}
.flip:hover .flip-content .flip-front {
opacity: 0%;
}
.flip:hover .flip-content .flip-back {
opacity: 100%;
}
@media (prefers-color-scheme: dark) {
.flip-front,
.flip-back {
background-color: rgb(30, 41, 59);
color: rgb(226, 232, 240);
}
}
.container {
width: 450px;
}
<div class="container">
<div class="flip">
<div class="flip-content">
<div class="flip-front">
<h2>Step 1:<br>Lorem ipsum dolor sit amet</h2>
</div>
<div class="flip-back">
<p>Maecenas justo purus, semper id feugiat in, ornare vel urna. Pellentesque maximus tortor metus, eu posuere velit ullamcorper sit amet.</p>
</div>
</div>
</div>
</div>
|
backface-visibility: hidden not working in grid on Safari
|
I'm very close to having solved this problem, by nesting a 1:1 grid instead of using flex and position: absolute; however, while it seems to really, really work on Chrome and Firefox, on Safari my backface is visible:
Super curiously, in dark mode, it appears to briefly work properly, before the backside takes over:
How do I make it so that my "flipped" card only shows the correct content? Can I Use seems to think backface-visibility has been supported in Safari ~ forever. Is this a new bug? Am I doing something wrong?
(FYI, I'm using Safari 16.0)
Fiddle:
.flip {
perspective: 600;
display: flex;
}
.flip-content {
display: grid;
grid-template-columns: repeat(1, minmax(0, 1fr));
grid-template-rows: repeat(1, minmax(0, 1fr));
margin-left: auto;
margin-right: auto;
transition: 0.4s;
transform-style: preserve-3d;
width: 100%;
}
.flip:hover .flip-content {
transform: rotateY(180deg);
transition: transform 0.3s;
}
.flip-front,
.flip-back {
backface-visibility: hidden;
grid-column-start: 1;
grid-row-start: 1;
margin-left: auto;
margin-right: auto;
max-width: 24rem;
overflow: hidden;
width: 100%;
}
.flip-back {
transform: rotateY(180deg);
}
@media (prefers-color-scheme: dark) {
.flip-front,
.flip-back {
background-color: rgb(30, 41, 59);
color: rgb(226, 232, 240);
}
}
.container {
width: 450px;
}
<div class="container">
<div class="flip">
<div class="flip-content">
<div class="flip-front">
<h2>Step 1:<br>Lorem ipsum dolor sit amet</h2>
</div>
<div class="flip-back">
<p>Maecenas justo purus, semper id feugiat in, ornare vel urna. Pellentesque maximus tortor metus, eu posuere velit ullamcorper sit amet.</p>
</div>
</div>
</div>
</div>
|
[
"I ran into a similar issue on Safari, and the janky workaround I came up with was to adjust the opacity of front/back on transform. Seems to work fine in both Safari/Chrome, and appears to work with dark mode as well.\n\n\n.flip {\n perspective: 600;\n display: flex;\n}\n\n.flip-content {\n display: grid;\n grid-template-columns: repeat(1, minmax(0, 1fr));\n grid-template-rows: repeat(1, minmax(0, 1fr));\n margin-left: auto;\n margin-right: auto;\n transition: 0.4s;\n transform-style: preserve-3d;\n width: 100%;\n}\n\n.flip:hover .flip-content {\n transform: rotateY(180deg);\n transition: transform 0.3s;\n}\n\n.flip-front,\n.flip-back {\n backface-visibility: hidden;\n grid-column-start: 1;\n grid-row-start: 1;\n margin-left: auto;\n margin-right: auto;\n max-width: 24rem;\n overflow: hidden;\n width: 100%;\n}\n\n.flip-front {\n opacity: 100%;\n}\n\n.flip-back {\n transform: rotateY(180deg);\n opacity: 0%;\n}\n\n.flip:hover .flip-content .flip-front {\n opacity: 0%;\n}\n\n.flip:hover .flip-content .flip-back {\n opacity: 100%;\n}\n\n@media (prefers-color-scheme: dark) {\n .flip-front,\n .flip-back {\n background-color: rgb(30, 41, 59);\n color: rgb(226, 232, 240);\n }\n}\n\n.container {\n width: 450px;\n}\n<div class=\"container\">\n <div class=\"flip\">\n <div class=\"flip-content\">\n <div class=\"flip-front\">\n <h2>Step 1:<br>Lorem ipsum dolor sit amet</h2>\n </div>\n <div class=\"flip-back\">\n <p>Maecenas justo purus, semper id feugiat in, ornare vel urna. Pellentesque maximus tortor metus, eu posuere velit ullamcorper sit amet.</p>\n </div>\n </div>\n </div>\n</div>\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"css"
] |
stackoverflow_0074189983_css.txt
|
Q:
Strapi - Media Library frontend side upload with user-specific access
I want to let authorized end users (non-admin) upload images to the media library straight from frontend (with no view restrictions - just public after upload).
How can I let the user see only his own folder during upload? Can I create a folder with ID of the user? How to create a folder programmatically? Is it even possible? Or I gotta use Cloudinary or S3?
A:
To extend the functionality of a Strapi plugin, we need to copy the needed plugin from node_modules to src/extensions.
In this particular example, we just want to modify add() a little:
src\extensions\upload\services\upload.js
async add(values, { user } = {}) {
const fileValues = { ...values };
if (user) {
fileValues[UPDATED_BY_ATTRIBUTE] = user.id;
fileValues[CREATED_BY_ATTRIBUTE] = user.id;
fileValues.username = user.username; // <<< Here
}
sendMediaMetrics(fileValues);
const res = await strapi.query(FILE_MODEL_UID).create({ data: fileValues });
await this.emitEvent(MEDIA_CREATE, res);
return res;
},
Maybe it won't be enough; I just don't remember all the steps, but I think this will do the trick. If not, just go through the chain of invocations and make sure user.username is not undefined at this stage, and it'll work.
|
Strapi - Media Library frontend side upload with user-specific access
|
I want to let authorized end users (non-admin) upload images to the media library straight from frontend (with no view restrictions - just public after upload).
How can I let the user see only his own folder during upload? Can I create a folder with ID of the user? How to create a folder programmatically? Is it even possible? Or I gotta use Cloudinary or S3?
|
[
"To extend functionality of Strapi plugin we need to copy the needed plugin from node_modules to src/extensions\nIn this particular example, we just want to modify `add() a little\nsrc\\extensions\\upload\\services\\upload.js\nasync add(values, { user } = {}) {\n const fileValues = { ...values };\n\n if (user) {\n fileValues[UPDATED_BY_ATTRIBUTE] = user.id;\n fileValues[CREATED_BY_ATTRIBUTE] = user.id;\n fileValues.username = user.username; // <<< Here\n }\n sendMediaMetrics(fileValues);\n\n const res = await strapi.query(FILE_MODEL_UID).create({ data: fileValues });\n\n await this.emitEvent(MEDIA_CREATE, res);\n\n return res;\n},\n\nMaybe it won't be enough, I just don't remember all the steps, but I think this would do the trick. If not - just go through the chain of invocations and make sure user.username at this stage is not undefined and it'll work.\n"
] |
[
1
] |
[] |
[] |
[
"strapi",
"upload"
] |
stackoverflow_0073260933_strapi_upload.txt
|
Q:
Check time interval between multiple times
Table: Booking from MySQL
┌────┬────────────────┬───────┬───────────┬─────────────┐
│ ID │ Schedule Date │ Hour │ Service │ Client Name │
├────┼────────────────┼───────┼───────────┼─────────────┤
│ 1 │ 02/12/2022 │ 13:00 │ Service 3 │ Connan │
│ 2 │ 02/12/2022 │ 14:00 │ Service 2 │ Dean │
│ 3 │ 02/12/2022 │ 15:00 │ Service 1 │ Holmes │
│ 4 │ 02/12/2022 │ 15:30 │ Service 3 │ Hyuk │
└────┴────────────────┴───────┴───────────┴─────────────┘
Table: Services from MySQL
┌────┬──────────────┬──────────┐
│ ID │ Service Name │ Duration │
├────┼──────────────┼──────────┤
│ 1 │ Service 1 │ 0:15 │
│ 2 │ Service 2 │ 0:30 │
│ 3 │ Service 3 │ 1:00 │
└────┴──────────────┴──────────┘
Service 1 can be scheduled at 15:15: the interval between 15:15 and 15:30 is 15 minutes, making 15:15 available to book a service with a 0:15 duration.
I would like to do a per-service check: if the customer wants to schedule Service 1, check the schedule for which times are available for Service 1, and so on.
A:
To check if a service can be scheduled at a specific time, you can use the following steps:
Query the Booking table to get a list of all the bookings on the same day as the desired time.
For each booking, calculate the start and end times of the booking by adding the duration of the service to the booking's Hour field.
If the desired time falls within the start and end times of any of the bookings, the service cannot be scheduled at that time. Otherwise, the service can be scheduled.
Here is an example of how this can be implemented in SQL:
SELECT * FROM Booking b
JOIN Services s ON s.ServiceName = b.Service
WHERE DATE(b.ScheduleDate) = DATE(@desiredTime)
AND @desiredTime >= TIMESTAMP(DATE(b.ScheduleDate), b.Hour)
AND @desiredTime < ADDTIME(TIMESTAMP(DATE(b.ScheduleDate), b.Hour), s.Duration)
In this query, @desiredTime is a parameter representing the time at which the service is to be scheduled. The query joins each booking to its service to obtain the duration, filters the Booking table to bookings on the same day as @desiredTime, and then checks whether the desired time falls inside any booking's [start, start + duration) interval. Note that full timestamps are compared rather than hours and minutes separately; comparing HOUR and MINUTE independently fails whenever the minutes wrap across an hour boundary. If any rows are returned by the query, the service cannot be scheduled at the desired time. Otherwise, the service can be scheduled.
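The underlying interval check can also be sketched in plain Python, using the question's sample data (is_slot_free is a hypothetical helper name):

```python
from datetime import datetime, timedelta

def is_slot_free(desired, bookings):
    # bookings: list of (start: datetime, duration: timedelta) pairs
    for start, duration in bookings:
        # A slot is blocked when it falls inside [start, start + duration)
        if start <= desired < start + duration:
            return False
    return True

bookings = [
    (datetime(2022, 12, 2, 15, 0), timedelta(minutes=15)),   # Service 1 at 15:00
    (datetime(2022, 12, 2, 15, 30), timedelta(hours=1)),     # Service 3 at 15:30
]
print(is_slot_free(datetime(2022, 12, 2, 15, 15), bookings))  # True
print(is_slot_free(datetime(2022, 12, 2, 15, 45), bookings))  # False
```

This reproduces the question's example: 15:15 is free (the 15:00 booking ends exactly then), while 15:45 collides with the 15:30 booking.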
|
Check time interval between multiple times
|
Table: Booking from MySQL
┌────┬────────────────┬───────┬───────────┬─────────────┐
│ ID │ Schedule Date │ Hour │ Service │ Client Name │
├────┼────────────────┼───────┼───────────┼─────────────┤
│ 1 │ 02/12/2022 │ 13:00 │ Service 3 │ Connan │
│ 2 │ 02/12/2022 │ 14:00 │ Service 2 │ Dean │
│ 3 │ 02/12/2022 │ 15:00 │ Service 1 │ Holmes │
│ 4 │ 02/12/2022 │ 15:30 │ Service 3 │ Hyuk │
└────┴────────────────┴───────┴───────────┴─────────────┘
Table: Services from MySQL
┌────┬──────────────┬──────────┐
│ ID │ Service Name │ Duration │
├────┼──────────────┼──────────┤
│ 1 │ Service 1 │ 0:15 │
│ 2 │ Service 2 │ 0:30 │
│ 3 │ Service 3 │ 1:00 │
└────┴──────────────┴──────────┘
Can be scheduled a service 1 at 15:15.
interval beteween 15:15 and 15:30 is 15 minutes,
making the 15:15 available to book a service with 0:15 duration.
I would like to do a per-service check. If the customer wants to schedule service 1, he will check in the schedules which times are available for service 1, and so on.
|
[
"To check if a service can be scheduled at a specific time, you can use the following steps:\nQuery the Booking table to get a list of all the bookings on the same day as the desired time.\nFor each booking, calculate the start and end times of the booking by adding the duration of the service to the booking's Hour field.\nIf the desired time falls within the start and end times of any of the bookings, the service cannot be scheduled at that time. Otherwise, the service can be scheduled.\nHere is an example of how this can be implemented in SQL:\nSELECT * FROM Booking\nWHERE DATE(ScheduleDate) = DATE(@desiredTime)\nAND (\n HOUR(@desiredTime) >= HOUR(Booking.Hour) AND\n MINUTE(@desiredTime) >= MINUTE(Booking.Hour) AND\n HOUR(@desiredTime) <= HOUR(DATE_ADD(Booking.Hour, INTERVAL Service.Duration HOUR_SECOND)) AND\n MINUTE(@desiredTime) <= MINUTE(DATE_ADD(Booking.Hour, INTERVAL Service.Duration HOUR_SECOND))\n)\n\nIn this query, @desiredTime is a parameter representing the time at which the service is to be scheduled. The query first filters the Booking table to only include bookings on the same day as @desiredTime, and then checks if the desired time falls within the start and end times of any of the bookings. If any rows are returned by the query, it means that the service cannot be scheduled at the desired time. Otherwise, the service can be scheduled.\n"
] |
[
-1
] |
[] |
[] |
[
"javascript",
"mysql",
"node.js"
] |
stackoverflow_0074662849_javascript_mysql_node.js.txt
|
Q:
(matplot, 3d, plot_surface, Animation) How can I freeze the z axis from moving in the animation
I want to make an animation of a drum vibration in python. My problem is that the zero point of the z_axis keeps moving. How can I freeze the z_axis? video link
fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
z=sol[0]
def init():
surf,=ax.plot_surface(xv,yv,z0)
ax.set_xlim(0,1)
ax.set_ylim(0,1)
#ax.set_zlim3d(-1,1)
ax.set_zlim(-1,1)
return surf
def animate(i):
u=add_boundry(z[i+1])
ax.clear()
surf=ax.plot_surface(xv, yv, u)
return surf
anim = FuncAnimation(fig, animate,frames=200, interval=200, blit=False)
anim.save('drum.gif', writer='ffmpeg')
A:
I found a solution.
fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
z=sol[0]
z0=add_boundry(z[0])
#surf=ax.plot_surface(np.empty_like(xv),np.empty_like(yv),np.empty_like(z0))
def init():
surf=ax.plot_surface(xv,yv,z0)
ax.set_xlim(0,1)
ax.set_ylim(0,1)
ax.set_zlim(-0.2,0.2)
ax._autoscaleZon = False
return surf
def animate(i):
u=add_boundry(z[i+1])
ax.clear()
surf=ax.plot_surface(xv, yv, u)
ax.set_xlim(0,1)
ax.set_ylim(0,1)
ax.set_zlim(-0.2,0.2)
ax._autoscaleZon = False
return surf
anim = FuncAnimation(fig, animate, init_func=init, frames=200, interval=200, blit=False)
anim.save('drum.gif', writer='ffmpeg')
|
(matplot, 3d, plot_surface, Animation) How can I freeze the z axis from moving in the animation
|
I want to make an animation of a drum vibration in python. My problem is that the zero point of the z_axis keeps moving. How can I freeze the z_axis? video link
fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
z=sol[0]
def init():
surf,=ax.plot_surface(xv,yv,z0)
ax.set_xlim(0,1)
ax.set_ylim(0,1)
#ax.set_zlim3d(-1,1)
ax.set_zlim(-1,1)
return surf
def animate(i):
u=add_boundry(z[i+1])
ax.clear()
surf=ax.plot_surface(xv, yv, u)
return surf
anim = FuncAnimation(fig, animate,frames=200, interval=200, blit=False)
anim.save('drum.gif', writer='ffmpeg')
|
[
"I find a solution.\nfig, ax = plt.subplots(subplot_kw={\"projection\": \"3d\"})\nz=sol[0]\nz0=add_boundry(z[0])\n#surf=ax.plot_surface(np.empty_like(xv),np.empty_like(yv),np.empty_like(z0))\n\ndef init():\n surf=ax.plot_surface(xv,yv,z0)\n ax.set_xlim(0,1)\n ax.set_ylim(0,1)\n ax.set_zlim(-0.2,0.2)\n ax._autoscaleZon = False\n return surf\ndef animate(i):\n u=add_boundry(z[i+1])\n ax.clear()\n surf=ax.plot_surface(xv, yv, u)\n ax.set_xlim(0,1)\n ax.set_ylim(0,1)\n ax.set_zlim(-0.2,0.2)\n ax._autoscaleZon = False\n return surf\n\n\n \nanim = FuncAnimation(fig, animate, init_func=init, frames=200, interval=200, blit=False)\n\n\nanim.save('drum.gif', writer='ffmpeg')\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"matplotlib_animation",
"python"
] |
stackoverflow_0074662936_matplotlib_matplotlib_animation_python.txt
|
Q:
Solidity: How to pass an array of addresses from one contract (NFT) to another one (not NFT) for later use?
I have the simplest NFT code. My task is to take an array of buyers (wallet addresses) of this NFT when it's minted and to pass it (the array, or the individual addresses) to another contract of mine, so I can take further action with them. The answer is... HOW?
I'm new to programming, so please be gentle with me ^_^
Thank you in advance!
Andrew
Nft code ->
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.8;
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
contract Nft is ERC721 {
string public constant TOKEN_URI =
"ipfs://...";
uint256 private s_tokenCounter;
constructor() ERC721("NFT", "NFT") {
s_tokenCounter = 0;
}
function mintNft() public {
s_tokenCounter = s_tokenCounter + 1;
_safeMint(msg.sender, s_tokenCounter);
}
function tokenURI(uint256 tokenId) public view override returns (string memory) {
// require(_exists(tokenId), "ERC721Metadata: URI query for nonexistent token");
return TOKEN_URI;
}
function getTokenCounter() public view returns (uint256) {
return s_tokenCounter;
}
}
I tried to build a getter function but got lost in the code and advice.
Tried to import the NFT contract into an executive contract...
Full of mistakes and disappointment.
A:
Here is your contract with an array of addresses, and its interface.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;
interface InterfaceParentContract {
function viewMyArr() external view returns(address[] memory);
}
contract ParentContract {
address[] public myArr;
constructor () {
myArr.push(0xAb8483F64d9C6d1EcF9b849Ae677dD3315835cb2);
myArr.push(0x4B20993Bc481177ec7E8f571ceCaE8A9e22C02db);
myArr.push(0x78731D3Ca6b7E34aC0F824c42a7cC18A495cabaB);
}
function viewMyArr() external view returns(address[] memory) {
return myArr;
}
}
And here we have another smart contract that interacts with the first one via the interface (InterfaceParentContract):
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;
import "./ParentContract.sol";
contract Child {
address[] public newArr;
address parentContract;
constructor(address _address) {
parentContract = _address;
}
function smth() public {
InterfaceParentContract b = InterfaceParentContract(parentContract);
newArr = b.viewMyArr();
}
}
So, after calling the smth() function, your array is copied into the other contract, where you can operate on it.
A:
Petr gave an awesome answer! I made this video showing that it works as expected. If you try to read 'newArr' from the ChildContract BEFORE you execute smth(), it's an error. But if you execute smth(), then it copies the address from the ParentContract and 'newArr' becomes readable within the ChildContract. Check it out here: https://www.youtube.com/watch?v=1l7fqgN2yrU
|
Solidity: How to pass an array of addresses from one contract (NFT) to another one (not NFT) for later use?
|
I have the simplest NFT code. My task is to take an array of buyers (wallet addresses) of this NFT when it's minted and to pass it (the array, or the individual addresses) to another contract of mine, so I can take further action with them. The answer is... HOW?
I'm new to programming, so please be gentle with me ^_^
Thank you in advance!
Andrew
Nft code ->
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.8;
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
contract Nft is ERC721 {
string public constant TOKEN_URI =
"ipfs://...";
uint256 private s_tokenCounter;
constructor() ERC721("NFT", "NFT") {
s_tokenCounter = 0;
}
function mintNft() public {
s_tokenCounter = s_tokenCounter + 1;
_safeMint(msg.sender, s_tokenCounter);
}
function tokenURI(uint256 tokenId) public view override returns (string memory) {
// require(_exists(tokenId), "ERC721Metadata: URI query for nonexistent token");
return TOKEN_URI;
}
function getTokenCounter() public view returns (uint256) {
return s_tokenCounter;
}
}
I tried to build a getter function but got lost in the code and advice.
Tried to import the NFT contract into an executive contract...
Full of mistakes and disappointment.
|
[
"Here is your contract with array of addresses and its interface.\n// SPDX-License-Identifier: MIT\npragma solidity ^0.8.17;\n\ninterface InterfaceParentContract {\n\n function viewMyArr() external view returns(address[] memory);\n\n} \n\ncontract ParentContract {\n address[] public myArr; \n\n constructor () {\n myArr.push(0xAb8483F64d9C6d1EcF9b849Ae677dD3315835cb2);\n myArr.push(0x4B20993Bc481177ec7E8f571ceCaE8A9e22C02db);\n myArr.push(0x78731D3Ca6b7E34aC0F824c42a7cC18A495cabaB);\n } \n\n function viewMyArr() external view returns(address[] memory) {\n return myArr; \n }\n}\n\nAnd here we have another smart contract that interact with first one via interface(InterfaceParentContract)\n// SPDX-License-Identifier: MIT\npragma solidity ^0.8.17;\n\nimport \"./ParentContract.sol\"; \n\ncontract Child {\n\n address[] public newArr; \n\n address parentContract; \n\n constructor(address _address) {\n parentContract = _address; \n }\n\n function smth() public {\n InterfaceParentContract b = InterfaceParentContract(parentContract); \n newArr = b.viewMyArr(); \n }\n}\n\nSo, after using smth() function your array now copied into other contract where you can operate with it.\n",
"Petr gave an awesome answer! I made this video showing that it works as expected. If you try to read 'newArr' from the ChildContract BEFORE you execute smth(), it's an error. But if you execute smth(), then it copies the address from the ParentContract and 'newArr' becomes readable within the ChildContract. Check it out here: https://www.youtube.com/watch?v=1l7fqgN2yrU\n"
] |
[
1,
0
] |
[] |
[] |
[
"ethereum",
"import",
"nft",
"smartcontracts",
"solidity"
] |
stackoverflow_0074644003_ethereum_import_nft_smartcontracts_solidity.txt
|
Q:
How to open the new tab using Playwright (ex. click the button to open the new section in a new tab)
I am looking for a simpler solution to a current situation. For example, you open Google (or any other website) and you want, BY CLICKING a button (e.g. Gmail), to open that page in a new tab using Playwright.
let browser, page, context;
describe('Check the main page view', function () {
before(async () => {
for (const browserType of ['chromium']) {
browser = await playwright[browserType].launch({headless: false});
context = await browser.newContext();
page = await context.newPage();
await page.goto(baseUrl);
}
});
after(async function () {
browser.close();
});
await page.click(tax);
const taxPage = await page.getAttribute(taxAccount, 'href');
const [newPage] = await Promise.all([
context.waitForEvent('page'),
page.evaluate((taxPage) => window.open(taxPage, '_blank'), taxPage)]);
await newPage.waitForLoadState();
console.log(await newPage.title());
A:
it('Open a new tab', async function () {
await page.click(button, { button: "middle" });
await page.waitForTimeout(2000); //waitForNavigation and waitForLoadState do not work in this case
let pages = await context.pages();
    expect(await pages[1].title()).equal('Title');
});
A:
You could pass a modifier to the click function. On macOS it would be Meta, because you'd open in a new tab with Cmd+click. On Windows it would be Control.
const browser = await playwright["chromium"].launch({headless : false});
const page = await browser.newPage();
await page.goto('https://www.facebook.com/');
var pagePromise = page.context().waitForEvent('page', p => p.url() =='https://www.messenger.com/');
await page.click('text=Messenger', { modifiers: ['Meta']});
const newPage = await pagePromise;
await newPage.bringToFront();
await browser.close();
A:
In my case I am clicking a link in a popup (Ctrl + click on the link), which opens a new tab, and then I work in that new tab:
await page.click('#open')
const [newTab] = await Promise.all([
page.waitForEvent('popup'),
await page.keyboard.down('Control'),
await page.frameLocator('//iframe[@title="New tab."]').locator('a').click(), // in popup
await page.keyboard.up('Control'),
console.log("clicked on link")
]);
await newTab.waitForFunction(()=>document.title === 'new tab title')
await newTab.fill('#firstname', 'some text') // fill() requires a value argument; placeholder here
await newTab.close() // close the current tab
await page.click('#exitbutton') //back to parent tab and work on it
....
....
await page.close() // close the parent tab
A:
To open a new tab when clicking a button using Playwright, you can use the evaluate and waitForEvent functions to execute JavaScript in the page context and wait for a new page to be created. Here is an example of how you might do this:
const buttonSelector = "#gmail";
const eventToWaitFor = "page";
await page.click(buttonSelector);
const [newPage] = await Promise.all([
context.waitForEvent(eventToWaitFor),
page.evaluate(() => window.open("https://gmail.com", "_blank"))
]);
await newPage.waitForLoadState();
console.log(await newPage.title());
Here it clicks the button with the selector #gmail and then uses the evaluate function to execute JavaScript that opens a new tab with the URL https://gmail.com. The code then uses the waitForEvent function to wait for the page event to be triggered, indicating that a new page has been created. The new page is stored in a variable and then the code uses the waitForLoadState function to wait for the page to finish loading. Finally, the code prints the title of the new page using the title function. This code should open a new tab when clicking the button and print the title of the page that is opened in the new tab.
|
How to open the new tab using Playwright (ex. click the button to open the new section in a new tab)
|
I am looking for a simpler solution to a current situation. For example, you open Google (or any other website) and you want, BY CLICKING a button (e.g. Gmail), to open that page in a new tab using Playwright.
let browser, page, context;
describe('Check the main page view', function () {
before(async () => {
for (const browserType of ['chromium']) {
browser = await playwright[browserType].launch({headless: false});
context = await browser.newContext();
page = await context.newPage();
await page.goto(baseUrl);
}
});
after(async function () {
browser.close();
});
await page.click(tax);
const taxPage = await page.getAttribute(taxAccount, 'href');
const [newPage] = await Promise.all([
context.waitForEvent('page'),
page.evaluate((taxPage) => window.open(taxPage, '_blank'), taxPage)]);
await newPage.waitForLoadState();
console.log(await newPage.title());
|
[
"it('Open a new tab', async function () {\n await page.click(button, { button: \"middle\" });\n await page.waitForTimeout(2000); //waitForNavigation and waitForLoadState do not work in this case\n let pages = await context.pages();\n expect(await pages[1].title()).equal('Title');\n\n",
"You could pass a modifier to the click function. In macos it would be Meta because you'd open in a new tab with cmd+click. In windows it would be Control.\nconst browser = await playwright[\"chromium\"].launch({headless : false});\nconst page = await browser.newPage();\nawait page.goto('https://www.facebook.com/');\nvar pagePromise = page.context().waitForEvent('page', p => p.url() =='https://www.messenger.com/');\nawait page.click('text=Messenger', { modifiers: ['Meta']});\nconst newPage = await pagePromise;\nawait newPage.bringToFront();\nawait browser.close();\n\n",
"In my case i am clicking on link in a pop up like (ctrl + click on link) then it opens new tab and work on that new tab\nawait page.click('#open')\nconst [newTab] = await Promise.all([\n page.waitForEvent('popup'),\n await page.keyboard.down('Control'),\n await page.frameLocator('//iframe[@title=\"New tab.\"]').locator('a').click(), // in popup\n await page.keyboard.up('Control'),\n console.log(\"clicked on link\")\n]);\nawait newTab.waitForFunction(()=>document.title === 'new tab title')\nawait newTab.fill('#firstname')\nawait newTab.close() // close the current tab\nawait page.click('#exitbutton') //back to parent tab and work on it\n....\n....\nawait page.close() // close the parent tab\n\n",
"To open a new tab when clicking a button using Playwright, you can use the evaluate and waitForEvent functions to execute JavaScript in the page context and wait for a new page to be created. Here is an example of how you might do this:\nconst buttonSelector = \"#gmail\";\nconst eventToWaitFor = \"page\";\n \nawait page.click(buttonSelector);\n \nconst [newPage] = await Promise.all([\n context.waitForEvent(eventToWaitFor),\n page.evaluate(() => window.open(\"https://gmail.com\", \"_blank\"))\n]);\n \nawait newPage.waitForLoadState();\nconsole.log(await newPage.title());\n\nHere it clicks the button with the selector #gmail and then uses the evaluate function to execute JavaScript that opens a new tab with the URL https://gmail.com. The code then uses the waitForEvent function to wait for the page event to be triggered, indicating that a new page has been created. The new page is stored in a variable and then the code uses the waitForLoadState function to wait for the page to finish loading. Finally, the code prints the title of the new page using the title function. This code should open a new tab when clicking the button and print the title of the page that is opened in the new tab.\n"
] |
[
9,
2,
0,
0
] |
[] |
[] |
[
"javascript",
"node.js",
"playwright",
"tabs",
"webautomation"
] |
stackoverflow_0064277178_javascript_node.js_playwright_tabs_webautomation.txt
|
Q:
How to use filter on cloud/roleName in Azure Application Insights?
I created the "New Failures Analysis" workbook in Azure Application Insights. In that, I added a custom chart using the below query to display the count of requests, failures, exceptions, etc. based on the cloud/roleName property.
{
"type": 10,
"content": {
"chartId": "xxxxxxxxxxxxxxxxxxx",
"version": "MetricsItem/2.0",
"size": 3,
"chartType": 0,
"resourceType": "microsoft.insights/components",
"metricScope": 0,
"resourceIds": [
"xxxxxxxxxxx"
],
"timeContext": {
"durationMs": 86400000
},
"metrics": [
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/count",
"aggregation": 7
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/failed",
"aggregation": 7
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--exceptions/count",
"aggregation": 7
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/duration",
"aggregation": 4
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/rate",
"aggregation": 4
}
],
"gridFormatType": 2,
"filters": [
{
"key": "cloud/roleName",
"operator": 0,
"values": [
"xxxxxxxxx"
]
}
],
"gridSettings": {
"formatters": [
{
"columnMatch": "Subscription",
"formatter": 5
},
{
"columnMatch": "Name",
"formatter": 5,
"formatOptions": {
"linkTarget": "Resource"
}
},
{
"columnMatch": "Segment",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/count",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 5000,
"palette": "green"
},
"numberFormat": {
"unit": 0,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--requests/count Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/failed",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 20,
"palette": "orangeRed"
},
"numberFormat": {
"unit": 0,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--requests/failed Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--exceptions/count",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 20,
"palette": "yellowOrangeRed"
},
"numberFormat": {
"unit": 0,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--exceptions/count Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/duration",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 1000,
"palette": "yellowGreenBlue"
}
},
{
"columnMatch": "microsoft.insights/components--requests/duration Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/rate",
"formatter": 8,
"formatOptions": {
"min": 0,
"palette": "blueGreen"
},
"numberFormat": {
"unit": 31,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--requests/rate Timeline",
"formatter": 5
}
],
"rowLimit": 10000,
"labelSettings": [
{
"columnId": "microsoft.insights/components--requests/count",
"label": "Requests"
},
{
"columnId": "microsoft.insights/components--requests/count Timeline",
"label": "microsoft.insights/components--requests/count (Count) Timeline"
},
{
"columnId": "microsoft.insights/components--requests/failed",
"label": "Failed"
},
{
"columnId": "microsoft.insights/components--requests/failed Timeline",
"label": "microsoft.insights/components--requests/failed (Count) Timeline"
},
{
"columnId": "microsoft.insights/components--exceptions/count",
"label": "Exceptions"
},
{
"columnId": "microsoft.insights/components--exceptions/count Timeline",
"label": "microsoft.insights/components--exceptions/count (Count) Timeline"
},
{
"columnId": "microsoft.insights/components--requests/duration",
"label": "Avg Response Time"
},
{
"columnId": "microsoft.insights/components--requests/duration Timeline",
"label": "microsoft.insights/components--requests/duration (Average) Timeline"
},
{
"columnId": "microsoft.insights/components--requests/rate",
"label": "Request Rate"
},
{
"columnId": "microsoft.insights/components--requests/rate Timeline",
"label": "microsoft.insights/components--requests/rate (Average) Timeline"
}
]
}
},
"name": "metric - 01"
}
The above query worked fine a few days ago, but suddenly this query is not giving any results, even though the logs are available in the Azure Application Insights for the specific cloud/roleName property.
A:
The metric ID doesn't look quite right. When I add a metric in Workbooks, say, failed requests, I get the ID "metric": "microsoft.insights/components-Failures-requests/failed" instead of "metric": "microsoft.insights/components--requests/failed". I'm curious how this Workbook JSON was generated? I would recreate this metrics step again and check if the issue persists. If it does, also check if you are seeing no results in the Metrics blade as well. If the Workbook continues to show incorrect results afterwards, I would recommend submitting a support request.
|
How to use filter on cloud/roleName in Azure Application Insights?
|
I created the "New Failures Analysis" workbook in Azure Application Insights. In that, I added a custom chart using the below query to display the count of requests, failures, exceptions, etc. based on the cloud/roleName property.
{
"type": 10,
"content": {
"chartId": "xxxxxxxxxxxxxxxxxxx",
"version": "MetricsItem/2.0",
"size": 3,
"chartType": 0,
"resourceType": "microsoft.insights/components",
"metricScope": 0,
"resourceIds": [
"xxxxxxxxxxx"
],
"timeContext": {
"durationMs": 86400000
},
"metrics": [
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/count",
"aggregation": 7
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/failed",
"aggregation": 7
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--exceptions/count",
"aggregation": 7
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/duration",
"aggregation": 4
},
{
"namespace": "microsoft.insights/components",
"metric": "microsoft.insights/components--requests/rate",
"aggregation": 4
}
],
"gridFormatType": 2,
"filters": [
{
"key": "cloud/roleName",
"operator": 0,
"values": [
"xxxxxxxxx"
]
}
],
"gridSettings": {
"formatters": [
{
"columnMatch": "Subscription",
"formatter": 5
},
{
"columnMatch": "Name",
"formatter": 5,
"formatOptions": {
"linkTarget": "Resource"
}
},
{
"columnMatch": "Segment",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/count",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 5000,
"palette": "green"
},
"numberFormat": {
"unit": 0,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--requests/count Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/failed",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 20,
"palette": "orangeRed"
},
"numberFormat": {
"unit": 0,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--requests/failed Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--exceptions/count",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 20,
"palette": "yellowOrangeRed"
},
"numberFormat": {
"unit": 0,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--exceptions/count Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/duration",
"formatter": 8,
"formatOptions": {
"min": 0,
"max": 1000,
"palette": "yellowGreenBlue"
}
},
{
"columnMatch": "microsoft.insights/components--requests/duration Timeline",
"formatter": 5
},
{
"columnMatch": "microsoft.insights/components--requests/rate",
"formatter": 8,
"formatOptions": {
"min": 0,
"palette": "blueGreen"
},
"numberFormat": {
"unit": 31,
"options": {
"style": "decimal"
}
}
},
{
"columnMatch": "microsoft.insights/components--requests/rate Timeline",
"formatter": 5
}
],
"rowLimit": 10000,
"labelSettings": [
{
"columnId": "microsoft.insights/components--requests/count",
"label": "Requests"
},
{
"columnId": "microsoft.insights/components--requests/count Timeline",
"label": "microsoft.insights/components--requests/count (Count) Timeline"
},
{
"columnId": "microsoft.insights/components--requests/failed",
"label": "Failed"
},
{
"columnId": "microsoft.insights/components--requests/failed Timeline",
"label": "microsoft.insights/components--requests/failed (Count) Timeline"
},
{
"columnId": "microsoft.insights/components--exceptions/count",
"label": "Exceptions"
},
{
"columnId": "microsoft.insights/components--exceptions/count Timeline",
"label": "microsoft.insights/components--exceptions/count (Count) Timeline"
},
{
"columnId": "microsoft.insights/components--requests/duration",
"label": "Avg Response Time"
},
{
"columnId": "microsoft.insights/components--requests/duration Timeline",
"label": "microsoft.insights/components--requests/duration (Average) Timeline"
},
{
"columnId": "microsoft.insights/components--requests/rate",
"label": "Request Rate"
},
{
"columnId": "microsoft.insights/components--requests/rate Timeline",
"label": "microsoft.insights/components--requests/rate (Average) Timeline"
}
]
}
},
"name": "metric - 01"
}
The above query worked fine a few days ago, but suddenly this query is not giving any results, even though the logs are available in the Azure Application Insights for the specific cloud/roleName property.
|
[
"The metric ID doesn't look quite right. When I add a metric in Workbooks, say, failed requests, I get the ID \"metric\": \"microsoft.insights/components-Failures-requests/failed\" instead of \"metric\": \"microsoft.insights/components--requests/failed\". I'm curious how this Workbook JSON was generated? I would recreate this metrics step again and check if the issue persists. If it does, also check if you are seeing no results in the Metrics blade as well. If the Workbook continues to show incorrect results afterwards, I would recommend submitting a support request.\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_application_insights",
"azure_monitor_workbooks"
] |
stackoverflow_0074657943_azure_azure_application_insights_azure_monitor_workbooks.txt
|
Q:
Complete example to use external jar libs in maven-based project
I am working on a project which uses Eclipse UML libraries, which are poorly supported on Maven Central. The project works well using the related jar libs locally. Now we plan to deploy it using Jenkins and Docker, so we want to make the Maven build succeed. When run locally with IntelliJ IDEA, the app works, but when running mvn install, the jar libs are (obviously) not taken into account and the build fails (package missing etc...).
I've investigated the maven-install-plugin for hours now, and I cannot find a complete example and/or an MVP with the syntax to use multiple jars in a libs folder during the Maven build.
Can someone provide one or provide pointers to a clear and working example?
|
Complete example to use external jar libs in maven-based project
|
I am working on a project which uses Eclipse UML libraries, which are poorly supported on Maven Central. The project works well using the related jar libs locally. Now we plan to deploy it using Jenkins and Docker, so we want to make the Maven build succeed. When run locally with IntelliJ IDEA, the app works, but when running mvn install, the jar libs are (obviously) not taken into account and the build fails (package missing etc...).
I've investigated the maven-install-plugin for hours now, and I cannot find a complete example and/or an MVP with the syntax to use multiple jars in a libs folder during the Maven build.
Can someone provide one or provide pointers to a clear and working example?
|
[] |
[] |
[
"A possibly helpful \"collection\" of solutions, or pointers to solutions, for this kind of problem, including examples and discussion, can be found in this and that stackoverflow threads.\nThough one wouldn't call that \"clear\". But it gives you a bundle of approaches to pick for your specific case.\nEdit: Oh, and here, too.\n"
] |
[
-1
] |
[
"build",
"dependencies",
"jar",
"maven"
] |
stackoverflow_0074655651_build_dependencies_jar_maven.txt
|
Q:
Should stream's next_in, the compressed buffer be available for inflatePrime() ? - zlib
I noticed that on the 2nd boot up, when inflatePrime() is called, stream.next_in doesn't have a compressed data buffer assigned yet, so how would it insert bits? (How the byte offset and bit offset work to restart the inflation is still a bit unclear to me; here is my latest flow of things.)
Thanks, appreciate any feedback.
////////// on 2nd boot:
// go to saved byte offset in compressed file
ZSEEK64(z_filefunc, filestream, streamCurrentPos, ZLIB_FILEFUNC_SEEK_SET);
if (streamBits)
{
int aByte=0;
unz64local_getByte(z_filefunc, filestream, &aByte);
inflatePrime (&stream, streamBits, aByte >> (8 - streamBits) );
}
inflateSetDictionary (&stream, dictionary_buf, dictLength);
// 'someSize' is BUF_SIZE or remaining rest_read_compressed
ZREAD64(z_filefunc, filestream, read_buffer, someSize);
stream.next_in = read_buffer;
stream.avail_in = someSize;
// on first call to inflate, gets data-error with "invalid code lengths set";
inflate(&stream, Z_BLOCK);
////////// on 1st boot: where 'state of uncompression' is saved
if ((stream.data_type & 0x0080) && some_16MB_wrote_to_output)
{
inflateGetDictionary(&stream, dictionary_buf, &dictLength);
streamBits = stream.data_type & (~0x1C0);
// current file pointer could be way ahead
streamCurrentPos = ZTELL64(z_filefunc, filestream) - stream.avail_in;
}
A:
inflatePrime() inserts bits into an internal buffer. It has nothing to do with next_in or avail_in, which can be changed before every call of inflate().
|
Should stream's next_in, the compressed buffer be available for inflatePrime() ? - zlib
|
I noticed that on the 2nd boot up, when inflatePrime() is called, stream.next_in doesn't have a compressed data buffer assigned yet, so how would it insert bits? (How the byte offset and bit offset work to restart the inflation is still a bit unclear to me; here is my latest flow of things.)
Thanks, appreciate any feedback.
////////// on 2nd boot:
// go to saved byte offset in compressed file
ZSEEK64(z_filefunc, filestream, streamCurrentPos, ZLIB_FILEFUNC_SEEK_SET);
if (streamBits)
{
int aByte=0;
unz64local_getByte(z_filefunc, filestream, &aByte);
inflatePrime (&stream, streamBits, aByte >> (8 - streamBits) );
}
inflateSetDictionary (&stream, dictionary_buf, dictLength);
// 'someSize' is BUF_SIZE or remaining rest_read_compressed
ZREAD64(z_filefunc, filestream, read_buffer, someSize);
stream.next_in = read_buffer;
stream.avail_in = someSize;
// on first call to inflate, gets data-error with "invalid code lengths set";
inflate(&stream, Z_BLOCK);
////////// on 1st boot: where 'state of uncompression' is saved
if ((stream.data_type & 0x0080) && some_16MB_wrote_to_output)
{
inflateGetDictionary(&stream, dictionary_buf, &dictLength);
streamBits = stream.data_type & (~0x1C0);
// current file pointer could be way ahead
streamCurrentPos = ZTELL64(z_filefunc, filestream) - stream.avail_in;
}
|
[
"inflatePrime() inserts bits into an internal buffer. It has nothing to do with next_in or avail_in, which can be changed before every call of inflate().\n"
] |
[
0
] |
[] |
[] |
[
"deflate",
"gzip",
"huffman_code",
"lossless_compression",
"zlib"
] |
stackoverflow_0074662765_deflate_gzip_huffman_code_lossless_compression_zlib.txt
|
Q:
XGBoost Error when saving and loading xgboost model using Pickle, JSON and JobLib
I have trained and saved an XGBoost regressor model in a Jupyter Notebook (Google Colab) and tried to load it on my local machine, without success. I have tried to save and load the model in multiple formats: .pkl using the pickle library, .sav using the joblib library, and .json.
When I load the model in VS Code, I get the following error:
raise XGBoostError(py_str(_LIB.XGBGetLastError()))
xgboost.core.XGBoostError: [10:56:21] ../src/c_api/c_api.cc:846: Check
failed: str[0] == '{' (
What is the problem here?
A:
The issue was a mismatch between the two versions of xgboost when saving the model in Google Colab (xgboost version 0.9) and loading the model in my local Python environment (xgboost version 1.5.1).
I managed to solve the problem by upgrading my xgboost package to the latest version (xgboost version 1.7.1) both on Google Colab and on my local Python environment. I resaved the model and re-loaded it using the newly saved file.
Now the loading works well without any errors.
I will leave my post here on Stackoverflow just in case it may be useful for someone else.
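As a side note, one way to catch this kind of mismatch earlier is to store the library version next to the serialized model and compare it at load time. A minimal, hypothetical sketch of the idea using only pickle (the version string and the dict stand in for a real xgboost model; real code would record xgboost.__version__):

```python
import pickle

LIB_VERSION = "1.7.1"  # stand-in; real code would use xgboost.__version__

def save_bundle(model):
    # Serialize the model together with the version that produced it
    return pickle.dumps({"version": LIB_VERSION, "model": model})

def load_bundle(blob, current_version=LIB_VERSION):
    bundle = pickle.loads(blob)
    if bundle["version"] != current_version:
        # Fail loudly instead of producing an opaque parse error later
        raise RuntimeError(
            f"model saved with version {bundle['version']}, "
            f"but this environment runs {current_version}"
        )
    return bundle["model"]

blob = save_bundle({"weights": [1, 2, 3]})   # stand-in for a real model
assert load_bundle(blob) == {"weights": [1, 2, 3]}
try:
    load_bundle(blob, current_version="0.90")
except RuntimeError as e:
    print("caught:", e)
```

The real fix, as described above, is aligning the xgboost versions; this sketch only makes the failure mode explicit instead of leaving it to the deserializer.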
|
XGBoost Error when saving and loading xgboost model using Pickle, JSON and JobLib
|
I have trained and saved an XGBoost regressor model in a Jupyter Notebook (Google Colab) and tried to load it on my local machine, without success. I have tried to save and load the model in multiple formats: .pkl using the pickle library, .sav using the joblib library, and .json.
When I load the model in VS Code, I get the following error:
raise XGBoostError(py_str(_LIB.XGBGetLastError()))
xgboost.core.XGBoostError: [10:56:21] ../src/c_api/c_api.cc:846: Check
failed: str[0] == '{' (
What is the problem here?
|
[
"The issue was a mismatch between the two versions of xgboost when saving the model in Google Colab (xgboost version 0.9) and loading the model in my local Python environment (xgboost version 1.5.1).\nI managed to solve the problem by upgrading my xgboost package to the latest version (xgboost version 1.7.1) both on Google Colab and on my local Python environment. I resaved the model and re-loaded it using the newly saved file.\nNow the loading works well without any errors.\nI will leave my post here on Stackoverflow just in case it may be useful for someone else.\n"
] |
[
0
] |
[] |
[] |
[
"data_science",
"python",
"visual_studio_code",
"xgboost"
] |
stackoverflow_0074662799_data_science_python_visual_studio_code_xgboost.txt
|
Q:
I need to find the closest name in a list
I have a really big file in this format that I read into a string.
International Workshop on Collaborative Virtual Environments B3
International 3D System Integration Conference B1
Three-Dimensional Image Processing, Measurement and Applications B3
Eurographics Workshop on 3D Object Retrieval B2
...
And I have a name that matches one of those lines in the file.
I need a function that loops through this longer string, finds the line with the most similarity (I'm using the Levenshtein string distance function to measure the distance) and returns the rating, which is the letter and number at the end of the line.
// Return the min number between three integers
int min(int a, int b, int c) {
int res;
if(a <= b && a <= c)
res = a;
else if(b <= a && b <= c)
res = b;
else if(c <= a && c <= b)
res = c;
return res;
}
// Returns an int representing the distance between two strings
int levenshtein(char *s1, char *s2) {
unsigned int x, y, s1len, s2len = 0;
s1len = strlen(s1);
s2len = strlen(s2);
unsigned int matrix[s2len+1][s1len+1];
matrix[0][0] = 0;
for (x = 1; x <= s2len; x++)
matrix[x][0] = matrix[x-1][0] + 1;
for (y = 1; y <= s1len; y++)
matrix[0][y] = matrix[0][y-1] + 1;
for (x = 1; x <= s2len; x++)
for (y = 1; y <= s1len; y++)
matrix[x][y] = min(matrix[x-1][y] + 1, matrix[x][y-1] + 1, matrix[x-1][y-1] + (s1[y-1] == s2[x-1] ? 0 : 1));
return(matrix[s2len][s1len]);
}
For example:
if I have the name International Workshop on Collaborativa Virtual Environments Sao Paulo, I want the function to return B3. The extra "Sao Paulo" is the reason why I need the string distance function.
I don't really know how to loop through the string to compare.
A:
Here is one possible implementation of a function that takes a string representing a name and a string representing the file contents, and returns the rating for the name that is most similar to the input string:
function getRating(name, file) {
// Parse the file contents to extract the names and ratings
const entries = file.split('\n') // Split the file into lines
.map(line => line.trim()) // Remove leading and trailing whitespace from each line
.filter(line => line.length > 0) // Remove empty lines
.map(line => {
// Split each line into two parts: the name and the rating
const parts = line.split(' ');
const rating = parts.pop();
const name = parts.join(' ');
return { name, rating };
});
// Find the entry with the most similar name to the input name
let closestName = null;
let closestDistance = Number.MAX_SAFE_INTEGER;
for (const entry of entries) {
const distance = levenshtein(name, entry.name);
if (distance < closestDistance) {
closestName = entry.name;
closestDistance = distance;
}
}
// Return the rating for the most similar name
return entries.find(entry => entry.name === closestName).rating;
}
This function uses the levenshtein function that you provided to calculate the distance between the input name and each name in the file. It then returns the rating for the entry with the name that is most similar to the input name.
Here is an example of how you could use this function:
const file = `
International Workshop on Collaborative Virtual Environments B3
International 3D System Integration Conference B1
Three-Dimensional Image Processing, Measurement and Applications B3
Eurographics Workshop on 3D Object Retrieval B2
`;
const name = "International Workshop on Collaborativa Virtual Environments Sao Paulo";
const rating = getRating(name, file);
console.log(rating); // Output: B3
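For quick prototyping, the same closest-match lookup can be sketched in Python using the standard library's difflib, which avoids hand-rolling the distance function. Note that difflib's ratio is a similarity measure rather than an edit distance, but it selects the same entry for this data:

```python
import difflib

FILE = """\
International Workshop on Collaborative Virtual Environments B3
International 3D System Integration Conference B1
Three-Dimensional Image Processing, Measurement and Applications B3
Eurographics Workshop on 3D Object Retrieval B2
"""

def get_rating(name: str, file_text: str) -> str:
    """Return the rating of the entry whose name is most similar to `name`."""
    best_rating, best_score = None, -1.0
    for line in file_text.splitlines():
        line = line.strip()
        if not line:
            continue
        entry_name, rating = line.rsplit(" ", 1)  # the rating is the last token
        score = difflib.SequenceMatcher(None, name, entry_name).ratio()
        if score > best_score:
            best_rating, best_score = rating, score
    return best_rating
```

The loop structure (split into lines, separate the trailing rating, score each name, keep the best) maps one-to-one onto a C implementation using the `levenshtein` function from the question.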
|
I need to find the closest name in a list
|
I had a really big file in this format that I passed into a string.
International Workshop on Collaborative Virtual Environments B3
International 3D System Integration Conference B1
Three-Dimensional Image Processing, Measurement and Applications B3
Eurographics Workshop on 3D Object Retrieval B2
...
And I have a name that matches one of those lines in the file.
I need a function that loops through this longer string, finds the line with the most similarity (I'm using the Levenshtein string distance function to measure the distance) and returns the rating, which is the letter and number at the end of the line.
// Return the min number between three integers
int min(int a, int b, int c) {
int res;
if(a <= b && a <= c)
res = a;
else if(b <= a && b <= c)
res = b;
else if(c <= a && c <= b)
res = c;
return res;
}
// Returns an int representing the distance between two strings
int levenshtein(char *s1, char *s2) {
unsigned int x, y, s1len, s2len = 0;
s1len = strlen(s1);
s2len = strlen(s2);
unsigned int matrix[s2len+1][s1len+1];
matrix[0][0] = 0;
for (x = 1; x <= s2len; x++)
matrix[x][0] = matrix[x-1][0] + 1;
for (y = 1; y <= s1len; y++)
matrix[0][y] = matrix[0][y-1] + 1;
for (x = 1; x <= s2len; x++)
for (y = 1; y <= s1len; y++)
matrix[x][y] = min(matrix[x-1][y] + 1, matrix[x][y-1] + 1, matrix[x-1][y-1] + (s1[y-1] == s2[x-1] ? 0 : 1));
return(matrix[s2len][s1len]);
}
For example:
if I have the name International Workshop on Collaborativa Virtual Environments Sao Paulo, I want the function to return B3. The extra "Sao Paulo" is the reason why I need the string distance function.
I don't really know how to loop through the string to compare.
|
[
"Here is one possible implementation of a function that takes a string representing a name and a string representing the file contents, and returns the rating for the name that is most similar to the input string:\nfunction getRating(name, file) {\n // Parse the file contents to extract the names and ratings\n const entries = file.split('\\n') // Split the file into lines\n .map(line => line.trim()) // Remove leading and trailing whitespace from each line\n .filter(line => line.length > 0) // Remove empty lines\n .map(line => {\n // Split each line into two parts: the name and the rating\n const parts = line.split(' ');\n const rating = parts.pop();\n const name = parts.join(' ');\n return { name, rating };\n });\n\n // Find the entry with the most similar name to the input name\n let closestName = null;\n let closestDistance = Number.MAX_SAFE_INTEGER;\n for (const entry of entries) {\n const distance = levenshtein(name, entry.name);\n if (distance < closestDistance) {\n closestName = entry.name;\n closestDistance = distance;\n }\n }\n\n // Return the rating for the most similar name\n return entries.find(entry => entry.name === closestName).rating;\n}\n\nThis function uses the levenshtein function that you provided to calculate the distance between the input name and each name in the file. It then returns the rating for the entry with the name that is most similar to the input name.\nHere is an example of how you could use this function:\nconst file = `\n International Workshop on Collaborative Virtual Environments B3\n International 3D System Integration Conference B1\n Three-Dimensional Image Processing, Measurement and Applications B3\n Eurographics Workshop on 3D Object Retrieval B2\n`;\nconst name = \"International Workshop on Collaborativa Virtual Environments Sao Paulo\";\n\nconst rating = getRating(name, file);\nconsole.log(rating); // Output: B3\n\n"
] |
[
0
] |
[] |
[] |
[
"c",
"levenshtein_distance",
"string"
] |
stackoverflow_0074662865_c_levenshtein_distance_string.txt
|
Q:
Matching a set of numbers with regex
I have a response from an API like "1,2,23,21"; it could also be a single number like "3". I have this regex
(\\d{1,2})|(\\d{1,2}\\,\\d{1,2})*
and I have to validate that the response follows the pattern "number,number,..." with one- or two-digit numbers, but my regex doesn't work with "2,3,12". I think it's because the regex matches the whole string, not only the first two numbers and then the final single number. Any idea?
I'm using Java
I tried other regex like
([1-9]{1,2})|([1-9]{1,2}\\,)
But the result is the same: it works with "3" or "2,3" but not with "3,4,1" or "1,23,12,1"
A:
You can use the following regex:
^(\d{1,2})(,\d{1,2})*$
This will match a string that consists of one or more comma-separated numbers, each of which is one or two digits long.
For example, it will match "3", "2,3", "2,3,12", "3,4,1", and "1,23,12,1". It will not match a string containing a three-digit number, such as "2,3,123".
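The pattern can be sanity-checked quickly in Python's re module before porting it back to Java (only the string escaping differs between the two languages):

```python
import re

# One or more comma-separated numbers, each one or two digits long.
PATTERN = re.compile(r"^\d{1,2}(,\d{1,2})*$")

def is_valid(s: str) -> bool:
    """True when the whole string matches the comma-separated-numbers shape."""
    return PATTERN.match(s) is not None

# Accepted: "3", "2,3,12", "1,23,12,1"
# Rejected: three-digit numbers, trailing commas, empty strings
```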
|
Matching a set of numbers with regex
|
i have a response from an API like "1,2,23,21" also could be one single number like "3". I have this regex
(\\d{1,2})|(\\d{1,2}\\,\\d{1,2})*
and i have to validate about the pattern of the response is like "number,number,...." with one of two digit number, but my regex doesn't work with "2,3,12". I think its because the regex matchs the whole string, not only the two first numbers and then the final single number. Any idea?
I'm using Java
I tried other regex like
([1-9]{1,2})|([1-9]{1,2}\\,)
But the result is the same, works with "3" of "2,3" but not with "3,4,1" "1,23,12,1"
|
[
"You can use the following regex:\n^(\\d{1,2})(,\\d{1,2})*$\n\nThis will match a string that consists of one or more comma-separated numbers, each of which is one or two digits long.\nFor example, it will match \"3\", \"2,3\", \"2,3,12\", \"3,4,1\", and \"1,23,12,1\". It will not match a string containing a three-digit number, such as \"2,3,123\".\n"
] |
[
1
] |
[] |
[] |
[
"java",
"regex"
] |
stackoverflow_0074663261_java_regex.txt
|
Q:
Sagemaker Regex pattern matching
In Sagemaker validation:auc metric monitor has the following regex
.*\[[0-9]+\].*#011validation-auc:([-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?).*
that searches the logs and extracts the matching metrics. I have to log the metrics, so that, they can match the above regex. For this, I tried the following
[2]#011validation-auc:+0.89
Actually I have a list of AUC values like
[2]#011validation-auc: [+0.89, +0.90]
to be logged. I can't modify the regex as it is predefined by the AWS Sagemaker. How do I format my log entries, so that, they can match the above regex?
Thanks
Raj.
A:
I assume you are using a SageMaker Training Job, if you are using the SageMaker SDK you can set metric_definitions in your Estimator object to set the metric's regex.
Kindly see this link for more information: https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html#define-train-metrics
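Before launching a training job, you can also verify locally that a log line matches the predefined regex, using Python's re. Note that the capture group expects a single float immediately after the colon, so the bracketed list form from the question will not match at all; log one value per line, with no space after the colon:

```python
import re

# The validation-auc metric regex predefined by SageMaker (from the question).
METRIC_RE = re.compile(
    r".*\[[0-9]+\].*#011validation-auc:([-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?).*"
)

def extract_auc(log_line: str):
    """Return the captured AUC string, or None when the line is not picked up."""
    m = METRIC_RE.match(log_line)
    return m.group(1) if m else None
```

Running this against the two formats from the question shows the single-value line is captured while the list form is silently dropped, which explains why the metric never appears in the monitor.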
|
Sagemaker Regex pattern matching
|
In Sagemaker validation:auc metric monitor has the following regex
.*\[[0-9]+\].*#011validation-auc:([-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?).*
that searches the logs and extracts the matching metrics. I have to log the metrics, so that, they can match the above regex. For this, I tried the following
[2]#011validation-auc:+0.89
Actually I have a list of AUC values like
[2]#011validation-auc: [+0.89, +0.90]
to be logged. I can't modify the regex as it is predefined by the AWS Sagemaker. How do I format my log entries, so that, they can match the above regex?
Thanks
Raj.
|
[
"I assume you are using a SageMaker Training Job, if you are using the SageMaker SDK you can set metric_definitions in your Estimator object to set the metric's regex.\nKindly see this link for more information: https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html#define-train-metrics\n"
] |
[
0
] |
[] |
[] |
[
"amazon_sagemaker",
"python",
"regex"
] |
stackoverflow_0074662448_amazon_sagemaker_python_regex.txt
|
Q:
Trying to create a hex color type - TS2590
Trying to create a simple hex colour type ranging from #000000 to #ffffff. However, TypeScript keeps giving me TS2590 due to the length.
type hexDigit = '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' | 'a' | 'b' | 'c' | 'd' | 'e'| 'f';
type hexColor = `#${hexDigit}${hexDigit}${hexDigit}${hexDigit}${hexDigit}${hexDigit}`;
A:
Maybe I'm a bit late, but maybe you also never got a real answer.
Short answer: This question is a duplicate of this post.
Longer answer: I've been looking for a solution for this myself, and it seems that, at least for now, there is no real way to make such a type -- not without generating one that is absurdly complex and stupidly heavy to compute.
Like, for instance, at some point I tried looking into making regex validated types, and found this Github issue on TypeScript's repo which completely derailed to the point some guy implemented a Turing machine with the typing engine.
I myself managed to crash my whole session by trying out stuff and TypeScript going crazy over what I was writing.
Anyway, your union would generate 16^6 (16777216) possible candidates, and is above TypeScript's 100,000 union members limit. Sadly this means you can't just constrain a string to have the shape of a hex colour code.
So, your best bet is to check for validity at runtime, and/or to generate the string through a function that uses generics and type inference in its parameters like in the post linked above.
As for checking it at runtime, seeing your last comment, don't fret. It's very easy ! A very simple regex validator can be written like so:
const isHexColourCode = (s: string) => !!s.match(/^#[a-f0-9]{3}([a-f0-9]{3})?$/i)
if (isHexColourCode('42')) console.log("Won't print anything")
if (isHexColourCode('#A24')) console.log('#A24 is valid')
if (isHexColourCode('#be1337')) console.log('be1337 is valid too')
if (isHexColourCode('#deadbeef')) console.log('Too many characters !')
!! inverts the truthiness of the return value from s.match() twice so that it is returned as true or false
An array is a truthy value, so if a match has been found, !![...] -> true
Else if s.match() returned null, which is falsy, the value will be converted to false.
The regex is quite simple:
^# Start of the line followed by a #
[a-f0-9]{3} Any character between a and f or 0 and 9 (inclusive) three times in a row
([a-f0-9]{3})? Same pattern, but wrapped in parentheses to make a group that is facultative (with the ? quantifier).
$ End of line
i is a flag that makes your pattern ignore the case. Remove it if you want only lowercase characters, and change a-f to A-F for uppercase characters.
Hope this helps, have a great day~
|
Trying to create a hex color type - TS2590
|
Trying to create a simple hex colour type ranging from #000000 to #ffffff. However, TypeScript keeps giving me TS2590 due to the length.
type hexDigit = '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' | 'a' | 'b' | 'c' | 'd' | 'e'| 'f';
type hexColor = `#${hexDigit}${hexDigit}${hexDigit}${hexDigit}${hexDigit}${hexDigit}`;
|
[
"Maybe I'm a bit late, but maybe you also never got a real answer.\nShort answer: This question is a duplicate of this post.\nLonger answer: I've been looking for a solution for this myself, and it seems that, at least for now, there is no real way to make such a type -- not without generating one that is absurdly complex and stupidly heavy to compute.\nLike, for instance, at some point I tried looking into making regex validated types, and found this Github issue on TypeScript's repo which completely derailed to the point some guy implemented a Turing machine with the typing engine.\nI myself managed to crash my whole session by trying out stuff and TypeScript going crazy over what I was writing.\nAnyway, your union would generate 16^6 (16777216) possible candidates, and is above TypeScript's 100,000 union members limit. Sadly this means you can't just constrain a string to have the shape of a hex colour code.\nSo, your best bet is to check for validity at runtime, and/or to generate the string through a function that uses generics and type inference in its parameters like in the post linked above.\nAs for checking it at runtime, seeing your last comment, don't fret. It's very easy ! A very simple regex validator can be written like so:\nconst isHexColourCode = (s: string) => !!s.match(/^#[a-f0-9]{3}([a-f0-9]{3})?$/i)\n\nif (isHexColourCode('42')) console.log(\"Won't print anything\")\n\nif (isHexColourCode('#A24')) console.log('#A24 is valid')\n\nif (isHexColourCode('#be1337')) console.log('be1337 is valid too')\n\nif (isHexColourCode('#deadbeef')) console.log('Too many characters !')\n\n!! inverts the truthiness of the return value from s.match() twice so that it is returned as true or false\n\nAn array is a truthy value, so if a match has been found, !![...] 
-> true\nElse if s.match() returned null, which is falsy, the value will be converted to false.\n\nThe regex is quite simple:\n\n^# Start of the line followed by a #\n[a-f0-9]{3} Any character between a and f or 0 and 9 (inclusive) three times in a row\n([a-f0-9]{3})? Same pattern, but wrapped in parentheses to make a group that is facultative (with the ? quantifier).\n$ End of line\ni is a flag that makes your pattern ignore the case. Remove it if you want only lowercase characters, and change a-f to A-F for uppercase characters.\n\nHope this helps, have a great day~\n"
] |
[
0
] |
[] |
[] |
[
"typescript"
] |
stackoverflow_0071105451_typescript.txt
|
Q:
Ruby - quick way to extract number from string
As a beginner in Ruby, is there a quick way to extract the first and second number from this string 5.16.0.0-15? In this case, I am looking for 5 and 16. Thanks
A:
Use #split, telling it to split on "." and only split into three parts, then access the first two.
irb(main):003:0> s = "5.16.0.0-15"
=> "5.16.0.0-15"
irb(main):004:0> s.split(".", 3)[0..1]
=> ["5", "16"]
Optionally map to integers.
irb(main):005:0> s.split(".", 3)[0..1].map(&:to_i)
=> [5, 16]
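For comparison, the same limited split reads almost identically in Python. One subtlety: Python's maxsplit counts the number of splits performed, so 2 splits yield 3 parts, matching Ruby's limit of 3:

```python
s = "5.16.0.0-15"

# Split on "." into at most three parts and keep the first two,
# mirroring Ruby's s.split(".", 3)[0..1].
first, second = s.split(".", 2)[:2]

# Optionally convert to integers, like .map(&:to_i) in Ruby.
numbers = [int(first), int(second)]
```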
A:
One way is to use the method String#match with the regular expression
rgx = /(\d+)\.(\d+)/
to construct a MatchData object. The regular expression captures the first two strings of digits, separated by a period. The method MatchData#captures is then used to extract the contents of capture groups 1 and 2 (strings) and save them to an array. Lastly, String#to_i is used to convert the strings in the array to integers:
"5.16.0.0-15".match(rgx).captures.map(&:to_i)
#=> [5, 16]
We see that
m = "5.16.0.0-15".match(rgx)
#=> #<MatchData "5.16" 1:"5" 2:"16">
a = m.captures
#=> ["5", "16"]
a.map(&:to_i)
#=> [5, 16]
a.map(&:to_i) can be thought of as shorthand for a.map { |s| s.to_i }.
We can express the regular expression in free-spacing mode to make it self-documenting:
/
( # begin capture group 1
\d+ # match one or more digits
) # end capture group 1
\. # match a period
( # begin capture group 2
\d+ # match one or more digits
) # end capture group 2
/x # invoke free-spacing regex definition mode
One reason for using a regular expression here is to confirm the structure of the string, should that be desired. That could be done by using the following regex:
rgx1 =
/
\A # match the beginning of the string
( # begin capture group 1
\d+ # match one or more digits
) # end capture group 1
\. # match a period
( # begin capture group 2
\d+ # match one or more digits
) # end capture group 2
(?: # begin a non-capture group
\. # match a period
\d+ # match one or more digits
(?: # begin a non-capture group
\- # match a hyphen
\d+ # match one or more digits
)? # end non-capture group and make it optional
)* # end non-capture group and execute it zero or more times
\z # match the end of the string
/x # invoke free-spacing regex definition mode
"5.16.0.0-15".match(rgx1).captures.map(&:to_i)
#=> [5, 16]
"5.16.0.A".match(rgx1)
#=> nil
"5.16.0.0-1-5".match(rgx1)
#=> nil
The last two examples would generate exceptions because nil has no method captures. One could of course handle those exceptions.
rgx1 is conventionally written /\A(\d+)\.(\d+)(?:\.\d+(?:\-\d+)?)*\z/.
|
Ruby - quick way to extract number from string
|
As a beginner in Ruby, is there a quick way to extract the first and second number from this string 5.16.0.0-15? In this case, I am looking for 5 and 16. Thanks
|
[
"Use #split, telling it to split on \".\" and only split into three parts, then access the first two.\nirb(main):003:0> s = \"5.16.0.0-15\"\n=> \"5.16.0.0-15\"\nirb(main):004:0> s.split(\".\", 3)[0..1]\n=> [\"5\", \"16\"]\n\nOptionally map to integers.\nirb(main):005:0> s.split(\".\", 3)[0..1].map(&:to_i)\n=> [5, 16]\n\n",
"One way is to use the method String#match with the regular expression\nrgx = /(\\d+)\\.(\\d+)/\n\nto construct a MatchData object. The regular expression captures the first two strings of digits, separated by a period. The method MatchData#captures is then use to extract the contents of capture groups 1 and 2 (strings) and save them to an array. Lastly, String#to_i is used to convert the strings in the array to integers:\n\"5.16.0.0-15\".match(rgx).captures.map(&:to_i)\n #=> [5, 16]\n\nWe see that\nm = \"5.16.0.0-15\".match(rgx)\n #=> #<MatchData \"5.16\" 1:\"5\" 2:\"16\">\na = m.captures\n #=> [\"5\", \"16\"]\na.map(&:to_i)\n #=> [5, 16]\n\na.map(&:to_i) can be thought of as shorthand for a.map { |s| s.to_i }.\nWe can express the regular expression in free-spacing mode to make it self-documenting:\n/\n( # begin capture group 1\n \\d+ # match one or more digits\n) # end capture group 1\n\\. # match a period\n( # begin capture group 2\n \\d+ # match one or more digits\n) # end capture group 2\n/x # invoke free-spacing regex definition mode\n\n\nOne reason for using a regular expression here is to confirm the structure of the string, should that be desired. That could be done by using the following regex:\nrgx1 =\n/\n\\A # match the beginning of the string\n( # begin capture group 1\n \\d+ # match one or more digits\n) # end capture group 1\n\\. # match a period\n( # begin capture group 2\n \\d+ # match one or more digits\n) # end capture group 2\n(?: # begin a non-capture group\n \\. # match a period\n \\d+ # match one or more digits\n (?: # begin a non-capture group\n \\- # match a hyphen\n \\d+ # match one or more digits\n )? 
# end non-capture group and make it optional\n)* # end non-capture group and execute it zero or more times \n\\z # match the end of the string\n/x # invoke free-spacing regex definition mode\n\n\"5.16.0.0-15\".match(rgx1).captures.map(&:to_i)\n #=> [5, 16]\n\"5.16.0.A\".match(rgx1)\n #=> nil\n\"5.16.0.0-1-5\".match(rgx1)\n #=> nil\n\nThe last two examples would generate exceptions because nil has no method captures. One could of course handle those exceptions.\nrgx1 is conventionally written /\\A(\\d+)\\.(\\d+)(x?:\\.\\d+(?:\\-\\d+)?)*\\z/.\n"
] |
[
2,
2
] |
[] |
[] |
[
"extract",
"ruby",
"string"
] |
stackoverflow_0074661056_extract_ruby_string.txt
|
Q:
Parsing m3u file : separate live tv from Vod
I'm currently working on an IPTV player app, and I have managed to parse the m3u file. The problem now is that I want to separate live TV from VOD, and I don't know where the live TV channels end and the VOD begins in the playlists
here are the keys of every object after the parsing is complete
[ 'duration', 'title', 'tvgId', 'tvgName', 'tvgLogo', 'groupTitle' ]
I'm using NestJS and the m3u8-file-parser library for m3u parsing
A:
The way to distinguish between live TV channels and VOD (Video on Demand) in an M3U playlist depends on how the playlist is structured. Some possible ways to differentiate between live TV and VOD in a playlist are:
Using the #EXTINF tag: Many M3U playlists use the #EXTINF tag to provide metadata about the media files in the playlist. This tag can include a duration parameter that specifies the length of the media file in seconds. For live TV channels, this parameter is typically not specified or set to a very large value (e.g. -1) to indicate that the channel has no set end time. For VOD, this parameter will typically be set to the actual duration of the video. So, you can use the duration parameter in the #EXTINF tag to differentiate between live TV channels and VOD.
Using the group-title attribute: Another way to differentiate between live TV channels and VOD in an M3U playlist is to use the group-title attribute. This attribute is typically used to group similar channels or videos together in the playlist. For example, the group-title attribute for live TV channels might be set to something like "Live TV" or "Live Channels", while the group-title attribute for VOD might be set to something like "Movies" or "TV Shows". So, you can use the group-title attribute to identify the type of content in the playlist.
Using the file name or URL: In some cases, the names or URLs of the media files in the playlist might provide clues about the type of content. For example, live TV channels might have URLs or file names that include the channel name or number, while VOD might have URLs or file names that include the title of the movie or show. So, you can use the file name or URL of the media files to differentiate between live TV channels and VOD.
All the best!
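Applying the first heuristic (the duration field) to the parsed objects from the question, a minimal sketch might look like the Python below. The convention that live entries carry duration -1 is an assumption to verify against your actual playlist:

```python
def split_live_vod(entries):
    """Partition parsed m3u entries into live TV and VOD by duration.

    Live channels conventionally carry duration -1 (no set end time),
    while VOD items carry the real duration in seconds.
    """
    live, vod = [], []
    for entry in entries:
        target = live if entry.get("duration", -1) == -1 else vod
        target.append(entry)
    return live, vod
```

The same partitioning could instead key off groupTitle (the second heuristic) if your provider uses group names like "Live TV" and "Movies" consistently.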
|
Parsing m3u file : separate live tv from Vod
|
I'm currently working on an IPTV player app, and I have managed to parse the m3u file. The problem now is that I want to separate live TV from VOD, and I don't know where the live TV channels end and the VOD begins in the playlists
here are the keys of every object after the parsing is complete
[ 'duration', 'title', 'tvgId', 'tvgName', 'tvgLogo', 'groupTitle' ]
I'm using NestJS and the m3u8-file-parser library for m3u parsing
|
[
"The way to distinguish between live TV channels and VOD (Video on Demand) in an M3U playlist depends on how the playlist is structured. Some possible ways to differentiate between live TV and VOD in a playlist are:\n\nUsing the #EXTINF tag: Many M3U playlists use the #EXTINF tag to provide metadata about the media files in the playlist. This tag can include a duration parameter that specifies the length of the media file in seconds. For live TV channels, this parameter is typically not specified or set to a very large value (e.g. -1) to indicate that the channel has no set end time. For VOD, this parameter will typically be set to the actual duration of the video. So, you can use the duration parameter in the #EXTINF tag to differentiate between live TV channels and VOD.\n\nUsing the group-title attribute: Another way to differentiate between live TV channels and VOD in an M3U playlist is to use the group-title attribute. This attribute is typically used to group similar channels or videos together in the playlist. For example, the group-title attribute for live TV channels might be set to something like \"Live TV\" or \"Live Channels\", while the group-title attribute for VOD might be set to something like \"Movies\" or \"TV Shows\". So, you can use the group-title attribute to identify the type of content in the playlist.\n\nUsing the file name or URL: In some cases, the names or URLs of the media files in the playlist might provide clues about the type of content. For example, live TV channels might have URLs or file names that include the channel name or number, while VOD might have URLs or file names that include the title of the movie or show. So, you can use the file name or URL of the media files to differentiate between live TV channels and VOD.\n\n\nAll the best!\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"m3u",
"m3u8",
"nestjs",
"typescript"
] |
stackoverflow_0074662861_javascript_m3u_m3u8_nestjs_typescript.txt
|
Q:
How to send specifically an IMAGE file from client to server using Python Paramiko
So I want to send an IMAGE file from client to server using Python Paramiko. For example: .jpeg, .jpg, .png
I don't get an error, but, it does print this message:
Failure
Here is example code:
from PIL import ImageGrab
import paramiko
class Client:
def __init__(self, hostname, username, password):
self.hostname = hostname
self.username = username
self.password = password
self.client = paramiko.SSHClient()
def connect(self):
self.client.load_system_host_keys()
self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.client.connect(hostname=self.hostname, username=self.username, password=self.password, port=22)
def close(self):
self.client.close()
ImageGrab.grab().save("screenshot.png") # Saves a screenshot
client = Client("hostname", "username", "password")
client.connect()
sftp_client = client.client.open_sftp()
sftp_client.put("screenshot.png", "/home") # Line that has the error
The line that I believe is messed up is the last line.
Feel free to run this code and test it. If you have any questions about this, go ahead and ask. If I did not include enough information, please say something.
A:
Based on the issue post on the paramiko GitHub repo, you need to pass the destination parameter as a file name instead of a directory name, such as
sftp_client.put("screenshot.png", "/home/screenshot.png")
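A defensive way to avoid this failure mode is to build the remote path from the remote directory plus the local file's basename before calling put. posixpath is used because SFTP remote paths are POSIX-style regardless of the client OS (a sketch; remote_dest is a hypothetical helper, not part of paramiko):

```python
import os
import posixpath

def remote_dest(remote_dir: str, local_path: str) -> str:
    """Join a remote directory with the local file's name to get a full remote path."""
    return posixpath.join(remote_dir, os.path.basename(local_path))

# Usage with the code from the question:
#   sftp_client.put("screenshot.png", remote_dest("/home", "screenshot.png"))
```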
|
How to send specifically an IMAGE file from client to server using Python Paramiko
|
So I want to send an IMAGE file from client to server using Python Paramiko. For example: .jpeg, .jpg, .png
I don't get an error, but, it does print this message:
Failure
Here is example code:
from PIL import ImageGrab
import paramiko
class Client:
def __init__(self, hostname, username, password):
self.hostname = hostname
self.username = username
self.password = password
self.client = paramiko.SSHClient()
def connect(self):
self.client.load_system_host_keys()
self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.client.connect(hostname=self.hostname, username=self.username, password=self.password, port=22)
def close(self):
self.client.close()
ImageGrab.grab().save("screenshot.png") # Saves a screenshot
client = Client("hostname", "username", "password")
client.connect()
sftp_client = client.client.open_sftp()
sftp_client.put("screenshot.png", "/home") # Line that has the error
The line that I believe is messed up is the last line.
Feel free to run this code and test it. If you have any questions about this, go ahead and ask. If I did not include enough information, please say something.
|
[
"Based on the issue post on paramiko github repo, you need to specify the destination parameter to the file name instead of the directory name, such as\nsftp_client.put(\"screenshot.png\", \"/home/screenshot.png\")\n"
] |
[
0
] |
[] |
[] |
[
"class",
"function",
"python",
"python_3.x",
"server"
] |
stackoverflow_0074663210_class_function_python_python_3.x_server.txt
|
Q:
Scala 3 fails to match a method to an equivalent function signature?
I'm on Scala 3.2.1, and the code below fails to compile.
class Test[F[_]] {
def error(message: => String): F[Unit] = {??? }
def error(t: Throwable)(message: => String): F[Unit] = {??? }
def test1(f: String => F[Unit]): Unit = {}
def test2(f: Throwable => String => F[Unit]): Unit = {}
test1(error)
test2(error)
}
test1 line compiles fine, but test2 breaks with
None of the overloaded alternatives of method error in class Test with types
(t: Throwable)(message: => String): F[Unit]
(message: => String): F[Unit]
match expected type Throwable => String => F[Unit]
test2(error)
Substituting F[Unit] for just Unit works, so I'm suspecting some kind of type erasure issue...? Do I need to put a ClassTag somewhere...?
A:
Your second overloaded error
def error(t: Throwable)(message: => String): F[Unit] = {??? }
has => String as the type of the second parameter, not String. These two types are not the same: first is a potentially side-effecty function that can be re-run multiple times (or not at all), whereas the second is just a pure string.
If you adjust the signature of the second test accordingly to
def test2(f: Throwable => (=> String) => F[Unit]): Unit = {}
it compiles at least with 3.2.0, I don't expect 3.2.1 to be much different here.
|
Scala 3 fails to match a method to an equivalent function signature?
|
I'm on Scala 3.2.1, and the code below fails to compile.
class Test[F[_]] {
def error(message: => String): F[Unit] = {??? }
def error(t: Throwable)(message: => String): F[Unit] = {??? }
def test1(f: String => F[Unit]): Unit = {}
def test2(f: Throwable => String => F[Unit]): Unit = {}
test1(error)
test2(error)
}
test1 line compiles fine, but test2 breaks with
None of the overloaded alternatives of method error in class Test with types
(t: Throwable)(message: => String): F[Unit]
(message: => String): F[Unit]
match expected type Throwable => String => F[Unit]
test2(error)
Substituting F[Unit] for just Unit works, so I'm suspecting some kind of type erasure issue...? Do I need to put a ClassTag somewhere...?
Q:
querying on a new field in open search isnt retrieving results
I am using OpenSearch 2.4 and have an index that was created with some fields. Later I started saving a new field to documents in the index, but now when I query on the newly created field I am not getting any results.
Example: query 1
POST abc/_search
{
"query": {
"bool": {
"must": [
{
"terms": {
"name": [
"john"
]
}
}
]
}
}
}
The query above works fine because the name field has existed since the creation of the index.
Query 2:
POST abc/_search
{
"query": {
"bool": {
"must": [
{
"terms": {
"lastname": [
"William"
]
}
}
]
}
}
}
The query above doesn't work, even though I have some documents with lastname William.
A:
When you index a new field without previously declaring it in the mapping, OpenSearch/Elasticsearch will use dynamic mapping and generate a text type with a keyword sub-field.
There are two ways for you to get results with the Term Query. First remember that Term query works with exact terms.
The first option is to use the keyword field.
{
"terms": {
"lastname.keyword": [
"William"
]
}
}
The second option is to search in the text field, but remember that when indexing, the default analyzer is applied, and its lowercase filter leaves the token like this: william.
In this case, the query should be:
{
"terms": {
"lastname": [
"william"
]
}
}
A:
When you use "terms" there must be an exact match (including casing).
So make sure your document contains William and not william or Williams
If you want more tolerance you can explore the match query:
https://opensearch.org/docs/latest/opensearch/query-dsl/full-text/#match
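The casing point in both answers comes from the default analyzer, which lowercases tokens at index time. Here is a rough Python sketch of that behaviour (an approximation for illustration, not the real analyzer):

```python
import re

def standard_analyzer(text):
    """Very rough approximation of OpenSearch's default analyzer:
    split on non-word characters, then lowercase each token."""
    return [t.lower() for t in re.findall(r"\w+", text)]

# What the text field actually stores for the document value "William":
indexed_tokens = standard_analyzer("William")
print(indexed_tokens)  # ['william']

# A terms query matches exact stored tokens, so "William" misses
# against the text field while "william" hits.
print("William" in indexed_tokens)  # False
print("william" in indexed_tokens)  # True
```

The keyword sub-field, by contrast, stores the value untouched, which is why `lastname.keyword` matches "William" exactly.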
Q:
`devtools::test()` works, but `devtools::check()` does not for `system.file` to locate file with `.yml` extension
I am trying to locate an external file with a .yml extension for unit tests with testthat while building an R package. The file is placed under the inst/ folder, as described at https://r-pkgs.org/data.html#data-extdata. I am using system.file() to load the file with the yaml package for some unit tests (not base::system.file(), so that pkgload:::shim_system.file() can intercept). The tests pass if I run devtools::test(), but devtools::check() keeps throwing an error.
However, interestingly after lots of trials I found out that devtools::check() does not throw any error if I save the file with .txt extension instead of .yaml/.yml.
## inside test_that()
## cf <- system.file("config", "config1.yml", package = "mypackage", mustWork = TRUE)
## Error note after devtools::check()
Error in `system.file("config", "config1.yml", package = "mypackage", mustWork = TRUE)`: no file found
Backtrace:
x
1. \-base::system.file(...) test-set_config.R:34:2
[ FAIL 1 | WARN 0 | SKIP 0 | PASS 15 ]
# -------------------------------------------------------------
## No error in devtools::check() when config1.yml is renamed as config1.txt !!
Has anyone experienced the same? I am using the latest version of devtools as on CRAN (v2.4.2), on R 4.1.1. If yes, is there any way to keep using .yaml/.yml and not .txt?
UPDATE
I did accidentally keep [.]yml, [.]yaml in .Rbuildignore to ignore the continuous-integration YAML files. I have made the patterns specific instead of wildcards, and it is working now! Apologies for raising a false alarm; I was quite confused when I hit this weird behaviour. https://github.com/r-lib/devtools/issues/2384#issuecomment-947943050
A:
Figured out the problem, finally, the culprit was my .Rbuildignore - I accidentally kept [.]yml, [.]yaml in .Rbuildignore to ignore the continuous integration yaml files while building the package. I modified them and made them specific, instead of wildcards - and it is working now! Apologies for raising a false alarm; I was quite confused when I got this weird thing. Thanks, @jimhester, who responded on github devtools issues
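.Rbuildignore patterns are Perl-style regular expressions matched against file paths, so an unanchored [.]yml excludes every YAML file in the package, including the config under inst/. The matching logic can be demonstrated with Python's re module, which behaves the same way for these simple patterns (the file paths below are made up):

```python
import re

paths = ["inst/config/config1.yml", ".github/workflows/ci.yml", "R/zzz.R"]

# Unanchored pattern: excludes ANY path containing ".yml",
# including the config file the tests need.
broad = re.compile(r"[.]yml")
print([p for p in paths if broad.search(p)])
# ['inst/config/config1.yml', '.github/workflows/ci.yml']

# Anchored, specific pattern: only the CI files are excluded.
specific = re.compile(r"^\.github/")
print([p for p in paths if specific.search(p)])
# ['.github/workflows/ci.yml']
```

This is why the .txt rename "worked": it simply dodged the overly broad pattern.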
Q:
MySQL Aurora and AWS S3: Need an alternate way of MySQL's "LOAD DATA" for loading document data from S3
I need to import data from files stored in S3 into a MySQL Aurora db.
I have EventBridge set up so that when a file is added to S3 it fires an event that calls a Lambda.
The Lambda needs to import the file data into MySQL. The MySQL "LOAD DATA FROM S3" feature would be great for this, but you will get the error: This command is not supported in the prepared statement protocol yet.
LOAD DATA has a lot of limitations like this: it cannot be in a stored procedure and cannot be in dynamic SQL (really needed here). I cannot find a workaround for this and need an alternate way to import data directly from S3 into MySQL. I don't want to move the data from S3 to Lambda to MySQL, as that extra step in the middle adds a lot of exposure to failure.
Does anyone know any good ideas (and even not so good) for moving data from S3 to MySQL Aurora?
Thanks.
A:
Check whether the user has permission to import data, which requires AWS_LOAD_S3_ACCESS permission
Examples of import statements can be referred to as follows:
LOAD DATA FROM S3 's3://mybucket/data.txt'
INTO TABLE table1
(column1, column2)
SET column3 = CURRENT_TIMESTAMP;
For more detailed information, please refer to
https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html
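Since the reported failure is specific to the prepared-statement protocol, one workaround people use from Lambda is to send the statement over the plain text protocol (e.g. a driver's cursor.execute with no parameter placeholders), building the full SQL string yourself. A hedged sketch of just the statement-building step — the bucket, table, and column names are made up, and this does not cover quoting/escaping of untrusted input:

```python
def build_load_from_s3(bucket, key, table, columns):
    """Build an Aurora MySQL LOAD DATA FROM S3 statement as one plain
    string, so it can be sent without prepared-statement placeholders."""
    cols = ", ".join(columns)
    return (
        f"LOAD DATA FROM S3 's3://{bucket}/{key}' "
        f"INTO TABLE {table} ({cols})"
    )

sql = build_load_from_s3("mybucket", "data.txt", "table1", ["column1", "column2"])
print(sql)
```

Whether the text protocol accepts the command depends on your driver and Aurora version, so treat this as something to try, not a guarantee.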
Q:
How to extract the data for this type of Json Array with Newtonsoft Json
I am new to JSON in general. I have a JSON file and I want to extract data from it, but I can't seem to find a way on how to do it. I've searched online, but I could not find any answer or I was just looking at the wrong places.
Here is my JSON data:
{"data":
{"cars":
{"total":117,
"results":[
{"id":"779579"},
{"id":"952209"},
{"id":"1103285"},
{"id":"1157321"},
{"id":"1372321"},
{"id":"1533192"},
{"id":"1630240"},
{"id":"2061824"},
{"id":"2312383"},
{"id":"2353755"},
{"id":"2796716"},
{"id":"2811260"},
{"id":"2824839"},
{"id":"2961828"},
{"id":"3315226"},
{"id":"3586555"},
{"id":"3668182"},
{"id":"3986886"},
{"id":"3989623"},
{"id":"3998581"},
{"id":"4021057"},
{"id":"4038880"},
{"id":"4308809"},
{"id":"4325718"},
{"id":"4352725"},
{"id":"4360349"},
{"id":"4628661"},
{"id":"4863093"},
{"id":"4940146"},
{"id":"4947395"},
{"id":"5157781"},
{"id":"5794466"},
{"id":"6134469"},
{"id":"6157337"},
{"id":"6307352"},
{"id":"6727975"},
{"id":"6783794"},
{"id":"6831800"},
{"id":"6960771"},
{"id":"7159286"},
{"id":"7211880"},
{"id":"7212277"},
{"id":"7217410"},
{"id":"7264660"},
{"id":"7406984"},
{"id":"7893798"},
{"id":"7948268"},
{"id":"8047751"},
{"id":"8271106"},
{"id":"8346001"},
{"id":"8352176"},
{"id":"8485193"},
{"id":"8746468"},
{"id":"8801718"},
{"id":"9104008"},
{"id":"9494179"},
{"id":"9588599"},
{"id":"9717878"},
{"id":"9845048"},
{"id":"9891941"},
{"id":"9943516"},
{"id":"10002374"},
{"id":"10213949"},
{"id":"10326370"},
{"id":"10499431"},
{"id":"10518069"},
{"id":"10538037"},
{"id":"10589618"},
{"id":"10602337"},
{"id":"10723171"},
{"id":"10724725"},
{"id":"10746729"},
{"id":"10751575"},
{"id":"10752559"},
{"id":"10852235"},
{"id":"10867573"},
{"id":"10877115"},
{"id":"10893349"},
{"id":"10988880"},
{"id":"10993485"},
{"id":"11026957"},
{"id":"11111205"},
{"id":"11122085"},
{"id":"11150052"},
{"id":"11251748"},
{"id":"11259887"},
{"id":"11270391"},
{"id":"11291731"},
{"id":"11303142"},
{"id":"11303143"},
{"id":"11308615"},
{"id":"11313379"},
{"id":"11334337"},
{"id":"11338119"},
{"id":"11338290"},
{"id":"11339650"},
{"id":"11347202"},
{"id":"11359983"},
{"id":"11390048"},
{"id":"11399541"}]}}}
I want to extract all the ids and put them in an array. I tried JToken, but it can only get 100 items (up to element [99]), because anything beyond index 99 gives me an error. I was iterating with a for loop.
This is the error I get if I go beyond 99:
ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
A:
Your problem is that the value of data.cars.total is not what you think it is. In the JSON shown there are only 100 ids, despite the fact that data.cars.total equals 117. To put those 100 ids into an array, you may use SelectTokens() along with LINQ's ToArray() as follows:
var jtoken = JToken.Parse(jsonString); // Or load the JSON from a stream
var ids = jtoken.SelectTokens("data.cars.results[*].id").Select(id => (int)id).ToArray(); // Remove the .Select(id => (int)id) if you want them as strings
Console.WriteLine("{0} ids found:", ids.Length); // Prints 100
Console.WriteLine(string.Join(",", ids)); // Prints the deserialized ids
Where [*] is the JSONPath wildcard operator selecting all array elements.
Just a guess here, but possibly your JSON reflects a paged response in which data.cars.total is the total number of ids, not the number returned in the current page which was limited to 100.
Demo fiddle here.
A:
Thank you everyone for pointing out that the content of the array is only 100 and not 117. That solved the problem. The Json file was not returning the exact number of data that I expected (117). It turns out that my code was working fine all along and the problem is with the Json file.
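The diagnosis is easy to confirm before touching the parsing code: compare the reported total against the actual number of results. A quick sketch of the check in Python (the original code is C#, but the idea is the same), using a trimmed-down version of the JSON:

```python
import json

# Shortened version of the payload: total says 117, but only two results.
doc = json.loads('{"data": {"cars": {"total": 117, "results": [{"id": "779579"}, {"id": "952209"}]}}}')

cars = doc["data"]["cars"]
ids = [r["id"] for r in cars["results"]]

# The two numbers disagree, which points at a paged/truncated response.
print(cars["total"], len(ids))  # 117 2
```

If the source is paged, the fix is to request the remaining pages rather than to index past the end of the current one.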
Q:
How come a variable in a function is able to reference something from outside its scope?
In this case, the "all_lines" variable is initialised in the context manager, and it is accessible from the function "part_1".
total = 0
with open("advent_input.txt", "r") as txt:
all_lines = []
context_total = 0
for line in txt:
all_lines.append((line.rstrip().split(" ")))
def part_1():
# total = 0
for line in all_lines:
if line[0] == "A":
if line[1] == "Y":
total += 8
elif line[1] == "X":
context_total += 4
However, "context_total", which is also initialised in the context manager, does not work in the function "part_1", and "total" from the global scope does not work either. How come "all_lines" works?
A:
Python does not have general block scope, so anything assigned within the with will be accessible outside of the block.
context_total is different though since you're reassigning it within the function. If you assign within a function, the variable will be treated as a local unless you use global to specify otherwise. That's problematic here though since += necessarily must refer to an existing variable (or else what are you adding to?), but there is no local variable with that name.
Add global context_total to use it within the function, or pass it in as an argument if you don't need the reassigned value externally.
A:
It works because inside the function, the all_lines variable is referenced but not assigned. The other two variables are assigned.
If a variable is assigned inside a function, then that variable is treated as local throughout the function, even if there is a global variable of the same name.
Q:
XAMPP Apache Webserver localhost not working on MAC OS
I installed XAMPP on Mac OS 10.6 and it was working fine.
When I checked it again some days later it was no longer working; localhost would not open.
After some research I uninstalled and reinstalled XAMPP.
When I start Apache after the reinstall, it reports that another web server is running on port 80. I restarted the system and then Apache started OK, but localhost still does not work.
I then checked Web Sharing in my System Preferences, and it was already turned off.
Can anybody tell me where I am going wrong?
A:
This is what helped me:
sudo apachectl stop
This command killed Apache server that was pre-installed on MAC OS X.
A:
I had to disable OSX's built-in Apache server (XAMPP support thread):
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
This allowed XAMPP to start on 80, while POW runs on 20559.
What had failed: I reconfigured /etc/apache2/httpd.conf to listen on an alternate port and rebooted OSX. No luck.
A:
try
sudo /Applications/XAMPP/xamppfiles/bin/apachectl start
in terminal
A:
This solution worked perfectly fine for me..
1) close XAMPP control
2) Open Activity Monitor(Launchpad->Other->Activity Monitor)
3) select filter for All processes (default is My processes)
4) in fulltext search type: httpd
5) kill all httpd items
6) relaunch XAMPP control and launch apache again
Hurray :)
A:
To be able to do this, you will have to stop apache from your terminal.
sudo apachectl stop
After you've done this, your apache server will be be up and running again!
Hope this helps
A:
This is because in Mac OS X there is already Apache pre-installed. So what you can do is to change the listening port of one of the Apaches, either the Apache that you installed with XAMPP or the pre-installed one.
To change the listening port for XAMPP's Apache, go to /Applications/XAMPP/xamppfiles/etc and edit httpd.conf. Change the line "Listen 80" (80 is the listening port) to other port, eg. "Listen 1234".
Or,
To change the one for pre-installed Apache, go to /etc/apache2. You can do the same thing with file httpd.conf there.
After changing you might need to restart your Mac, just to make sure.
A:
Run xampp services by command line
To start apache service
sudo /Applications/XAMPP/xamppfiles/bin/apachectl start
To start mysql service
sudo /Applications/XAMPP/xamppfiles/bin/mysql.server start
Both commands are working like charm :)
A:
I was having this exact problem; the solutions above didn't make much sense to me.
My Solution:
Turn off Bluetooth! Worked a treat.
After connecting my MacBook Pro to an iPhone 5 (hotspot) I started getting the error message; after turning off Bluetooth the error message was gone. Hope that helps somebody!
A:
I had similar issue after integrating MongoDB into XAMPP. However executing the command "sudo apachectl stop" fixed the problem
A:
Found out how to make it work!
I just moved apache2 (the Web Sharing folder) to my desktop.
go to terminal and type "mv /etc/apache2/ /Users/hseungun/Desktop"
actually it says you need authority so
type this "sudo -s" then it'll go to bash-3.2
passwd root
set your password and then "mv /etc/apache2/ /Users/hseungun/Desktop"
try turning on the web sharing, and then start xampp on mac
A:
If you are also running skype at the same time.
It will give you error:
port 80 running a another webserver
First close skype and restart your apache it will work fine.
A:
I had success with easy killing all active httpd processes in Monitor Activity tool:
1) close XAMPP control
2) open Monitor Activity
3) select filter for All processes (default is My processes)
4) in fulltext search type: httpd
5) kill all shown items
6) relaunch XAMPP control and launch apache again
A:
In my case, Web Sharing was running, which blocked XAMPP.
'Untick' Web Sharing in the Bluetooth Settings (or Network), which causes HTTPD to show in activity log.
Apache should now run and be available!
A:
Same thing as mine on OS X Mavericks.
After a couple of trial-and-error changes to the Apache configuration, I got weird output on localhost/xampp and thought the PHP engine was messed up. However, 127.0.0.1/xampp was working completely fine.
Finally, I cleaned up the browser cache, reloaded the page, and voila!
Resolved on Firefox...
A:
As Reid mentioned above in one comment you can also do it like this:
Quit XAMPP Manager-osx
Run in terminal: sudo killall httpd
Restart your XAMPP servers
... and you should be good to go!
A:
The issue in my case was valet, and I didn't know it was using port 80. If you want to use XAMPP, just stop valet by typing valet stop and then run XAMPP.
Q:
How do I close a full-screen matplotlib figure?
How may I close a full-screen matplotlib window? I spawned the figure using:
plt.ion()
fig = plt.figure('Optimizer')
plt.tight_layout()
mng = plt.get_current_fig_manager()
mng.full_screen_toggle()
However, plt.close("all") does not seem to do anything, and I couldn't find many things online to try that are relevant to full-screen figures. Seems like the behavior of closing figures differs for full-screen plots?
I have to manually kill -9 the entire script for it to close. (Running on a Raspberry Pi if that matters)
A:
Alt F4 does the trick for me (Ubuntu / Windows). But I have not tried it on a Pi.
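A programmatic route may also be worth trying (a sketch, not a guaranteed fix on every backend): toggle back out of full-screen mode before calling plt.close, and give the event loop a chance to process the close. The Agg backend below is only so the sketch runs headlessly; on the Pi the GUI backend applies.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere; drop this on the Pi

import matplotlib.pyplot as plt

plt.ion()
fig = plt.figure('Optimizer')
mng = plt.get_current_fig_manager()

try:
    # leave full-screen mode first; some window managers only close the
    # window cleanly once it is back to a normal surface
    mng.full_screen_toggle()
except AttributeError:
    pass  # non-GUI managers may not implement the toggle

plt.close(fig)
plt.pause(0.001)  # let the backend process pending GUI events
print(plt.get_fignums())  # [] once the figure is really gone
```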
|
How do I close a full-screen matplotlib figure?
|
How may I close a full-screen matplotlib window? I spawned the figure using:
plt.ion()
fig = plt.figure('Optimizer')
plt.tight_layout()
mng = plt.get_current_fig_manager()
mng.full_screen_toggle()
However, plt.close("all") does not seem to do anything, and I couldn't find many things online to try that are relevant to full-screen figures. Seems like the behavior of closing figures differs for full-screen plots?
I have to manually kill -9 the entire script for it to close. (Running on a Raspberry Pi if that matters)
|
[
"Alt F4 does the trick for me (Ubuntu / Windows). But I have not tried it on a Pi.\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0070239693_matplotlib_python.txt
|
Q:
Alternatives to .explode() when turning a column of lists into a single column
So by far whenever I had a dataframe that has a column of list such as the following:
'category_id'
[030000, 010403, 010402, 030604, 234440]
[030000, 010405, 010402, 030604, 033450]
[030000, 010403, 010407, 030604, 030600]
[030000, 010403, 010402, 030609, 032600]
Usually whenever I want to make this category_id column become like:
'category_id'
030000
010403
010402
030604
234440
030000
010405
010402
030604
033450
I would usually use the following code:
df2 = df.explode('category_id')
But whenever my data size gets really big, like the sales data over the course of an entire month, .explode() becomes extremely slow and I am always worried whether I will encounter any OOM issues related to memory leaks.
Is there any other alternative solutions to .explode that would somehow perform better?
I tried to use flatMap() but I'm stuck on how exactly to turn a DataFrame into RDD format and then change it back to a DataFrame format that I can utilize.
Any info would be appreciated.
A:
It might not be the fastest method, but you can simply explode each row of the pandas frame, and combine:
import pandas
df = pandas.DataFrame({"col1":[[12,34,12,34,45,56], [12,14,154,6]], "col2":['a','b']})
# col1 col2
#0 [12, 34, 12, 34, 45, 56] a
#1 [12, 14, 154, 6] b
# df.explode('col1')
# col1 col2
#0 12 a
#0 34 a
#0 12 a
#0 34 a
#0 45 a
#0 56 a
#1 12 b
#1 14 b
#1 154 b
#1 6 b
new_df = pandas.DataFrame()
for i in range(len(df)):
df_i = df.iloc[i:i+1].explode('col1')
new_df = pandas.concat((new_df, df_i))
# new_df
# col1 col2
#0 12 a
#0 34 a
#0 12 a
#0 34 a
#0 45 a
#0 56 a
#1 12 b
#1 14 b
#1 154 b
#1 6 b
To optimize performance, you can step through the dataframe in chunks (e.g. df_chunk = df.iloc[start: start+chunk_size]; start += chunk_size; etc)
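The chunking idea above can be sketched concretely (data values are made up for illustration). Collecting the exploded chunks in a list and concatenating once at the end avoids the quadratic cost of growing a DataFrame inside the loop:

```python
import pandas as pd

df = pd.DataFrame({"category_id": [["030000", "010403"], ["030000", "010405", "010402"]]})

chunk_size = 1  # tiny here for demonstration; tens of thousands of rows is more realistic
parts = []
for start in range(0, len(df), chunk_size):
    # explode only a slice of the frame at a time to bound peak memory
    parts.append(df.iloc[start:start + chunk_size].explode("category_id"))
df2 = pd.concat(parts)
print(len(df2))  # 5
```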
An alternative approach (probably similar in performance) that avoids explode altogether:
from itertools import product
new_df = pandas.DataFrame()
for i, row in df.iterrows():
p = product(*row.to_list())
sub_df = pandas.DataFrame.from_records(p, columns=list(df))
sub_df.index = pandas.Index([i]*len(sub_df))
new_df = pandas.concat((new_df, sub_df))
I "think" this should generalize but I did not test it.
A:
A fast and memory-efficient solution is to lean on numpy's .flatten() method. This one line will give you your new dataframe:
df2 = pd.DataFrame({'category_id': np.array([np.array(row) for row in df['category_id']]).flatten()})
To break that down a bit, .flatten() creates a flat array out of nested arrays. But first, you have to convert from list to array, which is where the list comprehension comes in handy. Here's a more self-explanatory version of the same code:
list_of_arrays = [np.array(row) for row in df['category_id']]
array_of_arrays = np.array(list_of_arrays)
flat_array = array_of_arrays.flatten()
df2 = pd.DataFrame({'category_id': flat_array})
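One caveat worth noting with this approach: .flatten() only flattens the elements when the inner lists all have equal length; with ragged lists NumPy builds an object array instead (and newer NumPy versions refuse outright). np.concatenate handles ragged lists too — a sketch, with made-up values:

```python
import numpy as np
import pandas as pd

# rows of different lengths, which .flatten() on a stacked array would not handle
df = pd.DataFrame({"category_id": [["030000", "010403"], ["030000", "010405", "010402"]]})

# concatenate the per-row arrays end to end into one flat array
flat = np.concatenate([np.asarray(row) for row in df["category_id"]])
df2 = pd.DataFrame({"category_id": flat})
print(len(df2))  # 5
```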
|
Alternatives to .explode() when turning a column of lists into a single column
|
So by far whenever I had a dataframe that has a column of list such as the following:
'category_id'
[030000, 010403, 010402, 030604, 234440]
[030000, 010405, 010402, 030604, 033450]
[030000, 010403, 010407, 030604, 030600]
[030000, 010403, 010402, 030609, 032600]
Usually whenever I want to make this category_id column become like:
'category_id'
030000
010403
010402
030604
234440
030000
010405
010402
030604
033450
I would usually use the following code:
df2 = df.explode('category_id')
But whenever my data size gets really big, like the sales data over the course of an entire month, .explode() becomes extremely slow and I am always worried whether I will encounter any OOM issues related to memory leaks.
Is there any other alternative solutions to .explode that would somehow perform better?
I tried to use flatMap() but I'm stuck on how exactly to turn a DataFrame into RDD format and then change it back to a DataFrame format that I can utilize.
Any info would be appreciated.
|
[
"It might not be the fastest method, but you can simply explode each row of the pandas frame, and combine:\nimport pandas\n\ndf = pandas.DataFrame({\"col1\":[[12,34,12,34,45,56], [12,14,154,6]], \"col2\":['a','b']})\n# col1 col2\n#0 [12, 34, 12, 34, 45, 56] a\n#1 [12, 14, 154, 6] b\n\n# df.explode('col1')\n# col1 col2\n#0 12 a\n#0 34 a\n#0 12 a\n#0 34 a\n#0 45 a\n#0 56 a\n#1 12 b\n#1 14 b\n#1 154 b\n#1 6 b\n\nnew_df = pandas.DataFrame()\nfor i in range(len(df)):\n df_i = df.iloc[i:i+1].explode('col1')\n new_df = pandas.concat((new_df, df_i)) \n\n# new_df\n# col1 col2\n#0 12 a\n#0 34 a\n#0 12 a\n#0 34 a\n#0 45 a\n#0 56 a\n#1 12 b\n#1 14 b\n#1 154 b\n#1 6 b\n\nTo optimize performance, you can step through the dataframe in chunks (e.g. df_chunk = df.iloc[start: start+chunk_size]; start += chunk_size; etc)\nAn alternative approach (probably similar in performance) that avoids explode altogether:\nfrom itertools import product\n\nnew_df = pandas.DataFrame()\nfor i, row in df.iterrows():\n p = product(*row.to_list())\n sub_df = pandas.DataFrame.from_records(p, columns=list(df))\n sub_df.index = pandas.Index([i]*len(sub_df))\n new_df = pandas.concat((new_df, sub_df))\n\nI \"think\" this should generalize but I did not test it.\n",
"A fast and memory-efficient solution is to lean on numpy's .flatten() method. This one line will give you your new dataframe:\ndf2 = pd.DataFrame({'category_id': np.array([np.array(row) for row in df['category_id']]).flatten()})\n\nTo break that down a bit, .flatten() creates a flat array out of nested arrays. But first, you have to convert from list to array, which is where the list comprehension comes in handy. Here's a more self-explanatory version of the same code:\nlist_of_arrays = [np.array(row) for row in df['category_id']]\narray_of_arrays = np.array(list_of_arrays)\nflat_array = array_of_arrays.flatten()\ndf2 = pd.DataFrame({'category_id': flat_array})\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"databricks",
"pandas",
"pyspark",
"python"
] |
stackoverflow_0074662147_databricks_pandas_pyspark_python.txt
|
Q:
Django can not connect to PostgreSQL DB
When I'm trying to connect the Django server to a PostgreSQL DB, there is an error:
" port 5433 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? "
I'm using Windows 10, PyCharm, Debian
Settings in Django:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'ps_store_db',
'USER': 'zesshi',
'PASSWORD': '',
'HOST': 'localhost',
'PORT': '5433',
}
}
I tried checking the connection with DBeaver and all's good there, but I still can't connect with Django.
My firewall is off, and I tried changing from 5432 to 5433.
Dbeaver connection
Dbeaver connection 2
A:
Try restarting/reinstalling postgres. Most likely DBeaver has blocked the port that's why you are not able to connect from django.
(Sorry for posting answer, i am unable to comment yet)
A:
The default port of the PG database is 5432. If you need to change this port, you need to edit the postgresql.conf file and restart the database service before the client can access it.
You also need to check the pg_hba.conf file. The recommended configuration is as follows:
host all all 0.0.0.0/0 md5
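Independently of Django and DBeaver, a bare TCP probe from the same machine tells you whether anything is listening on the port at all (a sketch; the host and port values simply mirror the question — DBeaver succeeding does not guarantee it is using the same host/port pair Django is configured with):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 5433 mirrors the question; the PostgreSQL default is 5432
print(port_open("localhost", 5433), port_open("localhost", 5432))
```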
|
Django can not connect to PostgreSQL DB
|
When I'm trying to connect the Django server to a PostgreSQL DB, there is an error:
" port 5433 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? "
I'm using Windows 10, PyCharm, Debian
Settings in Django:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'ps_store_db',
'USER': 'zesshi',
'PASSWORD': '',
'HOST': 'localhost',
'PORT': '5433',
}
}
I tried checking the connection with DBeaver and all's good there, but I still can't connect with Django.
My firewall is off, and I tried changing from 5432 to 5433.
Dbeaver connection
Dbeaver connection 2
|
[
"Try restarting/reinstalling postgres. Most likely DBeaver has blocked the port that's why you are not able to connect from django.\n(Sorry for posting answer, i am unable to comment yet)\n",
"The default port of the PG database is 5432. If you need to change this port, you need to edit the postgresql.conf file and restart the database service before the client can access it.\nYou also need to check the pg_hba.conf file. The recommended configuration is as follows:\nhost all all 0.0.0.0/0 md5\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"django",
"postgresql"
] |
stackoverflow_0074660628_django_postgresql.txt
|
Q:
Vertical TabView or pagination on Apple Watch using Digital Crown to scroll
I'm trying to recreate the vertical scroll used by something like the Apple Workout app on the Apple Watch. See screenshot below:
It uses both a regular TabView and something that looks similar to a vertical TabView on the right. You can either scroll with your finger or the digital crown and it will jump between pages like tabs. In the screenshot the top part with the time is fixed and only the part below is changing for different views.
Are they using a public component here and if not, how can it be recreated?
It behaves almost identically to a regular TabView except it allows scrolling with the digital crown and when you do it turns green instead of white before fading out.
This is what I've attempted so far but it's pretty hacky and not ideal using rotation. You would also need to manually implement page changes based on the digital crown movement.
GeometryReader { proxy in
VStack {
Text("00:00:00 - Fixed above")
TabView {
VStack {
Text("Page 1")
}.rotationEffect(.degrees(90))
VStack {
Text("Page 2")
}.rotationEffect(.degrees(90))
VStack {
Text("Page 3")
}.rotationEffect(.degrees(90))
}.rotationEffect(.degrees(-90)).frame(width: proxy.size.height, height: proxy.size.width)
}
}
A:
Turns out it's as simple as setting the style to carousel. It even does the digital crown page turning for you.
TabView{
Text("Page 1")
Text("Page 2")
Text("Page 3")
}.tabViewStyle(.carousel)
|
Vertical TabView or pagination on Apple Watch using Digital Crown to scroll
|
I'm trying to recreate the vertical scroll used by something like the Apple Workout app on the Apple Watch. See screenshot below:
It uses both a regular TabView and something that looks similar to a vertical TabView on the right. You can either scroll with your finger or the digital crown and it will jump between pages like tabs. In the screenshot the top part with the time is fixed and only the part below is changing for different views.
Are they using a public component here and if not, how can it be recreated?
It behaves almost identically to a regular TabView except it allows scrolling with the digital crown and when you do it turns green instead of white before fading out.
This is what I've attempted so far but it's pretty hacky and not ideal using rotation. You would also need to manually implement page changes based on the digital crown movement.
GeometryReader { proxy in
VStack {
Text("00:00:00 - Fixed above")
TabView {
VStack {
Text("Page 1")
}.rotationEffect(.degrees(90))
VStack {
Text("Page 2")
}.rotationEffect(.degrees(90))
VStack {
Text("Page 3")
}.rotationEffect(.degrees(90))
}.rotationEffect(.degrees(-90)).frame(width: proxy.size.height, height: proxy.size.width)
}
}
|
[
"Turns out it's as simple as setting the style to carousel. It even does the digital crown page turning for you.\nTabView{\n Text(\"Page 1\")\n Text(\"Page 2\")\n Text(\"Page 3\")\n}.tabViewStyle(.carousel)\n\n\n"
] |
[
0
] |
[] |
[] |
[
"apple_watch",
"swift",
"swiftui",
"watchkit",
"watchos"
] |
stackoverflow_0074661719_apple_watch_swift_swiftui_watchkit_watchos.txt
|
Q:
Printing numbers without exponential notation in Maxima
I have trouble with WordMat when calculating with Maxima, which didn't occur before but started recently. When I calculate something that results in either x00000 or 0,0000x, I get the result returned in its scientific notation, i.e. x*10^5 or x*10^-5. Even though this is correct, I would rather have it returned as a full number.
This only tends to happen when the number of 0's goes beyond a certain count: in the case of decimal numbers it is from xe-5 onward, and for higher numbers from xe9 and beyond. I cannot turn this off in the settings as far as I can see, but it seems to be a setting in Maxima's simplification, with the option variable "expon", from what I found here:
file:///C:/Program%20Files%20(x86)/WordMat/Maxima-5.45.1/share/maxima/5.45.1/doc/html/maxima_46.html
Does anyone know of a way to either change the setting in wordmat, or edit the simplification rules in maxima?
I tried:
Turning on and off most settings in WordMat
Restarting my pc
Reinstalling WordMat
Looking through the manual for maxima/wordmat and looking for changeable settings
A:
When you want to control the printed format of numbers, I think the best general answer is to use printf to specify exactly how to print the numbers. For what you want, printf(true, "~f", x) will print the value of x without ever introducing exponential format.
(The default maxima output is basically "~g" which basically automatically chooses between "~f" and "~e" depending on the value of x).
Perhaps a future version of maxima will allow you to change the default output range, but in general printf is the method to use if you want fine-grained control of the output.
|
Printing numbers without exponential notation in Maxima
|
I have trouble with WordMat when calculating with Maxima, which didn't occur before but started recently. When I calculate something that results in either x00000 or 0,0000x, I get the result returned in its scientific notation, i.e. x*10^5 or x*10^-5. Even though this is correct, I would rather have it returned as a full number.
This only tends to happen when the number of 0's goes beyond a certain count: in the case of decimal numbers it is from xe-5 onward, and for higher numbers from xe9 and beyond. I cannot turn this off in the settings as far as I can see, but it seems to be a setting in Maxima's simplification, with the option variable "expon", from what I found here:
file:///C:/Program%20Files%20(x86)/WordMat/Maxima-5.45.1/share/maxima/5.45.1/doc/html/maxima_46.html
Does anyone know of a way to either change the setting in wordmat, or edit the simplification rules in maxima?
I tried:
Turning on and off most settings in WordMat
Restarting my pc
Reinstalling WordMat
Looking through the manual for maxima/wordmat and looking for changeable settings
|
[
"When you want to control the printed format of numbers, I think the best general answer is to use printf to specify exactly how to print the numbers. For what you want, printf(true, \"~f\", x) will print the value of x without ever introducing exponential format.\n(The default maxima output is basically \"~g\" which basically automatically chooses between \"~f\" and \"~e\" depending on the value of x).\nPerhaps a future version of maxima will allow you to change the default output range, but in general printf is the method to use if you want fine-grained control of the output.\n"
] |
[
0
] |
[] |
[] |
[
"maxima",
"simplification"
] |
stackoverflow_0074614448_maxima_simplification.txt
|
Q:
How to change the look of overdue dates with flutter CupertinoDatePicker?
I don't know how to change the grey color of overdue dates. For me there is not enough contrast between past date, future date and today's date.
A:
Looking at the source code of CupertinoDatePicker, it seems that the colors are not easily changeable.
To quote from flutter's source code:
TextStyle _themeTextStyle(BuildContext context, { bool isValid = true }) {
final TextStyle style = CupertinoTheme.of(context).textTheme.dateTimePickerTextStyle;
return isValid
? style.copyWith(color: CupertinoDynamicColor.maybeResolve(style.color, context))
: style.copyWith(color: CupertinoDynamicColor.resolve(CupertinoColors.inactiveGray, context));
}
Therefore, the only direct impact you have on the colors is by wrapping your CupertinoDatePicker inside a CupertinoTheme and providing a value for textTheme.dateTimePickerTextStyle . Further customisation is not supported as of yet
A:
Since there is no option to change the overlay color from the CupertinoDatePicker widget constructor, you will need to change the overlay color from its source code.
First, we will take an example of CupertinoDatePicker like this:
// ...
child: CupertinoDatePicker(
onDateTimeChanged: (date) {},
),
// ...
And it shows this on the screen:
Going inside its source code, you need to search for those lines:
const Widget _startSelectionOverlay = CupertinoPickerDefaultSelectionOverlay(capEndEdge: false);
const Widget _centerSelectionOverlay = CupertinoPickerDefaultSelectionOverlay(capStartEdge: false, capEndEdge: false);
const Widget _endSelectionOverlay = CupertinoPickerDefaultSelectionOverlay(capStartEdge: false);
This represents the implementation of that grey overlay that you see on the screen.
Then, you need to go through the CupertinoPickerDefaultSelectionOverlay source code also, you should find this:
/// default margin and use rounded corners on the left and right side of the
/// rectangular overlay.
/// Default to true and must not be null.
const CupertinoPickerDefaultSelectionOverlay({
super.key,
this.background = CupertinoColors.tertiarySystemFill,
this.capStartEdge = true,
this.capEndEdge = true,
}) : assert(background != null),
assert(capStartEdge != null),
assert(capEndEdge != null);
We want to change the color of that overlay background, so in this line:
this.background = CupertinoColors.tertiarySystemFill,
We will change its value with the Color we want, so as an example we will change it like this:
this.background = const Color.fromARGB(49, 0, 253, 30),
and after a hot restart, the result will be:
Notes:
If you want to use material colors like Colors.red,
Colors.green..., you will need to import the material library on
the top of that file.
You should mark the color with a const as I did in the sample of
code, otherwise, you will get an error.
|
How to change the look of overdue dates with flutter CupertinoDatePicker?
|
I don't know how to change the grey color of overdue dates. For me there is not enough contrast between past date, future date and today's date.
|
[
"Looking at the source code of of CupertinoDatePicker, it seems that the colors are not easily changeable.\nTo quote from flutter's source code:\nTextStyle _themeTextStyle(BuildContext context, { bool isValid = true }) {\n final TextStyle style = CupertinoTheme.of(context).textTheme.dateTimePickerTextStyle;\n return isValid\n ? style.copyWith(color: CupertinoDynamicColor.maybeResolve(style.color, context))\n : style.copyWith(color: CupertinoDynamicColor.resolve(CupertinoColors.inactiveGray, context));\n}\n\nTherefore, the only direct impact you have on the colors is by wrapping your CupertinoDatePicker inside a CupertinoTheme and providing a value for textTheme.dateTimePickerTextStyle . Further customisation is not supported as of yet\n",
"Since there is no option to change the overlay color from the CupertinoDatePicker widget constructor, you will need to change the overlay color from its source code.\nFirst, we will take an example of CupertinoDatePicker like this:\n // ...\n child: CupertinoDatePicker(\n onDateTimeChanged: (date) {},\n ),\n // ...\n\nAnd it shows this on the screen:\n\nGoing inside its source code, you need to search for those lines:\nconst Widget _startSelectionOverlay = CupertinoPickerDefaultSelectionOverlay(capEndEdge: false);\nconst Widget _centerSelectionOverlay = CupertinoPickerDefaultSelectionOverlay(capStartEdge: false, capEndEdge: false);\nconst Widget _endSelectionOverlay = CupertinoPickerDefaultSelectionOverlay(capStartEdge: false);\n\nThis represents the implementation of that grey overlay that you see on the screen.\nThen, you need to go through the CupertinoPickerDefaultSelectionOverlay source code also, you should find this:\n /// default margin and use rounded corners on the left and right side of the\n /// rectangular overlay.\n /// Default to true and must not be null.\n const CupertinoPickerDefaultSelectionOverlay({\n super.key,\n this.background = CupertinoColors.tertiarySystemFill,\n this.capStartEdge = true,\n this.capEndEdge = true,\n }) : assert(background != null),\n assert(capStartEdge != null),\n assert(capEndEdge != null);\n\nWe want to change the color of that overlay background, so in this line:\nthis.background = CupertinoColors.tertiarySystemFill,\nWe will change its value with the Color we want, so as an example we will change it like this:\nthis.background = const Color.fromARGB(49, 0, 253, 30),\n\nand after a hot restart, the result will be:\n\n\nNotes:\n\nIf you want to use material colors like Colors.red,\nColors.green..., you will need to import the material library on\nthe top of that file.\nYou should mark the color with a const as I did in the sample of\ncode, otherwise, you will get an error.\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"flutter"
] |
stackoverflow_0074627514_flutter.txt
|
Q:
How to access Spring Cloud Function from WebClient or Postman
How can I access a Spring Cloud Function from a client application, WebClient, or Postman? The following is the working source code of a Spring Cloud Function and the curl command to access the function. I would appreciate it if anyone could provide a WebClient example that can also call this function, because the ultimate aim is to develop a Spring function to use within the client application.
Spring Cloud Function Source code
@SpringBootApplication
public class CloudFunctionApplication {
public static void main(String[] args) {
SpringApplication.run(CloudFunctionApplication.class, args);
}
@Bean
public Function<String, String> reverseString() {
return value -> new StringBuilder(value).reverse().toString();
}
}
Curl command to access:
curl localhost:8080/reverseString -H "Content-Type: text/plain" -d "World"
I have tried to access the function with the curl command. How can a client application, such as a React app, use WebClient to call the function?
A:
To access a Spring Cloud Function from a client application, you will need to use the URL of the function in your WebClient or other HTTP client. The URL of the function will typically have the format http://<host>:<port>/<functionName>.
In the case of the example Spring Cloud Function you provided, the function is named "reverseString", so the URL would be something like
http://localhost:8080/reverseString
To use the WebClient to call the function, you can use code like the following:
WebClient webClient = WebClient.create("http://localhost:8080");
Mono<String> response = webClient.post()
.uri("/reverseString")
.contentType(MediaType.TEXT_PLAIN)
.bodyValue("World")
.retrieve()
.bodyToMono(String.class);
This code creates a WebClient object that is configured to make requests to http://localhost:8080, and then uses the WebClient to make a POST request to the /reverseString endpoint with a request body of "World" and a content type of "text/plain". The response is then converted to a Mono object, which represents a reactive stream of the response data.
You can then use the 'response' object to access the data returned by the function, for example by subscribing to it and printing the result:
response.subscribe(result -> System.out.println(result));
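The same call can be made from any HTTP client. A minimal sketch with Python's standard library, assuming (as above) that the endpoint path matches the bean name reverseString:

```python
import urllib.request

def build_reverse_request(base_url: str, text: str) -> urllib.request.Request:
    """Build the plain-text POST request the curl example sends."""
    return urllib.request.Request(
        f"{base_url}/reverseString",
        data=text.encode(),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

req = build_reverse_request("http://localhost:8080", "World")
# with urllib.request.urlopen(req) as resp:   # requires the Spring app to be running
#     print(resp.read().decode())             # the reversed string
```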
I hope this helps! Let me know if you have any other questions.
Check out my bio!
|
How to access Spring Cloud Function from WebClient or Postman
|
How can I access a Spring Cloud Function from a client application, WebClient, or Postman? The following is the working source code of a Spring Cloud Function and the curl command to access the function. I would appreciate it if anyone could provide a WebClient example that can also call this function, because the ultimate aim is to develop a Spring function to use within the client application.
Spring Cloud Function Source code
@SpringBootApplication
public class CloudFunctionApplication {
public static void main(String[] args) {
SpringApplication.run(CloudFunctionApplication.class, args);
}
@Bean
public Function<String, String> reverseString() {
return value -> new StringBuilder(value).reverse().toString();
}
}
Curl command to access:
curl localhost:8080/reverseString -H "Content-Type: text/plain" -d "World"
I have tried to access the function with the curl command. How can a client application, such as a React app, use WebClient to call the function?
|
[
"To access a Spring Cloud Function from a client application, you will need to use the URL of the function in your WebClient or other HTTP client. The URL of the function will typically have the following format:\n\nIn the case of the example Spring Cloud Function you provided, the function is named \"reverseString\", so the URL would be something like\n'http://localhost:8080/reverseString.'\nTo use the WebClient to call the function, you can use code like the following:\nWebClient webClient = WebClient.create(\"http://localhost:8080\");\nMono<String> response = webClient.post()\n .uri(\"/reverseString\")\n .contentType(MediaType.TEXT_PLAIN)\n .bodyValue(\"World\")\n .retrieve()\n .bodyToMono(String.class);\n\nThis code creates a WebClient object that is configured to make requests to http://localhost:8080, and then uses the WebClient to make a POST request to the /reverseString endpoint with a request body of \"World\" and a content type of \"text/plain\". The response is then converted to a Mono object, which represents a reactive stream of the response data.\nYou can then use the 'response' object to access the data returned by the function, for example by subscribing to it and printing the result:\nresponse.subscribe(result -> System.out.println(result));\n\nI hope this helps! Let me know if you have any other questions.\nCheck out my bio!\n"
] |
[
0
] |
[] |
[] |
[
"spring_cloud_function",
"webclient"
] |
stackoverflow_0074663267_spring_cloud_function_webclient.txt
|
Q:
Error while Inserting the form data in MERN
I am new to Stack Overflow and React.js.
I have tried many other solutions to problems like this, but they didn't help.
I am trying to insert the form data from React into a MongoDB database using Node and Express,
and I am using the Fetch API to send my form data, but there are two errors in the Chrome console.
First error
POST http://localhost:3000/register 404 (Not Found)
For this error I had used
"proxy": "http://localhost:4000" in my package.json (React.js), but there is still an error.
Second error
VM18761:1 Uncaught (in promise) SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON
I don't know what this is.
Please guide me on how to tackle all of these errors.
Register.js (React.js)
import React, { useState } from 'react'
import zerotwo from "../images/07.svg"
// import { Formik, useFormik } from 'formik'
// import { Signupschema } from '../Form-Validation/Schema'
const Signup = () => {
const [user, setUser] = useState({
username: "",
email: "",
mobile: "",
password: "",
cpassword: ""
})
let name, value
const handleInput = (e) => {
name = e.target.name
value = e.target.value
setUser({ ...user, [name]: value })
}
const PostData = async (e) => {
e.preventDefault()
const { username, email, mobile, password, cpassword } = user
const res = await fetch("/register", {
method: "POST",
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json'
},
body: JSON.stringify({
username, email, mobile, password, cpassword
})
})
const data = await res.json()
if (data === 422 || !data) {
alert("not registered")
} else {
alert("Sucesssfuly")
}
}
return (
<div>
<div class=" container position-relative z-index-9">
<div class="row g-4 g-sm-5 justify-content-between">
<div class=" hero-rah mb-5 col-12 col-lg-6 d-md-flex align-items-center justify-content-center bg-opacity-10 vh-lg-100">
<div class="p-3 p-lg-5">
<div class="text-center">
<h2 class="fw-bold">Welcome to our largest community</h2>
<p class="mb-0 h6 fw-light">Let's learn something new today!</p>
</div>
<img src={zerotwo} class="mt-5" alt="" />
</div>
</div>
<div class="col-lg-6 position-relative">
<div class=" jk mt-5 bg-primary bg-opacity-10 rounded-3 p-4 p-sm-5">
<h2 class="mb-3">Register Here</h2>
<form method='POST' class="row g-4 g-sm-3 mt-2 mb-0">
<div class="col-12">
<label class="form-label">Name *</label>
<input type="text"
class="form-control"
aria-label="First name"
name='username'
value={user.username}
onChange={handleInput}
/>
{/* {errors.username && touched.username ? (<span class="badge badge-danger">{errors.username}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Email *</label>
<input type="email"
class="form-control"
name='email'
value={user.email}
onChange={handleInput}
/>
{/* {errors.email && touched.email ? (<span class="badge badge-danger">{errors.email}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Mobile number *</label>
<input type="text"
class="form-control"
aria-label="Mobile number"
name='mobile'
value={user.mobile}
onChange={handleInput}
/>
{/* {errors.mobile && touched.mobile ? (<span class="badge badge-danger">{errors.mobile}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Password *</label>
<input type="password"
class="form-control"
aria-label="password"
name='password'
value={user.password}
onChange={handleInput}
/>
{/* {errors.password && touched.password ? (<span class="badge badge-danger">{errors.password}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Confirm Password *</label>
<input type="password"
class="form-control"
aria-label="password"
name='cpassword'
value={user.cpassword}
onChange={handleInput}
/>
{/* {errors.cpassword && touched.cpassword ? (<span class="badge badge-danger">{errors.cpassword}</span>) : null} */}
</div>
<div class="col-12 d-grid">
<button onClick={PostData} type="submit" class="btn btn-lg btn-primary mb-0">Register</button>
</div>
</form>
Auth.js
const express = require("express")
const router = express()
const bcrypt = require("bcryptjs")
const jwt = require("jsonwebtoken")
require("../conn")
const User = require("../models/SignupSchema")
router.get("/", (req, res) => {
res.send("hello i am home router js")
})
router.post("/register", (req, res) => {
const { username, email, mobile, password, cpassword } = req.body
if (!username || !email || !mobile || !password || !cpassword) {
return res.status(422).json({ error: "please fill all the data" })
}
User.findOne({ email: email }).then((userExit) => {
if (userExit) {
return res.status(422).json({ error: "User is already registered" })
}
const user = new User({ username, email, mobile, password, cpassword })
user.save().then(() => {
res.status(200).json({ message: "user is registered" })
}).catch(() => {
res.status(500).json({ error: "Error while registering the user" })
})
}).catch((err) => {
console.log(err);
})
})
router.post("/login", async (req, res) => {
let token
try {
const { email, password } = req.body
if (!email || !password) {
return res.status(400).json({ message: "fill the credentials" })
}
const UserLogin = await User.findOne({ email: email })
if (UserLogin) {
const passmatch = await bcrypt.compare(password, UserLogin.password)
token = await UserLogin.generateAuthToken()
console.log(token);
res.cookie("jwt",token,{
expires : new Date(Date.now()+25892000000),
httpOnly : true
})
if (!passmatch) {
res.status(400).json({ error: "Invalid credentials" })
}
else {
res.status(200).json({ message: "sign in successfully" })
}
} else {
res.status(400).json({ error: "Invalid credentials" })
}
} catch (err) {
console.log(err);
}
})
module. Exports = router
App.js(backend)
const dotenv = require("dotenv")
const express = require("express")
const app = express()
dotenv.config({path:"./config.env"})
require("./conn")
app.use(express.json())
const PORT = process.env.PORT
app.use(require("./router/Auth"))
const middleware = (req,res,next)=>{
console.log("i am using middleware");
next();
}
app.get("/",(req,res)=>{
res.send("hello world from the server")
})
app.get("/about",middleware,(req,res)=>{
res.send("this is about page rahul")
})
app.listen(PORT,()=>{
console.log(`server is listening ${PORT}`);
})
signup schema
const mongoose = require("mongoose")
const bcrypt = require("bcryptjs")
const jwt = require("jsonwebtoken")
const SignupSchema = new mongoose.Schema({
username: {
type: String,
required: true
},
email: {
type: String,
required: true
},
mobile: {
type: String,
required: true
},
password: {
type: String,
required: true
},
cpassword: {
type: String,
required: true
},
tokens:[
{
token:{
type:String,
required:true
}
}
]
})
SignupSchema.pre("save", async function (next) {
console.log("hi i am pre");
if (this.isModified("password")) {
console.log("hi i am pre password");
this.password = await bcrypt.hash(this.password, 12)
this.cpassword = await bcrypt.hash(this.cpassword, 12)
next()
}
})
SignupSchema.methods.generateAuthToken = async function () {
try {
let token = jwt.sign({ _id:this._id},process.env.SECRET_KEY)
this.tokens = this.tokens.concat({token:token})
await this.save()
return token
} catch (err) {
console.log(err);
}
}
const Signup = mongoose.model("SIGNUP", SignupSchema)
module. Exports = Signup
I tried to insert my form data using the Fetch API in React, and I expected the data to be inserted into my database.
A:
If you are using a Node server, install body-parser (npm install body-parser) and add these lines to your server-side code:
`const bodyParser = require("body-parser");
router.use(bodyParser.json());`
If you are using an app instance (const app = express();), then you can simply use app.use(bodyParser.json());
in your code.
Also use the full URL, including the port, when calling fetch:
fetch("http://localhost:3000/register");
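The second error in the question ("Unexpected token '<'") is what JSON parsing throws when the server answers with an HTML page instead of JSON. A minimal sketch, assuming the server returned Express's default 404 page for the missing /register route:

```javascript
// res.json() on the fetch Response calls JSON.parse under the hood.
// When the POST hits a route that doesn't exist, Express answers with an
// HTML 404 page starting with "<!DOCTYPE html>", and parsing it as JSON
// throws the exact SyntaxError shown in the Chrome console.
const htmlErrorPage = "<!DOCTYPE html><html><body>Cannot POST /register</body></html>";

let parseError = null;
try {
  JSON.parse(htmlErrorPage);
} catch (err) {
  parseError = err;
}
console.log(parseError instanceof SyntaxError); // true
```

Once the route is reachable (correct port or proxy), the server returns real JSON and this error disappears.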
A:
It should be `const router = express.Router()`, not `const router = express()`, and it's `module.exports`, not `module. Exports`.
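The `module. Exports` mistake fails silently, which is why it's easy to miss. A dependency-free sketch of what actually happens (using a plain object standing in for Node's module object):

```javascript
// `module. Exports = router` is valid JavaScript — whitespace around the dot
// is ignored — but it assigns a property named "Exports" (capital E), which
// Node never reads. require() only honors the lowercase `module.exports`.
const fakeModule = { exports: {} };

fakeModule. Exports = "router";          // what the question's code does
const brokenResult = fakeModule.exports; // still {} — require() would get an empty object

fakeModule.exports = "router";           // the correct assignment
const fixedResult = fakeModule.exports;  // "router"

console.log(brokenResult, fixedResult);
```

So `app.use(require("./router/Auth"))` receives an empty object with the typo in place, and every route in Auth.js (including /register) 404s.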
|
Error while Inserting the form data in MERN
|
I am new to Stack Overflow and React.
I have tried many similar questions, but they didn't help.
I am trying to insert form data from React into a MongoDB database using Node and Express, and I am using the Fetch API to send the form data, but there are two errors in the Chrome console.
First error
POST http://localhost:3000/register 404 (Not Found)
For this error I added "proxy": "http://localhost:4000" to my package.json (React), but the error is still there.
Second error
VM18761:1 Uncaught (in promise) SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON
I don't know what this means.
Please guide me on how to tackle these errors.
Register.js (reactjs)
import React, { useState } from 'react'
import zerotwo from "../images/07.svg"
// import { Formik, useFormik } from 'formik'
// import { Signupschema } from '../Form-Validation/Schema'
const Signup = () => {
const [user, setUser] = useState({
username: "",
email: "",
mobile: "",
password: "",
cpassword: ""
})
let name, value
const handleInput = (e) => {
name = e.target.name
value = e.target.value
setUser({ ...user, [name]: value })
}
const PostData = async (e) => {
e.preventDefault()
const { username, email, mobile, password, cpassword } = user
const res = await fetch("/register", {
method: "POST",
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json'
},
body: JSON.stringify({
username, email, mobile, password, cpassword
})
})
const data = await res.json()
if (data === 422 || !data) {
alert("not registered")
} else {
alert("Sucesssfuly")
}
}
return (
<div>
<div class=" container position-relative z-index-9">
<div class="row g-4 g-sm-5 justify-content-between">
<div class=" hero-rah mb-5 col-12 col-lg-6 d-md-flex align-items-center justify-content-center bg-opacity-10 vh-lg-100">
<div class="p-3 p-lg-5">
<div class="text-center">
<h2 class="fw-bold">Welcome to our largest community</h2>
<p class="mb-0 h6 fw-light">Let's learn something new today!</p>
</div>
<img src={zerotwo} class="mt-5" alt="" />
</div>
</div>
<div class="col-lg-6 position-relative">
<div class=" jk mt-5 bg-primary bg-opacity-10 rounded-3 p-4 p-sm-5">
<h2 class="mb-3">Register Here</h2>
<form method='POST' class="row g-4 g-sm-3 mt-2 mb-0">
<div class="col-12">
<label class="form-label">Name *</label>
<input type="text"
class="form-control"
aria-label="First name"
name='username'
value={user.username}
onChange={handleInput}
/>
{/* {errors.username && touched.username ? (<span class="badge badge-danger">{errors.username}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Email *</label>
<input type="email"
class="form-control"
name='email'
value={user.email}
onChange={handleInput}
/>
{/* {errors.email && touched.email ? (<span class="badge badge-danger">{errors.email}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Mobile number *</label>
<input type="text"
class="form-control"
aria-label="Mobile number"
name='mobile'
value={user.mobile}
onChange={handleInput}
/>
{/* {errors.mobile && touched.mobile ? (<span class="badge badge-danger">{errors.mobile}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Password *</label>
<input type="password"
class="form-control"
aria-label="password"
name='password'
value={user.password}
onChange={handleInput}
/>
{/* {errors.password && touched.password ? (<span class="badge badge-danger">{errors.password}</span>) : null} */}
</div>
<div class="col-12">
<label class="form-label">Confirm Password *</label>
<input type="password"
class="form-control"
aria-label="password"
name='cpassword'
value={user.cpassword}
onChange={handleInput}
/>
{/* {errors.cpassword && touched.cpassword ? (<span class="badge badge-danger">{errors.cpassword}</span>) : null} */}
</div>
<div class="col-12 d-grid">
<button onClick={PostData} type="submit" class="btn btn-lg btn-primary mb-0">Register</button>
</div>
</form>
Auth.js
const express = require("express")
const router = express()
const bcrypt = require("bcryptjs")
const jwt = require("jsonwebtoken")
require("../conn")
const User = require("../models/SignupSchema")
router.get("/", (req, res) => {
res.send("hello i am home router js")
})
router.post("/register", (req, res) => {
const { username, email, mobile, password, cpassword } = req.body
if (!username || !email || !mobile || !password || !cpassword) {
return res.status(422).json({ error: "please fill all the data" })
}
User.findOne({ email: email }).then((userExit) => {
if (userExit) {
return res.status(422).json({ error: "User is already registered" })
}
const user = new User({ username, email, mobile, password, cpassword })
user.save().then(() => {
res.status(200).json({ message: "user is registered" })
}).catch(() => {
res.status(500).json({ error: "Error while registering the user" })
})
}).catch((err) => {
console.log(err);
})
})
router.post("/login", async (req, res) => {
let token
try {
const { email, password } = req.body
if (!email || !password) {
return res.status(400).json({ message: "fill the credentials" })
}
const UserLogin = await User.findOne({ email: email })
if (UserLogin) {
const passmatch = await bcrypt.compare(password, UserLogin.password)
token = await UserLogin.generateAuthToken()
console.log(token);
res.cookie("jwt",token,{
expires : new Date(Date.now()+25892000000),
httpOnly : true
})
if (!passmatch) {
res.status(400).json({ error: "Invalid credentials" })
}
else {
res.status(200).json({ message: "sign in successfully" })
}
} else {
res.status(400).json({ error: "Invalid credentials" })
}
} catch (err) {
console.log(err);
}
})
module. Exports = router
App.js(backend)
const dotenv = require("dotenv")
const express = require("express")
const app = express()
dotenv.config({path:"./config.env"})
require("./conn")
app.use(express.json())
const PORT = process.env.PORT
app.use(require("./router/Auth"))
const middleware = (req,res,next)=>{
console.log("i am using middleware");
next();
}
app.get("/",(req,res)=>{
res.send("hello world from the server")
})
app.get("/about",middleware,(req,res)=>{
res.send("this is about page rahul")
})
app.listen(PORT,()=>{
console.log(`server is listening ${PORT}`);
})
signup schema
const mongoose = require("mongoose")
const bcrypt = require("bcryptjs")
const jwt = require("jsonwebtoken")
const SignupSchema = new mongoose.Schema({
username: {
type: String,
required: true
},
email: {
type: String,
required: true
},
mobile: {
type: String,
required: true
},
password: {
type: String,
required: true
},
cpassword: {
type: String,
required: true
},
tokens:[
{
token:{
type:String,
required:true
}
}
]
})
SignupSchema.pre("save", async function (next) {
console.log("hi i am pre");
if (this.isModified("password")) {
console.log("hi i am pre password");
this.password = await bcrypt.hash(this.password, 12)
this.cpassword = await bcrypt.hash(this.cpassword, 12)
next()
}
})
SignupSchema.methods.generateAuthToken = async function () {
try {
let token = jwt.sign({ _id:this._id},process.env.SECRET_KEY)
this.tokens = this.tokens.concat({token:token})
await this.save()
return token
} catch (err) {
console.log(err);
}
}
const Signup = mongoose.model("SIGNUP", SignupSchema)
module. Exports = Signup
I tried to insert my form data using the Fetch API in React, and I expected the data to be inserted into my database.
|
[
"If you are using node server, install this npm install body-parser and add these lines in your server-side code const\n`bodyParser = require(\"body-parser\"); \n router.use(bodyParser.json());`\n\nif you are using app const app = express(); then you can simply use app.use(bodyParser.json());\nin your code,\nalso use the full port name while fetch.\nfetch(\"http://localhost:3000/register\");\n\n",
"it should be const router = express.Router() not router = express() and it's module.exports not module. Exports.\n"
] |
[
0,
0
] |
[] |
[] |
[
"express",
"mongodb",
"node.js",
"reactjs"
] |
stackoverflow_0074644664_express_mongodb_node.js_reactjs.txt
|
Q:
How do you use OpenAI Gym 'wrappers' with a custom Gym environment in Ray Tune?
How do you use OpenAI Gym 'wrappers' with a custom Gym environment in Ray Tune?
Let's say I built a Python class called CustomEnv (similar to the 'CartPoleEnv' class used to create the OpenAI Gym "CartPole-v1" environment) to create my own (custom) reinforcement learning environment, and I am using tune.run() from Ray Tune (in Ray 2.1.0 with Python 3.9.15) to train an agent in my environment using the 'PPO' algorithm:
import ray
from ray import tune
tune.run(
"PPO", # 'PPO' algorithm
config={"env": CustomEnv, # custom class used to create an environment
"framework": "tf2",
"evaluation_interval": 100,
"evaluation_duration": 100,
},
checkpoint_freq = 100, # Save checkpoint at every evaluation
local_dir=checkpoint_dir, # Save results to a local directory
        stop={"episode_reward_mean": 250}, # Stopping criterion
)
This works fine, and I can use TensorBoard to monitor training progress, etc., but as it turns out, learning is slow, so I want to try using 'wrappers' from Gym to scale observations, rewards, and/or actions, limit variance, and speed-up learning. So I've got an ObservationWrapper, a RewardWrapper, and an ActionWrapper to do that--for example, something like this (the exact nature of the scaling is not central to my question):
import gym
class ObservationWrapper(gym.ObservationWrapper):
def __init__(self, env):
super().__init__(env)
self.o_min = 0.
self.o_max = 5000.
def observation(self, ob):
# Normalize observations
ob = (ob - self.o_min)/(self.o_max - self.o_min)
return ob
class RewardWrapper(gym.RewardWrapper):
def __init__(self, env):
super().__init__(env)
self.r_min = -500
self.r_max = 100
def reward(self, reward):
# Scale rewards:
reward = reward/(self.r_max - self.r_min)
return reward
class ActionWrapper(gym.ActionWrapper):
def __init__(self, env):
super().__init__(env)
def action(self, action):
# Scale actions
action = action/10
return action
Wrappers like these work fine with my custom class when I create an instance of the class on my local machine and use it in traditional training loops, like this:
from my_file import CustomEnv
env = CustomEnv()
wrapped_env = ObservationWrapper(RewardWrapper(ActionWrapper(env)))
episodes = 10
for episode in range(1,episodes+1):
obs = wrapped_env.reset()
done = False
score = 0
while not done:
action = wrapped_env.action_space.sample()
obs, reward, done, info = wrapped_env.step(action)
score += reward
print(f'Episode: {episode}, Score: {score:.3f}')
My question is: How can I use wrappers like these with my custom class (CustomEnv) and ray.tune()? This particular method expects the value for "env" to be passed either (1) as a class (such as CustomEnv) or (2) as a string associated with a registered Gym environment (such as "CartPole-v1"), as I found out while trying various incorrect ways to pass a wrapped version of my custom class:
ValueError: >>> is an invalid env specifier. You can specify a custom env as either a class (e.g., YourEnvCls) or a registered env id (e.g., "your_env").
So I am not sure how to do it (assuming it is possible). I would prefer to solve this problem without having to register my custom Gym environment, but I am open to any solution.
In learning about wrappers, I leveraged mostly 'Getting Started With OpenAI Gym: The Basic Building Blocks' by Ayoosh Kathuria, and 'TF 2.0 for Reinforcement Learning: Gym Wrappers'.
A:
I was able to answer my own question about how to get Ray's tune.run() to work with a wrapped custom class for a Gym environment. The documentation for Ray Environments was helpful.
The solution was to register the custom class through Ray. Assuming you have defined your Gym wrappers (classes) as discussed above, it works like this:
from ray.tune.registry import register_env
from your_file import CustomEnv # import your custom class
def env_creator(env_config):
# wrap and return an instance of your custom class
return ObservationWrapper(RewardWrapper(ActionWrapper(CustomEnv())))
# Choose a name and register your custom environment
register_env('WrappedCustomEnv-v0', env_creator)
Now, in tune.run(), you can submit the name of the registered instance as you would any other registered Gym environment:
import ray
from ray import tune
tune.run(
"PPO", # 'PPO' algorithm (for example)
config={"env": "WrappedCustomEnv-v0", # the registered instance
#other options here as desired
},
# other options here as desired
)
tune.run() will work with no errors--problem solved!
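The env_creator above composes the three wrappers inside-out before handing the result to Ray. Here is a dependency-free sketch of that composition pattern (plain Python classes standing in for gym/ray; the numbers mirror the wrappers from the question):

```python
# Each wrapper holds the inner env and rescales one part of the step output.
# No gym or ray imports needed — this only illustrates the composition that
# env_creator performs before register_env hands the factory to Ray.
class BaseEnv:
    def step(self, action):
        return {"obs": 1000.0, "reward": -250.0}

class ObsWrapper:
    def __init__(self, env, o_min=0.0, o_max=5000.0):
        self.env, self.o_min, self.o_max = env, o_min, o_max
    def step(self, action):
        out = self.env.step(action)
        out["obs"] = (out["obs"] - self.o_min) / (self.o_max - self.o_min)
        return out

class RewWrapper:
    def __init__(self, env, r_min=-500.0, r_max=100.0):
        self.env, self.r_min, self.r_max = env, r_min, r_max
    def step(self, action):
        out = self.env.step(action)
        out["reward"] = out["reward"] / (self.r_max - self.r_min)
        return out

def env_creator(env_config=None):
    # Same inside-out composition as the registered factory above.
    return ObsWrapper(RewWrapper(BaseEnv()))

result = env_creator().step(0)
print(result)  # obs normalized to [0, 1], reward divided by the reward range
```

Because the wrapping happens inside the factory, every Ray worker builds an identically wrapped environment from the single registered name.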
|
How do you use OpenAI Gym 'wrappers' with a custom Gym environment in Ray Tune?
|
How do you use OpenAI Gym 'wrappers' with a custom Gym environment in Ray Tune?
Let's say I built a Python class called CustomEnv (similar to the 'CartPoleEnv' class used to create the OpenAI Gym "CartPole-v1" environment) to create my own (custom) reinforcement learning environment, and I am using tune.run() from Ray Tune (in Ray 2.1.0 with Python 3.9.15) to train an agent in my environment using the 'PPO' algorithm:
import ray
from ray import tune
tune.run(
"PPO", # 'PPO' algorithm
config={"env": CustomEnv, # custom class used to create an environment
"framework": "tf2",
"evaluation_interval": 100,
"evaluation_duration": 100,
},
checkpoint_freq = 100, # Save checkpoint at every evaluation
local_dir=checkpoint_dir, # Save results to a local directory
        stop={"episode_reward_mean": 250}, # Stopping criterion
)
This works fine, and I can use TensorBoard to monitor training progress, etc., but as it turns out, learning is slow, so I want to try using 'wrappers' from Gym to scale observations, rewards, and/or actions, limit variance, and speed-up learning. So I've got an ObservationWrapper, a RewardWrapper, and an ActionWrapper to do that--for example, something like this (the exact nature of the scaling is not central to my question):
import gym
class ObservationWrapper(gym.ObservationWrapper):
def __init__(self, env):
super().__init__(env)
self.o_min = 0.
self.o_max = 5000.
def observation(self, ob):
# Normalize observations
ob = (ob - self.o_min)/(self.o_max - self.o_min)
return ob
class RewardWrapper(gym.RewardWrapper):
def __init__(self, env):
super().__init__(env)
self.r_min = -500
self.r_max = 100
def reward(self, reward):
# Scale rewards:
reward = reward/(self.r_max - self.r_min)
return reward
class ActionWrapper(gym.ActionWrapper):
def __init__(self, env):
super().__init__(env)
def action(self, action):
# Scale actions
action = action/10
return action
Wrappers like these work fine with my custom class when I create an instance of the class on my local machine and use it in traditional training loops, like this:
from my_file import CustomEnv
env = CustomEnv()
wrapped_env = ObservationWrapper(RewardWrapper(ActionWrapper(env)))
episodes = 10
for episode in range(1,episodes+1):
obs = wrapped_env.reset()
done = False
score = 0
while not done:
action = wrapped_env.action_space.sample()
obs, reward, done, info = wrapped_env.step(action)
score += reward
print(f'Episode: {episode}, Score: {score:.3f}')
My question is: How can I use wrappers like these with my custom class (CustomEnv) and ray.tune()? This particular method expects the value for "env" to be passed either (1) as a class (such as CustomEnv) or (2) as a string associated with a registered Gym environment (such as "CartPole-v1"), as I found out while trying various incorrect ways to pass a wrapped version of my custom class:
ValueError: >>> is an invalid env specifier. You can specify a custom env as either a class (e.g., YourEnvCls) or a registered env id (e.g., "your_env").
So I am not sure how to do it (assuming it is possible). I would prefer to solve this problem without having to register my custom Gym environment, but I am open to any solution.
In learning about wrappers, I leveraged mostly 'Getting Started With OpenAI Gym: The Basic Building Blocks' by Ayoosh Kathuria, and 'TF 2.0 for Reinforcement Learning: Gym Wrappers'.
|
[
"I was able to answer my own question about how to get Ray's tune.run() to work with a wrapped custom class for a Gym environment. The documentation for Ray Environments was helpful.\nThe solution was to register the custom class through Ray. Assuming you have defined your Gym wrappers (classes) as discussed above, it works like this:\nfrom ray.tune.registry import register_env\nfrom your_file import CustomEnv # import your custom class\n\ndef env_creator(env_config):\n # wrap and return an instance of your custom class\n return ObservationWrapper(RewardWrapper(ActionWrapper(CustomEnv())))\n\n# Choose a name and register your custom environment\nregister_env('WrappedCustomEnv-v0', env_creator)\n\nNow, in tune.run(), you can submit the name of the registered instance as you would any other registered Gym environment:\nimport ray\nfrom ray import tune\n\ntune.run(\n \"PPO\", # 'PPO' algorithm (for example)\n config={\"env\": \"WrappedCustomEnv-v0\", # the registered instance\n #other options here as desired\n },\n # other options here as desired\n )\n\ntune.run() will work with no errors--problem solved!\n"
] |
[
0
] |
[] |
[] |
[
"openai_gym",
"python",
"ray",
"tensorflow"
] |
stackoverflow_0074637712_openai_gym_python_ray_tensorflow.txt
|
Q:
C# Error CS0426 coming up when it shouldn't be VS2022
I'm getting the error
CS0426 "the type name 'GetRequest' does not exist in the type 'Request'
when I'm quite sure that it does. I don't understand what I possibly did wrong here. Here's the code I'm working with:
public class Request
{
public static Request GetRequest()
{
return null;
}
}
In this class you can clearly see that GetRequest does in fact exist within the type 'Request'. However, when I try to use this method, I get an error saying that it doesn't exist.
This line generates the error I have been getting:
Request req = new Request.GetRequest();
A:
The error you are seeing is because the GetRequest method is not a constructor for the Request class, but rather a static method. In order to call this method, you need to use the class name instead of the new keyword, like this:
Request req = Request.GetRequest(msg);
Alternatively, you can create a new instance of the Request class using the constructor, and then call the GetRequest method on that instance, like this:
Request request = new Request();
Request req = request.GetRequest(msg);
Note that in this case, you will need to modify the GetRequest method to make it non-static, like this:
public Request GetRequest(String request)
{
if (string.IsNullOrEmpty(request)) { return null; } //return null if the string is empty or nothing
String[] tokens = request.Split(' ');
String type = tokens[0];
String url = tokens[1];
String host = tokens[4];
return new Request(type, url, host);
}
A:
new Request.GetRequest(msg);
You're trying to create an instance of the type Request.GetRequest here, which in fact does not exist, what you probably wanted is Request.GetRequest(msg) calling the static method GetRequest in the Request class.
|
C# Error CS0426 coming up when it shouldn't be VS2022
|
I'm getting the error
CS0426 "the type name 'GetRequest' does not exist in the type 'Request'
when I'm quite sure that it does. I don't understand what I possibly did wrong here. Here's the code I'm working with:
public class Request
{
public static Request GetRequest()
{
return null;
}
}
In this class you can clearly see that GetRequest does in fact exist within the type 'Request'. However, when I try to use this method, I get an error saying that it doesn't exist.
This line generates the error I have been getting:
Request req = new Request.GetRequest();
|
[
"The error you are seeing is because the GetRequest method is not a constructor for the Request class, but rather a static method. In order to call this method, you need to use the class name instead of the new keyword, like this:\nRequest req = Request.GetRequest(msg);\n\nAlternatively, you can create a new instance of the Request class using the constructor, and then call the GetRequest method on that instance, like this:\nRequest request = new Request();\nRequest req = request.GetRequest(msg);\n\nNote that in this case, you will need to modify the GetRequest method to make it non-static, like this:\npublic Request GetRequest(String request)\n{\n if (string.IsNullOrEmpty(request)) { return null; } //return null if the string is empty or nothing\n\n String[] tokens = request.Split(' ');\n String type = tokens[0];\n String url = tokens[1];\n String host = tokens[4];\n\n return new Request(type, url, host);\n}\n\n",
"new Request.GetRequest(msg);\nYou're trying to create an instance of the type Request.GetRequest here, which in fact does not exist, what you probably wanted is Request.GetRequest(msg) calling the static method GetRequest in the Request class.\n"
] |
[
2,
1
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0074663274_c#.txt
|
Q:
How to calculate relevance in Elasticsearch based on associated documents
Main question:
I have one data type, let's call them People, and another associated data type, let's say Reports. A person can have many reports associated with them and vice versa in our relational database. These reports can be pretty long, sometimes over 1000 words mostly in English.
We want to be able to search people by a keyword, where the results are the people whose reports are most relevant to the keyword. For example if Person A's reports mention "art director" a lot more than any other person, we want Person A to show up high in the search results if someone searched "art director".
More details:
The key thing here, is that we don't want to combine all the reports together and add them as a field for the Person model. I think that with 100,000s of our People records and 1,000,000s of long reports, this would make the index super big. (And I think there might be limits on how long the text of a field can be.)
The reports are also indexed on their own, so that people can do full text searches of all the reports without considering the People records. This works great already.
To avoid these large and kind of redundant indexes, I want to use the Elasticsearch query language to search "through" the Person record to its associated reports.
Is this possible, and if so how?
P.S. I am using the Searchkick Ruby gem to generate the Elasticsearch queries through an API. But I can also use the Elasticsearch DSL directly if necessary.
A:
Answering to your questions.
1.(...) we want Person A to show up high in the search results if someone searched "art director".
That's exactly what Elasticsearch does, so I would recommend you to start with a simple match query:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html
From there you can start adding up more complexity.
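As a starting point, a match query of the kind recommended above might look like this (the `reports` index and `report_text` field names are assumptions for illustration):

```json
{
  "query": {
    "match": {
      "report_text": "art director"
    }
  }
}
```

Sent to `/reports/_search`, this returns reports ranked by TF-IDF relevance for the keyword, which the structures discussed below can then roll up to the person level.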
Elasticsearch uses TF-IDF which means:
TF (Term Frequency): The more frequent a term is within a document, the more relevant it is.
IDF (Inverse Document Frequency): The more frequent a term is across the entire dataset, the less relevant it is.
2.(...) To avoid these large and kind of redundant indexes, I want to use the Elasticsearch query language to search "through" the Person record to its associated reports.
You are right. The recommendation is not to index a book as a single field, but to index the different chapters/pages/etc. as documents.
https://www.elastic.co/guide/en/elasticsearch/reference/current/general-recommendations.html
There are some structures you can use. Which one to use will depend on the scale of your data and how you want to show this data to your users.
The structures are:
Joined field type (parent=author child=report pages)
Nested field type (array of report pages within an author)
Collapsed results (each doc being a book page, collapse by author)
We can discuss at length which one is best, but I invite you to try them yourself.
Some guidelines:
If reports greatly outnumber authors, you can use the joined field type.
|
How to calculate relevance in Elasticsearch based on associated documents
|
Main question:
I have one data type, let's call them People, and another associated data type, let's say Reports. A person can have many reports associated with them and vice versa in our relational database. These reports can be pretty long, sometimes over 1000 words mostly in English.
We want to be able to search people by a keyword, where the results are the people whose reports are most relevant to the keyword. For example if Person A's reports mention "art director" a lot more than any other person, we want Person A to show up high in the search results if someone searched "art director".
More details:
The key thing here, is that we don't want to combine all the reports together and add them as a field for the Person model. I think that with 100,000s of our People records and 1,000,000s of long reports, this would make the index super big. (And I think there might be limits on how long the text of a field can be.)
The reports are also indexed on their own, so that people can do full text searches of all the reports without considering the People records. This works great already.
To avoid these large and kind of redundant indexes, I want to use the Elasticsearch query language to search "through" the Person record to its associated reports.
Is this possible, and if so how?
P.S. I am using the Searchkick Ruby gem to generate the Elasticsearch queries through an API. But I can also use the Elasticsearch DSL directly if necessary.
|
[
"Answering to your questions.\n1.(...) we want Person A to show up high in the search results if someone searched \"art director\".\nThat's exactly what Elasticsearch does, so I would recommend you to start with a simple match query:\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html\nFrom there you can start adding up more complexity.\nElasticsearch uses TF-IDF which means:\nTF(Term Frequency): The most frequent a term is within a document, more relevant it is.\nIDF(Inverse Document Frequency): The most frequent a term is across the entire dataset the less relevant it is.\n2.(...) To avoid these large and kind of redundant indexes, I want to use the Elasticsearch query language to search \"through\" the Person record to its associated reports.\nYou are right. The recommendation is not indexing a book as a field, but index the different chapter/pages/etc.. as documents.\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/general-recommendations.html\nThere are some structures you can use. Which one to use will depend on how big is the scale of your data, en how do you want to show this data to your users.\nThe structures are:\n\nJoined field type (parent=author child=report pages)\nNested field type (array of report pages within an author)\nCollapsed results (each doc being a book page, collapse by author)\n\nWe can discuss a lot about the best one, but I invite you to try yourself.\nSome guidelines:\nIf the number of reports outnumber for a lot to the author you can use joined field type.\n"
] |
[
0
] |
[] |
[] |
[
"elasticsearch",
"searchkick"
] |
stackoverflow_0074662851_elasticsearch_searchkick.txt
|
Q:
Key "uri" in the image picker result is deprecated and will be removed in SDK 48, you can access selected assets through the "assets" array instead
I am working on an image picker in React Native. I'm getting the warning "Key "uri" in the image picker result is deprecated and will be removed in SDK 48, you can access selected assets through the "assets" array instead" on both the Android emulator and iOS. How can I overcome this?
const selectImage = async () =>{
try {
const result = await ImagePicker.launchImageLibraryAsync({
mediaTypes: ImagePicker.MediaTypeOptions.Images,
allowsEditing: true,
aspect: [4, 3],
quality: 0.5
});
if(!result.canceled){
setImage(result.uri)
saveToFile();
}else Alert.alert('Delete', 'Are you sure you want to delte the image', [
{text:"Yes", onPress:()=> setImage(null)},{text:"No"} ])
} catch (error) {
console.log("error reading an image")
}
}
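The warning says the fix: read the URI from the assets array instead of the deprecated top-level key. A minimal sketch of the new result shape (the field values here are illustrative assumptions, not real picker output):

```javascript
// Sketch of the picker result in recent Expo SDKs: selected files live in
// result.assets (an array, since multiple selection is possible), while the
// top-level result.uri is the deprecated alias being removed in SDK 48.
const result = {
  canceled: false,
  assets: [{ uri: "file:///tmp/photo.jpg", width: 400, height: 300 }],
};

let pickedUri = null;
if (!result.canceled) {
  pickedUri = result.assets[0].uri; // use this instead of result.uri
}
console.log(pickedUri);
```

In the question's code, replacing `setImage(result.uri)` with `setImage(result.assets[0].uri)` silences the deprecation warning.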
|
Key "uri" in the image picker result is deprecated and will be removed in SDK 48, you can access selected assets through the "assets" array instead
|
I am working on an image picker in React Native. I'm getting the warning "Key "uri" in the image picker result is deprecated and will be removed in SDK 48, you can access selected assets through the "assets" array instead" on both the Android emulator and iOS. How can I overcome this?
const selectImage = async () =>{
try {
const result = await ImagePicker.launchImageLibraryAsync({
mediaTypes: ImagePicker.MediaTypeOptions.Images,
allowsEditing: true,
aspect: [4, 3],
quality: 0.5
});
if(!result.canceled){
setImage(result.uri)
saveToFile();
}else Alert.alert('Delete', 'Are you sure you want to delte the image', [
{text:"Yes", onPress:()=> setImage(null)},{text:"No"} ])
} catch (error) {
console.log("error reading an image")
}
}
A:
It looks like the uri property of the result object returned by launchImageLibraryAsync is deprecated and will be removed in the future. Instead, you should access the selected image(s) through the assets property of the result object.
Here is an example of how you can update your code to use the assets property:
const selectImage = async () => {
  try {
    const result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true,
      aspect: [4, 3],
      quality: 0.5
    });

    if (!result.canceled) {
      // Use the first selected image
      const selectedImage = result.assets[0];

      // Update the state with the selected image's URI
      setImage(selectedImage.uri);

      // You can also use the selected image to save it to a file
      saveToFile(selectedImage);
    } else {
      Alert.alert(
        "Delete",
        "Are you sure you want to delete the image",
        [
          { text: "Yes", onPress: () => setImage(null) },
          { text: "No" }
        ]
      );
    }
  } catch (error) {
    console.log("error reading an image");
  }
};

In this code, we access the selected image(s) through the assets property of the result object, then use the first image in the assets array to update the state and save it to a file. You can adjust this code to fit your needs and handle multiple selected images if necessary.
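During the migration window it can help to read the URI defensively from either result shape. The sketch below assumes only the fields mentioned in the warning (canceled, the new assets array, the deprecated uri); firstAssetUri is a hypothetical helper name, not part of the Expo API:

```javascript
// Return the first selected image URI from an ImagePicker result,
// supporting both the new `assets` array and the deprecated `uri` field.
function firstAssetUri(result) {
  if (!result || result.canceled) return null; // nothing selected
  if (Array.isArray(result.assets) && result.assets.length > 0) {
    return result.assets[0].uri ?? null;       // new shape
  }
  return result.uri ?? null;                   // legacy shape
}

console.log(firstAssetUri({ canceled: false, assets: [{ uri: 'file:///a.jpg' }] })); // file:///a.jpg
console.log(firstAssetUri({ canceled: true })); // null
```

With this, setImage(firstAssetUri(result)) works unchanged whichever shape the SDK returns.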
|
Q:
Get indicators on 1 hour daily time frame in correlation to daily timeframe
On a 1-hour timeframe I am trying to study the last four daily candles to determine the trend, so I can create an indicator on the 1-hour chart telling me whether to sell or buy the stock.
How do I get the historical data for the series returned by the security function at 1D resolution, for all four values: close, open, high and low?
For every index into the series returned by the security function, I get the same value back.
Below is the code:
//@version=5
indicator("My Script", overlay=true)
indexHighTf = barstate.isrealtime ? 1:0
indexCurrTf = barstate.isrealtime ? 0:1
d_timeframe_Close = request.security(syminfo.tickerid, "D", close[indexHighTf])[indexCurrTf]
d_timeframe_Open = request.security(syminfo.tickerid, "D", open[indexHighTf])[indexCurrTf]
d_timeframe_High = request.security(syminfo.tickerid, "D", high[indexHighTf])[indexCurrTf]
d_timeframe_Low = request.security(syminfo.tickerid, "D", low[indexHighTf])[indexCurrTf]
down_lowest = 0.00
down_id_lowest = 0.00
up_list = array.new_int(0)
up_highest = 0.00
up_id_highest = 0
up_low_of_heighest = 0.00
candle_start = 4
candle_end = 0
sell_alert = 0
if (d_timeframe_Open[candle_start]<d_timeframe_Close[candle_end])
count = 0
for i = candle_end to candle_start
if (d_timeframe_Open[i]<d_timeframe_Close[i])
if (count == 0)
count := count+1
up_highest := d_timeframe_High[i]
if (up_highest <= d_timeframe_High[i])
array.push(up_list, i)
up_highest := d_timeframe_High[i]
up_id_highest := i
up_low_of_heighest := d_timeframe_Low[up_id_highest]
if (open[0] > up_low_of_heighest and close[0] < up_low_of_heighest)
sell_alert := 1
plotshape(sell_alert, style=shape.triangledown, location=location.belowbar, text="Sell")
A:
I'm not sure what your goal is, but this may help:
// © BlueJayBird
//@version=5
indicator("My script", overlay = true)
fNoRepaint(timeframe, expression) =>
request.security(symbol = syminfo.tickerid, timeframe = timeframe, expression = expression[1], lookahead = barmerge.lookahead_on)
requestOpen = fNoRepaint('D', open)
plot(requestOpen, 'Open Requested', color.orange, linewidth = 2, show_last = 24 * 4)
requestHigh = fNoRepaint('D', high)
plot(requestHigh, 'High Requested', color.red, linewidth = 2, show_last = 24 * 4)
requestLow = fNoRepaint('D', low)
plot(requestLow, 'Low Requested', color.green, linewidth = 2, show_last = 24 * 4)
This renders the requested daily open, high and low as lines over the last four days of hourly candles.
Keep in mind:
The line moves with the last hourly candle, so it advances the entire 4 * 24 window with the latest candle, somewhat trimming the line.
I'm using the fNoRepaint function for avoiding repaint in the request.
And:
Note that what you are receiving is a series of values, not just ONE value. The for loop in your code is probably not necessary. Just compare the conditions you need and plot the shape you want.
close and close[0] return the same value.
I may be able to provide some help with the condition if you provide a written explanation of what you want to verify.
Q:
How to make an HTML countdown timer that resets on reload?
How would I go about making a countdown clock using only HTML and JS? Problem is, there are a few features that are required:
• It must reset on a page reload
• It must count down from a certain number of hours from page load (i.e. not a universal time)
I realize these are a lot of demands, but any hints/tips/advice will help. Thanks in advance :]
I've tried some online timers, but they are universal and don't reset on a page reload.
A:
To create an HTML countdown timer that resets on reload, you can use JavaScript to track the time remaining and update the countdown display. Here is an example of how you could implement this:
<p id="timer">0:00</p>
<script>
// Set the date we're counting down to PLUS one day for example
var countDownDate = new Date(new Date().getTime() + (24 * 60 * 60 * 1000));
// Update the count down every 1 second
var x = setInterval(function() {
// Get today's date and time
var now = new Date().getTime();
// Find the distance between now and the count down date
var distance = countDownDate - now;
// Time calculations for days, hours, minutes and seconds
var days = Math.floor(distance / (1000 * 60 * 60 * 24));
var hours = Math.floor((distance % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60));
var minutes = Math.floor((distance % (1000 * 60 * 60)) / (1000 * 60));
var seconds = Math.floor((distance % (1000 * 60)) / 1000);
// Display the result in the element with id="timer"
document.getElementById("timer").innerHTML = days + "d " + hours + "h "
+ minutes + "m " + seconds + "s ";
// If the count down is finished, write some text
if (distance < 0) {
clearInterval(x);
document.getElementById("timer").innerHTML = "EXPIRED";
}
}, 1000);
</script>
This code sets a countdown date and then updates the countdown display every second. When the countdown is finished, it displays the text "EXPIRED".
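The Math.floor arithmetic in the callback can be pulled out into a small pure function, which makes the conversion easy to verify on its own (a sketch; decompose is a hypothetical helper name):

```javascript
// Decompose a millisecond distance into days/hours/minutes/seconds,
// mirroring the Math.floor arithmetic used in the countdown above.
function decompose(distanceMs) {
  const days = Math.floor(distanceMs / (1000 * 60 * 60 * 24));
  const hours = Math.floor((distanceMs % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60));
  const minutes = Math.floor((distanceMs % (1000 * 60 * 60)) / (1000 * 60));
  const seconds = Math.floor((distanceMs % (1000 * 60)) / 1000);
  return { days, hours, minutes, seconds };
}

// 1 day + 1 hour + 1 minute + 1 second = 90061000 ms
console.log(decompose(90061000)); // { days: 1, hours: 1, minutes: 1, seconds: 1 }
```

Inside the setInterval callback you would then render decompose(distance) instead of repeating the four divisions inline.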
A:
I don't know what you've tried. Here's one.
(Although, to be honest, I probably shouldn't answer a poorly written question.)
<div id="countdown"></div>
<script> {
let remaining = { hours: 1, minutes: 0, seconds: 5 }
displayRemainingTime();
const countdownInterval =
setInterval( () => {
--remaining.seconds;
if( remaining.seconds < 0 ) {
--remaining.minutes;
if( remaining.minutes < 0 ) {
--remaining.hours;
if( remaining.hours < 0 ) {
clearInterval( countdownInterval );
displayRemainingTime();
} else {
remaining.minutes = 59;
remaining.seconds = 59;
}
} else {
remaining.seconds = 59;
}
}
displayRemainingTime();
}, 1000 );
function displayTimerFinished() {
document.getElementById( "countdown" ).textContent = "The timer has finished!";
}
function displayRemainingTime() {
const hoursDigit = remaining.hours < 10 ? "0" : "",
minutesDigit = remaining.minutes < 10 ? "0" : "",
secondsDigit = remaining.seconds < 10 ? "0" : "";
document.getElementById( "countdown" ).textContent =
`${hoursDigit + remaining.hours}:${minutesDigit + remaining.minutes}:${secondsDigit + remaining.seconds}`;
}
} </script>
|
How to make an HTML countdown timer that resets on reload?
|
How would I go about making a countdown clock using only HTML and JS? Problem is, there are a few features that are required:
• It must reset on a page reload
• It must count down from a certain number of hours from page load (i.e. not a universal time)
I realize these are a lot of demands, but anything hints/tips/advice will help. Thanks in advance :]
I've tried some online timers, but they are universal and don't reset on a page reload.
|
[
"To create an HTML countdown timer that resets on reload, you can use JavaScript to track the time remaining and update the countdown display. Here is an example of how you could implement this:\n\n\n<p id=\"timer\">0:00</p>\n\n<script>\n // Set the date we're counting down to PLUS one day for example\n var countDownDate = new Date(new Date().getTime() + (24 * 60 * 60 * 1000));\n\n // Update the count down every 1 second\n var x = setInterval(function() {\n\n // Get today's date and time\n var now = new Date().getTime();\n\n // Find the distance between now and the count down date\n var distance = countDownDate - now;\n\n // Time calculations for days, hours, minutes and seconds\n var days = Math.floor(distance / (1000 * 60 * 60 * 24));\n var hours = Math.floor((distance % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60));\n var minutes = Math.floor((distance % (1000 * 60 * 60)) / (1000 * 60));\n var seconds = Math.floor((distance % (1000 * 60)) / 1000);\n\n // Display the result in the element with id=\"timer\"\n document.getElementById(\"timer\").innerHTML = days + \"d \" + hours + \"h \"\n + minutes + \"m \" + seconds + \"s \";\n\n // If the count down is finished, write some text\n if (distance < 0) {\n clearInterval(x);\n document.getElementById(\"timer\").innerHTML = \"EXPIRED\";\n }\n}, 1000);\n</script>\n\n\n\nThis code sets a countdown date and then updates the countdown display every second. When the countdown is finished, it displays the text \"EXPIRED\".\n",
"I don't know what you've tried. Here's one.\n(Although, to be honest, I probably shouldn't answer a poorly written question.)\n\n<div id=\"countdown\"></div>\n<script> {\n let remaining = { hours: 1, minutes: 0, seconds: 5 }\n \n displayRemainingTime();\n\n const countdownInterval = \n setInterval( () => {\n\n --remaining.seconds;\n\n if( remaining.seconds < 0 ) {\n\n --remaining.minutes;\n\n if( remaining.minutes < 0 ) {\n\n --remaining.hours;\n\n if( remaining.hours < 0 ) {\n\n clearInterval( countdownInterval );\n displayRemainingTime();\n\n } else {\n\n remaining.minutes = 59;\n remaining.seconds = 59;\n\n }\n\n\n } else {\n\n remaining.seconds = 59;\n\n }\n\n }\n\n displayRemainingTime();\n\n }, 1000 );\n\n function displayTimerFinished() {\n\n document.getElementById( \"countdown\" ).textContent = \"The timer has finished!\";\n\n }\n\n function displayRemainingTime() {\n\n const hoursDigit = remaining.hours < 10 ? \"0\" : \"\",\n minutesDigit = remaining.minutes < 10 ? \"0\" : \"\",\n secondsDigit = remaining.seconds < 10 ? \"0\" : \"\";\n\n document.getElementById( \"countdown\" ).textContent =\n `${hoursDigit + remaining.hours}:${minutesDigit + remaining.minutes}:${secondsDigit + remaining.seconds}`;\n\n }\n\n} </script>\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"countdown",
"html",
"javascript",
"timer"
] |
stackoverflow_0074663179_countdown_html_javascript_timer.txt
|
Q:
ncurses and key codes after fork
I don't understand why the arrow keys code changes after forking in a WINDOW.
The up arrow returns 259, but after the fork it returns 65.
If I run the same program on stdscr, it returns 65 from the very beginning.
Thanks for the help and sorry for the english (translated by Google).
#include <curses.h>
#include <sys/ioctl.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
void openVim() {
pid_t pid = fork();
if (pid < 0) {}
else if (pid == 0) {
execl("/usr/bin/vim", "/usr/bin/vim", NULL);
exit(0);
}
else {
wait(NULL);
}
}
int main() {
initscr();
noecho();
int ch = 0;
WINDOW* mainWin = newwin(10,10,0,0);
keypad(mainWin, TRUE);
while ((ch = wgetch(mainWin)) != 'q') {
wclear(mainWin);
if (ch == 'V') openVim();
else
mvwprintw(mainWin, 0, 0, "%i - %c", ch, ch);
wrefresh(mainWin);
}
delwin(mainWin);
endwin();
return 0;
}
I noticed that if I put a simple for loop in the child process instead, it doesn't happen. It probably has to do with execl?
A:
vim resets the terminal I/O mode; your program doesn't account for that (see reset_prog_mode and reset_shell_mode).
The NCURSES Programming HOWTO section Temporarily Leaving Curses mode also discusses this issue.
|
ncurses and key codes after fork
|
I don't understand why the arrow keys code changes after forking in a WINDOW.
The up arrow returns 259, but after the fork 65.
If I run the same program on stdscr, it returns 65 already at the beginning.
Thanks for the help and sorry for the english (translated by Google).
`
#include <curses.h>
#include <sys/ioctl.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
void openVim() {
pid_t pid = fork();
if (pid < 0) {}
else if (pid == 0) {
execl("/usr/bin/vim", "/usr/bin/vim", NULL);
exit(0);
}
else {
wait(NULL);
}
}
int main() {
initscr();
noecho();
int ch = 0;
WINDOW* mainWin = newwin(10,10,0,0);
keypad(mainWin, TRUE);
while ((ch = wgetch(mainWin)) != 'q') {
wclear(mainWin);
if (ch == 'V') openVim();
else
mvwprintw(mainWin, 0, 0, "%i - %c", ch, ch);
wrefresh(mainWin);
}
delwin(mainWin);
endwin();
return 0;
}
`
I noticed that if I put a simple for loop in the fork, it doesn't happen. It probably has to do with execl?
|
[
"vim resets the terminal I/O mode; your program doesn't account for that (see reset_prog_mode and reset_shell_mode).\nThe NCURSES Programming HOWTO section Temporarily Leaving Curses mode also discusses this issue.\n"
] |
[
0
] |
[] |
[] |
[
"execl",
"ncurses"
] |
stackoverflow_0074657445_execl_ncurses.txt
|
Q:
Angular universal : Unable to prerendering dynamic routes
There is an issue with prerendering dynamic routes. In my case, product-category is a primary route with a title parameter, so my link should be product-category/:title. There are many categories under my primary route, so I need dynamic prerendering. How do I set this kind of data under prerender -> options -> routes, or how else can dynamic prerendering be achieved?
Here is my prerender code under angular.json
"prerender": {
"builder": "@nguniversal/builders:prerender",
"options": {
"routes": [
"/",
"/product-category/"
]
},
angular.json
{
"$schema": "./node_modules/@angular/cli/lib/config/schema.json",
"cli": {
"analytics": false
},
"version": 1,
"newProjectRoot": "projects",
"projects": {
"seasonsIndia": {
"projectType": "application",
"schematics": {
"@schematics/angular:component": {
"style": "scss"
},
"@schematics/angular:application": {
"strict": true
}
},
"root": "",
"sourceRoot": "src",
"prefix": "app",
"architect": {
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist/seasonsIndia/browser",
"index": "src/index.html",
"main": "src/main.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "tsconfig.app.json",
"inlineStyleLanguage": "scss",
"allowedCommonJsDependencies": [
"crypto-js"
],
"assets": [
"src/favicon.ico",
"src/assets"
],
"styles": [
"node_modules/slick-carousel/slick/slick.scss",
"node_modules/slick-carousel/slick/slick-theme.scss",
"src/styles.scss",
"src/assets/css/theme.scss"
],
"scripts": [
"node_modules/jquery/dist/jquery.min.js",
"node_modules/slick-carousel/slick/slick.min.js"
]
},
"configurations": {
"production": {
"budgets": [
{
"type": "initial",
"maximumWarning": "5mb",
"maximumError": "10mb"
},
{
"type": "anyComponentStyle",
"maximumWarning": "500kb",
"maximumError": "800kb"
}
],
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"outputHashing": "all"
},
"development": {
"buildOptimizer": false,
"optimization": false,
"vendorChunk": true,
"extractLicenses": false,
"sourceMap": true,
"namedChunks": true
}
},
"defaultConfiguration": "production"
},
"serve": {
"builder": "@angular-devkit/build-angular:dev-server",
"configurations": {
"production": {
"browserTarget": "seasonsIndia:build:production"
},
"development": {
"browserTarget": "seasonsIndia:build:development"
}
},
"defaultConfiguration": "development"
},
"extract-i18n": {
"builder": "@angular-devkit/build-angular:extract-i18n",
"options": {
"browserTarget": "seasonsIndia:build"
}
},
"test": {
"builder": "@angular-devkit/build-angular:karma",
"options": {
"main": "src/test.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "tsconfig.spec.json",
"karmaConfig": "karma.conf.js",
"inlineStyleLanguage": "scss",
"assets": [
"src/favicon.ico",
"src/assets"
],
"styles": [
"./node_modules/@angular/material/prebuilt-themes/indigo-pink.css",
"src/styles.scss"
],
"scripts": []
}
},
"server": {
"builder": "@angular-devkit/build-angular:server",
"options": {
"outputPath": "dist/seasonsIndia/server",
"main": "server.ts",
"tsConfig": "tsconfig.server.json",
"inlineStyleLanguage": "scss"
},
"configurations": {
"production": {
"outputHashing": "media",
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
]
},
"development": {
"optimization": false,
"sourceMap": true,
"extractLicenses": false
}
},
"defaultConfiguration": "production"
},
"serve-ssr": {
"builder": "@nguniversal/builders:ssr-dev-server",
"configurations": {
"development": {
"browserTarget": "seasonsIndia:build:development",
"serverTarget": "seasonsIndia:server:development"
},
"production": {
"browserTarget": "seasonsIndia:build:production",
"serverTarget": "seasonsIndia:server:production"
}
},
"defaultConfiguration": "development"
},
"prerender": {
"builder": "@nguniversal/builders:prerender",
"options": {
"routes": [
"/",
"/product-category/*"
]
},
"configurations": {
"production": {
"browserTarget": "seasonsIndia:build:production",
"serverTarget": "seasonsIndia:server:production"
},
"development": {
"browserTarget": "seasonsIndia:build:development",
"serverTarget": "seasonsIndia:server:development"
}
},
"defaultConfiguration": "production"
}
}
}
},
"defaultProject": "seasonsIndia"
}
server.ts
import 'zone.js/dist/zone-node';
import { ngExpressEngine } from '@nguniversal/express-engine';
import * as express from 'express';
import { join } from 'path';
import { AppServerModule } from './src/main.server';
import { APP_BASE_HREF } from '@angular/common';
import { existsSync } from 'fs';
import { REQUEST, RESPONSE } from '@nguniversal/express-engine/tokens';
import { NgxRequest, NgxResponse } from '@gorniv/ngx-universal';
import * as compression from 'compression';
import * as cookieparser from 'cookie-parser';
import { exit } from 'process';
import 'localstorage-polyfill';
// for debug
require('source-map-support').install();
// for tests
const test = process.env['TEST'] === 'true';
// ssr DOM
const domino = require('domino');
const fs = require('fs');
const path = require('path');
// index from browser build!
const distFolder = join(process.cwd(), 'dist/seasonsIndia/browser');
const template = existsSync(join(distFolder, 'index.original.html')) ? 'index.original.html' : 'index';
// const indexHtml = existsSync(join(distFolder, 'index.original.html')) ? 'index.original.html' : 'index';
// for mock global window by domino
const win = domino.createWindow(template);
// mock
global['window'] = win;
global['localStorage'] = localStorage;
// not implemented property and functions
Object.defineProperty(win.document.body.style, 'transform', {
value: () => {
return {
enumerable: true,
configurable: true,
};
},
});
// mock document
global['document'] = win.document;
// other mocks
// global['CSS'] = null;
// global['XMLHttpRequest'] = require('xmlhttprequest').XMLHttpRequest;
// global['Prism'] = null;
// The Express app is exported so that it can be used by serverless Functions.
export function app() {
const server = express();
const distFolder = join(process.cwd(), 'dist');
const indexHtml = existsSync(join(distFolder, 'index.original.html'))
? 'index.original.html'
: 'index';
// redirects!
const redirectowww = false;
const redirectohttps = false;
const wwwredirecto = true;
server.use((req, res, next) => {
// for domain/index.html
if (req.url === '/index.html') {
res.redirect(301, 'https://' + req.hostname);
}
// check if it is a secure (https) request
// if not redirect to the equivalent https url
if (
redirectohttps &&
req.headers['x-forwarded-proto'] !== 'https' &&
req.hostname !== 'localhost'
) {
// special for robots.txt
if (req.url === '/robots.txt') {
next();
return;
}
res.redirect(301, 'https://' + req.hostname + req.url);
}
// www or not
if (redirectowww && !req.hostname.startsWith('www.')) {
res.redirect(301, 'https://www.' + req.hostname + req.url);
}
// www or not
if (wwwredirecto && req.hostname.startsWith('www.')) {
const host = req.hostname.slice(4, req.hostname.length);
res.redirect(301, 'https://' + host + req.url);
}
// for test
if (test && req.url === '/test/exit') {
res.send('exit');
exit(0);
}
next();
});
// Our Universal express-engine (found @ https://github.com/angular/universal/tree/master/modules/express-engine)
server.engine(
'html',
ngExpressEngine({
bootstrap: AppServerModule,
}),
);
server.set('view engine', 'html');
server.set('views', distFolder);
// Example Express Rest API endpoints
// server.get('/api/**', (req, res) => { });
// Serve static files from /browser
server.get(
'*.*',
express.static(distFolder, {
maxAge: '1y',
}),
);
// All regular routes use the Universal engine
server.get('*', (req, res) => {
global['navigator'] = { userAgent: req['headers']['user-agent'] } as Navigator;
const http =
req.headers['x-forwarded-proto'] === undefined ? 'http' : req.headers['x-forwarded-proto'];
res.render(indexHtml, {
req,
providers: [
{ provide: APP_BASE_HREF, useValue: req.baseUrl },
// for http and cookies
{
provide: REQUEST,
useValue: req,
},
{
provide: RESPONSE,
useValue: res,
},
/// for cookie
{
provide: NgxRequest,
useValue: req,
},
{
provide: NgxResponse,
useValue: res,
},
// for absolute path
{
provide: 'ORIGIN_URL',
useValue: `${http}://${req.headers.host}`,
},
],
});
});
return server;
}
function run() {
const port = process.env.PORT || 4000;
// Start up the Node server
const server = app();
// gzip
server.use(compression());
  // cookies
server.use(cookieparser());
server.listen(port, () => {
console.log(`Node Express server listening on http://localhost:${port}`);
});
}
// Webpack will replace 'require' with '__webpack_require__'
// '__non_webpack_require__' is a proxy to Node 'require'
// The below code is to ensure that the server is run only when not requiring the bundle.
declare const __non_webpack_require__: NodeRequire;
const mainModule = __non_webpack_require__.main;
const moduleFilename = (mainModule && mainModule.filename) || '';
if (moduleFilename === __filename || moduleFilename.includes('iisnode')) {
run();
}
export * from './src/main.server';
A:
"routes": [
"/",
"/product-category/*"
]
There's a clear problem with that code. You're basically asking Angular to guess all possible parameters (the titles, in your case).
You'd have to be more explicit than that and add the exact routes. See the example in the docs:
ng run <app-name>:prerender --routes /product/1 /product/2
In your case:
"routes": [
"/",
"/product-category/1",
"/product-category/2",
...
]
If you have a lot of routes, you can always put them in a text file, like this:
// You can place the file wherever you want,
// I put it in the src folder, I don't know if there's a
// recommended location
"prerender": {
"builder": "@nguniversal/builders:prerender",
"options": {
"routesFile":"./src/routes.txt"
},
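If the category titles come from an API or database at build time, the routes file can be generated by a small Node script before running the prerender target (a sketch; buildRoutesFile and the sample titles are hypothetical):

```javascript
// Build the contents of routes.txt for the prerender builder from a
// list of category titles. Each title becomes /product-category/<title>.
function buildRoutesFile(titles) {
  const routes = ['/', ...titles.map(t => '/product-category/' + encodeURIComponent(t))];
  return routes.join('\n') + '\n';
}

// Example: write the file, then run `ng run <app-name>:prerender`.
// const fs = require('fs');
// fs.writeFileSync('./src/routes.txt', buildRoutesFile(titlesFromApi));
console.log(buildRoutesFile(['shirts', 'summer wear']));
```

Writing the result to ./src/routes.txt before each prerender keeps angular.json free of hundreds of hard-coded routes.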
|
Angular universal : Unable to prerendering dynamic routes
|
There is an issue with prerendering dynamic routes. In my case, product-category is a primary route, and there is a title as a parameter so my link should be product-category/:title. There are many categories under my primary route, so I need this dynamic prerendering. How to set this type of data under prerender -> options -> routes or how to dynamic prerendering achieved.
Here is my prerender code under angular.json
"prerender": {
"builder": "@nguniversal/builders:prerender",
"options": {
"routes": [
"/",
"/product-category/"
]
},
angular.json
{
"$schema": "./node_modules/@angular/cli/lib/config/schema.json",
"cli": {
"analytics": false
},
"version": 1,
"newProjectRoot": "projects",
"projects": {
"seasonsIndia": {
"projectType": "application",
"schematics": {
"@schematics/angular:component": {
"style": "scss"
},
"@schematics/angular:application": {
"strict": true
}
},
"root": "",
"sourceRoot": "src",
"prefix": "app",
"architect": {
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist/seasonsIndia/browser",
"index": "src/index.html",
"main": "src/main.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "tsconfig.app.json",
"inlineStyleLanguage": "scss",
"allowedCommonJsDependencies": [
"crypto-js"
],
"assets": [
"src/favicon.ico",
"src/assets"
],
"styles": [
"node_modules/slick-carousel/slick/slick.scss",
"node_modules/slick-carousel/slick/slick-theme.scss",
"src/styles.scss",
"src/assets/css/theme.scss"
],
"scripts": [
"node_modules/jquery/dist/jquery.min.js",
"node_modules/slick-carousel/slick/slick.min.js"
]
},
"configurations": {
"production": {
"budgets": [
{
"type": "initial",
"maximumWarning": "5mb",
"maximumError": "10mb"
},
{
"type": "anyComponentStyle",
"maximumWarning": "500kb",
"maximumError": "800kb"
}
],
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"outputHashing": "all"
},
"development": {
"buildOptimizer": false,
"optimization": false,
"vendorChunk": true,
"extractLicenses": false,
"sourceMap": true,
"namedChunks": true
}
},
"defaultConfiguration": "production"
},
"serve": {
"builder": "@angular-devkit/build-angular:dev-server",
"configurations": {
"production": {
"browserTarget": "seasonsIndia:build:production"
},
"development": {
"browserTarget": "seasonsIndia:build:development"
}
},
"defaultConfiguration": "development"
},
"extract-i18n": {
"builder": "@angular-devkit/build-angular:extract-i18n",
"options": {
"browserTarget": "seasonsIndia:build"
}
},
"test": {
"builder": "@angular-devkit/build-angular:karma",
"options": {
"main": "src/test.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "tsconfig.spec.json",
"karmaConfig": "karma.conf.js",
"inlineStyleLanguage": "scss",
"assets": [
"src/favicon.ico",
"src/assets"
],
"styles": [
"./node_modules/@angular/material/prebuilt-themes/indigo-pink.css",
"src/styles.scss"
],
"scripts": []
}
},
"server": {
"builder": "@angular-devkit/build-angular:server",
"options": {
"outputPath": "dist/seasonsIndia/server",
"main": "server.ts",
"tsConfig": "tsconfig.server.json",
"inlineStyleLanguage": "scss"
},
"configurations": {
"production": {
"outputHashing": "media",
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
]
},
"development": {
"optimization": false,
"sourceMap": true,
"extractLicenses": false
}
},
"defaultConfiguration": "production"
},
"serve-ssr": {
"builder": "@nguniversal/builders:ssr-dev-server",
"configurations": {
"development": {
"browserTarget": "seasonsIndia:build:development",
"serverTarget": "seasonsIndia:server:development"
},
"production": {
"browserTarget": "seasonsIndia:build:production",
"serverTarget": "seasonsIndia:server:production"
}
},
"defaultConfiguration": "development"
},
"prerender": {
"builder": "@nguniversal/builders:prerender",
"options": {
"routes": [
"/",
"/product-category/*"
]
},
"configurations": {
"production": {
"browserTarget": "seasonsIndia:build:production",
"serverTarget": "seasonsIndia:server:production"
},
"development": {
"browserTarget": "seasonsIndia:build:development",
"serverTarget": "seasonsIndia:server:development"
}
},
"defaultConfiguration": "production"
}
}
}
},
"defaultProject": "seasonsIndia"
}
server.ts
import 'zone.js/dist/zone-node';
import { ngExpressEngine } from '@nguniversal/express-engine';
import * as express from 'express';
import { join } from 'path';
import { AppServerModule } from './src/main.server';
import { APP_BASE_HREF } from '@angular/common';
import { existsSync } from 'fs';
import { REQUEST, RESPONSE } from '@nguniversal/express-engine/tokens';
import { NgxRequest, NgxResponse } from '@gorniv/ngx-universal';
import * as compression from 'compression';
import * as cookieparser from 'cookie-parser';
import { exit } from 'process';
import 'localstorage-polyfill';
// for debug
require('source-map-support').install();
// for tests
const test = process.env['TEST'] === 'true';
// ssr DOM
const domino = require('domino');
const fs = require('fs');
const path = require('path');
// index from browser build!
const distFolder = join(process.cwd(), 'dist/seasonsIndia/browser');
const template = existsSync(join(distFolder, 'index.original.html')) ? 'index.original.html' : 'index';
// const indexHtml = existsSync(join(distFolder, 'index.original.html')) ? 'index.original.html' : 'index';
// for mock global window by domino
const win = domino.createWindow(template);
// mock
global['window'] = win;
global['localStorage'] = localStorage;
// not implemented property and functions
Object.defineProperty(win.document.body.style, 'transform', {
value: () => {
return {
enumerable: true,
configurable: true,
};
},
});
// mock documnet
global['document'] = win.document;
// othres mock
// global['CSS'] = null;
// global['XMLHttpRequest'] = require('xmlhttprequest').XMLHttpRequest;
// global['Prism'] = null;
// The Express app is exported so that it can be used by serverless Functions.
export function app() {
const server = express();
const distFolder = join(process.cwd(), 'dist');
const indexHtml = existsSync(join(distFolder, 'index.original.html'))
? 'index.original.html'
: 'index';
// redirects!
const redirectowww = false;
const redirectohttps = false;
const wwwredirecto = true;
server.use((req, res, next) => {
// for domain/index.html
if (req.url === '/index.html') {
res.redirect(301, 'https://' + req.hostname);
}
// check if it is a secure (https) request
// if not redirect to the equivalent https url
if (
redirectohttps &&
req.headers['x-forwarded-proto'] !== 'https' &&
req.hostname !== 'localhost'
) {
// special for robots.txt
if (req.url === '/robots.txt') {
next();
return;
}
res.redirect(301, 'https://' + req.hostname + req.url);
}
// www or not
if (redirectowww && !req.hostname.startsWith('www.')) {
res.redirect(301, 'https://www.' + req.hostname + req.url);
}
// www or not
if (wwwredirecto && req.hostname.startsWith('www.')) {
const host = req.hostname.slice(4, req.hostname.length);
res.redirect(301, 'https://' + host + req.url);
}
// for test
if (test && req.url === '/test/exit') {
res.send('exit');
exit(0);
}
next();
});
// Our Universal express-engine (found @ https://github.com/angular/universal/tree/master/modules/express-engine)
server.engine(
'html',
ngExpressEngine({
bootstrap: AppServerModule,
}),
);
server.set('view engine', 'html');
server.set('views', distFolder);
// Example Express Rest API endpoints
// server.get('/api/**', (req, res) => { });
// Serve static files from /browser
server.get(
'*.*',
express.static(distFolder, {
maxAge: '1y',
}),
);
// All regular routes use the Universal engine
server.get('*', (req, res) => {
global['navigator'] = { userAgent: req['headers']['user-agent'] } as Navigator;
const http =
req.headers['x-forwarded-proto'] === undefined ? 'http' : req.headers['x-forwarded-proto'];
res.render(indexHtml, {
req,
providers: [
{ provide: APP_BASE_HREF, useValue: req.baseUrl },
// for http and cookies
{
provide: REQUEST,
useValue: req,
},
{
provide: RESPONSE,
useValue: res,
},
/// for cookie
{
provide: NgxRequest,
useValue: req,
},
{
provide: NgxResponse,
useValue: res,
},
// for absolute path
{
provide: 'ORIGIN_URL',
useValue: `${http}://${req.headers.host}`,
},
],
});
});
return server;
}
function run() {
const port = process.env.PORT || 4000;
// Start up the Node server
const server = app();
// gzip
server.use(compression());
// cookies
server.use(cookieparser());
server.listen(port, () => {
console.log(`Node Express server listening on http://localhost:${port}`);
});
}
// Webpack will replace 'require' with '__webpack_require__'
// '__non_webpack_require__' is a proxy to Node 'require'
// The below code is to ensure that the server is run only when not requiring the bundle.
declare const __non_webpack_require__: NodeRequire;
const mainModule = __non_webpack_require__.main;
const moduleFilename = (mainModule && mainModule.filename) || '';
if (moduleFilename === __filename || moduleFilename.includes('iisnode')) {
run();
}
export * from './src/main.server';
|
[
" \"routes\": [\n \"/\",\n \"/product-category/*\"\n ]\n\nThere's a clear problem with that code. You're basically asking Angular to guess all possible parameters, titles in your case.\nYou'd have to be more explicit than that and add the exact routes. See the example in the docs\nng run <app-name>:prerender --routes /product/1 /product/2\n\nIn your case:\n \"routes\": [\n \"/\",\n \"/product-category/1\",\n \"/product-category/2\",\n ...\n ]\n\nIf you have a lot of routes, you can always add them in a text file, like this:\n // You can place the file wherever you want, \n // I put it in the src folder, I don't know if there's a \n // recommended location\n\n \"prerender\": {\n \"builder\": \"@nguniversal/builders:prerender\",\n \"options\": {\n \"routesFile\":\"./src/routes.txt\"\n },\n\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"angular12",
"angular_seo",
"angular_universal"
] |
stackoverflow_0072994857_angular_angular12_angular_seo_angular_universal.txt
|
Q:
How can I make it so you input a "Worker code" and get the worker details same as when you "print(Worker_36.details)"? - Python
I am new to python and just playing around please help!
Worker_31 = Worker('David', 'Williamson',31 , 92500, 5, 37)
Worker_32 = Worker('Frank', 'Murphy',32 , 58500, 6, 27)
Worker_33 = Worker('Josephine', 'Dover',33 , 69500, 2, 30)
Worker_34 = Worker('Chester', 'Cohen',34 , 88500, 3, 52)
Worker_35 = Worker('Saba', "Brenland",35 , 96500, 4, 35)
Worker_36 = Worker('Tommy-Lee', 'Briggs',36 , 98500, 3, 57)
Worker_37 = Worker('Li', 'Hu-Tao',37 , 55000, 3, 22)
Worker_38 = Worker('Qin', 'Shi-Huang',38 ,14 ,1500000 , 34)
Worker_39 = Worker('Maximillian', 'Mendoza',39 , 200000, 13, 33)
Worker_40 = Worker('Sarah', 'Patel',40 , 86500 , 8, 29)
Worker_41 = Worker('Sumaiya', 'Johns',41 ,77900 , 10, 32)
So you see I was able to make the workers
def details(self):
return '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' .format("Your worker ", self.fullname(), " (Worker number ", self.number, ") ", "is paid a yearly salary of ", "$"+str(self.pay), " Dollars. ", "This worker has been with your company for ", self.time_with, " years, unfortunately however, they are due to retire in ", self.time_left, " years (aged 65).", " In the mean time however you can contact this worker with the email address ", self.email)
And able to print(Worker_36.details) for example and it works...
print("You have 50 workers; which worker would you like to check the details of?")
Worker_Number_Check = input("Please input their worker number ")
If the user inputs 36, for example, I want it to return the equivalent of "print(Worker_36.details)".
I don't want to have to write a long if/elif chain for every single possible input number matching a worker's "Worker number". Please help?
A:
Instead of declaring many separate Worker variables, make a list of them:
workers = [
Worker(...),
Worker(...),
Worker(...),
Worker(...),
]
And then you can refer to workers[36].
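Since the worker numbers here start at 31 rather than 0, a dictionary keyed by worker number maps the user's input to the right worker directly. A minimal sketch of the idea (the `Worker` class below is a simplified stand-in for the asker's class, with only a few of its fields):

```python
# A dictionary keyed by worker number avoids a long if/elif chain
# and works even when the numbers don't start at 0.
class Worker:
    """Simplified stand-in for the question's Worker class."""

    def __init__(self, first, last, number, pay):
        self.first, self.last, self.number, self.pay = first, last, number, pay

    def details(self):
        return f"{self.first} {self.last} (Worker number {self.number}) earns ${self.pay}."


# Build the lookup table once, keyed by each worker's own number.
workers = {
    w.number: w
    for w in [
        Worker("David", "Williamson", 31, 92500),
        Worker("Tommy-Lee", "Briggs", 36, 98500),
    ]
}

# In the real program this would be: int(input("Please input their worker number "))
number = int("36")
print(workers[number].details())
```

Looking up `workers[number]` then gives the same object you would have reached via the `Worker_36` variable, with no branching per worker.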
|
How can I make it so you input a "Worker code" and get the worker details same as when you "print(Worker_36.details)"? - Python
|
I am new to python and just playing around please help!
Worker_31 = Worker('David', 'Williamson',31 , 92500, 5, 37)
Worker_32 = Worker('Frank', 'Murphy',32 , 58500, 6, 27)
Worker_33 = Worker('Josephine', 'Dover',33 , 69500, 2, 30)
Worker_34 = Worker('Chester', 'Cohen',34 , 88500, 3, 52)
Worker_35 = Worker('Saba', "Brenland",35 , 96500, 4, 35)
Worker_36 = Worker('Tommy-Lee', 'Briggs',36 , 98500, 3, 57)
Worker_37 = Worker('Li', 'Hu-Tao',37 , 55000, 3, 22)
Worker_38 = Worker('Qin', 'Shi-Huang',38 ,14 ,1500000 , 34)
Worker_39 = Worker('Maximillian', 'Mendoza',39 , 200000, 13, 33)
Worker_40 = Worker('Sarah', 'Patel',40 , 86500 , 8, 29)
Worker_41 = Worker('Sumaiya', 'Johns',41 ,77900 , 10, 32)
So you see I was able to make the workers
def details(self):
return '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' '{}' .format("Your worker ", self.fullname(), " (Worker number ", self.number, ") ", "is paid a yearly salary of ", "$"+str(self.pay), " Dollars. ", "This worker has been with your company for ", self.time_with, " years, unfortunately however, they are due to retire in ", self.time_left, " years (aged 65).", " In the mean time however you can contact this worker with the email address ", self.email)
And able to print(Worker_36.details) for example and it works...
print("You have 50 workers; which worker would you like to check the details of?")
Worker_Number_Check = input("Please input their worker number ")
If the user inputs 36, for example, I want it to return the equivalent of "print(Worker_36.details)".
I don't want to have to write a long if/elif chain for every single possible input number matching a worker's "Worker number". Please help?
|
[
"Instead of declaring many separate Worker variables, make a list of them:\nworkers = [\n Worker(...),\n Worker(...),\n Worker(...),\n Worker(...),\n]\n\nAnd then you can refer to workers[36].\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074663319_python.txt
|
Q:
switch tabs in playwright test
I'm trying to switch between tabs using playwright tests
but it's not taking control of windows element.
Do we have any method similar to selenium driver.switchto().window() in playwright?
const { chromium } = require('playwright');
(async () => {
const browser = await chromium.launch({ headless: false, args: ['--start-maximized'] });
const context = await browser.newContext({ viewport: null });
context.on("page", async newPage => {
console.log("***newPage***", await newPage.title())
})
const page = await context.newPage()
const navigationPromise = page.waitForNavigation()
// dummy url
await page.goto('https://www.myapp.com/')
await navigationPromise
// User login
await page.waitForSelector('#username-in')
await page.fill('#username-in', 'username')
await page.fill('#password-in', 'password')
await page.click('//button[contains(text(),"Sign In")]')
await navigationPromise
// User lands in application home page and clicks on link in dashboard
// link will open another application in new tab
await page.click('(//span[text()="launch-app-from-dashboard"])[2]')
await navigationPromise
await page.context()
// Waiting for element to appear in new tab and click on ok button
await page.waitForTimeout(6000)
await page.waitForSelector('//bdi[text()="OK"]')
await page.click('//bdi[text()="OK"]')
})()
A:
Assuming "launch-app-from-dashboard" is creating a new page tag, you can use the following pattern to run the subsequent lines of code on the new page. See multi-page scenarios doc for more examples.
// Get page after a specific action (e.g. clicking a link)
const [newPage] = await Promise.all([
context.waitForEvent('page'),
page.click('a[target="_blank"]') // Opens a new tab
])
await newPage.waitForLoadState();
console.log(await newPage.title());
Since you run headless, it might also be useful to switch the visible tab in the browser with page.bringToFront (docs).
A:
The browserContext?.pages() array contains the tabs opened by your application; from there you can use a temporary page variable to make the switch, and once your validations are complete you can switch back.
playwright.pageMain: Page = await playwright.Context.newPage();
playwright.pageTemp: Page;
// Save your current page to Temp
playwright.pageTemp = playwright.pageMain;
// Make the new tab launched your main page
playwright.pageMain = playwright.browserContext?.pages()[1];
expect(await playwright.pageMain.title()).toBe('Tab Title');
A:
Assume you only created one page (via browser context), but for some reason, new pages/tabs open.
You can have a list of all the pages by : context.pages,
Now each element of that list represents a <class 'playwright.async_api._generated.Page'> object.
So, now you can assign each page to any variable and access it. (For eg. page2 = context.pages[1])
A:
To switch between tabs in Playwright, you can use the context's pages() method to get a list of the open pages and then call bringToFront on the one you want (note that there is no context.page() method, and pages() is synchronous). Here is an example of how you might do this:
// get a list of pages in the current context
const pages = context.pages();
// switch to the second page in the list (assuming it is the one you want to switch to)
await pages[1].bringToFront();
You can then use the Playwright API to interact with that page as you would with any other page. For example, you can use the click and waitForSelector functions on pages[1] to click on elements and wait for them to appear on the page.
|
switch tabs in playwright test
|
I'm trying to switch between tabs using playwright tests
but it's not taking control of windows element.
Do we have any method similar to selenium driver.switchto().window() in playwright?
const { chromium } = require('playwright');
(async () => {
const browser = await chromium.launch({ headless: false, args: ['--start-maximized'] });
const context = await browser.newContext({ viewport: null });
context.on("page", async newPage => {
console.log("***newPage***", await newPage.title())
})
const page = await context.newPage()
const navigationPromise = page.waitForNavigation()
// dummy url
await page.goto('https://www.myapp.com/')
await navigationPromise
// User login
await page.waitForSelector('#username-in')
await page.fill('#username-in', 'username')
await page.fill('#password-in', 'password')
await page.click('//button[contains(text(),"Sign In")]')
await navigationPromise
// User lands in application home page and clicks on link in dashboard
// link will open another application in new tab
await page.click('(//span[text()="launch-app-from-dashboard"])[2]')
await navigationPromise
await page.context()
// Waiting for element to appear in new tab and click on ok button
await page.waitForTimeout(6000)
await page.waitForSelector('//bdi[text()="OK"]')
await page.click('//bdi[text()="OK"]')
})()
|
[
"Assuming \"launch-app-from-dashboard\" is creating a new page tag, you can use the following pattern to run the subsequent lines of code on the new page. See multi-page scenarios doc for more examples.\n// Get page after a specific action (e.g. clicking a link)\nconst [newPage] = await Promise.all([\n context.waitForEvent('page'),\n page.click('a[target=\"_blank\"]') // Opens a new tab\n])\nawait newPage.waitForLoadState();\nconsole.log(await newPage.title());\n\nSince you run headless, it might also be useful to switch the visible tab in the browser with page.bringToFront (docs).\n",
"The browserContext?.pages() is an array that contains the tabs opened by your application, from there you can use a temporal page to make a switch, once completed your validations you can switch back.\nplaywright.pageMain: Page = await playwright.Context.newPage();\nplaywright.pageTemp: Page; \n\n// Save your current page to Temp \nplaywright.pageTemp = playwright.pageMain;\n\n// Make the new tab launched your main page\nplaywright.pageMain = playwright.browserContext?.pages()[1];\nexpect(await playwright.pageMain.title()).toBe('Tab Title');\n\n",
"Assume you only created one page (via browser context), but for some reason, new pages/tabs open.\nYou can have a list of all the pages by : context.pages,\nNow each element of that list represents a <class 'playwright.async_api._generated.Page'> object.\nSo, now you can assign each page to any variable and access it. (For eg. page2 = context.pages[1])\n",
"To switch between tabs in Playwright, you can use the pages function to get a list of pages in the current context and the page function to switch to a specific page. Here is an example of how you might do this:\n// get a list of pages in the current context\nconst pages = await context.pages();\n \n// switch to the second page in the list (assuming it is the one you want to switch to)\nawait context.page(pages[1]);\n\nYou can then use the Playwright API to interact with the new page as you would with any other page. For example, you can use the click and waitForSelector functions to click on elements and wait for them to appear on the page.\n"
] |
[
6,
0,
0,
0
] |
[
"it('Open a new tab and check the title', async function () {\n await page.click(button, { button: \"middle\" }); //to open an another tab\n await page.waitForTimeout(); // wait for page loading \n let pages = await context.pages();\n expect(await pages[1].title()).equal('Title'); /to compare the title of the second page\n})\n\n"
] |
[
-1
] |
[
"javascript",
"playwright"
] |
stackoverflow_0064348468_javascript_playwright.txt
|
Q:
Disable input conditionally (Vue.js)
I have an input:
<input
type="text"
id="name"
class="form-control"
name="name"
v-model="form.name"
:disabled="validated ? '' : disabled"
/>
and in my Vue.js component, I have:
..
..
ready() {
this.form.name = this.store.name;
this.form.validated = this.store.validated;
},
..
validated being a boolean, it can be either 0 or 1, but no matter what value is stored in the database, my input is always disabled.
I need the input to be disabled if false, otherwise it should be enabled and editable.
Update:
Doing this always enables the input (no matter I have 0 or 1 in the database):
<input
type="text"
id="name"
class="form-control"
name="name"
v-model="form.name"
:disabled="validated ? '' : disabled"
/>
Doing this always disabled the input (no matter I have 0 or 1 in the database):
<input
type="text"
id="name"
class="form-control"
name="name"
v-model="form.name"
:disabled="validated ? disabled : ''"
/>
A:
To remove the disabled prop, you should set its value to false. This needs to be the boolean value for false, not the string 'false'.
So, if the value for validated is either a 1 or a 0, then conditionally set the disabled prop based off that value. E.g.:
<input type="text" :disabled="validated == 1">
Here is an example.
var app = new Vue({
el: '#app',
data: {
disabled: 0
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script>
<div id="app">
<button @click="disabled = (disabled + 1) % 2">Toggle Enable</button>
<input type="text" :disabled="disabled == 1">
<pre>{{ $data }}</pre>
</div>
A:
you could have a computed property that returns a boolean dependent on whatever criteria you need.
<input type="text" :disabled=isDisabled>
then put your logic in a computed property...
computed: {
isDisabled() {
// evaluate whatever you need to determine disabled here...
return this.form.validated;
}
}
A:
Not difficult, check this.
<button @click="disabled = !disabled">Toggle Enable</button>
<input type="text" id="name" class="form-control" name="name" v-model="form.name" :disabled="disabled">
jsfiddle
A:
You can manipulate :disabled attribute in vue.js.
It will accept a boolean, if it's true, then the input gets disabled, otherwise it will be enabled...
Something like structured like below in your case for example:
<input type="text" id="name" class="form-control" name="name" v-model="form.name" :disabled="validated ? false : true">
Also read this below:
Conditionally Disabling Input Elements via JavaScript
Expression You can conditionally disable input elements inline
with a JavaScript expression. This compact approach provides a quick
way to apply simple conditional logic. For example, if you only needed
to check the length of the password, you may consider doing something
like this.
<h3>Change Your Password</h3>
<div class="form-group">
<label for="newPassword">Please choose a new password</label>
<input type="password" class="form-control" id="newPassword" placeholder="Password" v-model="newPassword">
</div>
<div class="form-group">
<label for="confirmPassword">Please confirm your new password</label>
<input type="password" class="form-control" id="confirmPassword" placeholder="Password" v-model="confirmPassword" v-bind:disabled="newPassword.length === 0 ? true : false">
</div>
A:
Your disabled attribute requires a boolean value:
<input :disabled="validated" />
Notice how I've only checked validated - this should work because 0 is falsy, e.g.
0 is considered to be false in JS (like undefined or null)
1 is in fact considered to be true
To be extra careful, try:
<input :disabled="!!validated" />
This double negation turns the falsy or truthy value of 0 or 1 into false or true.
Don't believe me? Go into your console and type !!0 or !!1 :-)
Also, to make sure your number 1 or 0 is definitely coming through as a Number and not the String '1' or '0', prepend the value you are checking with a +, e.g. <input :disabled="!!+validated"/>; this turns the string form of a number into a Number, e.g.
+1 = 1
+'1' = 1
Like David Morrow said above you could put your conditional logic into a method - this gives you more readable code - just return out of the method the condition you wish to check.
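The coercions this answer relies on can be checked in plain JavaScript, no Vue needed; a small sketch:

```javascript
// Double negation (!!) turns any truthy/falsy value into a real boolean,
// and a unary + first turns a numeric string into a Number.
const toBool = (v) => !!+v;

console.log(!!0);         // false: 0 is falsy
console.log(!!1);         // true:  1 is truthy
console.log(toBool("0")); // false: +"0" === 0, and !!0 === false
console.log(toBool("1")); // true:  +"1" === 1, and !!1 === true
```

Binding `:disabled="!!+validated"` therefore disables the input for `0` or `'0'` and enables it for `1` or `'1'`, whichever form the backend happens to send.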
A:
You may make a computed property and enable/disable any form type according to its value.
<template>
<button class="btn btn-default" :disabled="clickable">Click me</button>
</template>
<script>
export default{
computed: {
clickable() {
// if something
return true;
}
}
}
</script>
A:
Try this
<div id="app">
<p>
<label for='terms'>
<input id='terms' type='checkbox' v-model='terms' /> Click me to enable
</label>
</p>
<input :disabled='isDisabled'></input>
</div>
vue js
new Vue({
el: '#app',
data: {
terms: false
},
computed: {
isDisabled: function(){
return !this.terms;
}
}
})
A:
To toggle the input's disabled attribute was surprisingly complex. The issue for me was twofold:
(1) Remember: the input's "disabled" attribute is NOT a Boolean attribute.
The mere presence of the attribute means that the input is disabled.
However, the Vue.js creators have prepared this...
https://v2.vuejs.org/v2/guide/syntax.html#Attributes
(Thanks to @connexo for this... How to add disabled attribute in input text in vuejs?)
(2) In addition, there was a DOM timing re-rendering issue that I was having. The DOM was not updating when I tried to toggle back.
Upon certain situations, "the component will not re-render immediately. It will update in the next 'tick.'"
From Vue.js docs: https://v2.vuejs.org/v2/guide/reactivity.html
The solution was to use:
this.$nextTick(()=>{
this.disableInputBool = true
})
Fuller example workflow:
<div @click="allowInputOverwrite">
<input
type="text"
:disabled="disableInputBool">
</div>
<button @click="disallowInputOverwrite">
press me (do stuff in method, then disable input bool again)
</button>
<script>
export default {
data() {
return {
disableInputBool: true
}
},
methods: {
allowInputOverwrite(){
this.disableInputBool = false
},
disallowInputOverwrite(){
// accomplish other stuff here.
this.$nextTick(()=>{
this.disableInputBool = true
})
}
}
}
</script>
A:
You can use this to add a condition:
<el-form-item :label="Amount ($)" style="width:100%" >
<template slot-scope="scoped">
<el-input-number v-model="listQuery.refAmount" :disabled="(rowData.status !== 1 ) === true" ></el-input-number>
</template>
</el-form-item>
A:
If you use SFC and want a minimal example for this case, this would be how you can use it:
export default {
data() {
return {
disableInput: false
}
},
methods: {
toggleInput() {
this.disableInput = !this.disableInput
}
}
}
<template>
<div>
<input type="text" :disabled="disableInput">
<button @click="toggleInput">Toggle Input</button>
</div>
</template>
Clicking the button triggers the toggleInput function and simply switches the state of disableInput with this.disableInput = !this.disableInput.
A:
This will also work
<input type="text" id="name" class="form-control" name="name" v-model="form.name" :disabled="!validated">
A:
My Solution:
// App.vue Template:
<button
type="submit"
class="disabled:opacity-50 w-full px-3 py-4 text-white bg-indigo-500 rounded-md focus:bg-indigo-600 focus:outline-none"
:disabled="isButtonDisabled()"
@click="sendIdentity()"
>
<span v-if="MYVARIABLE > 0"> Add {{ MYVARIABLE }}</span>
<span v-else-if="MYVARIABLE == 0">Alternative text if you like</span>
<span v-else>Alternative text if you like</span>
</button>
Styles based on Tailwind
// App.vue Script:
(...)
methods: {
isButtonDisabled(){
return this.MYVARIABLE >= 0 ? undefined: 'disabled';
}
}
Manual:
vue v2
vue v3
If isButtonDisabled has the value of null, undefined, or false, the
disabled attribute will not even be included in the rendered
element.
A:
Bear in mind that ES6 Sets/Maps don't appear to be reactive as far as i can tell, at time of writing.
A:
We can disable inputs conditionally with Vue 3 by setting the disabled prop to the condition when we want to disable the input
For instance, we can write:
<template>
<input :disabled="disabled" />
<button @click="disabled = !disabled">toggle disable</button>
</template>
<script>
export default {
name: "App",
data() {
return {
disabled: false,
};
},
};
</script>
A:
There is a newly released attribute called inert, which makes the browser ignore the element entirely.
<template>
<input
type="text"
id="name"
class="form-control"
name="name"
:inert="isItInert"
/>
</template>
<script setup>
const isItInert = true
</script>
Here is the playground for testing purposes.
|
Disable input conditionally (Vue.js)
|
I have an input:
<input
type="text"
id="name"
class="form-control"
name="name"
v-model="form.name"
:disabled="validated ? '' : disabled"
/>
and in my Vue.js component, I have:
..
..
ready() {
this.form.name = this.store.name;
this.form.validated = this.store.validated;
},
..
validated being a boolean, it can be either 0 or 1, but no matter what value is stored in the database, my input is always disabled.
I need the input to be disabled if false, otherwise it should be enabled and editable.
Update:
Doing this always enables the input (no matter I have 0 or 1 in the database):
<input
type="text"
id="name"
class="form-control"
name="name"
v-model="form.name"
:disabled="validated ? '' : disabled"
/>
Doing this always disabled the input (no matter I have 0 or 1 in the database):
<input
type="text"
id="name"
class="form-control"
name="name"
v-model="form.name"
:disabled="validated ? disabled : ''"
/>
|
[
"To remove the disabled prop, you should set its value to false. This needs to be the boolean value for false, not the string 'false'.\nSo, if the value for validated is either a 1 or a 0, then conditionally set the disabled prop based off that value. E.g.:\n<input type=\"text\" :disabled=\"validated == 1\">\n\nHere is an example.\n\n\nvar app = new Vue({\n el: '#app',\n\n data: {\n disabled: 0\n }\n}); \n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js\"></script>\n<div id=\"app\">\n <button @click=\"disabled = (disabled + 1) % 2\">Toggle Enable</button>\n <input type=\"text\" :disabled=\"disabled == 1\">\n \n <pre>{{ $data }}</pre>\n</div>\n\n\n\n",
"you could have a computed property that returns a boolean dependent on whatever criteria you need.\n<input type=\"text\" :disabled=isDisabled>\n\nthen put your logic in a computed property...\ncomputed: {\n isDisabled() {\n // evaluate whatever you need to determine disabled here...\n return this.form.validated;\n }\n}\n\n",
"Not difficult, check this.\n<button @click=\"disabled = !disabled\">Toggle Enable</button>\n<input type=\"text\" id=\"name\" class=\"form-control\" name=\"name\" v-model=\"form.name\" :disabled=\"disabled\">\n\njsfiddle\n",
"You can manipulate :disabled attribute in vue.js.\nIt will accept a boolean, if it's true, then the input gets disabled, otherwise it will be enabled...\nSomething like structured like below in your case for example:\n<input type=\"text\" id=\"name\" class=\"form-control\" name=\"name\" v-model=\"form.name\" :disabled=\"validated ? false : true\">\n\nAlso read this below:\n\nConditionally Disabling Input Elements via JavaScript\n Expression You can conditionally disable input elements inline\n with a JavaScript expression. This compact approach provides a quick\n way to apply simple conditional logic. For example, if you only needed\n to check the length of the password, you may consider doing something\n like this.\n\n<h3>Change Your Password</h3>\n<div class=\"form-group\">\n <label for=\"newPassword\">Please choose a new password</label>\n <input type=\"password\" class=\"form-control\" id=\"newPassword\" placeholder=\"Password\" v-model=\"newPassword\">\n</div>\n\n<div class=\"form-group\">\n <label for=\"confirmPassword\">Please confirm your new password</label>\n <input type=\"password\" class=\"form-control\" id=\"confirmPassword\" placeholder=\"Password\" v-model=\"confirmPassword\" v-bind:disabled=\"newPassword.length === 0 ? true : false\">\n</div>\n\n",
"Your disabled attribute requires a boolean value:\n<input :disabled=\"validated\" />\nNotice how i've only checked if validated - This should work as 0 is falsey ...e.g\n0 is considered to be false in JS (like undefined or null)\n1 is in fact considered to be true\nTo be extra careful try:\n<input :disabled=\"!!validated\" />\nThis double negation turns the falsey or truthy value of 0 or 1 to false or true\ndon't believe me? go into your console and type !!0 or !!1 :-)\nAlso, to make sure your number 1 or 0 are definitely coming through as a Number and not the String '1' or '0' pre-pend the value you are checking with a + e.g <input :disabled=\"!!+validated\"/> this turns a string of a number into a Number e.g\n+1 = 1\n+'1' = 1\n\nLike David Morrow said above you could put your conditional logic into a method - this gives you more readable code - just return out of the method the condition you wish to check. \n",
"You may make a computed property and enable/disable any form type according to its value.\n<template>\n <button class=\"btn btn-default\" :disabled=\"clickable\">Click me</button>\n</template>\n<script>\n export default{\n computed: {\n clickable() {\n // if something\n return true;\n }\n }\n }\n</script>\n\n",
"Try this\n <div id=\"app\">\n <p>\n <label for='terms'>\n <input id='terms' type='checkbox' v-model='terms' /> Click me to enable\n </label>\n </p>\n <input :disabled='isDisabled'></input>\n</div>\n\nvue js\nnew Vue({\n el: '#app',\n data: {\n terms: false\n },\n computed: {\n isDisabled: function(){\n return !this.terms;\n }\n }\n})\n\n",
"To toggle the input's disabled attribute was surprisingly complex. The issue for me was twofold:\n(1) Remember: the input's \"disabled\" attribute is NOT a Boolean attribute.\nThe mere presence of the attribute means that the input is disabled.\nHowever, the Vue.js creators have prepared this...\nhttps://v2.vuejs.org/v2/guide/syntax.html#Attributes\n(Thanks to @connexo for this... How to add disabled attribute in input text in vuejs?)\n(2) In addition, there was a DOM timing re-rendering issue that I was having. The DOM was not updating when I tried to toggle back.\nUpon certain situations, \"the component will not re-render immediately. It will update in the next 'tick.'\"\nFrom Vue.js docs: https://v2.vuejs.org/v2/guide/reactivity.html\nThe solution was to use:\nthis.$nextTick(()=>{\n this.disableInputBool = true\n})\n\nFuller example workflow:\n<div @click=\"allowInputOverwrite\">\n <input\n type=\"text\"\n :disabled=\"disableInputBool\">\n</div>\n\n<button @click=\"disallowInputOverwrite\">\n press me (do stuff in method, then disable input bool again)\n</button>\n\n<script>\n\nexport default {\n data() {\n return {\n disableInputBool: true\n }\n },\n methods: {\n allowInputOverwrite(){\n this.disableInputBool = false\n },\n disallowInputOverwrite(){\n // accomplish other stuff here.\n this.$nextTick(()=>{\n this.disableInputBool = true\n })\n }\n }\n\n}\n</script>\n\n",
"Can use this add condition.\n <el-form-item :label=\"Amount ($)\" style=\"width:100%\" >\n <template slot-scope=\"scoped\">\n <el-input-number v-model=\"listQuery.refAmount\" :disabled=\"(rowData.status !== 1 ) === true\" ></el-input-number>\n </template>\n </el-form-item>\n\n",
"If you use SFC and want a minimal example for this case, this would be how you can use it:\n\n\nexport default {\n data() {\nreturn {\n disableInput: false\n}\n },\n methods: {\ntoggleInput() {\n this.disableInput = !this.disableInput\n}\n }\n}\n<template>\n <div>\n<input type=\"text\" :disabled=\"disableInput\">\n<button @click=\"toggleInput\">Toggle Input</button>\n </div>\n</template>\n\n\n\nClicking the button triggers the toggleInput function and simply switches the state of disableInput with this.disableInput = !this.disableInput.\n",
"This will also work\n<input type=\"text\" id=\"name\" class=\"form-control\" name=\"name\" v-model=\"form.name\" :disabled=\"!validated\">\n\n",
"My Solution:\n// App.vue Template:\n<button\n type=\"submit\"\n class=\"disabled:opacity-50 w-full px-3 py-4 text-white bg-indigo-500 rounded-md focus:bg-indigo-600 focus:outline-none\"\n :disabled=\"isButtonDisabled()\"\n @click=\"sendIdentity()\"\n >\n <span v-if=\"MYVARIABLE > 0\"> Add {{ MYVARIABLE }}</span>\n <span v-else-if=\"MYVARIABLE == 0\">Alternative text if you like</span>\n <span v-else>Alternative text if you like</span>\n </button>\n\nStyles based on Tailwind\n// App.vue Script:\n (...)\n methods: {\n isButtonDisabled(){\n return this.MYVARIABLE >= 0 ? undefined: 'disabled';\n }\n }\n\nManual:\nvue v2\nvue v3\n\nIf isButtonDisabled has the value of null, undefined, or false, the\ndisabled attribute will not even be included in the rendered \nelement.\n\n",
"Bear in mind that ES6 Sets/Maps don't appear to be reactive as far as i can tell, at time of writing.\n",
"We can disable inputs conditionally with Vue 3 by setting the disabled prop to the condition when we want to disable the input\nFor instance, we can write:\n<template>\n <input :disabled=\"disabled\" />\n <button @click=\"disabled = !disabled\">toggle disable</button>\n</template>\n\n\n<script>\nexport default {\n name: \"App\",\n data() {\n return {\n disabled: false,\n };\n },\n};\n</script>\n\n",
"There is something newly released called inert, which is literally making it ignored by the browser.\n<template>\n <input \n type=\"text\" \n id=\"name\" \n class=\"form-control\" \n name=\"name\" \n :inert=\"isItInert\"\n />\n</template>\n\n<script setup>\nconst isItInert = true\n</script>\n\nHere is the playground for testing purposes.\n"
] |
[
676,
88,
28,
18,
17,
9,
8,
7,
3,
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"forms",
"html",
"input",
"javascript",
"vue.js"
] |
stackoverflow_0038085180_forms_html_input_javascript_vue.js.txt
|
Q:
ORACLE SQL = statement error for GROUP BY for whole columns
I want to see all columns in the table (via select *) together with new computed columns (using as), so that I can analyse those columns without needing to copy and paste into an Excel spreadsheet.
How can I write the 'group by' without typing each column (there are more than 10 columns)? I usually get an error when I use this query;
select a.*,
       min(order_date) over(partition by customer_id) as min_order,
       max(order_date) over(partition by customer_id) as max_order,
       max(create_date) over(partition by customer_id) as max_create,
       sum(count(*)) over (partition by customer_id, create_date order by create_date desc) count
from purchase a
group by a.customer_id
statement error shows this;
ORA-00937: not a single-group group function ORA-02063: preceding line
from OMSRPT_OMS_OBJECTS
00937. 00000 - "not a single-group group function"
*Cause:
*Action:
Thank you for your help
A:
It looks like you are trying to combine window functions (min, max, and sum with the over clause) with a group by clause. That combination raises ORA-00937 here: once a query has a group by, every item in the select list must be either a grouping column or an aggregate, and a.* with group by a.customer_id violates that rule.
The good news is that the window functions make the group by unnecessary: each over (partition by ...) already computes its aggregate per customer while keeping every row. So you can simply drop the group by, and replace the nested sum(count(*)), which only works together with group by, with a windowed count:
SELECT a.*,
       min(order_date) over(partition by customer_id) as min_order,
       max(order_date) over(partition by customer_id) as max_order,
       max(create_date) over(partition by customer_id) as max_create,
       count(*) over (partition by customer_id, create_date) as cnt
FROM purchase a
(Note that COUNT is a reserved word in Oracle, so the last column is aliased cnt.) This returns all columns of purchase plus the additional columns calculated by the window functions, one output row per input row.
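As a side note, the same pattern can be exercised end to end with a small runnable sketch. This uses Python's built-in sqlite3 module instead of Oracle (SQLite 3.25+ also supports window functions), and the table name and sample values are invented for illustration:

```python
import sqlite3

# In-memory database with a toy "purchase" table (invented sample data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchase (customer_id INTEGER, order_date TEXT)")
conn.executemany(
    "INSERT INTO purchase VALUES (?, ?)",
    [(1, "2023-01-05"), (1, "2023-03-10"), (2, "2023-02-01")],
)

# Window functions compute per-customer aggregates without GROUP BY,
# so every row of the table survives into the result.
rows = conn.execute(
    """
    SELECT customer_id,
           order_date,
           MIN(order_date) OVER (PARTITION BY customer_id) AS min_order,
           MAX(order_date) OVER (PARTITION BY customer_id) AS max_order
    FROM purchase
    ORDER BY customer_id, order_date
    """
).fetchall()

for row in rows:
    print(row)
```

Each customer's rows all carry the same min_order/max_order values, which is exactly what PARTITION BY provides and what GROUP BY cannot, since GROUP BY collapses the rows.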
|
ORACLE SQL = statement error for GROUP BY for whole columns
|
I want to see all columns in the table (via select *) together with new computed columns (using as), so that I can analyse those columns without needing to copy and paste into an Excel spreadsheet.
How can I write the 'group by' without typing each column (there are more than 10 columns)? I usually get an error when I use this query;
select a.*,
       min(order_date) over(partition by customer_id) as min_order,
       max(order_date) over(partition by customer_id) as max_order,
       max(create_date) over(partition by customer_id) as max_create,
       sum(count(*)) over (partition by customer_id, create_date order by create_date desc) count
from purchase a
group by a.customer_id
statement error shows this;
ORA-00937: not a single-group group function ORA-02063: preceding line
from OMSRPT_OMS_OBJECTS
00937. 00000 - "not a single-group group function"
*Cause:
*Action:
Thank you for your help
|
[
"It looks like you are trying to use a window function (min, max, and sum with the over clause) in a group by clause. This is not allowed in SQL because a window function operates on the whole set of rows, while a group by clause divides the rows into groups and applies a function to each group.\nTo fix the error, you need to either remove the group by clause or move the window functions to a subquery. Here is an example of how you could rewrite the query using a subquery:\nSELECT *,\n min(order_date) over(partition by customer_id) as min_order ,\n max(order_date) over(partition by customer_id) as max_order ,\n max(create_date) over(partition by customer_id) as max_create ,\n sum (count(*)) over (partition by customer_id, create_date order by create_date desc ) count\nFROM (\n SELECT *\n FROM purchase\n GROUP BY customer_id\n) a\n\nIn this query, the group by clause is moved to the subquery, which means that the window functions can operate on the whole set of rows in the subquery. The outer query then selects all columns from the subquery and adds the additional columns calculated by the window functions.\n"
] |
[
2
] |
[] |
[] |
[
"oracle_sqldeveloper"
] |
stackoverflow_0074663317_oracle_sqldeveloper.txt
|
Q:
Value conversion error when trying to make a swift toggle button
At the moment I'm hacking away at Swift to learn the language, and I'm coming at it from a Java/C++ perspective. I'm trying to make an app for a game I play called World War II Online. However, I can't get my head around why I'm getting a binding error when trying to code the toggle for remembering a password. Below is the code for my app's landing page.
struct ContentView: View {
@State private var empty_field = ""
@State private var passwordState = false
let userfieldTitle : String = "username"
let passwordFieldTitle : String = "password"
let landingPageTitle = "World War II Online"
let toggleName = "remember password"
var body: some View
{
Text(landingPageTitle).font(.largeTitle)
Section {
Form{
VStack
{
TextField(userfieldTitle,text : $empty_field)
TextField(passwordFieldTitle,text : $empty_field)
Toggle(toggleName, isOn: $passwordState){
print("hello world")
}
}
.padding()
}
}
}
}
I'm getting the error:
Cannot convert value of type 'Binding' to expected argument type 'KeyPath<(() -> ()).Element, Binding>'
I'm really bad at understanding bindings and properties. Is there something I've been code-blind to?
A:
You can't just put code in curly braces and have it execute without this extension.
public extension View {
/// Execute imperative code.
func callAsFunction(_ execute: () -> Void) -> Self {
execute()
return self
}
}
A:
You probably have to stop thinking of SwiftUI like Java or C++. It's not following the common OO paradigm for GUIs that Swing, MFC, Wx, etc. use. It's something completely different.
A body in a View is returning data. Think of it more like JSON/XML that describes the view. There are some things in there that are like imperative code, but it's subtly different.
So, you can't just throw print statements in there whenever you want. You can inside of closures that are imperative actions (like a button click).
So, just remove the block you put in after the Toggle. If you are trying to print out something when the toggle changes, you can look at the $passwordState variable you made.
For example:
Form {
VStack {
TextField(userfieldTitle,text : $empty_field)
TextField(passwordFieldTitle,text : $empty_field)
Toggle(toggleName, isOn: $passwordState)
if passwordState {
Text("hello world")
}
}
.padding()
}
Don't think of that if as being like code that runs an if statement -- think of it more like an <IF> XML tag. That if is not run when the body function runs -- it's returned to the caller that reads the data-structure and figures out what to actually show by analyzing it and the currently shown view and applying diffs to change it from what it is to what it should be.
|
Value conversion error when trying to make a swift toggle button
|
At the moment I'm hacking away at Swift to learn the language, and I'm coming at it from a Java/C++ perspective. I'm trying to make an app for a game I play called World War II Online. However, I can't get my head around why I'm getting a binding error when trying to code the toggle for remembering a password. Below is the code for my app's landing page.
struct ContentView: View {
@State private var empty_field = ""
@State private var passwordState = false
let userfieldTitle : String = "username"
let passwordFieldTitle : String = "password"
let landingPageTitle = "World War II Online"
let toggleName = "remember password"
var body: some View
{
Text(landingPageTitle).font(.largeTitle)
Section {
Form{
VStack
{
TextField(userfieldTitle,text : $empty_field)
TextField(passwordFieldTitle,text : $empty_field)
Toggle(toggleName, isOn: $passwordState){
print("hello world")
}
}
.padding()
}
}
}
}
I'm getting the error:
Cannot convert value of type 'Binding' to expected argument type 'KeyPath<(() -> ()).Element, Binding>'
I'm really bad at understanding bindings and properties. Is there something I've been code-blind to?
|
[
"You can't just put code in curly braces and have it execute without this extension.\npublic extension View {\n /// Execute imperative code.\n func callAsFunction(_ execute: () -> Void) -> Self {\n execute()\n return self\n }\n}\n\n",
"You have to probably stop thinking of SwiftUI like Java or C++. It's not following the common OO paradigm for GUIs that Swing, MFC, Wx, etc use. It's something completely different.\nA body in a View is returning data. Think of it more like JSON/XML that describes the view. There are some things in there that are like imperative code, but it's subtly different.\nSo, you can't just throw print statements in there whenever you want. You can inside of closures that are imperative actions (like a button click).\nSo, just remove the block you put in after the Toggle. If you are trying to print out something when the toggle changes, you can look at the $passwordState variable you made.\nFor example:\nForm {\n VStack {\n TextField(userfieldTitle,text : $empty_field)\n TextField(passwordFieldTitle,text : $empty_field) \n Toggle(toggleName, isOn: $passwordState)\n if passwordState {\n Text(\"hello world\")\n }\n }\n .padding()\n}\n\nDon't think of that if as being like code that runs an if statement -- think of it more like an <IF> XML tag. That if is not run when the body function runs -- it's returned to the caller that reads the data-structure and figures out what to actually show by analyzing it and the currently shown view and applying diffs to change it from what it is to what it should be.\n"
] |
[
1,
0
] |
[] |
[] |
[
"swift",
"swiftui"
] |
stackoverflow_0074662078_swift_swiftui.txt
|
Q:
SymPy: Replace all ints with floats in expression
Seems like SymPy makes it pretty easy to do the opposite - convert all floats to ints, but I'm curious how to do the reverse?
The specific problem I'm running into is with the RustCodeGen spitting out expressions with mixed f64/int types, which makes the compiler unhappy.
Any suggestions on ways to get around this programmatically would be greatly appreciated!
Simple example:
>> variables = [symbols('x1')]
>> expression = 'x1 % 0.5'
>> expr = parse_expr(expression, evaluate=0)
>> print(expr) # Notice it has injected a multiply by 2
0.5*(Mod(2*x1, 1))
>> CG = RustCodeGen()
>> routine = CG.routine("", expr, variables, {})
>> CG._call_printer(routine)
['let out1 = 0.5*(2*x1 - (2*x1).floor());\n', 'out1', '\n']
which doesn't compile:
error[E0277]: cannot multiply `{integer}` by `{float}`
--> src/main.rs:5:22
|
5 | let out1 = 0.5*(2*x1 - (2*x1).floor());
| ^ no implementation for `{integer} * {float}`
A:
I would recommend faking the integer with a symbol having desired float name:
>>> f= expr.xreplace({i:Symbol(str(i)+".") for i in expr.atoms(Integer)})
>>> routine = CG.routine("", f, variables, {})
>>> CG._call_printer(routine)
|
SymPy: Replace all ints with floats in expression
|
Seems like SymPy makes it pretty easy to do the opposite - convert all floats to ints, but I'm curious how to do the reverse?
The specific problem I'm running into is with the RustCodeGen spitting out expressions with mixed f64/int types, which makes the compiler unhappy.
Any suggestions on ways to get around this programmatically would be greatly appreciated!
Simple example:
>> variables = [symbols('x1')]
>> expression = 'x1 % 0.5'
>> expr = parse_expr(expression, evaluate=0)
>> print(expr) # Notice it has injected a multiply by 2
0.5*(Mod(2*x1, 1))
>> CG = RustCodeGen()
>> routine = CG.routine("", expr, variables, {})
>> CG._call_printer(routine)
['let out1 = 0.5*(2*x1 - (2*x1).floor());\n', 'out1', '\n']
which doesn't compile:
error[E0277]: cannot multiply `{integer}` by `{float}`
--> src/main.rs:5:22
|
5 | let out1 = 0.5*(2*x1 - (2*x1).floor());
| ^ no implementation for `{integer} * {float}`
|
[
"I would recommend faking the integer with a symbol having desired float name:\n>>> f= expr.xreplace({i:Symbol(str(i)+\".\") for i in expr.atoms(Integer)})\n>>> routine = CG.routine(\"\", f, variables, {})\n>>> CG._call_printer(routine)```\n\n"
] |
[
0
] |
[] |
[] |
[
"codegen",
"python",
"rust",
"sympy"
] |
stackoverflow_0074663159_codegen_python_rust_sympy.txt
|
Q:
MySql Stored Procedure var null
I'm quite new to mySql and in my stored procedure I can't get my variable id_lead to fetch into the cursor. In other words, inside the loop I do FETCH cursor_id_leads INTO id_lead; but its value is null everytime it iterates.
This procedure is supposed to change two rows in two different tables or rollback in case an error arises
DELIMITER $$
DROP PROCEDURE IF EXISTS modify_entity$$
CREATE DEFINER=`admin_base`@`%` PROCEDURE `modify_entity`(
IN newEntity VARCHAR(100),
IN currentEntity VARCHAR(100)
)
BEGIN
DECLARE errno INT;
DECLARE errname VARCHAR(200);
DECLARE hasError INTEGER DEFAULT 0;
DECLARE numberOfEntitiesAffected INTEGER;
DECLARE numberOfLeadsAffected INTEGER;
DECLARE id_lead INTEGER;
DECLARE var_final_cursor INTEGER DEFAULT 0;
DECLARE cursor_id_leads CURSOR FOR SELECT Id_lead FROM BASE_LEADS WHERE entity = currentEntity;
DECLARE exit handler for SQLEXCEPTION
BEGIN
SET hasError = 1;
GET DIAGNOSTICS CONDITION 1 @sqlstate = RETURNED_SQLSTATE,
@errno = MYSQL_ERRNO, @text = MESSAGE_TEXT;
SET @full_error = CONCAT("ERROR ", @errno, " (", @sqlstate, "): ", @text);
SELECT @full_error, hasError, numberOfEntitiesAffected, numberOfLeadsAffected;
ROLLBACK;
END;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET var_final_cursor := 1;
SET numberOfEntitiesAffected := (SELECT COUNT(*) FROM BASE_ENTITIES WHERE entity = currentEntity);
SET numberOfLeadsAffected := (SELECT COUNT(*) FROM BASE_LEADS WHERE entity = currentEntity);
START TRANSACTION;
OPEN cursor_id_leads;
bucle: LOOP
FETCH cursor_id_leads INTO id_lead;
select id_lead; -- I use this line to figure out what values does id_lead take, but it's always null
IF var_final_cursor = 1 THEN
LEAVE bucle;
END IF;
UPDATE BASE_ENTITIES SET entity = newEntity WHERE entity = currentEntity;
UPDATE BASE_LEADS SET entity = newEntity WHERE Id_lead = id_lead;
END LOOP bucle;
CLOSE cursor_id_leads;
COMMIT;
select errno, errname, hasError, numberOfEntitiesAffected, numberOfLeadsAffected, id_lead;
END$$
DELIMITER ;
The SELECT the cursor does should have data as I try it outside the procedure with the same values and it returns rows
A:
I found the reason why it wasn't fetching data into id_lead: the local variable had the same name as the column, and inside the cursor's SELECT, MySQL resolves an unqualified name that matches a local variable to the variable (which was still NULL) rather than to the column. I renamed my variable id_lead to id_lead_aux and it worked.
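For reference, a sketch of the two usual fixes for this kind of name collision (only the affected lines are shown; the table-alias variant is a common alternative, since a qualified column reference cannot be resolved as a variable):

```sql
-- Option 1: rename the local variable so it no longer shadows the column.
DECLARE id_lead_aux INTEGER;
DECLARE cursor_id_leads CURSOR FOR
    SELECT Id_lead FROM BASE_LEADS WHERE entity = currentEntity;
-- ...
FETCH cursor_id_leads INTO id_lead_aux;

-- Option 2: keep the variable name, but qualify the column with a table
-- alias in the cursor's SELECT so the identifier resolves to the column.
DECLARE cursor_id_leads CURSOR FOR
    SELECT l.Id_lead FROM BASE_LEADS l WHERE l.entity = currentEntity;
```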
|
MySql Stored Procedure var null
|
I'm quite new to mySql and in my stored procedure I can't get my variable id_lead to fetch into the cursor. In other words, inside the loop I do FETCH cursor_id_leads INTO id_lead; but its value is null everytime it iterates.
This procedure is supposed to change two rows in two different tables or rollback in case an error arises
DELIMITER $$
DROP PROCEDURE IF EXISTS modify_entity$$
CREATE DEFINER=`admin_base`@`%` PROCEDURE `modify_entity`(
IN newEntity VARCHAR(100),
IN currentEntity VARCHAR(100)
)
BEGIN
DECLARE errno INT;
DECLARE errname VARCHAR(200);
DECLARE hasError INTEGER DEFAULT 0;
DECLARE numberOfEntitiesAffected INTEGER;
DECLARE numberOfLeadsAffected INTEGER;
DECLARE id_lead INTEGER;
DECLARE var_final_cursor INTEGER DEFAULT 0;
DECLARE cursor_id_leads CURSOR FOR SELECT Id_lead FROM BASE_LEADS WHERE entity = currentEntity;
DECLARE exit handler for SQLEXCEPTION
BEGIN
SET hasError = 1;
GET DIAGNOSTICS CONDITION 1 @sqlstate = RETURNED_SQLSTATE,
@errno = MYSQL_ERRNO, @text = MESSAGE_TEXT;
SET @full_error = CONCAT("ERROR ", @errno, " (", @sqlstate, "): ", @text);
SELECT @full_error, hasError, numberOfEntitiesAffected, numberOfLeadsAffected;
ROLLBACK;
END;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET var_final_cursor := 1;
SET numberOfEntitiesAffected := (SELECT COUNT(*) FROM BASE_ENTITIES WHERE entity = currentEntity);
SET numberOfLeadsAffected := (SELECT COUNT(*) FROM BASE_LEADS WHERE entity = currentEntity);
START TRANSACTION;
OPEN cursor_id_leads;
bucle: LOOP
FETCH cursor_id_leads INTO id_lead;
select id_lead; -- I use this line to figure out what values does id_lead take, but it's always null
IF var_final_cursor = 1 THEN
LEAVE bucle;
END IF;
UPDATE BASE_ENTITIES SET entity = newEntity WHERE entity = currentEntity;
UPDATE BASE_LEADS SET entity = newEntity WHERE Id_lead = id_lead;
END LOOP bucle;
CLOSE cursor_id_leads;
COMMIT;
select errno, errname, hasError, numberOfEntitiesAffected, numberOfLeadsAffected, id_lead;
END$$
DELIMITER ;
The SELECT the cursor does should have data as I try it outside the procedure with the same values and it returns rows
|
[
"I found the reason why it wasn't fetching data into id_lead. It was because the column in the table has the same name. I renamed my variable id_lead to id_lead_aux and it worked.\n"
] |
[
1
] |
[] |
[] |
[
"mysql",
"stored_procedures"
] |
stackoverflow_0074662645_mysql_stored_procedures.txt
|
Q:
Module not found: Can't resolve './PlatformColorValueTypes'
I have just started learning React Native and wanted to add a StyleSheet, but it doesn't seem to work. I'm trying to solve the issue but I'm still stuck; please, can someone help me solve it?
Console log
Browser result:
here is my code below :
const styles = StyleSheet.create({
container: {
flex: 1,
marginTop: 100,
paddingTop: 20,
paddingBottom: 30,
justifyContent: 'center',
alignItems: 'center',
},
tinyPic: {
width: 90,
height: 90,
top: -10
},
gauge: {
position: 'absolute',
width: 140,
height: 140,
alignItems: 'center',
justifyContent: 'center',
},
label1: {
position: 'absolute',
top:15,
right: '115%',
padding: 4
},
label2: {
position: 'absolute',
top:15,
left: '115%',
padding: 4
},
label3: {
position: 'absolute',
top: 145,
left: '95%',
padding: 4
}
});
A:
I solve this issue by updating react-native :
npm i https://github.com/expo/react-native/archive/sdk-42.0.0.tar.gz --save -force
A:
I encountered a similar issue in an Expo React Native (SDK 47) app, where Webpack told me,
Exporting with Webpack...
Failed to compile
ModuleNotFoundError: ./node_modules/react-native/Libraries/StyleSheet/normalizeColor.js
Cannot find module: './PlatformColorValueTypes'. Make sure this package is installed.
You can install this package by running: npm install ./PlatformColorValueTypes.
I looked into ./node_modules/react-native/Libraries/StyleSheet/normalizeColor.js and confirmed that it called require('./PlatformColorValueTypes').
When I looked inside ./node_modules/react-native/Libraries/StyleSheet/, I saw that Webpack was correct that normalizeColor.js did not have a sibling file named PlatformColorValueTypes.js. But there were sibling files named PlatformColorValueTypes.ios.js and PlatformColorValueTypes.android.js.
To resolve this, I edited my webpack.config.js file to include .ios.js, .android.js, and .web.js extensions as ones that would be resolved by Webpack:
resolve: {
extensions: [".tsx", ".ts", ".js", ".ios.js", ".android.js", ".web.js"]
}
|
Module not found: Can't resolve './PlatformColorValueTypes'
|
I have just started learning React Native and wanted to add a StyleSheet, but it doesn't seem to work. I'm trying to solve the issue but I'm still stuck; please, can someone help me solve it?
Console log
Browser result:
here is my code below :
const styles = StyleSheet.create({
container: {
flex: 1,
marginTop: 100,
paddingTop: 20,
paddingBottom: 30,
justifyContent: 'center',
alignItems: 'center',
},
tinyPic: {
width: 90,
height: 90,
top: -10
},
gauge: {
position: 'absolute',
width: 140,
height: 140,
alignItems: 'center',
justifyContent: 'center',
},
label1: {
position: 'absolute',
top:15,
right: '115%',
padding: 4
},
label2: {
position: 'absolute',
top:15,
left: '115%',
padding: 4
},
label3: {
position: 'absolute',
top: 145,
left: '95%',
padding: 4
}
});
|
[
"I solve this issue by updating react-native :\nnpm i https://github.com/expo/react-native/archive/sdk-42.0.0.tar.gz --save -force\n",
"I encountered a similar issue in an Expo React Native (SDK 47) app, where Webpack told me,\nExporting with Webpack...\nFailed to compile\nModuleNotFoundError: ./node_modules/react-native/Libraries/StyleSheet/normalizeColor.js\nCannot find module: './PlatformColorValueTypes'. Make sure this package is installed.\n\nYou can install this package by running: npm install ./PlatformColorValueTypes.\n\nI looked into ./node_modules/react-native/Libraries/StyleSheet/normalizeColor.js and confirmed that it called require('./PlatformColorValueTypes').\nWhen I looked inside ./node_modules/react-native/Libraries/StyleSheet/, I saw that Webpack was correct that normalizeColor.js did not have a sibling file named PlatformColorValueTypes.js. But there were sibling files named PlatformColorValueTypes.ios.js and PlatformColorValueTypes.android.js.\nTo resolve this, I edited my webpack.config.js file to include .ios.js, .android.js, and .web.js extensions as ones that would be resolved by Webpack:\nresolve: {\n extensions: [\".tsx\", \".ts\", \".js\", \".ios.js\", \".android.js\", \".web.js\"]\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"javascript",
"react_native",
"reactjs",
"typescript"
] |
stackoverflow_0069583163_javascript_react_native_reactjs_typescript.txt
|
Q:
Make an array multiply only the columns behind it, going one at a time until the last one and save the answer in c
I'm making a program in C that factors any number using primes and saves these primes, multiplying them you find all the divisors of a number.
But I can't make an array that multiplies the previous columns and saves the results. follow the example
60 / 2
30 / 2
15 / 3
5 / 5
divisors = 2, 2, 3, 5
Now I need to add 1 to the array: {1, 2, 2, 3, 5}.
Start with column 2 {1, 2}: 2 * 1 = 2, save.
Next, column 3 {1, 2, 2}: 2 * 1 = 2, but we already have 2, so don't save it.
Continue: 2 * 2 = 4, save.
Column 4 {1, 2, 2, 3}: 3 * 1 = 3, save; 3 * 2 = 6, save; 3 * 4 = 12, save.
Column 5 {1, 2, 2, 3, 5}: 5 * 1 = 5, save; 5 * 2 = 10, save; 5 * 4 = 20, save; 5 * 3 = 15, save; 5 * 6 = 30, save; 5 * 12 = 60, save.
Now we have found all the divisors of 60 = 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60.
It is important to mention that I need the program to work this way; I know there are other approaches, but I need this one. I have been unable to complete it for a week.
video to help https://www.youtube.com/watch?v=p0v5FpONddU&t=1s&ab_channel=MATEM%C3%81TICAFORALLLUISCARLOS
my program so far
#include <stdlib.h>
#include <stdio.h>
int N = 1;
int verificarPrimo(int numero);
int main()
{
int num = 60, i, primo = 1, resultados[N], j = 1;
for (i = 0; i < 60; i++)
{
if (primo == 1)
{
resultados[N - 1] = primo;
i = 2;
primo = i;
}
if (verificarPrimo(i))
{
while (num % i == 0)
{
num = num / i;
resultados[N] = i;
N++;
}
}
}
for (i = 1; i < N; i++)
{
printf("%d \n", resultados[i]);
}
}
int verificarPrimo(int primo)
{
int i;
if (primo <= 1)
return 0;
for (i = 2; i <= primo / 2; i++)
{
if (primo % i == 0)
return 0;
}
return 1;
}
A:
I tried out your code and ran into some issues with how the results were being stored. First off, the results array is being initially defined as an array with a size of "1", and that is not what you probably want.
int num = 60, i, primo = 1, resultados[N], j = 1;
With that in mind and determining the spirit of this project, following is tweaked version of the code to test for one or more values and their factors.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int verificarPrimo(int primo)
{
int sq = sqrt(primo) + 1; /* Usual checking for a prime number is from '2' to the square root of the number being evaluated */
if (primo <= 1)
return 0;
for (int i = 2; i < sq; i++)
{
if (primo % i == 0)
return 0;
}
return 1;
}
int main()
{
int N = 0;
int num = 0, entry = 0, resultados[100]; /* The results array needs to be defined with some value large enough to contain the assorted factors a number might have */
printf("Enter a number to evaluate for factors: "); /* Using a prompt to allow various values to be tested */
scanf("%d", &entry);
num = entry;
if (verificarPrimo(num)) /* Catchall in case the entered number is a prime number */
{
printf("This number is a prime number and has no factors other than one and itself\n");
return 0;
}
resultados[0] = 1; /* Normally the value '1' is implied in a list of factors, so these lines could be omitted */
N = 1;
for (int i = 2; i < entry; i++)
{
if (verificarPrimo(i))
{
while (num % i == 0)
{
num = num / i;
resultados[N] = i;
N++;
}
}
}
printf("Factors for %d\n", entry);
for (int i = 0; i < N; i++)
{
printf("%d ", resultados[i]);
}
printf("\n");
return 0;
}
Some items to point out in this tweaked code.
In the prime number verification function, it is usually customary to set up a for loop in testing for prime numbers to go from the value of "2" to the square root of the number being tested. There usually is no need to travel to one half of the number being tested. For that, the #include <math.h> statement was added (FYI, "-lm" would need to be added to link in the math library).
Instead of defining the results array with a size of one element, an arbitrary size of "100" was chosen for holding the possible number of results when evaluating factors for a given value. Your original code had the potential of storing data past the end of the array and causing a "smashing" error.
The value of "1" is usually left out of the list of factors for a number, but was left in as the initial result value. This might be left out of the completed code.
An additional entry field was added to allow for user entry to be tested to give the code some flexibility in testing numbers.
A test was also added to see if the entered number is itself a prime number, which would only have factors of "1" and itself.
Following is some sample terminal output testing out your original value of "60" along with some other values.
@Dev:~/C_Programs/Console/Factors/bin/Release$ ./Factors
Enter a number to evaluate for factors: 60
Factors for 60
1 2 2 3 5
@Dev:~/C_Programs/Console/Factors/bin/Release$ ./Factors
Enter a number to evaluate for factors: 63
Factors for 63
1 3 3 7
@Dev:~/C_Programs/Console/Factors/bin/Release$ ./Factors
Enter a number to evaluate for factors: 29
This number is a prime number and has no factors other than one and itself
Give that a try to see if it meets the spirit of your project.
|
Make an array multiply only the columns behind it, going one at a time until the last one and save the answer in c
|
I'm making a program in C that factors any number using primes and saves these primes, multiplying them you find all the divisors of a number.
But I can't make an array that multiplies the previous columns and saves the results. follow the example
60 / 2
30 / 2
15 / 3
5 / 5
divisors = 2, 2, 3, 5
Now I need to add 1 to the array: {1, 2, 2, 3, 5}.
Start with column 2 {1, 2}: 2 * 1 = 2, save.
Next, column 3 {1, 2, 2}: 2 * 1 = 2, but we already have 2, so don't save it.
Continue: 2 * 2 = 4, save.
Column 4 {1, 2, 2, 3}: 3 * 1 = 3, save; 3 * 2 = 6, save; 3 * 4 = 12, save.
Column 5 {1, 2, 2, 3, 5}: 5 * 1 = 5, save; 5 * 2 = 10, save; 5 * 4 = 20, save; 5 * 3 = 15, save; 5 * 6 = 30, save; 5 * 12 = 60, save.
Now we have found all the divisors of 60 = 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60.
It is important to mention that I need the program to work this way; I know there are other approaches, but I need this one. I have been unable to complete it for a week.
video to help https://www.youtube.com/watch?v=p0v5FpONddU&t=1s&ab_channel=MATEM%C3%81TICAFORALLLUISCARLOS
my program so far
#include <stdlib.h>
#include <stdio.h>
int N = 1;
int verificarPrimo(int numero);
int main()
{
int num = 60, i, primo = 1, resultados[N], j = 1;
for (i = 0; i < 60; i++)
{
if (primo == 1)
{
resultados[N - 1] = primo;
i = 2;
primo = i;
}
if (verificarPrimo(i))
{
while (num % i == 0)
{
num = num / i;
resultados[N] = i;
N++;
}
}
}
for (i = 1; i < N; i++)
{
printf("%d \n", resultados[i]);
}
}
int verificarPrimo(int primo)
{
int i;
if (primo <= 1)
return 0;
for (i = 2; i <= primo / 2; i++)
{
if (primo % i == 0)
return 0;
}
return 1;
}
|
[
"I tried out your code and ran into some issues with how the results were being stored. First off, the results array is being initially defined as an array with a size of \"1\", and that it not what you probably want.\nint num = 60, i, primo = 1, resultados[N], j = 1;\n\nWith that in mind and determining the spirit of this project, following is tweaked version of the code to test for one or more values and their factors.\n#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n\nint verificarPrimo(int primo)\n{\n int sq = sqrt(primo) + 1; /* Usual checking for a prime number is from '2' to the square root of the number being evaluated */\n\n if (primo <= 1)\n return 0;\n for (int i = 2; i < sq; i++)\n {\n if (primo % i == 0)\n return 0;\n }\n return 1;\n}\n\nint main()\n{\n int N = 0;\n int num = 0, entry = 0, resultados[100]; /* The results array needs to be defined with some value large enough to contain the assorted factors a number might have */\n\n printf(\"Enter a number to evaluate for factors: \"); /* Using a prompt to allow various values to be tested */\n scanf(\"%d\", &entry);\n\n num = entry;\n\n if (verificarPrimo(num)) /* Catchall in case the entered number is a prime number */\n {\n printf(\"This number is a prime number and has no factors other than one and itself\\n\");\n return 0;\n }\n\n resultados[0] = 1; /* Normally the value '1' is implied in a list of factors, so these lines could be omitted */\n N = 1;\n\n for (int i = 2; i < entry; i++)\n {\n if (verificarPrimo(i))\n {\n while (num % i == 0)\n {\n num = num / i;\n resultados[N] = i;\n N++;\n }\n }\n }\n\n printf(\"Factors for %d\\n\", entry);\n\n for (int i = 0; i < N; i++)\n {\n printf(\"%d \", resultados[i]);\n }\n\n printf(\"\\n\");\n\n return 0;\n}\n\nSome items to point out in this tweaked code.\n\nIn the prime number verification function, it is usually customary to set up a for loop in testing for prime numbers to go from the value of \"2\" to the square root of the number being 
tested. There usually is no need travel to one half of the number being tested. For that, the #include <math.h> statement was added (FYI, \"-lm\" would need to be added to link in the math library).\nInstead of defining the results array with a value of one element, an arbitrary value of \"60\" was chosen for the holding the possible number of results when evaluating factors for a given value. Your original code had the potential of storing data past the end of the array and causing a \"smashing\" error.\nThe value of \"1\" is usually left out of the list of factors for a number, but was left in as the initial result value. This might be left out of the completed code.\nAn additional entry field was added to allow for user entry to be tested to give the code some flexibility in testing numbers.\nA test was also added to see if the entered number is itself a prime number, which would only have factors of \"1\" and itself.\n\nFollowing is some sample terminal output testing out your original value of \"60\" along with some other values.\n@Dev:~/C_Programs/Console/Factors/bin/Release$ ./Factors \nEnter a number to evaluate for factors: 60\nFactors for 60\n1 2 2 3 5 \n@Dev:~/C_Programs/Console/Factors/bin/Release$ ./Factors \nEnter a number to evaluate for factors: 63\nFactors for 63\n1 3 3 7 \n@Dev:~/C_Programs/Console/Factors/bin/Release$ ./Factors \nEnter a number to evaluate for factors: 29\nThis number is a prime number and has no factors other than one and itself\n\nGive that a try to see if it meets the spirit of your project.\n"
] |
[
0
] |
[] |
[] |
[
"arrays",
"c"
] |
stackoverflow_0074663114_arrays_c.txt
|
Q:
value only in first row of new category, how to fill missing values
I'm trying to clean data from an Excel file with PANDAS.
I want to fill the missing values from the category and subcategory columns.
I was thinking of a for loop, but I'm not sure how I would do that. Does anyone know, or have a better idea to manage this data?
A:
Pandas has a handy .fillna() method that helps here. Assuming your data is stored in spreadsheet.csv, you can just use the code below. The ffill method is an abbreviation for "forward fill" - meaning that it will go down each column and fill a missing value with the value above it.
df = pd.read_csv('spreadsheet.csv')
df = df.fillna(method='ffill')
Alternatively, if you want to manually specify a fill value for each column, you can pass a dictionary into .fillna() using the column names for keys, and replacement for values. For example:
df.fillna({'Category': 'new category value',
'SubCategory': 'new subcategory',
'Product': "No product"})
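As a quick illustration of the forward-fill approach, here is a minimal sketch using a small hypothetical DataFrame (the column names mirror the answer above, not the asker's actual spreadsheet):

```python
import pandas as pd
import numpy as np

# Hypothetical data shaped like the asker's file: the category only
# appears on the first row of each group, the rest are missing.
df = pd.DataFrame({
    "Category": ["Fruit", np.nan, np.nan, "Veg", np.nan],
    "Product": ["Apple", "Pear", "Plum", "Kale", "Leek"],
})

# Forward-fill: each NaN takes the last non-null value above it.
filled = df.ffill()  # same effect as df.fillna(method="ffill")
print(filled["Category"].tolist())
# -> ['Fruit', 'Fruit', 'Fruit', 'Veg', 'Veg']
```

Note that .fillna()/.ffill() return a new DataFrame by default, so assign the result back (as above) if you want to keep it.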
|
value only in first row of new category, how to fill missing values
|
I'm trying to clean data from an Excel file with PANDAS.
I want to fill the missing values from the category and subcategory columns.
I was thinking of a for loop, but I'm not sure how I would do that. Does anyone know, or have a better idea to manage this data?
|
[
"Pandas has a handy .fillna() method that helps here. Assuming your data is stored in spreadsheet.csv, you can just use the code below. The ffill method is an abbreviation for \"forward fill\" - meaning that it will go down each column and fill a missing value with the value above it.\ndf = pd.read_csv('spreadsheet.csv')\ndf.fillna(method='ffill')\n\nAlternatively, if you want to manually specify a fill value for each column, you can pass a dictionary into .fillna() using the column names for keys, and replacement for values. For example:\ndf.fillna({'Category': 'new category value', \n 'SubCategory': 'new subcategory', \n 'Product': \"No product\"})\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python_3.x"
] |
stackoverflow_0074663294_dataframe_pandas_python_3.x.txt
|
Q:
How to add an SVG circle to an Array of SVGs in react on button click
import { useState, useRef } from 'react'
import './App.css'
function App() {
const [circle, setCircle] = useState([])
const refElement = useRef(null);
const width = 500;
const height = 500;
const circleX= Math.floor(Math.random() * 400);
const circleY = Math.floor(Math.random() * 400);
const circleRadius = 20;
const randomColor = '#' + (Math.random().toString(16) + "000000").substring(2,8)
const LeadCircle = <circle cx={circleX} cy={circleY} r={circleRadius} fill={randomColor} stroke={"black"} strokeWidth={"10"}/>
const Circle = <circle cx={circleX} cy={circleY} r={circleRadius} fill={randomColor}/>
const addLead = () =>{
console.log("circleLead added " + LeadCircle)
LeadCircle
setCircle(previous =>[...circle,previous]);
}
const addNormal = () =>{
console.log("circleNormal added " + Circle)
}
const removeLast = () =>{
}
const removeAll = () =>{
}
return (
<div className="App">
<h1>doobverse setup with svg</h1>
<div className="card">
<div id="svg-container">
<svg width={width} height={height} style={{border:"2px solid white"}} useref={refElement}>
{LeadCircle}
</svg>
</div>
<br></br>
<br></br>
<button id={"addLead"} onClick={addLead}>
Add lead
</button>
<button onClick={addNormal}>
Add normal
</button>
<br></br><br></br>
<button onClick={removeLast}>
remove one
</button>
<button onClick={removeAll}>
remove all
</button>
</div>
</div>
)
}
export default App
So that's my code.
I just want to have a simple React app where you can add a circle on button click, remove the last drawn circle, and remove all drawn circles. I tried to do this with canvas before: I could draw circles on click but had problems removing them, so I asked in a Discord, where they explained that you need to clear and redraw the canvas every time you want to remove an element, and that it's easier to work with SVGs. So I've set the thing up with SVGs, but now it's not even working with the add function...
Thx for help !!
:)
A:
To add a circle to an array of SVGs in React, you can use the useState hook to store the array of circles and the useRef hook to reference the SVG element in your component. Then, you can create a new circle element and add it to the array of circles using the setCircle function. Here's an example of how you can do this:
import { useState, useRef } from 'react'
function App() {
const [circles, setCircles] = useState([])
const svgRef = useRef(null)
const width = 500
const height = 500
const circleX = Math.floor(Math.random() * 400)
const circleY = Math.floor(Math.random() * 400)
const circleRadius = 20
const randomColor = '#' + (Math.random().toString(16) + "000000").substring(2,8)
const addCircle = () => {
// Create a new circle element
const circle = (
<circle
cx={circleX}
cy={circleY}
r={circleRadius}
fill={randomColor}
stroke={"black"}
strokeWidth={"10"}
/>
)
// Add the circle to the array of circles
setCircles(prevCircles => [...prevCircles, circle])
}
return (
<div>
<h1>doobverse setup with svg</h1>
<div className="card">
<div id="svg-container">
{/* Reference the SVG element using the useRef hook */}
<svg width={width} height={height} style={{border:"2px solid white"}} ref={svgRef}>
{/* Render the array of circles */}
{circles.map(circle => circle)}
</svg>
</div>
<br />
<br />
{/* Add a new circle when the button is clicked */}
<button onClick={addCircle}>Add circle</button>
</div>
</div>
)
}
export default App
In this example, the addCircle function is called when the "Add circle" button is clicked, and it creates a new circle element with random position and color. It then adds this element to the array of circles using the setCircles function. The array of circles is rendered inside the SVG element by mapping over the array and rendering each circle element.
A:
It looks like you're trying to create an SVG circle element using the LeadCircle and Circle variables and add it to the circle state array when the addLead or addNormal functions are called. However, there are a few issues with your code that are preventing this from working correctly.
First, in the addLead function, you are calling the LeadCircle variable, which contains an SVG circle element, but you are not actually adding it to the circle state array. Instead, you are adding the previous value of the circle array to itself using the spread operator (...), which will not have the desired effect. To fix this, you should replace the line setCircle(previous =>[...circle,previous]); with setCircle(previous =>[...previous, LeadCircle]);. This will add the LeadCircle element to the circle array and update the state.
Next, you are using the useRef hook to create a reference to the SVG element, but you never actually use that reference. Note that you should not try to attach the circles imperatively with something like refElement.current.appendChild(LeadCircle): LeadCircle is a JSX element (a plain React description object), not a DOM node, so appendChild would fail. In React the ref is unnecessary here; keep the circles in state and let React render them into the SVG.
Finally, you are not rendering the circle state array in the SVG element, so the circles that you add will not be visible on the page. To fix this, you can use the map method to iterate over the circle array and render each element in the SVG element. For example, you could replace the line {LeadCircle} with the following:
{circle.map(circle => circle)}
This will render each element in the circle array in the SVG element.
|
How to add an SVG circle to an Array of SVGs in react on button click
|
import { useState, useRef } from 'react'
import './App.css'
function App() {
const [circle, setCircle] = useState([])
const refElement = useRef(null);
const width = 500;
const height = 500;
const circleX= Math.floor(Math.random() * 400);
const circleY = Math.floor(Math.random() * 400);
const circleRadius = 20;
const randomColor = '#' + (Math.random().toString(16) + "000000").substring(2,8)
const LeadCircle = <circle cx={circleX} cy={circleY} r={circleRadius} fill={randomColor} stroke={"black"} strokeWidth={"10"}/>
const Circle = <circle cx={circleX} cy={circleY} r={circleRadius} fill={randomColor}/>
const addLead = () =>{
console.log("circleLead added " + LeadCircle)
LeadCircle
setCircle(previous =>[...circle,previous]);
}
const addNormal = () =>{
console.log("circleNormal added " + Circle)
}
const removeLast = () =>{
}
const removeAll = () =>{
}
return (
<div className="App">
<h1>doobverse setup with svg</h1>
<div className="card">
<div id="svg-container">
<svg width={width} height={height} style={{border:"2px solid white"}} useref={refElement}>
{LeadCircle}
</svg>
</div>
<br></br>
<br></br>
<button id={"addLead"} onClick={addLead}>
Add lead
</button>
<button onClick={addNormal}>
Add normal
</button>
<br></br><br></br>
<button onClick={removeLast}>
remove one
</button>
<button onClick={removeAll}>
remove all
</button>
</div>
</div>
)
}
export default App
So that's my code.
I just want to have a simple React app where you can add a circle on button click, remove the last drawn circle, and remove all drawn circles. I tried to do this with canvas before: I could draw circles on click but had problems removing them, so I asked in a Discord, where they explained that you need to clear and redraw the canvas every time you want to remove an element, and that it's easier to work with SVGs. So I've set the thing up with SVGs, but now it's not even working with the add function...
Thx for help !!
:)
|
[
"To add a circle to an array of SVGs in React, you can use the useState hook to store the array of circles and the useRef hook to reference the SVG element in your component. Then, you can create a new circle element and add it to the array of circles using the setCircle function. Here's an example of how you can do this:\nimport { useState, useRef } from 'react'\n\nfunction App() {\n const [circles, setCircles] = useState([])\n const svgRef = useRef(null)\n\n const width = 500\n const height = 500\n const circleX = Math.floor(Math.random() * 400)\n const circleY = Math.floor(Math.random() * 400)\n const circleRadius = 20\n const randomColor = '#' + (Math.random().toString(16) + \"000000\").substring(2,8)\n\n const addCircle = () => {\n // Create a new circle element\n const circle = (\n <circle\n cx={circleX}\n cy={circleY}\n r={circleRadius}\n fill={randomColor}\n stroke={\"black\"}\n strokeWidth={\"10\"}\n />\n )\n\n // Add the circle to the array of circles\n setCircles(prevCircles => [...prevCircles, circle])\n }\n\n return (\n <div>\n <h1>doobverse setup with svg</h1>\n <div className=\"card\">\n <div id=\"svg-container\">\n {/* Reference the SVG element using the useRef hook */}\n <svg width={width} height={height} style={{border:\"2px solid white\"}} ref={svgRef}>\n {/* Render the array of circles */}\n {circles.map(circle => circle)}\n </svg>\n </div>\n <br />\n <br />\n {/* Add a new circle when the button is clicked */}\n <button onClick={addCircle}>Add circle</button>\n </div>\n </div>\n )\n}\n\nexport default App\n\nIn this example, the addCircle function is called when the \"Add circle\" button is clicked, and it creates a new circle element with random position and color. It then adds this element to the array of circles using the setCircles function. The array of circles is rendered inside the SVG element by mapping over the array and rendering each circle element.\n",
"It looks like you're trying to create an SVG circle element using the LeadCircle and Circle variables and add it to the circle state array when the addLead or addNormal functions are called. However, there are a few issues with your code that are preventing this from working correctly.\nFirst, in the addLead function, you are calling the LeadCircle variable, which contains an SVG circle element, but you are not actually adding it to the circle state array. Instead, you are adding the previous value of the circle array to itself using the spread operator (...), which will not have the desired effect. To fix this, you should replace the line setCircle(previous =>[...circle,previous]); with setCircle(previous =>[...previous, LeadCircle]);. This will add the LeadCircle element to the circle array and update the state.\nNext, you are using the useRef hook to create a reference to an SVG element, but you are not actually using this reference in your code. In the addLead and addNormal functions, you need to use this reference to add the LeadCircle and Circle elements to the SVG element. You can do this by calling the current property of the refElement variable and using the appendChild method to add the elements to the SVG element. For example, you could add the following line to the addLead function:\nrefElement.current.appendChild(LeadCircle);\n\nFinally, you are not rendering the circle state array in the SVG element, so the circles that you add will not be visible on the page. To fix this, you can use the map method to iterate over the circle array and render each element in the SVG element. For example, you could replace the line {LeadCircle} with the following:\n{circle.map(circle => circle)}\n\nThis will render each element in the circle array in the SVG element.\n"
] |
[
0,
0
] |
[] |
[] |
[
"arrays",
"javascript",
"reactjs",
"svg"
] |
stackoverflow_0074663337_arrays_javascript_reactjs_svg.txt
|
Q:
Terraform Azure: enforcing Client id and Secret?
I have a simple terraform code
# Configure the Microsoft Azure provider
provider "azurerm" {
features {}
}
# Create a Resource Group if it doesn’t exist
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West US"
}
It used to work as I logged in the Terminal using my User credentials , but now it throws an error
│ Error: building AzureRM Client: 3 errors occurred:
│ * A Subscription ID must be configured when authenticating as a Service Principal using a Client Secret.
│ * A Client ID must be configured when authenticating as a Service Principal using a Client Secret.
│ * A Tenant ID must be configured when authenticating as a Service Principal using a Client Secret.
│
│
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 2, in provider "azurerm":
│ 2: provider "azurerm" {
What is causing this issue? I can't create a Service Principal due to lack of permission at the Active Directory. How do I make it work again without the Service Principal?
A:
I guess you're executing this from a local PC or a VM. The service principal here is used to authenticate between your PC and Azure. There are other methods too, like using managed identities. It is better to get an SPN if you don't have one. Please find the reference link:
Terraform-Azure Authentication
You have mentioned it used to work. I think the PC you are using must have the following environment variables configured.
ARM_CLIENT_ID
ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID
ARM_TENANT_ID
or a local Azure CLI configuration file must be available.
A:
You must provide the subscription_id, tenant_id, client_id, and client_secret details when you are running locally or in CI/CD.
These values authenticate your Azure account and allow Terraform to create the resources defined in your code.
Below is a link to the authentication methods:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_certificate please do check it.
In most organizations users are granted least-privilege access. If you are provisioning resources with Terraform, you need at least a Contributor role (you can provision limited resources).
An alternative is to use the service principal details of an existing App Registration; ask your admin to add your name as an owner of that App Registration.
You can also ask your admin to create a new App Registration and add your name as an owner, add the required API permissions, create and share the client_secret with you, and assign the required role to the App Registration based on your requirements.
|
Terraform Azure: enforcing Client id and Secret?
|
I have a simple terraform code
# Configure the Microsoft Azure provider
provider "azurerm" {
features {}
}
# Create a Resource Group if it doesn’t exist
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West US"
}
It used to work as I logged in the Terminal using my User credentials , but now it throws an error
│ Error: building AzureRM Client: 3 errors occurred:
│ * A Subscription ID must be configured when authenticating as a Service Principal using a Client Secret.
│ * A Client ID must be configured when authenticating as a Service Principal using a Client Secret.
│ * A Tenant ID must be configured when authenticating as a Service Principal using a Client Secret.
│
│
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 2, in provider "azurerm":
│ 2: provider "azurerm" {
What is causing this issue? I can't create a Service Principal due to lack of permission at the Active Directory. How do I make it work again without the Service Principal?
|
[
"I guess you're executing this from a local PC or a VM. The use of service principal here is to authenticate between your PC and Azure Portal. There are other methods too, like using managed identity services. It is better to get a SPN if you don't have one. Please find the reference link:\nTerraform-Azure Authentication\nYou have mentioned it used to work. I think the PC you are using must have the following environment variables configured.\nARM_CLIENT_ID\nARM_CLIENT_SECRET\nARM_SUBSCRIPTION_ID\nARM_TENANT_ID\n\nor there must be a local configuration file must be available.\n",
"You must need provide this subscription_id, tenant_id, client_id, and client_secret details when you are running on locally or in CICD.\nThese values will authenticate your azure account and create the resources which you want to create based upon the code.\nBelow are the authentication methods link:\nhttps://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_certificate please do check it.\nIn any organization the users will be get a least privilege. If you are provisioning terraform activities, you need at least a contributor role (you can provision limited resources).\nAnother alternative way is you can use the service principal details existing App Registration, ask your admin to add your name as an owner of that App Registration.\nYou can ask your admin to create a new APP registration and add your name as owner for the App Registartion, ask to add required api permissions, create and share the client_secret with you, add the required role to APP registration based upon your requirement.\n"
] |
[
1,
1
] |
[] |
[] |
[
"azure",
"terraform_provider_azure"
] |
stackoverflow_0074644361_azure_terraform_provider_azure.txt
|
Q:
Create repository button disabled on GitHub
I am using macOS version 11.2.2 Big Sur. I don't seem to be able to create a GitHub repository on the web (I am using Safari). The button is greyed out, i.e. disabled.
There is no other problem in naming, storage etc.
Please suggest me some solutions.
A:
Creating a repository with the .gitignore option enabled but no template selected leaves the Create repository button disabled.
.gitignore template: None -> the Create repository button is disabled.
Either deselect the option or select a template, and the button will become enabled.
A:
Try to remove the disabled attribute of the button using developer tools (inspect). It works for me.
A:
The only required fields are:
Owner
Repository Name
Once those 2 fields have been filled out the button will be enabled.
A:
I was able to solve this by pressing Tab until the button was selected. Then Enter submitted the form.
A:
I also had this problem. It was caused by Chrome; use Microsoft Edge or another browser instead. I am guessing it is caused by Chrome extensions like AdBlock.
|
Create repository button disabled on GitHub
|
I am using macOS version 11.2.2 Big Sur. I don't seem to be able to create a GitHub repository on the web (I am using Safari). The button is greyed out, i.e. disabled.
There is no other problem in naming, storage etc.
Please suggest me some solutions.
|
[
"Try creating a repository with .gitignore without any template makes create repository button disabled.\n.gitignore template: None -> create Repository button is disabled.\n\nNow either deselect the option or select a template, will make the button enabled.\n\n",
"Try to remove the disabled attribute of the button using developer tools (inspect). It works for me.\n",
"The only required fields are:\n\nOwner\nRepository Name\n\nOnce those 2 fields have been filled out the button will be enabled.\n",
"I was able to solve this by pressing Tab until the button was selected. Then Enter submitted the form.\n",
"I also had this problem. It is because of chrome use microsoft edge or something else. I am guessing it is because of schrome extentions like addblock.\n"
] |
[
27,
15,
3,
0,
0
] |
[] |
[] |
[
"git",
"github",
"repository"
] |
stackoverflow_0067565130_git_github_repository.txt
|
Q:
Git merge: is it possible to avoid auto-merge for non fast-forwarded files?
When I do a git merge I would like that only the files that can be fast-forwarded (i.e. the files that have changed only on one of the branches being merged since the last common revision) are automerged, while all the other files (that have changed on both branches, even if on different lines) are marked as conflicts. I have looked around but it seems that there is not a "simple" way (e.g. an option to pass to git merge) to do that. Here there is a similar question, but not quite this one. Here there is a very similar question, but it is 4 years old and it has no conclusive answer.
A:
As far as I know it's not possible to do this without using some sort of script (shouldn't be too difficult). I looked for a similar solution a couple months back and I simply ended up disabling the fast-forwarded merge. I think they mention how in some of the questions you added:
git merge --no-ff
A:
Put * -merge in .git/info/attributes of the repository where you're performing the merge. This will prevent any auto-merging, introducing merge conflicts instead.
Note that .git/info/attributes has the highest precedence of all .gitattribute files, so none of those would cancel it out.
|
Git merge: is it possible to avoid auto-merge for non fast-forwarded files?
|
When I do a git merge I would like that only the files that can be fast-forwarded (i.e. the files that have changed only on one of the branches being merged since the last common revision) are automerged, while all the other files (that have changed on both branches, even if on different lines) are marked as conflicts. I have looked around but it seems that there is not a "simple" way (e.g. an option to pass to git merge) to do that. Here there is a similar question, but not quite this one. Here there is a very similar question, but it is 4 years old and it has no conclusive answer.
|
[
"As far as I know it's not possible to do this without using some sort of script (shouldn't be too difficult). I looked for a similar solution a couple months back and I simply ended up disabling the fast-forwarded merge. I think they mention how in some of the questions you added: \ngit merge --no-ff\n\n",
"Put * -merge in .git/info/attributes of the repository where you're performing the merge. This will prevent any auto-merging, introducing merge conflicts instead.\nNote that .git/info/attributes has the highest precedence of all .gitattribute files, so none of those would cancel it out.\n"
] |
[
1,
1
] |
[] |
[] |
[
"git",
"merge"
] |
stackoverflow_0033427507_git_merge.txt
|
Q:
Match similar column values in different rows
I have a table with an ID column (String) and I need to be able to find IDs that are similar between different rows. What is the SQL that will allow me to flag a row as similar? Note: There can be one-to-many rows like shown below (i.e. 12345, 12345RED, etc.)
Update: The IDs are "similar" in that there is typically a leading numerical value, followed by either no separator, a space " ", a hyphen "-", or a forward slash "/", then alpha characters: ####[a-zA-Z], #### [a-zA-Z], ####-[a-zA-Z], or ####/[a-zA-Z]. (I'm not sure how to indicate one-to-many numeric characters.)
ID
Similar
12345
Yes
12345RED (Could also be 12345-RED, 12345/RED, or 12345 RED)
Yes
12345BLU (Could also be 12345-BLU, 12345/BLU, or 12345 BLU)
Yes
12345GRN (Could also be 12345-GRN, 12345/GRN, or 12345 GRN)
Yes
12345BLK (Could also be 12345-BLK, 12345/BLK, or 12345 BLK)
Yes
123456
No
123457
No
A:
Assuming "similar" means "have the same leading numerals"...
First, extract the numerals, such as with a regular expression. Then, count how many other rows have the same leading numerals, using a window function.
WITH
extract_numerals AS
(
SELECT
*,
REGEXP_EXTRACT(id, r'^\d+') AS leading_numerals
FROM
your_table
)
SELECT
*,
COUNT(*) OVER (PARTITION BY leading_numerals) - 1 AS similar_rows
FROM
extract_numerals
ORDER BY
leading_numerals
Any row where the count is zero (after having deducted one from the window function) has no "similar" rows.
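To sanity-check the grouping logic outside SQL, the same idea (extract the leading digits, then count group sizes) can be sketched in Python. This is a rough equivalent of the query above using the sample IDs from the question, not BigQuery itself:

```python
import re
from collections import Counter

ids = ["12345", "12345RED", "12345-BLU", "12345/GRN", "12345 BLK",
       "123456", "123457"]

# Extract the leading run of digits, like REGEXP_EXTRACT(id, r'^\d+').
def leading_digits(s):
    m = re.match(r"\d+", s)
    return m.group(0) if m else None

# Count how many IDs share each leading-digit prefix, like the
# COUNT(*) OVER (PARTITION BY leading_numerals) window function.
counts = Counter(leading_digits(i) for i in ids)

# An ID is "similar" when at least one OTHER row shares its prefix.
flags = {i: counts[leading_digits(i)] > 1 for i in ids}
print(flags["12345RED"], flags["123456"])
# -> True False
```

As in the SQL version, a group size of 1 (count minus one equals zero) means the row has no similar rows.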
|
Match similar column values in different rows
|
I have a table with an ID column (String) and I need to be able to find IDs that are similar between different rows. What is the SQL that will allow me to flag a row as similar? Note: There can be one-to-many rows like shown below (i.e. 12345, 12345RED, etc.)
Update: The IDs are "similar" in that there is typically a leading numerical value, followed by either no separator, a space " ", a hyphen "-", or a forward slash "/", then alpha characters: ####[a-zA-Z], #### [a-zA-Z], ####-[a-zA-Z], or ####/[a-zA-Z]. (I'm not sure how to indicate one-to-many numeric characters.)
ID
Similar
12345
Yes
12345RED (Could also be 12345-RED, 12345/RED, or 12345 RED)
Yes
12345BLU (Could also be 12345-BLU, 12345/BLU, or 12345 BLU)
Yes
12345GRN (Could also be 12345-GRN, 12345/GRN, or 12345 GRN)
Yes
12345BLK (Could also be 12345-BLK, 12345/BLK, or 12345 BLK)
Yes
123456
No
123457
No
|
[
"Assuming \"similar\" means \"have the same leading numerals\"...\nFirst, extract the numerals, such as with a regular expression. Then, count how many other ros have the same leading numerals, using a window function.\nWITH\n extract_numerals AS\n(\n SELECT\n *,\n REGEXP_EXTRACT(id, r'^\\d+') AS leading_numerals\n FROM\n your_table\n)\nSELECT\n *,\n COUNT(*) OVER (PARTITION BY leading_numerals) - 1 AS similar_rows\nFROM\n extract_numerals\nORDER BY\n leading_numerals\n\nAny row where the count is zero (after having deducted one from the window function) has no \"similar\" rows.\n"
] |
[
0
] |
[] |
[] |
[
"google_bigquery",
"sql"
] |
stackoverflow_0074663194_google_bigquery_sql.txt
|
Q:
Data is not displayed on the first try firestore db flutter
I'm trying to learn some Flutter programming and I'm facing a problem: when I run the program I don't get the information on the screen the first time. The data is retrieved from Firestore, but I have to reload the page several times before it shows on the screen. What might be happening?
I am using FutureBuilder
pet_page.dart
import 'package:flutter/material.dart';
import 'package:font_awesome_flutter/font_awesome_flutter.dart';
import 'package:histovet/src/controller/pet_controller.dart';
import 'package:histovet/src/models/pet_model.dart';
import 'package:histovet/src/pages/widgets/menu_lateral.dart';
import 'package:histovet/src/pages/pet/add_pets.dart';
import 'package:histovet/src/pages/pet/pet_update.dart';
// Clases encargadas de la vista donde se enlistan todas las ventas que existan en la
// base de datos
class PetsPage extends StatefulWidget {
static String id = "pets_page";
const PetsPage({Key? key}) : super(key: key);
@override
State<PetsPage> createState() => _PetsPageState();
}
class _PetsPageState extends State<PetsPage> {
TextStyle txtStyle = const TextStyle(
fontWeight: FontWeight.w900, fontSize: 30, color: Colors.black);
PetController petCont = PetController();
bool answer = false;
@override
Widget build(BuildContext context) {
return SafeArea(
child: Scaffold(
appBar: AppBar(
centerTitle: true,
title: const Text("Mascotas"),
actions: [
IconButton(
onPressed: () {
setState(() {});
},
icon: const Icon(Icons.refresh))
],
),
//drawer: const MenuLateral(),
floatingActionButton: FloatingActionButton.extended(
icon: const Icon(FontAwesomeIcons.plus),
label: const Text('Registrar nueva mascota'),
elevation: 15.0,
backgroundColor: Colors.blue,
onPressed: () {
Navigator.pushNamed(context, AddPet.id);
}),
body: FutureBuilder(
future: petCont.allPets(),
builder: (BuildContext context, AsyncSnapshot<List> snapshot) {
if (snapshot.hasError) {
print("error");
return const Text('Error');
} else if (snapshot.hasData) {
List species = snapshot.data ??[];
print(species);
return Padding(
padding: const EdgeInsets.all(8.0),
child: ListView(
children: [
for (Pet specie in species)
Card(
margin: const EdgeInsets.all(6),
elevation: 6,
child: Container(
decoration: const BoxDecoration(
image: DecorationImage(
image: AssetImage('assets/img/fondo.jpg'),
fit: BoxFit.cover,
opacity: 0.3),
),
child: ListTile(
onLongPress: () {
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => UpdatePet(
specie.id.toString(),
specie.owner.toString())));
},
leading: const Icon(
FontAwesomeIcons.paw,
color: Colors.black,
),
title: Text(
specie.name,
style: txtStyle,
),
subtitle: Text(
specie.specie,
style: txtStyle.copyWith(fontSize: 17),
),
trailing: IconButton(
icon: const Icon(Icons.delete,
color: Colors.black),
onPressed: () {
messageDelete(specie.id.toString());
Navigator.pushNamed(context, '/pets')
.then((_) => setState(() {}));
},
))),
)
],
),
);
} else {
return const Text('Empty data');
}
})),
);
}
// Le indica al usuario si se pudo o no eliminar el registro
void messageDelete(String idPet) async {
answer = await petCont.deletePet(idPet);
if (answer) {
Navigator.pushNamed(context, '/pets').then((_) => setState(() {}));
ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
content: Text("Se eliminó el registro de la mascota"),
backgroundColor: Colors.green,
));
} else {
ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
content: Text("No se pudo eliminar el registro de la mascota"),
backgroundColor: Colors.green,
));
}
}
}
pet_controller.dart
Future<List<Pet>> allPets() async {
try{
_pets = await _service.getPetsBD();
return _pets;
} catch (e) {
return _pets;
}
}
pet_service.dart
Future<List<Pet>> getPetsBD() async {
List<Pet> mascotas = [];
final FirebaseAuth auth = FirebaseAuth.instance;
final User? user = auth.currentUser;
final uid = user?.uid;
try {
final collection = FirebaseFirestore.instance
.collection('pets')
.where('owner', isEqualTo: uid);
collection.snapshots().listen((querySnapshot) {
for (var doc in querySnapshot.docs) {
Map<String, dynamic> data = doc.data();
Pet newPet = Pet(
data["id"],
data["owner"],
data["birthday"].toString(),
data["name"],
data["neutering"],
data["age"],
data["breed"],
data["specie"],
data["color"],
data["gender"],
data["clinic"]);
mascotas.add(newPet);
}
});
return mascotas;
} catch (e) {
return mascotas;
}
}
A:
Please try;
List species = snapshot.data ??[];
to
List species = snapshot.data() ??[];
A:
It looks like the FutureBuilder widget is not being rebuilt when new data is available. This is likely because the Future returned by petCont.allPets() is not changing.
You can fix this by providing a new Future to the FutureBuilder each time the widget is rebuilt. For example, you could use a StatefulWidget and store the Future in the widget's state. Then, in the build method, you can return a new FutureBuilder with the updated Future each time the widget is rebuilt. This will cause the FutureBuilder to rebuild and display the updated data.
Here is an example of how this could be implemented:
class _PetsPageState extends State<PetsPage> {
PetController petCont = PetController();
late Future<List<Pet>> _future; // 'late' is needed under null safety; initialized in initState
@override
void initState() {
super.initState();
_future = petCont.allPets();
}
@override
Widget build(BuildContext context) {
return SafeArea(
child: Scaffold(
appBar: AppBar(
centerTitle: true,
title: const Text("Mascotas"),
actions: [
IconButton(
onPressed: () {
setState(() {
_future = petCont.allPets();
});
},
icon: const Icon(Icons.refresh))
],
),
body: FutureBuilder(
future: _future,
builder: (BuildContext context, AsyncSnapshot<List> snapshot) {
if (snapshot.hasError) {
print("error");
return const Text('Error');
} else if (snapshot.hasData) {
List species = snapshot.data ?? [];
print(species);
return Padding(
padding: const EdgeInsets.all(8.0),
child: ListView(
children: [
for (Pet specie in species)
Card(
margin: const EdgeInsets.all(6),
elevation: 6,
child: Container(
decoration: const BoxDecoration(
image: DecorationImage(
image: AssetImage('assets/img/fondo.jpg'),
fit: BoxFit.cover,
opacity: 0.3),
),
child: ListTile(
// ...
),
),
),
],
),
);
}
return const CircularProgressIndicator();
},
),
floatingActionButton: FloatingActionButton.extended(
icon: const Icon(FontAwesomeIcons.plus),
label: const Text('Registrar nueva mascota'),
elevation: 15.0,
backgroundColor: Colors.blue,
onPressed: () {
Navigator.pushNamed(context, AddPet.id);
}),
),
);
}
}
In the code above, the Future returned by petCont.allPets() is stored in the StatefulWidget's state, and a new FutureBuilder is returned in the build method.
I hope that's useful!
Q:
How can I prevent Eclipse from stepping into Java library code
How can I prevent Eclipse from stepping into Java library code when using Step Into?
What I am used to in other IDEs (like IntelliJ) is that with Step Into you enter the methods defined by yourself or third party libraries but not the methods of the Java framework itself.
Eclipse does that and it really slows down debugging especially if you have calls to your own methods and ones defined in the Java framework in one line. You have to constantly switch between Step Over, Step Into or Step Return if you already stepped in.
A:
You can configure the Eclipse Java debugger not to step into those bits of code by configuring a ‘Step Filter’.
Go to Window -> Preferences -> Java -> Debug -> Step Filtering.
Check ‘Use Step Filters’.
Check the appropriate options on the screen. You can add entries that are relevant to your own codebase.
Click ‘Apply’.
You can read more about the Eclipse Step Filter here
You can even create a filter for your own project's package or a Java class.
Another good link
A:
For people who want to know the same setting in IntelliJ IDEA, please see below.
The reason IntelliJ IDEA does not step into Java-specific code is that, by default, it is enabled with the restriction below. To add any other classes, we can simply add them here. I added the org.testng.* classes.
File -> Settings -> Build, Execution and Deployment -> Debugger -> Stepping
Q:
Detect a space in a string
I am writing a lexer for a programming language.
I have managed to detect all characters but spaces (not general whitespaces, just spaces).
I've tried just using letter == " " and letter === " ".
I've also looked at some SO questions that recommend using regex, but that didn't work either.
Relevant code snippet (this is part of a bigger Lexer class):
generateOutput(input) {
const wordList = input.split(' ');
const keyword = wordList[0];
const identifier = wordList[1];
const output = [
"[KEYWORD] " + keyword.toUpperCase(),
"[IDENTIFIER] " + identifier
]
for (let i = 2; i < wordList.length; i++) {
const word = wordList[i];
const words = word.split("");
words.forEach((letter) => {
if (upperCase.includes(letter)) {
output.push("[KEY] U_" + letter)
} else if (lowerCase.includes(letter)) {
output.push("[KEY] L_" + letter)
} else if (symbols.includes(letter)) {
output.push("[SYMBOL] " + letter)
} else if (letter === " ") {
output.push("[KEY] SPACE")
}
})
}
return output
}
A:
You used input.split(' '), like I said in my comment. That deleted all of the spaces.
var a = "this is a sentence"
console.log( a.split( " " ) );
//["this","is","a","sentence"] <- contains absolutely no spaces
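A minimal sketch of the fix (using a hypothetical tokenizeSpaces helper, not the asker's full Lexer class): iterate over the raw input character by character instead of splitting on spaces, so the letter === " " branch can actually fire.

```javascript
// Hypothetical helper (not from the original Lexer class): scanning the
// unsplit string keeps space characters, so they can be tokenized too.
function tokenizeSpaces(input) {
  const output = [];
  for (const letter of input) {
    if (letter === " ") {
      output.push("[KEY] SPACE"); // reachable now: nothing was split away
    } else {
      output.push("[CHAR] " + letter); // stand-in for the real letter/symbol branches
    }
  }
  return output;
}

console.log(tokenizeSpaces("a b"));
// res contains "[KEY] SPACE" at index 1
```

With input.split(' '), the spaces are consumed by the split itself, so the comparison never sees one; iterating the unsplit string avoids that.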
Q:
How to loop through a get curl output and put it into a JSON file then a .CSV file?
I am running a little bash script that passes in some credentials, then loops over the specified pages and places that information into a .json file called output. I want to then convert that .json file into a .csv file so I can read it a bit more clearly. Is there any way I can do this correctly?
Using any site to convert a .json file to a .csv gives me an error, because the output contains multiple concatenated JSON objects, since the GET request iterates over multiple pages.
TOKEN="value123"
curl -k >> output.json \
--header "Auth-detials: $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://someapi.com/api/details?page%5Bnumber%5D=[1-10]
A:
You can fetch JSON with curl and save it as a CSV file using jq.
Auth and tokens depend on the REST API service, and I can't test that without the specific URL, so I focused on converting the JSON data to CSV.
source URL
https://jsonplaceholder.typicode.com/users
JSON data
[
{
"id": 1,
"name": "Leanne Graham",
"username": "Bret",
"email": "[email protected]",
"address": {
"street": "Kulas Light",
"suite": "Apt. 556",
"city": "Gwenborough",
"zipcode": "92998-3874",
"geo": {
"lat": "-37.3159",
"lng": "81.1496"
}
},
"phone": "1-770-736-8031 x56442",
"website": "hildegard.org",
"company": {
"name": "Romaguera-Crona",
"catchPhrase": "Multi-layered client-server neural-net",
"bs": "harness real-time e-markets"
}
},
{
"id": 2,
"name": "Ervin Howell",
"username": "Antonette",
"email": "[email protected]",
"address": {
"street": "Victor Plains",
"suite": "Suite 879",
"city": "Wisokyburgh",
"zipcode": "90566-7771",
"geo": {
"lat": "-43.9509",
"lng": "-34.4618"
}
},
"phone": "010-692-6593 x09125",
"website": "anastasia.net",
"company": {
"name": "Deckow-Crist",
"catchPhrase": "Proactive didactic contingency",
"bs": "synergize scalable supply-chains"
}
}
...
Target CSV (some fields removed, but you can add them back)
1,Leanne Graham,Bret,Gwenborough,92998-3874,hildegard.org
2,Ervin Howell,Antonette,Wisokyburgh,90566-7771,anastasia.net
3,Clementine Bauch,Samantha,McKenziehaven,59590-4157,ramiro.info
4,Patricia Lebsack,Karianne,South Elvis,53919-4257,kale.biz
5,Chelsey Dietrich,Kamren,Roscoeview,33263,demarco.info
6,Mrs. Dennis Schulist,Leopoldo_Corkery,South Christy,23505-1337,ola.org
7,Kurtis Weissnat,Elwyn.Skiles,Howemouth,58804-1099,elvis.io
8,Nicholas Runolfsdottir V,Maxime_Nienow,Aliyaview,45169,jacynthe.com
9,Glenna Reichert,Delphine,Bartholomebury,76495-3109,conrad.com
10,Clementina DuBuque,Moriah.Stanton,Lebsackbury,31428-2261,ambrose.net
Use this command in the terminal:
curl https://jsonplaceholder.typicode.com/users | jq -r '.[] | { id: .id, name: .name, username: .username, city: .address.city, zipcode: .address.zipcode, website: .website } | join(",")'
If you append the following, it saves to a CSV file:
> data.csv
curl https://jsonplaceholder.typicode.com/users | jq -r '.[] | { id: .id, name: .name, username: .username, city: .address.city, zipcode: .address.zipcode, website: .website } | join(",")' > data.csv
cat data.csv
Key steps
#1 Remove [ and ]
curl https://jsonplaceholder.typicode.com/users | jq '.[]'
#2 Filter and change data
curl https://jsonplaceholder.typicode.com/users | jq -r '.[] | { id: .id, name: .name }'
result
{
"id": 1,
"name": "Leanne Graham"
}
{
"id": 2,
"name": "Ervin Howell"
}
{
"id": 3,
"name": "Clementine Bauch"
}
...
#3 Convert to CSV
curl https://jsonplaceholder.typicode.com/users | jq -r '.[] | { id: .id, name: .name, username: .username } | join(",")'
result
1,Leanne Graham,Bret
2,Ervin Howell,Antonette
3,Clementine Bauch,Samantha
...
Reference
How to convert JSON to CSV using Linux / Unix shell
Q:
Laravel Valet 502 Bad Gateway nginx/1.15.7
I am getting a 502 Bad Gateway on my Laravel projects running Laravel valet.
I have tried many of the solutions online and with no success. i.e. https://gist.github.com/adamwathan/6ea40e90a804ea2b3f9f24146d86ad7f
At the moment I see a 502 Bad Gateway, and when running valet install I get an error when it reaches the 'Updating PHP configuration' step:
Warning: file_get_contents(/usr/local/etc/php/7.3/php-fpm.d/www.conf): failed to open stream: No such file or directory in /Users/username/.composer/vendor/laravel/valet/cli/Valet/Filesystem.php on line 112
Warning: file_get_contents(/usr/local/etc/php/7.3/php-fpm.d/www.conf): failed to open stream: No such file or directory in /Users/username/.composer/vendor/laravel/valet/cli/Valet/Filesystem.php on line 125
Has anybody had similar issues?
Thanks
A:
If, like me, you're seeing a 502 Bad Gateway while using Laravel Valet after updating it to the latest version (composer global update), you most probably forgot to run the valet install command. Laravel Valet requires (in most cases) running the valet install command after updating to the latest version.
A:
in most cases running valet install will solve the issue.
A:
Had the same symptoms after updating to php 7.3 and then installing a new Laravel project.
It appears that brew install php73 doesn't install php-fpm
Solution is to uninstall php
brew uninstall php73
brew uninstall php72
brew uninstall php71 ... whatever versions you have
brew uninstall --force php
Now reinstall php
brew install php --build-from-source
I encountered permission errors (mkdir: /usr/local/etc/php/7.3/php-fpm.d: Permission denied), so sudo chown -R <yourusercode> /usr/local/etc/php fixed that; then run brew install php --build-from-source again. Once it builds php 7.3 successfully, reinstall valet:
valet install
A:
None of the above answers worked for me, but found the solution here: https://janostlund.com/2019-06-20/502-bad-gateway-laravel-valet
~/.config/valet/Log/nginx-error.log shows:
[error] 17423#0: *1 upstream sent too big header while reading response header from upstream [...]
Solved by adding two lines to http in /usr/local/etc/nginx/nginx.conf
http {
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
//...
}
and then running valet restart
A:
I solved this by doing:
php -v
PHP 8.0.1 (cli) (built: Jan 8 2021 09:07:02) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.1, Copyright (c) Zend Technologies
with Zend OPcache v8.0.1, Copyright (c), by Zend Technologies
followed by:
valet use [email protected] --force
Unlinking current version: php
Linking new version: [email protected]
Updating PHP configuration...
Restarting php...
Restarting nginx...
Valet is now using [email protected].
Valet seemed to be confused over which PHP it was using.
A:
I ran into the same problem with Laravel 8. Both Valet and Expose seemed to work, but the webpage always gave a 502 response.
The solution I found when I updated composer and tried to reinstall Valet was that Valet didn't know which version of php to use.
To fix this, use the following command to tell valet which version of php to use.
valet use [email protected]
A:
Try this
brew services start php
If it didn’t work, try to reinstall php from source
brew uninstall php
brew install php --build-from-source
valet install
Source: laravel/valet github issues
A:
I had the same problem. I solved it by upgrading mariadb. brew upgrade mariadb
A:
Following the config above, but put it in the file.
~/.valet/Nginx/all.conf
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
This caught all the sites (I'm using Valet plus).
A:
Well, normally "valet install" solves the issue, but for me it was different.
In my case, I was using valet isolate for a different project with a different PHP version, so I had to do the binding process again.
I did,
valet install
valet isolate [email protected] (Here you have to use the selected version)
this solves the issue for me.
A:
In my case I had installed a different version of PHP. I just ran
valet install
and it worked fine.
Q:
How can I solve this issue? Error (Xcode): Framework not found Flutter
How can I solve this issue? Error (Xcode): Framework not found Flutter
I tried several methods:
tried deleting ios then pod install...
tried flutter clean...
tried creating a new project, etc.
but still cannot solve it.
Launching lib/main.dart on iPhone 13 Pro Max in debug mode...
/Users/pin-chientseng/Desktop/yomate/ios/Runner/Info.plist: Property List error: Found non-key inside <dict> at line 56 / JSON error:
JSON text did not start with array or object and option to allow fragments not set. around line 1, column 0.
Xcode build done. 221.1s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
Writing result bundle at path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.hhxYin/flutter_ios_build_temp_dirsANWae/temporary_xcresult_bundle
ld: framework not found Flutter
clang: error: linker command failed with exit code 1 (use -v to see invocation)
note: Using new build system
note: Planning
note: Build preparation complete
note: Building targets in dependency order
warning: Stale file '/Users/pin-chientseng/Desktop/yomate/build/ios/Debug-iphonesimulator/Runner.app/GoogleService-Info.plist' is
located outside of the allowed root paths.
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/InputFileList-5F0225AF943341352A9BA345-Pods-Runner-resources-Debug-input-files-37e3c74e
61b246db180ac6f1b6f5519a-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/InputFileList-678E497CE5823DAA4909D0F3-Pods-Runner-frameworks-Debug-input-files-6f17fb4
132a6c963427e5fd8c0f46475-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/OutputFileList-5F0225AF943341352A9BA345-Pods-Runner-resources-Debug-output-files-2b94b0
84fd7edee03f689887bc427bd3-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/OutputFileList-678E497CE5823DAA4909D0F3-Pods-Runner-frameworks-Debug-output-files-3dbe4
531b144e8e556eea6741f7e46e6-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/Script-5F0225AF943341352A9BA345.sh'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/Script-678E497CE5823DAA4909D0F3.sh'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/Script-69D48C77EE5D169DAA62588C.sh'
/Users/pin-chientseng/Desktop/yomate/ios/Pods/Pods.xcodeproj: warning: The iOS Simulator deployment target
'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is 9.0 to 15.2.99. (in target
'FMDB' from project 'Pods')
/Users/pin-chientseng/Desktop/yomate/ios/Pods/Pods.xcodeproj: warning: The iOS Simulator deployment target
'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is 9.0 to 15.2.99. (in target
'leveldb-library' from project 'Pods')
Result bundle written to path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.hhxYin/flutter_ios_build_temp_dirsANWae/temporary_xcresult_bundle
Error (Xcode): Framework not found Flutter
Could not build the application for the simulator.
Error launching application on iPhone 13 Pro Max.
If I use this method,
post_install do |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
if Gem::Version.new('8.0') > Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'])
config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '8.0'
end
end
end
end
I get this error...
Launching lib/main.dart on iPhone 13 Pro Max in debug mode...
Running pod install... 38.2s
Running Xcode build...
Xcode build done. 28.7s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
Writing result bundle at path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.gk0YEl/flutter_ios_build_temp_dirWL743Q/temporary_xcresult_bundle
/Users/pin-chientseng/Development/flutter/.pub-cache/hosted/pub.dartlang.org/video_player_avfoundation-2.3.4/ios/Classes/messages
.g.m:7:9: fatal error: 'Flutter/Flutter.h' file not found
#import <Flutter/Flutter.h>
^~~~~~~~~~~~~~~~~~~
1 error generated.
note: Using new build system
note: Planning
note: Build preparation complete
note: Building targets in dependency order
Result bundle written to path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.gk0YEl/flutter_ios_build_temp_dirWL743Q/temporary_xcresult_bundle
Lexical or Preprocessor Issue (Xcode): 'Flutter/Flutter.h' file not found
/Users/pin-chientseng/Development/flutter/.pub-cache/hosted/pub.dartlang.org/video_player_avfoundation-2.3.4/ios/Classes/messages.g.m
:6:8
Could not build the application for the simulator.
Error launching application on iPhone 13 Pro Max.
A:
I have tried many solutions for this error. The only possible solution is to delete the flutter folder in the FLUTTER PATH, and re-copy the flutter folder.
This solved my error.
A:
In my case, I got this error after deleting the flutter instances while trying to clean up some storage on my 258go macbook to update Xcode :(, in doing so I probably deleted the flutter.framework.
The solutions given here have not changed anything for me.
So I tried to clean up everything in the pod, including the cache. The pod install command failed because it missed the ios tools, which can be downloaded with flutter precache --ios.
So the complete process to solve this problem for me was :
cd ios
pod cache clean --all
rm -rf ~/Library/Caches/CocoaPods
rm -rf Pods
rm -rf ~/Library/Developer/Xcode/DerivedData/*
pod deintegrate
flutter precache --ios
pod install
I hope that this solution will help someone and prevent them from losing a whole day's work as it did for me.
A:
1- flutter channel beta
2- flutter upgrade
3- flutter run
4- flutter channel stable
5- flutter upgrade
6- flutter run
|
How can I solve this issue? Error (Xcode): Framework not found Flutter
|
How can I solve this issue? Error (Xcode): Framework not found Flutter
I tried several methods:
deleting ios then running pod install...
running flutter clean...
creating a new project, etc.
but still cannot solve it.
Launching lib/main.dart on iPhone 13 Pro Max in debug mode...
/Users/pin-chientseng/Desktop/yomate/ios/Runner/Info.plist: Property List error: Found non-key inside <dict> at line 56 / JSON error:
JSON text did not start with array or object and option to allow fragments not set. around line 1, column 0.
Xcode build done. 221.1s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
Writing result bundle at path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.hhxYin/flutter_ios_build_temp_dirsANWae/temporary_xcresult_bundle
ld: framework not found Flutter
clang: error: linker command failed with exit code 1 (use -v to see invocation)
note: Using new build system
note: Planning
note: Build preparation complete
note: Building targets in dependency order
warning: Stale file '/Users/pin-chientseng/Desktop/yomate/build/ios/Debug-iphonesimulator/Runner.app/GoogleService-Info.plist' is
located outside of the allowed root paths.
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/InputFileList-5F0225AF943341352A9BA345-Pods-Runner-resources-Debug-input-files-37e3c74e
61b246db180ac6f1b6f5519a-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/InputFileList-678E497CE5823DAA4909D0F3-Pods-Runner-frameworks-Debug-input-files-6f17fb4
132a6c963427e5fd8c0f46475-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/OutputFileList-5F0225AF943341352A9BA345-Pods-Runner-resources-Debug-output-files-2b94b0
84fd7edee03f689887bc427bd3-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/OutputFileList-678E497CE5823DAA4909D0F3-Pods-Runner-frameworks-Debug-output-files-3dbe4
531b144e8e556eea6741f7e46e6-resolved.xcfilelist'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/Script-5F0225AF943341352A9BA345.sh'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/Script-678E497CE5823DAA4909D0F3.sh'
note: Removed stale file
'/Users/pin-chientseng/Library/Developer/Xcode/DerivedData/Runner-bfllcubjuppngacurzilkdnplylp/Build/Intermediates.noindex/Runner
.build/Debug-iphonesimulator/Runner.build/Script-69D48C77EE5D169DAA62588C.sh'
/Users/pin-chientseng/Desktop/yomate/ios/Pods/Pods.xcodeproj: warning: The iOS Simulator deployment target
'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is 9.0 to 15.2.99. (in target
'FMDB' from project 'Pods')
/Users/pin-chientseng/Desktop/yomate/ios/Pods/Pods.xcodeproj: warning: The iOS Simulator deployment target
'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is 9.0 to 15.2.99. (in target
'leveldb-library' from project 'Pods')
Result bundle written to path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.hhxYin/flutter_ios_build_temp_dirsANWae/temporary_xcresult_bundle
Error (Xcode): Framework not found Flutter
Could not build the application for the simulator.
Error launching application on iPhone 13 Pro Max.
If I use this method,
post_install do |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
if Gem::Version.new('8.0') > Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'])
config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '8.0'
end
end
end
end
I get this error...
Launching lib/main.dart on iPhone 13 Pro Max in debug mode...
Running pod install... 38.2s
Running Xcode build...
Xcode build done. 28.7s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
Writing result bundle at path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.gk0YEl/flutter_ios_build_temp_dirWL743Q/temporary_xcresult_bundle
/Users/pin-chientseng/Development/flutter/.pub-cache/hosted/pub.dartlang.org/video_player_avfoundation-2.3.4/ios/Classes/messages
.g.m:7:9: fatal error: 'Flutter/Flutter.h' file not found
#import <Flutter/Flutter.h>
^~~~~~~~~~~~~~~~~~~
1 error generated.
note: Using new build system
note: Planning
note: Build preparation complete
note: Building targets in dependency order
Result bundle written to path:
/var/folders/rg/v6d4v6m545949bhd3pv5555r0000gn/T/flutter_tools.gk0YEl/flutter_ios_build_temp_dirWL743Q/temporary_xcresult_bundle
Lexical or Preprocessor Issue (Xcode): 'Flutter/Flutter.h' file not found
/Users/pin-chientseng/Development/flutter/.pub-cache/hosted/pub.dartlang.org/video_player_avfoundation-2.3.4/ios/Classes/messages.g.m
:6:8
Could not build the application for the simulator.
Error launching application on iPhone 13 Pro Max.
|
[
"I have tried many solutions for this error. The only possible solution is to delete the flutter folder in the FLUTTER PATH, and re-copy the flutter folder.\nThis solved my error.\n",
"In my case, I got this error after deleting the flutter instances while trying to clean up some storage on my 258go macbook to update Xcode :(, in doing so I probably deleted the flutter.framework.\nThe solutions given here have not changed anything for me.\nSo I tried to clean up everything in the pod, including the cache. The pod install command failed because it missed the ios tools, which can be downloaded with flutter precache --ios.\nSo the complete process to solve this problem for me was :\ncd ios\npod cache clean --all\nrm -rf ~/Library/Caches/CocoaPods\nrm -rf Pods\nrm -rf ~/Library/Developer/Xcode/DerivedData/*\npod deintegrate\nflutter precache --ios\npod install\n\nI hope that this solution will help someone and prevent them from losing a whole day's work as it did for me.\n",
"1- flutter channel beta\n2- flutter upgrade\n3- flutter run\n4- flutter channel stable\n5- flutter upgrade\n6- flutter run\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"flutter",
"flutter_ios"
] |
stackoverflow_0072267071_flutter_flutter_ios.txt
|
Q:
Close or Switch Tabs in Playwright/Python
I'm doing an automation; at the time of a download it opens a tab, and sometimes it doesn't close automatically. So how can I close a tab in Playwright using Python?
A:
I managed to write code that closes only a specific tab:
all_pages = page.context.pages
await all_pages[1].close()
A:
You can also use the close method on the Page object that represents the tab you want to close. Here is an example of how you might do this:
# launch a browser and create a new context
browser = await playwright.chromium.launch()
context = await browser.new_context()
# create a new page and go to the URL you want to download from
page = await context.new_page()
await page.goto("https://www.example.com")
# download a file from the page
await page.click("#download-button")
# wait for the download to finish and the new tab to be created
await page.wait_for_selector("#download-complete")
# get a list of pages in the current context
# (in the Python API, context.pages is a property, not a method)
pages = context.pages
# assume the last page in the list is the new tab that was created
# (you may need to adapt this to your specific use case)
new_tab = pages[-1]
# close the new tab
await new_tab.close()
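If more than one tab may be open, hard-coding pages[-1] or all_pages[1] can be fragile. A pure-Python sketch of a more robust pattern (the strings below merely stand in for Playwright Page objects) is to snapshot the page list before the action that opens the tab and close whatever appears afterwards:

```python
# Sketch: identify the newly opened tab by comparing the page list
# before and after the action that opens it. The strings stand in for
# Playwright Page objects; with real Playwright, context.pages is the
# list you would snapshot.
pages_before = ["main"]                 # context.pages before the click
pages_after = ["main", "download-tab"]  # context.pages after the click

# whatever is new is the tab (or tabs) to close
new_tabs = [p for p in pages_after if p not in pages_before]
print(new_tabs)  # ['download-tab']
```

With real Playwright, you would then loop over the new pages and call close() on each.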
|
Close or Switch Tabs in Playwright/Python
|
I'm doing an automation; at the time of a download it opens a tab, and sometimes it doesn't close automatically. So how can I close a tab in Playwright using Python?
|
[
"I managed to make a code that closes only a specific tab!\nall_pages = page.context.pages\nawait all_pages[1].close()\n\n",
"You can also use the close method on the Page object that represents the tab you want to close. Here is an example of how you might do this:\n# launch a browser and create a new context\nbrowser = await playwright[browserType].launch()\ncontext = await browser.newContext()\n\n# create a new page and go to the URL you want to download from\npage = await context.newPage()\nawait page.goto(\"https://www.example.com\")\n\n# download a file from the page\nawait page.click(\"#download-button\")\n\n# wait for the download to finish and the new tab to be created\nawait page.waitForSelector(\"#download-complete\")\n\n# get a list of pages in the current context\npages = await context.pages()\n\n# assume the last page in the list is the new tab that was created\n# (you may need to adapt this to your specific use case)\nnewTab = pages[-1]\n\n# close the new tab\nawait newTab.close()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"browser",
"playwright",
"playwright_python",
"python",
"tabs"
] |
stackoverflow_0073209567_browser_playwright_playwright_python_python_tabs.txt
|
Q:
How to create a working progress bar using Bootstrap and Flask
I have simple textarea form, outputs the result below the form, and I'd like to have a progress bar display upon clicking submit.
I've searched elsewhere to no avail and have no idea where to start.
Can someone guide this poor soul in the right direction? (novice by the way)
A:
First, you will need to create a Flask route that returns the current progress value. This route can be called using an AJAX request from the progress bar element to update the progress bar value dynamically.
from flask import Flask, jsonify
app = Flask(__name__)
@app.route('/progress')
def progress():
# Return the current progress value as JSON
return jsonify({'progress': current_progress})
Next, you can create the progress bar element using the progress class from Bootstrap. You can use the data-value attribute to specify the initial progress value, and the data-url attribute to specify the URL of the Flask route that returns the current progress value. Here is an example of a progress bar element using Bootstrap:
<div class="progress" data-value="0" data-url="/progress">
<div class="progress-bar" role="progressbar" style="width: 0%;" aria-valuenow="0"
aria-valuemin="0" aria-valuemax="100"></div>
</div>
Finally, you can use JavaScript to update the progress bar value dynamically by making an AJAX request to the Flask route that returns the current progress value. You can use the setInterval function to make the AJAX request at regular intervals, and update the data-value and style attributes of the progress bar element with the new progress value
// Set the initial progress value
var progress = $('.progress').data('value');
// Set the URL of the Flask route that returns the current progress value
var url = $('.progress').data('url');
// Update the progress bar value every 1000 milliseconds (1 second)
setInterval(function() {
// Make an AJAX request to the Flask route
$.get(url, function(data) {
// Update the progress bar value
progress = data.progress;
$('.progress-bar').attr('data-value', progress);
$('.progress-bar').css('width', progress + '%');
});
}, 1000);
Not sure what you want to show with the progress bar but I hope that helps you :)
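One gap in the sketch above is that current_progress is never defined. A minimal, hedged way to maintain it (names and step counts here are illustrative, not from the question) is to run the slow work in a background thread that updates a shared value the /progress route can read:

```python
import threading
import time

# Hypothetical sketch of how `current_progress` could be maintained:
# a background worker updates a shared counter that the /progress
# route reads. The lock guards the shared value.
current_progress = 0
_lock = threading.Lock()

def long_running_job(steps: int = 10, delay: float = 0.0) -> None:
    """Simulate slow work, updating the shared progress value (0-100)."""
    global current_progress
    for i in range(1, steps + 1):
        time.sleep(delay)  # stand-in for real work
        with _lock:
            current_progress = int(i / steps * 100)

worker = threading.Thread(target=long_running_job)
worker.start()
worker.join()  # only so this example finishes deterministically
print(current_progress)  # 100 once the job completes
```

In a real app you would start the thread from the submit handler and leave it running while the browser polls /progress, rather than joining it immediately.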
|
How to create a working progress bar using Bootstrap and Flask
|
I have simple textarea form, outputs the result below the form, and I'd like to have a progress bar display upon clicking submit.
I've searched elsewhere to no avail and have no idea where to start.
Can someone guide this poor soul in the right direction? (novice by the way)
|
[
"First, you will need to create a Flask route that returns the current progress value. This route can be called using an AJAX request from the progress bar element to update the progress bar value dynamically.\nfrom flask import Flask, jsonify\n\napp = Flask(__name__)\n\[email protected]('/progress')\ndef progress():\n # Return the current progress value as JSON\n return jsonify({'progress': current_progress})\n\nNext, you can create the progress bar element using the progress class from Bootstrap. You can use the data-value attribute to specify the initial progress value, and the data-url attribute to specify the URL of the Flask route that returns the current progress value. Here is an example of a progress bar element using Bootstrap:\n<div class=\"progress\" data-value=\"0\" data-url=\"/progress\">\n <div class=\"progress-bar\" role=\"progressbar\" style=\"width: 0%;\" aria-valuenow=\"0\" \n aria-valuemin=\"0\" aria-valuemax=\"100\"></div>\n</div>\n\nFinally, you can use JavaScript to update the progress bar value dynamically by making an AJAX request to the Flask route that returns the current progress value. You can use the setInterval function to make the AJAX request at regular intervals, and update the data-value and style attributes of the progress bar element with the new progress value\n// Set the initial progress value\nvar progress = $('.progress').data('value');\n\n// Set the URL of the Flask route that returns the current progress value\nvar url = $('.progress').data('url');\n\n// Update the progress bar value every 1000 milliseconds (1 second)\nsetInterval(function() {\n // Make an AJAX request to the Flask route\n $.get(url, function(data) {\n // Update the progress bar value\n progress = data.progress;\n $('.progress-bar').attr('data-value', progress);\n $('.progress-bar').css('width', progress + '%');\n });\n}, 1000);\n\nNot sure what you want to show with the progress bar but I hope that helps you :)\n"
] |
[
0
] |
[] |
[] |
[
"bootstrap_5",
"python"
] |
stackoverflow_0074662867_bootstrap_5_python.txt
|
Q:
Is this the idiomatic way to draw/redraw widget components?
new to Svelte and unsure if this is the proper way to make material widgets appear/hide/redraw (in this case I'm using Carbon by IBM). Essentially I use a boolean variable for every widget that I want to control and use it to dictate what should render. Is there a better way?
<script>
import { Button } from 'carbon-components-svelte';
let loading_finished: boolean;
async function initAnalysis() {
// Do stuff for a while..
loading_finished = true;
}
let analysis_finished: boolean;
async function runAnalysis() {
// Do stuff for a while..
analysis_finished = true;
}
</script>
<Button on:click={initAnalysis}>Load Inputs</Button>
{#if loading_finished}
<Button on:click={runAnalysis}>Run Analysis</Button>
{/if}
{#if analysis_finished}
Analysis is finished
{/if}
A:
Depends on the logic.
If all the flags are exclusive, you can use a state variable to replace multiple booleans instead. So something like:
{#if state == 'init'}
...
{:else if state == 'processing'}
...
{:else if state == 'finished'}
...
{/if}
Or instead of using {#if} you could toggle visibility using a class (which means the element always exists but is hidden), though in this case you would have to deactivate the default scoping because the buttons are components rather than regular DOM elements.
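A minimal sketch of that class-based alternative (the hidden class and the wrapper div are illustrative; wrapping the component in a plain element keeps the scoped style applicable without :global):

```svelte
<!-- Sketch: the element always exists but is hidden via a class.
     Wrapping the component in a plain div means the scoped style
     applies to the div, not to the component's internals. -->
<div class:hidden={!loading_finished}>
  <Button on:click={runAnalysis}>Run Analysis</Button>
</div>

<style>
  .hidden {
    display: none;
  }
</style>
```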
|
Is this the idiomatic way to draw/redraw widget components?
|
new to Svelte and unsure if this is the proper way to make material widgets appear/hide/redraw (in this case I'm using Carbon by IBM). Essentially I use a boolean variable for every widget that I want to control and use it to dictate what should render. Is there a better way?
<script>
import { Button } from 'carbon-components-svelte';
let loading_finished: boolean;
async function initAnalysis() {
// Do stuff for a while..
loading_finished = true;
}
let analysis_finished: boolean;
async function runAnalysis() {
// Do stuff for a while..
analysis_finished = true;
}
</script>
<Button on:click={initAnalysis}>Load Inputs</Button>
{#if loading_finished}
<Button on:click={runAnalysis}>Run Analysis</Button>
{/if}
{#if analysis_finished}
Analysis is finished
{/if}
|
[
"Depends on the logic.\nIf all the flags are exclusive, you can use a state variable to replace multiple booleans instead. So something like:\n{#if state == 'init'}\n ...\n{:else if state == 'processing'}\n ...\n{:else if state == 'finished'}\n ...\n{/if}\n\nOr instead of using {#if} you could toggle visibility using a class (which means the element always exists but is hidden), though in this case you would have to deactivate the default scoping because the buttons are components rather than regular DOM elements.\n"
] |
[
1
] |
[] |
[] |
[
"svelte",
"sveltekit"
] |
stackoverflow_0074662386_svelte_sveltekit.txt
|
Q:
Flutter: how to change the text color of words that start with @
I have text in a string in which I need to change the color of the words that start with @
String = "I am tagging @John at the post of @Elvin"
So I need to change the text color of John and Elvin in the text.
I tried to search but did not find any solution for this.
A:
you can use RichText
first, we need to split the text into an Iterable so we can color each String depending on whether it contains a '@'
final text = "I am tagging @John at the post of @Elvin";
Iterable<String> splitText() {
return text.split('@').skip(1).map((e) {
final i = e.indexOf(' ');
if (i == -1) return ['@$e'];
return [
'@${e.substring(0, i)}',
e.substring(i),
];
}).expand((e) => e);
}
The above function returns (@John, at the post of , @Elvin), we can use this in the RichText Widget
Note that we skipped "I am tagging " (text.split('@').skip(1)) as it will be added in the TextSpan text argument ( text.split('@').first )
RichText(
text: TextSpan(
text: text.split('@').first,
children: splitText().map(
(e) => TextSpan(
text: e,
style: TextStyle(
color: e.contains('@') ? Colors.purple: null,
),
)).toList(),
),
),
|
Flutter: how to change the text color of words that start with @
|
I have text in a string in which I need to change the color of the words that start with @
String = "I am tagging @John at the post of @Elvin"
So I need to change the text color of John and Elvin in the text.
I tried to search but did not find any solution for this.
|
[
"you can use RichText\nfirst, we need to split the text into an Iterable so we can color each String depending on whether it contains a '@'\nfinal text = \"I am tagging @John at the post of @Elvin\";\nIterable<String> splitText() {\n return text.split('@').skip(1).map((e) {\n final i = e.indexOf(' ');\n if (i == -1) return ['@$e'];\n return [\n '@${e.substring(0, i)}',\n e.substring(i),\n ];\n }).expand((e) => e);\n}\n\nThe above function returns (@John, at the post of , @Elvin), we can use this in the RichText Widget\nNote that we skipped \"I am tagging \" (text.split('@').skip(1)) as it will be added in the TextSpan text argument ( text.split('@').first )\nRichText(\n text: TextSpan(\n text: text.split('@').first,\n children: splitText().map(\n (e) => TextSpan(\n text: e,\n style: TextStyle(\n color: e.contains('@') ? Colors.purple: null,\n ),\n )).toList(),\n ),\n),\n\n"
] |
[
0
] |
[] |
[] |
[
"dart",
"flutter"
] |
stackoverflow_0074662048_dart_flutter.txt
|
Q:
Please aid me on this simple SQL Question. Cannot figure it out
How would you update the query to only include the user's name if the user was created after the visit? The user table includes a field created_on that indicates when the user record was created.
SELECT v.visit_id, v.visit_datetime, v.page_url, v.user_id, u.name
FROM web_visits AS v LEFT JOIN user AS u ON v.user_id = u.id
A:
Just add it to the ON condition if you still want to see the visits, but not the users (if created afterwards)
SELECT v.visit_id, v.visit_datetime, v.page_url, v.user_id, u.name
FROM web_visits AS v
LEFT JOIN user AS u
ON v.user_id = u.id
and u.created_on<=v.visit_datetime
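The effect of moving the condition into the ON clause can be seen with a quick sqlite3 sketch (the sample rows are invented for illustration; table and column names follow the question). The visit row is always kept, but the name is NULL for a user created after the visit:

```python
import sqlite3

# Build an in-memory database with invented sample data: one user
# created before their visit, one created after.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE user (id INTEGER, name TEXT, created_on TEXT);
    CREATE TABLE web_visits (visit_id INTEGER, visit_datetime TEXT, user_id INTEGER);
    INSERT INTO user VALUES (1, 'Ada', '2020-01-01'), (2, 'Bob', '2021-06-01');
    INSERT INTO web_visits VALUES (10, '2021-01-01', 1), (11, '2021-01-01', 2);
""")

# The extra ON condition nulls out the name without dropping the visit.
rows = con.execute("""
    SELECT v.visit_id, u.name
    FROM web_visits AS v
    LEFT JOIN user AS u
      ON v.user_id = u.id
     AND u.created_on <= v.visit_datetime
    ORDER BY v.visit_id
""").fetchall()
print(rows)  # [(10, 'Ada'), (11, None)] - Bob was created after his visit
```

Had the condition been placed in a WHERE clause instead, visit 11 would have disappeared entirely.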
|
Please aid me on this simple SQL Question. Cannot figure it out
|
How would you update the query to only include the user's name if the user was created after the visit? The user table includes a field created_on that indicates when the user record was created.
SELECT v.visit_id, v.visit_datetime, v.page_url, v.user_id, u.name
FROM web_visits AS v LEFT JOIN user AS u ON v.user_id = u.id
|
[
"Just add it to the ON condition if you still want to see the visits, but not the users (if created afterwards)\nSELECT v.visit_id, v.visit_datetime, v.page_url, v.user_id, u.name \nFROM web_visits AS v \n \n LEFT JOIN user AS u \n ON v.user_id = u.id\n and u.created_on<=v.visit_datetime\n\n"
] |
[
0
] |
[] |
[] |
[
"analysis",
"sql",
"syntax"
] |
stackoverflow_0074663369_analysis_sql_syntax.txt
|
Q:
Select multiple variables while filtering and grouping by
The following SPARQL query works fine (I have simplified object and property names):
What I want is: Getting celebs (URI/name/count of children) born after 1980 that have more than 5 children.
SELECT ?celeb
WHERE {
?celeb a "Musician";
hasChildren ?children;
wasborn ?bdate;
hasName ?Celebname.
FILTER(?bdate > "1980-01-01T00:00:00Z"^^xsd:dateTime").
}
GROUP BY ?celeb
HAVING (count(?children) > 4)
My problem is that as long as I put another variable on my select clause, like
SELECT ?celeb ?Celebname (COUNT(*) as ?childrenCount)
I am getting the following error :
Variable ?Celebname is used in the result set outside aggregate and not mentioned in GROUP BY clause.
I even tried :
SELECT ?celeb ?celebName (COUNT(?children) as ?count)
WHERE {
?celeb a "Musician";
hasChildren ?children;
wasborn ?bdate;
hasName ?Celebname.
FILTER(?bdate > "1980-01-01T00:00:00Z"^^xsd:dateTime").
FILTER(?count > 8).
}
but it returns the same error.
What am I missing? Should I rethink my query structure?
Many thanks for your help!
A:
In your SPARQL query, you are trying to use the variable ?Celebname in the SELECT clause, but you are not including it in the GROUP BY clause. As a result, you are getting the error "Variable ?Celebname is used in the result set outside aggregate and not mentioned in GROUP BY clause".
To fix this, you need to include the variable ?Celebname in the GROUP BY clause, like this:
SELECT ?celeb ?Celebname (COUNT(*) as ?childrenCount)
WHERE {
?celeb a "Musician";
hasChildren ?children;
wasborn ?bdate;
hasName ?Celebname.
  FILTER(?bdate > "1980-01-01T00:00:00Z"^^xsd:dateTime).
}
GROUP BY ?celeb ?Celebname
HAVING (count(?children) > 4)
This will group the results by the ?celeb and ?Celebname variables, and it will allow you to use the ?Celebname variable in the SELECT clause.
Alternatively, you can keep grouping by ?celeb only and wrap ?Celebname in an aggregate such as SAMPLE, which picks one value per group (a plain AS alias would not be enough, since every projected variable must still be either grouped or aggregated):
SELECT ?celeb (SAMPLE(?Celebname) AS ?name) (COUNT(*) as ?childrenCount)
WHERE {
 ?celeb a "Musician";
 hasChildren ?children;
 wasborn ?bdate;
 hasName ?Celebname.
  FILTER(?bdate > "1980-01-01T00:00:00Z"^^xsd:dateTime).
}
GROUP BY ?celeb
HAVING (count(?children) > 4)
This will allow you to project the name in the SELECT clause without having to include it in the GROUP BY clause.
|
Select multiple variables while filtering and grouping by
|
The following SPARQL query works fine (I have simplified object and property names):
What I want is: Getting celebs (URI/name/count of children) born after 1980 that have more than 5 children.
SELECT ?celeb
WHERE {
?celeb a "Musician";
hasChildren ?children;
wasborn ?bdate;
hasName ?Celebname.
FILTER(?bdate > "1980-01-01T00:00:00Z"^^xsd:dateTime").
}
GROUP BY ?celeb
HAVING (count(?children) > 4)
My problem is that as long as I put another variable on my select clause, like
SELECT ?celeb ?Celebname (COUNT(*) as ?childrenCount)
I am getting the following error :
Variable ?Celebname is used in the result set outside aggregate and not mentioned in GROUP BY clause.
I even tried :
SELECT ?celeb ?celebName (COUNT(?children) as ?count)
WHERE {
?celeb a "Musician";
hasChildren ?children;
wasborn ?bdate;
hasName ?Celebname.
FILTER(?bdate > "1980-01-01T00:00:00Z"^^xsd:dateTime").
FILTER(?count > 8).
}
but it returns the same error.
What am I missing? Should I rethink my query structure?
Many thanks for your help!
|
[
"In your SPARQL query, you are trying to use the variable ?Celebname in the SELECT clause, but you are not including it in the GROUP BY clause. As a result, you are getting the error \"Variable ?Celebname is used in the result set outside aggregate and not mentioned in GROUP BY clause\".\nTo fix this, you need to include the variable ?Celebname in the GROUP BY clause, like this:\nSELECT ?celeb ?Celebname (COUNT(*) as ?childrenCount)\nWHERE {\n ?celeb a \"Musician\";\n hasChildren ?children;\n wasborn ?bdate;\n hasName ?Celebname. \n FILTER(?bdate > \"1980-01-01T00:00:00Z\"^^xsd:dateTime\").\n}\nGROUP BY ?celeb ?Celebname\nHAVING (count(?children) > 4)\n\nThis will group the results by the ?celeb and ?Celebname variables, and it will allow you to use the ?Celebname variable in the SELECT clause.\nAlternatively, you can use the AS keyword to give the ?Celebname variable a different name in the SELECT clause, like this:\nSELECT ?celeb (?Celebname AS ?name) (COUNT(*) as ?childrenCount)\nWHERE {\n ?celeb a \"Musician\";\n hasChildren ?children;\n wasborn ?bdate;\n hasName ?Celebname. \n FILTER(?bdate > \"1980-01-01T00:00:00Z\"^^xsd:dateTime\").\n}\nGROUP BY ?celeb\nHAVING (count(?children) > 4)\n\nThis will allow you to use the ?Celebname variable in the SELECT clause without having to include it in the GROUP BY clause.\n"
] |
[
0
] |
[] |
[] |
[
"count",
"group_by",
"sparql",
"wikidata"
] |
stackoverflow_0074663052_count_group_by_sparql_wikidata.txt
|
Q:
Eliminate "e_sqlite3.dll" during single file compilation
In my attempts to compile a single-file binary that leverages Microsoft.Data.Sqlite, I am consistently left with two files that are both required for the application to work.
{ProjectName}.exe
e_sqlite3.dll
Is it possible to include the e_sqlite3.dll into the exe?
It appears that System.Data.Sqlite exhibits the same behaviour, but instead a file called SQLite.Interop.dll.
Sample code
Note: I realize there is no actual interop with SQLite happening, this code is purely meant to demonstrate the compilation.
ProjectName.fsproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net7.0</TargetFramework>
<PublishSingleFile>true</PublishSingleFile>
<SelfContained>true</SelfContained>
<RuntimeIdentifier>win-x64</RuntimeIdentifier>
<PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Data.Sqlite" version="7.*" />
</ItemGroup>
<ItemGroup>
<Compile Include="Program.fs" />
</ItemGroup>
</Project>
Program.fs
module ProjectName.Program
open System
[<EntryPoint>]
let main (argv : string[]) =
printfn "Hello world"
0
Compiling the project as follows:
dotnet publish .\ProjectName.fsproj -c Release
A:
Solution
It turns out that it's rather easy to do this in net6, net7 and (presumably) beyond, by setting IncludeNativeLibrariesForSelfExtract to true.
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<SelfContained>true</SelfContained>
<PublishSingleFile>true</PublishSingleFile>
<PublishReadyToRun>true</PublishReadyToRun>
<PublishTrimmed>true</PublishTrimmed>
<!-- Must include this line -->
<IncludeNativeLibrariesForSelfExtract>true</IncludeNativeLibrariesForSelfExtract>
<DebuggerSupport>false</DebuggerSupport>
<EnableUnsafeUTF7Encoding>false</EnableUnsafeUTF7Encoding>
<HttpActivityPropagationSupport>false</HttpActivityPropagationSupport>
<InvariantGlobalization>true</InvariantGlobalization>
<UseNativeHttpHandler>true</UseNativeHttpHandler>
<UseSystemResourceKeys>true</UseSystemResourceKeys>
<EnableCompressionInSingleFile>true</EnableCompressionInSingleFile>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Data.Sqlite" version="7.*" />
</ItemGroup>
<ItemGroup>
<Compile Include="Program.fs" />
</ItemGroup>
</Project>
A:
Unfortunately, it does not appear to be possible to include the e_sqlite3.dll into the exe file. The only way to have a single-file executable is to use the .NET Native toolchain, which can be enabled by setting the property to true.
Q:
bootstrap, navbar and dropdown content positioning
The problem is the drop-down behavior (here's the codepen to the problematic html, the original layout was taken from here).
<header class="pb-3">
<nav class="navbar navbar-expand-md navbar-light bg-white border-bottom">
<div class="d-flex w-100 navbar-collapse">
<ul class="navbar-nav me-auto mb-2 mb-md-0">
<li>
<div style="width: 100px; height: 100px">
<a href="#">
<img src="../static/store_logo.png" alt="logo" />
</a>
</div>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle d-none d-md-block fw500" href="#" id="navbarDropdown" role="button" data-bs-toggle="dropdown" aria-expanded="false">
Product categories
</a>
<ul class="dropdown-menu rounded-0 border-0">
<a href="#">
<li>Foo</li>
</a>
<a href="#">
<li>Bar</li>
</a>
<a href="#">
<li>Baz</li>
</a>
</ul>
</li>
</ul>
</div>
</nav>
</header>
In the original source the problem is hidden by the fact that a logo image/placeholder inside the nav > .navbar ul is tiny. When I used a larger image, it becomes manifest (I used a div with fixed 100 by 100 px to mimic it in the pen).
The problem is the behavior of a drop-down in the same nav. The list of items in .dropdown-menu (ul) doesn't attach to the bottom of the .dropdown-toggle element - instead it's attached to the bottom of the .navbar-nav ul. Which, naturally, is affected by the size of the li containing the image/placeholder.
Being a CSS noob, I don't know what the real cause of the problem is. Is it a conflict
between some of the Bootstrap classes? I even tried adding a z-index to the .dropdown-menu ul, to no avail. The basic dropdown example in the Bootstrap docs doesn't seem to require any additional tweaks - but there are no navs/navbars involved there (not to mention a dozen other parent elements).
Thank you all for your time!
A:
Please add this custom CSS and check again.
body .navbar-expand-md .navbar-nav .dropdown-menu {
top: 30px !important;
}
A:
A less hacky solution would be:
.navbar-expand-md .navbar-nav .dropdown-menu {
position: static;
}
====== UPDATE ============
Well, this solution has an issue of its own - the entire content of the .navbar shifts because of the 'static' dropdown menu; the accepted answer doesn't have that problem. CSS dark magic =)
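One more direction worth sketching (this is not from the original answers, and the exact selectors may need adjusting depending on the Bootstrap version): an absolutely-positioned .dropdown-menu anchors to its nearest positioned ancestor, so making the dropdown's own li that ancestor ties the menu to the toggle instead of the full-height .navbar-nav.

```css
/* hypothetical sketch: make the dropdown's <li> the positioning context
   so the menu no longer anchors to the tall .navbar-nav <ul> */
.navbar-nav .nav-item.dropdown {
  position: relative;
}
.navbar-nav .dropdown-menu {
  position: absolute;
  top: 100%; /* directly under the toggle link */
  left: 0;
}
```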
Q:
Typescript opposite / reverse operation of "typeof"
Angular can Query subComponents by Types, which is used in Testing like this:
fixture.debugElement.query( By.directive( ComponentType ) );
Now I wanted to make a function that does this for me:
import { ComponentFixture } from '@angular/core/testing';
import { By } from '@angular/platform-browser';
import { Type } from '@angular/core';
export function findSubComponent<T>( fixture: ComponentFixture<any>, componentType: T & Type<any> ): T {
const subComponentDebugElement = fixture.debugElement.query( By.directive( componentType ) );
return subComponentDebugElement.componentInstance;
}
Now here comes the problem. My function currently returns typeof ComponentType instead of an actual object of ComponentType, and therefore I cannot access its properties.
The type of subComponentDebugElement.componentInstance here is any, so I can just declare the type in the return type argument (function (): T).
How can I turn T, which stands for typeof ComponentInstance in this case, into ComponentInstance?
A:
InstanceType<T>
As suggested by @jcalz the solution to this was to use InstanceType<T> like this:
type AbstractClassType = abstract new ( ...args: any ) => any;
export function querySubComponent<T extends AbstractClassType>(...): InstanceType<T> {
...
}
use of the AbstractClassType as abstract new ( ...args: any ) => any
Please note that the AbstractClassType might not be needed with your existing type definition, but apparently the generic InstanceType<> needs to use a type with a constructor; otherwise I get the following TS error: Type 'T' does not satisfy the constraint 'abstract new (...args: any) => any'.
A:
Basically the same answer, but more compact
function querySubComponent<T extends abstract new (...args: any) => any>(fixture: ComponentFixture<any>, componentType: T & Type<any>) {
const subComponentDebugElement = fixture.debugElement.query( By.directive( componentType ) );
return subComponentDebugElement.componentInstance as InstanceType<T>;
}
Q:
How would I compare two JSON files and get the specific objects which differ using C#?
I got two JSON files containing mostly identical data, but there might be objects that are only present in one of the files. How would I go about identifying these specific objects?
Example:
JSON 1:
[
{
"SourceLocation": "England",
"DestinationLocation": "Spain",
"DeliveryDate": "9/12"
},
{
"SourceLocation": "England",
"DestinationLocation": "Germany",
"DeliveryDate": "9/12"
}
]
JSON 2:
[
{
"SourceLocation": "England",
"DestinationLocation": "Spain",
"DeliveryDate": "9/12"
},
{
"SourceLocation": "England",
"DestinationLocation": "Germany",
"DeliveryDate": "9/12"
},
{
"SourceLocation": "England",
"DestinationLocation": "Netherlands",
"DeliveryDate": "12/12"
}
]
Desired result:
[
{
"SourceLocation": "England",
"DestinationLocation": "Netherlands",
"DeliveryDate": "12/12"
}
]
A:
You could convert the string representation of Json to JArrays (I see you have list of items) and compare each item. For example (inline comments included for code),
public IEnumerable<String> CompareJsonArrays(string expected,string actual)
{
// Parse string to JArrays
JArray firstArray = JArray.Parse(expected);
JArray secondArray = JArray.Parse(actual);
// retrieve all children of JArray
var firstTokens = firstArray.Children();
var secondTokens = secondArray.Children();
// Compare the two set of collections using Custom comparer
var results = firstTokens.Except(secondTokens, new JTokenComparer())
.Concat(secondTokens.Except(firstTokens,new JTokenComparer()));
// Convert to Json String Representation
return results.Select(x=>x.ToString());
}
Where JTokenComparer is defined as
public class JTokenComparer : IEqualityComparer<JToken>
{
public int GetHashCode(JToken co)
{
return 1;
}
public bool Equals(JToken x1, JToken x2)
{
if (object.ReferenceEquals(x1, x2))
{
return true;
}
if (object.ReferenceEquals(x1, null) ||
object.ReferenceEquals(x2, null))
{
return false;
}
// Compares token and all child tokens.
return JToken.DeepEquals(x1,x2);
}
}
Now you could compare the two json strings as following
CompareJsonArrays(json1,json2);
The Custom Comparer uses JToken.DeepEquals() for retrieving difference between tokens.
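For comparison, the same symmetric-difference idea can be sketched in Python (a hypothetical illustration of the set logic, not part of the C# answer; the function name is made up):

```python
import json

def diff_json_arrays(first_text, second_text):
    """Return objects present in exactly one of the two JSON arrays."""
    first = json.loads(first_text)
    second = json.loads(second_text)
    # dicts are unhashable, so compare canonical string forms instead
    first_keys = {json.dumps(obj, sort_keys=True) for obj in first}
    second_keys = {json.dumps(obj, sort_keys=True) for obj in second}
    # symmetric difference: in one set or the other, but not both
    return [json.loads(key) for key in first_keys ^ second_keys]
```

Serializing each object with sort_keys=True makes two objects with the same keys and values compare equal regardless of key order, which plays the same role as JToken.DeepEquals in the C# version.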
Q:
Saving datasets created in a forloop to multiple files
I have URLs (for web scraping) and municipality name stored in this list:
muni = [("https://openbilanci.it/armonizzati/bilanci/filettino-comune-fr/entrate/dettaglio?year=2021&type=preventivo", "filettino"), ("https://openbilanci.it/armonizzati/bilanci/partanna-comune-tp/entrate/dettaglio?year=2021&type=preventivo","partanna"), ("https://openbilanci.it/armonizzati/bilanci/fragneto-labate-comune-bn/entrate/dettaglio?year=2021&type=preventivo", "fragneto-labate") ]
I am trying to create different datasets for different municipalities. For example, data scraped from the first URL would be: filettinodak.csv.
I am using the following code right now:
import re
import json
import requests
import pandas as pd
import os
import random
os.chdir(r"/Users/aartimalik/Dropbox/data")
muni = [("https://openbilanci.it/armonizzati/bilanci/filettino-comune-fr/entrate/dettaglio?year=2021&type=preventivo", "filettino"),
("https://openbilanci.it/armonizzati/bilanci/partanna-comune-tp/entrate/dettaglio?year=2021&type=preventivo","partanna"),
("https://openbilanci.it/armonizzati/bilanci/fragneto-labate-comune-bn/entrate/dettaglio?year=2021&type=preventivo", "fragneto-labate")
]
for m in muni[1]:
URL = m
r = requests.get(URL)
p = re.compile("var bilancio_tree = (.*?);")
data = p.search(r.text).group(1)
data = json.loads(data)
all_data = []
for d in data:
for v in d["values"]:
for kk, vv in v.items():
all_data.append([d["label"], "-", kk, vv.get("abs"), vv.get("pc")])
for c in d["children"]:
for v in c["values"]:
for kk, vv in v.items():
all_data.append(
[d["label"], c["label"], kk, vv.get("abs"), vv.get("pc")]
)
df = pd.DataFrame(all_data, columns=["label 1", "label 2", "year", "abs", "pc"])
df.to_csv(muni[2]+"dak.csv", index=False)
The error I am getting is: Traceback (most recent call last): File "<stdin>", line 19, in <module> TypeError: can only concatenate tuple (not "str") to tuple.
I think I am doing something wrong with the muni indexing: muni[i]. Any suggestions? Thank you so much!
A:
If you adjust your for loop a bit, it should solve your problem. The below change loops through all list entries in muni. Each time, it extracts the first value from each tuple into URL and the second tuple value into label.
for URL, label in muni:
And with that change, the final line in your code can become:
df.to_csv(label+"dak.csv", index=False)
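The unpacking pattern can be sketched without any network access (the URLs below are placeholders, not the real openbilanci.it pages):

```python
# each entry is a (url, municipality_name) tuple, as in the question
muni = [
    ("https://example.invalid/filettino", "filettino"),
    ("https://example.invalid/partanna", "partanna"),
]

filenames = []
for URL, label in muni:  # unpacks each tuple into two variables
    # ... requests.get(URL), scraping, and DataFrame building would go here ...
    filenames.append(label + "dak.csv")

print(filenames)  # ['filettinodak.csv', 'partannadak.csv']
```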
Q:
Getting value of input element in Playwright
How do I return the value of elem so that I can verify that it is in fact 1?
const elem = await page.$('input#my-input')
await elem.fill('1')
A:
inputValue method has been added in Playwright v1.13.0
await page.inputValue('input#my-input');
Locator:
await page.locator('input#my-input').inputValue();
It returns input.value for the selected <input> or <textarea> element. Throws for non-input elements. Read more.
A:
The easiest way is to use $eval. Here you see a small example:
const playwright = require("playwright");
(async () => {
const browser = await playwright.chromium.launch();
const context = await browser.newContext();
const page = await context.newPage();
await page.setContent(`<input id="foo"/>`);
await page.type("#foo", "New value")
console.log(await page.$eval("#foo", el => el.value))
await page.screenshot({ path: `example.png` });
await browser.close();
})();
A:
From version 1.19 (and probably lower versions) ElementHandle is not recommended.
Instead of it, use Locator:
page.locator(selector).inputValue()
In your case, with an assertion it would be:
expect(await page.locator("input#my-input").inputValue()).toBe("1")
Read more on:
https://playwright.dev/docs/api/class-elementhandle#element-handle-fill
A:
You can use the input_value() method of the ElementHandle object that represents the element. Here is an example of how you might do this:
# get a reference to the input element
elem = await page.query_selector('input#my-input')
# fill the input with the value "1"
await elem.fill('1')
# get the value of the input
input_value = await elem.input_value()
# verify that the value is "1"
assert input_value == "1"
Q:
Why does my code not run correct with arr10 but work fine with arr9?
To get the minimum value in an array, I made the method minValue
public int minValue() {
int smallestVal = 0; //field
if (intArray.length == 0) { //if array is empty, return 0
return 0;
}
int a = intArray[0]; //field
for (int i : intArray) {
if (i > a) {
smallestVal = a;
}
else {
a = i;
}
}
return smallestVal; //returns the smallest value
}
Tested it in a main method with arr9 = { 1, 2, -1, 40, 1, 40, 0, 0, -3, 2, 2, -2, -5, 0, 1, -4, -5 }
and arr10 = { 4, 5, 5, 4, 1, 5, -3, 4, -1, -2, -2, -2, -2, -2, -2, 1, 4, 5, -5 }
For arr9, it returns -5 but for arr10 it returns -3 instead of -5.
Is there something I need to change in my code?
A:
The reason your code doesn't work with the second array is that smallestVal is only assigned when a later element is greater than the running minimum; when the smallest value sits in the last location of the array, a is updated but smallestVal never catches up.
There is no need for your variable a here. Replace it with this:
int smallestVal = intArray[0]; //field
for (int i : intArray){
if (i < smallestVal){
smallestVal = i;
}
}
A:
if (i > a) {
smallestVal = a;
}
else {
a = i;
}
This sets a to the value of i if i is smaller than or equal to a, and sets smallestVal to the value of a if i is greater than a. In the case that i is never greater than a after a got updated, smallestVal will not be updated to the smallest value. I'd suggest getting rid of smallestVal and just returning a, as a should always contain the smallest value. That being said, a should probably be named smallestVal as it's a more descriptive name.
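The failure mode is easy to reproduce outside Java. Here is a small Python sketch mirroring the question's (buggy) update rule next to the straightforward fix:

```python
def buggy_min(values):
    # mirrors the Java logic from the question
    smallest_val = 0
    if not values:
        return 0
    a = values[0]
    for i in values:
        if i > a:
            smallest_val = a  # only updated when a *larger* value follows
        else:
            a = i
    return smallest_val

def fixed_min(values):
    # keep the running minimum directly, no second variable needed
    smallest_val = values[0]
    for i in values:
        if i < smallest_val:
            smallest_val = i
    return smallest_val

arr10 = [4, 5, 5, 4, 1, 5, -3, 4, -1, -2, -2, -2, -2, -2, -2, 1, 4, 5, -5]
print(buggy_min(arr10), fixed_min(arr10))  # -3 -5
```

Because the trailing -5 in arr10 is never followed by a larger element, the buggy version returns the stale -3.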
Q:
discord.js modal giving error something went wrong
Whenever I submit the modal, it gives me the error "Something went wrong. Try again."
This is my code:
const { Events, EmbedBuilder, AttachmentBuilder, ModalBuilder, TextInputBuilder, TextInputStyle, ActionRowBuilder, ButtonBuilder, ButtonStyle, InteractionType} = require('discord.js');
const { Verification } = require('../models/verificationSchema')
const { Captcha } = require('captcha-canvas')
module.exports = {
name: Events.InteractionCreate,
async execute(interaction, client) {
if (interaction.isButton()){
if (interaction.customId === 'verify') {
await interaction.deferReply({ephemeral: true});
const member = await interaction.guild.members.cache.get(interaction.member.user.id) || await interaction.guild.members.fetch(interaction.member.user.id).catch(err => {});
const captcha = new Captcha();
captcha.async = true;
captcha.addDecoy();
captcha.drawTrace();
captcha.drawCaptcha();
const captchaAnswer = captcha.text;
const captchaImage = new AttachmentBuilder()
.setFile(await captcha.png)
.setName('captcha.png')
const captchaEmbed = new EmbedBuilder()
.setTitle('Verification Captcha')
.setColor('Yellow')
.setImage('attachment://captcha.png')
.setDescription(`Please enter the captcha text`)
const captchaRow = new ActionRowBuilder()
.addComponents([
new ButtonBuilder()
.setLabel('Answer')
.setCustomId('answer')
.setStyle(ButtonStyle.Success)
])
await interaction.editReply({embeds: [captchaEmbed], files: [captchaImage], components: [captchaRow]});
}
}
if (interaction.customId === 'answer') {
const modal = new ModalBuilder()
.setCustomId('verificationModal')
.setTitle('Verification Input')
.addComponents([
new ActionRowBuilder().addComponents([
new TextInputBuilder()
.setCustomId('captchaInput')
.setLabel("Enter the Captcha.")
.setStyle(TextInputStyle.Short),
])
]);
interaction.showModal(modal);
}
if (interaction.isModalSubmit()) {
console.log(interaction)
if (interaction.customId === 'verificationModel') {
const response = interaction.fields.getTextInputValue('captchaInput');
console.log(response)
}
}
}
}
I am trying to make a verification command in Discord. I ask the user for the captcha text via the modal, but it gives me an error. I don't know how to fix this error; I just want to get the user's input from the modal whenever it is submitted.
There is no error in the terminal.
Thanks in advance :)
error image
A:
The modal is showing that error because Discord expected a response to the modal interaction. If you don't want to send a message to the user when they submit the modal (e.g. by using interaction.reply()), you can simply defer an update to the interaction using the ModalSubmitInteraction.deferUpdate() method. Example:
if (interaction.isModalSubmit()) {
interaction.deferUpdate()
// all the other stuff you want to do with the modal submission here
}
What should happen here is that when the user clicks the submit button on your modal, Discord sends their submission to your bot, you respond that you simply want to defer the interaction, and then Discord closes the modal for the user without showing an error message.
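As an aside, the submit handler in the question compares interaction.customId against 'verificationModel', but the modal was built with setCustomId('verificationModal'), so the branch that reads the input never runs. A minimal sketch of a matching handler (the fakeInteraction object below is a stand-in invented purely for illustration, not a real discord.js object):

```javascript
// The id checked on submit must match the one passed to ModalBuilder.setCustomId().
const MODAL_ID = 'verificationModal';

function handleModalSubmit(interaction) {
  if (interaction.customId !== MODAL_ID) return null;
  interaction.deferUpdate(); // acknowledge so Discord closes the modal quietly
  return interaction.fields.getTextInputValue('captchaInput');
}

// Stand-in object mimicking a ModalSubmitInteraction, for illustration only.
const fakeInteraction = {
  customId: 'verificationModal',
  deferUpdate: () => {},
  fields: { getTextInputValue: () => 'aB3xZ' },
};

console.log(handleModalSubmit(fakeInteraction)); // → 'aB3xZ'
```

With the mismatched id ('verificationModel') the function returns null and the captcha answer is silently dropped.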
|
discord.js modal giving error something went wrong
[
0
] |
[] |
[] |
[
"discord.js"
] |
stackoverflow_0074656337_discord.js.txt
|
Q:
How to update new line colours in Plotly from a button click and access results?
I want to plot an image, draw freehand over the image, then be able to press a custom button so that freehand drawing is now in a different colour. I cannot figure out how to make the button press change the line colour though.
The code I have tried is here below. I've tried using all four button methods described in the documentation, but none of them have any effect when pressed.
Furthermore, I can't find anywhere in the documentation how to access the lines that have been drawn over the image once finished (other than having to save the image manually using the GUI which I want to avoid).
# Imports
import cv2  # needed for cv2.imread below
import plotly
import plotly.graph_objects as go
import plotly.express as px
# Show image
img = cv2.imread(fpath)
fig = px.imshow(img)
# Enable freehand drawing on mouse drag
fig.update_layout(overwrite=True,
dragmode='drawopenpath',
newshape_line_color='cyan',
modebar_add=['drawopenpath',"eraseshape"])
# Add two buttons, 'r' and 'b' which attempt to update newshape_line_color...
fig.update_layout(
updatemenus=[
dict(
type="buttons",
direction="right",
active=0,
showactive=True,
x=0.57,
y=1.2,
buttons=list([
{
'label':"r",
'method':"relayout",
'args':[{'newshape_line_color':'red'}],
},
dict(label="b",
method="restyle",
args=[{"newshape_line_color": 'blue'}]),
]),
)
])
# Show figure
config = dict({'scrollZoom': True})
fig.show(config = config)
Any help would be greatly appreciated!
A:
If I understand correctly, you want all of the drawn lines to change color when the button is selected.
I've got two solutions for you.
The first doesn't do exactly what you're asking for, but it's entirely in Python. Instead of changing the last line drawn, all subsequent lines are drawn with the selected color.
The second does do what you're looking for but requires JS.
Pythonic...but not quite what you're looking for
Here's the updated updatemenus. You were pretty close, actually. I've created two color buttons: red and green.
fig.update_layout(
updatemenus = list([
dict(type = "buttons",
direction = "right",
active = 0,
showactive = True,
x = 0.57,
y = 1.2,
buttons = list([ # change future colors
dict(label = "Make Me Red",
method = "relayout",
args = [{'newshape.line.color': 'red'}]
),
dict(label = "Make Me Green",
method = "relayout",
args = [{'newshape.line.color': 'green'}]
)
])
)
])
)
When you select the color button, all lines drawn after will be in the color selected.
Here's the entire chunk of code used to make this:
from skimage import io
import plotly.graph_objects as go
import plotly.express as px
# Show image
img = io.imread('https://upload.wikimedia.org/wikipedia/commons/thumb/0/00/Crab_Nebula.jpg/240px-Crab_Nebula.jpg')
fig = px.imshow(img)
# Enable freehand drawing on mouse drag
fig.update_layout(overwrite=True,
dragmode='drawopenpath',
newshape_line_color='cyan',
modebar_add=['drawopenpath',"eraseshape"])
fig.add_shape(dict(editable = True, type = "line",
line = dict(color = "white"),
layer = 'above',
x0 = 0, x1 = 200.0000001,
y0 = 0, y1 = 200.0000001))
# Add two buttons, 'r' and 'b' which attempt to update newshape_line_color...
fig.update_layout(
updatemenus = list([
dict(type = "buttons",
direction = "right",
active = 0,
showactive = True,
x = 0.57,
y = 1.2,
buttons = list([ # change future colors
dict(label = "Make Me Red",
method = "relayout",
args = [{'newshape.line.color': 'red'}]
),
dict(label = "Make Me Green",
method = "relayout",
args = [{'newshape.line.color': 'green'}]
)
])
)
])
)
# Show figure
config = dict({'scrollZoom': True})
fig.show(config = config)
Embedded JS Changing the Drawn Line Color
You'll use everything up to fig.show() then you'll use the following. Yes, it creates an external file, but it will immediately open in your browser, as well.
This piggybacks off of your buttons. When green is clicked now, it will change all of the lines, not just what's drawn next. There are two events here, one for each color.
In the JS, you'll notice two for loops in each event. These serve very different purposes. Because there doesn't seem to be a built-in event to do this for me, the first loop changes the actual attributes of the plot. However, that won't be visible immediately. So the second loop changes what you actually see at that moment.
This requires the Plotly io package. Since you had already called import plotly, you could instead write plotly.io wherever pio appears, without the extra import.
import plotly.io as pio
pio.write_html(fig, file = 'index2.html', auto_open = True,
config = config, include_plotlyjs = 'cdn', include_mathjax = 'cdn',
post_script = "setTimeout(function() {" +
"btns = document.querySelectorAll('g.updatemenu-button');" +
"btns[0].addEventListener('click', function() {" +
"ch = document.getElementById('thisCh');" +
"shapes = ch.layout.shapes; /* update the plot attributes */" +
"for(i = 0; i < shapes.length; i++) {" +
"shapes[i].line.color = 'red';" +
"} /* update the current appearance immediately */" +
"chart = document.querySelectorAll('g.shapelayer')[2];" +
"for(i = 0; i < chart.children.length; i++) {" +
"chart.children[i].style.stroke = 'red';" +
"}" +
"});" +
"btns[1].addEventListener('click', function() {" +
"ch = document.getElementById('thisCh');" +
"shapes = ch.layout.shapes; /* update the plot attributes */" +
"for(i = 0; i < shapes.length; i++) {" +
"shapes[i].line.color = 'green';" +
"} /* update the current appearance immediately */" +
"chart = document.querySelectorAll('g.shapelayer')[2];" +
"for(i = 0; i < chart.children.length; i++) {" +
"chart.children[i].style.stroke = 'green';" +
"}" +
"});" +
"}, 200)", full_html = True, div_id = "thisCh")
If I've misunderstood what you're looking for, or if you have any questions, let me know. And if you go with the second solution but want many colors, I can make it color-dynamic; I didn't do that here since the demonstration only uses two colors.
[
0
] |
[] |
[] |
[
"google_colaboratory",
"interactive",
"plotly",
"python"
] |
stackoverflow_0074659489_google_colaboratory_interactive_plotly_python.txt
|
Q:
.some with e.target.dataset.id returns false but hardcoded value returns true
pokemonContainer.addEventListener('click', function (e) {
let favourite = getStorageItem('pokemon');
let eTargetID = e.target.dataset.id;
let found = favourite.some((el) => el.id === eTargetID);
console.log(found);
console.log(eTargetID);
if (!found) {
favourite.push(pokemonArray[e.target.dataset.id - 1]);
setStorageItem('pokemon', favourite);
}
});
I'm trying to look for an id inside an array of objects. The objects have an id of 1. My e.target.dataset.id is 1, but .some returns false. When I hardcode the value el.id === 1, it returns true. Am I doing something wrong? Why is it false even though e.target.dataset.id is also 1?
A:
Use el.id == eTargetID instead of el.id === eTargetID (two equal signs instead of three), or convert the value first and keep strict equality: el.id === Number(eTargetID).
e.target.dataset.id is a JavaScript string "1": dataset attribute values are always strings, even when they look numeric.
But the id stored on your objects is a JavaScript number 1 (a thing you can do math on, just like 3.141), which is why the hardcoded el.id === 1 returns true.
In JavaScript, there's a loose equality operator ==, which coerces operands of different types before comparing, so 1 == "1" is true.
There is also a strict equality operator ===, under which strings and numbers are considered completely different things, so 1 === "1" is false.
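To see the mismatch concretely, here is a self-contained sketch (the array below mirrors the question's favourite objects):

```javascript
// dataset attribute values arrive as strings
const eTargetID = "1";           // what e.target.dataset.id yields
const favourite = [{ id: 1 }];   // the stored objects hold id as a number

console.log(favourite.some((el) => el.id === eTargetID));         // false: 1 !== "1"
console.log(favourite.some((el) => el.id == eTargetID));          // true: loose coercion
console.log(favourite.some((el) => el.id === Number(eTargetID))); // true: explicit conversion
```

Converting with Number() keeps the safer strict comparison while still matching the string from the dataset.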
[
0
] |
[] |
[] |
[
"css",
"html",
"javascript"
] |
stackoverflow_0074663099_css_html_javascript.txt
|
Q:
Odd edge behavior in `qgraph` after scaling node size with an attribute
[[Reproducible data for this question is found at bottom of question.]]
When plotting a network with qgraph, the edges usually link to nodes in a relatively straightforward way.
library(qgraph)
qgraph(Network)
But as soon as I add a size to my nodes, the edges often overshoot the nodes:
qgraph(Network,
vsize=log(Attributes)*3, # scale nodes
vTrans=150, #Transparency of the nodes
label.scale=F # don't scale labels along with nodes
)
Some node scaling sizes work better than others:
qgraph(Network,vsize=Attributes/5,
vTrans=150,#Transparency of the nodes, must be an integer between 0 and 255, 255 indicating no transparency
label.scale=F)
But it isn't clear why this is the case, or how I can override the edges to meet the node appropriately (either at the boundary of the scaled node or at the centerpoint of the node). Any thoughts welcome.
Data:
Network<-structure(list(V4 = c(0, 0, 0.6, 0.01, 0.06, 0.09, 0.01, 0.01,
0, 0.01, 0.03, 0, 0, 0, 0.12, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V5 = c(0, 0, 0.6,
0.01, 0.06, 0.09, 0.01, 0.01, 0, 0.01, 0.03, 0, 0, 0, 0.13, 0.04,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V6 = c(0, 0, 0, 0.02, 0.12, 0.08, 0, 0.01, 0, 0.01, 0.02,
0, 0, 0, 0.04, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V7 = c(0, 0, 0, 0, 0, 0, 0.01, 0.01,
0.01, 0.03, 0.01, 0.03, 0.05, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0), V8 = c(0,
0, 0, 0, 0, 0, 0.01, 0.01, 0.01, 0.03, 0.01, 0.03, 0.06, 0, 0.03,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0,
0, 0.01, 0, 0, 0), V9 = c(0, 0, 0, 0, 0, 0, 0.01, 0.01, 0.01,
0.03, 0.01, 0.03, 0.01, 0, 0.04, 0.02, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0), V10 = c(0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0.01, 0.04, 0.05, 0, 0.01, 0,
0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0, 0, 0,
0, 0.01, 0, 0, 0), V11 = c(0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0.01,
0.01, 0.03, 0.08, 0, 0, 0, 0.02, 0, 0.02, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0, 0, 0), V12 = c(0, 0, 0,
0, 0, 0, 0, 0, 0, 0.01, 0, 0.07, 0, 0, 0, 0, 0.01, 0, 0.02, 0,
0.02, 0.01, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0.01, 0, 0, 0.01,
0, 0, 0, 0, 0), V13 = c(0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.01,
0.04, 0.05, 0, 0, 0, 0.02, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0), V14 = c(0, 0,
0, 0, 0, 0, 0, 0.01, 0.01, 0.02, 0, 0.01, 0.09, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
), V15 = c(0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0.02, 0, 0, 0.09,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0), V16 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0), V17 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.21, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0), V18 = c(0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0), V19 = c(0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0.01, 0, 0.01,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0), V20 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0.08, 0,
0.01, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0), V21 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V22 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V23 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V24 = c(0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09,
0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V25 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V26 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V27 = c(0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09,
0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V28 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V29 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0.01, 0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V30 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09, 0, 0.01, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V31 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09,
0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V32 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0), V33 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.02, 0.02, 0, 0, 0.01, 0.01, 0.01, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V34 = c(0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.03, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V35 = c(0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.03, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V36 = c(0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.03,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V37 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V38 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V39 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V40 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0.01,
0.01, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
), V41 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0), V42 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01,
0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V43 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V44 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0), V45 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0), V46 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.02, 0.07, 0.02, 0, 0, 0.01, 0, 0.01, 0, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V47 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V48 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0.01, 0.01, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V49 = c(0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0, 0, 0.01,
0.01, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
), V50 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.02, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0), V51 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0.03, 0.01, 0, 0, 0.01, 0.01, 0.01, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V52 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.02, 0.01, 0.03, 0, 0.01, 0.02, 0.02, 0.02,
0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V53 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.09, 0,
0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V54 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0.01, 0.01, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0.01,
0.09, 0, 0.01, 0, 0, 0, 0.02, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0,
0, 0, 0, 0, 0), V55 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0.01, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.08, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0.02, 0, 0, 0, 0), V56 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.05, 0.08, 0, 0.02, 0, 0, 0.01,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V57 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0.02, 0.03, 0, 0.01,
0), V58 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01,
0, 0.08, 0, 0, 0, 0, 0, 0.03, 0.01, 0.01, 0.01, 0.01, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0), V59 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0.03, 0, 0, 0.01, 0.01,
0, 0, 0, 0, 0, 0.02, 0, 0.02, 0, 0, 0), V60 = c(0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0.04, 0, 0, 0, 0), V61 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0), V62 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06, 0.04, 0.01, 0, 0),
V63 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.03, 0.01, 0, 0, 0)), class = "data.frame", row.names = c("4",
"5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15",
"16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26",
"27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37",
"38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48",
"49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59",
"60", "61", "62", "63"))
Attributes<-c(34.93768692, 4.75733614, 13.93967533, 2.833557367, 8.325469971,
8.177970886, 2.928951502, 2.174068213, 7.494392872, 6.128136158,
2.818100929, 1.909636378, 3.748121262, 1e-05, 70.72342682, 22.41350937,
2.115944386, 0.005, 1.84581995, 0.102126002, 15.20289135, 2.613022089,
4.338716984, 0.032485999, 0.059714999, 0.080463, 0.035101, 0.011345,
1, 3.151705027, 0.239722997, 0.137802005, 0.017914001, 0.036782667,
1.388822675, 0.435640007, 3.397774458, 2.329986095, 21.80796051,
0.200000003, 1.358244658, 0.687838018, 2.832928419, 1.016921043,
11.10915184, 2.84529686, 0.925952315, 4.18819809, 3.080216408,
0.276921213, 1.808943033, 3.043907881, 0.426636606, 80, 3.872853518,
7.236839294, 1.322934866, 11.1804142, 3.803627491, 31.66708755
)
A:
The edges aren't necessarily wrong: the log scaling gives many of the nodes negative sizes, since log(x) is negative whenever x < 1. If you clamp those sizes to a minimum of 1, the arrows behave as you expect. For example, vsize = ifelse(log(Attributes) * 3 > 0, log(Attributes) * 3, 1) renders all of the arrows meaningfully.
I'm surprised qgraph didn't throw an error when the node sizes went negative; it's actually convenient that it didn't, since that made it much easier to figure out what was wrong. When you used Attributes/5, none of the sizes ended up negative, which is why that scaling worked better.
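The underlying arithmetic is easy to check outside of R: log(x) is negative for any attribute below 1, which is exactly what the ifelse(...) clamp guards against. A minimal sketch in JavaScript for illustration, using a few of the attribute values from the question:

```javascript
// Illustration of why log-scaling the attributes produces negative node
// sizes: Math.log(x) < 0 whenever x < 1, and several attributes
// (e.g. 0.005, 0.011345, 1e-05) are far below 1.
const attributes = [34.93768692, 0.005, 0.011345, 1e-05, 80];

const logSizes = attributes.map(a => Math.log(a) * 3);

// Clamp to a minimum of 1, mirroring the ifelse(...) fix in R
const clamped = logSizes.map(s => (s > 0 ? s : 1));

console.log(logSizes.some(s => s < 0));  // true: some sizes are negative
console.log(clamped.every(s => s >= 1)); // true: clamping removes them
```

Any positive floor would do; 1 simply keeps the smallest nodes visible.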
|
Odd edge behavior in `qgraph` after scaling node size with an attribute
|
[[Reproducible data for this question is found at bottom of question.]]
When plotting a network with qgraph, the edges usually link to nodes in a relatively straightforward way.
library(qgraph)
qgraph(Network)
But as soon as I add a size to my nodes, the edges often overshoot the nodes:
qgraph(Network,
       vsize = log(Attributes) * 3, # scale nodes
       vTrans = 150, # transparency of the nodes
       label.scale = F # don't scale labels along with nodes
)
Some node scaling sizes work better than others:
qgraph(Network,
       vsize = Attributes / 5,
       vTrans = 150, # transparency of the nodes, must be an integer between 0 and 255, 255 indicating no transparency
       label.scale = F
)
But it isn't clear why this is the case, or how I can override the edges to meet the node appropriately (either at the boundary of the scaled node or at the centerpoint of the node). Any thoughts welcome.
Data:
Network<-structure(list(V4 = c(0, 0, 0.6, 0.01, 0.06, 0.09, 0.01, 0.01,
0, 0.01, 0.03, 0, 0, 0, 0.12, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V5 = c(0, 0, 0.6,
0.01, 0.06, 0.09, 0.01, 0.01, 0, 0.01, 0.03, 0, 0, 0, 0.13, 0.04,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V6 = c(0, 0, 0, 0.02, 0.12, 0.08, 0, 0.01, 0, 0.01, 0.02,
0, 0, 0, 0.04, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V7 = c(0, 0, 0, 0, 0, 0, 0.01, 0.01,
0.01, 0.03, 0.01, 0.03, 0.05, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0), V8 = c(0,
0, 0, 0, 0, 0, 0.01, 0.01, 0.01, 0.03, 0.01, 0.03, 0.06, 0, 0.03,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0,
0, 0.01, 0, 0, 0), V9 = c(0, 0, 0, 0, 0, 0, 0.01, 0.01, 0.01,
0.03, 0.01, 0.03, 0.01, 0, 0.04, 0.02, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0), V10 = c(0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0.01, 0.04, 0.05, 0, 0.01, 0,
0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0, 0, 0,
0, 0.01, 0, 0, 0), V11 = c(0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0.01,
0.01, 0.03, 0.08, 0, 0, 0, 0.02, 0, 0.02, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0, 0, 0), V12 = c(0, 0, 0,
0, 0, 0, 0, 0, 0, 0.01, 0, 0.07, 0, 0, 0, 0, 0.01, 0, 0.02, 0,
0.02, 0.01, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0.01, 0, 0, 0.01,
0, 0, 0, 0, 0), V13 = c(0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.01,
0.04, 0.05, 0, 0, 0, 0.02, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0), V14 = c(0, 0,
0, 0, 0, 0, 0, 0.01, 0.01, 0.02, 0, 0.01, 0.09, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
), V15 = c(0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0.02, 0, 0, 0.09,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0), V16 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0), V17 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.21, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0), V18 = c(0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0), V19 = c(0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0.01, 0, 0.01,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0), V20 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0.08, 0,
0.01, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0), V21 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V22 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V23 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V24 = c(0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09,
0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V25 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V26 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V27 = c(0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09,
0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V28 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0.01, 0.01, 0.09, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V29 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0, 0.01, 0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V30 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09, 0, 0.01, 0,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V31 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0.01, 0.01, 0.09,
0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V32 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0), V33 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.02, 0.02, 0, 0, 0.01, 0.01, 0.01, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V34 = c(0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.03, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V35 = c(0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.03, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V36 = c(0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0.03,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V37 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V38 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V39 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V40 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0.01,
0.01, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
), V41 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0), V42 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01,
0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V43 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0), V44 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0), V45 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0), V46 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.02, 0.07, 0.02, 0, 0, 0.01, 0, 0.01, 0, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V47 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V48 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0.01, 0.01, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V49 = c(0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0.01, 0, 0, 0.01,
0.01, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
), V50 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.02, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0), V51 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.01, 0.03, 0.01, 0, 0, 0.01, 0.01, 0.01, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0), V52 = c(0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.02, 0.01, 0.03, 0, 0.01, 0.02, 0.02, 0.02,
0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V53 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.09, 0,
0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V54 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0.01, 0.01, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0.01,
0.09, 0, 0.01, 0, 0, 0, 0.02, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0,
0, 0, 0, 0, 0), V55 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0.01, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.08, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0,
0.01, 0, 0, 0, 0, 0.02, 0, 0, 0, 0), V56 = c(0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.05, 0.08, 0, 0.02, 0, 0, 0.01,
0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), V57 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.01, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0.01, 0.02, 0.03, 0, 0.01,
0), V58 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01,
0, 0.08, 0, 0, 0, 0, 0, 0.03, 0.01, 0.01, 0.01, 0.01, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0), V59 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0.03, 0, 0, 0.01, 0.01,
0, 0, 0, 0, 0, 0.02, 0, 0.02, 0, 0, 0), V60 = c(0, 0, 0, 0, 0,
0, 0, 0, 0.01, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0.04, 0, 0, 0, 0), V61 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0), V62 = c(0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06, 0.04, 0.01, 0, 0),
V63 = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.03, 0.01, 0, 0, 0)), class = "data.frame", row.names = c("4",
"5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15",
"16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26",
"27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37",
"38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48",
"49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59",
"60", "61", "62", "63"))
Attributes<-c(34.93768692, 4.75733614, 13.93967533, 2.833557367, 8.325469971,
8.177970886, 2.928951502, 2.174068213, 7.494392872, 6.128136158,
2.818100929, 1.909636378, 3.748121262, 1e-05, 70.72342682, 22.41350937,
2.115944386, 0.005, 1.84581995, 0.102126002, 15.20289135, 2.613022089,
4.338716984, 0.032485999, 0.059714999, 0.080463, 0.035101, 0.011345,
1, 3.151705027, 0.239722997, 0.137802005, 0.017914001, 0.036782667,
1.388822675, 0.435640007, 3.397774458, 2.329986095, 21.80796051,
0.200000003, 1.358244658, 0.687838018, 2.832928419, 1.016921043,
11.10915184, 2.84529686, 0.925952315, 4.18819809, 3.080216408,
0.276921213, 1.808943033, 3.043907881, 0.426636606, 80, 3.872853518,
7.236839294, 1.322934866, 11.1804142, 3.803627491, 31.66708755
)
|
[
"The edges aren't necessarily wrong. You've given many of the nodes negative values. if you even set them to 1, the arrows do as you expect. For example, vsize = ifelse(log(Attributes) * 3 > 0, log(Attributes) * 3, 1) will present with all meaningful arrows.\nI'm surprised it didn't cause an error when you made the nodes negative... it's actually really nice that it didn't. It probably made it a lot easier to figure out what was wrong. When you used Attributes/5 you didn't end up with negative values.\n"
] |
[
1
] |
[] |
[] |
[
"igraph",
"r",
"social_networking"
] |
stackoverflow_0074660656_igraph_r_social_networking.txt
|
Q:
How to get the address of a contract deployed by another contract
-----START EDIT-----
I don't know what I was doing wrong before, but the code below was somehow not working for me, and now is working, and it's exactly the same. I don't know how or what I was missing before, but both this minimal example and the real project I am working on is working now. Obviously I changed something, but I can't figure out what. I just know it's working now. Sorry for the confusion and thanks to everyone for helping.
-----END EDIT-----
I am new to Solidity and am using the Factory pattern for deploying a contract from another contract. I am trying to get the contract address of the deployed contract, but I am running into errors.
I already tried the solution in this question, but I'm getting the following error: Return argument type struct StorageFactory.ContractData storage ref is not implicitly convertible to expected type (type of first return variable) address.
Here is my code:
// START EDIT (adding version)
pragma solidity ^0.8.0;
// END EDIT
contract StorageFactory {

    struct ContractData {
        address contractAddress; // I want to save the deployed contract address in a mapping that includes this struct
        bool exists;
    }

    // mapping from address of user who deployed new Storage contract => ContractData struct (which includes the contract address)
    mapping(address => ContractData) public userAddressToStruct;

    function createStorageContract(address _userAddress) public {
        // require that the user has not previously deployed a storage contract
        require(!userAddressToStruct[_userAddress].exists, "Account already exists");

        // TRYING TO GET THE ADDRESS OF THE NEWLY CREATED CONTRACT HERE, BUT GETTING AN ERROR
        address contractAddress = address(new StorageContract(_userAddress));

        // trying to save the contractAddress here but unable to isolate the contract address
        userAddressToStruct[_userAddress].contractAddress = contractAddress;
        userAddressToStruct[_userAddress].exists = true;
    }
}
// arbitrary StorageContract being deployed
contract StorageContract {
    address immutable deployedBy;

    constructor(address _deployedBy) {
        deployedBy = _deployedBy;
    }
}
How can I get this contract address, so I can store it in the ContractData struct? Thanks.
A:
I compiled your contract, deployed it on Remix, and interacted without issue with this setting
pragma solidity >=0.7.0 <0.9.0;
I think you had this in your contract before
userAddressToStruct[_userAddress] = contractAddress;
instead of this
userAddressToStruct[_userAddress].contractAddress = contractAddress;
A:
You can use the following code to get the address of the deployed contract:
// `new` returns a contract instance (not a tuple), which can be cast with address(...)
StorageContract deployed = new StorageContract(_userAddress);
address contractAddress = address(deployed);
userAddressToStruct[_userAddress].contractAddress = contractAddress;
A:
The error you're seeing is because you're trying to assign the return value of the new keyword to a variable of type address, but the new keyword actually returns a contract instance, which is a different type than an address.
To fix this, you'll need to declare a variable of the correct type to store the contract instance returned by the new keyword. You can do this by using the type of the contract you're deploying (in this case, StorageContract) as the type of the variable. Here's an example:
// arbitrary StorageContract being deployed
contract StorageContract {
    address immutable deployedBy;

    constructor(address _deployedBy) {
        deployedBy = _deployedBy;
    }
}

contract StorageFactory {

    struct ContractData {
        address contractAddress; // I want to save the deployed contract address in a mapping that includes this struct
        bool exists;
    }

    // mapping from address of user who deployed new Storage contract => ContractData struct (which includes the contract address)
    mapping(address => ContractData) public userAddressToStruct;

    function createStorageContract(address _userAddress) public {
        // require that the user has not previously deployed a storage contract
        require(!userAddressToStruct[_userAddress].exists, "Account already exists");

        // Declare a variable of type StorageContract to store the contract instance returned by the `new` keyword
        StorageContract storageContract = new StorageContract(_userAddress);

        // Get the contract address by casting the instance with address(...)
        address contractAddress = address(storageContract);

        // Save the contract address to the mapping
        userAddressToStruct[_userAddress].contractAddress = contractAddress;
        userAddressToStruct[_userAddress].exists = true;
    }
}
A:
The problem is that the contract instance returned by the new keyword is not implicitly convertible to the address type. To fix this, create the instance with new and then convert it with the address(...) conversion function, like this:
// Create a new instance of the StorageContract contract
StorageContract storageContract = new StorageContract(_userAddress);
// Convert the storageContract instance to an address type
address contractAddress = address(storageContract);
// Save the contract address in the mapping
userAddressToStruct[_userAddress].contractAddress = contractAddress;
I hope this helps! Let me know if you have any other questions.
-- Oops! --
I'm sorry, there was a mistake above: the new keyword returns a single contract instance, not a tuple, so destructuring such as (address contractAddress, StorageContract storageContract) = new StorageContract(_userAddress); will not compile. The address(...) conversion shown above is the correct approach.
Also note that, within a single Solidity source file, declaration order does not matter: StorageFactory can reference StorageContract even when StorageContract is declared later in the file, so no reordering is needed.
I hope this helps! Let me know if you have any other questions.
|
How to get the address of a contract deployed by another contract
|
-----START EDIT-----
I don't know what I was doing wrong before, but the code below was somehow not working for me, and now is working, and it's exactly the same. I don't know how or what I was missing before, but both this minimal example and the real project I am working on is working now. Obviously I changed something, but I can't figure out what. I just know it's working now. Sorry for the confusion and thanks to everyone for helping.
-----END EDIT-----
I am new to Solidity and am using the Factory pattern for deploying a contract from another contract. I am trying to get the contract address of the deployed contract, but I am running into errors.
I already tried the solution in this question, but I'm getting the following error: Return argument type struct StorageFactory.ContractData storage ref is not implicitly convertible to expected type (type of first return variable) address.
Here is my code:
// START EDIT (adding version)
pragma solidity ^0.8.0;
// END EDIT
contract StorageFactory {
struct ContractData {
address contractAddress; // I want to save the deployed contract address in a mapping that includes this struct
bool exists;
}
// mapping from address of user who deployed new Storage contract => ContractData struct (which includes the contract address)
mapping(address => ContractData) public userAddressToStruct;
function createStorageContract(address _userAddress) public {
// require that the user has not previously deployed a storage contract
require(!userAddressToStruct[_userAddress].exists, "Account already exists");
// TRYING TO GET THE ADDRESS OF THE NEWLY CREATED CONTRACT HERE, BUT GETTING AN ERROR
address contractAddress = address(new StorageContract(_userAddress));
// trying to save the contractAddress here but unable to isolate the contract address
userAddressToStruct[_userAddress].contractAddress = contractAddress;
userAddressToStruct[_userAddress].exists = true;
}
}
// arbitrary StorageContract being deployed
contract StorageContract {
address immutable deployedBy;
constructor(address _deployedBy) {
deployedBy = _deployedBy;
}
}
How can I get this contract address, so I can store it in the ContractData struct? Thanks.
|
[
"I compiled your contract, deployed it on Remix, and interacted without issue with this setting\npragma solidity >=0.7.0 <0.9.0;\n\nI think you had this in your contract before\n userAddressToStruct[_userAddress] = contractAddress;\n\ninstead of this\n userAddressToStruct[_userAddress].contractAddress = contractAddress;\n\n",
"You can use the following code to get the address of the deployed contract:\naddress contractAddress;\n(contractAddress,) = new StorageContract(_userAddress);\nuserAddressToStruct[_userAddress].contractAddress = contractAddress;\n\n",
"The error you're seeing is because you're trying to assign the return value of the new keyword to a variable of type address, but the new keyword actually returns a contract instance, which is a different type than an address.\nTo fix this, you'll need to declare a variable of the correct type to store the contract instance returned by the new keyword. You can do this by using the type of the contract you're deploying (in this case, StorageContract) as the type of the variable. Here's an example:\n// arbitrary StorageContract being deployed\ncontract StorageContract {\n address immutable deployedBy;\n\n constructor(address _deployedBy) {\n deployedBy = _deployedBy;\n }\n}\n\ncontract StorageFactory {\n\n struct ContractData {\n address contractAddress; // I want to save the deployed contract address in a mapping that includes this struct\n bool exists;\n }\n\n // mapping from address of user who deployed new Storage contract => ContractData struct (which includes the contract address)\n mapping(address => ContractData) public userAddressToStruct;\n\n function createStorageContract(address _userAddress) public {\n\n // require that the user has not previously deployed a storage contract\n require(!userAddressToStruct[_userAddress].exists, \"Account already exists\");\n\n // Declare a variable of type StorageContract to store the contract instance returned by the `new` keyword\n StorageContract storageContract = new StorageContract(_userAddress);\n\n // Get the contract address by calling the .address property on the contract instance\n address contractAddress = storageContract.address;\n\n // Save the contract address to the mapping\n userAddressToStruct[_userAddress].contractAddress = contractAddress;\n userAddressToStruct[_userAddress].exists = true;\n }\n}\n\n",
"The problem is that the address type is not implicitly convertible from the type of the value returned by the new keyword. To fix this, you can simply use the new keyword to create a new instance of the StorageContract contract, and then use the .address property to get the address of the newly deployed contract, like this:\n// Create a new instance of the StorageContract contract\nStorageContract storageContract = new StorageContract(_userAddress);\n\n// Get the address of the newly deployed contract\naddress contractAddress = storageContract.address;\n\n// Save the contract address in the mapping\nuserAddressToStruct[_userAddress].contractAddress = contractAddress;\n\nYou can also use the address(...) conversion function to convert the storageContract instance to an address type, like this:\n// Create a new instance of the StorageContract contract\nStorageContract storageContract = new StorageContract(_userAddress);\n\n// Convert the storageContract instance to an address type\naddress contractAddress = address(storageContract);\n\n// Save the contract address in the mapping\nuserAddressToStruct[_userAddress].contractAddress = contractAddress;\n\nI hope this helps! Let me know if you have any other questions.\n-- Oops! --\nI'm sorry, there seems to have been a mistake in my earlier response. The correct way to get the address of the newly deployed contract is to use the .address property of the returned tuple, like this:\n// get the contract instance and address as a tuple\n(address contractAddress, StorageContract storageContract) = new StorageContract(_userAddress);\n\n// save the contract address\nuserAddressToStruct[_userAddress].contractAddress = contractAddress;\n\nHowever, this will only work if the StorageContract contract has been declared before the StorageFactory contract. 
If you declare the StorageContract contract after the StorageFactory contract, as in the code you posted, you will get the error you mentioned because the StorageContract contract is not yet visible to the StorageFactory contract at the time of the new keyword.\nTo fix this, you can move the declaration of the StorageContract contract to before the declaration of the StorageFactory contract, like this:\n// arbitrary StorageContract being deployed\ncontract StorageContract {\n address immutable deployedBy;\n\n constructor(address _deployedBy) {\n deployedBy = _deployedBy;\n }\n}\n\n// StorageFactory contract\ncontract StorageFactory {\n ...\n}\n\nI hope this helps! Let me know if you have any other questions.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"blockchain",
"ethereum",
"smartcontracts",
"solidity"
] |
stackoverflow_0074663044_blockchain_ethereum_smartcontracts_solidity.txt
|
Q:
Mongoose model promise returns a response instead of undefined upon rejection
This is my controller function I want to test
const register = async (req, res) => {
    const user = await User.create({
        name: req.body.name,
        username: req.body.username,
        password: req.body.password,
        email: req.body.email
    }).catch(error => handleServerErrorResponse(res));

    console.log('isUserRes?', user === res);
    if (user) res.status(201).json(user.toJSON());
}
This is my handleServerErrorResponse function
handleServerErrorResponse: (res, error = 'A server error occured') => {
    return res.status(500).send({
        message: error
    });
},
This is my test implementation (I'm trying to mock mongoose model 'create' call and have it fail)
it('should fail', async () => {
    // arrange
    jest.spyOn(User, 'create').mockRejectedValue('error');
    const req = mockRequest();
    const res = mockResponse();
    req.body = payload;

    // act
    await register(req, res);

    // assert
    expect(res.status).toHaveBeenCalledWith(500);
    expect(res.send).toHaveBeenCalledWith(
        expect.objectContaining({
            message: 'A server error occured'
        })
    );
})
Running the tests gives me the following
jest --silent=false --testEnvironment=node --runInBand --detectOpenHandles --coverage ./tests
console.log
isUserRes? true
● register › should fail
TypeError: user.toJSON is not a function
My question is why is const user not undefined when the create Promise fails? I've created a jsbin here that shows the value remains undefined when the await Promise fails. So what am I doing wrong here? And why is it referencing the response object?
Note: Even when sending real data using an HTTP client, if the create promise fails, the const user still references the response object and becomes truthy, failing my check. I am aware that this can be fixed using a then handler
.then(user => res.status(201).json(user));
but I am curious to know why is it behaving like this.
Update: Changed my handleServerErrorResponse function to this
handleServerErrorResponse: (res, error = 'A server error occured') => {
res.status(500).send({
message: error
});
return;
},
As suggested by haggbart, this function was responsible for setting the user to the returned response object.
A:
I think the problem here is that you're catching the error in the register function and returning the result of calling handleServerErrorResponse, rather than just rethrowing the error. This means that the user variable will be assigned the result of calling handleServerErrorResponse, which is the response object.
Instead of catching the error in the register function, you could rethrow it and catch it in the test itself, like this:
const register = async (req, res) => {
const user = await User.create({
name: req.body.name,
username: req.body.username,
password: req.body.password,
email: req.body.email
});
console.log('isUserRes?', user === res);
if(user) res.status(201).json(user.toJSON());
}
it('should fail', async () => {
// arrange
jest.spyOn(User, 'create').mockRejectedValue('error');
const req = mockRequest();
const res = mockResponse();
req.body = payload;
// act
try {
await register(req, res);
} catch (error) {
// assert
expect(res.status).toHaveBeenCalledWith(500);
expect(res.send).toHaveBeenCalledWith(
expect.objectContaining({
message: 'A server error occured'
})
);
}
})
This way, the error will be thrown in the register function, and caught in the test, so the user variable will remain undefined.
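The behaviour can also be reproduced without mongoose or jest at all. The sketch below (plain Node, with `fakeRes` and `handleError` as stand-ins for the response object and the error handler) shows that `await promise.catch(fn)` resolves to whatever `fn` returns, which is exactly why `user === res` was true:

```javascript
// Stand-in for the Express response object
const fakeRes = { name: 'response' };

// Stand-in for handleServerErrorResponse: res.status(...).send(...) returns
// the response object, so the catch handler returns it too
const handleError = (res) => res;

async function demo() {
  // The rejected promise is caught, and the catch handler's return value
  // becomes the resolved value that `await` hands back
  const user = await Promise.reject('error').catch(() => handleError(fakeRes));
  return user === fakeRes;
}

demo().then(result => console.log(result)); // true
```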
|
Mongoose model promise returns a response instead of undefined upon rejection
|
This is my controller function I want to test
const register = async (req, res) => {
const user = await User.create({
name: req.body.name,
username: req.body.username,
password: req.body.password,
email: req.body.email
}).catch(error => handleServerErrorResponse(res));
console.log('isUserRes?', user === res);
if(user) res.status(201).json(user.toJSON());
}
This is my handleServerErrorResponse function
handleServerErrorResponse: (res, error = 'A server error occured') => {
return res.status(500).send({
message: error
});
},
This is my test implementation (I'm trying to mock mongoose model 'create' call and have it fail)
it('should fail', async () => {
// arrange
jest.spyOn(User, 'create').mockRejectedValue('error');
const req = mockRequest();
const res = mockResponse();
req.body = payload;
// act
await register(req, res);
// assert
expect(res.status).toHaveBeenCalledWith(500);
expect(res.send).toHaveBeenCalledWith(
expect.objectContaining({
message: 'A server error occured'
})
);
})
Running the tests gives me the following
jest --silent=false --testEnvironment=node --runInBand --detectOpenHandles --coverage ./tests
console.log
isUserRes? true
● register › should fail
TypeError: user.toJSON is not a function
My question is why is const user not undefined when the create Promise fails? I've created a jsbin here that shows the value remains undefined when the await Promise fails. So what am I doing wrong here? And why is it referencing the response object?
Note: Even when sending real data using an HTTP client, if the create promise fails, the const user still references the response object and becomes truthy, failing my check. I am aware that this can be fixed using a then handler
.then(user => res.status(201).json(user));
but I am curious to know why is it behaving like this.
Update: Changed my handleServerErrorResponse function to this
handleServerErrorResponse: (res, error = 'A server error occured') => {
res.status(500).send({
message: error
});
return;
},
As suggested by haggbart, this function was responsible for setting the user to the returned response object.
|
[
"I think the problem here is that you're catching the error in the register function and returning the result of calling handleServerErrorResponse, rather than just rethrowing the error. This means that the user variable will be assigned the result of calling handleServerErrorResponse, which is the response object.\nInstead of catching the error in the register function, you could rethrow it and catch it in the test itself, like this:\nconst register = async (req, res) => {\n const user = await User.create({\n name: req.body.name,\n username: req.body.username,\n password: req.body.password,\n email: req.body.email\n });\n\n console.log('isUserRes?', user === res);\n if(user) res.status(201).json(user.toJSON());\n}\n\nit('should fail', async () => {\n\n // arrange\n jest.spyOn(User, 'create').mockRejectedValue('error');\n const req = mockRequest();\n const res = mockResponse();\n req.body = payload;\n\n // act\n try {\n await register(req, res);\n } catch (error) {\n // assert\n expect(res.status).toHaveBeenCalledWith(500);\n expect(res.send).toHaveBeenCalledWith(\n expect.objectContaining({\n message: 'A server error occured'\n })\n );\n }\n})\n\nThis way, the error will be thrown in the register function, and caught in the test, so the user variable will remain undefined.\n"
] |
[
1
] |
[] |
[] |
[
"express",
"jestjs",
"mongoose",
"node.js",
"unit_testing"
] |
stackoverflow_0074663420_express_jestjs_mongoose_node.js_unit_testing.txt
|
Q:
How can one achieve better generic inference
Given the example below, are there any ways to achieve DRY code that's similar to foobarA?
const foo = <S,T extends string>(type:T)=>{
return {
[type]: (arg: S)=>console.log(arg)
}
}
// the following correctly types foobar, however 'bar' redundantly repeats.
const foobar = foo<string,'bar'>('bar')
// ideally one would want the generic inference to be smarter and the
// following to work:
const foobarA = foo<string>('bar')
A:
The following will literally “work” — it will compile. I don’t know if it will actually accomplish what you want.
const foo = <S,T extends string = string>(type:T)=>{
return {
[type]: (arg: S)=>console.log(arg)
}
}
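To illustrate the trade-off of the default type parameter (a sketch; note what `T` becomes when it is left to the default):

```typescript
const foo = <S, T extends string = string>(type: T) => {
  return {
    [type]: (arg: S) => console.log(arg),
  };
};

// This now compiles with only one explicit type argument, but T falls back
// to the default `string` rather than being inferred as the literal 'bar',
// so the result is typed as { [x: string]: (arg: string) => void }
const foobarA = foo<string>('bar');
foobarA['bar']('hello'); // logs "hello"
```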
|
How can one achieve better generic inference
|
Given the example below, are there any ways to achieve DRY code that's similar to foobarA?
const foo = <S,T extends string>(type:T)=>{
return {
[type]: (arg: S)=>console.log(arg)
}
}
// the following correctly types foobar, however 'bar' redundantly repeats.
const foobar = foo<string,'bar'>('bar')
// ideally one would want the generic inference to be smarter and the
// following to work:
const foobarA = foo<string>('bar')
|
[
"The following will literally “work” — it will compile. I don’t know if it will actually accomplish what you want.\nconst foo = <S,T extends string = string>(type:T)=>{\n return { \n [type]: (arg: S)=>console.log(arg)\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"typescript"
] |
stackoverflow_0074662987_typescript.txt
|
Q:
Allow EC2 instance to access S3 bucket
I've got an S3 bucket with a few files in. Public access disabled
I've also got an EC2 instance which I want to be able to access all files in the bucket.
I created a role with permissions like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::mybucketname"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucketname/*"
]
}
]
}
I assigned the role to my EC2 instance, but I still get 403 forbidden if I try and access a file in the bucket from my EC2 instance.
Not sure what i've done wrong.
Thanks
A:
When accessing private objects in an Amazon S3 bucket, it is necessary to provide authentication to prove that you are permitted to access the object.
It would appear that you are attempting to access the file without any authentication information, by simply accessing the URL: mybucket.s3.eu-west-2.amazonaws.com/myfile
If you wish to access an object this way, you can create an Amazon S3 pre-signed URL, which provides time-limited access to a private object in Amazon S3. It appends a 'signature' to the URL to prove that you are authorised to access it.
Alternatively, you could access the object via the AWS Command-Line Interface (CLI), or via the AWS SDK in your preferred programming language. This way, API requests will be authenticated against the S3 service.
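The CLI approach can look like this when run from the EC2 instance, which picks up the instance role credentials automatically (a sketch only; the bucket and object names are the placeholders from the question):

```shell
# List the bucket contents (uses s3:ListBucket from the role policy)
aws s3 ls s3://mybucketname/

# Download an object (uses s3:GetObject from the role policy)
aws s3 cp s3://mybucketname/myfile ./myfile

# Or generate a time-limited pre-signed URL (valid for 1 hour here)
aws s3 presign s3://mybucketname/myfile --expires-in 3600
```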
|
Allow EC2 instance to access S3 bucket
|
I've got an S3 bucket with a few files in. Public access disabled
I've also got an EC2 instance which I want to be able to access all files in the bucket.
I created a role with permissions like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::mybucketname"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucketname/*"
]
}
]
}
I assigned the role to my EC2 instance, but I still get 403 forbidden if I try and access a file in the bucket from my EC2 instance.
Not sure what i've done wrong.
Thanks
|
[
"When accessing private objects in an Amazon S3 bucket, it is necessary to provide authentication to prove that you are permitted to access the object.\nIt would appear that you are attempting to access the file without any authentication information, by simply accessing the URL: mybucket.s3.eu-west-2.amazonaws.com/myfile\nIf you wish to access an object this way, you can create an Amazon S3 pre-signed URL, which provides time-limited access to a private object in Amazon S3. It appends a 'signature' to the URL to prove that you are authorised to access it.\nAlternatively, you could access the object via the AWS Command-Line Interface (CLI), or via the AWS SDK in your preferred programming language. This way, API requests will be authenticated against the S3 service.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_ec2",
"amazon_iam",
"amazon_s3",
"amazon_web_services"
] |
stackoverflow_0074658812_amazon_ec2_amazon_iam_amazon_s3_amazon_web_services.txt
|
Q:
Swap position of keys in a dictionary for same value
I have a dictionary
cost = {
(0,1):70,
(0,2):40,
(1,2):65
}
I would like a dictionary where the values for the opposite keys are also the same. To clarify,
(0,1):70 is also the same as (1,0):70
I tried to flip the values of the keys using this:
for i,j in cost.keys():
cost [j,i]==cost[i,j]
This gives a key error of (1,0) but that is the key that I want the code to add.
I further tried cost1 = {tuple(y): x for x, y in cost.keys()}
This resulted in a TypeError:'int' object not iterable
How can I then further append all the values to a dictionary? Thank you for your time and help.
A:
Try this code snippet, to see if that's what you want:
# make a new dict to reflect the swap keys:
cost1 = {}
for key, val in cost.items():
x, y = key # unpack the key
cost1[(y, x)] = val # swap x, y - tuple as the new key
print(cost1)
# {(1, 0): 70, (2, 0): 40, (2, 1): 65}
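Building on the snippet above, a dict comprehension can also produce a single dictionary that holds both key orders at once (a sketch; it assumes no pair already appears in both orders with conflicting values):

```python
cost = {
    (0, 1): 70,
    (0, 2): 40,
    (1, 2): 65,
}

# Merge the original dict with a reversed-key copy of itself
symmetric = {**cost, **{(y, x): v for (x, y), v in cost.items()}}

print(symmetric[(0, 1)])  # 70
print(symmetric[(1, 0)])  # 70
```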
|
Swap position of keys in a dictionary for same value
|
I have a dictionary
cost = {
(0,1):70,
(0,2):40,
(1,2):65
}
I would like a dictionary where the values for the opposite keys are also the same. To clarify,
(0,1):70 is also the same as (1,0):70
I tried to flip the values of the keys using this:
for i,j in cost.keys():
cost [j,i]==cost[i,j]
This gives a key error of (1,0) but that is the key that I want the code to add.
I further tried cost1 = {tuple(y): x for x, y in cost.keys()}
This resulted in a TypeError:'int' object not iterable
How can I then further append all the values to a dictionary? Thank you for your time and help.
|
[
"Try this code snippet, to see if that's what you want:\n# make a new dict to reflect the swap keys:\ncost1 = {}\n\nfor key, val in cost.items():\n x, y = key # unpack the key\n cost1[(y, x)] = val # swap x, y - tuple as the new key\n \nprint(cost1)\n# {(1, 0): 70, (2, 0): 40, (2, 1): 65}\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074663417_dictionary_python.txt
|
Q:
Change Color for iPhone visitor
How can I change the color of an element, depending on whether the visitor is using iOS (iPhone)?
This is what i tried:
<p id="TestID">Change this color for iPhone</p>
document.onload = function myFunc(){
if( (navigator.platform.indexOf("iPhone") != -1))
document.getElementById("TestID").style.color = "red !important";);}
But it did not work.
Any suggestions?
Much appreciated
A:
Here is how I check for the user's device; maybe this works for you too.
const userAgent = navigator.userAgent || navigator.vendor || window.opera;
if (/android/i.test(userAgent)) {
// android users
} else if (/iphone/i.test(userAgent) || /ipad/i.test(userAgent)) {
// ios users
}
for your case, try:
window.onload = () => {
const userAgent = navigator.userAgent || navigator.vendor || window.opera;
if (/iphone/i.test(userAgent) || /ipad/i.test(userAgent)) {
document.querySelector("#TestID").style.color = "red";
}
}
Also, you used document.onload incorrectly; more on that
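As a quick sanity check, the regex approach can be exercised against sample user-agent strings (the UA strings below are illustrative examples, not real captures):

```javascript
// Same check as above, factored into a helper for testing
const isIOS = (ua) => /iphone/i.test(ua) || /ipad/i.test(ua);

console.log(isIOS('Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)')); // true
console.log(isIOS('Mozilla/5.0 (Linux; Android 13; Pixel 5)'));               // false
```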
|
Change Color for iPhone visitor
|
How can I change the color of an element, depending on whether the visitor is using iOS (iPhone)?
This is what i tried:
<p id="TestID">Change this color for iPhone</p>
document.onload = function myFunc(){
if( (navigator.platform.indexOf("iPhone") != -1))
document.getElementById("TestID").style.color = "red !important";);}
But it did not work.
Any suggestions?
Much appreciated
|
[
"here how i check for users' device. maybe this works for you too.\nconst userAgent = navigator.userAgent || navigator.vendor || window.opera;\nif (/android/i.test(userAgent)) {\n // android users\n} else if (/iphone/i.test(userAgent) || /ipad/i.test(userAgent)) {\n // ios users\n}\n\nfor your case, try:\nwindow.onload = () => {\n const userAgent = navigator.userAgent || navigator.vendor || window.opera;\n if (/iphone/i.test(userAgent) || /ipad/i.test(userAgent)) {\n document.querySelector(\"#TestID\").style.color = \"red\";\n }\n}\n\nalso you used document.onload incorrectly. more on that\n"
] |
[
1
] |
[] |
[] |
[
"html",
"ios",
"javascript"
] |
stackoverflow_0074663429_html_ios_javascript.txt
|
Q:
What is the easier way to debug jpackage?
I have a problem with jpackage that is rare enough that it is not worth sharing here. I feel my only way to kill this bug is to debug the jpackage tool's code. I've seen that its code is pure Java, and I was wondering if there is an easy way to step through the failing code.
I'm an experienced Java programmer and I know how to use a debugger, but I could not start jpackage from Java (it seems the main class package is not exported from its module). I think that compiling the full JDK will be too much for me (because it is native code), and that is probably not the way to reach the Java part of the code.
That's why I am asking for help here. Is there another way?
I'm simply not able to do it alone, and the situation is very frustrating because I've been coding a solution for weeks and now have nothing to deliver because of this bug. I also have no time to file the bug (it's difficult to know whether it is a JDK or a WiX bug, and I need to finish my work in a week).
In an ideal world it should be possible to run jpackage.exe in debug mode and attach a remote debugger to it but it sounds like reality is a little bit harder
Thank you in advance
A:
What you can do is invoke jpackage through the ToolProvider API:
static final ToolProvider JPACKAGE = ToolProvider.findFirst("jpackage").orElseThrow();
public static void main(String[] args) {
JPACKAGE.run(System.out, System.err, "--help"); // put actual options here
}
You can then debug this application from an IDE or other debugger, and step into the jpackage code from there.
This works since in this case jpackage runs in the same process as the program it's called from.
If a wix command fails with a certain exit code, you can also re-run jpackage with the --temp option to output all the temp files into a fixed directory, and then re-run the failing Wix command from the exception message directly.
Yet another option that can help is the --verbose option, which makes jpackage output more information.
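For the --temp and --verbose options mentioned above, an invocation might look like this (a sketch only; the app name, input directory, and jar name are placeholder values):

```shell
# Keep jpackage's working files in a fixed directory and turn on verbose
# logging, so a failing WiX step can be inspected and re-run by hand
jpackage --verbose --temp ./jpackage-work \
  --name MyApp \
  --input target/ \
  --main-jar myapp.jar
```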
|
What is the easier way to debug jpackage?
|
I have a problem with jpackage that is rare enough that it is not worth sharing here. I feel my only way to kill this bug is to debug the jpackage tool's code. I've seen that its code is pure Java, and I was wondering if there is an easy way to step through the failing code.
I'm an experienced Java programmer and I know how to use a debugger, but I could not start jpackage from Java (it seems the main class package is not exported from its module). I think that compiling the full JDK will be too much for me (because it is native code), and that is probably not the way to reach the Java part of the code.
That's why I am asking for help here. Is there another way?
I'm simply not able to do it alone, and the situation is very frustrating because I've been coding a solution for weeks and now have nothing to deliver because of this bug. I also have no time to file the bug (it's difficult to know whether it is a JDK or a WiX bug, and I need to finish my work in a week).
In an ideal world it should be possible to run jpackage.exe in debug mode and attach a remote debugger to it but it sounds like reality is a little bit harder
Thank you in advance
|
[
"What you can do is invoke jpackage through the ToolProvider API:\nstatic final ToolProvider JPACKAGE = ToolProvider.findFirst(\"jpackage\").orElseThrow();\npublic static void main(String[] args) {\n JPACKAGE.run(System.out, System.err, \"--help\"); // put actual options here\n}\n\nYou can then debug this application from an IDE or other debugger, and step into the jpackage code from there.\nThis works since in this case jpackage runs in the same process as the program it's called from.\n\nIf a wix command fails with a certain exit code, you can also re-run jpackage with the --temp option to output all the temp files into a fixed directory, and then re-run the failing Wix command from the exception message directly.\n\nYet another option that can help is the --verbose option, which makes jpackage output more information.\n"
] |
[
3
] |
[] |
[] |
[
"java",
"java_17",
"jpackage",
"remote_debugging"
] |
stackoverflow_0074663391_java_java_17_jpackage_remote_debugging.txt
|
Q:
Why does the .NET MAUI application build and run for Windows Desktop but fails on build for Android Emulator?
I recently started learning .NET and am currently learning to build applications using .NET MAUI.
At the moment, I am following Build mobile and desktop apps with .NET MAUI
When running the .NET MAUI application that is created when creating a new project in Visual Studio, it is able to run and build fine for the windows machine. But when I try to run the android emulator, "Pixel 5 - API 33 (Android 13.0 - API 33)", it starts the emulator but fails the build for the application.
I tried deleting the emulator and redownloading it again to see if it would work but I got the same problem.
Additionally, these are the logs when I try to build application and the target is the android emulator.
Build started...
1>------ Build started: Project: MauiApp1, Configuration: Debug Any CPU ------
Starting emulator pixel_5_-_api_33 ...
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Android is not installed. Please restore and try again.
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Maui.Core is not installed. Please restore and try again.
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Maui.Controls is not installed. Please restore and try again.
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Maui.Essentials is not installed. Please restore and try again.
1>Done building project "MauiApp1.csproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
========== Elapsed 00:00.468 ==========
========== Deploy: 0 succeeded, 0 failed, 0 skipped ==========
========== Elapsed 00:00.468 ==========
C:\Program Files (x86)\Android\android-sdk\emulator\emulator.EXE -netfast -accel on -avd pixel_5_-_api_33 -prop monodroid.avdname=pixel_5_-_api_33
Emulator pixel_5_-_api_33 is running.
Update:
I decided to create a new project and it was able to run fine. I'm not sure why it didn't work previously but when I looked at the live visual tree the component of the app wouldn't show up so I think that might have something to do with it.
A:
Per your build log, the four targeting packs below had not been completely downloaded by Visual Studio previously, which is why the build error occurred for the Android Emulator target.
Microsoft.Android
Microsoft.Maui.Core
Microsoft.Maui.Controls
Microsoft.Maui.Essentials
When you created a new project, the packs were ready so it would run successfully.
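If recreating the project is not an option, the missing workloads and targeting packs can usually be restored from the command line as well (a sketch; run in the project directory, and note that workload names can vary by SDK version):

```shell
# Install the MAUI workload (brings in the Android/MAUI targeting packs)
dotnet workload install maui

# Or restore exactly the workloads this project declares it needs
dotnet workload restore
```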
A:
I also had this other error: "could not find android.jar for API level ##" (the error message showed API level 31).
Following the steps in this solution helped me fix these too:
go to Tools | Android | Android SDK Manager and install the missing
Android SDK.
https://stackoverflow.com/a/73841407/5436341
|
Why does the .NET MAUI application build and run for Windows Desktop but fails on build for Android Emulator?
|
I recently started learning .NET and am currently learning to build applications using .NET MAUI.
At the moment, I am following Build mobile and desktop apps with .NET MAUI
When running the .NET MAUI application that is created when creating a new project in Visual Studio, it is able to run and build fine for the windows machine. But when I try to run the android emulator, "Pixel 5 - API 33 (Android 13.0 - API 33)", it starts the emulator but fails the build for the application.
I tried deleting the emulator and redownloading it again to see if it would work but I got the same problem.
Additionally, these are the logs when I try to build application and the target is the android emulator.
Build started...
1>------ Build started: Project: MauiApp1, Configuration: Debug Any CPU ------
Starting emulator pixel_5_-_api_33 ...
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Android is not installed. Please restore and try again.
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Maui.Core is not installed. Please restore and try again.
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Maui.Controls is not installed. Please restore and try again.
1>C:\Program Files\dotnet\sdk\7.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(376,5): error NETSDK1127: The targeting pack Microsoft.Maui.Essentials is not installed. Please restore and try again.
1>Done building project "MauiApp1.csproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
========== Elapsed 00:00.468 ==========
========== Deploy: 0 succeeded, 0 failed, 0 skipped ==========
========== Elapsed 00:00.468 ==========
C:\Program Files (x86)\Android\android-sdk\emulator\emulator.EXE -netfast -accel on -avd pixel_5_-_api_33 -prop monodroid.avdname=pixel_5_-_api_33
Emulator pixel_5_-_api_33 is running.
Update:
I decided to create a new project and it was able to run fine. I'm not sure why it didn't work previously but when I looked at the live visual tree the component of the app wouldn't show up so I think that might have something to do with it.
|
[
"Per your build log, the four targeting packs below were not being completed downloaded by the Visual Studio previously so that's why the build error occurred in Android Emulator.\n\nMicrosoft.Android\nMicrosoft.Maui.Core\nMicrosoft.Maui.Controls\nMicrosoft.Maui.Essentials\n\nWhen you created a new project, the packs were ready so it would run successfully.\n",
"I also had this other error: \"could not find android.jar for API level ##\". The error message below shows for API version 31\"\nAnd following the steps in this solution helped me fix these too.\n\ngo to Tools | Android | Android SDK Manager and install the missing\nAndroid SDK.\n\nhttps://stackoverflow.com/a/73841407/5436341\n"
] |
[
0,
0
] |
[] |
[] |
[
".net",
"android_emulator",
"maui"
] |
stackoverflow_0074385500_.net_android_emulator_maui.txt
|
Q:
How to add a link inside an svg circle
I have drawn a circle using svg. This circle has a hover effect. I would like to add a link within in the circle and for the link text to change color along with the hover effect.
svg#circle {
height: 250px;
width: 250px;
}
circle {
stroke-dasharray: 700;
stroke-dashoffset: 700;
stroke-linecap: butt;
-webkit-transition: all 2s ease-out;
-moz-transition: all 2s ease-out;
-ms-transition: all 2s ease-out;
-o-transition: all 2s ease-out;
transition: all 2s ease-out;
}
circle:hover {
fill: pink;
stroke-dashoffset: 0;
stroke-dasharray: 700;
stroke-width: 10;
}
<svg id="circle">
<circle cx="125" cy="125" r="100" stroke="darkblue" stroke-width="3" fill="green" />
</svg>
A:
You need to add a text element wrapped in an anchor link.
Note, the text element, being on top of the circle will block the hover action on that circle. So, I've wrapped the whole thing in a g group and placed the hover capture on that instead.
svg#circle {
height: 250px;
width: 250px;
}
g circle {
stroke-dasharray: 700;
stroke-dashoffset: 700;
stroke-linecap: butt;
-webkit-transition: all 2s ease-out;
-moz-transition: all 2s ease-out;
-ms-transition: all 2s ease-out;
-o-transition: all 2s ease-out;
transition: all 2s ease-out;
}
g:hover circle {
fill: pink;
stroke-dashoffset: 0;
stroke-dasharray: 700;
stroke-width: 10;
}
text {
fill: pink;
font-size: 24px;
}
a:hover text {
fill: blue;
}
<svg id="circle">
<g>
<circle cx="125" cy="125" r="100" stroke="darkblue" stroke-width="3" fill="green" />
<a xlink:href="https://www.google.co.uk/" target="_top">
<text x="50%" y="50%" style="text-anchor: middle">google</text>
</a>
</g>
</svg>
A:
I think this will work :
<svg id="circle">
<a xlink:href="https://www.google.com" style="cursor: pointer" target="_blank">
<circle cx="125" cy="70" r="60" stroke="darkblue" stroke-width="3" fill="green" />
</a>
</svg>
EDIT: Dynamically adding link to SVG Circle.
function addAnchor(){
var dummyElement = document.createElement("div");
dummyElement.innerHTML = '<a xlink:href="https://www.google.com" style="cursor: pointer" target="_blank"></a>';
var htmlAnchorElement = dummyElement.querySelector("a");
var circleSVG = document.getElementById("circle");
htmlAnchorElement.innerHTML = circleSVG.innerHTML;
circleSVG.innerHTML = dummyElement.innerHTML;
}
<svg id="circle">
<circle cx="125" cy="70" r="60" stroke="darkblue" stroke-width="3" fill="green" />
</svg>
<button onclick="addAnchor()">Add Anchor</button>
A:
very simple!..
just wrap the entire SVG in a link...this worked for me anyway!!
initialise the link,
insert svg,
close svg,
close link
<a href="http://stackoverflow.com/questions/34968082/how-to-add-a-link-inside-an-svg-circle#"> <svg style="align-self: center" height="125" width="190">
<defs>
<linearGradient id="grad1" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="3%" style="stop-color:rgb(255,255,0);stop-opacity:0" />
<stop offset="100%" style="stop-color:rgb(255,0,0);stop-opacity:1" />
</linearGradient>
</defs>
<ellipse cx="100" cy="70" rx="85" ry="55" fill="url(#grad1)" />
<text fill="#000066" font-size="40" font-family="Verdana" x="50" y="86">MBS</text>
Sorry, your browser does not support SVG.
</svg> </a>
A:
Very, very simple: add onClick to the element,
<svg>
<circle onClick="location.href='https://stackoverflow.com'" cx="50" cy="50" r="25" stroke="darkblue" stroke-width="3" fill="green" />
</svg>
|
How to add a link inside an svg circle
|
I have drawn a circle using svg. This circle has a hover effect. I would like to add a link within in the circle and for the link text to change color along with the hover effect.
svg#circle {
height: 250px;
width: 250px;
}
circle {
stroke-dasharray: 700;
stroke-dashoffset: 700;
stroke-linecap: butt;
-webkit-transition: all 2s ease-out;
-moz-transition: all 2s ease-out;
-ms-transition: all 2s ease-out;
-o-transition: all 2s ease-out;
transition: all 2s ease-out;
}
circle:hover {
fill: pink;
stroke-dashoffset: 0;
stroke-dasharray: 700;
stroke-width: 10;
}
<svg id="circle">
<circle cx="125" cy="125" r="100" stroke="darkblue" stroke-width="3" fill="green" />
</svg>
|
[
"You need to add a text element wrapped in an anchor link.\nNote, the text element, being on top of the circle will block the hover action on that circle. So, I've wrapped the whole thing in a g group and placed the hover capture on that instead.\n\n\nsvg#circle {\r\n height: 250px;\r\n width: 250px;\r\n}\r\ng circle {\r\n stroke-dasharray: 700;\r\n stroke-dashoffset: 700;\r\n stroke-linecap: butt;\r\n -webkit-transition: all 2s ease-out;\r\n -moz-transition: all 2s ease-out;\r\n -ms-transition: all 2s ease-out;\r\n -o-transition: all 2s ease-out;\r\n transition: all 2s ease-out;\r\n}\r\ng:hover circle {\r\n fill: pink;\r\n stroke-dashoffset: 0;\r\n stroke-dasharray: 700;\r\n stroke-width: 10;\r\n}\r\ntext {\r\n fill: pink;\r\n font-size: 24px;\r\n}\r\na:hover text {\r\n fill: blue;\r\n}\n<svg id=\"circle\">\r\n <g>\r\n <circle cx=\"125\" cy=\"125\" r=\"100\" stroke=\"darkblue\" stroke-width=\"3\" fill=\"green\" />\r\n <a xlink:href=\"https://www.google.co.uk/\" target=\"_top\">\r\n <text x=\"50%\" y=\"50%\" style=\"text-anchor: middle\">google</text>\r\n </a>\r\n </g>\r\n</svg>\n\n\n\n",
"I think this will work :\n\n\n<svg id=\"circle\">\r\n <a xlink:href=\"https://www.google.com\" style=\"cursor: pointer\" target=\"_blank\">\r\n <circle cx=\"125\" cy=\"70\" r=\"60\" stroke=\"darkblue\" stroke-width=\"3\" fill=\"green\" />\r\n </a>\r\n</svg>\n\n\n\nEDIT: Dynamically adding link to SVG Circle.\n\n\nfunction addAnchor(){\r\n var dummyElement = document.createElement(\"div\");\r\n dummyElement.innerHTML = '<a xlink:href=\"https://www.google.com\" style=\"cursor: pointer\" target=\"_blank\"></a>';\r\n \r\n var htmlAnchorElement = dummyElement.querySelector(\"a\");\r\n\r\n var circleSVG = document.getElementById(\"circle\");\r\n\r\n htmlAnchorElement.innerHTML = circleSVG.innerHTML;\r\n\r\n circleSVG.innerHTML = dummyElement.innerHTML;\r\n \r\n}\n<svg id=\"circle\">\r\n <circle cx=\"125\" cy=\"70\" r=\"60\" stroke=\"darkblue\" stroke-width=\"3\" fill=\"green\" />\r\n</svg>\r\n\r\n<button onclick=\"addAnchor()\">Add Anchor</button>\n\n\n\n",
"very simple!..\njust wrap the entire SVG in a link...this worked for me anyway!!\n\ninitialise the link,\ninsert svg,\nclose svg,\nclose link\n\n\n\n <a href=\"http://stackoverflow.com/questions/34968082/how-to-add-a-link-inside-an-svg-circle#\"> <svg style=\"align-self: center\" height=\"125\" width=\"190\" </a>>\r\n <defs>\r\n <linearGradient id=\"grad1\" x1=\"0%\" y1=\"0%\" x2=\"100%\" y2=\"0%\">\r\n <stop offset=\"3%\" style=\"stop-color:rgb(255,255,0);stop-opacity:0\" />\r\n <stop offset=\"100%\" style=\"stop-color:rgb(255,0,0);stop-opacity:1\" />\r\n </linearGradient>\r\n </defs>\r\n <ellipse cx=\"100\" cy=\"70\" rx=\"85\" ry=\"55\" fill=\"url(#grad1)\" />\r\n \r\n <text fill=\"#000066\" font-size=\"40\" font-family=\"Verdana\" x=\"50\" y=\"86\">MBS</text>\r\n Sorry, your browser does not support SVG.\r\n </svg> </a>\n\n\n\n",
"Very, Very Simple. Add onClick in tag element,\n\n\n<svg>\n <circle onClick=\"location.href='https://stackoverflow.com'\" cx=\"50\" cy=\"50\" r=\"25\" stroke=\"darkblue\" stroke-width=\"3\" fill=\"green\" />\n</svg>\n\n\n\n"
] |
[
24,
15,
3,
0
] |
[] |
[] |
[
"css",
"html",
"hyperlink",
"svg"
] |
stackoverflow_0034968082_css_html_hyperlink_svg.txt
|
Q:
How to GeoCode a simple address using Data Science Toolkit
I am fed up with Google's geocoding, and decided to try an alternative. The Data Science Toolkit (http://www.datasciencetoolkit.org) allows you to geocode an unlimited number of addresses. R has an excellent package that serves as a wrapper for its functions (CRAN:RDSTK). The package has a function called street2coordinates() that interfaces with the Data Science Toolkit's geocoding utility.
However, the RDSTK function street2coordinates() does not work if you try to geocode something simple like City, Country. In the following example I will try to use the function to get the latitude and longitude for the city of Phoenix:
> require("RDSTK")
> street2coordinates("Phoenix+Arizona+United+States")
[1] full.address
<0 rows> (or 0-length row.names)
The utility from the data science toolkit works perfectly. This is the URL request that gives the answer:
http://www.datasciencetoolkit.org/maps/api/geocode/json?sensor=false&address=Phoenix+Arizona+United+States
I am interested in geocoding multiple addresses (with complete addresses and city names). I know that the Data Science Toolkit URL will work well.
How do I interface with the URL and get multiple latitudes and longitudes into a data frame with the addresses?
Here is a sample dataset:
dff <- data.frame(address=c(
"Birmingham, Alabama, United States",
"Mobile, Alabama, United States",
"Phoenix, Arizona, United States",
"Tucson, Arizona, United States",
"Little Rock, Arkansas, United States",
"Berkeley, California, United States",
"Duarte, California, United States",
"Encinitas, California, United States",
"La Jolla, California, United States",
"Los Angeles, California, United States",
"Orange, California, United States",
"Redwood City, California, United States",
"Sacramento, California, United States",
"San Francisco, California, United States",
"Stanford, California, United States",
"Hartford, Connecticut, United States",
"New Haven, Connecticut, United States"
))
A:
Like this:
library(httr)
library(rjson)
data <- paste0("[",paste(paste0("\"",dff$address,"\""),collapse=","),"]")
url <- "http://www.datasciencetoolkit.org/street2coordinates"
response <- POST(url,body=data)
json <- fromJSON(content(response,type="text"))
geocode <- do.call(rbind,sapply(json,
function(x) c(long=x$longitude,lat=x$latitude)))
geocode
# long lat
# San Francisco, California, United States -117.88536 35.18713
# Mobile, Alabama, United States -88.10318 30.70114
# La Jolla, California, United States -117.87645 33.85751
# Duarte, California, United States -118.29866 33.78659
# Little Rock, Arkansas, United States -91.20736 33.60892
# Tucson, Arizona, United States -110.97087 32.21798
# Redwood City, California, United States -117.88536 35.18713
# New Haven, Connecticut, United States -72.92751 41.36571
# Berkeley, California, United States -122.29673 37.86058
# Hartford, Connecticut, United States -72.76356 41.78516
# Sacramento, California, United States -121.55541 38.38046
# Encinitas, California, United States -116.84605 33.01693
# Birmingham, Alabama, United States -86.80190 33.45641
# Stanford, California, United States -122.16750 37.42509
# Orange, California, United States -117.85311 33.78780
# Los Angeles, California, United States -117.88536 35.18713
This takes advantage of the POST interface to the street2coordinates API (documented here), which returns all the results in 1 request, rather than using multiple GET requests.
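For reference, the POST body the R code assembles is nothing more than a JSON array of address strings; the paste0 gymnastics are the R equivalent of a one-line JSON serialization. A small Python sketch of the same body construction (addresses taken from the sample dataset):

```python
import json

# First two addresses from the sample dataset, for brevity.
addresses = [
    "Birmingham, Alabama, United States",
    "Mobile, Alabama, United States",
]

# Equivalent of the R line:
#   paste0("[", paste(paste0("\"", dff$address, "\""), collapse=","), "]")
# json.dumps adds a space after each comma, which the API accepts as well.
body = json.dumps(addresses)
print(body)
```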
The absence of Phoenix seems to be a bug in the street2coordinates API. If you go the API demo page and try "Phoenix, Arizona, United States", you get a null response. However, as your example shows, using their "Google-style Geocoder" does give a result for Phoenix. So here's a solution using repeated GET requests. Note that this runs much slower.
geo.dsk <- function(addr){ # single address geocode with data sciences toolkit
require(httr)
require(rjson)
url <- "http://www.datasciencetoolkit.org/maps/api/geocode/json"
response <- GET(url,query=list(sensor="FALSE",address=addr))
json <- fromJSON(content(response,type="text"))
loc <- json['results'][[1]][[1]]$geometry$location
return(c(address=addr,long=loc$lng, lat= loc$lat))
}
result <- do.call(rbind,lapply(as.character(dff$address),geo.dsk))
result <- data.frame(result)
result
# address long lat
# 1 Birmingham, Alabama, United States -86.801904 33.456412
# 2 Mobile, Alabama, United States -88.103184 30.701142
# 3 Phoenix, Arizona, United States -112.0733333 33.4483333
# 4 Tucson, Arizona, United States -110.970869 32.217975
# 5 Little Rock, Arkansas, United States -91.207356 33.608922
# 6 Berkeley, California, United States -122.29673 37.860576
# 7 Duarte, California, United States -118.298662 33.786594
# 8 Encinitas, California, United States -116.846046 33.016928
# 9 La Jolla, California, United States -117.876447 33.857515
# 10 Los Angeles, California, United States -117.885359 35.187133
# 11 Orange, California, United States -117.853112 33.787795
# 12 Redwood City, California, United States -117.885359 35.187133
# 13 Sacramento, California, United States -121.555406 38.380456
# 14 San Francisco, California, United States -117.885359 35.187133
# 15 Stanford, California, United States -122.1675 37.42509
# 16 Hartford, Connecticut, United States -72.763564 41.78516
# 17 New Haven, Connecticut, United States -72.927507 41.365709
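The per-address pattern in geo.dsk() is language-agnostic: issue a GET against the Google-style endpoint and read results[0].geometry.location from the JSON. A minimal Python sketch of just the parsing step; the response dict below is a hand-written stand-in for a real API reply, so the coordinates are illustrative:

```python
# Pull (address, lng, lat) out of a Google-style geocode response,
# mirroring json['results'][[1]][[1]]$geometry$location in the R code.
def parse_geocode(address, response):
    loc = response["results"][0]["geometry"]["location"]
    return (address, loc["lng"], loc["lat"])

# Hand-written stand-in for a real API reply; coordinates are illustrative.
fake_response = {
    "status": "OK",
    "results": [
        {"geometry": {"location": {"lat": 33.4483333, "lng": -112.0733333}}}
    ],
}

print(parse_geocode("Phoenix, Arizona, United States", fake_response))
```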
A:
The ggmap package includes support for geocoding using either Google or Data Science Toolkit, the latter with their "Google-style geocoder". This is quite slow for multiple addresses, as noted in the earlier answer.
library(ggmap)
result <- geocode(as.character(dff[[1]]), source = "dsk")
print(cbind(dff, result))
# address lon lat
# 1 Birmingham, Alabama, United States -86.80190 33.45641
# 2 Mobile, Alabama, United States -88.10318 30.70114
# 3 Phoenix, Arizona, United States -112.07404 33.44838
# 4 Tucson, Arizona, United States -110.97087 32.21798
# 5 Little Rock, Arkansas, United States -91.20736 33.60892
# 6 Berkeley, California, United States -122.29673 37.86058
# 7 Duarte, California, United States -118.29866 33.78659
# 8 Encinitas, California, United States -116.84605 33.01693
# 9 La Jolla, California, United States -117.87645 33.85751
# 10 Los Angeles, California, United States -117.88536 35.18713
# 11 Orange, California, United States -117.85311 33.78780
# 12 Redwood City, California, United States -117.88536 35.18713
# 13 Sacramento, California, United States -121.55541 38.38046
# 14 San Francisco, California, United States -117.88536 35.18713
# 15 Stanford, California, United States -122.16750 37.42509
# 16 Hartford, Connecticut, United States -72.76356 41.78516
# 17 New Haven, Connecticut, United States -72.92751 41.36571
A:
To use the Data Science Toolkit's geocoding utility in R to geocode the addresses in your sample dataset, you can use the getURL() function from the RCurl package to send a request to the API, and fromJSON() from the rjson package to parse the response. Here is an example of how you could do this:
# Load the required packages
library(RCurl)  # provides getURL()
library(rjson)  # provides fromJSON()
# Define the base URL for the API
base_url <- "http://www.datasciencetoolkit.org/maps/api/geocode/json?sensor=false&address="
# Define the data frame of addresses to geocode
dff <- data.frame(address=c(
"Birmingham, Alabama, United States",
"Mobile, Alabama, United States",
"Phoenix, Arizona, United States",
"Tucson, Arizona, United States",
"Little Rock, Arkansas, United States",
"Berkeley, California, United States",
"Duarte, California, United States",
"Encinitas, California, United States",
"La Jolla, California, United States",
"Los Angeles, California, United States",
"Orange, California, United States",
"Redwood City, California, United States",
"Sacramento, California, United States",
"San Francisco, California, United States",
"Stanford, California, United States",
"Hartford, Connecticut, United States",
"New Haven, Connecticut, United States"
))
# Loop over the addresses and geocode each one
results <- vector("list", nrow(dff))
for (i in 1:nrow(dff)) {
# Send the request to the API and parse the response
url <- paste0(base_url, gsub(" ", "+", dff$address[i]))
response <- fromJSON(getURL(url))
# Extract the latitude and longitude from the response
lat <- response$results[[1]]$geometry$location$lat
lng <- response$results[[1]]$geometry$location$lng
# Store the results in a list
results[[i]] <- data.frame(address = dff$address[i], lat = lat, lng = lng)
}
# Bind the results together into a single data frame
results <- do.call(rbind, results)
This code will loop over each address in the dff data frame and send a request to the API using getURL(). It will then parse the response and extract the latitude and longitude from the results. The results for each address will be stored in a list, and then bound together into a single data frame.
You can then use this data frame to work with the geocoded addresses as needed. Let me know if you have any other questions.
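One caveat with building the URL via gsub(" ", "+", ...): only spaces are escaped, so commas and any other reserved characters go over the wire unencoded. The DSTK endpoint tolerates this for the sample addresses, but full percent-encoding is safer in general (in R, utils::URLencode(addr, reserved = TRUE) does it). A short Python illustration of the difference, using the same base URL:

```python
from urllib.parse import quote_plus

base_url = "http://www.datasciencetoolkit.org/maps/api/geocode/json?sensor=false&address="
addr = "Little Rock, Arkansas, United States"

# Naive approach from the R code: only spaces are replaced.
naive = base_url + addr.replace(" ", "+")

# quote_plus percent-encodes reserved characters too (commas become %2C).
encoded = base_url + quote_plus(addr)

print(naive)
print(encoded)
```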
|
How to GeoCode a simple address using Data Science Toolkit
|
I am fed up with Google's geocoding, and decided to try an alternative. The Data Science Toolkit (http://www.datasciencetoolkit.org) allows you to geocode an unlimited number of addresses. R has an excellent package that serves as a wrapper for its functions (CRAN:RDSTK). The package has a function called street2coordinates() that interfaces with the Data Science Toolkit's geocoding utility.
However, the RDSTK function street2coordinates() does not work if you try to geocode something simple like City, Country. In the following example I will try to use the function to get the latitude and longitude for the city of Phoenix:
> require("RDSTK")
> street2coordinates("Phoenix+Arizona+United+States")
[1] full.address
<0 rows> (or 0-length row.names)
The utility from the data science toolkit works perfectly. This is the URL request that gives the answer:
http://www.datasciencetoolkit.org/maps/api/geocode/json?sensor=false&address=Phoenix+Arizona+United+States
I am interested in geocoding multiple addresses (with complete addresses and city names). I know that the Data Science Toolkit URL will work well.
How do I interface with the URL and get multiple latitudes and longitudes into a data frame with the addresses?
Here is a sample dataset:
dff <- data.frame(address=c(
"Birmingham, Alabama, United States",
"Mobile, Alabama, United States",
"Phoenix, Arizona, United States",
"Tucson, Arizona, United States",
"Little Rock, Arkansas, United States",
"Berkeley, California, United States",
"Duarte, California, United States",
"Encinitas, California, United States",
"La Jolla, California, United States",
"Los Angeles, California, United States",
"Orange, California, United States",
"Redwood City, California, United States",
"Sacramento, California, United States",
"San Francisco, California, United States",
"Stanford, California, United States",
"Hartford, Connecticut, United States",
"New Haven, Connecticut, United States"
))
|
[
"Like this:\nlibrary(httr)\nlibrary(rjson)\n\ndata <- paste0(\"[\",paste(paste0(\"\\\"\",dff$address,\"\\\"\"),collapse=\",\"),\"]\")\nurl <- \"http://www.datasciencetoolkit.org/street2coordinates\"\nresponse <- POST(url,body=data)\njson <- fromJSON(content(response,type=\"text\"))\ngeocode <- do.call(rbind,sapply(json,\n function(x) c(long=x$longitude,lat=x$latitude)))\ngeocode\n# long lat\n# San Francisco, California, United States -117.88536 35.18713\n# Mobile, Alabama, United States -88.10318 30.70114\n# La Jolla, California, United States -117.87645 33.85751\n# Duarte, California, United States -118.29866 33.78659\n# Little Rock, Arkansas, United States -91.20736 33.60892\n# Tucson, Arizona, United States -110.97087 32.21798\n# Redwood City, California, United States -117.88536 35.18713\n# New Haven, Connecticut, United States -72.92751 41.36571\n# Berkeley, California, United States -122.29673 37.86058\n# Hartford, Connecticut, United States -72.76356 41.78516\n# Sacramento, California, United States -121.55541 38.38046\n# Encinitas, California, United States -116.84605 33.01693\n# Birmingham, Alabama, United States -86.80190 33.45641\n# Stanford, California, United States -122.16750 37.42509\n# Orange, California, United States -117.85311 33.78780\n# Los Angeles, California, United States -117.88536 35.18713\n\nThis takes advantage of the POST interface to the street2coordinates API (documented here), which returns all the results in 1 request, rather than using multiple GET requests.\nThe absence of Phoenix seems to be a bug in the street2coordinates API. If you go the API demo page and try \"Phoenix, Arizona, United States\", you get a null response. However, as your example shows, using their \"Google-style Geocoder\" does give a result for Phoenix. So here's a solution using repeated GET requests. 
Note that this runs much slower.\ngeo.dsk <- function(addr){ # single address geocode with data sciences toolkit\n require(httr)\n require(rjson)\n url <- \"http://www.datasciencetoolkit.org/maps/api/geocode/json\"\n response <- GET(url,query=list(sensor=\"FALSE\",address=addr))\n json <- fromJSON(content(response,type=\"text\"))\n loc <- json['results'][[1]][[1]]$geometry$location\n return(c(address=addr,long=loc$lng, lat= loc$lat))\n}\nresult <- do.call(rbind,lapply(as.character(dff$address),geo.dsk))\nresult <- data.frame(result)\nresult\n# address long lat\n# 1 Birmingham, Alabama, United States -86.801904 33.456412\n# 2 Mobile, Alabama, United States -88.103184 30.701142\n# 3 Phoenix, Arizona, United States -112.0733333 33.4483333\n# 4 Tucson, Arizona, United States -110.970869 32.217975\n# 5 Little Rock, Arkansas, United States -91.207356 33.608922\n# 6 Berkeley, California, United States -122.29673 37.860576\n# 7 Duarte, California, United States -118.298662 33.786594\n# 8 Encinitas, California, United States -116.846046 33.016928\n# 9 La Jolla, California, United States -117.876447 33.857515\n# 10 Los Angeles, California, United States -117.885359 35.187133\n# 11 Orange, California, United States -117.853112 33.787795\n# 12 Redwood City, California, United States -117.885359 35.187133\n# 13 Sacramento, California, United States -121.555406 38.380456\n# 14 San Francisco, California, United States -117.885359 35.187133\n# 15 Stanford, California, United States -122.1675 37.42509\n# 16 Hartford, Connecticut, United States -72.763564 41.78516\n# 17 New Haven, Connecticut, United States -72.927507 41.365709\n\n",
"The ggmap package includes support for geocoding using either Google or Data Science Toolkit, the latter with their \"Google-style geocoder\". This is quite slow for multiple addresses, as noted in the earlier answer.\nlibrary(ggmap)\nresult <- geocode(as.character(dff[[1]]), source = \"dsk\")\nprint(cbind(dff, result))\n# address lon lat\n# 1 Birmingham, Alabama, United States -86.80190 33.45641\n# 2 Mobile, Alabama, United States -88.10318 30.70114\n# 3 Phoenix, Arizona, United States -112.07404 33.44838\n# 4 Tucson, Arizona, United States -110.97087 32.21798\n# 5 Little Rock, Arkansas, United States -91.20736 33.60892\n# 6 Berkeley, California, United States -122.29673 37.86058\n# 7 Duarte, California, United States -118.29866 33.78659\n# 8 Encinitas, California, United States -116.84605 33.01693\n# 9 La Jolla, California, United States -117.87645 33.85751\n# 10 Los Angeles, California, United States -117.88536 35.18713\n# 11 Orange, California, United States -117.85311 33.78780\n# 12 Redwood City, California, United States -117.88536 35.18713\n# 13 Sacramento, California, United States -121.55541 38.38046\n# 14 San Francisco, California, United States -117.88536 35.18713\n# 15 Stanford, California, United States -122.16750 37.42509\n# 16 Hartford, Connecticut, United States -72.76356 41.78516\n# 17 New Haven, Connecticut, United States -72.92751 41.36571\n\n",
"To use the Data Science Toolkit's geocoding utility in R to geocode the addresses in your sample dataset, you can use the getURL() function from the utils package to send a request to the API and parse the response. Here is an example of how you could do this:\n# Load the required packages\nlibrary(RDSTK)\nlibrary(utils)\n\n# Define the base URL for the API\nbase_url <- \"http://www.datasciencetoolkit.org/maps/api/geocode/json?sensor=false&address=\"\n\n# Define the data frame of addresses to geocode\ndff <- data.frame(address=c(\n \"Birmingham, Alabama, United States\",\n \"Mobile, Alabama, United States\",\n \"Phoenix, Arizona, United States\",\n \"Tucson, Arizona, United States\",\n \"Little Rock, Arkansas, United States\",\n \"Berkeley, California, United States\",\n \"Duarte, California, United States\",\n \"Encinitas, California, United States\",\n \"La Jolla, California, United States\",\n \"Los Angeles, California, United States\",\n \"Orange, California, United States\",\n \"Redwood City, California, United States\",\n \"Sacramento, California, United States\",\n \"San Francisco, California, United States\",\n \"Stanford, California, United States\",\n \"Hartford, Connecticut, United States\",\n \"New Haven, Connecticut, United States\"\n ))\n\n# Loop over the addresses and geocode each one\nresults <- vector(\"list\", nrow(dff))\nfor (i in 1:nrow(dff)) {\n # Send the request to the API and parse the response\n url <- paste0(base_url, gsub(\" \", \"+\", dff$address[i]))\n response <- fromJSON(getURL(url))\n\n # Extract the latitude and longitude from the response\n lat <- response$results[[1]]$geometry$location$lat\n lng <- response$results[[1]]$geometry$location$lng\n\n # Store the results in a list\n results[[i]] <- data.frame(address = dff$address[i], lat = lat, lng = lng)\n}\n\n# Bind the results together into a single data frame\nresults <- do.call(rbind, results)\n\n\nThis code will loop over each address in the dff data frame and send a request to 
the API using getURL(). It will then parse the response and extract the latitude and longitude from the results. The results for each address will be stored in a list, and then bound together into a single data frame.\nYou can then use this data frame to work with the geocoded addresses as needed. Let me know if you have any other questions.\n"
] |
[
16,
5,
0
] |
[] |
[] |
[
"geocoding",
"maps",
"r"
] |
stackoverflow_0022887833_geocoding_maps_r.txt
|
Q:
Kafka Connect Skipping Messages due to Confluent Interceptor
I am seeing the following messages in my Connect log:
WARN Monitoring Interceptor skipped 2294 messages with missing or invalid timestamps for topic TEST_TOPIC_1. The messages were either corrupted or using an older message format. Please verify that all your producers support timestamped messages and that your brokers and topics are all configured with log.message.format.version, and message.format.version >= 0.10.0 respectively. You may also experience this if you are consuming older messages produced to Kafka prior to any of those changes taking place. (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)
I have changed my Kafka broker config to this:
KAFKA_INTER_BROKER_PROTOCOL_VERSION: 0.11.0
KAFKA_LOG_MESSAGE_FORMAT_VERSION: 0.11.0
I am guessing this is reducing my overall producer throughput while I am load testing.
PS:
I don't want to remove the Confluent interceptor because it gives me throughput and consumer lag metrics.
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
Is there any way to not skip those messages? I am using Pepperbox to produce messages, and it doesn't set a timestamp:
{
"messageId":{{SEQUENCE("messageId", 1, 1)}},
"messageBody":"{{RANDOM_ALPHA_NUMERIC("abcedefghijklmnopqrwxyzABCDEFGHIJKLMNOPQRWXYZ", 2700)}}",
"messageCategory":"{{RANDOM_STRING("Finance", "Insurance", "Healthcare", "Shares")}}",
"messageStatus":"{{RANDOM_STRING("Accepted","Pending","Processing","Rejected")}}"
}
Thanks in advance!
A:
Look at the Kafka version in Pepperbox's pom.xml, and you'll see it's using Kafka 0.9.
Timestamps were added to Kafka records as of 0.10.0.
As the error says Please verify that all your producers support timestamped messages.
Recompile the project with a new version, and all produced records will automatically have a timestamp, and therefore not be skipped.
Or, use a different tool like JMeter or Kafka Connect Datagen.
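The timestamp the interceptor wants is record metadata, not part of the message body, so the Pepperbox JSON template cannot supply it; the producer client has to attach it. A rough, conceptual Python illustration of that distinction (not actual producer code):

```python
import json
import time

# The Pepperbox template only controls the record *value* below; the
# timestamp is separate record metadata, so it cannot appear in the template.
value = json.dumps({
    "messageId": 1,
    "messageBody": "x" * 16,   # stand-in for RANDOM_ALPHA_NUMERIC(...)
    "messageCategory": "Finance",
    "messageStatus": "Accepted",
})

# What a 0.10+ producer attaches automatically when you don't set one:
# the current wall-clock time in epoch milliseconds (CreateTime).
record = {"value": value, "timestamp_ms": int(time.time() * 1000)}
print(record["timestamp_ms"])
```

If a client exposes the timestamp directly — kafka-python's KafkaProducer.send(..., timestamp_ms=...), for instance — it can also be set explicitly per record.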
|
Kafka Connect Skipping Messages due to Confluent Interceptor
|
I am seeing the following messages in my Connect log:
WARN Monitoring Interceptor skipped 2294 messages with missing or invalid timestamps for topic TEST_TOPIC_1. The messages were either corrupted or using an older message format. Please verify that all your producers support timestamped messages and that your brokers and topics are all configured with log.message.format.version, and message.format.version >= 0.10.0 respectively. You may also experience this if you are consuming older messages produced to Kafka prior to any of those changes taking place. (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)
I have changed my Kafka broker config to this:
KAFKA_INTER_BROKER_PROTOCOL_VERSION: 0.11.0
KAFKA_LOG_MESSAGE_FORMAT_VERSION: 0.11.0
I am guessing this is reducing my overall producer throughput while I am load testing.
PS:
I don't want to remove the Confluent interceptor because it gives me throughput and consumer lag metrics.
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
Is there any way to not skip those messages? I am using Pepperbox to produce messages, and it doesn't set a timestamp:
{
"messageId":{{SEQUENCE("messageId", 1, 1)}},
"messageBody":"{{RANDOM_ALPHA_NUMERIC("abcedefghijklmnopqrwxyzABCDEFGHIJKLMNOPQRWXYZ", 2700)}}",
"messageCategory":"{{RANDOM_STRING("Finance", "Insurance", "Healthcare", "Shares")}}",
"messageStatus":"{{RANDOM_STRING("Accepted","Pending","Processing","Rejected")}}"
}
Thanks in advance!
|
[
"Look at the Kafka version in the pom, and you'll see it's using Kafka 0.9\nTimestamps were added to Kafka as of 0.10.2.\nAs the error says Please verify that all your producers support timestamped messages.\nRecompile the project with a new version, and all produced records will automatically have a timestamp, and therefore not be skipped.\nOr, use a different tool like JMeter or Kafka Connect Datagen.\n"
] |
[
0
] |
[] |
[] |
[
"apache_kafka",
"apache_kafka_connect",
"confluent_platform"
] |
stackoverflow_0073245600_apache_kafka_apache_kafka_connect_confluent_platform.txt
|
Q:
Update to the latest version of project from remote branch
Problem
I have a project based on Pterodactyl.io.
Take a look at their branches - they have a branch called release/v1.7.0 - this is the release my project is based on.
I have a heavily modified version of Pterodactyl v1.7.0, but I want an easy way to update my entire codebase to v1.10.4 (or for this matter any other branch, such as develop).
How can I easily compare differences and update my whole project?
For example, I want files that have not yet been touched by me to be automatically updated to the newest version. Files that have been modified and can't be merged, I want them to be shown side by side for some kind of manual review.
Technical details:
My repository is a totally separate GitHub repository, a private one.
A:
To merge another Git repository into your repository
(I tried to guess the branch name but you may need to adjust)
Use the git remote command to add the other repository as a remote to your repository. For example, if the other repository is located at https://github.com/other-user/other-repository.git, you would run the following command:
git remote add ptero https://github.com/other-user/other-repository.git
Use the git fetch command to download the branches and commits from the other repository. This will not merge the changes into your repository, but it will make the branches and commits from the other repository available for you to merge.
git fetch ptero
Use the git merge command to merge the changes from the other repository into your repository.
git merge ptero/v1.7.0
Resolve any merge conflicts that may arise, if necessary. This may involve editing the files that have conflicts and using the git add and git commit commands to commit the resolved conflicts.
And you should be up-to-date :)
|
Update to the latest version of project from remote branch
|
Problem
I have a project based on Pterodactyl.io.
Take a look at their branches - they have a branch called release/v1.7.0 - this is the release my project is based on.
I have a heavily modified version of Pterodactyl v1.7.0, but I want an easy way to update my entire codebase to v1.10.4 (or for this matter any other branch, such as develop).
How can I easily compare differences and update my whole project?
For example, I want files that have not yet been touched by me to be automatically updated to the newest version. Files that have been modified and can't be merged, I want them to be shown side by side for some kind of manual review.
Technical details:
My repository is a totally separate GitHub repository, a private one.
|
[
"To merge another Git repository into your repository\n(I tried to guess the branch name but you may need to adjust)\n\nUse the git remote command to add the other repository as a remote to your repository. For example, if the other repository is located at https://github.com/other-user/other-repository.git, you would run the following command:\n\ngit remote add ptero https://github.com/other-user/other-repository.git\n\n\nUse the git fetch command to download the branches and commits from the other repository. This will not merge the changes into your repository, but it will make the branches and commits from the other repository available for you to merge.\n\ngit fetch ptero\n\n\nUse the git merge command to merge the changes from the other repository into your repository.\n\ngit merge ptero/v1.7.0\n\n\nResolve any merge conflicts that may arise, if necessary. This may involve editing the files that have conflicts and using the git add and git commit commands to commit the resolved conflicts\n\nAnd you should be up-to-date :)\n"
] |
[
0
] |
[] |
[] |
[
"git",
"github",
"merge"
] |
stackoverflow_0074661676_git_github_merge.txt
|
Q:
cpp use istream_iterator to populate a vector, avoid copy
This question seems to be quite similar to others around here, but some crucial details are different in this case (in the posts I've seen, the element is always copied from the iterator into the vector).
Sticking with a quite common idiom (as far as I've read), I'm doing file -> ifstream -> istream_iterator -> vector (this approach calls >> on the vector's element type). The problem I've got with this is that, according to the reference, istream_iterator
Dereferencing only returns a copy of the most recently read object
In my case I want to read some objects containing a vector-member, which means that the currently read object is copied to insert it into the vector (which means to copy construct the whole vector-member) and immediately afterwards it is constructed completely new when reading the new object.
So this copy is unneeded and just slows down the whole thing. Is there a way to eliminate this unneeded copy (I imagine the istream_iterator could just move the object out and construct a new one afterwards, but maybe there is another way to avoid this copying)?
For illustration of the problem see some example code:
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>
struct My_class {
std::vector<std::string> vec;
// default/copy/move constructors to illustrate what constructors are being invoked
My_class() : vec{} {
std::cout << " default" << std::endl;
}
My_class(My_class const &other) : vec{other.vec} {
std::cout << " copy" << std::endl;
}
My_class(My_class &&other) noexcept : vec{std::move(other.vec)} {
std::cout << " move" << std::endl;
}
friend std::istream& operator>>(std::istream& is, My_class& i) {
i.vec.clear(); // needed since i still contains the data from the previous object read
std::string l;
// in other use cases some more lines would be read or maybe the lines get preprocessed (e.g. split it up) and then inserted into the vector
std::getline(is, l);
i.vec.push_back(l);
return is;
}
};
int main(int argc, char* argv[]) {
std::ifstream ifs{argv[1]};
std::vector<My_class> data{std::istream_iterator<My_class>{ifs}, {}};
for(const auto& _ele : data){
for(const auto& ele : _ele.vec){
std::cout << ele << " ";
}
std::cout << std::endl;
}
}
Second try:
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>
#include <memory>
#include <ranges>
static int cpy_cnt = 0;
struct My_class {
std::vector<std::string> vec;
// default constructor
My_class() : vec{} {
std::cout << vec.size() << " default" << std::endl;
}
My_class(My_class&&) = default;
My_class(const My_class&) = default;
My_class& operator=(My_class&&) = default;
My_class& operator=(const My_class&) = default;
friend std::istream& operator>>(std::istream& is, My_class& i) {
i.vec.clear();
std::string l;
std::getline(is, l);
i.vec.push_back(l);
return is;
}
};
int main(int argc, char* argv[]) {
std::ifstream ifs{argv[1]};
std::cout << "iter create" << std::endl;
auto tmp = std::views::istream<My_class>(ifs);
std::cout << "vec create" << std::endl;
std::vector<My_class> data2{};
}
gives an error containing error: no matching function for call to '__begin' (not gonna paste the complete long error message of clang).
I've been fiddling around with gcc and clang, and it seems that the error only occurs with clang, while with gcc everything is just fine.
Configuring with
cd build && cmake -DCMAKE_BUILD_TYPE=Debug -D CMAKE_C_COMPILER=clang -D CMAKE_CXX_COMPILER=clang++ -DCMAKE_EXPORT_COMPILE_COMMANDS=1 .. and cd build && cmake -DCMAKE_BUILD_TYPE=Debug -D CMAKE_C_COMPILER=gcc -D CMAKE_CXX_COMPILER=g++ -DCMAKE_EXPORT_COMPILE_COMMANDS=1 ..
$ g++ --version
g++ (GCC) 12.2.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ clang++ --version
clang version 14.0.6
Target: x86_64-pc-linux-gnu
Thread model: posix
A:
As you have found out, dereferencing istream_iterator gives you a copy. One workaround for you is to use views::istream and ranges::move:
std::ifstream ifs{argv[1]};
auto ifs_view = std::views::istream<My_class>(ifs);
std::vector<My_class> data;
std::ranges::move(ifs_view, std::back_inserter(data));
Depending on your use case, you can also loop over ifs_view directly to avoid any construction (since your operator>> inserts data directly into the underlying vector):
for(const auto& _ele : ifs_view) {
// for loop logics here...
}
Side note: istream_view requires the object to be assignable, and your My_class's assignment operators are implicitly deleted right now (because the copy/move constructors are user-declared), so you would need to either provide at least one assignment operator yourself, or just default the copy/move special members.
|
cpp use istream_iterator to populate a vector, avoid copy
|
This question seems to be quite similar to others around here, but some crucial details are different in this case (in the posts I've seen, the element is always copied from the iterator into the vector).
Sticking with a quite common idiom (as far as I've read), I'm doing file -> ifstream -> istream_iterator -> vector (this approach calls >> on the vector's element type). The problem I've got with this is that, according to the reference, istream_iterator
Dereferencing only returns a copy of the most recently read object
In my case I want to read objects containing a vector member, which means the currently read object is copied in order to insert it into the vector (copy-constructing the whole vector member), and immediately afterwards it is constructed completely anew when the next object is read.
So this copy is unneeded and just slows the whole thing down. Is there a way to eliminate it (I imagine the istream_iterator could just move the object out and construct a new one afterwards, but maybe there is another way to avoid the copying)?
For illustration of the problem see some example code:
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>
struct My_class {
std::vector<std::string> vec;
// default/copy/move constructors to illustrate what constructors are being invoked
My_class() : vec{} {
std::cout << " default" << std::endl;
}
My_class(My_class const &other) : vec{other.vec} {
std::cout << " copy" << std::endl;
}
My_class(My_class &&other) noexcept : vec{std::move(other.vec)} {
std::cout << " move" << std::endl;
}
    friend std::istream& operator>>(std::istream& is, My_class& i) {
i.vec.clear(); // needed since i still contains the data from the previous object read
std::string l;
// in other use cases some more lines would be read or maybe the lines get preprocessed (e.g. split it up) and then inserted into the vector
std::getline(is, l);
i.vec.push_back(l);
return is;
}
};
int main(int argc, char* argv[]) {
std::ifstream ifs{argv[1]};
    std::vector<My_class> data{std::istream_iterator<My_class>{ifs}, {}};
for(const auto& _ele : data){
for(const auto& ele : _ele.vec){
std::cout << ele << " ";
}
std::cout << std::endl;
}
}
Second try:
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>
#include <memory>
#include <ranges>
static int cpy_cnt = 0;
struct My_class {
std::vector<std::string> vec;
// default constructor
My_class() : vec{} {
std::cout << vec.size() << " default" << std::endl;
}
My_class(My_class&&) = default;
My_class(const My_class&) = default;
My_class& operator=(My_class&&) = default;
    My_class& operator=(const My_class&) = default;
friend std::istream& operator>>(std::istream& is, My_class& i) {
i.vec.clear();
std::string l;
std::getline(is, l);
i.vec.push_back(l);
return is;
}
};
int main(int argc, char* argv[]) {
std::ifstream ifs{argv[1]};
std::cout << "iter create" << std::endl;
auto tmp = std::views::istream<My_class>(ifs);
std::cout << "vec create" << std::endl;
std::vector<My_class> data2{};
}
gives an error containing error: no matching function for call to '__begin' (I won't paste clang's complete, long error message).
I've been fiddling around with gcc and clang, and it seems that the error occurs only with clang, while with gcc everything is fine.
Configuring with
cd build && cmake -DCMAKE_BUILD_TYPE=Debug -D CMAKE_C_COMPILER=clang -D CMAKE_CXX_COMPILER=clang++ -DCMAKE_EXPORT_COMPILE_COMMANDS=1 .. and cd build && cmake -DCMAKE_BUILD_TYPE=Debug -D CMAKE_C_COMPILER=gcc -D CMAKE_CXX_COMPILER=g++ -DCMAKE_EXPORT_COMPILE_COMMANDS=1 ..
$ g++ --version
g++ (GCC) 12.2.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ clang++ --version
clang version 14.0.6
Target: x86_64-pc-linux-gnu
Thread model: posix
|