content (string) | title (string) | question (string) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string)
---|---|---|---|---|---|---|---|---
Q:
Loading a random image for every different screen in Processing
I am trying to find out how to change the background to a different image every time the screen changes. A bit of context: I am trying to make a quiz game, and every time I click on an answer, the screen changes. With the screen changing to a different question, I want to change the background as well. There will be 10 questions, so I have 10 images.
So far, I am just mapping out the process on paper (basically, how I should tackle it step by step), but I am stuck on this one. Just looking for some orientation. Thank you!
A:
You just need a PImage for each background:
PImage background1;
PImage background2;
...
Then in draw(), depending on the question, you can choose which image to draw through a switch statement:
switch (question)
{
  case 1 -> image(background1, 0, 0);
  case 2 -> image(background2, 0, 0);
  ...
}
And if you want, you can have questions be objects, each storing all the necessary information, including its own background.
class Question
{
  PImage background;
  String message;

  void draw()
  {
    image(background, 0, 0);
    text(message, 100, 100);
  }
}

@Override
public void draw() // draw loop
{
  currentQuestion.draw();
}

Where currentQuestion is the currently "selected" question in an array/list.
This way, when you change questions, the image drawn by draw() in Question will change too.
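For completeness, a minimal sketch of the loading step (the file names q1.png ... q10.png and the click handling are assumptions for illustration): in Processing, images are loaded once in setup() with loadImage(), never inside draw().
PImage[] backgrounds = new PImage[10];
int question = 0; // index of the current question

void setup()
{
  size(800, 600);
  // Load each background once; loading inside draw() would re-read
  // the files on every frame.
  for (int i = 0; i < backgrounds.length; i++) {
    backgrounds[i] = loadImage("q" + (i + 1) + ".png");
  }
}

void draw()
{
  image(backgrounds[question], 0, 0);
}

void mousePressed()
{
  // Advance to the next question (and its background) on each click.
  question = (question + 1) % backgrounds.length;
}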
|
Loading a random image for every different screen in Processing
|
I am trying to find how to change the background to a different image every time the screen changes. A bit of context: I am trying to make a quiz game, and every time I click on an answer, the screen changes. With the screen changing to a different question, I want to change the background as well. There will be 10 questions so I have 10 images. S
o far, I am just mapping out the process on paper, basically, how should I tackle it step by step, but I am stuck on this one. Just looking for some orientation. Thank you!
|
[
"You just need a PImage for each background:\nPImage background1;\nPImage background2;\n...\n\nThen in draw, depending on the question, you can say which image to draw through a switch statement:\nswitch (question)\n{\n case 1 -> image(background1, 0, 0);\n case 2 -> image(background2, 0, 0);\n ...\n}\n\nAnd if you want, you can have questions be objects which store all the necessary information including its own background.\nclass Question\n{\n PImage background;\n String message;\n \n void draw()\n {\n image(background, 0, 0);\n text(message, 100, 100);\n }\n}\n\n@Override\npublic void draw() // draw loop\n{\n currentQuestion.draw();\n}\n\nWhere currentQuestion is the currently \"selected\" question in an array/list.\nThis way when you supposedly change questions, the image drawn by draw() in Question will change too.\n"
] |
[
0
] |
[] |
[] |
[
"processing"
] |
stackoverflow_0074664314_processing.txt
|
Q:
How to know through which of the 2 channels the messages arrive [WebRTC]
I'm trying to identify which of the two channels a message came through, but I don't know how I can tell them apart.
How could I manage to do it?
let channel = null;
let channel2 = null;
channel = connection.createDataChannel('data');
channel2 = connection.createDataChannel('data2');
connection.ondatachannel = (event) => {
  // I'm sure it's here, but I don't know how to tell the difference.
  channel = event.channel;
  channel2 = event.channel;
  // Regardless of whether channel or channel2 is typed, the messages are mixed up.
  channel.onmessage = (event) => {}
  channel2.onmessage = (event) => {}
}
A:
Data channels have a label property, which you set to 'data' and 'data2' respectively; the label gets signalled from the creator to the receiver. You can inspect event.channel.label and make decisions based on that.
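A minimal sketch of that dispatch, reusing the variables from the question:
connection.ondatachannel = (event) => {
  // The label is the string that was passed to createDataChannel()
  // on the creating side.
  if (event.channel.label === 'data') {
    channel = event.channel;
    channel.onmessage = (e) => { /* messages from 'data' */ };
  } else if (event.channel.label === 'data2') {
    channel2 = event.channel;
    channel2.onmessage = (e) => { /* messages from 'data2' */ };
  }
};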
|
How to know through which of the 2 channels the messages arrive [WebRTC]
|
I'm trying to identify which of the two channels a message came through, but I don't know how I can tell them apart.
How could I manage to do it?
let channel = null;
let channel2 = null;
channel = connection.createDataChannel('data');
channel2 = connection.createDataChannel('data2');
connection.ondatachannel = (event) => {
// I'm sure it's here, but I don't know how to tell the difference.
channel = event.channel;
channel2 = event.channel;
// Regardless of whether channel or channel2 is typed, the messages are mixed up.
channel.onmessage = (event) => {}
channel2.onmessage = (event) => {}
}
|
[
"Datachannels have a 'label' property that you set to data and data2 respectively which gets signalled from the creator to the receiver. You can inspect event.channel.label and make decisions based on that.\n"
] |
[
4
] |
[] |
[] |
[
"javascript",
"openwebrtc",
"webrtc"
] |
stackoverflow_0074663917_javascript_openwebrtc_webrtc.txt
|
Q:
Why am I getting a JWT with a bunch of periods/dots back from Google OAuth?
In a web application I'm running, I suddenly started getting these odd tokens containing a huge string of periods at the end.
This happens even when I bypass my application code and call the function from the Google OAuth library directly.
Here's an example token:
ya29.c.Kp8BCgi0lxWtUt-_[Normal JWT stuff, redacted for security]yVvGk...............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Could this be an upstream issue with Google OAuth? Has anyone else seen tokens like this?
A:
Same here, it suddenly started. Had to remove them from the received token, now it works again.
A:
I found the problem is on the Google server-side. It's actually returning the JWT with the trailing "." chars. I'm updating Chilkat to automatically trim the trailing "." chars if found before returning the JWT.
A:
Same with me. And it leads to Error: Invalid login: 555 5.5.2 Syntax error for my nodeMailer application.
Solved with the following code:
tokensCache.access_token = tokensCache.access_token.replace(/\.+$/, '');
A:
The problem is that clients should be able to handle the token sizes declared in https://developers.google.com/identity/protocols/oauth2#size. Also, tokens must be opaque to clients, meaning that assuming a token starts with "ya29.blabla" is wrong. Instead, the token must be parsed as a web-safe base64-encoded string; the standard is https://www.base64encode.org/enc/safe/
|
Why am I getting a JWT with a bunch of periods/dots back from Google OAuth?
|
In a web application I'm running, I suddenly started getting these odd tokens containing a huge string of periods at the end.
This happens even when I bypass my application code and call the function from the Google OAuth library directly.
Here's an example token:
ya29.c.Kp8BCgi0lxWtUt-_[Normal JWT stuff, redacted for security]yVvGk...............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Could this be an upstream issue with Google OAuth? Has anyone else seen tokens like this?
|
[
"Same here, it suddenly started. Had to remove them from the received token, now it works again.\n",
"I found the problem is on the Google server-side. It's actually returning the JWT with the trailing \".\" chars. I'm updating Chilkat to automatically trim the trailing \".\" chars if found before returning the JWT.\n",
"Same with me. And it leads to Error: Invalid login: 555 5.5.2 Syntax error for my nodeMailer application.\nSolved with the following code:\ntokensCache.access_token = tokensCache.access_token.replace(/\\.+$/, '');\n\n",
"The problem is that clients should be able to handle the token sizes declared in https://developers.google.com/identity/protocols/oauth2#size. Also, tokens must be opaque to clients, meaning, assuming token start with \"ya29.blabla\" is wrong. Instead, the token must be parsed as a string encoded Web-safe base64 which standard is https://www.base64encode.org/enc/safe/\n"
] |
[
3,
3,
2,
0
] |
[
"In fact the dots make no difference. You can still use the access_token to call apis. If you get an error response, you'd better check a further reason. Do you set the correct scope (https://developers.google.com/identity/protocols/oauth2/scopes)? Does the\npermission of the service account is right?\n"
] |
[
-3
] |
[
"google_oauth",
"jwt"
] |
stackoverflow_0068654502_google_oauth_jwt.txt
|
Q:
Calculating multilabel recall for this problem
I have a table with two columns, and the two entries of a row show that they are related:
Col1 | Col2
---|---
a | A
b | B
a | C
c | A
b | D
Here a is related to A and C, b to B and D, and c to A, meaning the same entry in Col1 might have multiple related labels in Col2. I trained a machine learning model to quantify the relationship between Col1 and Col2 by creating vector embeddings of Col1 and Col2 and optimizing the cosine similarity between the two vectors. Now I want to test my model by calculating recall on a test set: at various recall@N, what proportion of these positive relationships can be retrieved? Suppose I have normalized vector representations of all entries in each column; then I can calculate the cosine distance between them as:
cosine_distance = torch.mm(col1_feature, col2_feature.t())
which gives a matrix of distances between all pairs that can be formed between col1 and col2.
dist(a,A) | dist(a,B) | dist(a,C) | dist(a,A) | dist(a,D)
dist(b,A) | dist(b,B) | dist(b,C) | dist(b,A) | dist(b,D)
dist(a,A) | dist(a,B) | dist(a,C) | dist(a,A) | dist(a,D)
dist(c,A) | dist(c,B) | dist(c,C) | dist(c,A) | dist(c,D)
dist(b,A) | dist(b,B) | dist(b,C) | dist(b,A) | dist(b,D)
I can then calculate which pairs have the largest distance to compute recall@k. My question is: how can I make this efficient for millions of rows? I found this module in PyTorch: torchmetrics.classification.MultilabelRecall (https://torchmetrics.readthedocs.io/en/stable/classification/recall.html), which seems useful, but it requires specifying the number of labels. In my case, each unique entry of col1 can have a variable number of labels. Any ideas?
A:
You can use a clustering algorithm to group the entries in Col1 and Col2 into clusters. Then you can use the MultilabelRecall metric to calculate the recall for each cluster. This way, you don't have to specify the number of labels for each entry in Col1.
A:
If you have a large number of rows in your table, it may be inefficient to calculate the cosine distance between all pairs of entries in Col1 and Col2. One way to make this more efficient is to use approximate nearest neighbor (ANN) algorithms, which can quickly find the closest vectors in a high-dimensional space. These algorithms typically involve constructing a data structure that allows for efficient search, such as a k-d tree or locality-sensitive hashing. Once you have built this data structure, you can use it to quickly find the entries in Col2 that are closest to a given entry in Col1, and then calculate the recall@k for those entries.
Here is an example of how you might use an ANN algorithm to calculate the recall@k in your case. This code uses the k-d tree implementation in the scikit-learn library to index the vectors in Col2, then finds the nearest Col2 neighbors of each vector in Col1 and calculates the recall@k from them. (Since the vectors are L2-normalized, Euclidean nearest neighbors coincide with the highest cosine similarity.)
import numpy as np
from sklearn.neighbors import KDTree

# Index only the vectors in Col2; a tree that also contained the
# Col1 vectors would return Col1 entries as "neighbors"
tree = KDTree(col2_feature)

# For each vector in Col1, find its k nearest neighbors in Col2.
# query returns a (distances, indices) pair of arrays
distances, indices = tree.query(col1_feature, k=k)

# Calculate recall@k: the fraction of each entry's true labels that
# appear among its k nearest neighbors
recall_at_k = 0
for i, neighbor_indices in enumerate(indices):
    # Labels of the k nearest neighbors of the i-th Col1 vector
    neighbor_labels = set(col2[j] for j in neighbor_indices)

    # Fraction of the true labels that were retrieved
    true_labels = set(true_labels_for_col1[i])
    recall_at_k += len(neighbor_labels & true_labels) / len(true_labels)

# Average recall@k over all vectors in Col1
average_recall_at_k = recall_at_k / len(col1_feature)
|
Calculating multilabel recall for this problem
|
I have a table with two columns, and the two entries of a row show that they are related:
Col1
Col2
a
A
b
B
a
C
c
A
b
D
Here a is related to A, C and b to B, D and c to A, meaning the same entry in col1 might have multiple labels in col2 related. I trained a Machine Learning model to quantify the relationship between Col1 and Col2 by creating a vector embedding of Col1 and Col2 and optimizing the cosine_similarity between the two vectors. Now, I want to test my model by calculating Recall on a test set. I want to check if at various recall@N, what proportion of these positive relationships can be retrieved. Suppose I have normalized vector representation of all entries in each column, then I can calculate the cosine distance between them as :
cosine_distance = torch.mm(col1_feature, col2_feature.t())
which gives a matrix of distances between all pairs that can be formed between col1 and col2.
dist(a,A)
dist(a,B)
dist(a,C)
dist(a,A)
dist(a, D)
dist(b,A)
dist(b,B)
dist(b,C)
dist(b,A)
dist(b, D)
dist(a,A)
dist(a,B)
dist(a,C)
dist(a,A)
dist(a, D)
dist(c,A)
dist(c,B)
dist(c,C)
dist(c,A)
dist(c, D)
dist(b,A)
dist(b,B)
dist(b,C)
dist(b,A)
dist(b, D)
I can then calculate which pairs have largest distance to calculate recall@k. My question is how can I make this efficient for a millions of rows. I found out this module in pytorch: torchmetrics.classification.MultilabelRecall(https://torchmetrics.readthedocs.io/en/stable/classification/recall.html), that seems to be useful but for that I need to specify number of labels. In my case, I can have variable number of labels for each unique entry of col1. Any ideas?
|
[
"You can use a clustering algorithm to group the entries in Col1 and Col2 into clusters. Then you can use the MultilabelRecall metric to calculate the recall for each cluster. This way, you don't have to specify the number of labels for each entry in Col1.\n",
"If you have a large number of rows in your table, it may be inefficient to calculate the cosine distance between all pairs of entries in Col1 and Col2. One way to make this more efficient is to use approximate nearest neighbor (ANN) algorithms, which can quickly find the closest vectors in a high-dimensional space. These algorithms typically involve constructing a data structure that allows for efficient search, such as a k-d tree or locality-sensitive hashing. Once you have built this data structure, you can use it to quickly find the entries in Col2 that are closest to a given entry in Col1, and then calculate the recall@k for those entries.\nHere is an example of how you might use an ANN algorithm to calculate the recall@k in your case. This code uses the k-d tree implementation in the scikit-learn library to index the vectors in Col1 and Col2, and then finds the nearest neighbors of each vector in Col1 using the k-d tree. It then calculates the recall@k for the nearest neighbors of each vector in Col1.\nfrom sklearn.neighbors import KDTree\n\n# Create a k-d tree to index the vectors in Col1 and Col2\ntree = KDTree(np.concatenate((col1_feature, col2_feature), axis=0))\n\n# Find the nearest neighbors of each vector in Col1 using the k-d tree\n# This returns a tuple containing the indices of the nearest neighbors\n# in Col2 and the distances to those neighbors\nneighbors = tree.query(col1_feature, k=k)\n\n# Calculate the recall@k for each vector in Col1\nrecall_at_k = 0\nfor i, (neighbor_indices, distances) in enumerate(neighbors):\n # Get the labels for the nearest neighbors of the current vector\n neighbor_labels = col2[neighbor_indices]\n\n # Count the number of true labels among the nearest neighbors\n true_labels = 0\n for label in neighbor_labels:\n if label in true_labels_for_col1[i]:\n true_labels += 1\n\n # Calculate the recall@k for the current vector\n recall_at_k += true_labels / k\n\n# Calculate the average recall@k over all vectors in Col1\naverage_recall_at_k = recall_at_k / len(col1)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"machine_learning",
"precision_recall",
"python",
"pytorch"
] |
stackoverflow_0074633636_machine_learning_precision_recall_python_pytorch.txt
|
Q:
Unit Test for a function with API call in Jasmine
I have a utility function defined in my utils.tsx file:
// resolveAxiosInstance creates an axios instance
const createAxiosInstance = resolveAxiosInstance();
export const getItemList = async params => {
  const axios = await createAxiosInstance;
  const res = await axios.get("/my-url", {params});
  return res.data;
}
And I am using the getItemList utility in my component mycomponent.tsx. It is invoked on click of a button but before calling that API the click event sets some states as well. Here's the code of my component:
export const MyComponent = () => {
  //rest of component code
  const clickMe = () => {
    setIsLoading(true);
    const data = {
      // item and price are vars whose values are filled by user through input text
      itemName: item,
      itemPrice: price,
    };
    getItemList(data).then(res => {
      if (res) {
        setItemData({
          itemName: name,
          itemPrice: price,
          itemDiscount: res.disc,
        });
      }
    }, err => console.log(err));
  }
  return (
    //rest of the component code
    <div>
      <Button onClick={clickMe} data-testid="update">Click Me</Button>
    </div>
  )
}
I want to write a unit test case in Jasmine to test the on click functionality. I am able to invoke the on click function by using simulate("click") on the button element. But it doesn't execute the API call and that's understandable. To execute the API call I tried to use spyOn but it didn't help. It returns the error that getItemList is not declared configurable. Here's my test case:
it("should show data on click me", () => {
const wrapper = mount(<MyComponent />);
let elem = wrapper.find(MyComponent);
const mockSpy = Jasmine.createSpy("getItemList").and.returnValue(Promise.resolve(mockResp))
let btn = elem.find('[data-testid="update"]');
btn.at(0).simulate("click");
elem = elem.update();
expect(elem.find("table").length).toBe(1);
});
My question is how can I write a unit test for my use case where I trigger a button click and it calls a function which does something, and then calls an API and updates the table on my view as per the API response.
A:
You can mock fetch or XMLHttpRequest, which are used by axios.
Jasmine has an official jasmine-ajax library which allows mocking the XMLHttpRequest object but not fetch. I think that in 99% of cases axios will pick fetch instead of the old XMLHttpRequest, so I recommend using something like fetch-mock.
it("should show data on click me", /*⚠️*/async () => {
/*⚠️*/fetchMock.mock('/my-url', {data: {disc: 'foo'}});
const wrapper = mount(<MyComponent />);
let elem = wrapper.find(MyComponent);
let btn = elem.find('[data-testid="update"]');
btn.at(0).simulate("click");
elem = elem.update();
// because fetch async we should wait response
expectAsync(/*⚠️*/(await waitFor("table")).length).toBe(1);
function waitFor(selector) {
return new Promise((resolve) => {
checkSelector();
function checkSelector() {
const result = elem.find(selector);
if (result.length) return resolve(result);
setTimeout(checkSelector);
}
});
}
});
A:
To test the on click functionality in your use case, you can use the spyOn method provided by the Jasmine testing framework. The spyOn method allows you to create a spy that tracks the calls to a specific function and returns a specified value.
In your case, you can use spyOn to track the calls to the getItemList function and return a value that simulates the response from the API. Here is an example of how you can use spyOn in your test case:
it("should show data on click me", () => {
// Create a spy for the getItemList function
spyOn(MyComponent.prototype, 'getItemList').and.returnValue(Promise.resolve(mockResp));
const wrapper = mount(<MyComponent />);
let elem = wrapper.find(MyComponent);
// Simulate a button click
let btn = elem.find('[data-testid="update"]');
btn.at(0).simulate("click");
// Update the wrapper to reflect the changes made by the function
elem = elem.update();
// Make assertions about the expected behavior
expect(elem.find("table").length).toBe(1);
});
---------------- Update code without using Promise ---------
It looks like you're trying to test a button click that triggers an API call in your React component. To do this, you can use a combination of simulate to trigger the button click and spyOn to mock the API call.
Here's an example of how you can test this:
it("should show data on click me", () => {
const wrapper = mount(<MyComponent />);
// Use spyOn to mock the getItemList function
spyOn(MyComponent.prototype, 'getItemList').and.returnValue(mockResp);
// Find the button and simulate a click on it
const btn = wrapper.find('[data-testid="update"]');
btn.at(0).simulate("click");
// Update the wrapper to get the updated state
wrapper.update();
// Expect the table to be rendered
expect(wrapper.find("table").length).toBe(1);
});
In this test, we first use spyOn to mock the getItemList function so that it returns the mockResp value instead of actually making an API call. Then we use simulate to trigger a click on the button, update the wrapper to get the updated state, and finally check that the table is rendered.
Note that spyOn only works if the function you're trying to mock is declared on the component's prototype. In your code, getItemList seems to be a regular function, not a method on the component. You may need to update your component to make this test work.
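A pattern that avoids the "not declared configurable" error (a sketch, not part of the original answers): ES module namespace exports are read-only, so spyOn cannot replace them directly; exporting the utility on a plain object and spying on that object's property works, because plain object properties are configurable.
// utils.tsx: export an object wrapper in addition to the function
export const api = { getItemList };

// mycomponent.tsx: call the utility through the wrapper
api.getItemList(data).then(/* ... */);

// in the test: spy on the wrapper's property
spyOn(api, 'getItemList').and.returnValue(Promise.resolve(mockResp));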
|
Unit Test for a function with API call in Jasmine
|
I have a utility function defined in my utils.tsx file:
// resolveAxiosInstance creates an axios instance
const createAxiosInstance = resolveAxiosInstance();
export const getItemList = params => {
const axios = await createAxiosInstance;
const res = await axios.get("/my-url", {params});
return res.data;
}
And I am using the getItemList utility in my component mycomponent.tsx. It is invoked on click of a button but before calling that API the click event sets some states as well. Here's the code of my component:
export const MyComponent = () => {
//rest of component code
const clickMe = () => {
setIsLoading(true);
const data = {
// item and price are vars whose values are filled by user through input text
itemName: item,
itemPrice: price,
};
getItemList(data).then(res => {
if (res) {
setItemData({
itemName: name,
itemPrice: price,
itemDiscount: res.disc,
});
}
}, err => console.log(err));
}
return (
//rest of the component code
<div>
<Button onClick={clickMe} data-testid="update">Click Me</Button>
</div>
)
}
I want to write a unit test case in Jasmine to test the on click functionality. I am able to invoke the on click function by using simulate("click") on the button element. But it doesn't execute the API call and that's understandable. To execute the API call I tried to use spyOn but it didn't help. It returns the error that getItemList is not declared configurable. Here's my test case:
it("should show data on click me", () => {
const wrapper = mount(<MyComponent />);
let elem = wrapper.find(MyComponent);
const mockSpy = Jasmine.createSpy("getItemList").and.returnValue(Promise.resolve(mockResp))
let btn = elem.find('[data-testid="update"]');
btn.at(0).simulate("click");
elem = elem.update();
expect(elem.find("table").length).toBe(1);
});
My question is how can I write a unit test for my use case where I trigger a button click and it calls a function which does something, and then calls an API and updates the table on my view as per the API response.
|
[
"You can mock fetch or XMLHttpRequest which used by axios.\njasmine has official jasmine-ajax library which allows mocking XMLHttpRequest object but not fetch. I think that in 99% axios will pick the fetch instead of old XMLHttpRequest so I recommend to use something like fetch-mock\nit(\"should show data on click me\", /*⚠️*/async () => {\n /*⚠️*/fetchMock.mock('/my-url', {data: {disc: 'foo'}});\n const wrapper = mount(<MyComponent />);\n let elem = wrapper.find(MyComponent);\n let btn = elem.find('[data-testid=\"update\"]');\n btn.at(0).simulate(\"click\");\n elem = elem.update();\n\n // because fetch async we should wait response\n expectAsync(/*⚠️*/(await waitFor(\"table\")).length).toBe(1); \n\n function waitFor(selector) {\n return new Promise((resolve) => {\n checkSelector();\n function checkSelector() {\n const result = elem.find(selector);\n if (result.length) return resolve(result);\n setTimeout(checkSelector);\n }\n });\n }\n});\n\n",
"To test the on click functionality in your use case, you can use the spyOn method provided by the Jasmine testing framework. The spyOn method allows you to create a spy that tracks the calls to a specific function and returns a specified value.\nIn your case, you can use spyOn to track the calls to the getItemList function and return a value that simulates the response from the API. Here is an example of how you can use spyOn in your test case:\nit(\"should show data on click me\", () => {\n // Create a spy for the getItemList function\n spyOn(MyComponent.prototype, 'getItemList').and.returnValue(Promise.resolve(mockResp));\n\n const wrapper = mount(<MyComponent />);\n let elem = wrapper.find(MyComponent);\n\n // Simulate a button click\n let btn = elem.find('[data-testid=\"update\"]');\n btn.at(0).simulate(\"click\");\n\n // Update the wrapper to reflect the changes made by the function\n elem = elem.update();\n\n // Make assertions about the expected behavior\n expect(elem.find(\"table\").length).toBe(1);\n});\n\n---------------- Update code without using Promise ---------\nIt looks like you're trying to test a button click that triggers an API call in your React component. To do this, you can use a combination of simulate to trigger the button click and spyOn to mock the API call.\nHere's an example of how you can test this:\nit(\"should show data on click me\", () => {\n const wrapper = mount(<MyComponent />);\n\n // Use spyOn to mock the getItemList function\n spyOn(MyComponent.prototype, 'getItemList').and.returnValue(mockResp);\n\n // Find the button and simulate a click on it\n const btn = wrapper.find('[data-testid=\"update\"]');\n btn.at(0).simulate(\"click\");\n\n // Update the wrapper to get the updated state\n wrapper.update();\n\n // Expect the table to be rendered\n expect(wrapper.find(\"table\").length).toBe(1);\n});\n\nIn this test, we first use spyOn to mock the getItemList function so that it returns the mockResp value instead of actually making an API call. Then we use simulate to trigger a click on the button, update the wrapper to get the updated state, and finally check that the table is rendered.\nNote that spyOn only works if the function you're trying to mock is declared on the component's prototype. In your code, getItemList seems to be a regular function, not a method on the component. You may need to update your component to make this test work.\n"
] |
[
0,
0
] |
[] |
[] |
[
"jasmine",
"javascript",
"reactjs"
] |
stackoverflow_0074570266_jasmine_javascript_reactjs.txt
|
Q:
Collaborating on Github Branches with sensitive code behind
I am developing a website where I would like to keep large swathes of code unreadable / private / secret / need-to-know from those team members who don't need to see it, so the front-end dev and the back-end dev will be working on different branches.
There does not seem to be that sort of granularity in any of the version control software that I have explored.
I purchased the Teams upgrade in GitHub, thinking it would provide some sort of role definition or branch isolation (for reading), BUT it still allows all team members with access to the repo to see the other branches and does not allow me to restrict certain members to particular branches.
Or am I misunderstanding something ?
A:
I confirm branches cannot be hidden with Git itself, or with a repository hosting service like GitHub.
You would need two repositories:
one for front/back end "common" code
one for front/back end "secret" code
You invite everybody as collaborators to the first repository.
But you add only privileged team members to the second.
That last team has access to the first "common code" repository and can open pull requests to the second repository, so that it benefits from the work done on the first repository.
|
Collaborating on Github Branches with sensitive code behind
|
I am developing a website where I would like to keep large swathes of code unreadable / private / secret / need to know .. whatever.. from those team members that don't need to see it .. so front end dev and back end dev will be working on different branches..
There does not seem to be that sort of granularity in any of the version control software that I have explored.
I purchased the teams upgrade in github, thinking it would provide some sort of role definition or branch isolation (reading) BUT it still allows all team members with access to the repo to see the other branches and does not allow me to isolate certain members to particular branches.
Or am I misunderstanding something ?
|
[
"I confirm branches cannot be hidden with Git itself, or with a repository hosting service like GitHub.\nYou would need two repositories:\n\none for front/back end \"common\" code\none for front/back end \"secret\" code\n\nYou invite everybody as collaborators to the first repository.\nBut you add only privileged team members to the second.\nThat last team has access to the first \"common code\" repository, and can initiate pull request to the second repository, for them to benefit from the work done on the first repository.\n"
] |
[
0
] |
[] |
[] |
[
"github"
] |
stackoverflow_0074652484_github.txt
|
Q:
Quadratic Equation Solver in JavaScript
For some reason, when a=1, b=1, c=-1, I am not getting the desired result of -1.6180339887499 and 0.61803398874989. Instead, I get 2 and 1. What am I doing wrong?
function solve(a,b,c){
var result = (((-1*b) + Math.sqrt(Math.pow(b,2)) - (4*a*c))/(2*a));
var result2 = (((-1*b) - Math.sqrt(Math.pow(b,2)) - (4*a*c))/(2*a));
return result + "<br>" + result2;
}
document.write( solve(1,1,-1) );
A:
You need another grouping: the whole discriminant (Math.pow(b, 2) - 4 * a * c) must be inside Math.sqrt, not just the squared term:
var result = (((-1 * b) + Math.sqrt(Math.pow(b, 2)) - (4 * a * c)) / (2 * a)); // wrong
var result2 = (((-1 * b) - Math.sqrt(Math.pow(b, 2)) - (4 * a * c)) / (2 * a)); // wrong
vs
var result = (-1 * b + Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a); // right
var result2 = (-1 * b - Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a); // right
All together:
function solve(a, b, c) {
  var result = (-1 * b + Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a);
  var result2 = (-1 * b - Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a);
  return result + "<br>" + result2;
}
document.write(solve(1, 1, -1));
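With the grouping fixed, solve(1, 1, -1) writes 0.6180339887498949 and -1.618033988749895, the two roots the question expected.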
A:
Try computing the discriminant first and branching on its sign (cleaned up so it actually runs as JavaScript; a, b, c are set to the question's values):
var a = 1, b = 1, c = -1, discriminant, root1, root2, r_Part, imag_Part;

discriminant = b * b - 4 * a * c;

if (discriminant > 0) {
  root1 = (-b + Math.sqrt(discriminant)) / (2 * a);
  root2 = (-b - Math.sqrt(discriminant)) / (2 * a);
  document.write("roots are real: " + root1 + " and " + root2);
} else if (discriminant == 0) {
  root1 = root2 = -b / (2 * a);
  document.write("roots are equal: " + root1);
} else {
  r_Part = -b / (2 * a);
  imag_Part = Math.sqrt(-discriminant) / (2 * a);
  document.write("real part = " + r_Part + " and imaginary part = " + imag_Part);
}
A:
function solve(a, b, c) {
  var result = ((-1 * b + Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a)).toFixed(3);
  var result2 = ((-1 * b - Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a)).toFixed(3);
  return "{" + result + "," + result2 + "}";
}
document.write(solve(1, -4, -7));
A:
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <title></title>
  <link rel="stylesheet" href="">
</head>
<body>
  <script type="text/javascript">
    var a = 2;
    var b = 9;
    var c = 2;

    var root_part = Math.sqrt(b * b - 4 * a * c);
    var denom = 2 * a;

    if (isNaN(root_part)) {
      document.write("<br> Impossible to solve");
    } else {
      var root1 = (-b + root_part) / denom;
      var root2 = (-b - root_part) / denom;
      document.write("<br>First Root is=" + root1 + " and Second Root is=" + root2);
    }
  </script>
</body>
</html>
|
Quadratic Equation Solver in JavaScript
|
For some reason, when a=1, b=1, c=-1, I am not getting the desired result of -1.6180339887499 and 0.61803398874989. Instead, I get 2 and 1. What am I doing wrong?
function solve(a,b,c){
var result = (((-1*b) + Math.sqrt(Math.pow(b,2)) - (4*a*c))/(2*a));
var result2 = (((-1*b) - Math.sqrt(Math.pow(b,2)) - (4*a*c))/(2*a));
return result + "<br>" + result2;
}
document.write( solve(1,1,-1) );
|
[
"You need another grouping:\nvar result = (((-1 * b) + Math.sqrt(Math.pow(b, 2)) - (4 * a * c)) / (2 * a)); // wrong\nvar result2 = (((-1 * b) - Math.sqrt(Math.pow(b, 2)) - (4 * a * c)) / (2 * a)); // wrong\n\nvs\nvar result = (-1 * b + Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a); // right\nvar result2 = (-1 * b - Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a); // right\n\nAll together:\n\n\nfunction solve(a, b, c) {\r\n var result = (-1 * b + Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a);\r\n var result2 = (-1 * b - Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a);\r\n return result + \"<br>\" + result2;\r\n}\r\ndocument.write(solve(1, 1, -1));\n\n\n\n",
"Try\nvar a, b, c, discriminant, root1, root2, r_Part, imag_Part;\n\ndocument.write(realpart =\"+r_Part\" and imaganary part =\"+imag_Part\");\ndiscriminant = b*b-4*a*c;\n\n\nif (discriminant > 0)\n{\n root1 = (-b+sqrt(discriminant))/(2*a);\n root2 = (-b-sqrt(discriminant))/(2*a);\n\ndocument.write(real part =\"+r_Part\" and imaganary part =\"+imag_Part\"); \n}\n\nelse if (discriminant == 0)\n{\n root1 = root2 = -b/(2*a);\n document.write(real part =\"+r_Part\" and imaganary part =\"+imag_Part\");\n}\n\n\nelse\n{\n r_Part = -b/(2*a);\n imag_Part = sqrt(-discriminant)/(2*a);\n document.write(real part =\"+r_Part\" and imaganary part =\"+imag_Part\");\n}\n\n",
"function solve(a, b, c) {\n var result = ((-1 * b + Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a)).toFixed(3);\n var result2 = ((-1 * b - Math.sqrt(Math.pow(b, 2) - (4 * a * c))) / (2 * a)).toFixed(3);\n return \"{\"+result + \",\" + result2+\"}\";\n}\ndocument.write(solve(1, -4, -7));\n\n",
"\n\n <!DOCTYPE html>\n <html>\n <head>\n <meta charset=\"utf-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <title></title>\n <link rel=\"stylesheet\" href=\"\">\n </head>\n <body>\n <script type=\"text/javascript\">\n var a=2;\n var b=9;\n var c=2;\n \n var root_part=Math.sqrt(b*b-4*a*c);\n var denom=2*a;\n \n if (isNaN(root_part) === true){\n document.write(\"<br> Impossible to solve\");\n }\n else{\n var root1=(-b+root_part)/denom;\n var root2=(-b-root_part)/denom;\n document.write(\"<br>First Root is=\"+root1+ \" and Second Root is=\"+root2); \n }\n </script>\n </body>\n </html>\n\n\n\n"
] |
[
8,
2,
0,
0
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0033454438_javascript.txt
|
Q:
Permission error for shutils moving png to path
You see, I am making a Python GUI with PySimpleGUI for my old QR code generator script, so I'm using shutil for the user to download the file.
I am using the 'Default' user because I want it to save to my user's path, not mine. Do you know some other way I can do that? I think this is the reason it's not working.
I tried making it so the user inputs their username, such as 'My laptop', so it gets added to the path.
src_path = r"D:\Python\QRcode generator\output.png"
dst_path = r"C:\Users\Default\Pictures"
shutil.move(src_path, dst_path)
A:
The code is correct. It won't give any error if you don't use the C drive (where the operating system is installed).
This is mostly because the C drive is protected for Windows stability.
If you are using a code editor (PyCharm, VS Code, etc.) or running the code in the Windows command prompt or any terminal, try running it with administrator rights.
It should work.
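To save into the current user's own Pictures folder instead of the Default profile, a sketch (assumes a standard Windows profile layout):
import shutil
from pathlib import Path

src_path = r"D:\Python\QRcode generator\output.png"
# Path.home() resolves to the profile of whoever runs the script,
# e.g. C:\Users\<username>, so no username input is needed.
dst_path = Path.home() / "Pictures"

shutil.move(src_path, dst_path)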
|
Permission error for shutils moving png to path
|
You see I am making a python gui with pysimplegui for my old qrcode generator script so im using shutils for the user to download the file.
I am using the 'default' thing because i want it to save to my users path not mine, do you know some other way I can do that? because i think this is the reason its not working
I tried making it so the user inputs there username such as 'My laptop' so it adds it to the path
src_path = r"D:\Python\QRcode generator\output.png"
dst_path = r"C:\Users\Default\Pictures"
shutil.move(src_path, dst_path)
|
[
"The code is correct. It wont give any error if you dont use C drive (Where the operating system is installed)\nThis is mostly due to C drive is protected for windows stability.\nIf you are using any code editor (Pycharm, VS Code etc.) or running the code in windows command prompt or any terminal etc. Try and run it with administrator rights\nIt should work.\n"
] |
[
0
] |
[] |
[] |
[
"path",
"shutil"
] |
stackoverflow_0074664571_path_shutil.txt
|
Q:
Why use * here instead of + in regex for password must contain at least one number and both lower and uppercase letters?
The regex is like:
"^(?=.*[a-z])(?=.*[A-Z])[A-Za-z\d]{8,}$"
The * matches the previous token between zero and unlimited times.
The + matches the previous token between one and unlimited times.
plus sign + should make sense here.
Why use * here instead of +?
A:
(?=.*[a-z]) and (?=.*[A-Z]) are positive lookaheads for at least one lowercase and one uppercase letter, respectively. .* means skip 0+ chars. If you change that to .+ it would skip 1+ chars, so (?=.+[A-Z]) would not match password Aaaaaaaaa even though it has an uppercase char.
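A quick check of that difference (a hypothetical snippet using Python's re module, since the question is tagged both javascript and python):
import re

with_star = re.compile(r"^(?=.*[a-z])(?=.*[A-Z])[A-Za-z\d]{8,}$")
with_plus = re.compile(r"^(?=.+[a-z])(?=.+[A-Z])[A-Za-z\d]{8,}$")

password = "Aaaaaaaaa"  # uppercase only at the first position
print(bool(with_star.match(password)))  # True
print(bool(with_plus.match(password)))  # False: .+ must skip at least
                                        # one char before the [A-Z]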
|
Why use * here instead of + in regex for password must contain at least one number and both lower and uppercase letters?
|
The regex is like:
"^(?=.*[a-z])(?=.*[A-Z])[A-Za-z\d]{8,}$"
The * matches the previous token between zero and unlimited times.
The + matches the previous token between one and unlimited times.
plus sign + should make sense here.
Why use * here instead of +?
|
[
"(?=.*[a-z]) and (?=.*[A-Z]) are positive lookaheads for at least one lowercase and one uppercase letter, respectively. .* means skip 0+ chars. If you change that to .+ it would skip 1+ chars, so (?=.+[A-Z]) would not match password Aaaaaaaaa even though it has an uppercase char.\n"
] |
[
1
] |
[] |
[] |
[
"javascript",
"python",
"regex"
] |
stackoverflow_0074664594_javascript_python_regex.txt
|
Q:
Localization not working on dotnet6 aspnet alpine docker image
I have an application that needs to translate dates. When using VS2022, I'm able to switch between languages by changing an entry parameter. When I run the Docker container containing my app, dates are not localized and only English is supported by default.
Here is my Dockerfile:
`
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS publish
WORKDIR /src
COPY myApp/. .
RUN dotnet restore myApp.sln
WORKDIR /src/myApp
RUN dotnet publish -c Release -o /app
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine
WORKDIR /app
EXPOSE 80
COPY --from=publish /app .
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT false
RUN apk add --no-cache icu-libs
ENTRYPOINT ["dotnet", "myApp.dll"]
`
I also tried swapping the order of the two lines:
`
RUN apk add --no-cache icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT false
`
When executing
myDate.ToString("dddd dd MMMM yyyy", CultureInfo.GetCultureInfo("fr-FR")), the date is returned in English instead of French.
A:
It turns out that icu-libs no longer contains all cultures.
You need to add:
RUN apk add --no-cache icu-data-full
For more information, take a look here: https://github.com/dotnet/dotnet-docker/issues/3844
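Applied to the Dockerfile above, the runtime stage would then look something like this (a sketch combining the question's stage with the fix):
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine
WORKDIR /app
EXPOSE 80
COPY --from=publish /app .
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
# icu-libs provides the ICU runtime; icu-data-full restores the full
# set of cultures (fr-FR included) that icu-libs alone no longer ships.
RUN apk add --no-cache icu-libs icu-data-full
ENTRYPOINT ["dotnet", "myApp.dll"]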
|
Localization not working on dotnet6 aspnet alpine docker image
|
I have an application that need to translate date. When using VS2022, I'm able to switch between languages when changing a entry parameter. When I run the docker container containing my app, dates are not localized and only English is supported by default
Here my dockerfile :
`
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS publish
WORKDIR /src
COPY myApp/. .
RUN dotnet restore myApp.sln
WORKDIR /src/myApp
RUN dotnet publish -c Release -o /app
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine
WORKDIR /app
EXPOSE 80
COPY --from=publish /app .
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT false
RUN apk add --no-cache icu-libs
ENTRYPOINT ["dotnet", "myApp.dll"]
`
I also tried by switching
`
RUN apk add --no-cache icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT false
`
When executing
myDate.ToString("dddd dd MMMM yyyy", CultureInfo.GetCultureInfo("fr-FR")), the date is returned in english instead of french
|
[
"It turns out that icu-libs no longer contains all cultures.\nYou need to add:\nRUN apk add --no-cache icu-data-full\nFor more information, take a look here: https://github.com/dotnet/dotnet-docker/issues/3844\n"
] |
[
0
] |
[] |
[] |
[
"asp.net_core",
"c#",
"docker",
"localization"
] |
stackoverflow_0074516643_asp.net_core_c#_docker_localization.txt
|
Q:
Can we use flutter without downloading android studio?
Can we use flutter without downloading android studio ?
Can we use flutter without downloading android studio ?
|
Can we use flutter without downloading android studio?
|
Can we use flutter without downloading android studio ?
Can we use flutter without downloading android studio ?
|
[] |
[] |
[
"You can use visual code, It is possible to visit the official site of Flutter\nhttps://docs.flutter.dev/development/tools/vs-code\n"
] |
[
-1
] |
[
"android",
"android_studio",
"flutter"
] |
stackoverflow_0074664483_android_android_studio_flutter.txt
|
Q:
How to add fields to empty struct from another struct?
I am building a table test that intends to use an empty struct to store a varying number of strings, each paired with a function call. I am having trouble adding fields to an empty struct from another struct.
func TestThing(t *testing.T) {
type queryArgs struct{} // want query: string, exec: func
test := map[string]struct {
testQueries queryArgs
...code...
}{
"Test Case 1: {
***This is where I am trying to define one or more new fields in queryArgs***
},
"Test Case 2: {
***Same thing***
},
}
for ...
run...
...code...
tc.exec // Function call to execute test queries
What I am trying to do is add the following pattern into the test cases, with each test case requiring a varying number of this pattern:
TestCase 1 may contain the following:
queryResultRow := sqlmock.NewRows([]string{"count"}).AddRow("1")
functionToExecuteQuery(which takes queryResultRow as a parameter)
TestCase 2 may contain the following:
queryResultRow := sqlmock.NewRows([]string{"count"}).AddRow("1")
functionToExecuteQuery(which takes queryResultRow as a parameter)
queryResultRow := sqlmock.NewRows([]string{"count"}).AddRow("1")
functionToExecuteQuery(which takes queryResultRow as a parameter)
A:
It sounds like you want to define a struct type with two fields: query and exec, and then create instances of that struct type to store pairs of strings and function calls. To do that, you can define the struct type like this:
type queryArgs struct {
    query string
    exec  func()
}

Then, to create instances of that struct type and add them to your test map, give the test struct a slice of queryArgs, so each case can hold a varying number of query/exec pairs:
test := map[string]struct {
    testQueries []queryArgs
    ...code...
}{
    "Test Case 1": {
        testQueries: []queryArgs{
            {query: "...", exec: functionToExecuteQuery},
        },
    },
    "Test Case 2": {
        testQueries: []queryArgs{
            {query: "...", exec: functionToExecuteQuery},
            {query: "...", exec: functionToExecuteQuery},
        },
    },
}

Note that each queryArgs value has its own query and exec fields, initialized with the appropriate values; the slice lets each test case carry a different number of them.
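A sketch of the driving loop from the question, run against this layout (functionToExecuteQuery and the body of t.Run are placeholders):
for name, tc := range test {
    t.Run(name, func(t *testing.T) {
        for _, q := range tc.testQueries {
            // Each pair carries its query string and the function
            // that executes it against the mock.
            _ = q.query
            q.exec()
        }
    })
}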
|
How to add fields to empty struct from another struct?
|
I am building a table test that intends to use an empty struct to store a varying number of strings, each paired with a function call. I am having trouble adding fields to an empty struct from another struct.
func TestThing(t *testing.T) {
type queryArgs struct{} // want query: string, exec: func
test := map[string]struct {
testQueries queryArgs
...code...
}{
"Test Case 1: {
***This is where I am trying to define one or more new fields in queryArgs***
},
"Test Case 2: {
***Same thing***
},
}
for ...
run...
...code...
tc.exec // Function call to execute test queries
What I am trying to do is add the following pattern into the test cases, with each test case requiring a varying number of this pattern:
TestCase 1 may contain the following:
queryResultRow := sqlmock.NewRows([]string{"count"}).AddRow("1")
functionToExecuteQuery(which takes queryResultRow as a parameter)
TestCase 2 may contain the following:
queryResultRow := sqlmock.NewRows([]string{"count"}).AddRow("1")
functionToExecuteQuery(which takes queryResultRow as a parameter)
queryResultRow := sqlmock.NewRows([]string{"count"}).AddRow("1")
functionToExecuteQuery(which takes queryResultRow as a parameter)
|
[
"It sounds like you want to define a struct type with two fields: query and exec, and then create instances of that struct type to store pairs of strings and function calls. To do that, you can define the struct type like this:\ntype queryArgs struct {\n query string\n exec func()\n}\n\nThen, to create instances of that struct type and add them to your test map, you can do something like this:\ntest := map[string]struct {\n testQueries queryArgs\n ...code...\n}{\n \"Test Case 1\": {\n queryArgs{query: \"...\", exec: functionToExecuteQuery},\n ...code...\n },\n \"Test Case 2\": {\n queryArgs{query: \"...\", exec: functionToExecuteQuery},\n queryArgs{query: \"...\", exec: functionToExecuteQuery},\n ...code...\n },\n}\n\nNote that each struct type instance in the test map should have its own query and exec fields, initialized with the appropriate values.\n"
] |
[
0
] |
[] |
[] |
[
"go"
] |
stackoverflow_0074664159_go.txt
|
Q:
PlatformException(VideoError, Video player had error com.google.android.exoplayer2.ExoPlaybackException: Source error, null, null)
Hi, I am trying to play a live news video in my Flutter app. It is in .m3u8 format, but I get the above error. I am using all of the updated dependencies. I want to play live news in my Flutter app. I have the URL; you can also try it.
URL: http://161.97.162.167:1936/live/tnnnews/playlist.m3u8
When I use another URL with .m3u8 it plays in the Flutter app, but when I paste the live URL, it throws the above error.
Code
import 'package:video_player/video_player.dart';
import 'package:flutter/material.dart';
class VideoApp extends StatefulWidget {
  @override
  _VideoAppState createState() => _VideoAppState();
}

class _VideoAppState extends State<VideoApp> {
  late VideoPlayerController _controller;

  @override
  void initState() {
    super.initState();
    _controller = VideoPlayerController.network(
        'http://161.97.162.167:1936/live/tnnnews/playlist.m3u8')
      ..initialize().then((_) {
        // Ensure the first frame is shown after the video is initialized, even before the play button has been pressed.
        setState(() {});
      });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Video Demo',
      home: Scaffold(
        body: Center(
          child: _controller.value.isInitialized
              ? AspectRatio(
                  aspectRatio: _controller.value.aspectRatio,
                  child: VideoPlayer(_controller),
                )
              : Container(),
        ),
        floatingActionButton: FloatingActionButton(
          onPressed: () {
            setState(() {
              _controller.value.isPlaying
                  ? _controller.pause()
                  : _controller.play();
            });
          },
          child: Icon(
            _controller.value.isPlaying ? Icons.pause : Icons.play_arrow,
          ),
        ),
      ),
    );
  }

  @override
  void dispose() {
    super.dispose();
    _controller.dispose();
  }
}
A:
Put this in your AndroidManifest.xml
<application ...
android:usesCleartextTraffic="true"
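For context, that attribute goes on the <application> element in android/app/src/main/AndroidManifest.xml; it is relevant here because the stream URL is plain http, which Android blocks by default. A sketch (other attributes elided):
<application
    android:label="myapp"
    android:usesCleartextTraffic="true">
    ...
</application>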
A:
This issue happened with me, and after searching I found the issue is in the link itself.
The library will only work if the extension of your link is .mp4; if not, you have to change the link so it contains the .mp4 extension.
A:
Perform a flutter clean and run the application again.
A:
Hey, I got the same error today (2022/19/8). Just add
fijkplayer: ^0.10.0 to your project's pubspec.yaml file; it would work then.
A:
I've done this and it worked for me
uninstall the app
run flutter clean
run flutter pub get
run your application again
A:
My AndroidManifest.xml already has this line and it's not working.
So the issue is in the link: after the link you have to add an extension like .mp4.
It is working for me.
|
PlatformException(VideoError, Video player had error com.google.android.exoplayer2.ExoPlaybackException: Source error, null, null)
|
**HI I am trying to play a live news video in my flutter app it is .m3u8 format but get above error. Using all of the updated dependencies. I want to play live news in my flutter app. I have the url you can also try it.
URL: http://161.97.162.167:1936/live/tnnnews/playlist.m3u8
When I use another url with .m3u8 it plays on flutter app but when I paste the live url code it throws me the above error.
**
Code
import 'package:video_player/video_player.dart';
import 'package:flutter/material.dart';
class VideoApp extends StatefulWidget {
@override
_VideoAppState createState() => _VideoAppState();
}
class _VideoAppState extends State<VideoApp> {
VideoPlayerController _controller;
@override
void initState() {
super.initState();
_controller = VideoPlayerController.network(
'http://161.97.162.167:1936/live/tnnnews/playlist.m3u8')
..initialize().then((_) {
// Ensure the first frame is shown after the video is initialized, even before the play button has been pressed.
setState(() {});
});
}
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Video Demo',
home: Scaffold(
body: Center(
child: _controller.value.isInitialized
? AspectRatio(
aspectRatio: _controller.value.aspectRatio,
child: VideoPlayer(_controller),
)
: Container(),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
setState(() {
_controller.value.isPlaying
? _controller.pause()
: _controller.play();
});
},
child: Icon(
_controller.value.isPlaying ? Icons.pause : Icons.play_arrow,
),
),
),
);
}
@override
void dispose() {
super.dispose();
_controller.dispose();
}
}
|
[
"Put this in your AndroidManifest.xml\n<application ...\nandroid:usesCleartextTraffic=\"true\"\n\n",
"This issue happened with me and after searching I found the issue is in link itself\nAs the library will only work if the extension of your link is .mp4 and if not you have to parse it to contain the .mp4 extension\n",
"Perform a Flutter clean and run the application .\n",
"Hey i got the same error today in 2022/19/8 .Just adding\nfijkplayer: ^0.10.0 in your project pubsec.yaml file it would work then.\n",
"I've done this and it worked for me\n\nuninstall the app\nrun flutter clean\nrun flutter pub get\nrun your application again\n\n",
"My AndroidManifest.xml already have this line and it's not working.\nSo, the issue is in link. After link you have to add extension like .mp4\nIt is working for me.\n"
] |
[
6,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"exoplayer",
"flutter",
"flutter_video_player",
"m3u8"
] |
stackoverflow_0068608353_exoplayer_flutter_flutter_video_player_m3u8.txt
|
Q:
Is using ranges in c++ advisable at all?
I find the traditional syntax of most c++ stl algorithms annoying; that using them is lengthy to write is only a small issue, but that they always need to operate on existing objects limits their composability considerably.
I was happy to see the advent of ranges in the stl; however, as of C++20, there are severe shortcomings: the support for this among different implementations of the standard library varies, and many things present in range-v3 did not make it into C++20, such as (to my great surprise), converting a view into a vector (which, for me, renders this all a bit useless if I cannot store the results of a computation in a vector).
On the other hand, using range-v3 also seems not ideal to me: it is poorly documented (and I don't agree that all things in there are self-explanatory), and, more severely, C++20-ideas of ranges differ from what range-v3 does, so I cannot just say, okay, let's stick with range-v3; that will become standard anyway at some time.
So, should I even use any of the two? Or is this all just not worth it, and by relying on std ranges or range-v3, making my code too difficult to maintain and port?
A:
Is using ranges in c++ advisable at all?
Yes.
and many things present in range-v3 did not make it into C++20, such
as (to my great surprise), converting a view into a vector
Yes. But std::ranges::to has been adopted for C++23; it is more powerful and works well with C++23's range constructors of the standard containers.
So, should I even use any of the two?
You should use the standard library <ranges>.
It contains several PR enhancements such as owning_view, redesigned split_view, and ongoing LWG fixes. In addition, C++23 brings not only more adapters such as join_with_view and zip_view, etc., but also more powerful features such as pipe support for user-defined range adaptors (P2387), and formatting ranges (P2286), etc. The only thing you have to do is wait for the compiler to implement it. You can refer to cppreference for the newest compiler support.
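As a sketch of what that enables (C++23; assumes a standard library that already ships std::ranges::to):
#include <ranges>
#include <vector>

int main() {
    // Build a vector directly from a pipeline: {1, 4, 9, 16, 25}
    auto squares = std::views::iota(1, 6)
                 | std::views::transform([](int i) { return i * i; })
                 | std::ranges::to<std::vector>();
}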
A:
I recommend using range-v3 and not std::ranges. There are too many things missing (at least before C++23 is implemented) to make it worth using std::ranges at all.

On the other hand, using range-v3 also seems not ideal to me: it is poorly documented (and I don't agree that all things in there are self-explanatory),

It's easy enough to learn range-v3 from these supplementary materials https://www.walletfox.com/course/quickref_range_v3.php https://www.walletfox.com/course/examples_range_v3.php and you could always buy the book if you want more.
Also, range-v3 is open source, so you can let the source code be your documentation.

and, more severely, C++20-ideas of ranges differ from what range-v3 does, so I cannot just say, okay, let's stick with range-v3; that will become standard anyway at some time.

I doubt these changes will matter much; the main problem is that range-v3 and std::ranges don't combine, but changing the namespaces should be most of the effort of porting range-v3 code to std::ranges in C++23.

making my code too difficult to maintain

Code without ranges is too difficult. The amount of time I save by using range-v3 for everything is enormous: particularly the time spent ironing out the bugs in freshly written code, but also the time it takes to understand code you've written in the past and then modify it. I think the only reason not to use range-v3 is to maintain the conventions of an existing codebase.
A:
A simple example: sorting a vector of one hundred million random int values.
#include <iostream>
#include <chrono>
#include <ranges>
#include <random>
#include <vector>
#include <algorithm>

int main(int argc, char **argv) {
    const int START = 1, END = 50, QUANTITY = 100000000;
    std::random_device dev;
    std::mt19937 rng(dev());
    std::uniform_int_distribution<std::mt19937::result_type> dist6(START, END);
    std::vector<int> vec;
    vec.reserve(QUANTITY);
    for (int i = 0; i < QUANTITY; i++) {
        vec.push_back(dist6(rng));
    }
    std::vector<int> original_copy = vec;

    auto start_test1 = std::chrono::high_resolution_clock::now();
    std::ranges::sort(vec);
    auto end_test1 = std::chrono::high_resolution_clock::now();
    auto duration_test1 = std::chrono::duration_cast<std::chrono::milliseconds>(end_test1 - start_test1).count();

    auto start_test2 = std::chrono::high_resolution_clock::now();
    std::sort(original_copy.begin(), original_copy.end());
    auto end_test2 = std::chrono::high_resolution_clock::now();
    auto duration_test2 = std::chrono::duration_cast<std::chrono::milliseconds>(end_test2 - start_test2).count();

    std::cout << "test std::ranges::sort, vector was sorted in " << duration_test1 << " milliseconds." << std::endl;
    std::cout << "test std::sort, vector was sorted in " << duration_test2 << " milliseconds." << std::endl;
    if (duration_test1 > duration_test2) {
        std::cout << "std::sort is " << duration_test1 - duration_test2 << " milliseconds faster" << std::endl;
    } else {
        std::cout << "std::ranges::sort is " << duration_test2 - duration_test1 << " milliseconds faster" << std::endl;
    }
    return 0;
}
output :
test std::ranges::sort, vector was sorted in 175319 milliseconds.
test std::sort, vector was sorted in 45368 milliseconds.
std::sort is 129951 milliseconds faster
In my opinion there is something strange in std::ranges: maybe it is easier to use than the standard algorithms, but the performance could be better.
A:
As a ranges addict, I'm going to answer again, this time in the negative.
Most of the time you spend developing is spent incrementally compiling one compilation unit. Using ranges drastically increases these compile times. msvc compiles significantly faster, and when I switch to gcc or clang, it's unbearable.
You can't solve this by setting up compilation walls, since you pretty much always have to deduce the type of your ranges. So you are mostly stuck with slow compile times even when you're not modifying ranges code.
Getting the templates to compile is also a waste of time. After using Python's iterables you really start noticing the arbitrary limitations of the static type system. There are a lot of quirks you have to learn about the hard way.
C++ ranges are quite complicated. I'm trying to be less nerdy, and if you are too, staying away is recommended.
The declarative code is far more readable and maintainable than the imperative. Functional programming pushes all the error-prone, detail-oriented code out of your code and into the library. But at what cost? map, reduce, and filter are all easy enough to implement imperatively, but I need my group_by and split.
|
Is using ranges in c++ advisable at all?
|
I find the traditional syntax of most c++ stl algorithms annoying; that using them is lengthy to write is only a small issue, but that they always need to operate on existing objects limits their composability considerably.
I was happy to see the advent of ranges in the stl; however, as of C++20, there are severe shortcomings: the support for this among different implementations of the standard library varies, and many things present in range-v3 did not make it into C++20, such as (to my great surprise), converting a view into a vector (which, for me, renders this all a bit useless if I cannot store the results of a computation in a vector).
On the other hand, using range-v3 also seems not ideal to me: it is poorly documented (and I don't agree that all things in there are self-explanatory), and, more severely, C++20-ideas of ranges differ from what range-v3 does, so I cannot just say, okay, let's stick with range-v3; that will become standard anyway at some time.
So, should I even use any of the two? Or is this all just not worth it, and by relying on std ranges or range-v3, making my code too difficult to maintain and port?
|
[
"\nIs using ranges in c++ advisable at all?\n\nYes.\n\nand many things present in range-v3 did not make it into C++20, such\nas (to my great surprise), converting a view into a vector\n\nYes. But std::ranges::to has been adopted by C++23, which is more powerful and works well with C++23's range version constructor of stl containers.\n\nSo, should I even use any of the two?\n\nYou should use the standard library <ranges>.\nIt contains several PR enhancements such as owning_view, redesigned split_view, and ongoing LWG fixes. In addition, C++23 brings not only more adapters such as join_with_view and zip_view, etc., but also more powerful features such as pipe support for user-defined range adaptors (P2387), and formatting ranges (P2286), etc. The only thing you have to do is wait for the compiler to implement it. You can refer to cppreference for the newest compiler support.\n",
"I recommend using range-v3 and not std::ranges. There are too many things missing (at least before c++23 is implemented) to make it worth using std::ranges at all.\n\n\nOn the other hand, using range-v3 also seems not ideal to me: it is poorly documented (and I don't agree that all things in there are self-explanatory),\n\nIt's easily enough to learn range-v3 from these supplementary materials https://www.walletfox.com/course/quickref_range_v3.php https://www.walletfox.com/course/examples_range_v3.php and you could always buy the book if you want more.\nAlso range-v3 is open source so you can let the source code be your documentation.\n\nand, more severely, C++20-ideas of ranges differ from what range-v3 does, so I cannot just say, okay, let's stick with range-v3; that will become standard anyway at some time.\n\nI doubt these changes will matter, much, the main problem is that range-v3 and std::ranges dont combine but changing the namespaces should be most of the effort porting range-v3 to std::ranges 23.\n\nmaking my code too difficult to maintain\n\nCode without ranges is too difficult. The amount of time I save by using range-v3 for everything is enormous, particularly the time taken ironing out the bugs in freshly written code, but also the time it takes to understand code you've written in the past, and then modify it. I think the only reason to not use range-v3 is to maintain the conventions of an existing codebase.\n",
"Simple example : sort vector of one hundred million random int values\n#include <iostream>\n#include <chrono>\n#include <ranges>\n#include <random>\n#include <vector>\n#include <algorithm>\n\n\nint main(int argc, char **argv) {\n\n\n const int START = 1, END = 50, QUANTITY = 100000000;\n\n\n std::random_device dev;\n std::mt19937 rng(dev());\n std::uniform_int_distribution<std::mt19937::result_type> dist6(START, END);\n\n std::vector<int> vec;\n vec.reserve(QUANTITY);\n\n for (int i = 0; i < QUANTITY; i++) {\n vec.push_back(dist6(rng));\n }\n\n std::vector<int> original_copy = vec;\n\n auto start_test1 = std::chrono::high_resolution_clock::now();\n std::ranges::sort(vec);\n auto end_test1 = std::chrono::high_resolution_clock::now();\n auto duration_test1 = std::chrono::duration_cast<std::chrono::milliseconds>(end_test1 - start_test1).count();\n\n auto start_test2 = std::chrono::high_resolution_clock::now();\n std::sort(original_copy.begin(), original_copy.end());\n auto end_test2 = std::chrono::high_resolution_clock::now();\n auto duration_test2 = std::chrono::duration_cast<std::chrono::milliseconds>(end_test2 - start_test2).count();\n\n\n std::cout << \"test std::ranges::sort, vector was sorted in \" << duration_test1 << \" milliseconds.\" << std::endl;\n std::cout << \"test std::sort, vector was sorted in \" << duration_test2 << \" milliseconds.\" << std::endl;\n\n\n if (duration_test1 > duration_test2) {\n std::cout << \"std::sort is \" << duration_test1 - duration_test2 << \" milliseconds faster\" << std::endl;\n } else {\n std::cout << \"std::ranges::sort is \" << duration_test2 - duration_test1 << \" milliseconds faster\" << std::endl;\n }\n\n\n return 0;\n}\n\noutput :\ntest std::ranges::sort, vector was sorted in 175319 milliseconds.\ntest std::sort, vector was sorted in 45368 milliseconds.\nstd::sort is 129951 milliseconds faster\n\nin my opinion there is something strange in std::ranges, maybe it is easy to use than standard algorithms, but performance could be better\n",
"As a ranges addict, I'm going answer again this time in the negative.\nMost of time you spend developing, is spent incrementally compiling one compilation unit. Using ranges drastically increases these compile times. msvc compiles significantly faster and when I switch gcc or clang, it's unbearable.\nYou cant solve this by setting up compilation walls, since you pretty much always have to deduce the type of your ranges. So you are mostly stuck with slow compile times even when you're not modifying ranges code.\nGetting the templates to compile is also a waste of time. After using Python's iterables you really start noticing the arbitrary limitations of the static type system. There are a lot of quirks you have to learn the hard way about.\nC++ ranges are quite complicated. I'm trying to be less nerdy, and if you are too, staying away is recommended.\nThe declarative code is far more readable and maintainable than imperative. Functional programming pushes all the error prone detailed orientated code out of your code and into the library. But at what cost? map, reduce, filter are all easy enough to implement imperatively, but I need my group_by and split.\n"
] |
[
3,
3,
1,
1
] |
[] |
[] |
[
"c++",
"c++20",
"range_v3",
"std_ranges"
] |
stackoverflow_0072827759_c++_c++20_range_v3_std_ranges.txt
|
Q:
How to send data from API to another HTML file
I'm using the TMDB API to search for movies and add them to a watchlist.
In this JavaScript function I'm getting movie details based on user input and rendering the results to HTML using Bootstrap.
const searchMovie = async (searchInput) => {
try {
axios.get(`https://api.themoviedb.org/3/search/movie?api_key={API_KEY}&language=en-US&query=${searchInput}&page=1&include_adult=false `)
.then((response) => {
console.log(response);
let movies = response.data.results;
let displayMovies = '';
$.each(movies, (index, movie) => {
displayMovies += `
<div class="col-md-3">
<div class="well text-center">
                        <a href="https://www.themoviedb.org/movie/${movie.movie_id}" target="_blank"><img src="https://image.tmdb.org/t/p/original${movie.poster_path}"></a>
<h5>${movie.title}</h5>
                        <h4>${movie.release_date}</h4>
<a class="btn btn-primary" href="#">Add to watchlist</a>
</div>
</div>
`;
});
$('#movies').html(displayMovies);
})
}catch(error) {
console.log(error)
}
}
I have another HTML file called watchlist.html that I want to send the movie selected from the search results to, so I can build a watchlist.
A:
To send data from one HTML file to another, you can use the localStorage or sessionStorage objects available in JavaScript. These objects allow you to save key-value pairs of data in the user's web browser, which can be accessed on the same domain by other pages.
Here's an example of how you could use the localStorage object to save the data in the first HTML file and access it in the second HTML file:
First HTML file (search results):
// Save the movie data in localStorage when the "Add to watchlist" button is clicked
$('#add-to-watchlist-button').on('click', () => {
let movie = {
title: movie.title,
releaseDate: movie.release_date,
// Other movie data
};
localStorage.setItem('selectedMovie', JSON.stringify(movie));
});
Second HTML file (watchlist):
// Retrieve the selected movie from localStorage and display it on the page
let selectedMovie = JSON.parse(localStorage.getItem('selectedMovie'));
if (selectedMovie) {
// Display the selected movie on the page
}
Remember to call localStorage.removeItem('selectedMovie') or localStorage.clear() when you no longer need the data, to free up space in the user's browser.
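If the goal is a watchlist that holds several movies rather than one, a hedged variation is to accumulate an array under a single key (the key name watchlist below is just an example):
// Append the clicked movie to a stored array instead of overwriting one key
const watchlist = JSON.parse(localStorage.getItem('watchlist')) || [];
watchlist.push(movie); // `movie` as built in the click handler above
localStorage.setItem('watchlist', JSON.stringify(watchlist));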
A:
Please try this one before stringify
var obj = JSON.parse(movie);
localStorage.setItem('selectedMovie', JSON.stringify(obj));
|
How to send data from API to another HTML file
|
Im using a TMDB API to search for movies and add them to a watchlist.
In this javascript function im getting movie details based on user input and rendering the results to html using bootstrap.
const searchMovie = async (searchInput) => {
try {
axios.get(`https://api.themoviedb.org/3/search/movie?api_key={API_KEY}&language=en-US&query=${searchInput}&page=1&include_adult=false `)
.then((response) => {
console.log(response);
let movies = response.data.results;
let displayMovies = '';
$.each(movies, (index, movie) => {
displayMovies += `
<div class="col-md-3">
<div class="well text-center">
<a href="https://www.themoviedb.org/movie/${movie.movie_id} target="_blank"><img src="https://image.tmdb.org/t/p/original${movie.poster_path}"></a>
<h5>${movie.title}</h5>
<h4>${movie.release_date}<h4>
<a class="btn btn-primary" href="#">Add to watchlist</a>
</div>
</div>
`;
});
$('#movies').html(displayMovies);
})
}catch(error) {
console.log(error)
}
}
I have another html file called watchlist.html that i want to send the movie selected from the search results to that file and build a watchlist.
|
[
"To send data from one HTML file to another, you can use the localStorage or sessionStorage objects available in JavaScript. These objects allow you to save key-value pairs of data in the user's web browser, which can be accessed on the same domain by other pages.\nHere's an example of how you could use the localStorage object to save the data in the first HTML file and access it in the second HTML file:\nFirst HTML file (search results):\n// Save the movie data in localStorage when the \"Add to watchlist\" button is clicked\n$('#add-to-watchlist-button').on('click', () => {\n let movie = {\n title: movie.title,\n releaseDate: movie.release_date,\n // Other movie data\n };\n\n localStorage.setItem('selectedMovie', JSON.stringify(movie));\n});\n\nSecond HTML file (watchlist):\n// Retrieve the selected movie from localStorage and display it on the page\nlet selectedMovie = JSON.parse(localStorage.getItem('selectedMovie'));\nif (selectedMovie) {\n // Display the selected movie on the page\n}\n\nRemember to call localStorage.removeItem('selectedMovie') or localStorage.clear() when you no longer need the data, to free up space in the user's browser.\n",
"Please try this one before stringify\nvar obj = JSON.parse(movie);\nlocalStorage.setItem('selectedMovie', JSON.stringify(obj));\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"api",
"html",
"javascript",
"local_storage"
] |
stackoverflow_0074664085_api_html_javascript_local_storage.txt
|
Q:
Swift-SceneKit-Can not load '.scn' file from 'art.scnassets'
I'm trying to create a new SCNScene from 'diceCollada.scn' file.
But this file won't be loaded.
This file is in the "ARDicee/art.scnassets" folder.
It fails to load not only "diceCollada.scn" but also the default "ship.scn".
I don't know why it doesn't load these files.
Here is my code.
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
@IBOutlet var sceneView: ARSCNView!
override func viewDidLoad() {
super.viewDidLoad()
// Set the view's delegate
sceneView.delegate = self
// Show statistics such as fps and timing information
sceneView.showsStatistics = true
// Create a new scene. ---------- The error is here ---------------
guard let diceScene = SCNScene(named: "art.scnassets/diceCollada.scn") else {
fatalError()
}
// Setting node
if let diceNode = diceScene.rootNode.childNode(withName: "Dice", recursively: true) {
diceNode.position = SCNVector3(x: 0, y: 0, z: -0.1)
sceneView.scene.rootNode.addChildNode(diceNode)
}
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
if ARWorldTrackingConfiguration.isSupported {
// Create a session configuration
let configuration = ARWorldTrackingConfiguration()
// Run the view's session
sceneView.session.run(configuration)
}
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
// Pause the view's session
sceneView.session.pause()
}
}
Xcode - Version 14.1
macOS Ventura - Version 13.0.1
GitHub - This project
I also tried to create SCNScene another way.
override func viewDidLoad() {
super.viewDidLoad()
// Set the view's delegate
sceneView.delegate = self
// Show statistics such as fps and timing information
sceneView.showsStatistics = true
// --- Another way to create SCNScene ---
let filePath = URL(fileURLWithPath: "/Applications/xcode/Development/ARDicee/ARDicee/art.scnassets/diceCollada.scn")
do {
let diceScene = try SCNScene(url: filePath)
if let diceNode = diceScene.rootNode.childNode(withName: "Dice", recursively: true) {
diceNode.position = SCNVector3(x: 0, y: 0, z: -0.1)
sceneView.scene.rootNode.addChildNode(diceNode)
}
} catch {
print(error)
}
}
But it gave this error.
Error Domain=NSCocoaErrorDomain Code=260 "The file “diceCollada.scn” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Applications/xcode/Development/ARDicee/ARDicee/art.scnassets/diceCollada.scn, NSUnderlyingError=0x282924570 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
I'm trying to create a new SCNScene from 'diceCollada.scn' file.
A:
If the app cannot load the spaceship from the Apple template, the project might be broken somehow. Try to create a brand-new, default SceneKit/ARKit project, compile it directly, and check whether the spaceship loads correctly. If yes, copy and paste the code from your current project into the new one. If the spaceship does not even load from a fresh template, your Xcode installation could be broken. You can also clean your project build folder or delete derived data; there are articles here on how to do such things. In addition, you could share your project here on Stack Overflow so that we can have a look at it.
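As a quick way to narrow this down, here is a minimal sanity check (a sketch, assuming the standard main-bundle layout) that tells you whether the scene file actually made it into the compiled app:
// Does the built app bundle actually contain the scene file?
if let url = Bundle.main.url(forResource: "diceCollada",
                             withExtension: "scn",
                             subdirectory: "art.scnassets") {
    print("Scene found at \(url)")
} else {
    print("diceCollada.scn is missing - check Target Membership / Copy Bundle Resources")
}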
A:
Resetting the path to command-line tools
There are some issues in your command-line tools. As a result, the content in the art.scnassets folder isn't readable. In order to fix this, you need to install the latest version of the Command Line Tools for Xcode 14.1 and then execute the following commands in Terminal:
sudo xcode-select --reset
sudo xcode-select -switch /Library/Developer/CommandLineTools
Reinstalling command-line tools
Or you can remove the old version of command line tools and install the new one using Terminal:
sudo rm -rf /Library/Developer/CommandLineTools
sudo xcode-select --install
Then restart your Mac.
I had the same problem and these steps helped me.
Renaming
If the above steps still do not help, rename the art.scnassets folder to artisan.scnassets.
|
Swift-SceneKit-Can not load '.scn' file from 'art.scnassets'
|
I'm trying to create a new SCNScene from 'diceCollada.scn' file.
But this file won't be loaded.
This file is in "ARDicee/art.assets" folder.
Not only "diceCollada.scn", but also it cannot load the default "ship.scn".
I don't know why it doesn't load files.
Here is my code.
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
@IBOutlet var sceneView: ARSCNView!
override func viewDidLoad() {
super.viewDidLoad()
// Set the view's delegate
sceneView.delegate = self
// Show statistics such as fps and timing information
sceneView.showsStatistics = true
// Create a new scene. ---------- The error is here ---------------
guard let diceScene = SCNScene(named: "art.scnassets/diceCollada.scn") else {
fatalError()
}
// Setting node
if let diceNode = diceScene.rootNode.childNode(withName: "Dice", recursively: true) {
diceNode.position = SCNVector3(x: 0, y: 0, z: -0.1)
sceneView.scene.rootNode.addChildNode(diceNode)
}
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
if ARWorldTrackingConfiguration.isSupported {
// Create a session configuration
let configuration = ARWorldTrackingConfiguration()
// Run the view's session
sceneView.session.run(configuration)
}
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
// Pause the view's session
sceneView.session.pause()
}
}
Xcode - Version 14.1
macOS Ventura - Version 13.0.1
GitHub - This project
I also tried to create SCNScene another way.
override func viewDidLoad() {
super.viewDidLoad()
// Set the view's delegate
sceneView.delegate = self
// Show statistics such as fps and timing information
sceneView.showsStatistics = true
// --- Another way to create SCNScene ---
let filePath = URL(fileURLWithPath: "/Applications/xcode/Development/ARDicee/ARDicee/art.scnassets/diceCollada.scn")
do {
let diceScene = try SCNScene(url: filePath)
if let diceNode = diceScene.rootNode.childNode(withName: "Dice", recursively: true) {
diceNode.position = SCNVector3(x: 0, y: 0, z: -0.1)
sceneView.scene.rootNode.addChildNode(diceNode)
}
} catch {
print(error)
}
}
But it gave this error.
Error Domain=NSCocoaErrorDomain Code=260 "The file “diceCollada.scn” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Applications/xcode/Development/ARDicee/ARDicee/art.scnassets/diceCollada.scn, NSUnderlyingError=0x282924570 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
I'm trying to create a new SCNScene from 'diceCollada.scn' file.
|
[
"If the app cannot load the spaceship from the apple template, the project might be broken somehow. Try to create a brandnew, default SceneKit/ARKit project, compile it directly and check if the spaceship loads correctly. If yes, copy and paste the code from your current project into the new one. If the spaceship does not even load from a fresh template, your xCode installation could be broken. You can also clean your project build folder or delete derived data, there are articles here on how to do such things. In addition you could share your project here on StackOverflow, so that we can have a look at it.\n",
"Resetting the path to command-line tools\nThere are some issues in your command line tools. As a result, the content in the art.scnassets folder isn't readable. In order to fix this, you need to install the latest version of Command_Line_Tools for Xcode 14.1 and then execute the following commands in Terminal:\nsudo xcode-select --reset\n\n\nsudo xcode-select -switch /Library/Developer/CommandLineTools\n\n\nReinstalling command-line tools\nOr you can remove the old version of command line tools and install the new one using Terminal:\nsudo rm -rf /Library/Developer/CommandLineTools\n\n\nsudo xcode-select --install\n\nThen restart your Mac.\nI had the same problem and these steps helped me.\nRenaming\nIf the above steps still do not help, rename the art.scnassets folder to artisan.scnassets.\n"
] |
[
1,
1
] |
[] |
[] |
[
"scenekit",
"scnscene",
"swift"
] |
stackoverflow_0074653849_scenekit_scnscene_swift.txt
|
Q:
iOS - CocoaPods requires your terminal to be using UTF-8 encoding - after latest flutter upgrade
I am getting this error after I upgraded Flutter. Before upgrading, everything was working normally on both iOS and Android. Now my project is not building on iOS.
Below is my terminal info.
pod setup --verbose
WARNING: CocoaPods requires your terminal to be using UTF-8 encoding.
Consider adding the following to ~/.profile:
export LANG=en_US.UTF-8
pod install --verbose
WARNING: CocoaPods requires your terminal to be using UTF-8 encoding.
Consider adding the following to ~/.profile:
export LANG=en_US.UTF-8
[!] No `Podfile' found in the project directory.
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/lib/cocoapods/command.rb:151:in `verify_podfile_exists!'
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/lib/cocoapods/command/install.rb:46:in `run'
/Library/Ruby/Gems/2.3.0/gems/claide-1.0.2/lib/claide/command.rb:334:in `run'
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/lib/cocoapods/command.rb:52:in `run'
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/bin/pod:55:in `<top (required)>'
/usr/local/bin/pod:22:in `load'
/usr/local/bin/pod:22:in `<main>'
locale
LANG=
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
A:
Open Terminal
Type open ~/.zshrc (or open ~/.profile if you don't use zsh)
It seems that LANG="en_US.UTF-8" alone isn't enough, so you have to set:
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
Save the file
Go back to Terminal and type source ~/.zshrc and type locale
You can now safely run pod update or pod install
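Once sourced, locale should report UTF-8 across the board, roughly like this (a sketch of the expected output):
$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL="en_US.UTF-8"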
A:
After one day of struggle, I finally found the full solution.
Follow these steps to solve this issue:
Unhide files on Mac using Cmd + Shift + dot.
Go to Macintosh HD -> Users -> (your user).
Search for .zshrc
Open it with any editor (I recommend VS Code)
Under # User configuration, uncomment export LANG=en_US.UTF-8.
Open your flutter project and manually delete the Pods folder, Podfile, and Podfile.lock (back up all Podfiles first).
Restart your Mac and run your flutter application
Run pod install
It will automatically create a Podfile and its new config in UTF-8 encoding
Enjoy and chill, I got your back!
A:
Clean files with flutter clean
Type vim .zshrc in your terminal. This should open up your .zshrc profile. Type I to insert something. Then, simply paste in export LANG=en_US.UTF-8 and hit ESC to get out and then type :wq to save and quit.
Open the project folder, then open the iOS folder in the Mac terminal and run pod install
It may warn about using iOS 9.0, so update it to 10.0; for that, open the Podfile in a text editor and uncomment or change
# platform :ios, '9.0' to platform :ios, '10.0'
Run flutter build ios in the main root project through the terminal.
Run the main root file, e.g. flutter run
If you are facing a problem with Flutter. Try this solution
Delete the Podfile, Podfile.lock, Pods folder, Runner.xcworkspace
flutter clean
flutter build ios
A:
Finally I solved this with the steps below:
export LANG=en_US.UTF-8
opened the project in Xcode and cleaned it.
opened the iOS folder in the Mac terminal and ran pod install
it gave me a warning about using iOS 9.0, so I updated it to 10.0
ran flutter build ios
project built successfully
opened Runner.xcworkspace in Xcode
clicked on Run - this time Xcode installed the pods automatically again
and solved.
A:
To iterate on Shruti Tupkari's answer: to add export LANG=en_US.UTF-8 to your terminal, it needs to be added to a profile such as .zshrc, .bashrc, or .bash_profile.
To do this simply use vim
So try
vim .zshrc
This should open up your .zshrc profile. Type 'i' to insert something.
Then simply paste in export LANG=en_US.UTF-8, hit Esc to get out, and then type :wq to save and quit.
Try and run your app again. If you get the error, repeat the same steps on the other profiles in your computer.
Here's some information on how to use vim
https://www.howtoforge.com/vim-basics
A:
If adding export LANG=en_US.UTF-8 doesn't help, try export LC_ALL="en_US.UTF-8"
Credits:
https://github.com/CocoaPods/CocoaPods/issues/6333#issuecomment-551052399
A:
The proper solution, without re-installing anything, can be found:
here: https://stackoverflow.com/a/69160445/3821002
and here: https://stackoverflow.com/a/69376499/3821002
The crux is to use export LC_ALL=en_US.UTF-8.
The links above explain how to do that.
A:
In my case the following needed to be added to .bash_profile instead of the other suggested files.
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
A:
if you use Android Studio, then open terminal in it with:
nano ~/.profile
add these values:
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
and it's very important: now you need to close Android Studio and open it again, then you will be able to make a project
A:
To fix this, you might want to find either your ~/.bash_profile (for bash) or ~/.zshrc (for zsh) and add the export line that you put in your ~/.profile
export LANG=en_US.UTF-8
That was all I had to do.
A:
After upgrading to macOS Big Sur, I got this error when trying to build iOS in Unity.
If you don't have the .profile file, you can create a new .profile file at /Users/"user-name"/.profile
Step 1: open Terminal and create the new .profile file
$ cd
$ touch .profile
Step 2: edit .profile and add
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
to .profile
A:
I had this problem only in android studio. When I switched to the terminal outside Android Studio it worked fine.
A:
Just open your terminal
and enter the below command
export LC_ALL=en_US.UTF-8
A:
Changing the .zshrc file didn't work for me, so I ended up executing the pod command with an explicit UTF-8 language definition:
$ LANG=en_US.UTF-8 <pod command>
Source: cocoaPods issue
A:
Solving Cocoapods UTF-8 error: (Detailed explanation)
We need to change the locale of the terminal to UTF-8.
Step 1: Open the terminal, type locale, press enter, and check what locale it shows.
Step 2: If it's a bash terminal, change it to a zsh terminal,
Step 3: Then open finder, go to mac HD, users, folder with your username, and press command+shift+. to open hidden files
Step 4: Create or open a file named .zshrc
Step 5: Paste this there:
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
Step 6: Save it by command + s
Step 7: Then open the terminal again and check by typing, locale and pressing enter (Then you can either keep it open or close the terminal)
Step 8: If it shows some other locale instead of UTF-8, then paste the below 3 lines and press enter:
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
Step 9: Then open Android Studio, go to the terminal and paste the same above 3 lines, and press enter
Step 10: Then on your project file, go to the ios folder, right-click on it, and open in the terminal, then in the terminal, do paste the same above 3 lines and press enter
Step 11: Now you can install pods in this terminal by typing, pod install.
Now it will work. If you still get the same error, go to Tools -> Flutter -> Flutter Clean and restart Android Studio by clicking File -> Restart IDE (or similar).
Then in your project, right-click the ios folder, open it in the terminal, type locale, and press enter; if it's still not in UTF-8, repeat steps 9 through 11 and it will work now. You can run your app on iOS devices now.
My repo -> https://github.com/anantha-eswar/
A:
In my case, this error only occurs when I use the Android Studio terminal to run the Flutter iOS app.
So instead, I used the Mac Terminal to run the Flutter iOS app, and it's working perfectly fine.
To run the flutter app from the terminal use the below command:
flutter run
|
iOS - CocoaPods requires your terminal to be using UTF-8 encoding - after latest flutter upgrade
|
I am getting this error after I upgraded flutter. Before upgrading everything was working normal on both iOS and android. Now my project is not building in iOS.
Below is my terminal info.
pod setup --verbose
WARNING: CocoaPods requires your terminal to be using UTF-8 encoding.
Consider adding the following to ~/.profile:
export LANG=en_US.UTF-8
pod install --verbose
WARNING: CocoaPods requires your terminal to be using UTF-8 encoding.
Consider adding the following to ~/.profile:
export LANG=en_US.UTF-8
[!] No `Podfile' found in the project directory.
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/lib/cocoapods/command.rb:151:in `verify_podfile_exists!'
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/lib/cocoapods/command/install.rb:46:in `run'
/Library/Ruby/Gems/2.3.0/gems/claide-1.0.2/lib/claide/command.rb:334:in `run'
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/lib/cocoapods/command.rb:52:in `run'
/Library/Ruby/Gems/2.3.0/gems/cocoapods-1.8.4/bin/pod:55:in `<top (required)>'
/usr/local/bin/pod:22:in `load'
/usr/local/bin/pod:22:in `<main>'
locale
LANG=
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
|
[
"\nOpen Terminal\nType open ~/.zshrc (or open ~/.profile if you don't use zsh)\n\nIt seems that LANG=\"en_US.UTF-8\" alone isn't enough, so you have to set:\nexport LANG=en_US.UTF-8\nexport LANGUAGE=en_US.UTF-8\nexport LC_ALL=en_US.UTF-8\n\n\nSave the file\n\nGo back to Terminal and type source ~/.zshrc and type locale\n\nYou can now safely run pod update or pod install\n\n\n",
"After one day of struggle finally i found the full solution\nFollow these steps to solve this issue\n\nUnhide filles in Mac using Cmd + Shift + dot.\nGo to Macintosh HD -> User -> (Your user ).\nSearch for .zshrc\nOpen it with any editor (I recommend VS code)\nUnder. #User configuration comment out export LANG=en_US.UTF-8.\nOpen your flutter project and manually delete Pods folder, Podfile, Podfile.lock. (back-up all podfiles)\nRestart your Mac and run your flutter application\nRun pod install\nIt will automatically create podfile and its new config in UTF - 8 encoding\nEnjoy and chill i got your back !!!!!\n\n",
"\nClean files with flutter clean\nType vim .zshrc in your terminal. This should open up your .zshrc profile. Type I to insert something. Then, simply paste in export LANG=en_US.UTF-8 and hit ESC to get out and then type :wq to save and quit.\nOpened project folder, next open iOS folder in mac terminal and run pod install\nIt may give warning of using ios 9.0 so update it to 10.0, for that open Podfile in text edit and uncomment or change\n# platform :ios, '9.0' to platform :ios, '10.0'\nRun flutter build iOS in the main root project through the terminal.\nRun main root file eg. flutter run\n\nIf you are facing a problem with Flutter. Try this solution\n\nDelete the Podfile, Podfile.lock, Pods folder, Runner.xcworkspace\nflutter clean\nflutter build ios\n\n",
"finally i have solved this with below steps\n\nexport LANG=en_US.UTF-8\nopened project in Xcode and cleaned\nit.\nopened iOS folder in mac terminal and ran pod install\nit gave me warming of using ios 9.0 so i updated it to 10.0\nran flutter build ios\nproject build successfully\nopened Runner.xcworkspace in xode\nclicked on run - this time Xcode again installed pod automatically\n\nand Solved.\n",
"To iterate on Shruti Tupkari's answer ~ To add in export LANG=en_US.UTF-8 to your terminal it needs to be added to a profile such as .zshrc , .bashrc , or .bash_profile .\nTo do this simply use vim\nSo try\nvim .zshrc\n\nThis should open up your .zshrc profile. Type 'i' to insert something.\nThen simply paste in export LANG=en_US.UTF-8 hit esc to get out and then type :wq to save and quit\nTry and run your app again. If you get the error, repeat the same steps on the other profiles in your computer.\n\nHere's some information on how to use vim\nhttps://www.howtoforge.com/vim-basics\n",
"If adding export LANG=en_US.UTF-8 doesn't help, try export LC_ALL=\"en_US.UTF-8\"\nCredits:\nhttps://github.com/CocoaPods/CocoaPods/issues/6333#issuecomment-551052399\n",
"The proper solution, without re-installing anything, can be found:\n\nhere: https://stackoverflow.com/a/69160445/3821002\nand here: https://stackoverflow.com/a/69376499/3821002\n\nThe crux is to use export LC_ALL=en_US.UTF-8.\nThe links above explain how to do that.\n",
"In my case the following needed to be added to .bash_profile instead of the other suggested files.\nexport LANG=en_US.UTF-8\nexport LANGUAGE=en_US.UTF-8\nexport LC_ALL=en_US.UTF-8\n\n",
"if you use Android Studio, then open terminal in it with:\nnano ~/.profile\n\nadd these values:\nexport LANG=en_US.UTF-8\nexport LANGUAGE=en_US.UTF-8\nexport LC_ALL=en_US.UTF-8\n\nand it's very important: now you need to close Android Studio and open it again, then you will be able to make a project\n",
"To fix this, you might want to find either your ~/.bash_profile (for bash) or ~/.zshrc (for zsh) and add the export line that you put in your ~/.profile\nexport LANG=en_US.UTF-8\nThat was all I had to do.\n",
"After upgrading mac os big sur version I get this error when I try to build ios unity.\nIf you don't have the file .profile, you can create new file .profile in /Users/\"user-name\"/.profile\nstep 1: open terminal. create new file .profile\n$ cd\n$ touch .profile\n\nstep 2: edit .profile.add\nexport LANG=en_US.UTF-8\nexport LANGUAGE=en_US.UTF-8\nexport LC_ALL=en_US.UTF-8\n\nin .profile\n",
"I had this problem only in android studio. When I switched to the terminal outside Android Studio it worked fine.\n",
"Just open your terminal\nand enter the below command\nexport LC_ALL=en_US.UTF-8\n\n",
"Changing the .zshrc file didn't work for me, so I ended up with executing the cmd with specific language UTF-8 definition:\n$ LANG=en_US.UTF-8 <pod command>\n\nSource: cocoaPods issue\n",
"Solving Cocoapods UTF-8 error: (Detailed explanation)\nWe need to change the locale of the terminal to UTF-8.\nStep 1: Open the terminal, type locale, press enter, and check what locale it shows.\nStep 2: If it's a bash terminal, change it to a zsh terminal,\nStep 3: Then open finder, go to mac HD, users, folder with your username, and press command+shift+. to open hidden files\nStep 4: Create or open a file named .zshrc\nStep 5: Paste this there:\nexport LANG=en_US.UTF-8\nexport LANGUAGE=en_US.UTF-8\nexport LC_ALL=en_US.UTF-8\n\nStep 6: Save it by command + s\nStep 7: Then open the terminal again and check by typing, locale and pressing enter (Then you can either keep it open or close the terminal)\nStep 8: If it shows some other locale instead of UTF-8, then paste the below 3 lines and press enter:\nexport LANG=en_US.UTF-8\nexport LANGUAGE=en_US.UTF-8\nexport LC_ALL=en_US.UTF-8\n\nStep 9: Then open Android Studio, go to the terminal and paste the same above 3 lines, and press enter\nStep 10: Then on your project file, go to the ios folder, right-click on it, and open in the terminal, then in the terminal, do paste the same above 3 lines and press enter\nStep 11: Now you can install pods in this terminal by typing, pod install.\nNow it will work, if you still get the same error, go to tools -> flutter -> flutter clean and restart android studio by clicking on File -> restart ide or a similar one.\nThen on your project, right-click on the ios folder and open the terminal, type locale, and press enter, if it's not in UTF-8 now, then from step 9 again up to step 11, it will work now. You can run your app on iOS devices now.\nMy repo -> https://github.com/anantha-eswar/\n",
"In my case, this error only occurs when I use the android Studio terminal to run the flutter ios app.\nso instead, I used Mac Terminal to run the flutter ios app, and it's working perfectly fine.\nTo run the flutter app from the terminal use the below command:\nflutter run\n\n"
] |
[
111,
34,
17,
9,
9,
4,
3,
2,
1,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"flutter",
"ios",
"xcode"
] |
stackoverflow_0059405671_flutter_ios_xcode.txt
|
Q:
Cannot download using coursera-dl, Error 404
I am trying to use coursera-dl on Windows to download Coursera videos using this command:
coursera-dl neural-networks-deep-learning
it gives this error:
coursera_dl version 0.11.5
Downloading class: neural-networks-deep-learning (1 / 1)
Parsing syllabus of on-demand course (id=W_mOXCrdEeeNPQ68_4aPpA). This may take some time, please be patient ...
Error 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true getting page https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
The server replied: <html>
<head>
<title>Coursera - API Route Does Not Exist</title>
</head>
<body style="background-color:#e4e4e4">
<div style="position:absolute; top:0; bottom:0; left:0; right:0; margin:auto; height:200px; width: 600px">
<div style="text-align:center">
<img src="https://s3.amazonaws.com/coursera/error_pages/coursera-logo.svg" width="400">
</div>
<h1 style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:100; color: #555">
API Route Does Not Exist
</h1>
<div style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:300; font-size:13pt; color: #555">
Edge does not know about this API route. <br>
Check whether this route is exposed in the routing table.
</div>
</div>
</body>
</html>
HTTPError 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
Any ideas?
A:
Per the documentation you should download as follows:
coursera-dl -u my_coursera_username -p my_coursera_password neural-networks-deep-learning
Note that you won't be able to access the course materials if you are not officially enrolled via the website.
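If username/password login still fails (Coursera has changed its login flow over time), the coursera-dl README also describes a cookie-based fallback. As an assumption, verify that your installed version supports the -ca/--cauth flag via coursera-dl --help before relying on it:
coursera-dl -ca "<CAUTH cookie value copied from your browser>" neural-networks-deep-learning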
|
Cannot download using coursera-dl, Error 404
|
I am trying to use coursera-dl in windows to download coursera videos using this command:
coursera-dl neural-networks-deep-learning
it gives this error:
coursera_dl version 0.11.5
Downloading class: neural-networks-deep-learning (1 / 1)
Parsing syllabus of on-demand course (id=W_mOXCrdEeeNPQ68_4aPpA). This may take some time, please be patient ...
Error 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true getting page https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
The server replied: <html>
<head>
<title>Coursera - API Route Does Not Exist</title>
</head>
<body style="background-color:#e4e4e4">
<div style="position:absolute; top:0; bottom:0; left:0; right:0; margin:auto; height:200px; width: 600px">
<div style="text-align:center">
<img src="https://s3.amazonaws.com/coursera/error_pages/coursera-logo.svg" width="400">
</div>
<h1 style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:100; color: #555">
API Route Does Not Exist
</h1>
<div style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:300; font-size:13pt; color: #555">
Edge does not know about this API route. <br>
Check whether this route is exposed in the routing table.
</div>
</div>
</body>
</html>
HTTPError 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
any ideas ?
|
[
"Per the documentation you should download as follows:\ncoursera-dl -u my_coursera_username -p my_coursera_password neural-networks-deep-learning\n\nNote that you won't be able to access the course materials if you are not officially enrolled via the website.\n"
] |
[
0
] |
[] |
[] |
[
"cmd",
"coursera_api",
"python"
] |
stackoverflow_0074662735_cmd_coursera_api_python.txt
|
Q:
How to pass state between renderCell components in MUI Data Grid
How can I change the MenuItems of one Select when another Select component changes using DataGrid? I need to be able to pass the state of one Select component to the other, but I'm not sure how when using renderCell.
For example, let's say I have the following object:
const data = {
"/path/to/file1.csv": {
parameters: ["Parameter 1", "Parameter 2", "Parameter 3"],
},
"/path/to/file2.csv": {
parameters: ["Parameter 2", "Parameter 3", "Parameter 4"],
},
"/path/to/file3.csv": {
parameters: ["Parameter 5", "Parameter 6", "Parameter 7"],
},
};
In my DataGrid table, every time I add a new row with the click of a button, the first cell has a Select component containing Object.keys(data).
The second cell contains another Select component. I want this Select component to contain parameters that are dependent on the value selected. For example, if /path/to/file1.csv is selected, I want to make available those parameters (Parameter 1, Parameter 2, Parameter 3), but if /path/to/file3.csv is selected, I want to make available those parameters (Parameter 5, Parameter 6, Parameter 7).
Here's my component:
import * as React from "react";
import PropTypes from "prop-types";
import { Button, Select, MenuItem } from "@mui/material";
import DeleteIcon from "@mui/icons-material/Delete";
import { DataGrid, GridActionsCellItem } from "@mui/x-data-grid";
const FileSelect = (props) => {
const { value } = props;
const [file, setFile] = React.useState("");
const handleChange = (event) => {
setFile(event.target.value);
};
return (
<Select id="file-select" value={file} onChange={handleChange} fullWidth>
{value?.map((item, index) => (
<MenuItem key={index} value={item}>
{item}
</MenuItem>
))}
</Select>
);
};
FileSelect.propTypes = {
value: PropTypes.array,
};
const ParameterSelect = (props) => {
const { value } = props;
const [parameter, setParameter] = React.useState("");
const handleChange = (event) => {
setParameter(event.target.value);
};
return (
<Select
id="parameter-select"
value={parameter}
onChange={handleChange}
fullWidth
>
{value?.map((item, index) => (
<MenuItem key={index} value={item}>
{item}
</MenuItem>
))}
</Select>
);
};
export default function DataGridTable(props) {
const { data } = props;
const files = Object.keys(data);
const [rows, setRows] = React.useState([]);
const columns = [
{
field: "file",
headerName: "File",
// width: 200,
flex: 1,
renderCell: FileSelect,
},
{
field: "x",
headerName: "X",
// width: 200,
flex: 0.5,
renderCell: ParameterSelect,
},
{
field: "actions",
headerName: "Delete",
type: "actions",
width: 80,
getActions: (params) => [
<GridActionsCellItem
icon={<DeleteIcon />}
label="Delete"
onClick={deleteRow(params.id)}
/>,
],
},
];
const handleClick = () => {
const newRow = {
id: rows.length + 1,
file: files,
x: [],
};
setRows((prevState) => [...prevState, newRow]);
};
const deleteRow = React.useCallback(
(id) => () => {
setTimeout(() => {
setRows((prevRows) => prevRows.filter((row) => row.id !== id));
});
},
[]
);
return (
<div>
<Button variant="contained" onClick={handleClick}>
Add row
</Button>
<div style={{ height: 300, width: "100%" }}>
<DataGrid rows={rows} columns={columns} disableSelectionOnClick />
</div>
</div>
);
}
A:
The simplest way I could think of to accomplish this is by adding an extra field to the column definition as an "easy" place to store the selected value.
...
const FileSelect = (props) => {
const { value, row } = props;
const [file, setFile] = React.useState("");
const handleChange = (event) => {
setFile(event.target.value);
// Set the value here
row.selectedFile = event.target.value;
};
return (
<Select id="file-select" value={file} onChange={handleChange} fullWidth>
{value?.map((item, index) => (
<MenuItem key={index} value={item}>
{item}
</MenuItem>
))}
</Select>
);
};
...
{
field: "selectedFile",
hideable: true
},
...
FileSelect now stores the selected value (file) on the row, under the hidden selectedFile column. Then all that was left to do was to make the parameter lookup values available to the ParameterSelect. Again, I just stuffed them into the renderCell props, but this could be done better as well:
...
{
field: "x",
headerName: "X",
flex: 0.5,
// Passing the entire original data in as an extra param, for demonstration purposes
renderCell: (props) => ParameterSelect({ ...props, data })
},
...
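For completeness, here is a hedged sketch of how ParameterSelect could then derive its options from the hidden selectedFile field (assuming the props wiring shown above):

const ParameterSelect = (props) => {
  const { row, data } = props;
  const [parameter, setParameter] = React.useState("");
  // Look up the parameters for whichever file was picked in this row
  const options = data[row.selectedFile]?.parameters ?? [];

  return (
    <Select value={parameter} onChange={(e) => setParameter(e.target.value)} fullWidth>
      {options.map((item, index) => (
        <MenuItem key={index} value={item}>
          {item}
        </MenuItem>
      ))}
    </Select>
  );
};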
Finally, just hide the selectedFile column:
...
<DataGrid
rows={rows}
columns={columns}
disableSelectionOnClick
// Hiding the extra field
columnVisibilityModel={{
selectedFile: false
}}
/>
...
Producing the result shown in the working sandbox below (I changed your values to make them easier to read while I was working).
Working CodeSandBox: https://codesandbox.io/s/prod-sun-bdvcu0?file=/demo.js:842-854
|
How to pass state between renderCell components in MUI Data Grid
|
How can I change the MenuItems of one Select when another Select component changes using DataGrid? I need to be able to pass the state of one Select component to the other, but I'm not sure how when using renderCell.
For example, let's say I have the following object:
const data = {
"/path/to/file1.csv": {
parameters: ["Parameter 1", "Parameter 2", "Parameter 3"],
},
"/path/to/file2.csv": {
parameters: ["Parameter 2", "Parameter 3", "Parameter 4"],
},
"/path/to/file3.csv": {
parameters: ["Parameter 5", "Parameter 6", "Parameter 7"],
},
};
In my DataGrid table, every time I add a new row with the click of a button, the first cell has a Select component containing Object.keys(data).
The second cell contains another Select component. I want this Select component to contain parameters that are dependent on the value selected. For example, if /path/to/file1.csv is selected, I want to make available those parameters (Parameter 1, Parameter 2, Parameter 3), but if /path/to/file3.csv is selected, I want to make available those parameters (Parameter 5, Parameter 6, Parameter 7).
Here's my component:
import * as React from "react";
import PropTypes from "prop-types";
import { Button, Select, MenuItem } from "@mui/material";
import DeleteIcon from "@mui/icons-material/Delete";
import { DataGrid, GridActionsCellItem } from "@mui/x-data-grid";
const FileSelect = (props) => {
const { value } = props;
const [file, setFile] = React.useState("");
const handleChange = (event) => {
setFile(event.target.value);
};
return (
<Select id="file-select" value={file} onChange={handleChange} fullWidth>
{value?.map((item, index) => (
<MenuItem key={index} value={item}>
{item}
</MenuItem>
))}
</Select>
);
};
FileSelect.propTypes = {
value: PropTypes.array,
};
const ParameterSelect = (props) => {
const { value } = props;
const [parameter, setParameter] = React.useState("");
const handleChange = (event) => {
setParameter(event.target.value);
};
return (
<Select
id="parameter-select"
value={parameter}
onChange={handleChange}
fullWidth
>
{value?.map((item, index) => (
<MenuItem key={index} value={item}>
{item}
</MenuItem>
))}
</Select>
);
};
export default function DataGridTable(props) {
const { data } = props;
const files = Object.keys(data);
const [rows, setRows] = React.useState([]);
const columns = [
{
field: "file",
headerName: "File",
// width: 200,
flex: 1,
renderCell: FileSelect,
},
{
field: "x",
headerName: "X",
// width: 200,
flex: 0.5,
renderCell: ParameterSelect,
},
{
field: "actions",
headerName: "Delete",
type: "actions",
width: 80,
getActions: (params) => [
<GridActionsCellItem
icon={<DeleteIcon />}
label="Delete"
onClick={deleteRow(params.id)}
/>,
],
},
];
const handleClick = () => {
const newRow = {
id: rows.length + 1,
file: files,
x: [],
};
setRows((prevState) => [...prevState, newRow]);
};
const deleteRow = React.useCallback(
(id) => () => {
setTimeout(() => {
setRows((prevRows) => prevRows.filter((row) => row.id !== id));
});
},
[]
);
return (
<div>
<Button variant="contained" onClick={handleClick}>
Add row
</Button>
<div style={{ height: 300, width: "100%" }}>
<DataGrid rows={rows} columns={columns} disableSelectionOnClick />
</div>
</div>
);
}
|
[
"The simplest way that I could think to accomplish this is by adding an extra field to the column definition as an \"easy\" place to store the selected value.\n...\n\nconst FileSelect = (props) => {\n const { value, row } = props;\n\n const [file, setFile] = React.useState(\"\");\n\n const handleChange = (event) => {\n setFile(event.target.value);\n // Set the value here\n row.selectedFile = event.target.value;\n };\n\n return (\n <Select id=\"file-select\" value={file} onChange={handleChange} fullWidth>\n {value?.map((item, index) => (\n <MenuItem key={index} value={item}>\n {item}\n </MenuItem>\n ))}\n </Select>\n );\n};\n\n...\n\n{\n field: \"selectedFile\",\n hideable: true\n},\n\n...\n\nThen set the selected value (file) in the FileSelect parent value in the selectedFile column. Then all that was left to do was to make the parameters lookup values available to the ParameterSelect. Again, I just stuffed them into the renderCell props, but this could be done better as well:\n...\n\n{\n field: \"x\",\n headerName: \"X\",\n flex: 0.5,\n // Passing the entire original data in as an extra param, for demonstration purposes\n renderCell: (props) => ParameterSelect({ ...props, data })\n},\n\n...\n\nFinally, just hide the selectedFile column:\n...\n\n<DataGrid\n rows={rows}\n columns={columns}\n disableSelectionOnClick\n // Hiding the extra field\n columnVisibilityModel={{\n selectedFile: false\n }}\n/>\n\n...\n\nProducing this: (I changed your values to make them easier to read while I was working)\n\nWorking CodeSandBox: https://codesandbox.io/s/prod-sun-bdvcu0?file=/demo.js:842-854\n"
] |
[
2
] |
[] |
[] |
[
"material_ui",
"react_mui",
"reactjs"
] |
stackoverflow_0074650702_material_ui_react_mui_reactjs.txt
|
Q:
Logs written to os.Stdout using zerolog aren't visible in Google Cloud Run logs
I'm trying to run thomseddon/traefik-forward-auth in GCP's Cloud Run. The project uses sirupsen/logrus to write structured logs to os.Stdout. The logs show up in the terminal via stdout when running in a docker container on a few different development machines I have access to.
When deploying on Cloud Run, the container starts and responds as expected. However the container's logs don't show up in the Cloud Run Logs tab or the Logs Explorer. I do see a bunch of Cloud Run event logs and HTTP access logs like the following:
2022-12-01 20:45:57.195 CST
Cloud Run traefik-forward-auth {@type: type.googleapis.com/google.cloud.audit.AuditLog, resourceName: namespaces/my-project/services/traefik-forward-auth, response: {…}, serviceName: run.googleapis.com, status: {…}}
2022-12-02 16:46:50.598 CST
Cloud Run ReplaceService traefik-forward-auth [email protected] {@type: type.googleapis.com/google.cloud.audit.AuditLog, authenticationInfo: {…}, authorizationInfo: […], methodName: google.cloud.run.v1.Services.ReplaceService, request: {…}, requestMetadata: {…}, resourceLocation: {…}, resourceName: namespaces/my-project/services/traefik-forward-auth, response: {…}…
2022-12-02 16:46:58.956 CST
Cloud Run traefik-forward-auth-00017-rav {@type: type.googleapis.com/google.cloud.audit.AuditLog, resourceName: namespaces/my-project/revisions/traefik-forward-auth-00017-rav, response: {…}, serviceName: run.googleapis.com, status: {…}}
2022-12-02 16:47:04.639 CST
Cloud Run traefik-forward-auth {@type: type.googleapis.com/google.cloud.audit.AuditLog, resourceName: namespaces/my-project/services/traefik-forward-auth, response: {…}, serviceName: run.googleapis.com, status: {…}}
2022-12-02 16:48:54.874 CST
GET302 0 B 0 ms Firefox 105 http://oauth.domain.com/
I was able to successfully get some container logs to show up using alternative logging options including fmt.Println() and log.Print(). While that's nice, I'd really like to stick with structured logging. So I even replaced sirupsen/logrus with rs/zerolog thinking logrus is just an older package and maybe something's changed in the past year-and-change. I get the same results with zerolog.
From other posts, it appears that there are two common "problems" others run into with logs in GCP not showing up:
The container process doing the logging isn't the one that was called by the entrypoint. The entrypoint for this container is the compiled Golang binary and nothing else, so I'm pretty sure this isn't my case.
Or the logging package/library is buffering the output and it needs to be flushed. According to the golang documentation and Google Group, os.Stdout.Write() doesn't buffer and from what I can find in the docs and code for both logrus and zerolog, neither do they.
I'm stumped and I'd appreciate any help. I'm new-ish to both GCP and Golang and I feel like the answer is obvious and I'm just missing it.
A:
It sounds like you're doing everything correctly in terms of how you're logging from your application. It's possible that the issue is with how Cloud Run is handling the logs from your application.
By default, Cloud Run captures anything written to stdout and stderr, so if your application is logging to a different output stream (such as a file on disk), those logs will not be captured. For structured logs, Cloud Logging parses each JSON line and looks for specific field names (most importantly severity for the log level and message for the payload), so entries whose fields use different names may not be indexed or filtered the way you expect.
If you're still not seeing your logs after making sure that you're logging to stdout/stderr with the expected field names, it's possible that there is an issue with how Cloud Run is collecting and storing the logs. In this case, you may want to contact GCP support for help troubleshooting the issue.
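If the problem is specifically that zerolog's JSON doesn't use the field names Cloud Logging looks for, a minimal sketch of remapping them with rs/zerolog's package-level settings (the uppercase severity mapping is an assumption about what Cloud Logging prefers; adjust to your setup):
package main

import (
    "os"
    "strings"

    "github.com/rs/zerolog"
)

func main() {
    // Rename zerolog's default fields to the special names Cloud Logging parses.
    zerolog.LevelFieldName = "severity"
    zerolog.TimestampFieldName = "timestamp"
    // Cloud Logging severities are uppercase ("INFO", "ERROR", ...).
    zerolog.LevelFieldMarshalFunc = func(l zerolog.Level) string {
        return strings.ToUpper(l.String())
    }

    logger := zerolog.New(os.Stdout).With().Timestamp().Logger()
    logger.Info().Str("component", "forward-auth").Msg("visible in Logs Explorer")
}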
|
Logs written to os.Stdout using zerolog aren't visible in Google Cloud Run logs
|
I'm trying to run thomseddon/traefik-forward-auth in GCP's Cloud Run. The project uses sirupsen/logrus to write structured logs to os.Stdout. The logs show up in the terminal via stdout when running in a docker container on a few different development machines I have access to.
When deploying on Cloud Run, the container starts and responds as expected. However the container's logs don't show up in the Cloud Run Logs tab or the Logs Explorer. I do see a bunch of Cloud Run event logs and HTTP access logs like the following:
2022-12-01 20:45:57.195 CST
Cloud Run traefik-forward-auth {@type: type.googleapis.com/google.cloud.audit.AuditLog, resourceName: namespaces/my-project/services/traefik-forward-auth, response: {…}, serviceName: run.googleapis.com, status: {…}}
2022-12-02 16:46:50.598 CST
Cloud Run ReplaceService traefik-forward-auth [email protected] {@type: type.googleapis.com/google.cloud.audit.AuditLog, authenticationInfo: {…}, authorizationInfo: […], methodName: google.cloud.run.v1.Services.ReplaceService, request: {…}, requestMetadata: {…}, resourceLocation: {…}, resourceName: namespaces/my-project/services/traefik-forward-auth, response: {…}…
2022-12-02 16:46:58.956 CST
Cloud Run traefik-forward-auth-00017-rav {@type: type.googleapis.com/google.cloud.audit.AuditLog, resourceName: namespaces/my-project/revisions/traefik-forward-auth-00017-rav, response: {…}, serviceName: run.googleapis.com, status: {…}}
2022-12-02 16:47:04.639 CST
Cloud Run traefik-forward-auth {@type: type.googleapis.com/google.cloud.audit.AuditLog, resourceName: namespaces/my-project/services/traefik-forward-auth, response: {…}, serviceName: run.googleapis.com, status: {…}}
2022-12-02 16:48:54.874 CST
GET302 0 B 0 ms Firefox 105 http://oauth.domain.com/
I was able to successfully get some container logs to show up using alternative logging options including fmt.Println() and log.Print(). While that's nice, I'd really like to stick with structured logging. So I even replaced sirupsen/logrus with rs/zerolog thinking logrus is just an older package and maybe something's changed in the past year-and-change. I get the same results with zerolog.
From other posts, it appears that there are two common "problems" others run into with logs in GCP not showing up:
The container process doing the logging isn't the one that was called by the entrypoint. The entrypoint for this container is the compiled Golang binary and nothing else, so I'm pretty sure this isn't my case.
Or the logging package/library is buffering the output and it needs to be flushed. According to the golang documentation and Google Group, os.Stdout.Write() doesn't buffer and from what I can find in the docs and code for both logrus and zerolog, neither do they.
I'm stumped and I'd appreciate any help. I'm new-ish to both GCP and Golang and I feel like the answer is obvious and I'm just missing it.
|
[
"It sounds like you're doing everything correctly in terms of how you're logging from your application. It's possible that the issue is with how Cloud Run is handling the logs from your application.\nBy default, Cloud Run only captures logs from stdout and stderr, so if your application is logging to a different output stream (such as a file on disk), then those logs will not be captured. Additionally, Cloud Run only captures logs when the log level is set to \"Debug\" or higher. You can change the log level in the Cloud Run service's \"Configuration\" page in the GCP Console.\nIf you're still not seeing your logs after making sure that you're logging to stdout/stderr and setting the log level to \"Debug\" or higher, it's possible that there is an issue with how Cloud Run is collecting and storing the logs. In this case, you may want to contact GCP support for help troubleshooting the issue.\n"
] |
[
0
] |
[] |
[] |
[
"go",
"google_cloud_logging",
"google_cloud_platform",
"google_cloud_run",
"logging"
] |
stackoverflow_0074662692_go_google_cloud_logging_google_cloud_platform_google_cloud_run_logging.txt
|
Q:
Is this an effective way to determine if someone has won in Connect 4?
I'm using the following function to determine whether a winner has been crowned in Connect Four. Piece is whether they are green or red, last is the last played move (by piece), and name is the Discord name of the person playing the game, as it is a file-based Connect Four game. Board is a 2D array made up of empty and filled squares. Since the game is written in Python, is this an efficient way to check?
Examples:
Piece:
:green_circle:
Board:
[[':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':green_circle:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:']]
Last:
5,1
Discord View:
def checks(piece, last, name):
board = []
open_file = open(name, "r")
thing = open_file.readline()
for x in range(6):
value = open_file.readline()
board.append(value.strip("\n").split(","))
open_file.close()
cords = last.split(',')
i = int(cords[0]) # row/x
j = int(cords[1]) # column/y
# checks for 000_
if j > 2:
if board[i][j - 1] == piece and board[i][j - 2] == piece and board[i][
j - 3] == piece:
return piece + " won"
# checks for _000
if j < 4:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j + 3] == piece:
return piece + " won"
# checks for downs
if i < 3:
if board[i + 1][j] == piece and board[i + 2][j] == piece and board[
i + 3][j] == piece:
return piece + " won"
#check if you place in a 00_0
if not j in [0, 1, 6]:
if board[i][j + 1] == piece and board[i][j - 1] == piece and board[i][
j - 2] == piece:
return piece + " won"
#check for 0_00
if not j in [0, 5, 6]:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j - 1] == piece:
return piece + " won"
# check for top piece of a down-right diagonal
if i < 3 and j < 4:
if board[i + 1][j + 1] == piece and board[i + 2][j + 2] == piece and board[
i + 3][j + 3] == piece:
return piece + " won"
# check for bottom piece of a down-right diagonal
if i > 2 and j > 2:
if board[i - 1][j - 1] == piece and board[i - 2][j - 2] == piece and board[
i - 3][j - 3] == piece:
return piece + " won"
# check for top piece of down-left diagonal
if i < 3 and j > 2:
if board[i + 1][j - 1] == piece and board[i + 2][j - 2] == piece and board[
i + 3][j - 3] == piece:
return piece + " won"
# check for bottom piece of down-left diagonal
if i > 2 and j < 4:
if board[i - 1][j + 1] == piece and board[i - 2][j + 2] == piece and board[
i - 3][j + 3] == piece:
return piece + " won"
# check for 2nd top piece of down-right diagonal
if i in [1,2,3] and j in [1,2,3,4]:
if board[i - 1][j - 1] == piece and board[i +1 ][j + 1] == piece and board[i +2][j +2] == piece:
return piece + " won"
# check for 3rd piece of down-right diagonal
if i in [2,3,4] and j in [2,3,4,5]:
if board[i - 1][j - 1] == piece and board[i -2 ][j -2] == piece and board[i +1][j +1] == piece:
return piece + " won"
# check for 2nd piece of down-left diagonal
if i in [1,2,3] and j in [2,3,4,5]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i +2][j -2] == piece:
return piece + " won"
# check for 3rd piece in down-left diagonal
if i in [2,3,4] and j in [1,2,3,4]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i -2][j +2] == piece:
return piece + " won"
A:
Keeping in mind your conditions are apt, your code could be enhanced in the following manners:
Replacing conditions with all()
Avoiding nested if conditions
Using elif in places of if
Use min() to check inequality for smallest rather than checking both i and j
Combine conditions to make it faster
Here's just an enhanced version of your code:
def checks(piece, last, name):
    board = []
    open_file = open(name, "r")
    # thing = open_file.readline()
    for x in range(6):
        value = open_file.readline()
        board.append(value.strip("\n").split(","))
    open_file.close()
    cords = last.split(',')
    i = int(cords[0])  # row/x
    j = int(cords[1])  # column/y
    winMsg = f"{piece} won"  # create variable for ease
    if j > 2:
        if all(piece == value for value in [board[i][j-1], board[i][j-2], board[i][j-3]]) or all(piece == value for value in [board[i+1][j-1], board[i+2][j-2], board[i+3][j-3]]): return winMsg
    elif all(piece == value for value in [board[i][j+1], board[i][j+2], board[i][j+3]]): return winMsg
    elif all(piece == value for value in [board[i+1][j], board[i+2][j], board[i+3][j]]): return winMsg
    elif j not in [0, 1, 6] and all(piece == value for value in [board[i][j+1], board[i][j-1], board[i][j-2]]): return winMsg
    elif j not in [0, 5, 6] and all(piece == value for value in [board[i][j-1], board[i][j+1], board[i][j+2]]): return winMsg
    elif all(piece == value for value in [board[i+1][j+1], board[i][j+2], board[i][j-1]]): return winMsg
    elif min(i, j) > 2 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i-3][j-3]]): return winMsg
    elif i > 2 and all(piece == value for value in [board[i-1][j+1], board[i-2][j+2], board[i-3][j+3]]): return winMsg
    elif i in [1,2,3]:
        if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j-1], board[i+1][j+1], board[i+2][j+2]]): return winMsg
        elif j == 5 and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i+2][j-2]]): return winMsg
    elif i == 4:
        if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i-2][j+2]]): return winMsg
        elif j == 5 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i+1][j+1]]): return winMsg
If you could provide an exact input when there is a win, maybe a better approach could be made. Hope this helps :)
A:
Not sure if this is faster but I've done this before in Numpy. Here's how I did it:
import numpy as np


class Connect4Game():

    # Construct a set of binary masks to find connect 4s
    win_mask = np.zeros((4*4, 7, 7), 'bool')
    idx1 = np.array(range(4))
    idx2 = np.array([3]*4)
    for i in range(4):
        win_mask[([i]*4, idx1+i, idx2)] = True
        win_mask[([i+4]*4, idx2, idx1+i)] = True
        win_mask[([i+8]*4, idx1+i, idx1+i)] = True
        win_mask[([i+12]*4, 6-idx1-i, idx1+i)] = True

    def __init__(self, data=None):
        # Extend the board area by adding borders
        self.ext_board = np.zeros((12, 13), 'int8')
        # Make the board a view slice
        self.board = self.ext_board.view()[3:9, 3:10]
        if data is not None:
            self.load_game(data)

    def reset(self):
        self.board[:, :] = 0

    def load_game(self, data):
        data = np.array(data)
        assert(data.shape == (6, 7))
        self.reset()
        self.board[data == ':green_circle:'] = 1
        self.board[data == ':red_circle:'] = 2

    def check_for_win(self, last, piece):
        row, col = last
        selection = self.ext_board[row:row+7, col:col+7]
        wins = np.nonzero(np.all(
            ((selection == piece) & self.win_mask)
            == self.win_mask, axis=(1, 2)
        ))[0]
        return wins.tolist()


# Demo
g = Connect4Game(example_board)
print(g.board)
last = (5, 1)
piece = 1
assert g.board[last] == piece
wins = g.check_for_win(last, piece)
print(wins)
Output:
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0, 0]], dtype=int8)
[]
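For reference, a common alternative to enumerating every window pattern is to scan outward from the last move along the four line directions and count consecutive pieces. A minimal sketch in Python, assuming board, piece and the move coordinates i, j as in the question:
def is_win(board, piece, i, j):
    rows, cols = len(board), len(board[0])
    # The four line directions: horizontal, vertical and both diagonals.
    for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1  # the piece just placed
        for sign in (1, -1):  # walk both ways from (i, j)
            r, c = i + sign * di, j + sign * dj
            while 0 <= r < rows and 0 <= c < cols and board[r][c] == piece:
                count += 1
                r, c = r + sign * di, c + sign * dj
        if count >= 4:
            return True
    return False
This checks at most three cells on each side per direction, so it stays cheap regardless of board size.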
|
Is this an effective way to determine if someone has won in Connect 4?
|
I'm using the following function to determine whether a winner has been crowned in Connect Four. Piece is whether they are green or red, last is the last played move (by piece), and name is the Discord name of the person playing the game, as it is a file-based Connect Four game. Board is a 2D array made up of empty and filled squares. Since the game is written in Python, is this an efficient way to check?
Examples:
Piece:
:green_circle:
Board:
[[':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':green_circle:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:']]
Last:
5,1
Discord View:
def checks(piece, last, name):
board = []
open_file = open(name, "r")
thing = open_file.readline()
for x in range(6):
value = open_file.readline()
board.append(value.strip("\n").split(","))
open_file.close()
cords = last.split(',')
i = int(cords[0]) # row/x
j = int(cords[1]) # column/y
# checks for 000_
if j > 2:
if board[i][j - 1] == piece and board[i][j - 2] == piece and board[i][
j - 3] == piece:
return piece + " won"
# checks for _000
if j < 4:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j + 3] == piece:
return piece + " won"
# checks for downs
if i < 3:
if board[i + 1][j] == piece and board[i + 2][j] == piece and board[
i + 3][j] == piece:
return piece + " won"
#check if you place in a 00_0
if not j in [0, 1, 6]:
if board[i][j + 1] == piece and board[i][j - 1] == piece and board[i][
j - 2] == piece:
return piece + " won"
#check for 0_00
if not j in [0, 5, 6]:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j - 1] == piece:
return piece + " won"
# check for top piece of a down-right diagonal
if i < 3 and j < 4:
if board[i + 1][j + 1] == piece and board[i + 2][j + 2] == piece and board[
i + 3][j + 3] == piece:
return piece + " won"
# check for bottom piece of a down-right diagonal
if i > 2 and j > 2:
if board[i - 1][j - 1] == piece and board[i - 2][j - 2] == piece and board[
i - 3][j - 3] == piece:
return piece + " won"
# check for top piece of down-left diagonal
if i < 3 and j > 2:
if board[i + 1][j - 1] == piece and board[i + 2][j - 2] == piece and board[
i + 3][j - 3] == piece:
return piece + " won"
# check for bottom piece of down-left diagonal
if i > 2 and j < 4:
if board[i - 1][j + 1] == piece and board[i - 2][j + 2] == piece and board[
i - 3][j + 3] == piece:
return piece + " won"
# check for 2nd top piece of down-right diagonal
if i in [1,2,3] and j in [1,2,3,4]:
if board[i - 1][j - 1] == piece and board[i +1 ][j + 1] == piece and board[i +2][j +2] == piece:
return piece + " won"
# check for 3rd piece of down-right diagonal
if i in [2,3,4] and j in [2,3,4,5]:
if board[i - 1][j - 1] == piece and board[i -2 ][j -2] == piece and board[i +1][j +1] == piece:
return piece + " won"
# check for 2nd piece of down-left diagonal
if i in [1,2,3] and j in [2,3,4,5]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i +2][j -2] == piece:
return piece + " won"
# check for 3rd piece in down-left diagonal
if i in [2,3,4] and j in [1,2,3,4]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i -2][j +2] == piece:
return piece + " won"
|
[
"Keeping in mind your conditions are apt, your code could be enhanced in the following manners:\n\nReplacing conditions with all()\nAvoiding nested if conditions\nUsing elif in places of if\nUse min() to check inequality for smallest rather than checking both i and j\nCombine conditions to make it faster\n\nHere's just an enhanced version of your code:\ndef checks(piece, last, name):\n board = []\n open_file = open(name, \"r\")\n # thing = open_file.readline()\n for x in range(6):\n value = open_file.readline()\n board.append(value.strip(\"\\n\").split(\",\"))\n open_file.close()\n cords = last.split(',')\n i = int(cords[0]) # row/x\n j = int(cords[1]) # column/y\n winMsg = f\"{piece} win\" # create variable for ease\n if j > 2:\n if all(piece == value for value in [board[i][j-1], board[i][j-2], board[i][j-2]]) or all(piece == value for value in [board[i+1][j-1], board[i+2][j-2], board[i+3][j-3]]): return winMsg\n elif all(piece == value for value in [board[i][j+1], board[i][j+2], board[i][j+3]]): return winMsg\n elif all(piece == value for value in [board[i+1][j], board[i+2][j], board[i+3][j]]): return winMsg\n elif j not in [0, 1, 6] and all(piece == value for value in [board[i][j+1], board[i][j-1], board[i][j-2]]): return winMsg\n elif j not in [0, 5, 6] and all(piece == value for value in [board[i][j-1], board[i][j+1], board[i][j+2]]): return winMsg\n elif all(piece == value for value in [board[i+1][j+1], board[i][j+2], board[i][j-1]]): return winMsg\n elif min(i, j) > 2 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i-3][j-3]]): return winMsg\n elif i > 2 and all(piece == value for value in [board[i-1][j+1], board[i-2][j+2], board[i-3][j+3]]): return winMsg\n elif i in [1,2,3]:\n if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j-1], board[i+1][j+1], board[i+2][j+2]]): return winMsg\n elif j == 5 and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i+2][j-2]]): return winMsg\n elif i == 4:\n if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i-2][j+2]]): return winMsg\n elif j == 5 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i+1][j+1]]): return winMsg\n\nIf you could provide an exact input when there is a win, maybe a better approach could be made. Hope this helps :)\n",
"Not sure if this is faster but I've done this before in Numpy. Here's how I did it:\nimport numpy as np\n\n\nclass Connect4Game():\n\n # Construct a set of binary masks to find connect 4s\n win_mask = np.zeros((4*4, 7, 7), 'bool')\n idx1 = np.array(range(4))\n idx2 = np.array([3]*4)\n for i in range(4):\n win_mask[([i]*4, idx1+i, idx2)] = True\n win_mask[([i+4]*4, idx2, idx1+i)] = True\n win_mask[([i+8]*4, idx1+i, idx1+i)] = True\n win_mask[([i+12]*4, 6-idx1-i, idx1+i)] = True\n\n def __init__(self, data=None):\n # Extend the board area by adding borders\n self.ext_board = np.zeros((12, 13), 'int8')\n # Make the board a view slice\n self.board = self.ext_board.view()[3:9, 3:10]\n if data is not None:\n self.load_game(data)\n\n def reset(self):\n self.board [:, :] = 0\n\n def load_game(self, data):\n data = np.array(data)\n assert(data.shape == (6, 7))\n self.reset()\n self.board[data == ':green_circle:'] = 1\n self.board[data == ':red_circle:'] = 2\n\n def check_for_win(self, last, piece):\n row, col = last\n selection = self.ext_board[row:row+7, col:col+7]\n wins = np.nonzero(np.all(\n ((selection == piece) & self.win_mask) \n == self.win_mask, axis=(1, 2)\n ))[0]\n return wins.tolist()\n\n\n# Demo\ng = Connect4Game(example_board)\nprint(g.board)\nlast = (5, 1)\npiece = 1\nassert g.board[last] == piece\nwins = g.check_for_win(last, piece)\nprint(wins)\n\nOutput:\narray([[0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 0, 0]], dtype=int8)\n\n[]\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"connect_four",
"discord.py",
"python"
] |
stackoverflow_0074664215_connect_four_discord.py_python.txt
|
Q:
On my GitHub Pages website the images won't load. Does anyone know why? This is the website
So I have been working on my website for a few months now and the background image won't load.
This is what the CSS code looks like:
.main{
width: 100%;
background: linear-gradient(to top, rgba(0,0,0,0.5)50%,rgb(0, 0, 0, 0.5)50%), url(file:///home/g7adz177a/Downloads/website%20background.png);
background-position: center;
background-size: cover;
height: 109vh;
}
I expected it to work on GitHub Pages because when I opened the HTML file locally, it worked in the browser.
https://avrydacool1.github.io/avrysartshow.github.io/ (the website)
A:
Your background image seems to be loading now (you embedded it in the style sheet with commit e789643), probably because the previous url was not supported:
url(https://github.com/AvryDaCool1/avrysartshow.github.io/blob/main/website%20background.png);
What might have worked is (as illustrated here):
url(https://avrydacool1.github.io/avrysartshow.github.io/website%20background.png);
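A relative url() is another option, since it resolves against the location of the stylesheet itself and avoids hard-coding the host (a sketch, assuming the image sits next to the CSS/HTML in the repository root):
 url(website%20background.png);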
|
On my GitHub Pages website the images won't load. Does anyone know why? This is the website
|
So I have been working on my website for a few months now and the background image won't load.
This is what the CSS code looks like:
.main{
width: 100%;
background: linear-gradient(to top, rgba(0,0,0,0.5)50%,rgb(0, 0, 0, 0.5)50%), url(file:///home/g7adz177a/Downloads/website%20background.png);
background-position: center;
background-size: cover;
height: 109vh;
}
I expected it to work on GitHub Pages because when I opened the HTML file locally, it worked in the browser.
https://avrydacool1.github.io/avrysartshow.github.io/ (the website)
|
[
"Your background image seems to be loading now (you have embedded in the style sheet with commit e789643), probably because the previous url was not supported:\n url(https://github.com/AvryDaCool1/avrysartshow.github.io/blob/main/website%20background.png);\n\nWhat might have worked is (as illustrated here):\n url(https://avrydacool1.github.io/avrysartshow.github.io/website%20background.png);\n\n"
] |
[
0
] |
[] |
[] |
[
"css",
"github",
"github_pages",
"html"
] |
stackoverflow_0074648541_css_github_github_pages_html.txt
|
Q:
jQuery remove options from select
I have a page with 5 selects that all have a class name 'ct'. I need to remove the option with a value of 'X' from each select while running an onclick event. My code is:
$(".ct").each(function() {
$(this).find('X').remove();
});
Where am I going wrong?
A:
Try this:
$(".ct option[value='X']").each(function() {
$(this).remove();
});
Or to be more terse, this will work just as well:
$(".ct option[value='X']").remove();
A:
$('.ct option').each(function() {
if ( $(this).val() == 'X' ) {
$(this).remove();
}
});
Or just
$('.ct option[value="X"]').remove();
Main point is that find takes a selector string, by feeding it x you are looking for elements named x.
A:
find() takes a selector, not a value. This means you need to use it in the same way you would use the regular jQuery function ($('selector')).
Therefore you need to do something like this:
$(this).find('[value="X"]').remove();
See the jQuery find docs.
A:
It works on either option tag or text field:
$("#idname option[value='option1']").remove();
A:
If no id or class is available for the option values, one can remove all the values from the dropdown as below:
$(this).find('select').find('option[value]').remove();
A:
Iterating a list and removing multiple items using a find.
Response contains an array of integers. $('#OneSelectList') is a select list.
$.ajax({
url: "Controller/Action",
type: "GET",
success: function (response) {
// Take out excluded years.
$.each(response, function (j, responseYear) {
$('#OneSelectList').find('[value="' + responseYear + '"]').remove();
});
},
error: function (response) {
console.log("Error");
}
});
A:
Something that has quickly become my favorite thing to do with removing an option is not to remove it at all. This method is beneficial for those who want to remove the option but might want to re-add it later, and make sure that it's added back in the correct order.
First, I actually disable that option.
$("#mySelect").change(
function() {
$("#mySelect").children('option[value="' + $(this).val() + '"]').prop("disabled", true);
$("#re-addOption").click(
function() {
$("#mySelect").children('option[value="' + howeverYouStoredTheValueHere + '"]').prop("disabled", false);
}
);
}
);
and then to clean up, in my CSS, I set disabled options to be hidden, because hiding an option doesn't work in some browsers; but using the method above, clients with those browsers won't be able to select the option again.
select option[disabled] {
display: none;
}
Personally, on the re-addOption element, I have a custom property of data-target="value", and in place of howeverYouStoredTheValueHere, I use $(this).attr('data-target').
A:
If your dropdown is in a table and you do not have an id for it, then you can use the following jQuery:
var select_object = purchasing_table.rows[row_index].cells[cell_index].childNodes[1];
$(select_object).find('option[value='+site_name+']').remove();
A:
For jQuery < 1.8 you can use:
$('#selectedId option').slice(index1,index2).remove()
to remove a specific range of the select options.
A:
When I did just a remove, the option remained in the ddl on the view but was gone in the html (if you inspect the page)
$("#ddlSelectList option[value='2']").remove(); //removes the option with value = 2
$('#ddlSelectList').val('').trigger('chosen:updated'); //refreshes the drop down list
A:
I tried this code:
$("#select-list").empty()
A:
Try this to remove the selected options:
$('#btn-remove').click(function () {
$('.ct option:selected').each(function () {
$(this).remove();
});
});
A:
To remove all Options
$(".select").empty();
To remove all Options and add a blank option
$(".select").empty().append(new Option('--Select--',''));
|
jQuery remove options from select
|
I have a page with 5 selects that all have a class name 'ct'. I need to remove the option with a value of 'X' from each select while running an onclick event. My code is:
$(".ct").each(function() {
$(this).find('X').remove();
});
Where am I going wrong?
|
[
"Try this:\n$(\".ct option[value='X']\").each(function() {\n $(this).remove();\n});\n\nOr to be more terse, this will work just as well:\n$(\".ct option[value='X']\").remove();\n\n",
"$('.ct option').each(function() {\n if ( $(this).val() == 'X' ) {\n $(this).remove();\n }\n});\n\nOr just\n$('.ct option[value=\"X\"]').remove();\n\nMain point is that find takes a selector string, by feeding it x you are looking for elements named x.\n",
"find() takes a selector, not a value. This means you need to use it in the same way you would use the regular jQuery function ($('selector')).\nTherefore you need to do something like this:\n$(this).find('[value=\"X\"]').remove();\n\nSee the jQuery find docs.\n",
"It works on either option tag or text field:\n$(\"#idname option[value='option1']\").remove();\n\n",
"If no id or class were available for the option values, one can remove all the values from dropdown as below\n$(this).find('select').find('option[value]').remove();\n\n",
"Iterating a list and removing multiple items using a find.\nResponse contains an array of integers. $('#OneSelectList') is a select list.\n$.ajax({\n url: \"Controller/Action\",\n type: \"GET\",\n success: function (response) {\n // Take out excluded years.\n $.each(response, function (j, responseYear) {\n $('#OneSelectList').find('[value=\"' + responseYear + '\"]').remove();\n });\n },\n error: function (response) {\n console.log(\"Error\");\n }\n});\n\n",
"Something that has quickly become my favorite thing to do with removing an option is not to remove it at all. This method is beneficial for those who want to remove the option but might want to re-add it later, and make sure that it's added back in the correct order.\nFirst, I actually disable that option.\n$(\"#mySelect\").change(\n function() {\n\n $(\"#mySelect\").children('option[value=\"' + $(this).val() + '\"]').prop(\"disabled\", true);\n\n $(\"#re-addOption\").click(\n function() {\n $(\"#mySelect\").children('option[value=\"' + howeverYouStoredTheValueHere + '\"]').prop(\"disabled\", false);\n }\n );\n }\n);\n\nand then to clean up, in my CSS, I set disabled options to be hidden, because hiding an option in some browsers doesn't work, but using the method above, clients with those browsers wont be able to select the option again.\nselect option[disabled] {\n display: none;\n}\n\nPersonally, on the re-addOption element, I have a custom property of data-target=\"value\", and in place of howeverYouStoredTheValueHere, I use $(this).attr('data-target').\n",
"if your dropdown is in a table and you do not have id for it then you can use the following jquery:\nvar select_object = purchasing_table.rows[row_index].cells[cell_index].childNodes[1];\n$(select_object).find('option[value='+site_name+']').remove();\n\n",
"For jquery < 1.8 you can use :\n$('#selectedId option').slice(index1,index2).remove()\n\nto remove a especific range of the select options. \n",
"When I did just a remove the option remained in the ddl on the view, but was gone in the html (if u inspect the page)\n$(\"#ddlSelectList option[value='2']\").remove(); //removes the option with value = 2\n$('#ddlSelectList').val('').trigger('chosen:updated'); //refreshes the drop down list\n\n",
"I tried this code:\n$(\"#select-list\").empty()\n",
"Try this for remove the selected\n$('#btn-remove').click(function () {\n $('.ct option:selected').each(function () {\n $(this).remove();\n });\n});\n\n",
"To remove all Options\n$(\".select\").empty();\n\nTo remove all Options and add a blank option\n$(\".select\").empty().append(new Option('--Select--',''));\n\n"
] |
[
529,
65,
37,
8,
4,
2,
2,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"html_select",
"javascript",
"jquery"
] |
stackoverflow_0001518216_html_select_javascript_jquery.txt
|
Q:
Pip not recognized to install program
I'm trying to install instaloader and running into problems.
IU've downloaded the github file, extracted it, installed python and pip, i think. Now while runninng
pip3 install instaloader
in the windows command prompt its responding:
'pip3' is not recognized as an internal or external command,
operable program or batch file.
I've tried installing pip3 by running pip install pip in both Python and the command prompt, and by uninstalling and reinstalling Python. Do I need to add Python to the PATH?
A:
You can try to install pip by running 'python get-pip.py' rather than 'pip install pip'.
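If the interpreter is on PATH but the pip scripts are not, invoking pip through the interpreter also works on Windows (a sketch, assuming the py launcher that ships with the python.org installer):
py -m ensurepip --upgrade
py -m pip install instaloader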
|
Pip not recognized to install program
|
I'm trying to install instaloader and running into problems.
I've downloaded the GitHub file, extracted it, and installed Python and pip, I think. Now when running
pip3 install instaloader
in the Windows command prompt, it responds:
'pip3' is not recognized as an internal or external command,
operable program or batch file.
I've tried installing pip3 by running pip install pip in both Python and the command prompt, and by uninstalling and reinstalling Python. Do I need to add Python to the PATH?
|
[
"You can try to install pip by 'python get-pip.py' rather than 'pip install pip'.\n"
] |
[
0
] |
[] |
[] |
[
"instaloader",
"python"
] |
stackoverflow_0074664616_instaloader_python.txt
|
Q:
How to connect to MariaDB 5.5.52 using Python 3
My development environment
python3.8
mariadb 5.5.52
pymysql 1.0.2
django 4.1.3
When I try to migrate, VS Code reports: django.db.utils.NotSupportedError: MariaDB 10.3 or later is required (found 5.5.52).
A:
To connect to a MariaDB 5.5.52 database using Python 3, you can use the pymysql library. This library provides a Python interface for connecting to and working with a MariaDB database.
To use pymysql, you will need to first install it using pip:
pip install pymysql
Once you have installed pymysql, you can use it to connect to your MariaDB database by importing the pymysql module and creating a new Connection object, like this:
import pymysql
# Connect to the database
conn = pymysql.connect(
    host="localhost",
    user="username",
    password="password",
    db="database_name"
)
# Use the cursor() method to create a cursor object
cur = conn.cursor()
# Execute a SQL query
cur.execute("SELECT * FROM table_name")
# Fetch the results of the query
results = cur.fetchall()
# Print the results
print(results)
As for the error message you are seeing from Django, it sounds like you are using a version of Django that is not compatible with MariaDB 5.5.52. Django 4.1.3 requires MariaDB 10.3 or later, so you will need to upgrade your MariaDB installation to a more recent version in order to use Django 4.1.3. Alternatively, you can try using an older version of Django that is compatible with MariaDB 5.5.52.
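If upgrading the database really isn't an option, the downgrade route looks roughly like this (the version bound is an assumption; check the MySQL/MariaDB support notes of whichever Django release you pin):
pip install "Django<2.1" pymysql
and keep the usual pymysql shim somewhere Django loads early (for example the project package's __init__.py) so it can stand in for MySQLdb:
import pymysql
pymysql.install_as_MySQLdb()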
|
How to connect to MariaDB 5.5.52 using Python 3
|
My development environment
python3.8
mariadb 5.5.52
pymysql 1.0.2
django 4.1.3
When I try to migrate, VS Code reports: django.db.utils.NotSupportedError: MariaDB 10.3 or later is required (found 5.5.52).
|
[
"To connect to a MariaDB 5.5.52 database using Python 3, you can use the pymysql library. This library provides a Python interface for connecting to and working with a MariaDB database.\nTo use pymysql, you will need to first install it using pip:\npip install pymysql\n\nOnce you have installed pymysql, you can use it to connect to your MariaDB database by importing the pymysql module and creating a new Connection object, like this:\nimport pymysql\n\n# Connect to the database\nconn = pymysql.connect(\n host=\"localhost\",\n user=\"username\",\n password=\"password\",\n db=\"database_name\"\n)\n\n# Use the cursor() method to create a cursor object\ncur = conn.cursor()\n\n# Execute a SQL query\ncur.execute(\"SELECT * FROM table_name\")\n\n# Fetch the results of the query\nresults = cur.fetchall()\n\n# Print the results\nprint(results)\n\nAs for the error message you are seeing from Django, it sounds like you are using a version of Django that is not compatible with MariaDB 5.5.52. Django 4.1.3 requires MariaDB 10.3 or later, so you will need to upgrade your MariaDB installation to a more recent version in order to use Django 4.1.3. Alternatively, you can try using an older version of Django that is compatible with MariaDB 5.5.52.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"mariadb",
"python"
] |
stackoverflow_0074664116_django_mariadb_python.txt
|
Q:
How can I catch 0xC0000005 Access Violation in Go
My current project involves using a program written in C++ to execute a program in Go that involves reading and writing to memory often. Sometimes I do not know the protection of the memory region and when attempting to read it my program just closes. I looked in a debugger and noticed it was exiting with error 3221225477 which is 0xC0000005 Access Violation. Is it possible to catch this sort of error inside my Go application? If not, can I catch this in my C++ code anyway?
I have added AddVectoredExceptionHandler(1, &VHandler); to my C++ code that executes this process, and while this catches exceptions in C++, it doesn't catch the error from the Go process.
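For the Go side, one mechanism the runtime does expose is runtime/debug.SetPanicOnFault, which turns a fault at an unexpected (non-nil) address into a recoverable panic on the current goroutine. A minimal sketch of wrapping a risky read with it (the address used in main is just an illustration):
package main

import (
    "fmt"
    "runtime/debug"
    "unsafe"
)

func readByte(addr uintptr) (b byte, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("read at %#x failed: %v", addr, r)
        }
    }()
    debug.SetPanicOnFault(true)        // a fault now panics instead of killing the process
    defer debug.SetPanicOnFault(false) // restore the default for this goroutine
    b = *(*byte)(unsafe.Pointer(addr)) // may fault if addr is unmapped or protected
    return b, nil
}

func main() {
    if _, err := readByte(1); err != nil {
        fmt.Println(err)
    }
}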
|
How can I catch 0xC0000005 Access Violation in Go
|
My current project involves using a program written in C++ to execute a program in Go that involves reading and writing to memory often. Sometimes I do not know the protection of the memory region and when attempting to read it my program just closes. I looked in a debugger and noticed it was exiting with error 3221225477 which is 0xC0000005 Access Violation. Is it possible to catch this sort of error inside my Go application? If not, can I catch this in my C++ code anyway?
I have added AddVectoredExceptionHandler(1, &VHandler); to my C++ code that executes this process, and while this catches exceptions in C++, it doesn't catch the error from the Go process.
|
[] |
[] |
[
"In Go, you can use the recover function to catch panics and recover from them. Here is an example of how you might use it to recover from an access violation:\npackage main\n\nimport (\n \"fmt\"\n \"runtime\"\n \"unsafe\"\n)\n\nfunc main() {\n // Set up a function to catch any panics and recover from them.\n defer func() {\n if err := recover(); err != nil {\n fmt.Println(\"Caught panic:\", err)\n }\n }()\n\n // Attempt to read from an invalid memory address.\n // This will cause a panic and trigger our recovery function.\n var ptr unsafe.Pointer\n *ptr = 0\n}\n\nIn this example, the program will panic when it tries to dereference the ptr variable, which has not been initialized. The defer statement sets up a function that will be called when the main function returns, regardless of whether it returns normally or panics. This function will call recover to catch the panic and recover from it, printing the error message.\nIn your case, you may want to wrap any code that accesses memory in a defer statement like this to catch any panics and recover from them. Note that recovering from a panic does not guarantee that your program will continue to work correctly, so you should still try to avoid accessing invalid memory if possible.\nAs for catching the panic in your C++ code, it is possible to catch panics thrown by Go code using the recover function from the runtime package. Here is an example:\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"runtime.h\"\n#include \"go.h\"\n\nint main() {\n // Set up a function to catch panics and recover from them.\n // This is similar to the defer statement in Go.\n _go_defer_recover([]{\n if (void *err = _go_recover(NULL); err != NULL) {\n printf(\"Caught panic: %s\\n\", (const char *)err);\n free(err);\n }\n });\n\n // Call a Go function that may panic.\n // This will trigger our recovery function if the Go function panics.\n CallGoFunction();\n\n return 0;\n}\n\nThis example uses the _go_defer_recover and _go_recover functions from the runtime package to catch panics thrown by Go code. It is similar to the defer and recover functions in Go, but it is implemented in C++. It sets up a function that will be called when the main function returns, regardless of whether it returns normally or panics. This function will call _go_recover to catch any panics and recover from them, printing the error message.\nNote that in order to use these functions, you will need to include the runtime.h and go.h headers from the Go runtime library. You can find these headers in the $GOROOT/src directory, where $GOROOT is the path to your Go installation.\nI hope this helps!\n"
] |
[
-1
] |
[
"c++",
"exception",
"go",
"windows"
] |
stackoverflow_0074662649_c++_exception_go_windows.txt
|
Q:
React large array render on svg with drag and select
I am using React + d3.js to render data on SVG. I have a large array with items like this:
const arrayItemExample = {
id: 1,
text: 'Some Text for node',
x: 100,
y: 300,
selected: false,
dragging: false
}
The list of such elements can reach tens of thousands. The main problem is that when I click on a rendered element or drag it, I update the data on that element:
const handleDragStart = useCallback((id)=>{
setData(data=>{
return data.map(item=>{
if(item.id === id){
return {
...item,
dragging: true
}
}
return item
})
})
},[])
const handleSelect = useCallback((id)=>{
setData(data=>{
return data.map(item=>{
if(item.id === id){
return {
...item,
selected: true
}
}
return item
})
})
},[])
Both of these functions work well on a small amount of data, but if there are 100 or more elements on the page, then clicking or dragging the element slows down the page during the redrawing of elements.
Are there any ways to update the data for a specific element so that only that element is redrawn? I can't use the component's internal state because I need the shared data for the selected and draggable elements: for example, if I have selected several elements with Ctrl and start dragging one of them, the others will also have to be dragged.
A:
D3 data sets are most often rendered using SVG, a retained mode graphics model, which is easy to use, but performance is limited. SVG charts can typically handle around 1,000 data points.
Since D3 v4 you’ve also had the option to render charts using canvas, which is an immediate mode graphics model. With Canvas you can expect to render around 10,000 data points whilst maintaining smooth 60fps interactions.
so my recommendation is to use canvas instead of SVG. And if you have time, I recommend looking at WebGL instead.
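As a rough illustration of the canvas route (the item shape follows the question; everything else here is an assumption), all nodes are redrawn in a single pass instead of updating thousands of DOM elements:
function drawAll(ctx, items) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const item of items) {
    ctx.beginPath();
    ctx.arc(item.x, item.y, 6, 0, 2 * Math.PI);
    ctx.fillStyle = item.selected ? 'orange' : 'steelblue';
    ctx.fill();
  }
}
Hit-testing for drag and select then happens in code (for example with d3-quadtree) rather than through per-element DOM events.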
A:
To update the data for a specific element in an array without redrawing all the elements, you can use the useReducer hook in combination with the useMemo hook. The useReducer hook allows you to define a reducer function that updates the state based on the action that was dispatched. The useMemo hook allows you to compute a memoized value that is only recomputed when one of the dependencies changes.
Here is an example of how you could use the useReducer and useMemo hooks to update the data for a specific element in your array without redrawing all the elements:
const [state, dispatch] = useReducer(reducer, initialState);
const data = useMemo(() => {
return state.data.map(item => {
if (item.id === state.selectedId) {
return {
...item,
selected: true
};
} else if (item.id === state.draggingId) {
return {
...item,
dragging: true
};
}
return item;
});
}, [state.data, state.selectedId, state.draggingId]);
const handleDragStart = useCallback((id) => {
dispatch({ type: 'DRAG_START', id });
}, []);
const handleSelect = useCallback((id) => {
dispatch({ type: 'SELECT', id });
}, []);
In the code above, we define a reducer function using the useReducer hook. The reducer function takes the current state and an action, and returns a new state based on the action. We also define two action creators, handleDragStart and handleSelect, which dispatch the DRAG_START and SELECT actions respectively.
Next, we use the useMemo hook to compute a memoized version of the data array. The useMemo hook takes a function that returns the value to be memoized, and a list of dependencies. In this case, the function returns the updated data array, with the selected and dragging elements updated based on the current selectedId and draggingId in the state.
Finally, we use the data array in our React component to render the elements. Since the data array is memoized, it will only be recomputed when the selectedId or draggingId in the state changes, which ensures that unrelated state updates don't rebuild the list.
A:
There are a few ways you can optimize the rendering of your data to improve performance.
One approach you can take is to use the shouldComponentUpdate lifecycle method in your React components. This method allows you to control when a component should be updated, which can help prevent unnecessary re-renders. For example, you can use shouldComponentUpdate to only re-render a component when its selected or dragging prop has changed, rather than re-rendering every time the parent component's data array is updated.
Another approach is to use a virtualized list component, such as the react-virtualized library. A virtualized list only renders the elements that are currently visible on the screen, which can significantly improve the performance of rendering large lists of data.
In addition, it's generally a good idea to optimize your d3.js code to avoid unnecessarily re-rendering elements. For example, you can use the .exit() and .enter() selections in d3 to only update the elements that have been added or removed from the data, rather than re-rendering the entire list of elements every time the data changes.
To use the shouldComponentUpdate method in your React components, you can add the method to your component class like this:
class MyComponent extends React.Component {
shouldComponentUpdate(nextProps) {
// Only update the component if the selected or dragging prop has changed
return this.props.selected !== nextProps.selected || this.props.dragging !== nextProps.dragging;
}
render() {
// Render the component
}
}
To use the react-virtualized library, you can install it from npm and then use the VirtualizedList component in your code like this:
import { VirtualizedList } from 'react-virtualized';
const MyList = ({ data }) => {
return (
<VirtualizedList
data={data}
// Specify the height and width of the list
height={300}
width={300}
// Specify a function to render each item in the list
renderItem={({ item, index }) => {
return <MyComponent key={item.id} item={item} />;
}}
/>
);
};
Here's an example of using the .exit() and .enter() selections in d3 to only update the elements that have been added or removed from the data, rather than re-rendering the entire list of elements every time the data changes:
// Select all elements with the specified class
const elements = d3.selectAll('.my-element-class');
// Bind the data to the selection
const updatedElements = elements.data(myData);
// Remove any elements that are no longer in the data
updatedElements.exit().remove();
// Add new elements for any data that doesn't have a corresponding element
const newElements = updatedElements.enter().append('div')
.classed('my-element-class', true)
// Set the initial attributes of the new elements
.attr('x', d => d.x)
.attr('y', d => d.y)
.text(d => d.text);
// Update the attributes of all elements, whether they are new or not
updatedElements.merge(newElements)
.attr('x', d => d.x)
.attr('y', d => d.y)
.text(d => d.text);
This code will select all elements with the my-element-class class, bind the specified data to the selection, remove any elements that are no longer in the data, add new elements for any data that doesn't have a corresponding element, and update the attributes of all elements, whether they are new or not. This allows you to only update the elements that have changed, rather than re-rendering the entire list of elements every time the data changes.
|
React large array render on svg with drag and select
|
I am using React + d3.js to render data on SVG. I have a large array with items like this:
const arrayItemExample = {
id: 1,
text: 'Some Text for node',
x: 100,
y: 300,
selected: false,
dragging: false
}
The list of such elements can reach tens of thousands. The main problem is that when I click on a rendered element or drag it, I update the data on that element:
const handleDragStart = useCallback((id)=>{
setData(data=>{
return data.map(item=>{
if(item.id === id){
return {
...item,
dragging: true
}
}
return item
})
})
},[])
const handleSelect = useCallback((id)=>{
setData(data=>{
return data.map(item=>{
if(item.id === id){
return {
...item,
selected: true
}
}
return item
})
})
},[])
Both of these functions work well on a small amount of data, but if there are 100 or more elements on the page, then clicking or dragging the element slows down the page during the redrawing of elements.
Are there any ways to update the data for a specific element so that only that element is redrawn? I can't use the component's internal state because I need the shared data for the selected and draggable elements: for example, if I have selected several elements with Ctrl and start dragging one of them, the others will also have to be dragged.
|
[
"D3 data sets are most often rendered using SVG, a retained mode graphics model, which is easy to use, but performance is limited. SVG charts can typically handle around 1,000 data points.\nSince D3 v4 you’ve also had the option to render charts using canvas, which is an immediate mode graphics model. With Canvas you can expect to render around 10,000 data points whilst maintaining smooth 60fps interactions.\nso my recommendation is to use canvas instead of SVG. and if you have time I recommend using WebGL instead.\n",
"To update the data for a specific element in an array without redrawing all the elements, you can use the useReducer hook in combination with the useMemo hook. The useReducer hook allows you to define a reducer function that updates the state based on the action that was dispatched. The useMemo hook allows you to compute a memoized value that is only recomputed when one of the dependencies changes.\nHere is an example of how you could use the useReducer and useMemo hooks to update the data for a specific element in your array without redrawing all the elements:\nconst [state, dispatch] = useReducer(reducer, initialState);\n\nconst data = useMemo(() => {\n return state.data.map(item => {\n if (item.id === state.selectedId) {\n return {\n ...item,\n selected: true\n };\n } else if (item.id === state.draggingId) {\n return {\n ...item,\n dragging: true\n };\n }\n return item;\n });\n}, [state.data, state.selectedId, state.draggingId]);\n\nconst handleDragStart = useCallback((id) => {\n dispatch({ type: 'DRAG_START', id });\n}, []);\n\nconst handleSelect = useCallback((id) => {\n dispatch({ type: 'SELECT', id });\n}, []);\n\nIn the code above, we define a reducer function using the useReducer hook. The reducer function takes the current state and an action, and returns a new state based on the action. We also define two action creators, handleDragStart and handleSelect, which dispatch the DRAG_START and SELECT actions respectively.\nNext, we use the useMemo hook to compute a memoized version of the data array. The useMemo hook takes a function that returns the value to be memoized, and a list of dependencies. In this case, the function returns the updated data array, with the selected and dragging elements updated based on the current selectedId and draggingId in the state.\nFinally, we use the data array in our React component to render the elements. Since the data array is memoized, it will only be recomputed when the selectedId or draggingId in the state changes, which ensures\n",
"There are a few ways you can optimize the rendering of your data to improve performance.\nOne approach you can take is to use the shouldComponentUpdate lifecycle method in your React components. This method allows you to control when a component should be updated, which can help prevent unnecessary re-renders. For example, you can use shouldComponentUpdate to only re-render a component when its selected or dragging prop has changed, rather than re-rendering every time the parent component's data array is updated.\nAnother approach is to use a virtualized list component, such as the react-virtualized library. A virtualized list only renders the elements that are currently visible on the screen, which can significantly improve the performance of rendering large lists of data.\nIn addition, it's generally a good idea to optimize your d3.js code to avoid unnecessarily re-rendering elements. For example, you can use the .exit() and .enter() selections in d3 to only update the elements that have been added or removed from the data, rather than re-rendering the entire list of elements every time the data changes.\nTo use the shouldComponentUpdate method in your React components, you can add the method to your component class like this:\nclass MyComponent extends React.Component {\n shouldComponentUpdate(nextProps) {\n // Only update the component if the selected or dragging prop has changed\n return this.props.selected !== nextProps.selected || this.props.dragging !== nextProps.dragging;\n }\n\n render() {\n // Render the component\n }\n}\n\nTo use the react-virtualized library, you can install it from npm and then use the VirtualizedList component in your code like this:\nimport { VirtualizedList } from 'react-virtualized';\n\nconst MyList = ({ data }) => {\n return (\n <VirtualizedList\n data={data}\n // Specify the height and width of the list\n height={300}\n width={300}\n // Specify a function to render each item in the list\n renderItem={({ item, index }) => {\n return <MyComponent key={item.id} item={item} />;\n }}\n />\n );\n};\n\nHere's an example of using the .exit() and .enter() selections in d3 to only update the elements that have been added or removed from the data, rather than re-rendering the entire list of elements every time the data changes:\n// Select all elements with the specified class\nconst elements = d3.selectAll('.my-element-class');\n\n// Bind the data to the selection\nconst updatedElements = elements.data(myData);\n\n// Remove any elements that are no longer in the data\nupdatedElements.exit().remove();\n\n// Add new elements for any data that doesn't have a corresponding element\nconst newElements = updatedElements.enter().append('div')\n .classed('my-element-class', true)\n // Set the initial attributes of the new elements\n .attr('x', d => d.x)\n .attr('y', d => d.y)\n .text(d => d.text);\n\n// Update the attributes of all elements, whether they are new or not\nupdatedElements.merge(newElements)\n .attr('x', d => d.x)\n .attr('y', d => d.y)\n .text(d => d.text);\n\nThis code will select all elements with the my-element-class class, bind the specified data to the selection, remove any elements that are no longer in the data, add new elements for any data that doesn't have a corresponding element, and update the attributes of all elements, whether they are new or not. This allows you to only update the elements that have changed, rather than re-rendering the entire list of elements every time the data changes.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"d3.js",
"javascript",
"reactjs"
] |
stackoverflow_0074633288_d3.js_javascript_reactjs.txt
|
Q:
OpenCASCADE 7.6.0 not compiling with a .NET 6.0 class library with Visual Studio 2022 (Windows 10)
Steps to reproduce:
Install a version of Visual Studio (I used VS Community 2022). Install OpenCASCADE 7.6.0.
Create a C++ .NET CLR project using Visual Studio 2022 targeting .net6.0.
Change settings to include OpenCASCADE header and library files.
Edit the main header by replacing the code within it with below:
#pragma once
//for OCC graphic
#include <OpenGl_GraphicDriver.hxx>
//wrapper of pure C++ classes to ref classes
#include <NCollection_Haft.h>
namespace ClrClsLibDotNetCoreMwe {
public ref class Class1
{
// TODO: Add your methods for this class here.
};
}
Attempt to build.
Issue: The build fails with the following complaint:
1>C:\OpenCASCADE-7.6.0-vc14-64\opencascade-7.6.0\inc\NCollection_DefaultHasher.hxx(34,1): error C2872: 'HashCode': ambiguous symbol
1>C:\OpenCASCADE-7.6.0-vc14-64\opencascade-7.6.0\inc\NCollection_DefaultHasher.hxx(34,1): message : could be 'HashCode'
1>C:\OpenCASCADE-7.6.0-vc14-64\opencascade-7.6.0\inc\NCollection_DefaultHasher.hxx(34,1): message : or 'System::HashCode'
What fixes the problem:
Either Targeting .NET Framework instead of .NET Core (/clr instead of /clr:netcore).
Or removing one of the headers.
Please see if there is a way where I can keep both the headers and target .NET Core?
I have looked around for a possible solution before posting this question here. A promising solution was to disable implicit usings. However, that didn't pan out.
A:
I had the same problem.
In my case, the problem was caused by a "using namespace System;" directive included in the header file: it pulls System::HashCode into scope, which makes the unqualified HashCode ambiguous.
Thanks!
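For illustration, here is a minimal sketch (not from OpenCASCADE's docs) of the question's header with the using-directive kept out and the managed types fully qualified instead, so the unqualified HashCode from the OpenCASCADE headers no longer collides with System::HashCode:
#pragma once
//for OCC graphic
#include <OpenGl_GraphicDriver.hxx>
//wrapper of pure C++ classes to ref classes
#include <NCollection_Haft.h>
// note: no "using namespace System;" at header scope
namespace ClrClsLibDotNetCoreMwe {
    public ref class Class1
    {
        // fully qualify managed types instead of importing the namespace;
        // m_name is an illustrative member, not from the original
        System::String^ m_name;
    };
}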
|
OpenCASCADE 7.6.0 not compiling with a .NET 6.0 class library with Visual Studio 2022 (Windows 10)
|
Steps to reproduce:
Install a version of Visual Studio (I used VS Community 2022). Install OpenCASCADE 7.6.0.
Create a C++ .NET CLR project using Visual Studio 2022 targeting .net6.0.
Change settings to include OpenCASCADE header and library files.
Edit the main header by replacing the code within it with below:
#pragma once
//for OCC graphic
#include <OpenGl_GraphicDriver.hxx>
//wrapper of pure C++ classes to ref classes
#include <NCollection_Haft.h>
namespace ClrClsLibDotNetCoreMwe {
public ref class Class1
{
// TODO: Add your methods for this class here.
};
}
Attempt to build.
Issue: The build fails with the following complaint:
1>C:\OpenCASCADE-7.6.0-vc14-64\opencascade-7.6.0\inc\NCollection_DefaultHasher.hxx(34,1): error C2872: 'HashCode': ambiguous symbol
1>C:\OpenCASCADE-7.6.0-vc14-64\opencascade-7.6.0\inc\NCollection_DefaultHasher.hxx(34,1): message : could be 'HashCode'
1>C:\OpenCASCADE-7.6.0-vc14-64\opencascade-7.6.0\inc\NCollection_DefaultHasher.hxx(34,1): message : or 'System::HashCode'
What fixes the problem:
Either Targeting .NET Framework instead of .NET Core (/clr instead of /clr:netcore).
Or removing one of the headers.
Please see if there is a way where I can keep both the headers and target .NET Core?
I have looked around for a possible solution before posting this question here. A promising solution was to disable implicit usings. However, that didn't pan out.
|
[
"I had the same problem.\nIn my case, the \"using namespace System;\" included in the header file. The text caused the problem.\nThanks!\n"
] |
[
0
] |
[] |
[] |
[
".net_6.0",
"c++",
"clr",
"opencascade",
"visual_studio_2022"
] |
stackoverflow_0073263921_.net_6.0_c++_clr_opencascade_visual_studio_2022.txt
|
Q:
Grouping multiple interfaces into a single interface in Typescript
I'm trying to understand interfaces in Typescript, but I can't quite get them to do what I want.
interface RequestData {
[key: string]: number | string | File;
}
function makeRequest(data: RequestData) {
// Do something with data...
}
interface UserRequestData {
id: number;
email: string;
username: string;
}
function updateUser(userData: UserRequestData) {
makeRequest(userData); // ERROR
}
// ERROR:
// Argument of type 'UserRequestData' is not assignable to parameter of type 'RequestData'.
// Index signature for type 'string' is missing in type 'UserRequestData'.ts(2345)
interface ItemRequestData {...}
interface QueryRequestData {...}
// and more interfaces...
I have several smaller interfaces such as UserRequestData, ItemRequestData, and QueryRequestData that I want to group under a larger interface RequestData.
Since the smaller interfaces all have string keys and certain datatypes, I'd expect to be able to type all of them using {[key: string]: number | string | File;}, however that does not work.
How do I modify makeRequest, such that it is able to accept any interface that uses strings as keys and number | string | File as the value type?
A:
Using [key: string] in the RequestData interface is an example of an index signature. On its own it only describes individual properties, not the shape of an entire smaller interface.
If you want makeRequest to be able to accept any of these interfaces, you can use type extension (extends). Something like:
interface RequestData {...}
interface UserRequestData extends RequestData {
id: number;
email: string;
username: string;
}
interface ItemRequestData extends RequestData {...}
interface QueryRequestData extends RequestData {...}
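As a minimal sketch of how this could fit together (assuming RequestData keeps the index signature from the question), the extended interface then type-checks against makeRequest:
interface RequestData {
  [key: string]: number | string | File;
}

interface UserRequestData extends RequestData {
  id: number;
  email: string;
  username: string;
}

function makeRequest(data: RequestData) {
  // Do something with data...
}

function updateUser(userData: UserRequestData) {
  makeRequest(userData); // OK: UserRequestData now carries the index signature
}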
|
Grouping multiple interfaces into a single interface in Typescript
|
I'm trying to understand interfaces in Typescript, but I can't quite get them to do what I want.
interface RequestData {
[key: string]: number | string | File;
}
function makeRequest(data: RequestData) {
// Do something with data...
}
interface UserRequestData {
id: number;
email: string;
username: string;
}
function updateUser(userData: UserRequestData) {
makeRequest(userData); // ERROR
}
// ERROR:
// Argument of type 'UserRequestData' is not assignable to parameter of type 'RequestData'.
// Index signature for type 'string' is missing in type 'UserRequestData'.ts(2345)
interface ItemRequestData {...}
interface QueryRequestData {...}
// and more interfaces...
I have several smaller interfaces such as UserRequestData, ItemRequestData, and QueryRequestData that I want to group under a larger interface RequestData.
Since the smaller interfaces all have string keys and certain datatypes, I'd expect to be able to type all of them using {[key: string]: number | string | File;}, however that does not work.
How do I modify makeRequest, such that it is able to accept any interface that uses strings as keys and number | string | File as the value type?
|
[
"Using [key: string] in RequestData interface is an example of an index signature . That's only represents one property and not the entire smaller interface.\nIf you want makeRequest able to accept any interface you can use Extending Type. Something like:\ninterface RequestData {...}\n\ninterface UserRequestData extends RequestData {\n id: number;\n email: string;\n username: string;\n}\ninterface ItemRequestData extends RequestData {...}\ninterface QueryRequestData extends RequestData {...}\n\n"
] |
[
1
] |
[] |
[] |
[
"interface",
"typescript",
"typescript_types"
] |
stackoverflow_0074663891_interface_typescript_typescript_types.txt
|
Q:
How to call variables and function from outside of a module in NestJS?
I have some helper functions inside the /src/common/helper/cast.helper.ts file. When I call one of these functions from a module, I get the following error.
Error: Cannot find module './../../../src/common/helper/cast.helper' Require stack:
However, the e2e tests are working without any problem. Here, you can see the folder structure.
When I change the import to the absolute path import { toNumber } from 'src/common/helper/cast.helper'; it works, but then the e2e tests do not.
What's wrong here? How can I use common functions and constants across all the modules in NestJS?
A:
I did the following to fix the issue.
I changed the import to absolute path import { toNumber } from 'src/common/helper/cast.helper';
To fix the e2e test, I have added "moduleDirectories": ["<rootDir>/../", "node_modules"] to the jest-e2e.json.
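For reference, a sketch of how test/jest-e2e.json might look with that key added (the surrounding fields are the usual Nest CLI defaults and may differ in your project):
{
  "moduleFileExtensions": ["js", "json", "ts"],
  "rootDir": ".",
  "testEnvironment": "node",
  "testRegex": ".e2e-spec.ts$",
  "transform": {
    "^.+\\.(t|j)s$": "ts-jest"
  },
  "moduleDirectories": ["<rootDir>/../", "node_modules"]
}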
|
How to call variables and function from outside of a module in NestJS?
|
I have some helper functions inside the /src/common/helper/cast.helper.ts file. When I call one of these functions from a module, I get the following error.
Error: Cannot find module './../../../src/common/helper/cast.helper' Require stack:
However, the e2e tests are working without any problem. Here, you can see the folder structure.
When I change the import to the absolute path import { toNumber } from 'src/common/helper/cast.helper'; it works, but then the e2e tests do not.
What's wrong here? How can I use common functions and constants across all the modules in NestJS?
|
[
"I did the following to fix the issue.\n\nI changed the import to absolute path import { toNumber } from 'src/common/helper/cast.helper';\nTo fix the e2e test, I have added \"moduleDirectories\": [\"<rootDir>/../\", \"node_modules\"] to the jest-e2e.json.\n\n"
] |
[
0
] |
[] |
[] |
[
"nestjs",
"node.js"
] |
stackoverflow_0074624702_nestjs_node.js.txt
|
Q:
How to count how many times a word in a list appeared in-another list
I have 2 lists and I want to see how many times the words in list 1 appear in list 2, but I don't really know of a way to combine them: the output isn't summed per word, and when I tried the sum method it summed over all counted words rather than each word.
Code:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
for i in l2:
print(f'{l1.count(i)}: {i}')
Output:
0: hey
1: hi
1: hello
1: hello
What I want is something more like this:
0: hey
1: hi
2: hello
A:
I think a simple fix is to just flip the way you are looping through the lists:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
for i in l1:
print(f'{l2.count(i)}: {i}')
Output:
2: hello
1: hi
A:
You can use the in operator to check whether each element of l2 is one of the words in l1, and a Counter object to count the number of occurrences of each such element.
Here is an example:
from collections import Counter
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
# Create a Counter object to count the occurrences of each element in l1 that is also in l2
counter = Counter()
# Loop over each element in l2 and check if it is one of the words in l1
for element in l2:
    if element in l1:
        # If it is, increment the count for that element
        counter[element] += 1
# Print the count for each element
for element, count in counter.items():
print(f'{count}: {element}')
This will print the following output:
1: hi
2: hello
A:
If you want to count how many times each word in l1 appears in l2, you can use a dictionary to keep track of the counts for each word. Here is one possible way to do this:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
# Create an empty dictionary
counts = {}
# Loop through each word in l1
for word in l1:
# Initialize the count for this word to 0
counts[word] = 0
# Loop through each word in l2
for word2 in l2:
# If the word from l1 appears in l2, increment the count
if word == word2:
counts[word] += 1
# Print the counts for each word
for word in l1:
print(f'{counts[word]}: {word}')
This code will print the following output:
2: hello
1: hi
This approach allows you to count the occurrences of each word in l1 in l2, and print the counts in the desired format. You can further customize the code to suit your specific needs. For example, you could sort the counts by their values or print the counts in a different order, depending on your requirements.
A:
Try this
from collections import Counter
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
c = Counter(l2)
for a in l1:
print(f"{c[a]}: {a}")
c.pop(a)
print(*["0: " + a for a in c.keys()], sep='\n')
OUTPUT
2: hello
1: hi
0: hey
A:
To count the number of times a word in a list appears in another list, you can use a for loop to iterate over the first list and use the count() method to count the number of times each word appears in the second list. Here's an example:
# define the two lists
list1 = ["apple", "banana", "cherry"]
list2 = ["apple", "grape", "cherry", "apple", "orange", "banana", "apple"]
# initialize a count variable
count = 0
# iterate over the first list
for word in list1:
# count the number of times the word appears in the second list
count += list2.count(word)
# print the final count
print(count)
This code will print 5, since the words from the first list appear five times in total in the second list ("apple" three times, "banana" once, "cherry" once).
|
How to count how many times a word in a list appeared in-another list
|
I have 2 lists and I want to see how many times the words in list 1 appear in list 2, but I don't really know of a way to combine them: the output isn't summed per word, and when I tried the sum method it summed over all counted words rather than each word.
Code:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
for i in l2:
print(f'{l1.count(i)}: {i}')
Output:
0: hey
1: hi
1: hello
1: hello
What I want is something more like this:
0: hey
1: hi
2: hello
|
[
"I think a simple fix is to just flip the way you are looping through the lists:\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\nfor i in l1:\n print(f'{l2.count(i)}: {i}')\n\nOutput:\n2: hello\n1: hi\n\n",
"You can use the in operator to check if each element in l1 is in l2. You can then use a Counter object to count the number of occurrences of each element in l1 that is also in l2.\nHere is an example:\nfrom collections import Counter\n\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\n\n# Create a Counter object to count the occurrences of each element in l1 that is also in l2\ncounter = Counter()\n\n# Loop over each element in l1 and check if it is in l2\nfor element in l1:\n if element in l2:\n # If the element is in l2, increment the count for that element\n counter[element] += 1\n\n# Print the count for each element\nfor element, count in counter.items():\n print(f'{count}: {element}')\n\nThis will print the following output:\n1: hi\n2: hello\n\n",
"If you want to count how many times each word in l1 appears in l2, you can use a dictionary to keep track of the counts for each word. Here is one possible way to do this:\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\n\n# Create an empty dictionary\ncounts = {}\n\n# Loop through each word in l1\nfor word in l1:\n # Initialize the count for this word to 0\n counts[word] = 0\n # Loop through each word in l2\n for word2 in l2:\n # If the word from l1 appears in l2, increment the count\n if word == word2:\n counts[word] += 1\n\n# Print the counts for each word\nfor word in l1:\n print(f'{counts[word]}: {word}')\n\nThis code will print the following output:\n2: hello 1: hi\nThis approach allows you to count the occurrences of each word in l1 in l2, and print the counts in the desired format. You can further customize the code to suit your specific needs. For example, you could sort the counts by their values or print the counts in a different order, depending on your requirements.\n",
"Try this\n\nfrom collections import Counter\n\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\n\nc = Counter(l2)\n\n\nfor a in l1:\n print(f\"{c[a]}: {a}\")\n c.pop(a)\n\nprint(*[\"0: \" + a for a in c.keys()], sep='\\n')\n\n\nOUTPUT\n2: hello\n1: hi\n0: hey\n\n\n",
"To count the number of times a word in a list appears in another list, you can use a for loop to iterate over the first list and use the count() method to count the number of times each word appears in the second list. Here's an example:\n# define the two lists\nlist1 = [\"apple\", \"banana\", \"cherry\"]\nlist2 = [\"apple\", \"grape\", \"cherry\", \"apple\", \"orange\", \"banana\", \"apple\"]\n\n# initialize a count variable\ncount = 0\n\n# iterate over the first list\nfor word in list1:\n # count the number of times the word appears in the second list\n count += list2.count(word)\n\n# print the final count\nprint(count)\n\n\nThis code will print 4, since there are four words from the first list (\"apple\", \"banana\", \"cherry\") that appear in the second list.\n"
] |
[
3,
1,
1,
0,
0
] |
[] |
[] |
[
"count",
"for_loop",
"list",
"python",
"sum"
] |
stackoverflow_0074664429_count_for_loop_list_python_sum.txt
|
Q:
Get 2 or more upcoming Friday dates from the current date in C#
I am working on a module (scheduler).
If I add a schedule and select 2 or more Tuesdays from the current date to schedule my task, it should show my task on the scheduler for the coming 2 or more Tuesdays. How can I code this?
In short: how do I get 2 or more upcoming Tuesdays from the current date, in C# or jQuery?
I have seen many code samples, but none of them fulfil my condition.
Var date = new date()
A:
To obtain 2 or more upcoming day-of-the-week (e.g. Tuesday) dates from the current date to schedule, you can use the concept in the console example below.
Demo: https://dotnetfiddle.net/UrUrS2
Pass to the ReturnNextNthWeekdaysOfMonth function the following inputs:
starting date e.g. DateTime.Now
day of the week e.g. DayOfWeek.Tuesday
amount of schedule dates to obtain e.g. 2
The result will be the dates to schedule.
public class Program
{
public static void Main()
{
var scheduleDates = ReturnNextNthWeekdaysOfMonth(
DateTime.Now, DayOfWeek.Tuesday, 2);
foreach (var date in scheduleDates) Console.WriteLine(date.ToString("f"));
}
private static IEnumerable<DateTime> ReturnNextNthWeekdaysOfMonth(
DateTime dt, DayOfWeek weekday, int amounttoshow = 4)
{
// Find the first future occurance of the day.
while (dt.DayOfWeek != weekday)
dt = dt.AddDays(1);
// Create the entire range of dates required.
return Enumerable.Range(0, amounttoshow).Select(i => dt.AddDays(i * 7));
}
}
Output:
Tuesday, 6 December, 2022 12:56 PM
Tuesday, 13 December, 2022 12:56 PM
A:
You can try this:
DateTime dtFrom = DateTime.Today; // any starting date
int NextWeekDay = 5; // DayOfWeek: 0=Sun ... 6=Sat, so 5 = Friday
int HowMany = 3; // how many to get
NextWeekDay = NextWeekDay - (int)dtFrom.DayOfWeek;
if (NextWeekDay < 0)
NextWeekDay += 7;
DateTime dtStart = dtFrom.AddDays(NextWeekDay);
for (int i=0; i < HowMany; i++)
{
DateTime MyDate = dtStart.AddDays(i * 7);
Debug.Print(MyDate.ToLongDateString());
}
output:
Friday, December 9, 2022
Friday, December 16, 2022
Friday, December 23, 2022
So, today is Saturday, Dec 3rd, and I count/include today if the requested starting date falls on the target weekday.
|
Get 2 or more upcoming Friday dates from the current date in C#
|
I am working on a module (scheduler).
If I add a schedule and select 2 or more Tuesdays from the current date to schedule my task, it should show my task on the scheduler for the coming 2 or more Tuesdays. How can I code this?
In short: how do I get 2 or more upcoming Tuesdays from the current date, in C# or jQuery?
I have seen many code samples, but none of them fulfil my condition.
Var date = new date()
|
[
"To obtain 2 or many upcoming day of the week (e.g. Tuesday) dates from the current date to schedule, you can the concept in the console example below.\nDemo: https://dotnetfiddle.net/UrUrS2\nPass to the ReturnNextNthWeekdaysOfMonth function the following inputs:\n\nstarting date e.g. DateTime.Now\nday of the week e.g. DayOfWeek.Tuesday\namount of schedule dates to obtain e.g. 2\n\nThe result will be the dates to schedule.\npublic class Program\n{\n public static void Main()\n {\n var scheduleDates = ReturnNextNthWeekdaysOfMonth(\n DateTime.Now, DayOfWeek.Tuesday, 2);\n\n foreach (var date in scheduleDates) Console.WriteLine(date.ToString(\"f\"));\n }\n\n private static IEnumerable<DateTime> ReturnNextNthWeekdaysOfMonth(\n DateTime dt, DayOfWeek weekday, int amounttoshow = 4)\n {\n // Find the first future occurance of the day.\n while (dt.DayOfWeek != weekday)\n dt = dt.AddDays(1);\n\n // Create the entire range of dates required. \n return Enumerable.Range(0, amounttoshow).Select(i => dt.AddDays(i * 7));\n }\n}\n\nOutput:\nTuesday, 6 December, 2022 12:56 PM\nTuesday, 13 December, 2022 12:56 PM\n\n",
"You can try this:\n DateTime dtFrom = DateTime.Today; // any starting date\n\n int NextWeekDay = 5; // 1=Sun, 7=sun, so look for Friday's\n int HowMany = 3; // how many to get\n\n NextWeekDay = NextWeekDay - (int)dtFrom.DayOfWeek;\n if (NextWeekDay < 0)\n NextWeekDay += 7;\n\n DateTime dtStart = dtFrom.AddDays(NextWeekDay);\n for (int i=0; i < HowMany; i++)\n {\n DateTime MyDate = dtStart.AddDays(i * 7);\n Debug.Print(MyDate.ToLongDateString());\n }\n\noutput:\nFriday, December 9, 2022\nFriday, December 16, 2022\nFriday, December 23, 2022\n\nSo, today is Saturday Dec 3rd, and I count/include today if the requested starting date is same date.\n"
] |
[
0,
0
] |
[] |
[] |
[
"angular",
"asp.net",
"c#",
"jquery",
"webforms"
] |
stackoverflow_0074663961_angular_asp.net_c#_jquery_webforms.txt
|
Q:
Difference between position:sticky and position:fixed?
The documentation was pretty hard to understand since I am new to CSS. So can anyone please explain the actual difference between position:sticky and position:fixed? I would also appreciate an example.
I have gone through https://developer.mozilla.org/en-US/docs/Web/CSS/position and a few other articles, but I still don't get it.
A:
position: fixed always fixates an element to some position within its scrolling container or the viewport. No matter how you scroll its container, it will remain in the exact same position and not affect the flow of other elements within the container.
Without going into specific details, position: sticky basically acts like position: relative until an element is scrolled beyond a specific offset, in which case it turns into position: fixed, causing the element to "stick" to its position instead of being scrolled out of view. It eventually gets unstuck as it gets scrolled back toward its original position. At least, that's how I understand it in theory.
The reason why I want to avoid going into details is because position: sticky is brand new, experimental (as shown in the document you link to), and not finalized yet. Even what I've stated above may well change in the near future. You won't be able to use position: sticky yet anyway (hopefully this will change within the next year).
Mozilla recently presented its implementation of position: sticky here. It's highly worth a watch.
A:
See this self-explanatory example for better clarity. CODEPEN
Fixed Position:
An element with fixed position is displayed with respect to the viewport or the browser window itself. It always stays in the same place even if the page is scrolled.
It does not affect the flow of other elements on the page, i.e. it doesn't occupy any specific space (just like position: absolute).
If it is defined inside some other container (a div with or without relative/absolute position), it is still positioned with respect to the browser and not that container. (Here it differs from position: absolute).
Sticky Position:
An element with sticky position is positioned based on the user's scroll position. As @Boltclock mentioned it basically acts like position: relative until an element is scrolled beyond a specific offset, in which case it turns into position: fixed. When it is scrolled back it gets back to its previous (relative) position.
It affects the flow of other elements on the page, i.e. it occupies a specific space on the page (just like position: relative).
If it is defined inside some container, it is positioned with respect to that container. If the container has some overflow (scroll), then depending on the scroll offset it turns into position: fixed.
So if you want to achieve the fixed functionality but inside a container, use sticky.
A:
Let me make it extremely simple.
fixed position will not occupy any space in the body, so the next element(eg: an image) will be behind the fixed element.
sticky position occupies the space, so the next element will not be hidden behind it.
In the following image the navbar is fixed: part of the image is hidden behind it, because a fixed element doesn't occupy space. You can solve this by adding a margin or ::before/::after pseudo-elements.
This example shows sticky position. Here the image is fully shown; nothing is hidden behind the navbar, because sticky elements occupy space in the document.
A:
Suppose you have a navigation bar at the top of your website and you want it to be fixed so that as you scroll down the page, it's always visible.
If you give it position: fixed; then the page content at the top will be hidden below the navigation bar. In contrast, if you decide to give it position: sticky; top: 0;, the navigation bar will remain in the flow of the document, and gracefully pushes the content underneath it below, so no content is hidden.
When you apply position: fixed; the navigation bar escapes from the normal document flow, similarly to when you float an element.
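A minimal sketch of the sticky variant described above (the selector is illustrative):
nav {
  position: sticky;
  top: 0;
}
With position: fixed instead, you would typically also give the page content a matching margin-top or padding-top so it is not hidden underneath the bar.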
A:
fixed gets fixed on both the X and Y axes, while sticky is fixed on one axis only.
sticky stays fixed only until the end of its container, but fixed stays fixed until the end of the webpage.
A:
Fixed and Sticky both are very similar but there is one important difference between them -
1. position:fixed : It directly fixes the element at the provided location using top, bottom, left, right.
<div style="position:relative">
<p style="position:fixed; top:0px">paragraph with fixed position</p>
</div>
- here the paragraph with fixed position will always be fixed at top: 0px.
2. position:sticky : It does not directly fix the element at the provided location. Initially the element moves with the scroll; it becomes fixed only once it reaches the location specified using top, bottom, left, right. Until then it moves with the scroll.
<div style="position:relative">
<p style="position:sticky;top:0px">paragraph with sticky position</p>
</div>
- here the paragraph with sticky position will become fixed (stick) only once the element reaches the top: 0px position.
|
Difference between position:sticky and position:fixed?
|
The documentation was pretty hard to understand since I am new to CSS. So can anyone please explain the actual difference between position:sticky and position:fixed? I would also appreciate an example.
I have gone through https://developer.mozilla.org/en-US/docs/Web/CSS/position and a few other articles, but I still don't get it.
|
[
"position: fixed always fixates an element to some position within its scrolling container or the viewport. No matter how you scroll its container, it will remain in the exact same position and not affect the flow of other elements within the container.\nWithout going into specific details, position: sticky basically acts like position: relative until an element is scrolled beyond a specific offset, in which case it turns into position: fixed, causing the element to \"stick\" to its position instead of being scrolled out of view. It eventually gets unstuck as it gets scrolled back toward its original position. At least, that's how I understand it in theory.\nThe reason why I want to avoid going into details is because position: sticky is brand new, experimental (as shown in the document you link to), and not finalized yet. Even what I've stated above may well change in the near future. You won't be able to use position: sticky yet anyway (hopefully this will change within the next year).\nMozilla recently presented its implementation of position: sticky here. It's highly worth a watch.\n",
"See this self-explanatory example for better clarity. CODEPEN\nFixed Position:\n\nAn element with fixed position is displayed with respect to the viewport or the browser window itself. It always stays in the same place even if the page is scrolled.\nIt does not effect the flow of other elements in the page ie doesn't occupy any specific space(just like position: absolute).\nIf it is defined inside some other container (div with or without relative/absolute position), still it is positioned with respect to the browser and not that container. (Here it differs with position: absolute).\n\nSticky Position:\n\nAn element with sticky position is positioned based on the user's scroll position. As @Boltclock mentioned it basically acts like position: relative until an element is scrolled beyond a specific offset, in which case it turns into position: fixed. When it is scrolled back it gets back to its previous (relative) position.\nIt effects the flow of other elements in the page ie occupies a specific space on the page(just like position: relative).\nIf it is defined inside some container, it is positioned with respect to that container. If the container has some overflow(scroll), depending on the scroll offset it turns into position:fixed.\n\nSo if you want to achieve the fixed functionality but inside a container, use sticky.\n",
"Let me make it extremely simple.\nfixed position will not occupy any space in the body, so the next element(eg: an image) will be behind the fixed element.\nsticky position occupies the space, so the next element will not be hidden behind it.\nFollowing image is fixed some part of image is hidden behind navbar, because Fixed element doesn't occupy space. You can solve this by adding margin or before/ after pseudo classes\n\nThis eg is of sticky position. Here Image is fully shown, nothing is hidden behind navbar because sticky elements occupy space in the document.\n\n",
"Suppose you have a navigation bar at the top of your website and you want it to be fixed so that as you scroll down the page, it's always visible.\nIf you give it position: fixed; then the page content at the top will be hidden below the navigation bar. In contrast, if you decide to give it position: sticky; top: 0;, the navigation bar will remain in the flow of the document, and gracefully pushes the content underneath it below, so no content is hidden.\nWhen you apply position: fixed; the navigation bar escapes from the normal document flow, similarly to when you float an element.\n",
"\nfixed get fixed on both X and Y axis while sticky is fixed on X axis only.\nsticky will be fixed only till the end of the container, but fixed will be fixed until the end of the webpage.\n\n",
"Fixed and Sticky both are very similar but there is one important difference between them -\n1. position:fixed : It will directly fixed the element at provided location using top,bottom,left,right.\n<div style=\"position:relative\">\n<p style=\"position:fixed; top:0px\">paragraph with fixed position</p>\n</div>\n\n- here paragraph with fixed position will fixed always at top:0px.\n2. position:sticky : It will not directly fixed the element at provided location. It will move element with scroll initially. It will fixed the element only if element reached to specified location using top,bottom,left,right. Until it will move with scroll.\n<div style=\"position:relative\">\n<p style=\"position:sticky;top:0px\">paragraph with sticky position</p>\n</div>\n\n- here paragraph with sticky position will fixed or stick only if element will reached to top 0px position.\n"
] |
[
57,
55,
27,
5,
0,
0
] |
[] |
[] |
[
"css",
"html",
"layout",
"position"
] |
stackoverflow_0019501919_css_html_layout_position.txt
|
Q:
What is the purpose of returning 1 or 0 from a function?
int SeqList<ElemType>::InsertElem(const ElemType &e)//insert at the tail
{
if(length==maxLength)
return 0;
else
{
elems[length]=e;
length++;
return 1;
}
}
The purpose of this program is to implement a sequential list, and I want to know the difference between return 1 and return 0 and their effect. Additionally, why not use the void type?
A:
In the code you have provided, the return statements are used to indicate the success or failure of the InsertElem() function. The function is designed to insert a new element into a sequential list, but it will only do so if there is enough space in the list to accommodate the new element. If the list is full, the function will return 0 to indicate that the operation has failed. Otherwise, the function will insert the new element, increment the length of the list, and return 1 to indicate that the operation has succeeded.
The use of return statements in this way is a common pattern in programming, as it allows the calling code to determine whether an operation has succeeded or failed. In this case, the calling code can check the return value of the InsertElem() function and take appropriate action based on the result.
As for why the InsertElem() function doesn't use the void type, it is because the function is expected to return a value indicating the success or failure of the operation. The void type is used to indicate that a function does not return a value, so it would not be appropriate to use it in this case. Instead, the function uses the int type to indicate that it will return an integer value.
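To make this concrete, here is a small sketch of calling code that branches on the returned value (the SeqList<int> instantiation is assumed for illustration, since the original excerpt doesn't show how the list is constructed):
#include <iostream>

// a caller that checks InsertElem's return value before proceeding
void addItem(SeqList<int> &list, int value)
{
    if (list.InsertElem(value) == 1)
        std::cout << "Insert succeeded\n";
    else
        std::cout << "Insert failed: the list is full\n";
}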
|
What is the purpose of returning 1 or 0 from a function?
|
int SeqList<ElemType>::InsertElem(const ElemType &e)//insert at the tail
{
if(length==maxLength)
return 0;
else
{
elems[length]=e;
length++;
return 1;
}
}
The purpose of this program is to implement a sequential list, and I want to know the difference between return 1 and return 0 and their effect. Additionally, why not use the void type?
|
[
"In the code you have provided, the return statements are used to indicate the success or failure of the InsertElem() function. The function is designed to insert a new element into a sequential list, but it will only do so if there is enough space in the list to accommodate the new element. If the list is full, the function will return 0 to indicate that the operation has failed. Otherwise, the function will insert the new element, increment the length of the list, and return 1 to indicate that the operation has succeeded.\nThe use of return statements in this way is a common pattern in programming, as it allows the calling code to determine whether an operation has succeeded or failed. In this case, the calling code can check the return value of the InsertElem() function and take appropriate action based on the result.\nAs for why the InsertElem() function doesn't use the void type, it is because the function is expected to return a value indicating the success or failure of the operation. The void type is used to indicate that a function does not return a value, so it would not be appropriate to use it in this case. Instead, the function uses the int type to indicate that it will return an integer value.\n"
] |
[
2
] |
[] |
[] |
[
"c++",
"data_structures"
] |
stackoverflow_0074664672_c++_data_structures.txt
|
Q:
How to upload image with Reactjs
I am working in React.js with the Next.js framework. I am trying to upload an image with Axios (a POST-method API), and I will use an API written in PHP. How can I do this? I tried the following code, but it gives me this error:
Argument of type 'string | null' is not assignable to parameter of type 'string | Blob'
I tried the following code, but it gives me the errors mentioned in the comments below.
HTML
<input type="file"
className="file-upload-default"
onChange={handleFileSelect}/>
JavaScript
const formData = new FormData();
formData.append("selectedFile", selectedFile);
try {
const response = await axios({
method: "post",
url: "/api/upload/file",
data: formData,
headers: { "Content-Type": "multipart/form-data" },
});
}
catch(error) {
console.log(error)
// getting error
// "Argument of type 'string | null'
// is not assignable to parameter
// of type 'string | Blob'"
}
const handleFileSelect = (event : any) => {
setSelectedFile(event.target.files[0])
// getting error "Cannot find name 'setSelectedFile'"
}
A:
The first error you faced in catch is because you didn't assign a type to the selectedFile state, so you need to add the type string | Blob for it.
As for the second one, I'm not sure where you define the selectedFile state, but it seems that the handleFileSelect function is not in the file where selectedFile is defined.
Here is what I did; I hope it helps:
const [selectedFile, setSelectedFile] = useState<string | Blob>('');
const handleFileUpload = async () => {
const formData = new FormData();
formData.append('selectedFile', selectedFile);
try {
const response = await axios({
method: 'post',
url: '/api/upload/file',
data: formData,
headers: { 'Content-Type': 'multipart/form-data' },
});
} catch (error) {
console.log(error);
}
};
const handleFileSelect = (event: any) => {
setSelectedFile(event.target.files[0]);
};
return (
<>
<input type="file" onChange={handleFileSelect} />
<button onClick={handleFileUpload}>Upload!</button>
</>)
|
How to upload image with Reactjs
|
I am working in React.js with the Next.js framework. I am trying to upload an image with Axios (a POST-method API), and I will use an API written in PHP. How can I do this? I tried the following code, but it gives me this error:
Argument of type 'string | null' is not assignable to parameter of type 'string | Blob'
I tried the following code, but it gives me the errors mentioned in the comments below.
HTML
<input type="file"
className="file-upload-default"
onChange={handleFileSelect}/>
JavaScript
const formData = new FormData();
formData.append("selectedFile", selectedFile);
try {
const response = await axios({
method: "post",
url: "/api/upload/file",
data: formData,
headers: { "Content-Type": "multipart/form-data" },
});
}
catch(error) {
console.log(error)
// getting error
// "Argument of type 'string | null'
// is not assignable to parameter
// of type 'string | Blob'"
}
const handleFileSelect = (event : any) => {
setSelectedFile(event.target.files[0])
// getting error "Cannot find name 'setSelectedFile'"
}
|
[
"First error that you faced in catch is because that you didn't assing type for selectedFile state so you need to add type string | Blob for it\nand for second one I'm not sure where you define selectedFile state but it seems that handleFileSelect function is not in the file where selectedFile is defined\nhere is what I did, I hope it helps:\n const [selectedFile, setSelectedFile] = useState<string | Blob>('');\n const handleFileUpload = async () => {\n const formData = new FormData();\n formData.append('selectedFile', selectedFile);\n try {\n const response = await axios({\n method: 'post',\n url: '/api/upload/file',\n data: formData,\n headers: { 'Content-Type': 'multipart/form-data' },\n });\n } catch (error) {\n console.log(error);\n }\n };\n\n const handleFileSelect = (event: any) => {\n setSelectedFile(event.target.files[0]);\n };\n\n return (\n <>\n <input type=\"file\" onChange={handleFileSelect} />\n <button onClick={handleFileUpload}>Upload!</button>\n </>)\n\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"next.js",
"reactjs"
] |
stackoverflow_0074664607_javascript_next.js_reactjs.txt
|
Q:
Implementation of comet chat sdk in kotlin
I have been trying to implement CometChat in my app and it has been really difficult for some days now. I finally initialized it, but it gives me an error when a user logs in. Here is my code below
private fun logChat() {
val UID: String? = FirebaseAuth.getInstance().currentUser?.uid // Replace with the UID of the user to login
val AUTH_KEY = "a7cd1825ba915ecc3732c8896ae7f2f4fa9d4b5d" // Replace with your App Auth Key
CometChat.login(UID.toString(), AUTH_KEY, object : CometChat.CallbackListener<User?>() {
override fun onSuccess(user: User?) {
}
override fun onError(e: CometChatException) {
}
})
}
It gives the following error: Type mismatch. Required: CometChat.CallbackListener<User!> Found:
But it doesn't indicate what was found. Please help me
A:
Add import com.cometchat.pro.models.User so that the User type resolves to CometChat's model class.
Nice to help you!
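For clarity, a minimal sketch of the imports at the top of the file (package paths as in the CometChat Pro Android SDK):
import com.cometchat.pro.core.CometChat
import com.cometchat.pro.exceptions.CometChatException
import com.cometchat.pro.models.User // makes User resolve to CometChat's model class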
|
Implementation of comet chat sdk in kotlin
|
I have been trying to implement CometChat in my app and it has been really difficult for some days now. I finally initialized it, but it gives me an error when a user logs in. Here is my code below
private fun logChat() {
val UID: String? = FirebaseAuth.getInstance().currentUser?.uid // Replace with the UID of the user to login
val AUTH_KEY = "a7cd1825ba915ecc3732c8896ae7f2f4fa9d4b5d" // Replace with your App Auth Key
CometChat.login(UID.toString(), AUTH_KEY, object : CometChat.CallbackListener<User?>() {
override fun onSuccess(user: User?) {
}
override fun onError(e: CometChatException) {
}
})
}
It gives the following error: Type mismatch. Required: CometChat.CallbackListener<User!> Found:
But it doesn't indicate what was found. Please help me
|
[
"Add import com.cometchat.pro.models.User.\nNice to help you\n"
] |
[
0
] |
[] |
[] |
[
"android",
"callback",
"cometchat",
"kotlin",
"sdk"
] |
stackoverflow_0072079813_android_callback_cometchat_kotlin_sdk.txt
|
Q:
How to speed up SQL_CALC_FOUND_ROWS query in WordPress?
My MariaDB slow query log shows a lot of the below.
Time: 221202 11:46:57
Query_time: 5.022055 Lock_time: 0.000082 Rows_sent: 5 Rows_examined: 447119
Rows_affected: 0 Bytes_sent: 141
SELECT SQL_CALC_FOUND_ROWS ab_posts.ID
FROM ab_posts LEFT JOIN ab_postmeta ON ( ab_posts.ID = ab_postmeta.post_id AND ab_postmeta.meta_key = 'cid' ) LEFT JOIN ab_postmeta AS mt1 ON ( ab_posts.ID = mt1.post_id )
WHERE 1=1 AND (
ab_postmeta.post_id IS NULL
AND
mt1.meta_key = '_json_file'
) AND ab_posts.post_type = 'listings' AND ((ab_posts.post_status = 'publish'))
GROUP BY ab_posts.ID
ORDER BY ab_posts.post_date DESC
LIMIT 0, 5;
How can I speed up the query? Should I create any index to speed things up?
UPDATE: Below is the EXPLAIN query and indices of the two tables - ab_post_meta and ab_posts
UPDATE: And I think I found the source. The query is generated by WordPress core file wp-includes\class-wp-query.php
$found_rows = '';
if ( ! $q['no_found_rows'] && ! empty( $limits ) ) {
$found_rows = 'SQL_CALC_FOUND_ROWS';
}
$old_request = "
SELECT $found_rows $distinct $fields
FROM {$wpdb->posts} $join
WHERE 1=1 $where
$groupby
$orderby
$limits
";
A:
I think I found the culprit. The answer is explained here.
I do have a meta_query as below.
$args = array(
'posts_per_page' => MAX_LIMIT,
'post_type' => 'listings',
'orderby' => 'date',
'order' => 'desc',
'post_status'=>'publish',
'meta_query' => array(
'relation' => 'AND',
array(
'key' => 'cid',
'compare' => 'NOT EXISTS'
),
array(
'key' => '_json_file',
'compare' => 'EXISTS'
)
));
|
How to speed up SQL_CALC_FOUND_ROWS query in WordPress?
|
My MariaDB slow query log shows a lot of the below.
Time: 221202 11:46:57
Query_time: 5.022055 Lock_time: 0.000082 Rows_sent: 5 Rows_examined: 447119
Rows_affected: 0 Bytes_sent: 141
SELECT SQL_CALC_FOUND_ROWS ab_posts.ID
FROM ab_posts LEFT JOIN ab_postmeta ON ( ab_posts.ID = ab_postmeta.post_id AND ab_postmeta.meta_key = 'cid' ) LEFT JOIN ab_postmeta AS mt1 ON ( ab_posts.ID = mt1.post_id )
WHERE 1=1 AND (
ab_postmeta.post_id IS NULL
AND
mt1.meta_key = '_json_file'
) AND ab_posts.post_type = 'listings' AND ((ab_posts.post_status = 'publish'))
GROUP BY ab_posts.ID
ORDER BY ab_posts.post_date DESC
LIMIT 0, 5;
How can I speed up the query? Should I create any index to speed things up?
UPDATE: Below is the EXPLAIN query and indices of the two tables - ab_post_meta and ab_posts
UPDATE: And I think I found the source. The query is generated by WordPress core file wp-includes\class-wp-query.php
$found_rows = '';
if ( ! $q['no_found_rows'] && ! empty( $limits ) ) {
$found_rows = 'SQL_CALC_FOUND_ROWS';
}
$old_request = "
SELECT $found_rows $distinct $fields
FROM {$wpdb->posts} $join
WHERE 1=1 $where
$groupby
$orderby
$limits
";
|
[
"I think I found the culprit. The answer is explained here.\nI do have a meta_query as below.\n$args = array(\n 'posts_per_page' => MAX_LIMIT, \n 'post_type' => listings',\n 'orderby' => 'date',\n 'order' => 'desc', \n 'post_status'=>'publish',\n 'meta_query' => array(\n 'relation' => 'AND',\n array(\n 'key' => 'cid',\n 'compare' => 'NOT EXISTS'\n ),\n array(\n 'key' => '_json_file',\n 'compare' => 'EXISTS'\n )\n ));\n\n"
] |
[
0
] |
[] |
[] |
[
"mariadb",
"mysql",
"wordpress"
] |
stackoverflow_0074655409_mariadb_mysql_wordpress.txt
|
Q:
Discord music bot not reading the command
So, I'm trying to make a Discord music bot and I keep getting this one error whenever I use the play command. I think it's not loading the cog, or it has something to do with that. This is my main function,
and below it is the play command inside my music_player class, along with the error that I'm getting once I run the code:
import discord
from discord.ext import commands
import os
from youtube_dl import YoutubeDL
intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(
command_prefix=commands.when_mentioned_or("!"),
description='Relatively simple music bot example',
intents=intents,
)
@bot.event
async def on_ready():
print(f'Logged in as {bot.user} (ID: {bot.user.id})')
print('------')
bot.add_cog("cogs.music_player")
music_player.py
import os
import discord
from discord.ext import commands
from youtube_dl import YoutubeDL
class music_player(commands.Cog):
def __init__(self, client):
self.client = client
# Checks whether the song is playing or not
self.isplaying = False
self.ispaused = False
# The music queue ( this contains the song and the channel)
self.musicque = []
# The code below is taken from github to get the best quality of sound possible
self.ytdl_format_options = {
'format': 'bestaudio/best',
'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s',
'restrictfilenames': True,
'noplaylist': True,
'nocheckcertificate': True,
'ignoreerrors': False,
'logtostderr': False,
'quiet': True,
'no_warnings': True,
'default_search': 'auto',
'source_address': '0.0.0.0', # bind to ipv4 since ipv6 addresses cause issues sometimes
}
self.ffmpeg_options = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
self.vc = None
# This small function searches a song on youtube
def search_yt(self, song):
# with youtube open as
with YoutubeDL(self.ytdl_format_options) as ydl:
# This will basically search youtube and return the entries we get from our search
try:
info = ydl.extract_info("ytsearch:%s" % song, download=False)['entries'][0]
except Exception:
return False
# Returns the info as source
return {'source': info['formats'][0]['url'], 'title': info['title']}
def play_next(self):
if len(self.musicque) > 0:
self.isplaying = True
# Get the link of the first song in the que as we did in the play song function
music_link = self.musicque[0][0]['source']
# Remove the song currently playing same way we did in the play_song function
self.musicque.pop(0)
# same lambda function we used the play_song function
self.vc.play(discord.FFmpegPCMAudio(music_link, **self.ffmpeg_options), after=lambda e: self.play_next())
else:
self.isplaying = False
async def play_song(self, ctx):
if len(self.musicque) > 0:
self.isplaying = True
# Get the link of the first song in the que
music_link = self.musicque[0][0]['source']
# Connect to the voice channel the user is currently in if bot is not already connected
if self.vc == None or not self.vc.is_connected():
self.vc = await self.musicque[0][1].connect()
# if we fail to connect to the vc for whatever reason
if self.vc == None:
await ctx.send("Could not connect to the voice channel")
return
# Else if the bot is already in voice
else:
await self.vc.move_to(self.musicque[0][1])
# Remove the first song in the que using the built in pop function in python as we're already playing the song
self.musicque.pop(0)
# Took this lambda play function from github
self.vc.play(discord.FFmpegPCMAudio(music_link, **self.ffmpeg_options), after=lambda e: self.next_song())
"""WENT AHEAD AND MOVED NEXT_SONG FUNCTION ABOVE AS I REALIZED IT WOULD NOT WORK IF IT WAS BELOW"""
"""ALL THE FUNCTIONS WE NEEDED FOR OUR COMMANDS TO FUNCTION HAVE BEEN DEFINED NOW ONTO THE COMMAND"""
@commands.command()
async def play(self, ctx, *, args):
# This is the song that the user will search and we will look up
# using the yt-search function that we made earlier
query = " ".join(args)
# If user is not in the voice channel
voice_channel = ctx.author.voice_channel
if voice_channel is None:
await ctx.send("You're not in a voice channel you dummy")
# If any song in the que is currently paused resume it
elif self.ispaused == True:
self.vc.resume()
else:
# assign song to the search result of the youtube song
song = self.search_yt(query)
if type(song) == type(True):
await ctx.send("Incorrect format of song could not play")
else:
await ctx.send("Song added")
self.musicque.append([song, voice_channel])
if self.isplaying == False:
await self.play_song(ctx)
I was expecting the program to play a song or at least join the voice channel, but apparently it says the command is not found. I've tried changing things with the cog, but it didn't help, so I'm fully lost as to what I'm doing wrong.
A:
The add_cog method doesn't work that way; it takes a cog instance as an argument, not the path to the cog file. That's load_extension's job: load_extension will go to the given path and call the setup function inside the file, and you have to add the cog inside that setup function. For example:
cogs/cog_file.py
class ACogClass(discord.ext.commands.Cog):
...
async def setup(bot: discord.ext.commands.Bot): # as of discord.py 2, the "setup" function needs to be an async function
    await bot.add_cog(ACogClass(bot)) # in discord.py 2, add_cog is a coroutine and must be awaited
main.py
bot = discord.ext.commands.Bot(...)
async def setup_hook():
await bot.load_extension("cogs.cog_file") # as of discord.py 2, the "load_extension" method is now an async function
bot.setup_hook = setup_hook # set the bot's default "setup_hook" to our custom "setup_hook"
|
Discord music bot not reading the command
|
So, I'm trying to make a Discord music bot and I keep getting this one error whenever I use the play command. I think it's not loading the cog, or it has something to do with that. This is my main function,
and below it is the play command inside my music_player class, along with the error that I'm getting once I run the code:
import discord
from discord.ext import commands
import os
from youtube_dl import YoutubeDL
intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(
command_prefix=commands.when_mentioned_or("!"),
description='Relatively simple music bot example',
intents=intents,
)
@bot.event
async def on_ready():
print(f'Logged in as {bot.user} (ID: {bot.user.id})')
print('------')
bot.add_cog("cogs.music_player")
music_player.py
import os
import discord
from discord.ext import commands
from youtube_dl import YoutubeDL
class music_player(commands.Cog):
def __init__(self, client):
self.client = client
# Checks whether the song is playing or not
self.isplaying = False
self.ispaused = False
# The music queue ( this contains the song and the channel)
self.musicque = []
# The code below is taken from github to get the best quality of sound possible
self.ytdl_format_options = {
'format': 'bestaudio/best',
'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s',
'restrictfilenames': True,
'noplaylist': True,
'nocheckcertificate': True,
'ignoreerrors': False,
'logtostderr': False,
'quiet': True,
'no_warnings': True,
'default_search': 'auto',
'source_address': '0.0.0.0', # bind to ipv4 since ipv6 addresses cause issues sometimes
}
self.ffmpeg_options = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
self.vc = None
# This small function searches a song on youtube
def search_yt(self, song):
# with youtube open as
with YoutubeDL(self.ytdl_format_options) as ydl:
# This will basically search youtube and return the entries we get from our search
try:
info = ydl.extract_info("ytsearch:%s" % song, download=False)['entries'][0]
except Exception:
return False
# Returns the info as source
return {'source': info['formats'][0]['url'], 'title': info['title']}
def play_next(self):
if len(self.musicque) > 0:
self.isplaying = True
# Get the link of the first song in the que as we did in the play song function
music_link = self.musicque[0][0]['source']
# Remove the song currently playing same way we did in the play_song function
self.musicque.pop(0)
# same lambda function we used the play_song function
self.vc.play(discord.FFmpegPCMAudio(music_link, **self.ffmpeg_options), after=lambda e: self.play_next())
else:
self.isplaying = False
async def play_song(self, ctx):
if len(self.musicque) > 0:
self.isplaying = True
# Get the link of the first song in the que
music_link = self.musicque[0][0]['source']
# Connect to the voice channel the user is currently in if bot is not already connected
if self.vc == None or not self.vc.is_connected():
self.vc = await self.musicque[0][1].connect()
# if we fail to connect to the vc for whatever reason
if self.vc == None:
await ctx.send("Could not connect to the voice channel")
return
# Else if the bot is already in voice
else:
await self.vc.move_to(self.musicque[0][1])
# Remove the first song in the que using the built in pop function in python as we're already playing the song
self.musicque.pop(0)
# Took this lambda play function from github
self.vc.play(discord.FFmpegPCMAudio(music_link, **self.ffmpeg_options), after=lambda e: self.next_song())
"""WENT AHEAD AND MOVED NEXT_SONG FUNCTION ABOVE AS I REALIZED IT WOULD NOT WORK IF IT WAS BELOW"""
"""ALL THE FUNCTIONS WE NEEDED FOR OUR COMMANDS TO FUNCTION HAVE BEEN DEFINED NOW ONTO THE COMMAND"""
@commands.command()
async def play(self, ctx, *, args):
# This is the song that the user will search and we will look up
# using the yt-search function that we made earlier
query = " ".join(args)
# If user is not in the voice channel
voice_channel = ctx.author.voice_channel
if voice_channel is None:
await ctx.send("You're not in a voice channel you dummy")
# If any song in the que is currently paused resume it
elif self.ispaused == True:
self.vc.resume()
else:
# assign song to the search result of the youtube song
song = self.search_yt(query)
if type(song) == type(True):
await ctx.send("Incorrect format of song could not play")
else:
await ctx.send("Song added")
self.musicque.append([song, voice_channel])
if self.isplaying == False:
await self.play_song(ctx)
I was expecting the program to play a song or at least join the voice channel, but apparently it says the command is not found. I've tried changing things with the cog, but it didn't help, so I'm fully lost as to what I'm doing wrong.
|
[
"The add_cog method doesn't work that way; it takes a cog class as an argument, not the path to the cog file. That's the load_extension's job. The load_extension will go to the given path and call the setup function inside the file, and you have to add the cog inside that setup function. For example:\n \ncogs/cog_file.py\nclass ACogClass(discord.ext.commands.Cog):\n ...\n \nasync def setup(bot: discord.ext.commands.Bot): # as of discord.py 2, the \"setup\" function needs to be an async function\n bot.add_cog(ACogClass(bot))\n\n \nmain.py\nbot = discord.ext.commands.Bot(...)\n \nasync def setup_hook():\n await bot.load_extension(\"cogs.cog_file\") # as of discord.py 2, the \"load_extension\" method is now an async function\n \nbot.setup_hook = setup_hook # set the bot's default \"setup_hook\" to our custom \"setup_hook\"\n\n"
] |
[
0
] |
[] |
[] |
[
"discord.py",
"python_3.x"
] |
stackoverflow_0074642313_discord.py_python_3.x.txt
|
Q:
How can I compare read(1.proto) = read(2.proto) in Go(assuming there's just one message definition)?
Context: I'm trying to resolve this issue.
In other words, there's a NormalizeJsonString() for JSON strings (see this for more context:
// Takes a value containing JSON string and passes it through
// the JSON parser to normalize it, returns either a parsing
// error or normalized JSON string.
func NormalizeJsonString(jsonString interface{}) (string, error) {
that allows to have the following code:
return structure.NormalizeJsonString(old) == structure.NormalizeJsonString(new)
but it doesn't work for strings that are proto files (all proto files are guaranteed to have just one message definition). For example, I could see:
syntax = "proto3";
- package bar.proto;
+ package bar.proto;
option java_outer_classname = "FooProto";
message Foo {
...
- int64 xyz = 3;
+ int64 xyz = 3;
Is there NormalizeProtoString available in some Go SDKs? I found MessageDifferencer but it's in C++ only. Another option I considered was to replace all new lines / group of whitespaces with a single whitespace but it's a little bit hacky.
A:
To compare two proto files in Go, you can use the protoc command-line tool to compile the files into a binary representation called the Protocol Buffer wire format. You can then compare the resulting binary files to determine if they are equal.
Here is an example of how you can do this:
Install protoc on your system if you haven't already done so.
Encode a message defined in the proto file into the Protocol Buffer wire format using the following command:

protoc --encode=<message_type> <input_file.proto> < <text_message_file> > <output_file>

where <message_type> is the fully-qualified name of the message type defined in the proto file, <input_file.proto> is the path to the input proto file, <text_message_file> contains a message in protobuf text format (--encode reads it from stdin), and <output_file> is the path to the output binary file (--encode writes to stdout, which is redirected here).
Use the ioutil.ReadFile function to read the contents of the output binary files into byte slices.
Use the bytes.Equal function to compare the contents of the two byte slices. This function returns a boolean value indicating whether the contents of the two slices are equal or not.
For example, suppose you have the following proto file called foo.proto:
syntax = "proto3";
package foo;
message Bar {
int64 xyz = 1;
}
You can encode a foo.Bar message into the Protocol Buffer wire format using the following command, where bar.txt is assumed to contain the message in protobuf text format:
protoc --encode=foo.Bar foo.proto < bar.txt > foo.bin
This will create a binary file called foo.bin containing the binary representation of that foo.Bar message.
You can then read the contents of this file into a byte slice using the ioutil.ReadFile function:
data, err := ioutil.ReadFile("foo.bin")
if err != nil {
// Handle the error
}
You can then compare the contents of this byte slice with the contents of another byte slice read from another binary file using the bytes.Equal function:
otherData, err := ioutil.ReadFile("other.bin")
if err != nil {
// Handle the error
}
if bytes.Equal(data, otherData) {
// The contents of the two binary files are equal
} else {
// The contents of the two binary files are not equal
}
This is a simple way to compare proto files in Go. Note that this approach only works for proto files that contain a single message definition, as you mentioned in your question. If your proto files contain multiple message definitions, you will need to use a different approach.
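Putting those steps together, here is a minimal, self-contained sketch of the comparison helper described above (the file names foo.bin and other.bin are placeholders, and os.ReadFile is used in place of the older ioutil.ReadFile):
package main

import (
    "bytes"
    "fmt"
    "os"
)

// compareBins reports whether two wire-format files are byte-identical.
func compareBins(pathA, pathB string) (bool, error) {
    a, err := os.ReadFile(pathA)
    if err != nil {
        return false, err
    }
    b, err := os.ReadFile(pathB)
    if err != nil {
        return false, err
    }
    return bytes.Equal(a, b), nil
}

func main() {
    equal, err := compareBins("foo.bin", "other.bin")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("equal:", equal)
}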
|
How can I compare read(1.proto) = read(2.proto) in Go(assuming there's just one message definition)?
|
Context: I'm trying to resolve this issue.
In other words, there's a NormalizeJsonString() for JSON strings (see this for more context:
// Takes a value containing JSON string and passes it through
// the JSON parser to normalize it, returns either a parsing
// error or normalized JSON string.
func NormalizeJsonString(jsonString interface{}) (string, error) {
that allows to have the following code:
return structure.NormalizeJsonString(old) == structure.NormalizeJsonString(new)
but it doesn't work for strings that are proto files (all proto files are guaranteed to have just one message definition). For example, I could see:
syntax = "proto3";
- package bar.proto;
+ package bar.proto;
option java_outer_classname = "FooProto";
message Foo {
...
- int64 xyz = 3;
+ int64 xyz = 3;
Is there NormalizeProtoString available in some Go SDKs? I found MessageDifferencer but it's in C++ only. Another option I considered was to replace all new lines / group of whitespaces with a single whitespace but it's a little bit hacky.
|
[
"To compare two proto files in Go, you can use the protoc command-line tool to compile the files into a binary representation called the Protocol Buffer wire format. You can then compare the resulting binary files to determine if they are equal.\nHere is an example of how you can do this:\n\nInstall protoc on your system if you haven't already done so.\n\nCompile the proto files into the Protocol Buffer wire format using the following command:\n\n\nprotoc --encode=<message_type> <input_file.proto> <output_file>\n\nwhere <message_type> is the fully-qualified name of the message type defined in the proto file, <input_file.proto> is the path to the input proto file, and <output_file> is the path to the output binary file.\nUse the ioutil.ReadFile function to read the contents of the output binary files into byte slices.\nUse the bytes.Equal function to compare the contents of the two byte slices. This function returns a boolean value indicating whether the contents of the two slices are equal or not.\nFor example, suppose you have the following proto file called foo.proto:\nsyntax = \"proto3\";\npackage foo;\n\nmessage Bar {\n int64 xyz = 1;\n}\n\nYou can compile this file into the Protocol Buffer wire format using the following command:\nprotoc --encode=foo.Bar foo.proto foo.bin\n\nThis will create a binary file called foo.bin containing the binary representation of the foo.Bar message.\nYou can then read the contents of this file into a byte slice using the ioutil.ReadFile function:\ndata, err := ioutil.ReadFile(\"foo.bin\")\nif err != nil {\n // Handle the error\n}\n\nYou can then compare the contents of this byte slice with the contents of another byte slice read from another binary file using the bytes.Equal function:\notherData, err := ioutil.ReadFile(\"other.bin\")\nif err != nil {\n // Handle the error\n}\n\nif bytes.Equal(data, otherData) {\n // The contents of the two binary files are equal\n} else {\n // The contents of the two binary files are not equal\n}\n\nThis is a simple way to compare proto files in Go. Note that this approach only works for proto files that contain a single message definition, as you mentioned in your question. If your proto files contain multiple message definitions, you will need to use a different approach.\n"
] |
[
0
] |
[] |
[] |
[
"go",
"protocol_buffers"
] |
stackoverflow_0074659567_go_protocol_buffers.txt
|
Q:
How to Use One Array of Strings to Filter Another Array of Strings (JS)
Given an array of songs that are different versions of the same song:
const songArray = ["Holiday", "Holiday - Remastered Remix", "Holiday - Live in Portugal", "Holiday (Remix)", "Like a Prayer", "Like a Prayer - Remaster", "Like a Prayer - Remixed 2012", "Music", "Music (Remix)" ]
How can I loop through using another array of "filter words" I've created:
const filterWords = ["Remaster", "- Live in", "Remix"]
To get back everything that does NOT include those filters.
ie:
const filteredSongs = ["Holiday", "Like a Prayer", "Music"]
I've tried finding the answer online, but I seem to only find examples that search for one filter word, not multiple.
I've tried nested looping, .includes(), .filter(), and I'm also having problems because some tracks will contain more than one of the filter words (e.g. "Holiday - Remastered Remix" contains both "Remaster" and "Remix"), and so will be pushed to a new array twice.
something like:
songArray = ["Holiday", "Holiday - Remastered Remix", "Holiday - Live in Portugal", "Holiday (Remix)", "Like a Prayer", "Like a Prayer - Remaster", "Like a Prayer - Remixed 2012", "Music", "Music (Remix)" ]
const filterWords = ["Remaster", "- Live in", "Remix"]
newArray = []
songArray.forEach((song) => {
  filterWords.forEach((filterWord) => {
    if (song.includes(filterWord)) {
      newArray.push(song)
    }
  })
})
expected: ["Holiday", "Like a Prayer", "Music"]
A:
You can achieve it using Array.prototype.filter(), Array.prototype.some(), and String.prototype.includes().
Array.prototype.some() makes your task easy and clear to read.
Try like this:
const songArray = [ "Holiday", "Holiday - Remastered Remix", "Holiday - Live in Portugal", "Holiday (Remix)", "Like a Prayer", "Like a Prayer - Remaster", "Like a Prayer - Remixed 2012", "Music", "Music (Remix)", ];
const filterWords = ["Remaster", "- Live in", "Remix"];
const output = songArray.filter(
(song) => !filterWords.some((word) => song.includes(word))
);
console.log(output);
A:
You were pretty much there. Here is a way (based on what you had already done) that works. Not the best way though, check out the other answers for simpler methods.
songArray = [
"Holiday",
"Holiday - Remastered Remix",
"Holiday - Live in Portugal",
"Holiday (Remix)",
"Like a Prayer",
"Like a Prayer - Remaster",
"Like a Prayer - Remixed 2012",
"Music",
"Music (Remix)",
];
const filterWords = ["Remaster", "- Live in", "Remix"];
newArray = [];
songArray.forEach((song) => {
let found = false;
filterWords.forEach((filterWord) => {
if (song.includes(filterWord)) {
found = true;
}
});
if (!found) {
newArray.push(song);
}
});
console.log(newArray);
A:
Use the filter method to loop over the songs and return a new filtered array. For each song, check if every filter string is not present in the song.
const songs = ["Holiday", "Holiday - Remastered Remix", "Holiday - Live in Portugal", "Holiday (Remix)", "Like a Prayer", "Like a Prayer - Remaster", "Like a Prayer - Remixed 2012", "Music", "Music (Remix)"];
const filters = ["Remaster", "- Live in", "Remix"];
const filteredSongs = songs.filter(song =>
filters.every(filter => !song.includes(filter))
);
console.log(filteredSongs);
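As a small extension beyond the answers above: if the catalogue mixes letter cases (e.g. "remix" vs "Remix"), the same pattern can be made case-insensitive by normalizing both sides before matching. A minimal sketch (the sample titles here are illustrative):
const songs = ["Holiday", "Holiday - remastered remix", "Like a Prayer", "Music (REMIX)"];
const filters = ["Remaster", "- Live in", "Remix"];

// Lower-case both the song title and the filter word before comparing.
const filtered = songs.filter(song =>
  !filters.some(f => song.toLowerCase().includes(f.toLowerCase()))
);

console.log(filtered); // ["Holiday", "Like a Prayer"]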
|
How to Use One Array of Strings to Filter Another Array of Strings (JS)
|
Given an array of songs that are different versions of the same song:
const songArray = ["Holiday", "Holiday - Remastered Remix", "Holiday - Live in Portugal", "Holiday (Remix)", "Like a Prayer", "Like a Prayer - Remaster", "Like a Prayer - Remixed 2012", "Music", "Music (Remix)" ]
How can I loop through using another array of "filter words" I've created:
const filterWords = ["Remaster", "- Live in", "Remix"]
To get back everything that does NOT include those filters.
ie:
const filteredSongs = ["Holiday", "Like a Prayer", "Music"]
I've tried finding the answer online, but I seem to only find examples that search for one filter word, not multiple.
I've tried nested looping, .includes(), .filter(), and I'm also having problems because some tracks will contain more than one of the filter words (e.g. "Holiday - Remastered Remix" contains both "Remaster" and "Remix"), and so will be pushed to a new array twice.
something like:
songArray = ["Holiday", "Holiday - Remastered Remix", "Holiday - Live in Portugal", "Holiday (Remix)", "Like a Prayer", "Like a Prayer - Remaster", "Like a Prayer - Remixed 2012", "Music", "Music (Remix)" ]
const filterWords = ["Remaster", "- Live in", "Remix"]
newArray = []
songArray.forEach((song) => {
  filterWords.forEach((filterWord) => {
    if (song.includes(filterWord)) {
      newArray.push(song)
    }
  })
})
expected: ["Holiday", "Like a Prayer", "Music"]
|
[
"You can achieve it using Array.prototype.filter(), Array.prototype.some(), and String.prototype.includes().\nArray.prototype.some() makes your task easy and clear to read.\nTry like this:\n\n\nconst songArray = [ \"Holiday\", \"Holiday - Remastered Remix\", \"Holiday - Live in Portugal\", \"Holiday (Remix)\", \"Like a Prayer\", \"Like a Prayer - Remaster\", \"Like a Prayer - Remixed 2012\", \"Music\", \"Music (Remix)\", ];\n\nconst filterWords = [\"Remaster\", \"- Live in\", \"Remix\"];\n\nconst output = songArray.filter(\n (song) => !filterWords.some((word) => song.includes(word))\n);\n\nconsole.log(output);\n\n\n\n",
"You were pretty much there. Here is a way (based on what you had already done) that works. Not the best way though, check out the other answers for simpler methods.\nsongArray = [\n \"Holiday\",\n \"Holiday - Remastered Remix\",\n \"Holiday - Live in Portugal\",\n \"Holiday (Remix)\",\n \"Like a Prayer\",\n \"Like a Prayer - Remaster\",\n \"Like a Prayer - Remixed 2012\",\n \"Music\",\n \"Music (Remix)\",\n];\n\nconst filterWords = [\"Remaster\", \"- Live in\", \"Remix\"];\n\nnewArray = [];\n\nsongArray.forEach((song) => {\n let found = false;\n filterWords.forEach((filterWord) => {\n if (song.includes(filterWord)) {\n found = true;\n }\n });\n if (!found) {\n newArray.push(song);\n }\n});\n\nconsole.log(newArray);\n\n\n",
"Use the filter method to loop over the songs and return a new filtered array. For each song, check if every filter string is not present in the song.\n\n\nconst songs = [\"Holiday\", \"Holiday - Remastered Remix\", \"Holiday - Live in Portugal\", \"Holiday (Remix)\", \"Like a Prayer\", \"Like a Prayer - Remaster\", \"Like a Prayer - Remixed 2012\", \"Music\", \"Music (Remix)\"];\n\nconst filters = [\"Remaster\", \"- Live in\", \"Remix\"];\n\nconst filteredSongs = songs.filter(song => \n filters.every(filter => !song.includes(filter))\n);\n\nconsole.log(filteredSongs);\n\n\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"arrays",
"javascript"
] |
stackoverflow_0074664657_arrays_javascript.txt
|
Q:
Group the data by quarter and count the occurrence of each unique values in a column
I had a dataset like with four columns
quarter  Year  Product  state
1        2022  Aspirin  VA
1        2022  Dolo     MD
1        2022  Aspirin  VA
1        2022  Aspirin  MD
2        2022  Aspirin  VA
2        2022  Dolo     MD
2        2022  Dolo     VA
I am trying to get output like
quarter  Product  count
1        Aspirin  3
1        Dolo     1
and also bar graph visualization with the product on the x-axis and count on the y-axis.
I've tried many ways by using count, summary also tried to insert the summary count into table to plot the graph.
df_raw <- dmv %>% group_by(quarter, product) %>% summarize(count=n())
table(df_raw)
tried this also
df1<- dmv[dmv$quarter == 1,] #creating a dataframe for quarter 1
str(df1$product)
df1$product <- as.factor(df1$product_name)
str(df1)
df_product_10 <- names(summary(df1$product)[1:10])
df_product_10_x <- unname(summary(df1$product)[1:10])
rows_id <- seq(1,10)
df2 <- as.data.frame(rows_id, df_product_10, df_product_10_x)
hist(df)
A:
Using data.table
df_raw[, list(count = .N), by = list(quarter, product)]
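For completeness, a minimal sketch of the setup this one-liner assumes: the raw table (dmv in the question) converted to a data.table first, with column names adjusted to your data:
library(data.table)
setDT(dmv)                                   # convert the raw data.frame in place
counts <- dmv[, .(count = .N), by = .(quarter, product)]
counts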
A:
data <- data.frame(dmv %>% group_by(quarter, product) %>% summarize(count = n()))
data <- data[order(data$count, decreasing = TRUE), ]   # sort by count
q <- data[data$quarter == 1, ]                         # keep quarter 1
q1to10 <- q[1:10, ]                                    # top 10 products
ggplot(q1to10, aes(x = product, y = count)) + geom_bar(stat = 'identity')
|
Group the data by quarter and count the occurrence of each unique values in a column
|
I had a dataset like with four columns
quarter  Year  Product  state
1        2022  Aspirin  VA
1        2022  Dolo     MD
1        2022  Aspirin  VA
1        2022  Aspirin  MD
2        2022  Aspirin  VA
2        2022  Dolo     MD
2        2022  Dolo     VA
I am trying to get output like
quarter  Product  count
1        Aspirin  3
1        Dolo     1
and also bar graph visualization with the product on the x-axis and count on the y-axis.
I've tried many ways by using count, summary also tried to insert the summary count into table to plot the graph.
df_raw <- dmv %>% group_by(quarter, product) %>% summarize(count=n())
table(df_raw)
tried this also
df1<- dmv[dmv$quarter == 1,] #creating a dataframe for quarter 1
str(df1$product)
df1$product <- as.factor(df1$product_name)
str(df1)
df_product_10 <- names(summary(df1$product)[1:10])
df_product_10_x <- unname(summary(df1$product)[1:10])
rows_id <- seq(1,10)
df2 <- as.data.frame(rows_id, df_product_10, df_product_10_x)
hist(df)
|
[
"Using data.table\ndf_raw[, list(count = .N), by = list(quarter, product)]\n\n",
"\n\ndata<-data.frame(dmv %>% group_by(quarter, product) %>% summarize(count=n()))\ndata<- df[order(df$count,decreasing= TRUE),]\nq<-data[df$quarter == 1,]\nq1to10<-q[1:10,]\nggplot(q1to10, aes(x= product_name, y = count )) + geom_bar(stat='identity')\n\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074637284_r.txt
|
Q:
Send ether from one wallet to another with solidity smart contract?
I want to send ether from one wallet to another… what technique should I use and what is the safest method to do so?
wallet A(msg.sender)=10ether
wallet B=10ether
with the help of smart contract send 'x' ether from A to B.
I tried
where 'x' is the variable ether amount at different times.
=> payable().transfer(msg.value);
Here I am able to send ether in the Remix IDE, where I can provide the msg.value... I want to implement it so that msg.value changes according to the value of x.
A:
It is not possible to remove ether from someone's wallet on their behalf.
You can do something like this with ERC20 tokens, which have an approve functionality, but there is no API for this with native Ether.
To send ether out of a wallet, the owner of the wallet must sign a transaction specifying they want to send a certain amount of ether to a certain address.
Now, you can sign a transaction, and have someone else broadcast it. This is called relaying. But once you sign a transaction that sends ether from your wallet, you can basically consider the ether gone.
If you need to conditionally take ether from a user, having the user stake the ether in an escrow contract may make more sense.
A:
Yes, it is possible to send Ether from one wallet to another using a Solidity smart contract. In fact, this is a common use case for smart contracts, as they can be used to create decentralized applications (DApps) that manage and transfer cryptocurrency assets.
To send Ether from one wallet to another using a Solidity smart contract, you will need to do the following:
Define the contract that will be used to manage the Ether transfer. This contract will include the transfer() function that will be used to move the Ether from one wallet to another.
Compile the contract using a Solidity compiler. This will generate the bytecode that can be deployed to the Ethereum blockchain.
Deploy the contract to the Ethereum blockchain using a tool like Truffle or Remix.
Once the contract is deployed, you can use its transfer() function to move Ether from one wallet to another. This will require the use of a digital wallet or Ethereum client that supports the use of smart contracts.
Keep in mind that using a smart contract to transfer Ether will incur transaction fees, which will be paid in Ether. Additionally, the contract will need to be written and deployed correctly in order to function properly. It is recommended to seek the help of an experienced Solidity developer if you are not familiar with the process.
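To make the second answer concrete, here is a minimal sketch of such a contract, assuming Solidity ^0.8 (the contract and function names are illustrative). Consistent with the first answer, it can only forward ether that wallet A attaches to the call as msg.value; it cannot pull funds out of a wallet:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract EtherForwarder {
    // Wallet A calls this with x ether attached as msg.value;
    // the contract forwards that amount to wallet B (the recipient).
    function sendTo(address payable recipient) external payable {
        require(msg.value > 0, "attach some ether");
        (bool ok, ) = recipient.call{value: msg.value}("");
        require(ok, "transfer failed");
    }
}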
|
Send ether from one wallet to another with solidity smart contract?
|
I want to send ether from one wallet to another… what technique should I use and what is the safest method to do so?
wallet A(msg.sender)=10ether
wallet B=10ether
with the help of smart contract send 'x' ether from A to B.
I tried
where 'x' is the variable ether amount at different times.
=> payable().transfer(msg.value);
Here I am able to send ether in the Remix IDE, where I can provide the msg.value... I want to implement it so that msg.value changes according to the value of x.
|
[
"It is not possible to remove ether from someone's wallet on their behalf.\nYou can do something like this with ERC20 tokens, which have an approve functionality, but there is no API for this with native Ether.\nTo send ether out of a wallet, the owner of the wallet must sign a transaction specifying they want to send a certain amount of ether to a certain address.\nNow, you can sign a transaction, and have someone else broadcast it. This is called relaying. But once you sign a transaction that sends ether from your wallet, you can basically consider the ether gone.\nIf you need to conditionally take ether from a user, having the user stake the ether in an escrow contract may make more sense.\n",
"Yes, it is possible to send Ether from one wallet to another using a Solidity smart contract. In fact, this is a common use case for smart contracts, as they can be used to create decentralized applications (DApps) that manage and transfer cryptocurrency assets.\nTo send Ether from one wallet to another using a Solidity smart contract, you will need to do the following:\n\nDefine the contract that will be used to manage the Ether transfer. This contract will include the transfer() function that will be used to move the Ether from one wallet to another.\n\nCompile the contract using a Solidity compiler. This will generate the bytecode that can be deployed to the Ethereum blockchain.\n\nDeploy the contract to the Ethereum blockchain using a tool like Truffle or Remix.\n\nOnce the contract is deployed, you can use its transfer() function to move Ether from one wallet to another. This will require the use of a digital wallet or Ethereum client that supports the use of smart contracts.\n\n\nKeep in mind that using a smart contract to transfer Ether will incur transaction fees, which will be paid in Ether. Additionally, the contract will need to be written and deployed correctly in order to function properly. It is recommended to seek the help of an experienced Solidity developer if you are not familiar with the process.\n"
] |
[
0,
0
] |
[] |
[] |
[
"ethereum",
"solidity",
"transfer",
"wallet"
] |
stackoverflow_0074661003_ethereum_solidity_transfer_wallet.txt
|
Q:
How to make environment variable in Python
I need help making variables into environment variables in Python, so that I can see the variable by using the 'export' command in Linux. I tested the short script below and I can see the variable using the export command. But the problem is that the two commands below didn't work.
var1 = os.environ['LINE']
print(var1)
Can you guide me how can I get this solved ?
import os
import json
import sys
Name = "a1"
def func():
    var = 'My name is ' + '' + Name
    os.putenv('LINE', var)
    os.system('bash')
func()
var1 = os.environ['LINE']
print(var1)
Output:
export | grep LINE
declare -x LINE="My name is a1"
A:
Try with
os.environ['LINE'] = var
instead of using putenv. Using putenv "bypasses" os.environ, that is, it doesn't update os.environ.
In fact, from the documentation for os.putenv:
Assignments to items in os.environ are automatically translated into corresponding calls to putenv(); however, calls to putenv() don’t update os.environ, so it is actually preferable to assign to items of os.environ. This also applies to getenv() and getenvb(), which respectively use os.environ and os.environb in their implementations."
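Applying that fix to the script from the question, a minimal corrected sketch (same names as in the question) looks like this:
import os

Name = "a1"

def func():
    var = 'My name is ' + Name
    os.environ['LINE'] = var   # updates os.environ and calls putenv under the hood
    os.system('bash')          # the child shell inherits LINE; check with: export | grep LINE

func()
var1 = os.environ['LINE']
print(var1)                    # now readable from the same process too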
|
How to make environment variable in Python
|
I need help making variables into environment variables in Python, so that I can see the variable by using the 'export' command in Linux. I tested the short script below and I can see the variable using the export command. But the problem is that the two commands below didn't work.
var1 = os.environ['LINE']
print(var1)
Can you guide me how can I get this solved ?
import os
import json
import sys
Name = "a1"
def func():
    var = 'My name is ' + '' + Name
    os.putenv('LINE', var)
    os.system('bash')
func()
var1 = os.environ['LINE']
print(var1)
Output:
export | grep LINE
declare -x LINE="My name is a1"
|
[
"Try with\nos.environ['LINE'] = var\n\ninstead of using putenv. Using putenv \"bypasses\" os.environ, that is, it doesn't update os.environ.\nIn fact, from the documentation for os.putenv:\n\nAssignments to items in os.environ are automatically translated into corresponding calls to putenv(); however, calls to putenv() don’t update os.environ, so it is actually preferable to assign to items of os.environ. This also applies to getenv() and getenvb(), which respectively use os.environ and os.environb in their implementations.\"\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074664627_python_python_3.x.txt
|
Q:
How do I match a value with particular coordinates for multiple months?
I'm using satellite data to determine Net Primary Production (NPP) for over 100 sample locations. For every location, I need to obtain NPP values for every month (January- December) for a ten-year span (2007-2017). I need to find a way to automate this with code.
This is the structure of my data:
'''
structure(list(Month = c("January-", "January-", "January-",
"January-", "January-"), long = c(-179.916672, -179.75, -179.583328,
-179.416672, -179.25), lat = c(39.916668, 39.916668, 39.916668,
39.916668, 39.916668), npp = c(297.813, 304.971, 292.946, 296.196,
285.804)), row.names = c(NA, -5L), class = c("tbl_df", "tbl",
"data.frame"))
'''
The coordinates for the first sample are 14.58, 168.03 and there is an exact match for every month between January and December. I need to find these values, but the dataset is very large. If anyone could help me in anyway to help automate this process, I would be so grateful.
A:
From what I understand, your example data is insufficient.
I have therefore created a DF with 3 different example locations and corresponding random lat and long values. I have created 1000 random dates in the timeframe mentioned and 1000 random npp values - see below. (1000 to avoid too many NAs in the table below.)
This DF assumes that each location delivers npp values on the same day.
After deriving year and month, npp is summarized by location and shown in a wider format. That is my understanding of "every month in a ten-year span".
library(lubridate)
library(tidyverse)
df |>
mutate(m = month(date, label = T)) |>
mutate(y = year(date)) |>
group_by(y, m, location) |>
summarise(sum= sum(npp)) |>
pivot_wider(names_from = m, values_from = sum)
#> `summarise()` has grouped output by 'y', 'm'. You can override using the
#> `.groups` argument.
#> # A tibble: 33 × 14
#> # Groups: y [11]
#> y location Jan Feb Mär Apr Mai Jun Jul Aug Sep Okt
#> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 2007 A 2544. 1260. 2303. 1665. 1952. 2440. 2842. 2323. 1744. 2827.
#> 2 2007 B 2412. 1473. 2126. 1484. 1953. 2726. 3251. 2249. 1924. 2598.
#> 3 2007 C 2212. 1460. 2233. 1816. 2085. 2604. 2871. 1996. 1960. 2714.
#> 4 2008 A 2397. 2141. 2352. 3375. 2045. 1757. 1476. 2813. 3169. 3593.
#> 5 2008 B 2562. 2314. 2299. 3634. 1879. 1544. 1568. 2805. 3101. 3712.
#> 6 2008 C 2487. 2269. 2159. 3740. 1727. 1631. 1462. 3048. 2872. 3742.
#> 7 2009 A 1538. 1241. 2434. 1916. 2757. 1937. 1720. 1335. 2600. 2809.
#> 8 2009 B 1643. 1312. 2410. 2170. 2817. 1973. 1566. 1253. 2720. 2758.
#> 9 2009 C 1549. 1231. 2331. 2490. 2766. 1886. 1472. 1354. 2810. 2727.
#> 10 2010 A 2732. 2463. 1220. 846. 2538. 4352. 948. 3826. 3062. 2423.
#> # … with 23 more rows, and 2 more variables: Nov <dbl>, Dez <dbl>
Data
set.seed(123)
# make 1000 dates
date <- sample(seq(as_date('2007/01/01'), as_date('2017/01/01'), by="day"), 1000)
location <- rep(LETTERS[1:3], 1000) # 3 locations
long <- rep(runif(3, -180, -120), 1000) # with 3 longs
lat <- rep(runif(3, 30, 40), 1000) # and with 3 lats
npp <- runif(1000, 200, 350) # make 1000 npps
# make DF with repetition of 3 for each location
df <- data.frame(location, date = rep(date, each = 3), long, lat, npp)
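Coming back to the original question of matching one sample's coordinates: a small sketch of how one could pull the series for a single site from data shaped like the question's (14.58 / 168.03 are the coordinates mentioned there; dplyr::near guards against floating-point mismatch in the comparison):
library(dplyr)

# keep only the rows whose coordinates match the first sample location
site <- df |>
  filter(near(lat, 14.58), near(long, 168.03))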
|
How do I match a value with particular coordinates for multiple months?
|
I'm using satellite data to determine Net Primary Production (NPP) for over 100 sample locations. For every location, I need to obtain NPP values for every month (January- December) for a ten-year span (2007-2017). I need to find a way to automate this with code.
This is the structure of my data:
'''
structure(list(Month = c("January-", "January-", "January-",
"January-", "January-"), long = c(-179.916672, -179.75, -179.583328,
-179.416672, -179.25), lat = c(39.916668, 39.916668, 39.916668,
39.916668, 39.916668), npp = c(297.813, 304.971, 292.946, 296.196,
285.804)), row.names = c(NA, -5L), class = c("tbl_df", "tbl",
"data.frame"))
'''
The coordinates for the first sample are 14.58, 168.03 and there is an exact match for every month between January and December. I need to find these values, but the dataset is very large. If anyone could help me in anyway to help automate this process, I would be so grateful.
|
[
"For what I understand, your example data is insufficient.\nI therefor have created a DF with 3 different example locations and corresponding random lat and long. I have created 1000 random dates in the timeframe mentioned and 1000 random app - see below. (1000 for avoiding too many NAs in the table below)\nThis DF assumes that each location delivers app-values at the same day.\nAfter making some shortcuts for year and month data are summarized app by location and shown in a wider format. That is my understanding of every month in ten-year span\nlibrary(lubridate)\nlibrary(tidyverse)\n\ndf |> \n mutate(m = month(date, label = T)) |> \n mutate(y = year(date)) |> \n group_by(y, m, location) |> \n summarise(sum= sum(npp)) |> \n pivot_wider(names_from = m, values_from = sum)\n#> `summarise()` has grouped output by 'y', 'm'. You can override using the\n#> `.groups` argument.\n#> # A tibble: 33 × 14\n#> # Groups: y [11]\n#> y location Jan Feb Mär Apr Mai Jun Jul Aug Sep Okt\n#> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n#> 1 2007 A 2544. 1260. 2303. 1665. 1952. 2440. 2842. 2323. 1744. 2827.\n#> 2 2007 B 2412. 1473. 2126. 1484. 1953. 2726. 3251. 2249. 1924. 2598.\n#> 3 2007 C 2212. 1460. 2233. 1816. 2085. 2604. 2871. 1996. 1960. 2714.\n#> 4 2008 A 2397. 2141. 2352. 3375. 2045. 1757. 1476. 2813. 3169. 3593.\n#> 5 2008 B 2562. 2314. 2299. 3634. 1879. 1544. 1568. 2805. 3101. 3712.\n#> 6 2008 C 2487. 2269. 2159. 3740. 1727. 1631. 1462. 3048. 2872. 3742.\n#> 7 2009 A 1538. 1241. 2434. 1916. 2757. 1937. 1720. 1335. 2600. 2809.\n#> 8 2009 B 1643. 1312. 2410. 2170. 2817. 1973. 1566. 1253. 2720. 2758.\n#> 9 2009 C 1549. 1231. 2331. 2490. 2766. 1886. 1472. 1354. 2810. 2727.\n#> 10 2010 A 2732. 2463. 1220. 846. 2538. 4352. 948. 3826. 3062. 2423.\n#> # … with 23 more rows, and 2 more variables: Nov <dbl>, Dez <dbl>\n\n\nData\nset.seed(123)\n\n# make 1000 dates\ndate <- sample(seq(as_date('2007/01/01'), as_date('2017/01/01'), by=\"day\"), 1000)\n\nlocation <- rep(LETTERS[1:3], 1000) # 3 locations\nlong <- rep(runif(3, -180, -120), 1000) # whith 3 long`s`\nlat <- rep(runif(3, 30, 40), 1000) # and with 3 lat`s`\nnpp <- runif(1000, 200, 350) # make 1000 npp`s`\n\n# make DF with repetition of 3 for each location\ndf <- data.frame(location,date = rep(date, each =3), long, lat, date, npp)\n\n"
] |
[
0
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074664229_r.txt
|
Q:
error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
I got this error while running 'git push'.
However, I have not been able to figure out how to solve the problem from this message.
A:
You can force git to use HTTP version 1.1
git config --global http.version HTTP/1.1
https://gist.github.com/daofresh/0a95772d582cafb202142ff7871da2fc
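As a small variant (an addition, not from the linked gist), git's -c flag can scope the same setting to a single invocation, leaving the global config untouched:
git -c http.version=HTTP/1.1 push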
A:
You might be pushing data larger than the postBuffer size.
You can try increasing post buffer size using
git config --global http.postBuffer 157286400
For reference: https://confluence.atlassian.com/bitbucketserverkb/git-push-fails-fatal-the-remote-end-hung-up-unexpectedly-779171796.html
A:
Simple solution (revert to HTTP/2 afterwards):
git config --global http.version HTTP/1.1
git push
git config --global http.version HTTP/2
A:
XCode 11.4.1
Increasing the git buffer size worked for me
git config --global http.postBuffer 524288000
A:
Working Solution:
First change the HTTP version to 1.1, then push, and once done change back to HTTP/2.
$ git config --global http.version HTTP/1.1
After that the push was OK, and I changed the HTTP version back to 2:
$ git config --global http.version HTTP/2
A:
git config http.postBuffer 524288000
This is the latest, should solve your issue
A:
I followed most of the answers but they did not solve my problem.
In my case, the answer was very simple:
I encountered this error when pushing to Git through an ADSL broadband Wi-Fi network with low signal strength, low stability, and low speed.
Then,
I was able to push very successfully when I pushed to Git through a fibre broadband Wi-Fi network with greater signal strength, greater stability, and higher speed.
Error:
Push failed
Enumerating objects: 44, done. Delta compression using up to 12 threads RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8) the remote end hung up unexpectedly Total 30 (delta 18), reused 0 (delta 0) the remote hung up unexpectedly
A:
It was not working for me, but worked after downgrading the HTTP version from 2 to 1.1:
$ git config --global http.version HTTP/1.1
After this change, pushing was successful and I changed the HTTP version back to 2:
$ git config --global http.version HTTP/2
A:
In most cases git config http.postBuffer 524288000 should work.
In my case, I was pushing a large number of changes (I changed a lot of packages thus there were many lines updated) in my yarn.lock/package-lock.json file. Since it is usually not required, removing it made the error go away.
So you can try this too if you are working with JavaScript
A:
It sounds like either the remote server you're using or some middlebox (e.g., a proxy) is not speaking the HTTP/2 protocol correctly. You can either fix that by asking the owner of that remote server or middlebox to fix their server, or you can force the use of HTTP/1.1.
If you want to force the use of HTTP/1.1, you can set the http.version configuration option to HTTP/1.1. That can also be set on a per-URL basis as described in the http.<url>.* section in the git-config manual page.
A:
In my case, with Bitbucket behind nginx, disabling proxy request buffering was the answer:
server {
listen 443 ssl http2 default_server;
...
proxy_request_buffering off;
# These are also relevant:
proxy_read_timeout 600;
client_max_body_size 0;
A:
In most cases, increasing the buffer size will work.
git config http.postBuffer 524288000
It worked for me.
Use of
git config --global http.version HTTP/1.1
should be kept as a last option.
Using a gitbash terminal on a windows machine (if this info helps you in any way).
A:
In my case I had to reset the origin to ssh instead of http/https:
git remote set-url origin git@<host>:<user>/<repo>.git
To check your origins you can use:
git remote -v
A:
For me, this worked:
git checkout --orphan newBranch
git add -A # Add all files and commit them
git commit -am "Clean Repo"
git branch -D master # Deletes the master branch
git branch -m master # Rename the current branch to master
git push -f origin master # Force push master branch to github
Thanks to: https://panjeh.medium.com/cleaning-up-git-github-repository-without-deleting-git-directory-c86b7415b51b
However, my issue was slightly different, with a "packages already packed" message alongside the RPC HTTP/2 "stream not closed cleanly" message
A:
For me, just this helped:
server {
listen 443 ssl http2 default_server;
...
location / {
...
proxy_request_buffering off;
...
}
}
A:
I went through a similar situation. I tried:
git config --global http.version HTTP/1.1
git config --global http.postBuffer 157286400
git config --global http.postBuffer 524288000
even,
git config --global core.compression 0
but nothing changed. I had two folders with this error: one 10 MB in size and one 65 MB.
Finally,
I tried with a fibre connection.
So yeah, try a different, higher-speed connection; it will probably work.
Good Luck!
A:
Ironically, for me it turned out to be a bad internet connection - I tried everything above and nothing worked; then I did a speed test and found I had 100+ Mb download but only 0.x Mb upload at the time, due to some Wi-Fi issues. After I fixed it the problem disappeared.
A:
One of the most popular answers is:
git config --global http.postBuffer 157286400
Don't do this blindly, as raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes (from the git documentation).
Check if you've files with size >100 MB first. If yes, then there is a better-suited solution for your problem.
Solution: Git-LFS as it is intended for versioning large files.
Git Large File Storage (LFS) replaces large files such as audio
samples, videos, datasets, and graphics with text pointers inside Git,
while storing the file contents on a remote server like GitHub.com or
GitHub Enterprise.
You can look at this good tutorial on git-lfs which will answer most of your follow-up questions.
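As an illustrative sketch of that workflow (the file pattern and branch name are assumptions), a typical Git LFS setup looks like this:
git lfs install                      # one-time setup per machine
git lfs track "*.psd"                # start tracking the large file type
git add .gitattributes
git add path/to/large-file.psd
git commit -m "Track large assets with LFS"
git push origin main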
A:
Don't forget to add an SSH key to your Github account. That was causing the error for me.
A:
Using a different internet connection solved the problem for me; I switched from my main Wi-Fi to my phone's hotspot and it worked.
A:
I tried all the approaches but they didn't work.
It turned out to be a network problem: just disconnect and then reconnect your Wi-Fi and it should work.
This is the error I was getting.
A:
git config --global http.postBuffer 524288000
You can just increase your buffer size; it worked for me.
A:
For me this was caused by a forgotten return 444; in my nginx config. The connection termination caused this misleading error message under HTTP 2.0
A:
Following the advice of some people here:
git config http.postBuffer 524288000
git push
Results to an error:
remote: error: See http://git.io/iEPt8g for more information.
remote: error: File public/img/layout/group-photo.psd is 184.91 MB; this exceeds GitHub's file size limit of 100.00 MB
remote: error: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.
So this is more of a file issue than a network connectivity issue in my case.
Move the large file out of the project and proceed to commit and push the whole thing.
A:
I thought it was my internet, so I tried with a better connection, but the error persisted, until I found this solution:
Basically, I had to copy the files into another branch, delete the old one, and rename the current one to clean the repo.
git checkout --orphan newBranch
git add -A # Add all files and commit them
git commit -am "Clean Repo"
git branch -D master # Deletes the master branch
git branch -m master # Rename the current branch to master
git push -f origin master # Force push master branch to github
A:
I live rurally and have mobile broadband based on a very low 4G signal; I get two bars on a good day. I was pushing several files amounting to only 39 MB, which is well below GitHub's max file size, and I have also pushed much bigger commits on the same repo from this location, so it did not make sense that the file size caused the problem for me. I tried everything mentioned here; changing to HTTP/1.1 and changing postBuffer did not help.
After several hours of head scratching, I restarted my router and was able to push the commit to github.
Hopefully this can help someone out there who also has a terrible internet connection.
A:
If you are pushing large files you might get this error; just use Git Large File Storage
A:
It could be due to low signal strength. I have pushed heavy files to open-source repositories too and haven't encountered this error. More than the buffer size, it depends largely on the signal strength. You could try pushing 2 or 3 more times or restarting your router, and if it still doesn't work, try the following command:
git config http.postBuffer 524288000
git push
A:
If none of this helps, maybe you could try using ssh to connect to your git repository. That helped me.
If you are using Bitbucket you can add your ssh key to the repository settings and that way you will gain ssh access.
On Github I think you should have ssh access by default. Try connecting to the repository using ssh instead of https, you can do that by changing the remote url for your git.
A:
The easy solution is to just change your internet network temporarily, for example use your mobile hotspot, and after you push you can go back to your current network.
This problem can happen with pull, push, or even clone commands, and the reasons could be network-related settings such as packet size, buffer size, and so on.
A:
You really might be pushing large data. I was having the same error; then I used git LFS and it worked.
Just untrack that specific huge file before committing, using the following command:
git rm --cached "<file_name>"
Then push the remaining files and use git LFS to upload the large file. To learn how to upload using git LFS, refer to this.
A:
I have also faced this. I just switched to another mobile hotspot and it worked for me.
A:
I solved this annoying problem by changing my Wi-Fi DNS on my work laptop.
|
error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
|
error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
I got this error while running 'git push'.
However, I have not been able to figure out how to solve the problem from this message.
|
[
"You can force git using http version 1.1 \ngit config --global http.version HTTP/1.1\n\nhttps://gist.github.com/daofresh/0a95772d582cafb202142ff7871da2fc\n",
"You might be pushing data larger than the postBuffer size.\nYou can try increasing post buffer size using\ngit config --global http.postBuffer 157286400\n\nFor reference: https://confluence.atlassian.com/bitbucketserverkb/git-push-fails-fatal-the-remote-end-hung-up-unexpectedly-779171796.html\n",
"Simple solution (reverts to http 2 after) :\ngit config --global http.version HTTP/1.1\ngit push \ngit config --global http.version HTTP/2\n\n",
"XCode 11.4.1 \nIncreasing the git buffer size worked for me\ngit config --global http.postBuffer 524288000\n\n",
"Working Solution:\nFirst change HTTP version to 1.1 and then push and once done change back to HTTP2\n$ git config --global http.version HTTP/1.1\nAfter it push was ok and I have changed HTTP version to 2 again:\n$ git config --global http.version HTTP/2\n\n",
"git config http.postBuffer 524288000\n\nThis is the latest, should solve your issue\n",
"I followed most of the answers but not solved my problem.\nIn my case, the answer is very simple\nI encountered this error when pushing GIT through an ADSL Broadband Wi-Fi network with low signal strength, low stability, and low speed.\nThen,\nI was able to push it very successfully when I pushed it into the GIT through a Fibre Broadband Wi-Fi network with greater signal strength, greater stability, and higher speed.\nError:\n\nPush failed\nEnumerating objects: 44, done. Delta compression using up to 12 threads RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8) the remote end hung up unexpectedly Total 30 (delta 18), reused 0 (delta 0) the remote hung up unexpectedly\n\n\n",
"It's was not working for me. But worked after downgrading version of HTTP from 2 to 1.1:\n$ git config --global http.version HTTP/1.1\n\nAfter this change, pushing was successful and I have changed HTTP version to 2 again:\n$ git config --global http.version HTTP/2\n\n",
"In most cases git config http.postBuffer 524288000 should work.\nIn my case, I was pushing a large number of changes (I changed a lot of packages thus there were many lines updated) in my yarn.lock/package-lock.json file. Since it is usually not required, removing it made the error go away.\nSo you can try this too if you are working with Javascript\n",
"It sounds like either the remote server you're using or some middlebox (e.g., a proxy) is not speaking the HTTP/2 protocol correctly. You can either fix that by asking the owner of that remote server or middlebox to fix their server, or you can force the use of HTTP/1.1.\nIf you want to force the use of HTTP/1.1, you can set the http.version configuration option to HTTP/1.1. That can also be set on a per-URL basis as described in the http.<url>.* section in the git-config manual page.\n",
"For my case with the bitbucket behind nginx, disabling proxy request buffering was the answer:\nserver {\n listen 443 ssl http2 default_server;\n ...\n proxy_request_buffering off;\n\n # These are also relevant:\n proxy_read_timeout 600;\n client_max_body_size 0;\n\n",
"In most cases, increasing the buffer size will work.\ngit config http.postBuffer 524288000\n\nIt worked for me.\nUse of\ngit config --global http.version HTTP/1.1\n\nshould be kept as a last option.\nUsing a gitbash terminal on a windows machine (if this info helps you in any way).\n",
"In my case I had to reset the origin to ssh instead of http/https:\ngit remote set-url origin [email protected]\n\nTo check your origins you can use:\ngit remote -v\n\n",
"To me this is worked:\ngit checkout --orphan newBranch\ngit add -A # Add all files and commit them\ngit commit -am \"Clean Repo\"\ngit branch -D master # Deletes the master branch\ngit branch -m master # Rename the current branch to master\ngit push -f origin master # Force push master branch to github\n\nThanks to: https://panjeh.medium.com/cleaning-up-git-github-repository-without-deleting-git-directory-c86b7415b51b\nHowever my issue was slighty different, with a \"packages already packed\" info with the RPC::HTTP/2 stream not closed cleanly message\n",
"for me helped just this\nserver {\n listen 443 ssl http2 default_server;\n ...\n location / {\n ...\n proxy_request_buffering off;\n ...\n }\n}\n\n",
"I went through a similar situation. I tried;\ngit config --global http.version HTTP/1.1 \ngit config --global http.postBuffer 157286400\ngit config --global http.postBuffer 524288000\n\neven,\ngit config --global core.compression 0 \n\n\nbut, nothing changed. I had two folders with this error. one with 10MB size and one with 65MB.\nfinally.\nI tried with a Fibre connection.\n\nSo yeah. try with a different, higher speed connection. probably it will work.\nGood Luck!\n",
"Ironically, for me it turned out to be bad internet connection - I tried everything above, nothing worked, then I did a speed test and found I had 100+Mb download but only 0.x Mb upload at the time, due to some wifi issues. After I fixed it the problem disappeared.\n",
"One of the most popular answers is:\ngit config --global http.postBuffer 157286400\n\nDon't do this blindly as raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes(from the git documentation).\nCheck if you've files with size >100 MB first. If yes, then there is a better-suited solution for your problem.\nSolution: Git-LFS as it is intended for versioning large files.\n\nGit Large File Storage (LFS) replaces large files such as audio\nsamples, videos, datasets, and graphics with text pointers inside Git,\nwhile storing the file contents on a remote server like GitHub.com or\nGitHub Enterprise.\n\nYou can look at this good tutorial on git-lfs which will answer most of your follow-up questions.\n",
"Don't forget to add an SSH key to your Github account. That was causing the error for me.\n",
"Using different internet access solved the problem for me, I switched from my main wifi and connect to my phone and it worked.\n",
"I've tried all the approach but didn't work.\nTurns out it was my network problem, just disconnect and then connect your wifi and it will work.\nThis is the error I was getting.\n\n",
"git config --global http.postBuffer 524288000\nyou can just increase your buffer size it worked for me\n",
"For me this was caused by a forgotten return 444; in my nginx config. The connection termination caused this misleading error message under HTTP 2.0\n",
"Following the advice of some people here:\ngit config http.postBuffer 524288000\ngit push\n\nResults to an error:\nremote: error: See http://git.io/iEPt8g for more information.\nremote: error: File public/img/layout/group-photo.psd is 184.91 MB; this exceeds GitHub's file size limit of 100.00 MB\nremote: error: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.\n\nSo this is more of a file issue rather than a network connectivity issue in my case.\nMove the large file out of the project and proceed to commit and push the whole thing.\n",
"For me I thought that was my internet so I tried with a better internet but the error persists. Until I have found this solution:\nBasicaly I had to copy into another branch the files and delete the other and rename the current one. To clean the repo.\ngit checkout --orphan newBranch\ngit add -A # Add all files and commit them\ngit commit -am \"Clean Repo\"\ngit branch -D master # Deletes the master branch\ngit branch -m master # Rename the current branch to master\ngit push -f origin master # Force push master branch to github\n\n",
"I live rurally and have mobile broadband, that is based on a very low 4g signal, I get two bars of signal on a good day. I was pushing several files amounting to only 39mb, which is well below github's max file size, I have also pushed much bigger commits on the same repo from this location, so it did not make sense that the file size caused the problems for me. I tried everything mentioned here, changing to HTTP1, and changing postbuffer did not help.\nAfter several hours of head scratching, I restarted my router and was able to push the commit to github.\nHopefully this can help someone out there that also has terrible a internet connection.\n",
"If you are pushing large files you might get this error just use Git Large File Storage\n",
"It could be due to low signal strength. I have pushed heavy files in the open source repositories too and haven't encountered this error. More than the buffer size, it depends largely on the signal strength. You could try pushing it 2 or 3 times more or restart your router, and if it still doesn't work, try the following command:\ngit config http.postBuffer 524288000\ngit push\n\n",
"If none of this helps, maybe you could try using ssh to connect to your git repository. That helped me.\nIf you are using Bitbucket you can add your ssh key to the repository settings and that way you will gain ssh access.\nOn Github I think you should have ssh access by default. Try connecting to the repository using ssh instead of https, you can do that by changing the remote url for your git.\n",
"The easy solution is just Change your internet network temporary for example use your mobile hotspot, and after you did push, you can be back to your current network.\nThis problem would be happened in pull, push or even clone commands. And the reasons could be your network setting related packet size setting, buffer size and ...\n",
"You really might be pushing data having large size. I was having the same error then I preferred using git LFS and it worked.\nJust untrack that specific file (file with huge size) before commit. Use following command.\ngit rm --cached \"<file_name>\n\nThen push remaining files and then use git LFS to upload the file with large size. To know how to upload using git LFS refer this.\n",
"I have also faced this. I just switched to another mobile hotspot and it worked for me .\n",
"I solved this annoyng problem by changing my wi-fi DNS on my work laptop.\n"
] |
[
161,
95,
68,
50,
37,
26,
18,
11,
6,
4,
4,
4,
3,
3,
2,
2,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[
"If your error is related to trying to push large file (I had that error message in my case), run:\ngit filter-branch -f --index-filter 'git rm --cached --ignore-unmatch {your full path file name}'\n\nhttps://medium.com/@marcosantonocito/fixing-the-gh001-large-files-detected-you-may-want-to-try-git-large-file-storage-43336b983272\n",
"Also check if you maybe using a VPN,\nI got the error while using VPN, I decided to turn my VPN off and try again,\nthen it Worked\n",
"For me this query works :\ngit push --set-upstream origin main\n",
"Switch to Mobile Internet or Change the Internet Connection.\nThis is happened some time because of network issue.\n",
"Disconnect from your VPN and try again. That's what solved it for me.\n",
"The only issue in this case is Bad Internet Connection and nothing else. I fixed it by switching to better internet connection.\n\n",
"In my case, I changed my password on the server (Gitlab) but not in my local git credentials.\n"
] |
[
-1,
-1,
-1,
-1,
-2,
-3,
-5
] |
[
"git",
"push"
] |
stackoverflow_0059282476_git_push.txt
|
Q:
error sum data table from sql command (format)
I get an error when my column is empty.
double tot_main, tot_oper, gift, sum_tot, sum_tot_gift, amount_gift_new, amount_cut;
// sum MainOper
SqlCommand check_main = new SqlCommand("select Sum(amount_Gift) from MainOper where Emp_no='" + TextBox1.Text + "' ", con);
SqlDataAdapter sd1 = new SqlDataAdapter(check_main);
DataTable dt1 = new DataTable();
sd1.Fill(dt1);
// sum Oper
SqlCommand check_Oper = new SqlCommand("select Sum(amount_Gift) from Oper where Emp_no='" + TextBox1.Text + "' ", con);
SqlDataAdapter sd2 = new SqlDataAdapter(check_Oper);
DataTable dt2 = new DataTable();
sd2.Fill(dt2);
// variable
gift = double.Parse(TextBox8.Text);
tot_main = double.Parse(dt1.Rows[0][0].ToString()); // note:when empty or 0 cat get sum_tot
tot_oper = double.Parse(dt2.Rows[0][0].ToString()); // note:when empty or 0 cant get sum_tot
// variable
sum_tot = tot_oper + tot_main; //when have value in tot_main & tot_oper is done - need every table have number
sum_tot_gift = sum_tot + gift;
amount_gift_new = sum_tot_gift - 1000;
amount_cut = gift - amount_gift_new;
else if (amount_cut <= 1000)
{
SqlCommand co = new SqlCommand("exec gifter '" + Emp_no2 + "','" + Emp_name2 + "','" + Emp_dept2 + "','" + Emp_poss2 + "', '" + Ref_Gift2 + "','" + Run_Gift2 + "','" + Date_Gift2 + "','" + amount_cut + "', '" + Type_Gift2 + "','" + Month_num2 + "'", con);
co.ExecuteNonQuery();
con.Close();
Label13.Text = "successfuly";
GetProductionList();
}
A:
The issue is really two issues:
Rows can be returned but contain nulls, so the sum is a null value.
No rows returned, then NO OBJECT is returned!!!
If you use a datatable, then sum() would in theory always return a value even when the criteria fail (but the sum() result can still be "db null". So BOTH dbnull and null/no object/nothing can be returned when using ExecuteScalar like I used below).
However, using ExecuteScalar still saves us a LOT of code, and we do not even have to define any datatable(s).
So, I just use a helper routine (MyNz).
With MyNz, you can have a 2nd default value of "" for strings or whatever you need/want for a dbnull or nothing/null object.
So, this code should help:
double sum_tot, sum_tot_gift, amount_gift_new, amount_cut, gift;
// sum MainOper
string strSQL =
"select Sum(amount_Gift) from MainOper where Emp_no = @Emp_no";
using (SqlCommand cmdSQL = new SqlCommand(strSQL, con))
{
con.Open();
cmdSQL.Parameters.Add("@Emp_no", SqlDbType.NVarChar).Value = TextBox1.Text;
double tot_main = (double)MyNz(cmdSQL.ExecuteScalar(),0d);
cmdSQL.CommandText =
"select Sum(amount_Gift) from Oper where Emp_no = @Emp_No";
double tot_oper = (double)MyNz(cmdSQL.ExecuteScalar(),0d);
gift = double.Parse(TextBox8.Text);
sum_tot = tot_oper + tot_main; //when have value in tot_main & tot_oper is done - need every table have number
sum_tot_gift = sum_tot + gift;
amount_gift_new = sum_tot_gift - 1000;
amount_cut = gift - amount_gift_new;
}
And since that test for null sum value, or NOTHING returned, then I have this helper routine:
object MyNz(object value, object Default)
{
if (value == DBNull.Value || value == null)
return Default;
else
return value;
}
Make sure the "default" is of the correct type.
So, for literal values, use this:
0d; // double
0f; // float
0m; // decimal
0; // int
Note how we used a parameter, but we do NOT have to change it for the 2 queries, and thus they are the same. And above assumes that the parameter is of type string - but if it is not, then change the sqldbType to the correct data type.
And note how we did not even have to bother with data tables.
So, EVEN with parameters, the code is safe from sql injection, but MORE important is still less code, easy to read code, and more maintainable.
You don't show where/when/how the connection was created here, but it should be the topmost using block. We open the connection, and then let the system/using block automatic close that connection for us.
A:
error message > System.FormatException: 'Input string was not in a correct format.'
double tot_main, tot_oper, gift, sum_tot, sum_tot_gift, amount_gift_new, amount_cut;
// sum Oper
SqlCommand check_Oper = new SqlCommand("select Sum(amount_Gift) from Oper where Emp_no='" + TextBox1.Text + "' ", con);
SqlDataAdapter sd2 = new SqlDataAdapter(check_Oper);
DataTable dt2 = new DataTable();
sd2.Fill(dt2);
// variable
gift = double.Parse(TextBox8.Text);
tot_oper = double.Parse(dt2.Rows[0][0].ToString());
sum_tot_gift = tot_oper + gift;
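As an alternative defensive sketch (a standalone fragment using the same dt2 table as above), double.TryParse together with a DBNull check avoids the FormatException when the cell is empty:
// Guard both DBNull and empty strings before parsing.
object cell = dt2.Rows[0][0];
double tot_oper = 0d;
if (cell != DBNull.Value && cell != null)
{
    double.TryParse(cell.ToString(), out tot_oper); // tot_oper stays 0 on failure
}
sum_tot_gift = tot_oper + gift;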
|
error sum data table from sql command (format)
|
I get an error when my column is empty.
double tot_main, tot_oper, gift, sum_tot, sum_tot_gift, amount_gift_new, amount_cut;
// sum MainOper
SqlCommand check_main = new SqlCommand("select Sum(amount_Gift) from MainOper where Emp_no='" + TextBox1.Text + "' ", con);
SqlDataAdapter sd1 = new SqlDataAdapter(check_main);
DataTable dt1 = new DataTable();
sd1.Fill(dt1);
// sum Oper
SqlCommand check_Oper = new SqlCommand("select Sum(amount_Gift) from Oper where Emp_no='" + TextBox1.Text + "' ", con);
SqlDataAdapter sd2 = new SqlDataAdapter(check_Oper);
DataTable dt2 = new DataTable();
sd2.Fill(dt2);
// variable
gift = double.Parse(TextBox8.Text);
tot_main = double.Parse(dt1.Rows[0][0].ToString()); // note:when empty or 0 cat get sum_tot
tot_oper = double.Parse(dt2.Rows[0][0].ToString()); // note:when empty or 0 cant get sum_tot
// variable
sum_tot = tot_oper + tot_main; //when have value in tot_main & tot_oper is done - need every table have number
sum_tot_gift = sum_tot + gift;
amount_gift_new = sum_tot_gift - 1000;
amount_cut = gift - amount_gift_new;
else if (amount_cut <= 1000)
{
SqlCommand co = new SqlCommand("exec gifter '" + Emp_no2 + "','" + Emp_name2 + "','" + Emp_dept2 + "','" + Emp_poss2 + "', '" + Ref_Gift2 + "','" + Run_Gift2 + "','" + Date_Gift2 + "','" + amount_cut + "', '" + Type_Gift2 + "','" + Month_num2 + "'", con);
co.ExecuteNonQuery();
con.Close();
Label13.Text = "successfuly";
GetProductionList();
}
|
[
"The issue is two issues:\nRows can be returned, but have nulls, and sum = null value.\nNo rows returned, then NO OBJECT is returned!!!\nIf you use a datatable, then sum() would in theory always return a value even when criteria fails (but, the sum() result can still be \"db null\". So BOTH dbnull, and null/no object/noting can be returned when using Execute scaler like I used below).\nHowever, using ExecuteScaler still saves us a LOT of code, and not even having to define some datatable(s).\nso, I just use a helper routine (MyNz).\nSo with MyNz, you can have the 2nd default value of \"\" for strings or whatever you need/want for a dbnull or nothing/null object.\nSo, this code should help:\n double sum_tot, sum_tot_gift, amount_gift_new, amount_cut, gift;\n\n // sum MainOper\n string strSQL =\n \"select Sum(amount_Gift) from MainOper where Emp_no = @Emp_no\";\n\n using (SqlCommand cmdSQL = new SqlCommand(strSQL, con))\n {\n con.Open();\n\n cmdSQL.Parameters.Add(\"@Emp_no\", SqlDbType.NVarChar).Value = textbox1.Text;\n double tot_main = (double)MyNz(cmdSQL.ExecuteScalar(),0d);\n\n cmdSQL.CommandText =\n \"select Sum(amount_Gift) from Oper where Emp_no = @Emp_No\";\n double tot_oper = (double)MyNz(cmdSQL.ExecuteScalar(),0d);\n\n gift = double.Parse(TextBox8.Text);\n\n sum_tot = tot_oper + tot_main; //when have value in tot_main & tot_oper is done - need every table have number\n sum_tot_gift = sum_tot + gift;\n amount_gift_new = sum_tot_gift - 1000;\n amount_cut = gift - amount_gift_new;\n }\n\nAnd since that test for null sum value, or NOTHING returned, then I have this helper routine:\n object MyNz(object value, object Default)\n {\n if (value == DBNull.Value || value == null)\n return Default;\n else\n return value;\n }\n\nMake sure the \"default\" is of the correct type.\nSo, for literal values, use this:\n0d; // double\n0f; // float\n0m; // decimal\n0; // int\n\nNote how we used a parameter, but we do NOT have to change it for the 2 queries, and thus they are the same. And above assumes that the parameter is of type string - but if it is not, then change the sqldbType to the correct data type.\nAnd note how we did not even have to bother with data tables.\nSo, EVEN with parameters, the code is safe from sql injection, but MORE important is still less code, easy to read code, and more maintainable.\nYou don't show where/when/how the connection was created here, but it should be the topmost using block. We open the connection, and then let the system/using block automatic close that connection for us.\n",
"error message > System.FormatException: 'Input string was not in a correct format.'\ndouble tot_main, tot_oper, gift, sum_tot, sum_tot_gift, amount_gift_new, amount_cut;\n\n // sum Oper\n SqlCommand check_Oper = new SqlCommand(\"select Sum(amount_Gift) from Oper where Emp_no='\" + TextBox1.Text + \"' \", con);\n SqlDataAdapter sd2 = new SqlDataAdapter(check_Oper);\n DataTable dt2 = new DataTable();\n sd2.Fill(dt2);\n // variable\n gift = double.Parse(TextBox8.Text);\n tot_oper = double.Parse(dt2.Rows[0][0].ToString());\n sum_tot_gift = tot_oper + gift;\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"ado.net",
"c#"
] |
stackoverflow_0074661960_ado.net_c#.txt
|
Q:
How can I manage translated model fields in GORM?
I need to translate an arbitrary number of fields from a go-gorm model to an arbitrary number of languages.
I'm studying the best database/model layout to do that.
Options that I have in mind:
Keep translated fields as model fields -> Every new language will involve model modification, and I will also create a potentially wide table depending on the number of languages and fields to translate. Querying a given language might not be easy. This seems the worst option.
type Product {
....
EN_description string
DE_description string
....
}
Create a ProductTranslation Relation for each field to translate
type Product {
....
Descriptions []ProductDescTranslation
....
}
type ProductDescTranslation {
Language string
ProductID uint
Value string
}
Create a single ProductTranslation Relation -> This one seems more generic than number 2 since I don't need a different relation for each field, but it also seems to include more complexity since it's not obvious how many translated fields the Product model has unless you run a query against that table.
type Product {
....
Translations []ProductTranslation
....
}
type ProductTranslation {
Language string
ProductID uint
Field string
Value string
}
Does anyone have better ideas? Does anyone know about any package already coded for that purpose? I did not find one.
A:
A possible solution could be to use a single ProductTranslation model, where each translation is stored as a key-value pair. This allows you to have an arbitrary number of translations for each product, without having to modify the model each time a new language or field is added. The key would be the combination of the language code and the field name, and the value would be the translated value. This would allow you to easily query for the translations of a given language and product, and also avoid having a potentially wide table.
Instead of having separate fields for each language in your Product model, you can create a single ProductTranslation model that contains the translated field, the language, and the product ID. This way, you can add as many translations as needed without modifying the Product model. Here's an example:
type Product struct {
ID uint // unique identifier for the product
// other fields
}
type ProductTranslation struct {
ProductID uint // the ID of the product being translated
Language string // the language of the translation
Field string // the name of the field being translated
Value string // the translated value
}
You can then query for translations for a specific product and language using a query like this:
db.Where("product_id = ? AND language = ?", productID, language).Find(&translations)
This approach allows you to add as many translations as needed without modifying the Product model, but it does require you to run a separate query to fetch translations for a product. You can also consider using the map type to store the translations, which would allow you to fetch all translations for a product in a single query. Here's an example:
type Product struct {
ID uint // unique identifier for the product
// other fields
    Translations map[string]map[string]string // translations for the product
}
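If you go with the relation-based layout, here is a short sketch of how it could be consumed with GORM's Preload (db and productID are assumed to already exist; the language filter is illustrative):
var product Product
db.Preload("Translations", "language = ?", "de").First(&product, productID)

// Index the loaded rows by field name for easy lookup.
translated := make(map[string]string)
for _, t := range product.Translations {
    translated[t.Field] = t.Value // e.g. translated["description"]
}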
|
How can I manage translated model fields in GORM?
|
I need to translate an arbitrary number of fields from a go-gorm model to an arbitrary number of languages.
I'm studying the best database/model layout to do that.
Options that I have in mind:
Keep translated fields as model fields -> Every new language will involve model modification, and I will also create a potentially wide table depending on the number of languages and fields to translate. Querying a given language might not be easy. This seems the worst option.
type Product {
....
EN_description string
DE_description string
....
}
Create a ProductTranslation Relation for each field to translate
type Product {
....
Descriptions []ProductDescTranslation
....
}
type ProductDescTranslation {
Language string
ProductID uint
Value string
}
Create a single ProductTranslation Relation -> This one seems more generic than number 2 since I don't need a different relation for each field, but it also seems to include more complexity since it's not obvious how many translated fields the Product model has unless you run a query against that table.
type Product {
....
Translations []ProductTranslation
....
}
type ProductTranslation {
Language string
ProductID uint
Field string
Value string
}
Does anyone have better ideas? Does anyone know about any package already coded for that purpose? I did not find one.
|
[
"A possible solution could be to use a single ProductTranslation model, where each translation is stored as a key-value pair. This allows you to have an arbitrary number of translations for each product, without having to modify the model each time a new language or field is added. The key would be the combination of the language code and the field name, and the value would be the translated value. This would allow you to easily query for the translations of a given language and product, and also avoid having a potentially wide table.\nInstead of having separate fields for each language in your Product model, you can create a single ProductTranslation model that contains the translated field, the language, and the product ID. This way, you can add as many translations as needed without modifying the Product model. Here's an example:\ntype Product struct {\n ID uint // unique identifier for the product\n // other fields\n}\n\ntype ProductTranslation struct {\n ProductID uint // the ID of the product being translated\n Language string // the language of the translation\n Field string // the name of the field being translated\n Value string // the translated value\n}\n\nYou can then query for translations for a specific product and language using a query like this:\ndb.Where(\"product_id = ? AND language = ?\", productID, language).Find(&translations)\n\nThis approach allows you to add as many translations as needed without modifying the Product model, but it does require you to run a separate query to fetch translations for a product. You can also consider using the map type to store the translations, which would allow you to fetch all translations for a product in a single query. Here's an example:\ntype Product struct {\n ID uint // unique identifier for the product\n // other fields\n\n Translations map[string]map[string]string // translations for the product\n\n"
] |
[
0
] |
[] |
[] |
[
"database_design",
"go",
"go_gorm"
] |
stackoverflow_0074659763_database_design_go_go_gorm.txt
|
Q:
Check if a string matches the beginning of a regex
I have many strings to match against a regex. Many strings start with the same substring. To speed up my search, I would like to check whether the regex could match a string which begins with the common substring...
Example
I have a regex like for instance: /^(.[3e]|[o0]+)+l+$/ and many strings, like for instance these:
...
goo
goober
good
goodhearted
goodly
goods
goody
goof
goofball
google
goon
goose
...
held
helical
helices
helicopter
helipad
heliport
hell
help
hellion
helm
helmet
...
Half of the strings start with goo: I'd like to test whether goo is a valid beginning for a match. It's not (no string starting with goo can ever match that regex), thus I'd discard all those words at once.
The other half start with hel: I'd like to test whether hel is a valid beginning for a match. It is (some strings starting with hel may match that regex), thus I proceed testing those strings.
Is there any function to do this with a generic regex, without having to manually re-engineer it?
A:
With the data set given, filtering on the first 3 characters didn't speed up the processing. The overhead of shuffling the data is likely not worth it.
const test = `...
goo
goober
good
goodhearted
goodly
goods
goody
goof
goofball
google
goon
goose
...
held
helical
helices
helicopter
helipad
heliport
hell
help
hellion
helm
helmet
...`;
const re1 = /^(.[3e]|[o0]+)+l+$/;
const t0 = performance.now();
// split the test string into an array
let arr1 = test.split('\n');
// create a set to hold the first three letters of each string in the array
const firstThree = new Set();
arr1.forEach(e => {
firstThree.add(e.substring(0, 3));
});
// loop through the set
for (const ft of firstThree) {
// check if the first three characters match the regular expression
if (!re1.test(ft)) {
// if not, remove those strings from the array
arr1 = arr1.filter(e => e.indexOf(ft) !== 0);
}
}
arr1 = arr1.filter(e => re1.test(e));
const t1 = performance.now();
console.log(`Took ${t1 - t0} milliseconds.`);
console.log(arr1);
const re2 = /^(?:.[3e]|[o0]+)+l+$/mg;
const t2 = performance.now();
const arr2 = [...test.matchAll(re2)];
const t3 = performance.now();
console.log(`Took ${t3 - t2} milliseconds.`);
console.log(arr2[0]);
A:
You can use a negative lookahead test to exclude strings starting with goo, and a positive lookahead to consider only strings starting with 'hel', followed by your actual test:
[
'goo',
'goober',
'good',
'goodhearted',
'goodly',
'goods',
'goody',
'goof',
'goofball',
'google',
'goon',
'goose',
'held',
'helical',
'helices',
'helicopter',
'helipad',
'heliport',
'hell',
'help',
'hellion',
'helm',
'helmet'
].forEach(str => {
console.log(str + ' => ' + /^(?!goo)(?=hel)(.[3e]|[o0]+)+l+$/.test(str));
});
Explanation of regex:
^ -- anchor at start of string
(?!goo) -- negative lookahead for goo, e.g. no match if found
(?=hel) -- positive lookahead for hel
(.[3e]|[o0]+)+l+$ -- your existing regex test
See regex tutorial on positive/negative lookahead & lookbehind:
https://twiki.org/cgi-bin/view/Codev/TWikiPresentation2018x10x14Regex
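If the dead prefixes are only known at runtime, the same lookahead filter can be built programmatically (a sketch; badPrefixes is a hypothetical input list):

// Build the prefix filter dynamically from a list of known-dead prefixes.
const badPrefixes = ['goo'];
const filter = new RegExp(`^(?!(?:${badPrefixes.join('|')}))(.[3e]|[o0]+)+l+$`);

console.log(filter.test('hell')); // true
console.log(filter.test('good')); // false (rejected by the lookahead)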
|
Check if a string matches the beginning of a regex
|
I have many strings to match against a regex. Many strings start with the same substring. To speed up my search, I would like to check whether the regex could match a string which begins with the common substring...
Example
I have a regex like for instance: /^(.[3e]|[o0]+)+l+$/ and many strings, like for instance these:
...
goo
goober
good
goodhearted
goodly
goods
goody
goof
goofball
google
goon
goose
...
held
helical
helices
helicopter
helipad
heliport
hell
help
hellion
helm
helmet
...
Half of the strings start with goo: I'd like to test whether goo is a valid beginning for a match. It's not (no string starting with goo can ever match that regex), thus I'd discard all those words at once.
The other half start with hel: I'd like to test whether hel is a valid beginning for a match. It is (some strings starting with hel may match that regex), thus I proceed testing those strings.
Is there any function to do this with a generic regex, without having to manually re-engineer it?
|
[
"With the data set given, filtering on the first 3 characters didn't speed up the processing. The overhead of shuffling the data is likely not worth it.\n\n\nconst test = `...\ngoo\ngoober\ngood\ngoodhearted\ngoodly\ngoods\ngoody\ngoof\ngoofball\ngoogle\ngoon\ngoose\n...\nheld\nhelical\nhelices\nhelicopter\nhelipad\nheliport\nhell\nhelp\nhellion\nhelm\nhelmet\n...`;\n\nconst re1 = /^(.[3e]|[o0]+)+l+$/;\n\nconst t0 = performance.now();\n\n// split the test string into an array\nlet arr1 = test.split('\\n');\n\n// create a set to hold the first three letters of each string in the array\nconst firstThree = new Set();\narr1.forEach(e => {\n firstThree.add(e.substring(0, 3));\n});\n\n// loop through the set\nfor (const ft of firstThree) {\n // check if the first three characters match the regular expression\n if (!re1.test(ft)) {\n // if not, remove those strings from the array\n arr1 = arr1.filter(e => e.indexOf(ft) !== 0);\n }\n}\n\narr1 = arr1.filter(e => re1.test(e));\nconst t1 = performance.now();\nconsole.log(`Took ${t1 - t0} milliseconds.`);\n\nconsole.log(arr1);\n\nconst re2 = /^(?:.[3e]|[o0]+)+l+$/mg;\n\nconst t2 = performance.now();\nconst arr2 = [...test.matchAll(re2)];\nconst t3 = performance.now();\nconsole.log(`Took ${t3 - t2} milliseconds.`);\n\nconsole.log(arr2[0]);\n\n\n\n",
"You can use a negative lookahead test to exclude strings starting with goo, and a positive lookahead to consider only strings starting with 'hel', followed by your actual test:\n\n\n[\n 'goo',\n 'goober',\n 'good',\n 'goodhearted',\n 'goodly',\n 'goods',\n 'goody',\n 'goof',\n 'goofball',\n 'google',\n 'goon',\n 'goose',\n 'held',\n 'helical',\n 'helices',\n 'helicopter',\n 'helipad',\n 'heliport',\n 'hell',\n 'help',\n 'hellion',\n 'helm',\n 'helmet'\n].forEach(str => {\n console.log(str + ' => ' + /^(?!goo)(?=hel)(.[3e]|[o0]+)+l+$/.test(str));\n}); \n\n\n\nExplanation of regex:\n\n^ -- anchor at start of string\n(?!goo) -- negative lookahead for goo, e.g. no match if found\n(?=hel) -- positive lookahead for hel\n(.[3e]|[o0]+)+l+$ -- your existing regex test\n\nSee regex tutorial on positive/negative lookahead & lookbehind:\nhttps://twiki.org/cgi-bin/view/Codev/TWikiPresentation2018x10x14Regex\n"
] |
[
0,
0
] |
[] |
[] |
[
"javascript",
"regex"
] |
stackoverflow_0074663663_javascript_regex.txt
|
Q:
Template type to (statically) wrap a function?
Does the C++ standard define a utility for wrapping a function in a type (as distinct from wrapping it in a value)?
After Googling a bunch of names that seem related, I'm not finding anything, but then again, I know of a few things where I'd never guess the name.
Edit:
I already know how to get a type that can dynamically take on the value of a function (either std::function or a good old function pointers like int(*f)(int)) but that specifically excludes from the type the one thing I most want to include: the actual function to be called.
I already know how to get the type of a function from its name (decltype(fn)) which is not what I want, for the same reasons as listed above.
I find myself needing to make a type whose operator() exactly matches a function foo (in my case, a C function from a 3rd-party library). Ideally, the type should inline away to nothing any time it's used. As a one-off, this is easy:
struct foo_t {
ret_t operator()(SomeType t, int i) {
return foo(t, i);
}
};
However, there is at least one case where this needs to be done for a bunch of different functions: deleters for std::unique_ptr<T,D> while interfacing with opaque handles (e.g. from C libraries):
std::unique_ptr<FILE*, fclose_t> f(fopen(n, "r"));
std::unique_ptr<obj*, free_t> o((obj*)malloc(sizeof(obj)));
...
Rather than define fclose_t, free_t, etc. myself, what I'd like to do is something like this:
std::unique_ptr<FILE*, type_fn<fclose>> f(fopen(n, "r"));
std::unique_ptr<obj*, type_fn<free>> o((obj*)malloc(sizeof(obj)));
...
And that isn't even that hard... if you are happy to define type_fn yourself:
template<auto *f> struct fn_type {
template<class... A>
auto operator() (A&&... a) {
return f(std::forward<A>(a)...);
}
};
Which brings me back to the opening question: does C++ define something like that?
A:
that isn't even that hard... if you are happy to define type_fn yourself...
Actually, you are not supposed to take the address of a function from the standard library(with some exceptions), since the library might make changes to a function that are compatible with a normal use of a function, but not with taking its address (for instance, adding a parameter with a default value, or adding an overload to an overload set).
In fact, taking the address of a function from the standard library is explicitly forbidden since C++20. More can be found here: Can I take the address of a function defined in standard library?
Which means you can't write something like type_fn<fclose>.
A solution for you is to use a lambda, since the type of each lambda is guaranteed to be unique:
auto deleter = [](auto *p){ std::fclose(p); };
Now, you can use decltype(deleter) anywhere you would need to specify a type for it:
auto f = std::unique_ptr<std::FILE, decltype(deleter)>(std::fopen("file", "r"));
If you only need this deleter once, you can also define the lambda just in the template parameter:
std::unique_ptr<std::FILE, decltype(
[](auto *p){ std::fclose(p); }
)>(std::fopen("file", "r"));
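For completeness, here is the malloc/free case from the question done the same way. Note that before C++20 the lambda object must be passed to the constructor, because lambda closure types only became default-constructible in C++20 (a sketch, mirroring the question's example):
#include <cstdlib>
#include <memory>

struct obj { int x; };

int main() {
    auto free_deleter = [](obj *p) { std::free(p); };
    std::unique_ptr<obj, decltype(free_deleter)> o(
        static_cast<obj *>(std::malloc(sizeof(obj))), free_deleter);
} // o's destructor calls std::free here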
|
Template type to (statically) wrap a function?
|
Does the C++ standard define a utility for wrapping a function in a type (as distinct from wrapping it in a value)?
After Googling a bunch of names that seem related, I'm not finding anything, but then again, I know of a few things where I'd never guess the name.
Edit:
I already know how to get a type that can dynamically take on the value of a function (either std::function or a good old function pointers like int(*f)(int)) but that specifically excludes from the type the one thing I most want to include: the actual function to be called.
I already know how to get the type of a function from its name (decltype(fn)) which is not what I want, for the same reasons as listed above.
I find myself needing to make a type whose operator() exactly matches a function foo (in my case, a C function from a 3rd-party library). Ideally, the type should inline away to nothing any time it's used. As a one-off, this is easy:
struct foo_t {
ret_t operator()(SomeType t, int i) {
return foo(t, i);
}
};
However, there is at least one case where this needs to be done for a bunch of different functions: deleters for std::unique_ptr<T,D> while interfacing with opaque handles (e.g. from C libraries):
std::unique_ptr<FILE*, fclose_t> f(fopen(n, "r"));
std::unique_ptr<obj*, free_t> o((obj*)malloc(sizeof(obj)));
...
Rather than define fclose_t, free_t, etc. myself, what I'd like to do is something like this:
std::unique_ptr<FILE*, type_fn<fclose>> f(fopen(n, "r"));
std::unique_ptr<obj*, type_fn<free>> o((obj*)malloc(sizeof(obj)));
...
And that isn't even that hard... if you are happy to define type_fn yourself:
template<auto *f> struct fn_type {
template<class... A>
auto operator() (A&&... a) {
return f(std::forward<A>(a)...);
}
};
Which brings me back to the opening question: does C++ define something like that?
|
[
"\nthat isn't even that hard... if you are happy to define type_fn yourself...\n\nActually, you are not supposed to take the address of a function from the standard library(with some exceptions), since the library might make changes to a function that are compatible with a normal use of a function, but not with taking its address (for instance, adding a parameter with a default value, or adding an overload to an overload set).\nIn fact, taking the address of a function from the standard library is explicitly forbidden since C++20. More can be found here: Can I take the address of a function defined in standard library?\nWhich mean you can't write something like type_fn<fclose>.\n\nA solution for you is to use a lambda, since the type of each lambdas are guaranteed to be unique:\nauto deleter = [](auto *p){ std::fclose(p); };\n\nNow, you can use decltype(deleter) anywhere you would need to specify a type for it:\nauto f = std::uniqe_ptr<std::FILE, decltype(deleter)>(std::fopen(\"file\", \"r\"));\n\nIf you only need this deleter once, you can also define the lambda just in the template parameter:\nstd::uniqe_ptr<std::FILE, decltype(\n [](auto *p){ std::fclose(p); }\n)>(std::fopen(\"file\", \"r\"));\n\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"std",
"templates"
] |
stackoverflow_0074663642_c++_std_templates.txt
|
Q:
why empty slots are being replaced with undefined while cloning array using spread syntax?
I'm creating a clone array from an array that contains some empty slots, but after cloning they are replaced with undefined. If the source array contains empty slots, the clone should contain the same number of empty slots at exactly the same positions. I don't get the reason. I'm using spread syntax to clone the array as:
const arr = [1, "", , null, undefined, false, , 0];
console.log('arr => ', arr);
const clone = [...arr];
console.log('clone => ', clone)
Output is as below in chrome console
A:
Using spread syntax will invoke the object's iterator if it has one. The array iterator will:
a. Let index be 0.
b. Repeat
Let len be ? LengthOfArrayLike(array).
iii. If index ≥ len, return NormalCompletion(undefined).
(...)
1. Let elementKey be ! ToString((index)).
2. Let elementValue be ? Get(array, elementKey).
(yield elementValue)
vi. Set index to index + 1.
And the length of a sparse array is still the index of the last element plus one:
const arr = [];
arr[5] = 'a';
console.log(arr.length);
So, even with sparse arrays, spreading them will result in the new array containing values of:
arr[0]
arr[1]
arr[2]
// ...
arr[arr.length - 1]
even when the original array has empty slots in between 0 and arr.length - 1.
If you want empty slots, spreading will only work if you delete the undesirable indices afterwards - or iterate over the array manually, only assigning indices you need.
const arr = [1, "", , null, undefined, false, , 0];
console.log('arr => ', arr);
const clone = [];
for (let i = 0; i < arr.length; i++) {
if (arr.hasOwnProperty(i)) {
clone[i] = arr[i];
}
}
console.log('clone => ', clone)
But you could also consider restructuring your code to avoid sparse arrays entirely - they're not very intuitive.
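Worth noting: Array.prototype.slice() performs the same own-property check internally, so it preserves empty slots and can replace the manual loop above:
const arr = [1, "", , null, undefined, false, , 0];
const clone = arr.slice(); // holes survive, unlike with [...arr]
console.log(clone); // [1, '', <1 empty item>, null, undefined, false, <1 empty item>, 0]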
A:
Let's take a step back
let x;
console.log(x); // undefined
console.log(typeof x); // undefined
If you don't define a variable, it is un-defined.
Let's see now an empty array:
let x = [,]; // even [] would work but I thought this one is clearer for some
console.log(x[0]); // undefined
console.log(typeof x[0]); //undefined
Why is that? Simply because
If you don't define a variable, it is un-defined.
A:
The answers above already make it pretty clear why you are getting undefined.
Just to add: if you log arr[2] you will also get undefined. I haven't read it anywhere, but from what I know, the spread operator spreads the values of the array/object, which is why the value at arr[2] ends up as undefined.
|
why empty slots are being replaced with undefined while cloning array using spread syntax?
|
I'm creating a clone array from an array that contains some empty slots, but after cloning they are replaced with undefined. If the source array contains empty slots, the clone should contain the same number of empty slots at exactly the same positions. I don't get the reason. I'm using spread syntax to clone the array as:
const arr = [1, "", , null, undefined, false, , 0];
console.log('arr => ', arr);
const clone = [...arr];
console.log('clone => ', clone)
Output is as below in chrome console
|
[
"Using spread syntax will invoke the object's iterator if it has one. The array iterator will:\na. Let index be 0.\nb. Repeat\n Let len be ? LengthOfArrayLike(array).\n iii. If index ≥ len, return NormalCompletion(undefined).\n (...)\n 1. Let elementKey be ! ToString((index)).\n 2. Let elementValue be ? Get(array, elementKey).\n (yield elementValue)\n vi. Set index to index + 1.\n\nAnd the length of a sparse array is still the index of the last element plus one:\n\n\nconst arr = [];\narr[5] = 'a';\nconsole.log(arr.length);\n\n\n\nSo, even with sparse arrays, spreading them will result in the new array containing values of:\narr[0]\narr[1]\narr[2]\n// ...\narr[arr.length - 1]\n\neven when the original array has empty slots in between 0 and arr.length - 1.\nIf you want empty slots, spreading will only work if you delete the undesirable indices afterwards - or iterate over the array manually, only assigning indices you need.\n\n\nconst arr = [1, \"\", , null, undefined, false, , 0];\nconsole.log('arr => ', arr);\n\nconst clone = [];\nfor (let i = 0; i < arr.length; i++) {\n if (arr.hasOwnProperty(i)) {\n clone[i] = arr[i];\n }\n}\nconsole.log('clone => ', clone)\n\n\n\nBut you could also consider restructuring your code to avoid sparse arrays entirely - they're not very intuitive.\n",
"Let's take a step back\nlet x;\nconsole.log(x); // undefined\nconsole.log(typeof x); // undefined\n\nIf you don't define a variable, it is un-defined.\nLet's see now an empty array:\nlet x = [,]; // even [] would work but I thought this one is clearer for some\nconsole.log(x[0]); // undefined\nconsole.log(typeof x[0]); //undefined\n\nWhy is that? Simply because\n\nIf you don't define a variable, it is un-defined.\n\n",
"above answers already made it pretty clear why you getting undefiend. \nJust to add more If you log arr[2] you will get undefined, i haven't read it anywhere but from what i know spread operator spread the values of array/obj that is why arr[2] value is undefiend\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"arrays",
"javascript"
] |
stackoverflow_0074664150_arrays_javascript.txt
|
Q:
redirecting to home page banner slider does not work
I created an Angular project with a home page banner which works fine on the first load. But on redirecting to the home page from another page the slider doesn't work. However, on refreshing again it works. How can it be fixed?
A:
It sounds like you are using a slider or carousel on your homepage, and that it is not working correctly when you navigate to the homepage from another page. This is a common issue when using sliders or carousels in web applications, and it is typically caused by the way the slider or carousel is initialized.
In most cases, the solution is to initialize the slider or carousel when the page is loaded, rather than when the page is rendered. This ensures that the slider or carousel is properly set up and ready to use when the user navigates to the page.
To do this in Angular, you can use the ngAfterViewInit lifecycle hook to initialize the slider or carousel when the page is loaded. This hook is called after the view and its child views have been fully initialized, so it is a good place to initialize any components that need to be set up before the user can interact with them.
Here is an example of how you might initialize a slider or carousel using the ngAfterViewInit hook:
import { Component, AfterViewInit } from '@angular/core';
@Component({
selector: 'app-home',
templateUrl: './home.component.html',
styleUrls: ['./home.component.css']
})
export class HomeComponent implements AfterViewInit {
ngAfterViewInit() {
// Initialize the slider or carousel here
}
}
In this code, the HomeComponent implements the AfterViewInit interface, which tells Angular that the component should be notified (via ngAfterViewInit) once its view has been fully initialized.
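As a more concrete sketch (the slider library is not named in the question, so initSlider here is a hypothetical entry point):
import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';

declare const initSlider: (el: HTMLElement) => void; // hypothetical library function

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
  styleUrls: ['./home.component.css']
})
export class HomeComponent implements AfterViewInit {
  @ViewChild('banner') banner!: ElementRef<HTMLElement>;

  ngAfterViewInit() {
    // Runs on every navigation to this route, not only on a full page load.
    initSlider(this.banner.nativeElement);
  }
}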
|
redirecting to home page banner slider does not work
|
I created an Angular project with a home page banner which works fine on the first load. But on redirecting to the home page from another page the slider doesn't work. However, on refreshing again it works. How can it be fixed?
|
[
"It sounds like you are using a slider or carousel on your homepage, and that it is not working correctly when you navigate to the homepage from another page. This is a common issue when using sliders or carousels in web applications, and it is typically caused by the way the slider or carousel is initialized.\nIn most cases, the solution is to initialize the slider or carousel when the page is loaded, rather than when the page is rendered. This ensures that the slider or carousel is properly set up and ready to use when the user navigates to the page.\nTo do this in Angular, you can use the ngAfterViewInit lifecycle hook to initialize the slider or carousel when the page is loaded. This hook is called after the view and its child views have been fully initialized, so it is a good place to initialize any components that need to be set up before the user can interact with them.\nHere is an example of how you might initialize a slider or carousel using the ngAfterViewInit hook:\nimport { Component, AfterViewInit } from '@angular/core';\n\n@Component({\n selector: 'app-home',\n templateUrl: './home.component.html',\n styleUrls: ['./home.component.css']\n})\nexport class HomeComponent implements AfterViewInit {\n\n ngAfterViewInit() {\n // Initialize the slider or carousel here\n }\n}\n\nIn this code, the HomeComponent implements the AfterViewInit interface, which tells Angular that the component should\n"
] |
[
0
] |
[] |
[] |
[
"angular"
] |
stackoverflow_0074664617_angular.txt
|
Q:
How To Send Object in Form Data in React.js
Actually I am sending FormData which contains input text and input files. I am doing it like this, but I am getting an empty object in response.
Here is my Code
const [ModelInfo,setModelInfo] = useState({
title:"",
description:"",
category:""
})
const [Modelfile,setModelfile] = useState({
file1:"",
file3:"",
file4:""
})
Here is my function to handle submit
e.preventDefault();
const formData = new FormData();
// My Post Files Object
for(let key in Modelfile){
formData.append(key,Modelfile[key][0])
}
// My Post Text Object
for(let key in ModelInfo){
formData.append(key,ModelInfo[key])
}
fetch("http://192.168.10.8:8300/createpost",{
method:"POST",
body:formData
})
.then((resp)=>{
resp.json().then((data) => {
console.log(data)
})
})
Payload :
file1: (binary)
file3: (binary)
file4: (binary)
title: test
description: test
category: test
Preview : {}
Getting Empty Obj in response
ScreenShot 1 Of Post Request
ScreenShot 2 Of Post Request
A:
There are some possibilities that may cause the issue:
Your headers may not be set properly. When sending FormData with fetch, do not set the Content-Type header manually; the browser sets multipart/form-data with the correct boundary automatically.
On your server, you don't have body-parser or multer library. You should use this libraries (not together) in order to access your req.body and parse it.
Maybe you forgot to add this config (in your index file in the root directory of the server):
app.use(express.urlencoded({
extended: true
}))
As a note, you can access your files with req.files in server and other fields with req.body.
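For reference, a minimal Express + multer sketch of the server side (the route and field names mirror the question; multer is assumed to be installed):
const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer({ dest: 'uploads/' });

app.post('/createpost',
  upload.fields([{ name: 'file1' }, { name: 'file3' }, { name: 'file4' }]),
  (req, res) => {
    console.log(req.body);  // title, description, category
    console.log(req.files); // the three uploaded files
    res.json(req.body);     // echo the text fields so the client no longer sees {}
  });

app.listen(8300);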
A:
If you didn't add the multer library on your server side, the form data will return an empty object, because multer adds a body object and a files object to the request object. So use the multer library, but don't forget to add enctype="multipart/form-data" to your form.
A:
To send an object in form data using React.js, you can use the FormData class provided by the browser to create a form with the object as its data. Here's an example of how you can do this:
import React, { useRef } from "react";
function MyForm() {
// create a reference to the form element
const formRef = useRef();
// create an object to be sent in the form data
const data = {
name: "John Doe",
email: "[email protected]",
password: "123456",
};
// create a submit event handler for the form
const handleSubmit = (event) => {
event.preventDefault();
// build the form data inside the handler, once the form element exists
const formData = new FormData(formRef.current);
formData.append("data", JSON.stringify(data));
// send the form data using an HTTP request
fetch("/api/form-data", {
method: "POST",
body: formData,
});
};
return (
<form ref={formRef} onSubmit={handleSubmit}>
{/* form fields and submit button go here */}
</form>
);
}
In this code, we reference the form element through formRef and, inside the submit event handler, create its form data with the FormData class, append the data object, and send it to the server with fetch() via an HTTP request.
Keep in mind that this code is just an example, and you may need to modify it depending on your specific use case and requirements.
|
How To Send Object in Form Data in React.js
|
Actually I am sending FormData which contains input text and input files. I am doing it like this, but I am getting an empty object in response.
Here is my Code
const [ModelInfo,setModelInfo] = useState({
title:"",
description:"",
category:""
})
const [Modelfile,setModelfile] = useState({
file1:"",
file3:"",
file4:""
})
Here is my function to handle submit
e.preventDefault();
const formData = new FormData();
// My Post Files Object
for(let key in Modelfile){
formData.append(key,Modelfile[key][0])
}
// My Post Text Object
for(let key in ModelInfo){
formData.append(key,ModelInfo[key])
}
fetch("http://192.168.10.8:8300/createpost",{
method:"POST",
body:formData
})
.then((resp)=>{
resp.json().then((data) => {
console.log(data)
})
})
Payload :
file1: (binary)
file3: (binary)
file4: (binary)
title: test
description: test
category: test
Preview : {}
Getting Empty Obj in response
ScreenShot 1 Of Post Request
ScreenShot 2 Of Post Request
|
[
"There are some possibilities that may cause the issue:\n\nYour headers are not set properly. Set Content-Type header to multipart/form-data or application/x-www-form-urlencoded when sending your data.\n\nOn your server, you don't have body-parser or multer library. You should use this libraries (not together) in order to access your req.body and parse it.\n\nMaybe you forgot to add this config (in your index file in the root directory of the server):\n\n\napp.use(express.urlencoded({\n extended: true\n}))\n\nAs a note, you can access your files with req.files in server and other fields with req.body.\n",
"If You did n't add a multer library to your server side then the form data will return an empty object because multer add a body object and a file object to requested object so use a multer library but dont forget to add enctype=\"multipart/formdata\" to you form body .\n",
"To send an object in form data using React.js, you can use the FormData class provided by the browser to create a form with the object as its data. Here's an example of how you can do this:\nimport React, { useRef } from \"react\";\n\nfunction MyForm() {\n // create a reference to the form element\n const formRef = useRef();\n\n // create an object to be sent in the form data\n const data = {\n name: \"John Doe\",\n email: \"[email protected]\",\n password: \"123456\",\n };\n\n // create a form data object with the data object as its data\n const formData = new FormData(formRef.current);\n formData.append(\"data\", JSON.stringify(data));\n\n // create a submit event handler for the form\n const handleSubmit = (event) => {\n event.preventDefault();\n\n // send the form data using an HTTP request\n fetch(\"/api/form-data\", {\n method: \"POST\",\n body: formData,\n });\n };\n\n return (\n <form ref={formRef} onSubmit={handleSubmit}>\n {/* form fields and submit button go here */}\n </form>\n );\n}\n\n\nIn this code, we create a form using the formRef reference and add the data object to its form data using the FormData class. Then, in the submit event handler for the form, we use the fetch() method to send the form data to the server using an HTTP request.\nKeep in mind that this code is just an example, and you may need to modify it depending on your specific use case and requirements.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"fetch",
"multipartform_data",
"reactjs"
] |
stackoverflow_0074638392_fetch_multipartform_data_reactjs.txt
|
Q:
How do I make a button smaller?
How do I make this ExtendedFloatingActionButton smaller? In a normal FloatingActionButton I always used app:fabSize="mini" and it always made it small perfectly.
Here, app:fabSize="mini" has no effect
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="horizontal">
<com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="heelo w"
app:icon="@drawable/myicon"
android:layout_gravity="end"
app:fabSize="mini"/>
</LinearLayout>
A:
If you don't need the android:text attribute use app:collapsedSize instead of app:fabSize like so:
<com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:icon="@drawable/myicon"
android:layout_gravity="end"
app:collapsedSize="40dp"/>
But if you want to show the text and the icon both you need to change the scale using android:scaleX and android:scaleY attributes. Here is an example:
<com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton
android:id="@+id/fab"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="bottom|end"
android:layout_marginEnd="16dp"
android:layout_marginBottom="16dp"
android:scaleX="0.5"
android:scaleY="0.5"
android:text="heelo w"
app:icon="@drawable/myicon" />
Using the same scale for both X and Y is better for getting the same ratio. To get a smaller scale the parameters should always be smaller than one. For example 0.4, 0.5, and 0.6.
|
How do I make a button smaller?
|
How do I make this ExtendedFloatingActionButton smaller? In a normal FloatingActionButton I always used app:fabSize="mini" and it always made it small perfectly.
Here, app:fabSize="mini" has no effect
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="horizontal">
<com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="heelo w"
app:icon="@drawable/myicon"
android:layout_gravity="end"
app:fabSize="mini"/>
</LinearLayout>
|
[
"If you don't need the android:text attribute use app:collapsedSize instead of app:fabSize like so:\n<com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n app:icon=\"@drawable/myicon\"\n android:layout_gravity=\"end\"\n app:collapsedSize=\"40dp\"/>\n\nBut if you want to show the text and the icon both you need to change the scale using android:scaleX and android:scaleY attributes. Here is an example:\n<com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton\n android:id=\"@+id/fab\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_gravity=\"bottom|end\"\n android:layout_marginEnd=\"16dp\"\n android:layout_marginBottom=\"16dp\"\n android:scaleX=\"0.5\"\n android:scaleY=\"0.5\"\n android:text=\"heelo w\"\n app:icon=\"@drawable/myicon\" />\n\nUsing the same scale for both X and Y is better for getting the same ratio. To get a smaller scale the parameters should always be smaller than one. For example 0.4, 0.5, and 0.6.\n"
] |
[
0
] |
[] |
[] |
[
"android",
"button",
"xml"
] |
stackoverflow_0074102580_android_button_xml.txt
|
Q:
GitLab-CI: Run Python Script and Exit (VPS)
I am trying to do a CI script on GitLab where it connects to my VPS, Git Pulls and then runs the python script and exits, while leaving my python script running 24/7 (until the next pipeline run/commit).
How do I get it to make my Python script run 24/7?
script:
- 'apt-get update -y && apt-get install openssh-client -y && apt-get install sshpass -y '
- sshpass -p "password" ssh -o StrictHostKeyChecking=no root@host "cd repo/ && git pull && python3 main.py"
This is my current script, however, when main.py is run, the pipeline is left in limbo since the script is eternally running.
How do I make it so the pipeline script runs the script and exits, leaving it on tmux or something like that?
A:
Check first if this is a tty allocation issue, as in here.
ssh -t -o ...
^^
Also consider calling just one script (which does the cd, git pull and python3)
That way you can test the script locally (on 'host'), and then call it remotely (through ssh)
From the OP Kevin A. in the comments:
my code goes through a loop that reruns the code every 45mins or so, so the script is constantly running. It's a web scraper constantly updating a cloud database.
The idea is to get GitLab CI to ignore waiting for the script to finish running, its just is to, stop previous script running, git pull and run the script again
Another approach would be to make the script scrap one-time (and exit), but call said script through a GitLab scheduled pipeline.
That way, no more freeze.
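If you do want the pipeline to start the script and exit while it keeps running on the VPS, one common sketch is to detach the process on the remote side, killing any previous run first (nohup shown here; tmux new-session -d would work the same way):
script:
  - 'apt-get update -y && apt-get install -y openssh-client sshpass'
  - sshpass -p "password" ssh -o StrictHostKeyChecking=no root@host "cd repo/ && git pull && (pkill -f 'python3 main.py' || true) && nohup python3 main.py > main.log 2>&1 &"
Redirecting stdout/stderr matters here: without it, ssh keeps the connection open waiting for output and the job still hangs.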
|
GitLab-CI: Run Python Script and Exit (VPS)
|
I am trying to do a CI script on GitLab where it connects to my VPS, Git Pulls and then runs the python script and exits, while leaving my python script running 24/7 (until the next pipeline run/commit).
How do I get it to make my Python script run 24/7?
script:
- 'apt-get update -y && apt-get install openssh-client -y && apt-get install sshpass -y '
- sshpass -p "password" ssh -o StrictHostKeyChecking=no root@host "cd repo/ && git pull && python3 main.py"
This is my current script, however, when main.py is run, the pipeline is left in limbo since the script is eternally running.
How do I make it so the pipeline script runs the script and exits, leaving it on tmux or something like that?
|
[
"Check first if this is a tty allocation issue, as in here.\nssh -t -o ...\n ^^\n\nAlso consider calling just one script (which does the cd, git pull and python3)\nThat way you can test the script locally (on 'host'), and then call it remotely (through ssh)\n\nFrom the OP Kevin A. in the comments:\n\nmy code goes through a loop that reruns the code every 45mins or so, so the script is constantly running. It's a web scraper constantly updating a cloud database.\nThe idea is to get GitLab CI to ignore waiting for the script to finish running, its just is to, stop previous script running, git pull and run the script again\n\nAnother approach would be to make the script scrap one-time (and exit), but call said script through a GitLab scheduled pipeline.\nThat way, no more freeze.\n"
] |
[
1
] |
[] |
[] |
[
"gitlab",
"gitlab_ci",
"python",
"python_3.x"
] |
stackoverflow_0074661343_gitlab_gitlab_ci_python_python_3.x.txt
|
Q:
How to set dropdown menu hidden by default
This is my code for a dropdown. I am using Tailwind CSS and pure HTML:
<li class="relative" x-data="{isOpen:false}">
<button @click="isOpen=!isOpen" class=" block text-sm font-bold outline-none focus:outline-none text-blue-900 " href="#">
SERVICES</button>
<div
class="right-0 p-2 mt-1 bg-white rounded-md shadow lg:absolute"
:class="{'hidden':!isOpen'flex flex-col':isOpen}"
@click.away="isOpen = false">
<a href="#" class="flex p-2 font-medium text-gray-600 rounded-md hover:bg-gray-100 hover:text-black">Categories</a>
<a href="#" class="flex p-2 font-medium text-gray-600 rounded-md hover:bg-gray-100 hover:text-black">Inventories</a>
<a href="#" class="flex p-2 font-medium text-gray-600 rounded-md hover:bg-gray-100 hover:text-black">Brands</a>
</div>
</li>
I want to hide my dropdown by default, and it should only open when someone hovers on it or clicks on it. But right now it is still visible even though I have tried hiding it. Please see the image for reference and let me know what I am doing wrong.
A:
You have to use pure CSS on the drop-down container. Example:
.drop-down {
  display: none;
}
.drop-down-parent:hover .drop-down {
  display: block;
}
A:
Try like below; the example below uses Tailwind CSS.
.dropdown:hover .dropdown-menu {
display: block;
}
<link href="https://cdnjs.cloudflare.com/ajax/libs/tailwindcss/1.0.4/tailwind.min.css" rel="stylesheet"/>
<div class="p-10">
<div class="dropdown inline-block relative">
<button class="bg-gray-300 text-gray-700 font-semibold py-2 px-4 rounded inline-flex items-center">
<span class="mr-1">Dropdown</span>
<svg class="fill-current h-4 w-4" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M9.293 12.95l.707.707L15.657 8l-1.414-1.414L10 10.828 5.757 6.586 4.343 8z"/> </svg>
</button>
<ul class="dropdown-menu absolute hidden text-gray-700 pt-1">
<li class=""><a class="rounded-t bg-gray-200 hover:bg-gray-400 py-2 px-4 block whitespace-no-wrap" href="#">One</a></li>
<li class=""><a class="bg-gray-200 hover:bg-gray-400 py-2 px-4 block whitespace-no-wrap" href="#">Two</a></li>
<li class=""><a class="rounded-b bg-gray-200 hover:bg-gray-400 py-2 px-4 block whitespace-no-wrap" href="#">Three is the magic number</a></li>
</ul>
</div>
</div>
|
How to set dropdown menu hidden by default
|
This is my code for a dropdown. I am using Tailwind CSS and pure HTML:
<li class="relative" x-data="{isOpen:false}">
<button @click="isOpen=!isOpen" class=" block text-sm font-bold outline-none focus:outline-none text-blue-900 " href="#">
SERVICES</button>
<div
class="right-0 p-2 mt-1 bg-white rounded-md shadow lg:absolute"
:class="{'hidden':!isOpen'flex flex-col':isOpen}"
@click.away="isOpen = false">
<a href="#" class="flex p-2 font-medium text-gray-600 rounded-md hover:bg-gray-100 hover:text-black">Categories</a>
<a href="#" class="flex p-2 font-medium text-gray-600 rounded-md hover:bg-gray-100 hover:text-black">Inventories</a>
<a href="#" class="flex p-2 font-medium text-gray-600 rounded-md hover:bg-gray-100 hover:text-black">Brands</a>
</div>
</li>
I want to hide my dropdown by default, and it should only open when someone hovers on it or clicks on it. But right now it is still visible even though I have tried hiding it. Please see the image for reference and let me know what I am doing wrong.
|
[
"You have to use pure css on the drop-down container\nExample\n.\n\ndrop-down{\ndisplay:none;\n}\n.drop-down-parent:hover .drop-down{\n\ndisplay:block;\n\n}\n\n",
"try like below, below example use tailwind CSS\n\n\n.dropdown:hover .dropdown-menu {\n display: block;\n}\n<link href=\"https://cdnjs.cloudflare.com/ajax/libs/tailwindcss/1.0.4/tailwind.min.css\" rel=\"stylesheet\"/>\n<div class=\"p-10\">\n\n <div class=\"dropdown inline-block relative\">\n <button class=\"bg-gray-300 text-gray-700 font-semibold py-2 px-4 rounded inline-flex items-center\">\n <span class=\"mr-1\">Dropdown</span>\n <svg class=\"fill-current h-4 w-4\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 20 20\"><path d=\"M9.293 12.95l.707.707L15.657 8l-1.414-1.414L10 10.828 5.757 6.586 4.343 8z\"/> </svg>\n </button>\n <ul class=\"dropdown-menu absolute hidden text-gray-700 pt-1\">\n <li class=\"\"><a class=\"rounded-t bg-gray-200 hover:bg-gray-400 py-2 px-4 block whitespace-no-wrap\" href=\"#\">One</a></li>\n <li class=\"\"><a class=\"bg-gray-200 hover:bg-gray-400 py-2 px-4 block whitespace-no-wrap\" href=\"#\">Two</a></li>\n <li class=\"\"><a class=\"rounded-b bg-gray-200 hover:bg-gray-400 py-2 px-4 block whitespace-no-wrap\" href=\"#\">Three is the magic number</a></li>\n </ul>\n </div>\n\n</div>\n\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"css",
"html",
"tailwind_css"
] |
stackoverflow_0074664400_css_html_tailwind_css.txt
|
Q:
How to get the object id in PyMongo after an insert?
I'm doing a simple insert into Mongo...
db.notes.insert({ title: "title", details: "note details"})
After the note document is inserted, I need to get the object id immediately. The result that comes back from the insert has some basic info regarding connection and errors, but no document and field info.
I found some info about using the update() function with upsert=true; I'm just not sure if that's the right way to go, and I have not yet tried it.
A:
One of the cool things about MongoDB is that the ids are generated client side.
This means you don't even have to ask the server what the id was, because you told it what to save in the first place. Using pymongo the return value of an insert will be the object id. Check it out:
>>> import pymongo
>>> collection = pymongo.Connection()['test']['tyler']
>>> _id = collection.insert({"name": "tyler"})
>>> print _id
4f0b2f55096f7622f6000000
A:
The answer from Tyler does not work for me.
Using _id.inserted_id works
>>> import pymongo
>>> collection = pymongo.Connection()['test']['tyler']
>>> _id = collection.insert({"name": "tyler"})
>>> print(_id)
<pymongo.results.InsertOneResult object at 0x0A7EABCD>
>>> print(_id.inserted_id)
5acf02400000000968ba447f
A:
It's better to use insert_one() or insert_many() instead of insert(). Those two are for the newer version. You can use inserted_id to get the id.
myclient = pymongo.MongoClient("mongodb://localhost:27017/")
myDB = myclient["myDB"]
userTable = myDB["Users"]
userDict={"name": "tyler"}
_id = userTable.insert_one(userDict).inserted_id
print(_id)
Or
result = userTable.insert_one(userDict)
print(result.inserted_id)
print(result.acknowledged)
If you need to use insert(), you should write like the lines below
_id = userTable.insert(userDict)
print(_id)
A:
Newer PyMongo versions deprecate insert, and instead insert_one or insert_many should be used. These functions return a pymongo.results.InsertOneResult or pymongo.results.InsertManyResult object.
With these objects you can use the .inserted_id and .inserted_ids properties respectively to get the inserted object ids.
See this link for more info on insert_one and insert_many and this link for more info on pymongo.results.InsertOneResult.
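For example, a short sketch with the current API (assuming a local MongoDB):
import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017/")
notes = client["test"]["notes"]

one = notes.insert_one({"title": "title", "details": "note details"})
print(one.inserted_id)   # a single ObjectId

many = notes.insert_many([{"title": "a"}, {"title": "b"}])
print(many.inserted_ids) # list of ObjectIds, in insertion order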
A:
updated; removed previous because it wasn't correct
It looks like you can also do it with db.notes.save(...), which returns the _id after it performs the insert.
See for more info:
http://api.mongodb.org/python/current/api/pymongo/collection.html
A:
some_var = db.notes.insert({ title: "title", details: "note details"})
print(some_var.inserted_id)
A:
You just need to assigne it to some variable:
someVar = db.notes.insert({ title: "title", details: "note details"})
A:
To get the ID after an Insert in Python, just do like this:
doc = db.notes.insert({ title: "title", details: "note details"})
return str(doc.inserted_id) # This is to convert the ObjectID (type of doc.inserted_id into string)
|
How to get the object id in PyMongo after an insert?
|
I'm doing a simple insert into Mongo...
db.notes.insert({ title: "title", details: "note details"})
After the note document is inserted, I need to get the object id immediately. The result that comes back from the insert has some basic info regarding connection and errors, but no document and field info.
I found some info about using the update() function with upsert=true; I'm just not sure if that's the right way to go, and I have not yet tried it.
|
[
"One of the cool things about MongoDB is that the ids are generated client side.\nThis means you don't even have to ask the server what the id was, because you told it what to save in the first place. Using pymongo the return value of an insert will be the object id. Check it out:\n>>> import pymongo\n>>> collection = pymongo.Connection()['test']['tyler']\n>>> _id = collection.insert({\"name\": \"tyler\"})\n>>> print _id.inserted_id \n4f0b2f55096f7622f6000000\n\n",
"The answer from Tyler does not work for me. \nUsing _id.inserted_id works\n>>> import pymongo\n>>> collection = pymongo.Connection()['test']['tyler']\n>>> _id = collection.insert({\"name\": \"tyler\"})\n>>> print(_id)\n<pymongo.results.InsertOneResult object at 0x0A7EABCD>\n>>> print(_id.inserted_id)\n5acf02400000000968ba447f\n\n",
"It's better to use insert_one() or insert_many() instead of insert(). Those two are for the newer version. You can use inserted_id to get the id.\nmyclient = pymongo.MongoClient(\"mongodb://localhost:27017/\")\nmyDB = myclient[\"myDB\"]\nuserTable = myDB[\"Users\"]\nuserDict={\"name\": \"tyler\"}\n\n_id = userTable.insert_one(userDict).inserted_id\nprint(_id)\n\nOr\nresult = userTable.insert_one(userDict)\nprint(result.inserted_id)\nprint(result.acknowledged)\n\nIf you need to use insert(), you should write like the lines below\n_id = userTable.insert(userDict)\nprint(_id)\n\n",
"Newer PyMongo versions depreciate insert, and instead insert_one or insert_many should be used. These functions return a pymongo.results.InsertOneResult or pymongo.results.InsertManyResult object.\nWith these objects you can use the .inserted_id and .inserted_ids properties respectively to get the inserted object ids.\nSee this link for more info on insert_one and insert_many and this link for more info on pymongo.results.InsertOneResult.\n",
"updated; removed previous because it wasn't correct\nIt looks like you can also do it with db.notes.save(...), which returns the _id after it performs the insert. \nSee for more info:\nhttp://api.mongodb.org/python/current/api/pymongo/collection.html\n",
"some_var = db.notes.insert({ title: \"title\", details: \"note details\"})\nprint(some_var.inserted_id)\n\n",
"You just need to assigne it to some variable:\nsomeVar = db.notes.insert({ title: \"title\", details: \"note details\"})\n\n",
"To get the ID after an Insert in Python, just do like this:\ndoc = db.notes.insert({ title: \"title\", details: \"note details\"})\nreturn str(doc.inserted_id) # This is to convert the ObjectID (type of doc.inserted_id into string)\n\n"
] |
[
97,
44,
17,
11,
2,
1,
0,
0
] |
[] |
[] |
[
"insert",
"mongodb",
"pymongo"
] |
stackoverflow_0008783753_insert_mongodb_pymongo.txt
|
Q:
Delete Lines from table if the sum of multiple lines is Nil (based on criteria)
I'm working with the following table where you can get activity from customer purchases.
| DateOfActivity | CustomerReference | Reference Line | Description | Receivable Amount |
| -------------- | ----------------- | -------------- | ----------- | ----------------- |
| 24/10/2022 | CUST567 | 1 | Credit Purchase | 20,000 |
| 24/10/2022 | CUST567 | 4 | Credit Purchase | 10,000 |
| 24/10/2022 | CUST555 | 2 | Credit Purchase | 50,000 |
| 27/10/2022 | CUST555 | 2 | Contract Sign | 0 |
| 27/10/2022 | CUST567 | 4 | Contract Sign | 0 |
| 27/10/2022 | CUST567 | 1 | Contract Sign | 0 |
| 27/10/2022 | CUST567 | 4 | Repayment | -3,500 |
| 27/10/2022 | CUST567 | 4 | Repayment | -6,500 |
| 13/11/2022 | CUST567 | 1 | Repayment | -10,000 |
| 13/11/2022 | CUST567 | 1 | Repayment | -2,000 |
| 18/11/2022 | CUST567 | 1 | Contract Sign | 0 |
| 18/11/2022 | CUST567 | 1 | Repayment | -3,000 |
I'm using the following query to extract the above table:
Select
DateOfActivity, CustomerReference, ReferenceLine, Description, ReceivableAmount
From 'Table A'
Where
DateOfActivity >= '2022-09-01'
Group by
DateOfActivity
As you can see, the table will only get bigger as more customer activity is added. How can I change my query so the customers who have fully paid their receivable amount don't show up in this table?
The result I am expecting from the above script change is as follows:
| DateOfActivity | CustomerReference | Reference Line | Description | Receivable Amount |
| -------------- | ----------------- | -------------- | ----------- | ----------------- |
| 24/10/2022 | CUST567 | 1 | Credit Purchase | 20,000 |
| 24/10/2022 | CUST555 | 2 | Credit Purchase | 50,000 |
| 27/10/2022 | CUST555 | 2 | Contract Sign | 0 |
| 27/10/2022 | CUST567 | 1 | Contract Sign | 0 |
| 13/11/2022 | CUST567 | 1 | Repayment | -10,000 |
| 13/11/2022 | CUST567 | 1 | Repayment | -2,000 |
| 18/11/2022 | CUST567 | 1 | Contract Sign | 0 |
| 18/11/2022 | CUST567 | 1 | Repayment | -3,000 |
CUST567 Reference Line 4 has been removed because the sum of its Credit Purchase + Contract Sign + Repayment = $0. All other customers' rows are still showing up.
How can I edit the query so this is done automatically for large data? Please note the following assumptions:
Customer Reference for multiple customers can be the same or different (for example, in the above example, CUST567 has two Reference Lines, 1 & 4. However, CUST555 only has one Reference Line, 2).
The data is removed for Customers based on Receivable amount coming down to Nil (so all rows for that CustomerReference & Reference Line are removed)
Thanks in Advance
A:
What are you trying to achieve here? Are the references to large data because something is getting slow or something?
I'd suggest making a view for your query (google the syntax for your flavour of SQL, but it should be something like create view {view name} as {your query};).
Then you can query that view with an additional where "Receivable Amount" != 0.
You could also make a subquery, i.e. select * from ({your query}) as x where x."Receivable Amount" != 0.
But if you've written the sql you've gotten to so far I assume you know that? So is there an issue in terms of performance or some such that's caused you to ask the question?
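A sketch of that view idea (object names are illustrative):
create view outstanding_lines as
select CustomerReference, ReferenceLine,
       sum(ReceivableAmount) as outstanding
from TableA
group by CustomerReference, ReferenceLine;

-- then filter on it:
select * from outstanding_lines where outstanding <> 0;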
A:
As I understand the question, we need to build the sum of the amounts grouped by the reference line. So this query will get those reference lines having this sum of amounts = 0:
SELECT
ReferenceLine
FROM tableA
GROUP BY ReferenceLine
HAVING SUM(ReceivableAmount) = 0;
Then, we can use this query as a subquery and fetch all other entries not having such a reference line:
SELECT
DateOfActivity, CustomerReference, ReferenceLine,
Description, ReceivableAmount
FROM tableA
WHERE ReferenceLine NOT IN
(SELECT
ReferenceLine
FROM tableA
GROUP BY ReferenceLine
HAVING SUM(ReceivableAmount) = 0);
This will produce the expected outcome.
We could of course also use IN and amount <> 0 instead of NOT IN and amount = 0:
SELECT
DateOfActivity, CustomerReference, ReferenceLine,
Description, ReceivableAmount
FROM tableA
WHERE ReferenceLine IN
(SELECT
ReferenceLine
FROM tableA
GROUP BY ReferenceLine
HAVING SUM(ReceivableAmount) <> 0);
This will create the same result. Just take what you prefer.
Try out: db<>fiddle
An important note: In your question and comments, you are talking about the sum for the reference line is "nil" or "null". This is untrue. It is zero.
NULL or NIL would mean there is no amount, so this is something completely different!
All your amounts are NOT NULL and the sum of them is also NOT NULL, but zero for reference line 4.
EDIT: If it should also be grouped by CustomerReference, we can extend the previous queries like this:
SELECT
DateOfActivity, CustomerReference, ReferenceLine,
Description, ReceivableAmount
FROM tableA
WHERE (CustomerReference, ReferenceLine) NOT IN
(SELECT
CustomerReference, ReferenceLine
FROM tableA
GROUP BY CustomerReference, ReferenceLine
HAVING SUM(ReceivableAmount) = 0);
OR
SELECT
DateOfActivity, CustomerReference, ReferenceLine,
Description, ReceivableAmount
FROM tableA
WHERE (CustomerReference, ReferenceLine) IN
(SELECT
CustomerReference, ReferenceLine
FROM tableA
GROUP BY CustomerReference, ReferenceLine
HAVING SUM(ReceivableAmount) <> 0);
Updated fiddle: db<>fiddle
|
Delete Lines from table if the sum of multiple lines is Nil (based on criteria)
|
I'm working with the following table where you can get activity from customer purchases.
DateOfActivity
CustomerReference
Reference Line
Description
Receivable Amount
24/10/2022
CUST567
1
Credit Purchase
20,000
24/10/2022
CUST567
4
Credit Purchase
10,000
24/10/2022
CUST555
2
Credit Purchase
50,000
27/10/2022
CUST555
2
Contract Sign
0
27/10/2022
CUST567
4
Contract Sign
0
27/10/2022
CUST567
1
Contract Sign
0
27/10/2022
CUST567
4
Repayment
-3,500
27/10/2022
CUST567
4
Repayment
-6,500
13/11/2022
CUST567
1
Repayment
-10,000
13/11/2022
CUST567
1
Repayment
-2,000
18/11/2022
CUST567
1
Contract Sign
0
18/11/2022
CUST567
1
Repayment
-3,000
I'm using the following query to extract the above table:
Select
DateOfActivity, CustomerReference, ReferenceLine, Description, ReceivableAmount
From 'Table A'
Where
DateOfActivity >= '2022-09-01'
Group by
DateOfActivity
As you can see, the table will only get bigger because more customer activity is being added. How can I change my query so that customers who have fully paid their receivable amount don't show up in this table?
The result from above script change that I am expecting is as follows:
DateOfActivity
CustomerReference
Reference Line
Description
Receivable Amount
24/10/2022
CUST567
1
Credit Purchase
20,000
24/10/2022
CUST555
2
Credit Purchase
50,000
27/10/2022
CUST555
2
Contract Sign
0
27/10/2022
CUST567
1
Contract Sign
0
13/11/2022
CUST567
1
Repayment
-10,000
13/11/2022
CUST567
1
Repayment
-2,000
18/11/2022
CUST567
1
Contract Sign
0
18/11/2022
CUST567
1
Repayment
-3,000
CUST567 Reference Line 4 has been removed because the sum of its Credit Purchase + Contract Sign + Repayment = $0. All other customers' rows are still showing up.
How can I edit the query so this is done automatically for large data? Please note the following assumptions:
Customer Reference Lines for multiple customers can be the same or different (for example, in the table above, CUST567 has two Reference Lines, 1 & 4, while CUST555 only has one Reference Line, 2).
The data is removed for customers based on the receivable amount coming down to nil (so all rows for that CustomerReference & ReferenceLine are removed).
Thanks in advance
|
[
"What are you trying to achieve here? Are the references to large data because something is getting slow or something?\nI'd suggest you'd want to make a view for your query (google the syntax for your flavour of SQL, but should be something like create view as {your query};)\nThen you can query that view with an additional where \"Receivable Amount !=0\nYou could also make a subquery, i.e select * from ({your query}) as x where x.\"Receivable Amount !=0\nBut if you've written the sql you've gotten to so far I assume you know that? So is there an issue in terms of performance or some such that's caused you to ask the question?\n",
"As I understand the question, we need to build the sum of the amounts grouped by the reference line. So this query will get those reference lines having this sum of amounts = 0:\nSELECT \nReferenceLine\nFROM tableA\nGROUP BY ReferenceLine\nHAVING SUM(ReceivableAmount) = 0;\n\nThen, we can use this query as a subquery and fetch all other entries not having such a reference line:\nSELECT \nDateOfActivity, CustomerReference, ReferenceLine,\nDescription, ReceivableAmount\nFROM tableA\nWHERE ReferenceLine NOT IN \n(SELECT \nReferenceLine\nFROM tableA\nGROUP BY ReferenceLine\nHAVING SUM(ReceivableAmount) = 0);\n\nThis will produce the expected outcome.\nWe could of course also use IN and amount <> 0 instead of NOT IN and amount = 0:\nSELECT \nDateOfActivity, CustomerReference, ReferenceLine,\nDescription, ReceivableAmount\nFROM tableA\nWHERE ReferenceLine IN \n(SELECT \nReferenceLine\nFROM tableA\nGROUP BY ReferenceLine\nHAVING SUM(ReceivableAmount) <> 0);\n\nThis will create the same result. Just take what you prefer.\nTry out: db<>fiddle\nAn important note: In your question and comments, you are talking about the sum for the reference line is \"nil\" or \"null\". This is untrue. It is zero.\nNULL or NIL would mean there is no amount, so this is something completely different!\nAll your amounts are NOT NULL and the sum of them is also NOT NULL, but zero for reference line 4.\nEDIT: If it should also be grouped by CustomerReference, we can extend the previous queries like this:\nSELECT \nDateOfActivity, CustomerReference, ReferenceLine,\nDescription, ReceivableAmount\nFROM tableA\nWHERE (CustomerReference, ReferenceLine) NOT IN \n(SELECT \nCustomerReference, ReferenceLine\nFROM tableA\nGROUP BY CustomerReference, ReferenceLine\nHAVING SUM(ReceivableAmount) = 0);\n\nOR\nSELECT \nDateOfActivity, CustomerReference, ReferenceLine,\nDescription, ReceivableAmount\nFROM tableA\nWHERE (CustomerReference, ReferenceLine) IN \n(SELECT \nCustomerReference, ReferenceLine\nFROM tableA\nGROUP BY CustomerReference, ReferenceLine\nHAVING SUM(ReceivableAmount) <> 0);\n\nUpdated fiddle: db<>fiddle\n"
] |
[
0,
0
] |
[] |
[] |
[
"mysql",
"remove_if",
"row",
"sql"
] |
stackoverflow_0074664001_mysql_remove_if_row_sql.txt
|
Q:
Reverse Engineering Regex in R?
I found this Regex code (in R) that can recognize the following "class" of pattern: "LETTER-NUMBER-LETTER NUMBER-LETTER-NUMBER" in a set of strings:
apply(my_string, 1, function(x) gsub('(([A-Z] ?[0-9]){3})|.', '\\1', toString(x)))
Based on the above code, it appears that the corresponding Regex "class" for "LETTER-NUMBER-LETTER NUMBER-LETTER-NUMBER" is "(([A-Z] ?[0-9]){3})|."
This leads me to my question:
Suppose I had the following input in R:
input = "A1B 2C3"
Is there some way I can determine what "class" of Regex is contained within this string? For example, does such a function exist?
output = some_function("A1B 2C3")
> output
'(([A-Z] ?[0-9]){3})|.'
I found this website (https://regex-generator.olafneumann.org/?sampleText=the%20follow%20abc&flags=i&selection=11%7CMultiple%20characters) that appears to perform a similar task - but is there a way to do this in R? For example (I am not sure if this is correct) :
string = c("s0me strIngz123", "someString3s", "S0M3 5t1ngs")
#imaginary conversion function
regex_class = some_function(string)
# output
string regex_class
1 s0me strIngz123 ^[A-Za-z0-9]+.*[a-zA-Z]+123$
2 someString3s ^[A-Za-z0-9]+$
3 S0M3 5t1ngs ^[A-Za-z0-9]+ [A-Za-z0-9]+$
Is there some way in R to determine the general "Regex Class" for different strings?
Thank you!
A:
It is not possible to determine the general "Regex class" for a given string in R. Regular expressions (Regex) are a way of defining patterns in strings, and they can be used to search for and match those patterns in other strings. However, regular expressions are not a "class" of pattern, and there is no way to automatically determine the regular expression that corresponds to a given string.
If you want to use regular expressions in R, you can use the grep() or grepl() functions to search for and match patterns in strings. You can also use the regexpr() or gregexpr() functions to extract the matched patterns from a string.
Here is an example of how you could use these functions to search for the pattern "LETTER-NUMBER-LETTER NUMBER-LETTER-NUMBER" in a set of strings:
# Define the regular expression pattern
pattern <- "(([A-Z] ?[0-9]){3})|."
# Define the set of strings to search
strings <- c("A1B 2C3", "D4E 5F6", "G7H 8I9")
# Use the grep() function to search for the pattern in the strings
matches <- grep(pattern, strings)
# Print the results
print(matches)
# Output: [1] 1 2 3 (the "|." alternative matches any single character, so every string matches)
In this example, the grep() function returns the indices of the strings in strings that contain the pattern defined by pattern.
If you want to extract the matched patterns from the strings, you can use the gregexpr() function instead of grep():
# Use the gregexpr() function to extract the matched patterns from the strings
matches <- gregexpr(pattern, strings)
# Print the results
print(matches)
# Output: [[1]]
# [1] 1
#
# [[2]]
# [1] 1
#
# [[3]]
# [1] 1
In this example, the gregexpr() function returns a list of integer vectors, where each vector contains the starting positions of the matched patterns in the corresponding string (a match.length attribute holds their lengths).
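That said, if a rough per-string pattern is all you need, a naive sketch is possible. This is an assumption-laden heuristic, not a general solution: it only handles letters, digits and spaces, and does not escape other characters:
# map each character to a class marker, then collapse runs into character classes
naive_regex_class <- function(x) {
  p <- gsub("[A-Z]", "U", x)
  p <- gsub("[a-z]", "l", p)
  p <- gsub("[0-9]", "d", p)
  p <- gsub("U+", "[A-Z]+", p)
  p <- gsub("l+", "[a-z]+", p)
  p <- gsub("d+", "[0-9]+", p)
  paste0("^", p, "$")
}

naive_regex_class("S0M3 5t1ngs")
# "^[A-Z]+[0-9]+[A-Z]+[0-9]+ [0-9]+[a-z]+[0-9]+[a-z]+$"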
|
Reverse Engineering Regex in R?
|
I found this Regex code (in R) that can recognize the following "class" of pattern: "LETTER-NUMBER-LETTER NUMBER-LETTER-NUMBER" in a set of strings:
apply(my_string, 1, function(x) gsub('(([A-Z] ?[0-9]){3})|.', '\\1', toString(x)))
Based on the above code, it appears that the corresponding Regex "class" for "LETTER-NUMBER-LETTER NUMBER-LETTER-NUMBER" is "(([A-Z] ?[0-9]){3})|."
This leads me to my question:
Suppose I had the following input in R:
input = "A1B 2C3"
Is there some way I can determine what "class" of Regex is contained within this string? For example, does such a function exist?
output = some_function("A1B 2C3")
> output
'(([A-Z] ?[0-9]){3})|.'
I found this website (https://regex-generator.olafneumann.org/?sampleText=the%20follow%20abc&flags=i&selection=11%7CMultiple%20characters) that appears to perform a similar task - but is there a way to do this in R? For example (I am not sure if this is correct) :
string = c("s0me strIngz123", "someString3s", "S0M3 5t1ngs")
#imaginary conversion function
regex_class = some_function(string)
# output
string regex_class
1 s0me strIngz123 ^[A-Za-z0-9]+.*[a-zA-Z]+123$
2 someString3s ^[A-Za-z0-9]+$
3 S0M3 5t1ngs ^[A-Za-z0-9]+ [A-Za-z0-9]+$
Is there some way in R to determine the general "Regex Class" for different strings?
Thank you!
|
[
"It is not possible to determine the general \"Regex class\" for a given string in R. Regular expressions (Regex) are a way of defining patterns in strings, and they can be used to search for and match those patterns in other strings. However, regular expressions are not a \"class\" of pattern, and there is no way to automatically determine the regular expression that corresponds to a given string.\nIf you want to use regular expressions in R, you can use the grep() or grepl() functions to search for and match patterns in strings. You can also use the regexpr() or gregexpr() functions to extract the matched patterns from a string.\nHere is an example of how you could use these functions to search for the pattern \"LETTER-NUMBER-LETTER NUMBER-LETTER-NUMBER\" in a set of strings:\n# Define the regular expression pattern\npattern <- \"(([A-Z] ?[0-9]){3})|.\"\n\n# Define the set of strings to search\nstrings <- c(\"A1B 2C3\", \"D4E 5F6\", \"G7H 8I9\")\n\n# Use the grep() function to search for the pattern in the strings\nmatches <- grep(pattern, strings)\n\n# Print the results\nprint(matches)\n# Output: [1] 1 3\n\nIn this example, the grep() function returns the indices of the strings in strings that contain the pattern defined by pattern.\nIf you want to extract the matched patterns from the strings, you can use the gregexpr() function instead of grep():\n# Use the gregexpr() function to extract the matched patterns from the strings\nmatches <- gregexpr(pattern, strings)\n\n# Print the results\nprint(matches)\n# Output: [[1]]\n# [1] 1 15\n#\n# [[2]]\n# [1] -1\n#\n# [[3]]\n# [1] 1 15\n\nIn this example, the gregexpr() function returns a list of integer vectors, where each vector contains the starting and ending indices of the matched patterns in the corresponding string.\n"
] |
[
4
] |
[] |
[] |
[
"r",
"regex"
] |
stackoverflow_0074624288_r_regex.txt
|
Q:
Can't find main(String[]) method in class: TapeDeck. The main method is in the other class which runs the program
I have two classes. When I put the class TapeDeckTestDrive first in the text editor, it runs fine. When I put the TapeDeck class first, it gives the error that it can't find the main class. Why is this?
class TapeDeck {
boolean canRecord = false;
void playTape(){
System.out.println("tape playing");
}
void recordTape(){
System.out.println("tape recording");
}
}
class TapeDeckcTestDrive{
public static void main(String[] args){
TapeDeck t = new TapeDeck();
t.canRecord = true;
t.playTape();
if (t.canRecord == true) {
t.recordTape();
}
}
}
ERROR ON THIS FORMAT
VS
FOLLOWING WORKS FINE:
class TapeDeckcTestDrive{
public static void main(String[] args){
TapeDeck t = new TapeDeck();
t.canRecord = true;
t.playTape();
if (t.canRecord == true) {
t.recordTape();
}
}
}
class TapeDeck {
boolean canRecord = false;
void playTape(){
System.out.println("tape playing");
}
void recordTape(){
System.out.println("tape recording");
}
}
A:
After you compile the code using the command:
javac fileName.java
Run the java .class file by only specifying fileName without the .java extension
java fileName
If you use fileName.java, it won't run the .class file; it will try to interpret the .java file instead. If you want to interpret a .java file this way, the first class declared in the file must contain the main(String[]) method.
A:
First, You have to compile the File by using javac.
Then, You have to Run the file.
Classname where main is written.
javac filename.java
java classname
A:
You can run the Java program in two ways.
Directly run the Java program with:
java example_program.java
In this mode, compilation and execution happen at runtime: byte code is generated and executed immediately (it works like an interpreter), so you must put the class containing the main method first, followed by the other classes.
Note:
No .class file is generated; the byte code is produced internally and executed, so the programmer cannot view a class file.
In the second way, you first compile:
javac example_program.java
This generates example_program.class. Then execute the class file with:
java example_program
Here, the order in which the classes are written doesn't matter; you can write them in any order and it will work fine.
A:
I split it into two files and added public to the classes/methods as well as to the boolean. Now the code runs.
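For illustration, a minimal sketch of that two-file split (file names are assumed to match the public class names):
// TapeDeck.java
public class TapeDeck {
    public boolean canRecord = false;
    public void playTape() { System.out.println("tape playing"); }
    public void recordTape() { System.out.println("tape recording"); }
}

// TapeDeckTestDrive.java
public class TapeDeckTestDrive {
    public static void main(String[] args) {
        TapeDeck t = new TapeDeck();
        t.canRecord = true;
        t.playTape();
        if (t.canRecord) {
            t.recordTape();
        }
    }
}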
A:
With the single-file source launcher in some JDKs, the JVM looks for the entry-point class first, so it needs to be written first, before the rest of the code. As the main function is our entry point, it must come first in the file.
A:
Steps:
You have to compile the file using javac, then run it.
Run the class where main is written:
javac filename.java
java classname
It is causing an error because of:
class TapeDeck {
boolean canRecord = false;
void playTape(){
System.out.println("tape playing");
}
void recordTape(){
System.out.println("tape recording");
}
}
class TapeDeckcTestDrive{
public static void main(String[] args){
TapeDeck t = new TapeDeck();
t.canRecord = true;
t.playTape();
if (t.canRecord == true) {
t.recordTape();
}
}
}
Your TapeDeck class doesn't contain main(String[]).
A:
I got your problem.
First of all, check your classpath that you have set in Environment Variables
Follow the following steps:
Step 1: Right click on This PC --> Advanced system settings --> Environment Variables
Step 2: Edit the classpath variable and add a new path or edit the old path you have set. The path should be: C:\Program Files\Java_Home\jdk..\lib;.;
Note: the "." after the semicolon (;) is required.
Step 3: Close the CMD and open it again.
Step 4: Now compile your file using the javac command: javac FileName.java
Step 5: Run your code using the java command: java ClassName
And there you go...
|
Can't find main(String[]) method in class: TapeDeck. The main method is in the other class which runs the program
|
I have two classes. When I put the class TapeDeckTestDrive first in the text editor, it runs fine. When I put the TapeDeck class first, it gives the error that it can't find the main class. Why is this?
class TapeDeck {
boolean canRecord = false;
void playTape(){
System.out.println("tape playing");
}
void recordTape(){
System.out.println("tape recording");
}
}
class TapeDeckcTestDrive{
public static void main(String[] args){
TapeDeck t = new TapeDeck();
t.canRecord = true;
t.playTape();
if (t.canRecord == true) {
t.recordTape();
}
}
}
ERROR ON THIS FORMAT
VS
FOLLOWING WORKS FINE:
class TapeDeckcTestDrive{
public static void main(String[] args){
TapeDeck t = new TapeDeck();
t.canRecord = true;
t.playTape();
if (t.canRecord == true) {
t.recordTape();
}
}
}
class TapeDeck {
boolean canRecord = false;
void playTape(){
System.out.println("tape playing");
}
void recordTape(){
System.out.println("tape recording");
}
}
|
[
"After you compile the code using the command:\njavac fileName.java\n\nRun the java .class file by only specifying fileName without the .java extension\njava fileName\n\nif you use fileName.java it won't run the specific .class file; it will try to interpret the .java file. if you want to interpret a .java file then parent class must contain the main(String[]) method.\n",
"First, You have to compile the File by using javac.\nThen, You have to Run the file.\nClassname where main is written.\njavac filename.java\njava classname\n\n",
"You Can Run the java program in two ways.\n\nDirectly run the java program by\n java example_program.java\n\nIn this type compilation and Execution happens at runtime. That is\nByte codes is generated and executed immediately(works as a interpreter)\nSo,You must use the superclass(Containing the main method) at first followed by other\ncompound classes.\n\n\nNote:\nNo .class file will generate. That means, it will generate byte code internally and will execute. Programmer's cannot view the class file.\n\nIn Second type, First, you should compile,\n javac example_program.java \n\n\n\nIt will generate the example_program.class . Then, Execute the class file using,\n java example_program\n\nHere, the order of writing classes doesn't impact. you can write the classes in any order. it will work fine.\n",
"I split it into two files and added public to the classes/methods as well as the boolean. Now the code runs.\n",
"In some JDK's , JVM looks after the entry point function first due to which it need to be written first then the rest of the code. As main function is our entry point function it must be written first.\n",
"Steps 1. \n--You have to compile the File by using javac. Then, You have to Run the file.\n\n--Classname where main is written.\n\n-- javac filename.java\n-- java classname\n\n\nIt causing error due to:-\nclass TapeDeck {\n boolean canRecord = false;\n void playTape(){\n System.out.println(\"tape playing\");\n }\n void recordTape(){\n System.out.println(\"tape recording\");\n }\n}\n\nclass TapeDeckcTestDrive{\n public static void main(String[] args){\n TapeDeck t = new TapeDeck();\n t.canRecord = true;\n t.playTape();\n\n if (t.canRecord == true) {\n t.recordTape();\n }\n }\n}\n\n--Your tapedeck class doesn't main (String[]).\n\n",
"I got your problem.\nFirst of all, check your classpath that you have set in Environment Variables\nFollow the following steps:\n***Step 1: *** Right Click on This PC --> Advanced system settings --> Environment Variables\n***Step 2: *** Edit the variable classpath and add a new path or edit your old path that you have set. The path should be: C:\\Program Files\\Java_Home\\jdk..\\lib;.;\nNote: The \".\" is must after a semicolon (;).\n***Step 3: *** Close the CMD and open it again.\n***Step 4: *** Now compile your using javac command: javac FileName.java\n***Step 5: *** Run your code using java command: java ClassName\nAnd there you go...\n"
] |
[
8,
3,
2,
0,
0,
0,
0
] |
[] |
[] |
[
"java",
"methods",
"program_entry_point"
] |
stackoverflow_0055794907_java_methods_program_entry_point.txt
|
Q:
CKEditor 5 append HTML
I'm building a CMS system for my website. The content of the page is editable with CKEditor. I wanted to make it possible to insert images that are already on the server, (so no file uploading with CKEditor). I tried multiple ways to do this and I've been looking for similar problem but I can't seem to figure it out.
I want to do it like this:
First u have the ckeditor
Than underneath that you see a link which opens up a collapse with an overview of images on the server. If you click on one of the images it appears in the editor.
Sounds easy, seems difficult to make. Please help me out here.
A:
I think you can use this:
var editor;
ClassicEditor.create(document.querySelector("#content"), {})
  .then(newEditor => {
    window.editor = editor = newEditor;
  });
/* content is your textarea id */
After that, you can use editor to replace the HTML, like this:
document.addEventListener("wayYouSelectImage", function () {
  editor.data.set("<img src='yourlink' />");
});
This code removes everything that exists in your CKEditor and then adds the new HTML, but if you want to append your new HTML to the end of your current HTML, do this:
document.addEventListener("wayYouSelectImage", function () {
  editor.data.set(editor.getData() + "<img src='yourlink' />");
});
|
CKEditor 5 append HTML
|
I'm building a CMS system for my website. The content of the page is editable with CKEditor. I wanted to make it possible to insert images that are already on the server, (so no file uploading with CKEditor). I tried multiple ways to do this and I've been looking for similar problem but I can't seem to figure it out.
I want to do it like this:
First u have the ckeditor
Than underneath that you see a link which opens up a collapse with an overview of images on the server. If you click on one of the images it appears in the editor.
Sounds easy, seems difficult to make. Please help me out here.
|
[
"i think you can use this:\n var editor= ClassicEditor.create($(\"#content\"),{})\n.then(editor=>{\nwindow.editor=editor\n})\n\n/* content is your textarea id */\nafter that, you can use editor for replace html code, like this:\ndocument.addEventListener(\"wayYouSelectImage\", function (){\neditor.data.set(\"<img src='yourlink' />)\n})\n\nThis code remove everything that exists in your ck and then add new html, but if you want append your new html to end of your current html, do this:\ndocument.addEventListener(\"wayYouSelectImage\", function (){\n editor.data.set(editor.getData()+\"<img src='yourlink' />)\n })\n\n"
] |
[
0
] |
[] |
[] |
[
"ckeditor",
"ckeditor5",
"jquery"
] |
stackoverflow_0070769788_ckeditor_ckeditor5_jquery.txt
|
Q:
Triggering copy event with no selection on Mobile Safari
Adding an oncopy handler to an input field to get some custom behaviour with no text selected works in Safari on macOS (and other browsers), but on Safari on iPad, nothing happens when pressing ⌘+C.
Is it possible to trigger this on Safari on iPad?
Here's a simple example that doesn't work on Mobile Safari:
<input type="textbox" oncopy="alert('copy!')">
A:
According to Safari Supported Attributes, oncopy is a JavaScript delegate. Therefore, a JavaScript solution for this should work on any Safari-running device. JavaScript detects the keyboard shortcuts in this solution.
Sources: How TO - Copy Text to Clipboard , How to detect copy paste commands , Toptal keycodes library & Safari HTML Reference
// Global variable to get input value (Input element must have an id)
let justCopiedFrom = '';
// Clicked element's value into dynamic variable
function inputToVar(x) {
justCopiedFrom = x.id;
}
// Detect [⌘/Ctrl + C] & [⌘/Ctrl + V]
document.body.addEventListener("keydown",function(e){
e = e || window.event;
var key = e.which || e.keyCode; // keyCode detection
var ctrl = e.ctrlKey ? e.ctrlKey : ((key === 17) ? true : false); // ctrl detection
var cmd_h = e.metaKey ? e.metaKey : ((key === 91) ? true : false); // ⌘ detection
if ( key == 86 && ctrl || key == 86 && cmd_h ) {
// Paste shortcut detected
console.log("⌘/Ctrl + V");
} else if ( key == 67 && ctrl || key == 67 && cmd_h ) {
let valueOf = document.getElementById(justCopiedFrom);
// Select the text field
valueOf.select();
valueOf.setSelectionRange(0, 99999); // For mobile devices
navigator.clipboard.writeText(valueOf.value);
console.log(`⌘/Ctrl + C (${valueOf.value})`);
}
},false);
<!-- Input must have an Id so it can be passed with JS function (inputToVar) -->
<p><small>Add some random text here and try copy and paste keyboard shortcuts</small></p>
<input id="text-input" type="textbox" onclick="inputToVar(this);">
A:
It is not possible to trigger the copy event on Mobile Safari using JavaScript if there is no selection. This is because the copy event is only triggered when the user selects some text and then initiates the copy action, either by using the context menu or a keyboard shortcut.
If you want to copy text to the clipboard on Mobile Safari without requiring the user to make a selection, you can use the execCommand method and the copy command, as follows:
// create a textarea element
const textarea = document.createElement("textarea");
// set the text to be copied to the clipboard
textarea.value = "Hello, world!";
// add the textarea to the document
document.body.appendChild(textarea);
// focus the textarea
textarea.focus();
// select the text in the textarea
textarea.select();
// use the execCommand method to copy the text
document.execCommand("copy");
// remove the textarea from the document
document.body.removeChild(textarea);
In this code, we create a textarea element, set its value to the text that we want to copy to the clipboard, add it to the document, focus it, and then use the execCommand method with the copy command to copy the text. Finally, we remove the textarea element from the document.
Keep in mind that this method may not work on all devices and browsers, and it is not supported by all versions of Mobile Safari. It is recommended to check the compatibility of this method and use it with caution.
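Worth noting (a caveat beyond the answers here): document.execCommand is deprecated; where the asynchronous Clipboard API is available (a secure context, and often a user gesture), the same copy is a single call:
navigator.clipboard.writeText("Hello, world!");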
A:
It is not possible to trigger the oncopy event in Safari on iPad when no text is selected in the input field. This is because the oncopy event is only triggered when the user copies text from the input field, and in the case where no text is selected, there is no text to be copied.
However, you can use the onselect event to detect when the user selects text in the input field, and then use the execCommand method to copy the selected text to the clipboard. For example:
<input type="textbox" onselect="copySelectedText()">
<script>
function copySelectedText() {
// Get the selected text
var selectedText = window.getSelection().toString();
// Copy the selected text to the clipboard
document.execCommand('copy');
// Show an alert message
alert('copy!');
}
</script>
In the code above, the copySelectedText function is called when the user selects text in the input field. The function gets the selected text using the window.getSelection() method, and then uses the execCommand method to copy the selected text to the clipboard. Finally, the function shows an alert message to indicate that the text was copied.
Note that this approach will only work if the user selects some text in the input field before copying it to the clipboard. It will not work if the user attempts to copy the input field without first selecting any text.
|
Triggering copy event with no selection on Mobile Safari
|
Adding an oncopy handler to an input field to get some custom behaviour with no text selected works in Safari on macOS (and other browsers), but on Safari on iPad, nothing happens when pressing ⌘+C.
Is it possible to trigger this on Safari on iPad?
Here's a simple example that doesn't work on Mobile Safari:
<input type="textbox" oncopy="alert('copy!')">
|
[
"According to Safari Supported Attributes , oncopy is a JavaScript delegate. Therefore, JavaScript solution for this should work on any Safari running device. JavaScript detects keyboard shortcuts in this solution.\nSources: How TO - Copy Text to Clipboard , How to detect copy paste commands , Toptal keycodes library & Safari HTML Reference\n\n\n// Global variable to get input value (Input element must have an id)\nlet justCopiedFrom = '';\n\n// Clicked element's value into dynamic variable\nfunction inputToVar(x) {\n justCopiedFrom = x.id;\n}\n\n// Detect [⌘/Ctrl + C] & [⌘/Ctrl + V]\ndocument.body.addEventListener(\"keydown\",function(e){\n e = e || window.event;\n\n var key = e.which || e.keyCode; // keyCode detection\n var ctrl = e.ctrlKey ? e.ctrlKey : ((key === 17) ? true : false); // ctrl detection\n var cmd_h = e.metaKey ? e.metaKey : ((key === 91) ? true : false); // ⌘ detection\n\n if ( key == 86 && ctrl || key == 86 && cmd_h ) {\n\n // Copy the text inside the text field\n console.log(\"⌘/Ctrl + V\");\n\n } else if ( key == 67 && ctrl || key == 67 && cmd_h ) {\n let valueOf = document.getElementById(justCopiedFrom);\n\n // Select the text field\n valueOf.select();\n valueOf.setSelectionRange(0, 99999); // For mobile devices\n \n navigator.clipboard.writeText(valueOf.value);\n console.log(`⌘/Ctrl + C (${valueOf.value})`);\n }\n\n},false);\n<!-- Input must have an Id so it can be passed with JS function (inputToVar) -->\n<p><small>Add some random text here and try copy and paste keyboard shortcuts</small></p>\n<input id=\"text-input\" type=\"textbox\" onclick=\"inputToVar(this);\">\n\n\n\n",
"It is not possible to trigger the copy event on Mobile Safari using JavaScript if there is no selection. This is because the copy event is only triggered when the user selects some text and then initiates the copy action, either by using the context menu or a keyboard shortcut.\nIf you want to copy text to the clipboard on Mobile Safari without requiring the user to make a selection, you can use the execCommand method and the copy command, as follows:\n// create a textarea element\nconst textarea = document.createElement(\"textarea\");\n\n// set the text to be copied to the clipboard\ntextarea.value = \"Hello, world!\";\n\n// add the textarea to the document\ndocument.body.appendChild(textarea);\n\n// focus the textarea\ntextarea.focus();\n\n// select the text in the textarea\ntextarea.select();\n\n// use the execCommand method to copy the text\ndocument.execCommand(\"copy\");\n\n// remove the textarea from the document\ndocument.body.removeChild(textarea);\n\n\nIn this code, we create a textarea element, set its value to the text that we want to copy to the clipboard, add it to the document, focus it, and then use the execCommand method with the copy command to copy the text. Finally, we remove the textarea element from the document.\nKeep in mind that this method may not work on all devices and browsers, and it is not supported by all versions of Mobile Safari. It is recommended to check the compatibility of this method and use it with caution.\n",
"It is not possible to trigger the oncopy event in Safari on iPad when no text is selected in the input field. This is because the oncopy event is only triggered when the user copies text from the input field, and in the case where no text is selected, there is no text to be copied.\nHowever, you can use the onselect event to detect when the user selects text in the input field, and then use the execCommand method to copy the selected text to the clipboard. For example:\n<input type=\"textbox\" onselect=\"copySelectedText()\">\n\n<script>\nfunction copySelectedText() {\n // Get the selected text\n var selectedText = window.getSelection().toString();\n\n // Copy the selected text to the clipboard\n document.execCommand('copy');\n\n // Show an alert message\n alert('copy!');\n}\n</script>\n\nIn the code above, the copySelectedText function is called when the user selects text in the input field. The function gets the selected text using the window.getSelection() method, and then uses the execCommand method to copy the selected text to the clipboard. Finally, the function shows an alert message to indicate that the text was copied.\nNote that this approach will only work if the user selects some text in the input field before copying it to the clipboard. It will not work if the user attempts to copy the input field without first selecting any text.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"html",
"ipad",
"javascript",
"mobile_safari",
"safari"
] |
stackoverflow_0073190853_html_ipad_javascript_mobile_safari_safari.txt
|
Q:
Parcel JS: tree.render is not a function
Whenever I try to run production build command npm run build or npx parcel build index.html, I get this error. I have a simple html and css project, no react, no 3rd party library Why could this be happening? I have tried parcel versions 1.12.3, 1.12.4 and 1.12.5.
Here is the error:
/Users/user/Documents/HTML Apps/Project/index.html: tree.render is not a function
at /Users/user/Documents/HTML Apps/Project/node_modules/htmlnano/lib/modules/minifySvg.js:19:23
at /Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:91:45
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:26)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:17)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
at traverse (/Users/user/user/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:17)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:17)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `parcel build index.html`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/user/.npm/_logs/2021-04-14T07_44_52_872Z-debug.log
A:
Turns out you can get around this by configuring htmlnano to not minify SVG.
Add a .htmlnanorc file to your project root, with a JSON configuration object like this:
{
"minifySvg": false
}
The relevant part of the documentation is here for V1 (which actually doesn't mention the minifySvg setting) or here for V2.
A:
In my case it worked adding .htmlnanorc.js to my project root with the following:
module.exports = {
"minifySvg": false
}
A:
parcel build --no-optimize index.html
This skips Parcel's optimization step and lets you carry on.
Hope it's helpful.
A:
In my case, it worked by using an IMG tag with the SVG as the src.
You can weigh the pro's & con's yourself:
Adding vector graphics to the Web | MDN
Note: I'm not sure if doing it this way is less efficient/optimized or not.
If someone sees anything wrong with this approach please let me know :)
A:
It worked for me. Just:
parcel build index.html --no-minify
I think it is deprecated in favour of --no-optimize.
|
Parcel JS: tree.render is not a function
|
Whenever I try to run production build command npm run build or npx parcel build index.html, I get this error. I have a simple html and css project, no react, no 3rd party library Why could this be happening? I have tried parcel versions 1.12.3, 1.12.4 and 1.12.5.
Here is the error:
/Users/user/Documents/HTML Apps/Project/index.html: tree.render is not a function
at /Users/user/Documents/HTML Apps/Project/node_modules/htmlnano/lib/modules/minifySvg.js:19:23
at /Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:91:45
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:26)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:17)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
at traverse (/Users/user/user/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:17)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:105:17)
at traverse (/Users/user/Documents/HTML Apps/Project/node_modules/posthtml/lib/api.js:111:5)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `parcel build index.html`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/user/.npm/_logs/2021-04-14T07_44_52_872Z-debug.log
|
[
"Turns out you can get around this by configuring htmlnano to not minify SVG.\nAdd a .htmlnanorc file to your project root, with a JSON configuration object like this:\n{\n \"minifySvg\": false\n}\n\nThe relevant part of the documentation is here for V1 (which actually doesn't mention the minifySvg setting) or here for V2.\n",
"In my case it worked adding .htmlnanorc.js to my project root with the following:\nmodule.exports = {\n \"minifySvg\": false\n }\n\n",
"parcel build --no-optimize index.html \nIt will jump the optimization of parcel and let you going on.\nHope it’s helpful.\nvery easy way pls upvote\n",
"In my case, it worked by using an IMG tag with the SVG as the src.\nYou can weigh the pro's & con's yourself:\nAdding vector graphics to the Web | MDN\nNote: I'm not sure if doing it this way is less efficient/optimized or not.\nIf someone sees anything wrong with this approach please let me know :)\n",
"It worked for me.\njust\nparcel build index.html --no-minify\n\ni think is deprecated because of --no-optmize\n"
] |
[
24,
18,
6,
0,
0
] |
[] |
[] |
[
"html",
"javascript",
"npm",
"parceljs",
"svg"
] |
stackoverflow_0067087634_html_javascript_npm_parceljs_svg.txt
|
Q:
change permissions on a not writable
UPDATE:
I moved my question to the Ask Ubuntu community, but I cannot delete it from here... if you have an answer, please share it on the Ubuntu community, not here... Thanks
I want to make a change to a file but I can't do that because I don't have the correct permissions:
➜ ls -l pycharm64.vmoptions
-rw-r--r-- 1 root root 427 Dec 28 18:33 pycharm64.vmoptions
I tried to change the permissions with these two commands:
sudo chmod a+w pycharm64.vmoptions
and
sudo chown user:user pycharm64.vmoptions
but I get an error both times:
Read-only file system
How can I make a change to my file? (Honestly, I don't care about the owner and groups of the file... I just want to change my file anyway.)
P.S.: my OS is Ubuntu
A:
You can make a file read-only by setting the "immutable" attribute:
chattr +i [fileName]
If you want to revert it (and make the file changeable again), just change the "+" to a "-":
chattr -i [fileName]
A:
Your filesystem could be mounted as read only. You have to change that first before you can write anything to it. Changing file permissions also requires writing to the filesystem.
You may be able to mount it as read-write with a command like:
sudo mount -o remount,rw /dev/foo /mount/destination/dir
In this command you specify that you want to remount the filesystem with different options, adding the read-write (rw) capability.
If you succeed in changing the filesystem to read-write, then you should be able to change the file permissions with the commands you tried earlier.
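As a quick sketch of checking first (the mount point below is a placeholder — use the one the file actually lives under):
findmnt /path/to/mount            # the OPTIONS column shows "ro" if it is read-only
sudo mount -o remount,rw /path/to/mount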
A:
You can't edit it directly (I'm not sure about Windows).
You should edit the custom settings file instead:
Manually:
nano ~/.config/JetBrains/PyCharm2022.3/pycharm64.vmoptions
or from the IDE -- https://intellij-support.jetbrains.com/hc/en-us/articles/206544869.
|
change permissions on a not writable
|
UPDATE:
I moved my question to the Ask Ubuntu community, but I cannot delete it from here... if you have an answer, please share it on the Ubuntu community, not here... Thanks
I want to make a change to a file but I can't do that because I don't have the correct permissions:
➜ ls -l pycharm64.vmoptions
-rw-r--r-- 1 root root 427 Dec 28 18:33 pycharm64.vmoptions
I tried to change the permissions with these two commands:
sudo chmod a+w pycharm64.vmoptions
and
sudo chown user:user pycharm64.vmoptions
but I get an error both times:
Read-only file system
How can I make a change to my file? (Honestly, I don't care about the owner and groups of the file... I just want to change my file anyway.)
P.S.: my OS is Ubuntu
|
[
"You can change a file on read only by setting the \"immutable property\"\nchattr +i [fileName]\n\nIf you want to revert it just change the \"+\" for a \"-\"\nchattr -i [fileName]\n\n",
"Your filesystem could be mounted as read only. You have to change first before you can write anything to it. Changing file permissions also requires writing on the filesystem.\nYou may be able to mount it as read write with command like:\nsudo mount -o remount,rw /dev/foo /mount/destination/dir\n\nIn this command you spesify that you want to remount the filesystem with different options, adding the readwrite, rw capability.\nIf you successd in changing the filesystem to read write, then you should be able to change to file permissions with the commands you tried earlier.\n",
"You can`t edit it directly (I'm not sure about Windows).\nYou should edit custom settings file instead:\n\nManually\n\nnano ~/.config/JetBrains/PyCharm2022.3/pycharm64.vmoptions\n\n\nor from IDE -- https://intellij-support.jetbrains.com/hc/en-us/articles/206544869.\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"gnome_terminal",
"linux",
"terminal"
] |
stackoverflow_0070821508_gnome_terminal_linux_terminal.txt
|
Q:
Sum of each row and each column in python
Hi, I have more than 20 txt files that each include a matrix (9*7): 9 rows and 7 columns.
I want to find the sum of each row and each column for each matrix.
The code I have used works for one matrix; how can I use it for multiple matrices? Is there any way with Python?
import numpy as np

# Get the size m and n
m , n = 7, 9

# Function to calculate sum of each row
def row_sum(arr) :
    sum = 0
    print("\nFinding Sum of each row:\n")
    # finding the row sum
    for i in range(m) :
        for j in range(n) :
            # Add the element
            sum += arr[i][j]
        # Print the row sum
        print("Sum of the row", i, "=", sum)
        # Reset the sum
        sum = 0

# Function to calculate sum of each column
def column_sum(arr) :
    sum = 0
    print("\nFinding Sum of each column:\n")
    # finding the column sum
    for i in range(m) :
        for j in range(n) :
            # Add the element
            sum += arr[j][i]
        # Print the column sum
        print("Sum of the column", i, "=", sum)
        # Reset the sum
        sum = 0

# Driver code
if __name__ == "__main__" :
    arr = np.zeros((4, 4))
    # Get the matrix elements
    x = 1
    for i in range(m) :
        for j in range(n) :
            arr[i][j] = x
            x += 1
    # Get each row sum
    row_sum(arr)
    # Get each column sum
    column_sum(arr)
And I want the output of the sums to be one vector per matrix, something like this:
[ 1,2,3,4,5,6,7,8,9,10,...,16]
A:
To calculate the row and column sums for multiple matrices, you can create a function that takes a list of matrices and calculates the row and column sums for each matrix in the list. Here is an example:
import numpy as np

# Get the size m and n
m, n = 7, 9

# Function to calculate sum of each row
def row_sum(arr):
    sums = []
    for i in range(m):
        row_sum = 0
        for j in range(n):
            row_sum += arr[i][j]
        sums.append(row_sum)
    return sums

# Function to calculate sum of each column
def column_sum(arr):
    sums = []
    for i in range(n):          # iterate over the n columns
        column_sum = 0
        for j in range(m):      # sum down the m rows
            column_sum += arr[j][i]
        sums.append(column_sum)
    return sums

# Driver code
if __name__ == "__main__":
    arr = np.zeros((m, n))  # sized to match the m x n loops below

    # Get the matrix elements
    x = 1
    for i in range(m):
        for j in range(n):
            arr[i][j] = x
            x += 1

    # Get each row sum
    row_sums = row_sum(arr)
    print("Row sums:", row_sums)

    # Get each column sum
    column_sums = column_sum(arr)
    print("Column sums:", column_sums)
To find the row and column sums for multiple matrices, you can loop through the matrices and calculate the row and column sums for each one, storing the results in a list. For example:
# Get the size m and n
m, n = 7, 9

# Function to calculate sum of each row
def row_sum(arr):
    sums = []
    for i in range(m):
        row_sum = 0
        for j in range(n):
            row_sum += arr[i][j]
        sums.append(row_sum)
    return sums

# Function to calculate sum of each column
def column_sum(arr):
    sums = []
    for i in range(n):          # iterate over the n columns
        column_sum = 0
        for j in range(m):      # sum down the m rows
            column_sum += arr[j][i]
        sums.append(column_sum)
    return sums
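The loop over the files themselves is missing above, so here is a minimal sketch of that part (the file pattern is a made-up placeholder, and np.loadtxt assumes whitespace-separated values — adjust both to your files):
import glob
import numpy as np

for path in sorted(glob.glob("matrix_*.txt")):  # hypothetical naming pattern
    arr = np.loadtxt(path)           # reads one 9x7 matrix per file
    row_sums = arr.sum(axis=1)       # one sum per row
    col_sums = arr.sum(axis=0)       # one sum per column
    # one flat vector per matrix, as in the desired output
    result = row_sums.tolist() + col_sums.tolist()
    print(path, result)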
|
Sum of each row and each column in python
|
Hi, I have more than 20 txt files that each include a matrix (9*7): 9 rows and 7 columns.
I want to find the sum of each row and each column for each matrix.
The code I have used works for one matrix; how can I use it for multiple matrices? Is there any way with Python?
import numpy as np

# Get the size m and n
m , n = 7, 9

# Function to calculate sum of each row
def row_sum(arr) :
    sum = 0
    print("\nFinding Sum of each row:\n")
    # finding the row sum
    for i in range(m) :
        for j in range(n) :
            # Add the element
            sum += arr[i][j]
        # Print the row sum
        print("Sum of the row", i, "=", sum)
        # Reset the sum
        sum = 0

# Function to calculate sum of each column
def column_sum(arr) :
    sum = 0
    print("\nFinding Sum of each column:\n")
    # finding the column sum
    for i in range(m) :
        for j in range(n) :
            # Add the element
            sum += arr[j][i]
        # Print the column sum
        print("Sum of the column", i, "=", sum)
        # Reset the sum
        sum = 0

# Driver code
if __name__ == "__main__" :
    arr = np.zeros((4, 4))
    # Get the matrix elements
    x = 1
    for i in range(m) :
        for j in range(n) :
            arr[i][j] = x
            x += 1
    # Get each row sum
    row_sum(arr)
    # Get each column sum
    column_sum(arr)
And I want the output of the sums to be one vector per matrix, something like this:
[ 1,2,3,4,5,6,7,8,9,10,...,16]
|
[
"To calculate the row and column sums for multiple matrices, you can create a function that takes a list of matrices and calculates the row and column sums for each matrix in the list. Here is an example:\nimport numpy as np\n\n# Get the size m and n\nm, n = 7, 9\n\n# Function to calculate sum of each row\ndef row_sum(arr):\n sums = []\n for i in range(m):\n row_sum = 0\n for j in range(n):\n row_sum += arr[i][j]\n sums.append(row_sum)\n return sums\n\n# Function to calculate sum of each column\ndef column_sum(arr):\n sums = []\n for i in range(m):\n column_sum = 0\n for j in range(n):\n column_sum += arr[j][i]\n sums.append(column_sum)\n return sums\n\n# Driver code\nif __name__ == \"__main__\":\n arr = np.zeros((4, 4))\n\n # Get the matrix elements\n x = 1\n for i in range(m):\n for j in range(n):\n arr[i][j] = x\n x += 1\n\n # Get each row sum\n row_sums = row_sum(arr)\n print(\"Row sums:\", row_sums)\n\n # Get each column sum\n column_sums = column_sum(arr)\n print(\"Column sums:\", column_sums)\n\nTo find the row and column sums for multiple matrices, you can loop through the matrices and calculate the row and column sums for each one, storing the results in a list. For example:\n# Get the size m and n\nm, n = 7, 9\n\n# Function to calculate sum of each row\ndef row_sum(arr):\n sums = []\n for i in range(m):\n row_sum = 0\n for j in range(n):\n row_sum += arr[i][j]\n sums.append(row_sum)\n return sums\n\n# Function to calculate sum of each column\ndef column_sum(arr):\n sums = []\n for i in range(m):\n column_sum = 0\n for j in range(n):\n column_sum += arr[j][i]\n sums.append(column_sum)\n return sums\n\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"matrix",
"python"
] |
stackoverflow_0074664693_matrix_python.txt
|
Q:
__init__() got an unexpected keyword argument 'required' - Rest framework model serializer
This is my code
from rest_framework import serializers
from django.contrib.auth import get_user_model
User = get_user_model()
class UserSerializer(serializers.ModelSerializer):
username = serializers.Field(source="username", required = False)
class Meta:
model = User
fields = ('first_name', 'last_name', 'username')
It seems so straightforward. What is the issue?
A:
Change it to CharField.
username = serializers.CharField(source="username", required = False)
A:
Change it to CharField and add allow_blank=True
username = serializers.CharField(source="username",
required = False,
allow_blank=True)
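For context, a minimal sketch of the whole serializer with that change applied (note: when source equals the field name, recent DRF versions expect you to drop the source argument entirely):
from rest_framework import serializers
from django.contrib.auth import get_user_model

User = get_user_model()

class UserSerializer(serializers.ModelSerializer):
    username = serializers.CharField(required=False, allow_blank=True)

    class Meta:
        model = User
        fields = ('first_name', 'last_name', 'username')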
|
__init__() got an unexpected keyword argument 'required' - Rest framework model serializer
|
This is my code
from rest_framework import serializers
from django.contrib.auth import get_user_model
User = get_user_model()
class UserSerializer(serializers.ModelSerializer):
username = serializers.Field(source="username", required = False)
class Meta:
model = User
fields = ('first_name', 'last_name', 'username')
It seems so straightforward. What is the issue?
|
[
"Change it to CharField. \nusername = serializers.CharField(source=\"username\", required = False) \n\n",
"Change it to CharField and add allow_blank=True\nusername = serializers.CharField(source=\"username\", \n required = False,\n allow_blank=True)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"django_rest_framework"
] |
stackoverflow_0028235184_django_django_rest_framework.txt
|
Q:
update column to two based on condition
I am trying to modify ONE column: I want to set some rows to true and convert the others to false.
update products set on_sale=False where status=1 and seller=test;
update products set on_sale=true Where price > 100 and status=1 and seller=test;
the above works, but I believe it can be done in 1 query, I.e something like this
\\ python syntax for the if condition
update prodcuts set on_sale=(True if price > 100 else False) WHERE status=1 and seller=test
A:
You could do a single update with the help of a CASE expression:
UPDATE products
SET on_sale = CASE WHEN price > 100 THEN True ELSE False END
WHERE status = 1 AND seller = test;
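Since on_sale is boolean, in PostgreSQL the comparison result can also be assigned directly — an equivalent, slightly shorter form of the same update:
UPDATE products
SET on_sale = (price > 100)
WHERE status = 1 AND seller = test;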
|
update column to two based on condition
|
I am trying to modify ONE column: I want to set some rows to true and convert the others to false.
update products set on_sale=False where status=1 and seller=test;
update products set on_sale=true Where price > 100 and status=1 and seller=test;
the above works, but I believe it can be done in 1 query, I.e something like this
\\ python syntax for the if condition
update prodcuts set on_sale=(True if price > 100 else False) WHERE status=1 and seller=test
|
[
"You could do a single update with the help of a CASE expression:\nUPDATE products\nSET on_sale = CASE WHEN price > 100 THEN True ELSE False END\nWHERE status = 1 AND seller = test;\n\n"
] |
[
1
] |
[] |
[] |
[
"postgresql",
"python"
] |
stackoverflow_0074664787_postgresql_python.txt
|
Q:
ngrok killing a tunnel from windows 7 command line
I'm trying to use ngrok to forward my app, currently hosted on localhost:3602, to my development partner.
I've done this many times in the past successfully, simply by typing in
ngrok http 3602
I get back a url that he can conntect to. But now when I type that in I get the following error message:
Tunnel session failed. Your account is limited to 1 simultaneous ngrok client session.
Active ngrok client sessions in region 'us':
- f21bd0dbe67928069054c733a5e11f88 (54.80.69.18)
ERR_NGROK_108
Obviously I must have an existing tunnel session running somewhere.
My problem is I have no idea where to find that existing tunnel session and how to terminate it. It does not exist as either a running application, process or service in the task manager, and I can find no syntax in the documentation for how to terminate a tunnel session. I've tried rebooting my machine to no effect, which tells me this is probably not a local problem, but rather something running on the ngrok site linked to my account, yet nothing I can find in my account settings indicates anything helpful.
Can anyone provide the necessary command to clear up this problem. Thanks.
A:
for the Windows version:
tskill /A ngrok
A:
For Linux/Mac
killall ngrok
This command is an Unix command. In Windows maybe you can open the Task Manager and close all ngrok processes.
A:
This answer is not about killing tunnel, but about a possible solution to the described problem with ERR_NGROK_108.
https://dashboard.ngrok.com/get-started/setup
describes a simple plan for getting started with ngrok.
If you execute the second step you will have a file ngrok.yaml (In my case path was: C:\Users\Mi\ .ngrok2\ngrok.yml).
And after that executing ngrok http 80 will provide the described error ERR_NGROK_108.
Solution:
Skip the second step. Execute ngrok http 80 without previous ngrok authtoken
If you have already executed this step, delete the file ngrok.yml
This approach solved my problem with ERR_NGROK_108.
A:
on your ngrok prompt just run this command
taskkill /f /im ngrok.exe
A:
It seems like ngrok's npm package has a (JavaScript) function for that:
const ngrok = require('ngrok');
ngrok.kill();
A:
If you are limited to one session — like I was — then you may have created an account with ngrok and signed in on your machine. That creates a file:
C:\Users\<name>\.ngrok2\ngrok.yml
ngrok uses this to limit your client; simply delete this file.
A:
On Windows(cmd):
taskkill /f /im ngrok.exe
|
ngrok killing a tunnel from windows 7 command line
|
I'm trying to use ngrok to forward my app, currently hosted on localhost:3602, to my development partner.
I've done this many times in the past successfully, simply by typing in
ngrok http 3602
I get back a url that he can conntect to. But now when I type that in I get the following error message:
Tunnel session failed. Your account is limited to 1 simultaneous ngrok client session.
Active ngrok client sessions in region 'us':
- f21bd0dbe67928069054c733a5e11f88 (54.80.69.18)
ERR_NGROK_108
Obviously I must have an existing tunnel session running somewhere.
My problem is I have no idea where to find that existing tunnel session and how to terminate it. It does not exist as either a running application, process or service in the task manager, and I can find no syntax in the documentation for how to terminate a tunnel session. I've tried rebooting my machine to no effect, which tells me this is probably not a local problem, but rather something running on the ngrok site linked to my account, yet nothing I can find in my account settings indicates anything helpful.
Can anyone provide the necessary command to clear up this problem. Thanks.
|
[
"for window version:\ntskill /A ngrok\n\n\n",
"For Linux/Mac\nkillall ngrok\n\nThis command is an Unix command. In Windows maybe you can open the Task Manager and close all ngrok processes.\n",
"This answer is not about killing tunnel, but about a possible solution to the described problem with ERR_NGROK_108.\nhttps://dashboard.ngrok.com/get-started/setup\ndescribes a simple plan for getting started with ngrok.\n\nIf you execute the second step you will have a file ngrok.yaml (In my case path was: C:\\Users\\Mi\\ .ngrok2\\ngrok.yml).\nAnd after that executing ngrok http 80 will provide the described error ERR_NGROK_108.\nSolution:\n\nSkip the second step. Execute ngrok http 80 without previous ngrok authtoken\nIf you have already executed this step, delete the file ngrok.yml\n\nThis approach solved my problem with ERR_NGROK_108.\n",
"on your ngrok prompt just run this command\ntaskkill /f /im ngrok.exe\n",
"It seems like ngrok got a (JavaScript) function for that: \nconst ngrok = require('ngrok');\nngrok().kill();\n\n",
"If you are limited to one session — like I was. Then you may have created an account with ngrok and signed in with your machine. And it'll create a file:\nC:\\Users\\<name>\\.ngrok2\\ngrok.yml\n\nIt uses this to limit your client, simply delete this file.\n",
"On Windows(cmd):\ntaskkill /f /im ngrok.exe\n\n\n"
] |
[
25,
6,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"ngrok",
"windows"
] |
stackoverflow_0051865000_ngrok_windows.txt
|
Q:
Random ID function. Length of ID and amount of Id through prompt()
I am trying to build a function that should output x amounts of IDs with a y length. The amount of IDs and the length of the IDs shall be input from the user through prompt.
Here is what I have tried so far:
function userIdGenerator() {
let amountOfId = prompt('Please enter the amount of IDs')
let lengthOfId = prompt('Please enter the lenght of your ID(s)')
let userId = ''
let userIds = []
let stringValues ='ABCDEFGHIJKLMNOabcdefghijklmnopqrstuvwxyzPQRSTUVWXYZ0123456789'
let numOfChar = stringValues.length
for(let i = 0; i < amountOfId; i++){
for(let i = 0; i < lengthOfId; i++){
if( i< lengthOfId ){
userId += stringValues.charAt(Math.round(Math.random() * numOfChar))
}else{
userIds.push(userId)
}
}
}
console.log(userIds)
}
I get an empty array as output. When I delete the else statement and console.log(userId), I get a string that has the length of x*y, so I wonder how I can improve this function.
Thanks for the help,
Willy
A:
A few things I changed in the comments.
function userIdGenerator() {
let amountOfId = prompt('Please enter the amount of IDs')
  let lengthOfId = prompt('Please enter the length of your ID(s)')
let userId = "";
let userIds = [];
let stringValues =
"ABCDEFGHIJKLMNOabcdefghijklmnopqrstuvwxyzPQRSTUVWXYZ0123456789";
let numOfChar = stringValues.length;
for (let i = 0; i < amountOfId; i++) {
    // you need to reset userId each time you start building a new one; otherwise you get one string with the length of x*y
userId = "";
for (let i = 0; i < lengthOfId; i++) {
      // there is no need for an if statement on i < lengthOfId; by definition of your for loop it stops before that's the case
      userId += stringValues.charAt(Math.floor(Math.random() * numOfChar)); // Math.floor (not round) keeps the index in range; round could hit numOfChar, and charAt would return ""
}
// you need to push only once you're done adding all the characters
userIds.push(userId);
}
console.log(userIds);
}
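For comparison, a more compact variant (just a sketch; it also converts the prompt inputs to numbers explicitly):
function userIdGeneratorCompact() {
  const amountOfId = Number(prompt('Please enter the amount of IDs'));
  const lengthOfId = Number(prompt('Please enter the length of your ID(s)'));
  const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  // Build each ID by sampling lengthOfId random characters from chars
  const userIds = Array.from({ length: amountOfId }, () =>
    Array.from({ length: lengthOfId }, () => chars[Math.floor(Math.random() * chars.length)]).join('')
  );
  console.log(userIds);
}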
|
Random ID function. Length of ID and amount of Id through prompt()
|
i try to build a function that should output x-amounts of ids with a y-lenght. The amount of Ids and length of Ids shall be input from the user through prompt.
Here is what i have tried so far:
function userIdGenerator() {
let amountOfId = prompt('Please enter the amount of IDs')
let lengthOfId = prompt('Please enter the lenght of your ID(s)')
let userId = ''
let userIds = []
let stringValues ='ABCDEFGHIJKLMNOabcdefghijklmnopqrstuvwxyzPQRSTUVWXYZ0123456789'
let numOfChar = stringValues.length
for(let i = 0; i < amountOfId; i++){
for(let i = 0; i < lengthOfId; i++){
if( i< lengthOfId ){
userId += stringValues.charAt(Math.round(Math.random() * numOfChar))
}else{
userIds.push(userId)
}
}
}
console.log(userIds)
}
I get an empty array as output. When i delete the else-statement and console.log(userId) i get a string that has the lenght of x*y so i wander how i can improve this function.
Thanks for help,
Willy
|
[
"A few things I changed in the comments.\nfunction userIdGenerator() {\n let amountOfId = prompt('Please enter the amount of IDs')\n let lengthOfId = prompt('Please enter the lenght of your ID(s)')\n let userId = \"\";\n let userIds = [];\n let stringValues =\n \"ABCDEFGHIJKLMNOabcdefghijklmnopqrstuvwxyzPQRSTUVWXYZ0123456789\";\n let numOfChar = stringValues.length;\n\n for (let i = 0; i < amountOfId; i++) {\n// you need to reset the userId each time you're starting to build a new one, otherwise that's why you'd get a string that has the lenght of x*y\n userId = \"\";\n for (let i = 0; i < lengthOfId; i++) {\n// there is no need for a if statement on i < lengthOfId, since by definition of your for loop, it's gonna stop before it's the case.\n userId += stringValues.charAt(Math.round(Math.random() * numOfChar));\n }\n// you need to push only once you're done adding all the characters\n userIds.push(userId);\n }\n console.log(userIds);\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"javascript"
] |
stackoverflow_0074664792_arrays_javascript.txt
|
Q:
2 websites, WordPress and HTML, on one domain
I have a website made with WordPress, and it is on the main domain, for example test.com. From this WordPress application I need only the production part, which will live on the subdomain product.test.com, while the main website will be plain HTML and CSS. So the question is: can I do it like that? And can the WordPress and HTML websites stay on different hostings?
And should I just move the WordPress application to product.test.com and then put the main HTML/CSS website on the main domain?
A:
Yes, it is doable
Just go to manage domain > setup DNS > change nameservers.
There you can put your customized DNS or A records etc. whenever required.
It may vary from host to host, but the core part remains the same.
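As an illustrative sketch (the IPs below are placeholders, not real values), the zone records could look like:
test.com.          A    203.0.113.10   ; main domain -> host serving the static HTML/CSS site
product.test.com.  A    203.0.113.20   ; subdomain -> host serving the WordPress install
The two A records can point to entirely different hosting providers.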
|
2 websites, WordPress and HTML, on one domain
|
I have a website that is made on wordpress, and it is on the main domain for example test.com. From this wordpress application I need only production part and it will be on the subdomain product.test.com, and the main website will be on html and css. So the question is, can I do like that? And can this wordpress and html websites stay on different hostings?
And should I just change the wordpress application to product.test.com and then put the main html, css website on the main domain?
|
[
"Yes, it is doable\nJust when you go to manage domain > setup dns > change nameservers here\nHere you can put your customized DNS or A records etc whenever will be required\nIt may vary hosting to hosting but the core part will remains same\n"
] |
[
0
] |
[] |
[] |
[
"css",
"dns",
"hosting",
"html",
"wordpress"
] |
stackoverflow_0074660157_css_dns_hosting_html_wordpress.txt
|
Q:
Embedding react-boilerplate in Rails 5.1
Has anyone tried integrating react-boilerplate into a Ruby on Rails 5.1 app? It looks like the 5.1 approach to embedding React components in views is fairly simple. Rails 5.1 uses webpacker, which has its own approach to mixing Ruby configuration with Webpack. It doesn't seem very straightforward, but does anyone have any techniques for accomplishing this?
A:
Take a look at rails-react-boilerplate. The README.md should have all the info you need to get up and running. You should be able to easily find it on github.
A:
It is possible to embed a React-Boilerplate project in a Rails 5.1 application, but it requires some configuration and setup. Here are the steps you can follow to do this:
Install the React-Boilerplate project in your Rails application by running the npm install command in the root directory of your project. This will install the React-Boilerplate dependencies in your project.
Create a new Rails controller and view to host the React-Boilerplate project. You can do this by running the rails generate controller React index command, which will create a new ReactController and an index action with a corresponding view.
In the index view, add the necessary HTML and JavaScript code to render the React-Boilerplate project. This will typically include the following elements:
A div element with an id attribute that will be used as the root container for the React-Boilerplate project.
A script tag that loads the React-Boilerplate JavaScript bundle, which is typically generated by running the npm run build command in the React-Boilerplate project.
A script tag that initializes the React-Boilerplate project by rendering it in the root container div and passing any necessary props to it.
In the ReactController, define the index action to render the index view. This will allow the React-Boilerplate project to be rendered when the user accesses the corresponding URL in the browser.
Configure the Rails asset pipeline to properly serve the React-Boilerplate JavaScript bundle. This may require modifying the config/environments/production.rb file to add the app/build directory to the config.assets.paths array, so that the Rails asset pipeline can find the bundle.
Build the React-Boilerplate project by running the npm run build command in the React-Boilerplate project. This will generate the JavaScript bundle that will be used to render the project in the Rails application.
Start the Rails server by running the rails s command, and navigate to the URL of the ReactController in the browser to see the React-Boilerplate project rendered in the Rails application.
Keep in mind that these steps are just an example, and you may need to modify them depending on your specific use case and requirements. It is recommended to refer to the React-Boilerplate and Rails documentation for more detailed information and instructions.
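As a rough sketch of step 3 (the file name, bundle path, and mount-point id below are assumptions for illustration, not react-boilerplate's documented API), the view might look like:
<%# app/views/react/index.html.erb %>
<div id="app"></div>
<script src="/build/main.js"></script>
Here the bundle produced by npm run build is assumed to be served by the Rails app at /build/main.js, and the boilerplate is assumed to mount itself into the #app element.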
|
Embedding react-boilerplate in Rails 5.1
|
Has anyone tried integrating react-boilerplate into a Ruby on Rails 5.1 app? It looks like the 5.1 approach to embedding React components in views is fairly simple. Rails 5.1 is using webpacker which has its approach to mixing Ruby configuration with Webpack. It doesn't seem very straightforward, but does anyone have any techniques for accomplishing this?
|
[
"Take a look at rails-react-boilerplate. The README.md should have all the info you need to get up and running. You should be able to easily find it on github.\n",
"It is possible to embed a React-Boilerplate project in a Rails 5.1 application, but it requires some configuration and setup. Here are the steps you can follow to do this:\n\nInstall the React-Boilerplate project in your Rails application by running the npm install command in the root directory of your project. This will install the React-Boilerplate dependencies in your project.\n\nCreate a new Rails controller and view to host the React-Boilerplate project. You can do this by running the rails generate controller React index command, which will create a new ReactController and an index action with a corresponding view.\n\nIn the index view, add the necessary HTML and JavaScript code to render the React-Boilerplate project. This will typically include the following elements:\n\n\n\nA div element with an id attribute that will be used as the root container for the React-Boilerplate project.\nA script tag that loads the React-Boilerplate JavaScript bundle, which is typically generated by running the npm run build command in the React-Boilerplate project.\nA script tag that initializes the React-Boilerplate project by rendering it in the root container div and passing any necessary props to it.\n\n\nIn the ReactController, define the index action to render the index view. This will allow the React-Boilerplate project to be rendered when the user accesses the corresponding URL in the browser.\n\nConfigure the Rails asset pipeline to properly serve the React-Boilerplate JavaScript bundle. This may require modifying the config/environments/production.rb file to add the app/build directory to the config.assets.paths array, so that the Rails asset pipeline can find the bundle.\n\nBuild the React-Boilerplate project by running the npm run build command in the React-Boilerplate project. This will generate the JavaScript bundle that will be used to render the project in the Rails application.\n\nStart the Rails server by running the rails s command, and navigate to the URL of the ReactController in the browser to see the React-Boilerplate project rendered in the Rails application.\n\n\nKeep in mind that these steps are just an example, and you may need to modify them depending on your specific use case and requirements. It is recommended to refer to the React-Boilerplate and Rails documentation for more detailed information and instructions.\n"
] |
[
0,
0
] |
[] |
[] |
[
"javascript",
"react_boilerplate",
"ruby_on_rails",
"webpack"
] |
stackoverflow_0043137712_javascript_react_boilerplate_ruby_on_rails_webpack.txt
|
Q:
How to atomically access multiple things that might be held by the same lock(s)?
I have a big grid which multiple threads access simultaneously. I have it divided into regions, each of which is independently locked. I want to be able to have atomic operations that operate on specified sets of points, which may or may not be part of the same region.
What I have now includes:
pub struct RwGrid<T>{
width : usize,
height : usize,
region_size : usize,
regions : [RwLock<Vec<T>>; TOTAL_REGIONS]
}
impl<T: Copy + Colored> Grid<T> for RwGrid<T>{
...
fn set_if<F>(&self, p : Point, f : F, value : T) -> bool where
F : Fn(T) -> bool{
let (region_index, index_in_region) = self.map_coordinates(p);
let mut region = self.regions[region_index].write().unwrap();
let pre_existing = region[index_in_region];
if f(pre_existing){
region[index_in_region] = value;
true
} else {false}
}
...
}
Where map_coordinates is a helper function that maps Cartesian coordinates onto the index of the appropriate region, and the index of the given point within that region.
My goal is (among other things) a variant of that set_if function that atomically looks at a set of points, rather than a single point (specifically, it would look at the nine points making up the immediate neighborhood of a given point.) This set of points might be from the same region, or might come from multiple regions. Further, the locks need to be acquired in a particular order, or deadlock may be possible.
The atomicity requirement is important to note. If it helps, imagine you're trying to sometimes color points red, with the invariant that no red point may be adjacent to another red point. If two threads non-atomically read the neighborhood of the point they're considering, they may interleave, both checking that the other's target point is currently black, then setting two adjacent points red.
I don't know how to abstract over this. I can easily find the regions for a set of points, or for a single point I can acquire the lock and operate on it, but I've been beating my head on how to acquire a set of locks and then operate on points using the appropriate lock, without having enormous amounts of hard-coded boiler plate.
To illustrate the problem, here's a variant of set_if that looks at just two points, and sets one of them based on a condition that depends on both:
fn set_if_2<F>(&self, p1 : Point, p2 : Point, f : F, value : T) -> bool where
F : Fn(T, T) -> bool{
let (region_index_1, index_in_region_1) = self.map_coordinates(p1);
let (region_index_2, index_in_region_2) = self.map_coordinates(p2);
if (region_index_1 == region_index_2){
let mut region = self.regions[region_index_1].write().unwrap();
let pre_existing_1 = region[index_in_region_1];
let pre_existing_2 = region[index_in_region_2];
if f(pre_existing_1, pre_existing_2){
region[index_in_region_1] = value;
true
} else {false}
} else {
let mut region1 = self.regions[region_index_1].write().unwrap();
let region2 = self.regions[region_index_2].write().unwrap();
let pre_existing_1 = region1[index_in_region_1];
let pre_existing_2 = region2[index_in_region_2];
if f(pre_existing_1, pre_existing_2){
region1[index_in_region_1] = value;
true
} else {false}
}
}
This code has two branches based on whether or not the points belong to the same region (and thus need one lock) or different regions (each with their own lock.) As you can imagine, expanding that pattern out to nine different points that might belong to many different configurations of region would be painful and wrong.
So far I have two ideas and they both sound bad:
Have a function that returns a Vec<RwLockWriteGuard<T>> and a structure which holds indexes into that vector each point should use. (So if all points come from the same region, the Vec would be one element long and each point would map to 0).
Have the data actually live in a single unsafe Vec with no locks (I'm not even sure how to do that), but have "fake" locks corresponding to regions, and code the Region module so that points are only accessed after the corresponding lock has been grabbed. One chunk of code could then recognize and acquire the appropriate locks, but that would be independent of subsequently reading or writing to the points.
Are either of those ideas workable? Is there a better way to approach this?
EDIT: Some more code:
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Point(pub usize, pub usize);
fn map_coordinates(&self, p : Point) -> (usize, usize){
let Point(x, y) = self.fix(p);
let (region_width, region_height) = (self.width / REGIONS_PER_DIMENSION, self.height/REGIONS_PER_DIMENSION);
let (target_square_x, target_square_y) = (x/region_width, y/region_height);
let target_square_i = target_square_y * REGIONS_PER_DIMENSION + target_square_x;
let (x_in_square, y_in_square) = (x % region_width, y % region_width);
let index_in_square = y_in_square * region_width + x_in_square;
(target_square_i, index_in_square)
}
fn fix(&self, p: Point) -> Point{
let Point(x, y) = p;
Point(modulo(x as i32, self.width), modulo(y as i32, self.height))
}
#[inline(always)]
pub fn modulo(a: i32, b: usize) -> usize {
(((a % b as i32) + b as i32) % b as i32) as usize
}
One thing to note is that the wrapping behavior (which is enabled by the fix function above) slightly complicates avoiding deadlocks. Points will often be accessed by compass direction, like asking for the northern neighbor of a point. Because the grid wraps, if you always lock in order by compass direction - like, "Northern neighbor, then center, then southern" - you can get a deadlocked cycle. Another way of phrasing this is that if you access Points by the order they're specified in the request, rather than by the order they exist in the grid, you can get cycles.
A:
Alright, so I've figured out a couple ways to do this. Both have the same signature of taking a generic number of points, and treating the point at index 0 as the "target" point to write the value to.
Without Allocating
This version loops through all points for each region at play, making it O(R*P) where R is the number of regions and P is the number of points.
fn set_if<const N: usize, F: Fn([T; N]) -> bool>(&self, points: [Point; N], f: F, value: T) -> bool {
// The region and index within that region for each point
let point_coords = points.map(|p| self.map_coordinates(p));
// Extract the target point data for direct usage later
let (target_region_index, target_index_in_region) = point_coords[0];
// Iterate through the regions, locking each one,
// and reading the pre-existing values.
let mut pre_existing = [None; N];
// Loop through each region
let mut region_locks: [_; TOTAL_REGIONS] = std::array::from_fn(|region_index| {
let mut region = None;
// Loop through each point
for (j, (this_region_index, index_in_region)) in point_coords.into_iter().enumerate() {
// If the point is in this region
if this_region_index == region_index {
// Acquire a new lock if necessary
// (if this is the first point in the region)
let region = region.get_or_insert_with(|| {
self.regions[region_index].write().unwrap()
});
// Then read the pre-existing value for this point
pre_existing[j] = Some(region[index_in_region])
}
}
// Store region locks to hold the lock until we're done
region
});
// Should never fail
let pre_existing = pre_existing.map(|v| v.unwrap());
let target_region = region_locks[target_region_index].as_mut().unwrap();
if f(pre_existing) {
target_region[target_index_in_region] = value;
true
} else {
false
}
// Region locks dropped at end of scope
}
With Allocating
This version loops through all points once, collecting the points for each region, and then loops through each region with points, obtaining a lock and handling each point in the region.
This makes it O(R+2P).
fn set_if<const N: usize, F: Fn([T; N]) -> bool>(&self, points: [Point; N], f: F, value: T) -> bool {
// Store a set of indices for each region
let mut region_indices: [Vec<(usize, usize)>; TOTAL_REGIONS] = Default::default();
// Handle the target point first
let (target_region_index, target_index_in_region) = self.map_coordinates(points[0]);
region_indices[target_region_index] = vec![
// We store the index of the point in `points` and
// the index associated with that point within its region
(0, target_index_in_region),
];
// Then handle all of the rest
for (j, p) in points.into_iter().enumerate().skip(1) {
let (region_index, index_in_region) = self.map_coordinates(p);
// Store the index of the point within `points` and
// the index associated with that point within its region
region_indices[region_index].push((j, index_in_region));
}
// Iterate through the regions, locking each one,
// and reading the pre-existing values.
let mut pre_existing = [None; N];
// Store region locks to hold the lock until we're done
let mut region_locks: [_; TOTAL_REGIONS] = Default::default();
for (region_index, indices_in_region) in region_indices.into_iter().enumerate() {
// Skip if there were no points in this region
if indices_in_region.is_empty() {
continue;
};
// Acquire a lock for this region
let region = self.regions[region_index].write().unwrap();
// Read the pre-existing value for each point in the region
for (j, index_in_region) in indices_in_region {
pre_existing[j] = Some(region[index_in_region]);
}
// Store region locks to hold the lock until we're done
region_locks[region_index] = Some(region);
}
// Should never fail
let pre_existing = pre_existing.map(|v| v.unwrap());
let target_region = region_locks[target_region_index].as_mut().unwrap();
if f(pre_existing) {
target_region[target_index_in_region] = value;
true
} else {
false
}
// Region locks dropped at end of scope
}
I prefer option #1, because it is simpler and has no allocations. Given you will likely have a small fixed number of regions and points, I expect the performance of option 1 to be better as well. If performance is very important, I'd recommend benchmarking both, though. Note also that both versions acquire region locks in ascending region-index order, so the lock-ordering/deadlock concern from the question is handled automatically, regardless of the order in which the points are passed.
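For completeness, a hypothetical call site (the grid value, the RED constant, and the point layout are assumptions; T would also need PartialEq for the comparison):
// Atomically color the center cell only if nothing in its 3x3 neighborhood is RED.
let center = Point(10, 10);
let points: [Point; 9] = [
    center, // index 0 is the write target
    Point(9, 9),  Point(10, 9),  Point(11, 9),
    Point(9, 10),                Point(11, 10),
    Point(9, 11), Point(10, 11), Point(11, 11),
];
let colored = grid.set_if(points, |values| values.iter().all(|&v| v != RED), RED);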
|
How to atomically access multiple things that might be held by the same lock(s)?
|
I have a big grid which multiple threads access simultaneously. I have it divided into regions, each of which is independently locked. I want to be able to have atomic operations that operate on specified sets of points, which may or may not be part of the same region.
What I have now includes:
pub struct RwGrid<T>{
width : usize,
height : usize,
region_size : usize,
regions : [RwLock<Vec<T>>; TOTAL_REGIONS]
}
impl<T: Copy + Colored> Grid<T> for RwGrid<T>{
...
fn set_if<F>(&self, p : Point, f : F, value : T) -> bool where
F : Fn(T) -> bool{
let (region_index, index_in_region) = self.map_coordinates(p);
let mut region = self.regions[region_index].write().unwrap();
let pre_existing = region[index_in_region];
if f(pre_existing){
region[index_in_region] = value;
true
} else {false}
}
...
}
Where map_coordinates is a helper function that maps Cartesian coordinates onto the index of the appropriate region, and the index of the given point within that region.
My goal is (among other things) a variant of that set_if function that atomically looks at a set of points, rather than a single point (specifically, it would look at the nine points making up the immediate neighborhood of a given point.) This set of points might be from the same region, or might come from multiple regions. Further, the locks need to be acquired in a particular order, or deadlock may be possible.
The atomicity requirement is important to note. If it helps, imagine you're trying to sometimes color points red, with the invariant that no red point may be adjacent to another red point. If two threads non-atomically read the neighborhood of the point they're considering, they may interleave, both checking that the other's target point is currently black, then setting two adjacent points red.
I don't know how to abstract over this. I can easily find the regions for a set of points, or for a single point I can acquire the lock and operate on it, but I've been beating my head on how to acquire a set of locks and then operate on points using the appropriate lock, without having enormous amounts of hard-coded boiler plate.
To illustrate the problem, here's a variant of set_if that looks at just two points, and sets one of them based on a condition that depends on both:
fn set_if_2<F>(&self, p1 : Point, p2 : Point, f : F, value : T) -> bool where
F : Fn(T, T) -> bool{
let (region_index_1, index_in_region_1) = self.map_coordinates(p1);
let (region_index_2, index_in_region_2) = self.map_coordinates(p2);
if (region_index_1 == region_index_2){
let mut region = self.regions[region_index_1].write().unwrap();
let pre_existing_1 = region[index_in_region_1];
let pre_existing_2 = region[index_in_region_2];
if f(pre_existing_1, pre_existing_2){
region[index_in_region_1] = value;
true
} else {false}
} else {
let mut region1 = self.regions[region_index_1].write().unwrap();
let region2 = self.regions[region_index_2].write().unwrap();
let pre_existing_1 = region1[index_in_region_1];
let pre_existing_2 = region2[index_in_region_2];
if f(pre_existing_1, pre_existing_2){
region1[index_in_region_1] = value;
true
} else {false}
}
}
This code has two branches based on whether or not the points belong to the same region (and thus need one lock) or different regions (each with their own lock.) As you can imagine, expanding that pattern out to nine different points that might belong to many different configurations of region would be painful and wrong.
So far I have two ideas and they both sound bad:
Have a function that returns a Vec<RwLockWriteGuard<T>> and a structure which holds indexes into that vector each point should use. (So if all points come from the same region, the Vec would be one element long and each point would map to 0).
Have the data actually live in a single unsafe Vec with no locks (I'm not even sure how to do that), but have "fake" locks corresponding to regions, and code the Region module so that points are only accessed after the corresponding lock has been grabbed. One chunk of code could then recognize and acquire the appropriate locks, but that would be independent of subsequently reading or writing to the points.
Are either of those ideas workable? Is there a better way to approach this?
EDIT: Some more code:
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Point(pub usize, pub usize);
fn map_coordinates(&self, p : Point) -> (usize, usize){
let Point(x, y) = self.fix(p);
let (region_width, region_height) = (self.width / REGIONS_PER_DIMENSION, self.height/REGIONS_PER_DIMENSION);
let (target_square_x, target_square_y) = (x/region_width, y/region_height);
let target_square_i = target_square_y * REGIONS_PER_DIMENSION + target_square_x;
let (x_in_square, y_in_square) = (x % region_width, y % region_width);
let index_in_square = y_in_square * region_width + x_in_square;
(target_square_i, index_in_square)
}
fn fix(&self, p: Point) -> Point{
let Point(x, y) = p;
Point(modulo(x as i32, self.width), modulo(y as i32, self.height))
}
#[inline(always)]
pub fn modulo(a: i32, b: usize) -> usize {
(((a % b as i32) + b as i32) % b as i32) as usize
}
One thing to note is that the wrapping behavior (which is enabled by the fix function above) slightly complicates avoiding deadlocks. Points will often be accessed by compass direction, like asking for the northern neighbor of a point. Because the grid wraps, if you always lock in order by compass direction - like, "Northern neighbor, then center, then southern" - you can get a deadlocked cycle. Another way of phrasing this is that if you access Points by the order they're specified in the request, rather than by the order they exist in the grid, you can get cycles.
|
[
"Alright, so I've figured out a couple ways to do this. Both have the same signature of taking a generic number of points, and treating the point at index 0 as the \"target\" point to write the value to.\n\nWithout Allocating\n\nThis version loops through all points for each region at play, making it O(R*P) where R is the number of regions and P is the number of points.\nfn set_if<const N: usize, F: Fn([T; N]) -> bool>(&self, points: [Point; N], f: F, value: T) -> bool {\n // The region and index within that region for each point\n let point_coords = points.map(|p| self.map_coordinates(p));\n\n // Extract the target point data for direct usage later\n let (target_region_index, target_index_in_region) = point_coords[0];\n\n // Iterate through the regions, locking each one,\n // and reading the pre-existing values.\n let mut pre_existing = [None; N];\n\n // Loop through each region\n let mut region_locks: [_; TOTAL_REGIONS] = std::array::from_fn(|region_index| {\n let mut region = None;\n\n // Loop through each point\n for (j, (this_region_index, index_in_region)) in point_coords.into_iter().enumerate() {\n // If the point is in this region\n if this_region_index == region_index {\n // Acquire a new lock if necessary\n // (if this is the first point in the region)\n let region = region.get_or_insert_with(|| {\n self.regions[region_index].write().unwrap()\n });\n // Then read the pre-existing value for this point\n pre_existing[j] = Some(region[index_in_region])\n }\n }\n\n // Store region locks to hold the lock until we're done\n region\n });\n\n // Should never fail\n let pre_existing = pre_existing.map(|v| v.unwrap());\n let target_region = region_locks[target_region_index].as_mut().unwrap();\n\n if f(pre_existing) {\n target_region[target_index_in_region] = value;\n\n true\n } else {\n false\n }\n\n // Region locks dropped at end of scope\n}\n\n\nWith Allocating\n\nThis version loops through all points once, collecting the points for each region, and then loops through each region with points, obtaining a lock and handling each point in the region.\nThis makes it O(R+2P).\nfn set_if<const N: usize, F: Fn([T; N]) -> bool>(&self, points: [Point; N], f: F, value: T) -> bool {\n // Store a set of indices for each region\n let mut region_indices: [Vec<(usize, usize)>; TOTAL_REGIONS] = Default::default();\n\n // Handle the target point first\n let (target_region_index, target_index_in_region) = self.map_coordinates(points[0]);\n\n region_indices[target_region_index] = vec![\n // We store the index of the point in `points` and\n // the index associated with that point within its region\n (0, target_index_in_region),\n ];\n\n // Then handle all of the rest\n for (j, p) in points.into_iter().enumerate().skip(1) {\n let (region_index, index_in_region) = self.map_coordinates(p);\n\n // Store the index of the point within `points` and\n // the index associated with that point within its region\n region_indices[region_index].push((j, index_in_region));\n }\n\n // Iterate through the regions, locking each one,\n // and reading the pre-existing values.\n let mut pre_existing = [None; N];\n // Store region locks to hold the lock until we're done\n let mut region_locks: [_; TOTAL_REGIONS] = Default::default();\n\n for (region_index, indices_in_region) in region_indices.into_iter().enumerate() {\n // Skip if there were no points in this region\n if indices_in_region.is_empty() {\n continue;\n };\n \n // Acquire a lock for this region\n let region = self.regions[region_index].write().unwrap();\n\n // Read 
the pre-existing value for each point in the region\n for (j, index_in_region) in indices_in_region {\n pre_existing[j] = Some(region[index_in_region]);\n }\n\n // Store region locks to hold the lock until we're done\n region_locks[region_index] = Some(region);\n }\n\n // Should never fail\n let pre_existing = pre_existing.map(|v| v.unwrap());\n let target_region = region_locks[target_region_index].as_mut().unwrap();\n\n if f(pre_existing) {\n target_region[target_index_in_region] = value;\n\n true\n } else {\n false\n }\n\n // Region locks dropped at end of scope\n}\n\nI prefer option #1, because it is simpler and has no allocations. Given you will likely have a small fixed number of regions and points, I expect performance of option 1 to be better as well. If performance is very important, I'd recommend benchmarking both, though.\n"
] |
[
2
] |
[] |
[] |
[
"locking",
"multithreading",
"rust"
] |
stackoverflow_0074663192_locking_multithreading_rust.txt
|
Q:
Heroku stopping all processes with SIGTERM after HTTP request to another Heroku app
I recently deployed 2 different apps on Heroku's free dynos.
One is an API and the other one is an admin panel. Both run on NodeJS. My admin panel needs to make calls to this API.
Everything is working fine when I'm launching those apps on localhost and on different ports. But when I deploy them to Heroku, both apps are being shut down by Heroku with the same error saying: "Stopping all processes with SIGTERM" and "process exited with status 143"
Here are the error messages I got in the Heroku logs: Heroku logs
I tried to use the CORS package to the two apps, but the issue didn't change.
Some help or explanation would be appreciated. Thanks for your time!
A:
It's worth checking if this is expected behavior on the free dyno. I had one set up and switching to Hobby dyno fixed the issue. heroku node app exits after idling
A:
This is due to inactivity; when there is a request on any endpoint, Heroku spins the dyno up again.
A:
I confirm what Josué said: Eco dynos sleep automatically after a period of inactivity to conserve your dyno hours.
|
Heroku stopping all processes with SIGTERM after HTTP request to another Heroku app
|
I recently deployed 2 different apps on free Heroku's dynos.
One is an API and the other one is an admin panel.Both are working with NodeJS. My admin panel needs to make calls on to this api.
Everything is working fine when I'm launching those apps on localhost and on different ports. But when I deploy them to Heroku, both apps are being shut down by Heroku with the same error saying: "Stopping all processes with SIGTERM" and "process exited with status 143"
Here are the error messages I got in the Heroku logs: Heroku logs
I tried to use the CORS package to the two apps, but the issue didn't change.
Some help or explanation would be appreciated. Thanks for your time!
|
[
"It's worth checking if this is expected behavior on the free dyno. I had one set up and switching to Hobby dyno fixed the issue. heroku node app exits after idling\n",
"this is due to inactivity, when there is a request on any endpoint heroku raises the dyno again\n",
"I confirm what Josué said, Eco dynos sleep automatically after a period of inactivity to conserve your dyno hours\n\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"deployment",
"heroku",
"node.js",
"web_applications"
] |
stackoverflow_0054953308_deployment_heroku_node.js_web_applications.txt
|
Q:
Currying and summation of two lists of varying size
I'm self-learning SML and am currently stuck on the concept of recursion over two lists of varying sizes.
Suppose you have two int lists of varying size, and a function that multiplies two numbers, like so:
val mul = fn(a, b) => a * b;
I want to pass this function as a parameter into another function, which multiplies the numbers at the same index recursively until at least one of the lists is empty. So
val list1 = [1, 3, 5, 7];
val list2 = [2, 6, 3];
would be passed through that same function with mul and 35 would be returned, as 1*2 + 3*6 + 5*3 would be calculated.
My knowledge of how SML works is a bit limited, as I'm not exactly sure how to carry the result of the sum forward during the recursion, nor how to handle the base case when either list terminates early. Could someone point me in the right direction on this problem?
A:
You can use pattern-matching and recursion to operate over two lists simultaneously. You then need an accumulator to pass the sum along.
fun mulAndSum acc ([], []) = ...
| mulAndSum acc ([], _) = ...
| mulAndSum acc (_, []) = ...
| mulAndSum acc ((x::xs), (y::ys)) = mulAndSum (...) (xs, ys)
Then when you call the function, you provide zero as the initial state of the accumulator.
mulAndSum 0 ([1, 3, 5, 7], [2, 4, 6])
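For reference, one possible completion of that skeleton (a sketch, not the only way; the two empty-list clauses also cover the ([], []) case):
fun mulAndSum acc ([], _) = acc
  | mulAndSum acc (_, []) = acc
  | mulAndSum acc ((x::xs), (y::ys)) = mulAndSum (acc + x * y) (xs, ys)

val result = mulAndSum 0 ([1, 3, 5, 7], [2, 6, 3])  (* 35 *)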
A:
To add to Chris' answer, recursion over two lists at once can also be achieved with map and zip, which are higher-order list combinators (i.e. functions that take another function as an argument and operate on lists):
fun add (x, y) = x + y
fun mul (x, y) = x * y
fun sum xs = foldl add 0 xs
val zip = ListPair.zip
fun mulAndSum xs ys = sum (map mul (zip xs ys))
zip will also throw away elements if one of its input lists is longer than the other.
|
Currying and summation of two lists of varying size
|
I'm self-learning SML and am currently am stuck with the concept of recursion between two lists of varying sizes.
Suppose you have two int lists of varying size, and a function that multiplies two numbers, like so:
val mul = fn(a, b) => a * b;
I want to use this function to be passed as a parameter into another function, which multiplies the numbers in the same index recursively until at least one of the lists is empty. So
val list1 = [1, 3, 5, 7];
val list2 = [2, 6, 3];
would be passed through that same function with mul and 35 would be returned, as 1*2 + 3*6 + 5*3 would be calculated.
My knowledge of how SML works is a bit limited, as I'm not exactly sure how to carry the result of the sum forward during the recursion, nor how to handle the base case when one of either lists terminates early. Could someone point me in the right direction in thinking of this problem?
|
[
"You can use pattern-matching and recursion to operate over two lists simultaneously. You then need an accumulator to pass the sum along.\nfun mulAndSum acc ([], []) = ...\n | mulAndSum acc ([], _) = ...\n | mulAndSum acc (_, []) = ...\n | mulAndSum acc ((x::xs), (y::ys)) = mulAndSum (...) (xs, ys)\n\nThen when you call the function, you provide zero as the initial state of the accumulator.\nmulAndSum 0 ([1, 3, 5, 7], [2, 4, 6])\n\n",
"To add to Chris' answer, recursion over two lists at once can also be achieved with map and zip which are higher-order list combinators (i.e. functions that take another function as argument and operate on lists):\nfun add (x, y) = x + y\nfun mul (x, y) = x * y\nfun sum xs = foldl add 0 xs\nval zip = ListPair.zip\n\nfun mulAndSum xs ys = sum (map mul (zip xs ys))\n\nzip will also throw away elements if one of its input lists is longer than the other.\n"
] |
[
1,
0
] |
[] |
[] |
[
"sml",
"smlnj"
] |
stackoverflow_0074617132_sml_smlnj.txt
|
Q:
Construct member name to retrieve description from XML comments file | Type.GetGenericArguments() returns incorrect result
We have a legacy WinForms app for .NET Framework. A part of this app is a module that reads an assembly's documentation from the accompanying XML comments file. The code has worked without problems for many years, but recently we have found one flaw. Let's consider the following class demonstrating the issue:
public class Class1<T1, T2>
{
public struct MyStruct<TS1, TS2>
{
public TS1 StructField1;
}
public void Method1(T1 arg1)
{ }
public void Method2(T1 arg1, T2 arg2)
{ }
public void Method3(MyStruct<string, T1> arg1)
{ }
public void Method4(MyStruct<T2, int> arg1)
{ }
}
Method3 and Method4 have the following records in the XML comments file:
<member name="M:TestSimple.Class1`2.Method3(TestSimple.Class1{`0,`1}.MyStruct{System.String,`0})">
<summary>
Method3 summary.
</summary>
<param name="arg1">Argument #1.</param>
</member>
<member name="M:TestSimple.Class1`2.Method4(TestSimple.Class1{`0,`1}.MyStruct{`1,System.Int32})">
<summary>
Method4 summary.
</summary>
<param name="arg1">Argument #1.</param>
</member>
To access them, we need to obtain strings
M:TestSimple.Class1`2.Method3(TestSimple.Class1{`0,`1}.MyStruct{System.String,`0})
and
M:TestSimple.Class1`2.Method4(TestSimple.Class1{`0,`1}.MyStruct{`1,System.Int32})
for Method3 and Method4 respectively. I will not include all the code that constructs these strings using reflection because it would take many screens. I will just show the problem part, in which the parameter description string for MyStruct is not constructed properly. It turned out that the .NET Type.GetGenericArguments() method does not return the correct list of parameters for MyStruct when passed as the argument to the following method:
As you can see, we have 4 parameters for MyStruct, though I expected just 2. The documentation for GetGenericArguments states that we must use the Type.IsGenericParameter property to filter out unneeded parameters in the returned list, but it does not help. As you can see from the screenshot, the parameter T1 is duplicated in the list of parameters for Method3. The same happens for Method4: GetGenericArguments returns two entries for T2.
Is there a robust way to get the list of parameters for generic structs like MyStruct in classes like the one I showed?
And a general question: maybe there is a standard method in .NET I can use on a Type instance to retrieve its equivalent member name in the XML comments file? It must be a trivial task, or maybe someone has already written such a class or method that I just couldn't find.
If someone wants to look at the current implementation of the GetGenericClosedConstructedTypeArgumentsString method from the screenshot above, it is below:
private static string GetGenericClosedConstructedTypeArgumentsString(Type type)
{
Type[] myArgumentTypes = type.GetGenericArguments();
StringBuilder myResult = new StringBuilder();
myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListStart);
foreach (Type myArgumentType in myArgumentTypes)
{
if (myArgumentType.IsGenericParameter)
{
if (myArgumentType.DeclaringType != null && myArgumentType.DeclaringType != type)
{
#region Check whether the current argument type is declared in an enclosing type
bool myIsArgumentTypeDeclaredInEnclosingType = false;
Type myEnclosingType = type.DeclaringType;
while (myEnclosingType != null)
{
if (myEnclosingType == myArgumentType.DeclaringType)
{
myIsArgumentTypeDeclaredInEnclosingType = true;
break;
}
myEnclosingType = myEnclosingType.DeclaringType;
}
#endregion
if (myIsArgumentTypeDeclaredInEnclosingType)
continue;
}
}
if (myResult.Length > ConstsXmlHelp.cGenericMethodParameterArgumentListStart.Length)
myResult.Append(ConstsXmlHelp.cParameterDelimeter);
// GetFullName() below returns the generic or string
// argument equivalent like `0, `1, or System.Int32
myResult.Append(GetFullName(myArgumentType));
}
myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListEnd);
return myResult.ToString();
}
internal static class ConstsXmlHelp
{
public const string cParameterDelimeter = ",";
public const string cGenericMethodParameterArgumentListStart = "{";
public const string cGenericMethodParameterArgumentListEnd = "}";
}
A:
I dared to assume that, for a nested type, the first items in the array returned by Type.GetGenericArguments() are the generic parameters of the declaring type, and that their number equals the declaring type's generic parameter count. Taking this, I rewrote the GetGenericClosedConstructedTypeArgumentsString method as the following:
private static string GetGenericClosedConstructedTypeArgumentsString(Type type)
{
Type[] myArgumentTypes = type.GetGenericArguments();
if (type.DeclaringType != null)
{
myArgumentTypes = myArgumentTypes.Skip(type.DeclaringType.GetGenericArguments().Count()).ToArray();
}
StringBuilder myResult = new StringBuilder();
myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListStart);
foreach (Type myArgumentType in myArgumentTypes)
{
if (myResult.Length > ConstsXmlHelp.cGenericMethodParameterArgumentListStart.Length)
myResult.Append(ConstsXmlHelp.cParameterDelimeter);
// GetFullName() below returns the generic or string
// argument equivalent like `0, `1, or System.Int32
myResult.Append(GetFullName(myArgumentType));
}
myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListEnd);
return myResult.ToString();
}
In all my tests it works correctly, even for bizarre method definitions like the following ones:
public void Method5(MyStruct<T2, MyStruct<decimal, float>> arg1) { }
public void Method6(MyStruct<T2, MyStruct<decimal, T1>> arg1) { }
public void Method7<TM>(MyStruct<T2, MyStruct<decimal, TM>> arg1, TM arg2) { }
I just need confirmation from people who know the topic of .NET reflection well that my assumption is right.
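If it helps with verification, here is a small sanity check (a sketch, assuming the Class1/MyStruct definitions from the beginning of the question):
namespace TestSimple
{
    class Demo
    {
        static void Main()
        {
            var method = typeof(Class1<,>).GetMethod("Method3");
            var argType = method.GetParameters()[0].ParameterType;
            // Expected to print T1, T2, System.String, T1: the declaring
            // type's two parameters first, then MyStruct's own two arguments.
            foreach (var t in argType.GetGenericArguments())
                System.Console.WriteLine(t);
        }
    }
}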
|
Construct member name to retrieve description from XML comments file | Type.GetGenericArguments() returns incorrect result
|
We have a legacy WinForms app for .NET Framework. A part of this app is a module that reads an assembly's documentation from the accompanying XML comments file. The code has worked without problems for many years, but recently we have found one flaw. Let's consider the following class demonstrating the issue:
public class Class1<T1, T2>
{
public struct MyStruct<TS1, TS2>
{
public TS1 StructField1;
}
public void Method1(T1 arg1)
{ }
public void Method2(T1 arg1, T2 arg2)
{ }
public void Method3(MyStruct<string, T1> arg1)
{ }
public void Method4(MyStruct<T2, int> arg1)
{ }
}
Method3 and Method4 has the following records in the XML comments file:
<member name="M:TestSimple.Class1`2.Method3(TestSimple.Class1{`0,`1}.MyStruct{System.String,`0})">
<summary>
Method3 summary.
</summary>
<param name="arg1">Argument #1.</param>
</member>
<member name="M:TestSimple.Class1`2.Method4(TestSimple.Class1{`0,`1}.MyStruct{`1,System.Int32})">
<summary>
Method4 summary.
</summary>
<param name="arg1">Argument #1.</param>
</member>
To access them, we need to obtain strings
M:TestSimple.Class1`2.Method3(TestSimple.Class1{`0,`1}.MyStruct{System.String,`0})
and
M:TestSimple.Class1`2.Method4(TestSimple.Class1{`0,`1}.MyStruct{`1,System.Int32})
for Method3 and Method4 respectively. I will not place all code that constructs these strings using reflection because it will take many screens. I just show the problem part in which the parameter description string for MyStruct is not constructed properly. It turned out, that the .NET Type.GetGenericArguments() method does not return a correct list of parameters for MyStruct passed as the argument to the following method:
As you can see, we have 4 parameters for MyStruct, though I expect just 2. The documentation for GetGenericArguments states that we must use the Type.IsGenericParameter property to filter out unneeded parameters in the returned list, but it does not help. As you can see from the screenshot, the parameter T1 is duplicated in the list of parameters for 'Method3'. The same happens for Method4: GetGenericArguments returns two entries for T2.
Is there a robust way to get the list of parameters for generic structs like MyStruct in classes like I showed?
And a general question: maybe, there is a standard method in .NET I can use for a Type instance to retrieve its equivalent member name in the XML comments file? It must be a trivial task, or maybe someone already wrote such a class or method I just couldn't find.
If someone wants to look at the current implementation of the GetGenericClosedConstructedTypeArgumentsString method from the screenshot above, it is below:
private static string GetGenericClosedConstructedTypeArgumentsString(Type type)
{
Type[] myArgumentTypes = type.GetGenericArguments();
StringBuilder myResult = new StringBuilder();
myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListStart);
foreach (Type myArgumentType in myArgumentTypes)
{
if (myArgumentType.IsGenericParameter)
{
if (myArgumentType.DeclaringType != null && myArgumentType.DeclaringType != type)
{
#region Check whether the current argument type is declared in an enclosing type
bool myIsArgumentTypeDeclaredInEnclosingType = false;
Type myEnclosingType = type.DeclaringType;
while (myEnclosingType != null)
{
if (myEnclosingType == myArgumentType.DeclaringType)
{
myIsArgumentTypeDeclaredInEnclosingType = true;
break;
}
myEnclosingType = myEnclosingType.DeclaringType;
}
#endregion
if (myIsArgumentTypeDeclaredInEnclosingType)
continue;
}
}
if (myResult.Length > ConstsXmlHelp.cGenericMethodParameterArgumentListStart.Length)
myResult.Append(ConstsXmlHelp.cParameterDelimeter);
// GetFullName() below returns the generic or string
// argument equivalent like `0, `1, or System.Int32
myResult.Append(GetFullName(myArgumentType));
}
myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListEnd);
return myResult.ToString();
}
internal static class ConstsXmlHelp
{
public const string cParameterDelimeter = ",";
public const string cGenericMethodParameterArgumentListStart = "{";
public const string cGenericMethodParameterArgumentListEnd = "}";
}
|
[
"I dared to assume that first items in the array returned by Type.GetGenericArguments() are the generic types from the type declaring the method, and their number is equal to the number of the declaring method's generic parameters. Taking this, I rewrote the GetGenericClosedConstructedTypeArgumentsString method as the following:\nprivate static string GetGenericClosedConstructedTypeArgumentsString(Type type)\n{\n Type[] myArgumentTypes = type.GetGenericArguments();\n\n if (type.DeclaringType != null)\n {\n myArgumentTypes = myArgumentTypes.Skip(type.DeclaringType.GetGenericArguments().Count()).ToArray();\n }\n\n StringBuilder myResult = new StringBuilder();\n\n myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListStart);\n\n foreach (Type myArgumentType in myArgumentTypes)\n {\n if (myResult.Length > ConstsXmlHelp.cGenericMethodParameterArgumentListStart.Length)\n myResult.Append(ConstsXmlHelp.cParameterDelimeter);\n\n // GetFullName() below returns the generic or string\n // argument equivalent like `0, `1, or System.Int32\n myResult.Append(GetFullName(myArgumentType));\n }\n\n myResult.Append(ConstsXmlHelp.cGenericMethodParameterArgumentListEnd);\n\n return myResult.ToString();\n}\n\nIn all my tests it works correctly. Even for such bizarre method definitions like the following ones:\npublic void Method5(MyStruct<T2, MyStruct<decimal, float>> arg1) { }\n\npublic void Method6(MyStruct<T2, MyStruct<decimal, T1>> arg1) { }\n\npublic void Method7<TM>(MyStruct<T2, MyStruct<decimal, TM>> arg1, TM arg2) { }\n\nI just need a confirmation from people knowing the topic of .NET reflection good that my assumption is right.\n"
] |
[
0
] |
[] |
[] |
[
".net",
"generics",
"parameters",
"reflection",
"types"
] |
stackoverflow_0074657869_.net_generics_parameters_reflection_types.txt
|
Q:
How to display a progress indicator in pure C/C++ (cout/printf)?
I'm writing a console program in C++ to download a large file. I know the file size, and I start a work thread to download it. I want to show a progress indicator to make it look cooler.
How can I display different strings at different times, but at the same position, in cout or printf?
A:
With a fixed width of your output, use something like the following:
float progress = 0.0;
while (progress < 1.0) {
int barWidth = 70;
std::cout << "[";
int pos = barWidth * progress;
for (int i = 0; i < barWidth; ++i) {
if (i < pos) std::cout << "=";
else if (i == pos) std::cout << ">";
else std::cout << " ";
}
std::cout << "] " << int(progress * 100.0) << " %\r";
std::cout.flush();
progress += 0.16; // for demonstration only
}
std::cout << std::endl;
http://ideone.com/Yg8NKj
[> ] 0 %
[===========> ] 15 %
[======================> ] 31 %
[=================================> ] 47 %
[============================================> ] 63 %
[========================================================> ] 80 %
[===================================================================> ] 96 %
Note that the output is shown here with each update on its own line, but in a terminal emulator (and, I think, also in the Windows command line) it will all be printed on the same line.
At the very end, don't forget to print a newline before printing more stuff.
If you want to remove the bar at the end, you have to overwrite it with spaces, to print something shorter like for example "Done.".
Also, the same can of course be done using printf in C; adapting the code above should be straightforward.
A:
You can use a "carriage return" (\r) without a line-feed (\n), and hope your console does the right thing.
A:
For a C solution with an adjustable progress bar width, you can use the following:
#include <stdio.h>

#define PBSTR "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"
#define PBWIDTH 60
void printProgress(double percentage) {
int val = (int) (percentage * 100);
int lpad = (int) (percentage * PBWIDTH);
int rpad = PBWIDTH - lpad;
printf("\r%3d%% [%.*s%*s]", val, lpad, PBSTR, rpad, "");
fflush(stdout);
}
It will output something like this:
75% [|||||||||||||||||||||||||||||||||||||||||| ]
A:
Take a look at boost progress_display
http://www.boost.org/doc/libs/1_52_0/libs/timer/doc/original_timer.html#Class%20progress_display
I think it may do what you need, and I believe it is a header-only library, so there is nothing to link.
A:
You can print a carriage return character (\r) to move the output "cursor" back to the beginning of the current line.
For a more sophisticated approach, take a look at something like ncurses (an API for console text-based interfaces).
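A minimal sketch of the \r idea (using C++11 sleep to simulate work):
#include <cstdio>
#include <chrono>
#include <thread>

int main() {
    for (int pct = 0; pct <= 100; pct += 10) {
        std::printf("\rDownloading... %3d%%", pct); // \r returns the cursor to column 0
        std::fflush(stdout);                        // force the partial line out
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    std::printf("\n");
    return 0;
}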
A:
I know I am a bit late in answering this question, but I made a simple class that does exactly what you want. (keep in mind that I wrote using namespace std; before this.):
class pBar {
public:
void update(double newProgress) {
currentProgress += newProgress;
amountOfFiller = (int)((currentProgress / neededProgress)*(double)pBarLength);
}
void print() {
currUpdateVal %= pBarUpdater.length();
cout << "\r" //Bring cursor to start of line
<< firstPartOfpBar; //Print out first part of pBar
for (int a = 0; a < amountOfFiller; a++) { //Print out current progress
cout << pBarFiller;
}
cout << pBarUpdater[currUpdateVal];
for (int b = 0; b < pBarLength - amountOfFiller; b++) { //Print out spaces
cout << " ";
}
cout << lastPartOfpBar //Print out last part of progress bar
<< " (" << (int)(100*(currentProgress/neededProgress)) << "%)" //This just prints out the percent
<< flush;
currUpdateVal += 1;
}
std::string firstPartOfpBar = "[", //Change these at will (that is why I made them public)
lastPartOfpBar = "]",
pBarFiller = "|",
pBarUpdater = "/-\\|";
private:
int amountOfFiller,
pBarLength = 50, //I would recommend NOT changing this
currUpdateVal = 0; //Do not change
double currentProgress = 0, //Do not change
neededProgress = 100; //I would recommend NOT changing this
};
An example of how to use it (includes added so the snippet compiles on its own; sleep() assumes a POSIX system):
#include <iostream>
#include <unistd.h>

int main() {
//Setup:
pBar bar;
//Main loop:
for (int i = 0; i < 100; i++) { //This can be any loop, but I just made this as an example
//Update pBar:
bar.update(1); //How much new progress was added (only needed when new progress was added)
//Print pBar:
bar.print(); //This should be called more frequently than it is in this demo (you'll have to see what looks best for your program)
sleep(1);
}
cout << endl;
return 0;
}
Note: I made all of the class's strings public so the bar's appearance can be easily changed.
A:
Another way could be showing dots, or any character you want. The code below prints a progress indicator (a sort of "loading...") as one dot per second.
PS: I am using sleep here. Think twice if performance is a concern.
#include <iostream>
#include <cstdio>   // for fflush
#include <unistd.h> // for sleep (POSIX)

using namespace std;

int main()
{
    int count = 0;
    cout << "Will load in 10 Sec " << endl << "Loading ";
    for (; count < 10; ++count) {
        cout << ". ";
        fflush(stdout);
        sleep(1);
    }
    cout << endl << "Done" << endl;
    return 0;
}
A:
Here is a simple one I made:
#include <iostream>
#include <thread>
#include <chrono>
#include <Windows.h>
using namespace std;
int main() {
    // Changing text color: SetConsoleTextAttribute(handle, colorcode)
    SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), 14); // 14 = yellow
int barl = 20;
cout << "[";
for (int i = 0; i < barl; i++) {
this_thread::sleep_for(chrono::milliseconds(100));
cout << ":";
}
cout << "]";
    // Reset color
    SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), 7);
}
A:
Maybe this code will help you:
#include <iostream>
#include <string>
#include <thread>
#include <chrono>
#include <cmath>
using namespace std;
void show_progress_bar(int time, const std::string &message, char symbol)
{
std::string progress_bar;
const double progress_level = 1.42;
std::cout << message << "\n\n";
for (double percentage = 0; percentage <= 100; percentage += progress_level)
{
progress_bar.insert(0, 1, symbol);
std::cout << "\r [" << std::ceil(percentage) << '%' << "] " << progress_bar;
std::this_thread::sleep_for(std::chrono::milliseconds(time));
}
std::cout << "\n\n";
}
int main()
{
show_progress_bar(100, "progress" , '#');
}
A:
Simple, you can just use string's fill constructor:
#include <iostream> //for `cout`
#include <string> //for the constructor
#include <iomanip> //for `setprecision`
using namespace std;
int main()
{
const int cTotalLength = 10;
float lProgress = 0.3;
cout <<
"\r[" << //'\r' aka carriage return should move printer's cursor back at the beginning of the current line
string(cTotalLength * lProgress, 'X') << //printing filled part
string(cTotalLength * (1 - lProgress), '-') << //printing empty part
"] " <<
setprecision(3) << 100 * lProgress << "%"; //printing percentage
return 0;
}
Which would print:
[XXX-------] 30%
If you need it in pure C
and you would like to be able to customize the size and filler characters at runtime:
#include <stdio.h> //for `printf`
#include <stdlib.h> //for `malloc`
#include <string.h> //for `memset`
int main()
{
const int cTotalLength = 10;
char* lBuffer = malloc((cTotalLength + 1) * sizeof *lBuffer); //array to fit 10 chars + '\0'
lBuffer[cTotalLength] = '\0'; //terminating it
float lProgress = 0.3;
int lFilledLength = lProgress * cTotalLength;
memset(lBuffer, 'X', lFilledLength); //filling filled part
memset(lBuffer + lFilledLength, '-', cTotalLength - lFilledLength); //filling empty part
printf("\r[%s] %.1f%%", lBuffer, lProgress * 100); //same princip as with the CPP method
    //or you can combine it to a single line if you want to flex ;)
    //printf("\r[%s] %.1f%%", (char*)memset(memset(lBuffer, 'X', lFilledLength) + lFilledLength, '-', cTotalLength - lFilledLength) - lFilledLength, lProgress * 100);
free(lBuffer);
return 0;
}
but if you don't need to customize it at runtime:
#include <stdio.h> //for `printf`
#include <stddef.h> //for `size_t`
int main()
{
const char cFilled[] = "XXXXXXXXXX";
const char cEmpty[] = "----------";
float lProgress = 0.3;
size_t lFilledStart = (sizeof cFilled - 1) * (1 - lProgress);
size_t lEmptyStart = (sizeof cFilled - 1) * lProgress;
printf("\r[%s%s] %.1f%%",
cFilled + lFilledStart, //Array of Xs starting at `cTotalLength * (1 - lProgress)` (`cTotalLength * lProgress` characters remaining to EOS)
cEmpty + lEmptyStart, //Array of -s starting at `cTotalLength * lProgress`...
lProgress * 100 //Percentage
);
return 0;
}
A:
I needed to create a progress bar and some of the answers here would cause the bar to blink or display the percentage short of 100% when done. Here is a version that has no loop other than one that simulates cpu work, it only prints when the next progress unit is incremented.
#include <iostream>
#include <iomanip> // for setw, setprecision, setfill
#include <chrono>
#include <thread> // simulate work on cpu
int main()
{
int batch_size = 4000;
int num_bars = 50;
int batch_per_bar = batch_size / num_bars;
int progress = 0;
for (int i = 0; i < batch_size; i++) {
if (i % batch_per_bar == 0) {
std::cout << std::setprecision(3) <<
// fill bar with = up to current progress
'[' << std::setfill('=') << std::setw(progress) << '>'
// fill the rest of the bar with spaces
<< std::setfill(' ') << std::setw(num_bars - progress + 1)
// display bar percentage, \r brings it back to the beginning
<< ']' << std::setw(3) << ((i + 1) * 100 / batch_size) << '%'
<< "\r";
progress++;
}
// simulate work
std::this_thread::sleep_for(std::chrono::nanoseconds(1000000));
}
}
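One caveat worth noting (my own addition, not part of the original answer): because each update ends with '\r' instead of '\n', some terminals buffer the text and show nothing until the program exits. Flushing after each print, as the earlier answers do, makes the bar update immediately:
// right after printing the bar inside the if-block:
std::cout.flush(); // or fflush(stdout) in C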
|
How to display a progress indicator in pure C/C++ (cout/printf)?
|
I'm writing a console program in C++ to download a large file. I know the file size, and I start a work thread to download it. I want to show a progress indicator to make it look cooler.
How can I display different strings at different times, but at the same position, in cout or printf?
|
[
"With a fixed width of your output, use something like the following:\nfloat progress = 0.0;\nwhile (progress < 1.0) {\n int barWidth = 70;\n\n std::cout << \"[\";\n int pos = barWidth * progress;\n for (int i = 0; i < barWidth; ++i) {\n if (i < pos) std::cout << \"=\";\n else if (i == pos) std::cout << \">\";\n else std::cout << \" \";\n }\n std::cout << \"] \" << int(progress * 100.0) << \" %\\r\";\n std::cout.flush();\n\n progress += 0.16; // for demonstration only\n}\nstd::cout << std::endl;\n\nhttp://ideone.com/Yg8NKj\n[> ] 0 %\n[===========> ] 15 %\n[======================> ] 31 %\n[=================================> ] 47 %\n[============================================> ] 63 %\n[========================================================> ] 80 %\n[===================================================================> ] 96 %\n\nNote that this output is shown one line below each other, but in a terminal emulator (I think also in Windows command line) it will be printed on the same line.\nAt the very end, don't forget to print a newline before printing more stuff.\nIf you want to remove the bar at the end, you have to overwrite it with spaces, to print something shorter like for example \"Done.\".\nAlso, the same can of course be done using printf in C; adapting the code above should be straight-forward.\n",
"You can use a \"carriage return\" (\\r) without a line-feed (\\n), and hope your console does the right thing.\n",
"For a C solution with an adjustable progress bar width, you can use the following:\n#define PBSTR \"||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||\"\n#define PBWIDTH 60\n\nvoid printProgress(double percentage) {\n int val = (int) (percentage * 100);\n int lpad = (int) (percentage * PBWIDTH);\n int rpad = PBWIDTH - lpad;\n printf(\"\\r%3d%% [%.*s%*s]\", val, lpad, PBSTR, rpad, \"\");\n fflush(stdout);\n}\n\nIt will output something like this:\n 75% [|||||||||||||||||||||||||||||||||||||||||| ]\n\n",
"Take a look at boost progress_display\nhttp://www.boost.org/doc/libs/1_52_0/libs/timer/doc/original_timer.html#Class%20progress_display\nI think it may do what you need and I believe it is a header only library so nothing to link\n",
"You can print a carriage return character (\\r) to move the output \"cursor\" back to the beginning of the current line.\nFor a more sophisticated approach, take a look at something like ncurses (an API for console text-based interfaces).\n",
"I know I am a bit late in answering this question, but I made a simple class that does exactly what you want. (keep in mind that I wrote using namespace std; before this.): \nclass pBar {\npublic:\n void update(double newProgress) {\n currentProgress += newProgress;\n amountOfFiller = (int)((currentProgress / neededProgress)*(double)pBarLength);\n }\n void print() {\n currUpdateVal %= pBarUpdater.length();\n cout << \"\\r\" //Bring cursor to start of line\n << firstPartOfpBar; //Print out first part of pBar\n for (int a = 0; a < amountOfFiller; a++) { //Print out current progress\n cout << pBarFiller;\n }\n cout << pBarUpdater[currUpdateVal];\n for (int b = 0; b < pBarLength - amountOfFiller; b++) { //Print out spaces\n cout << \" \";\n }\n cout << lastPartOfpBar //Print out last part of progress bar\n << \" (\" << (int)(100*(currentProgress/neededProgress)) << \"%)\" //This just prints out the percent\n << flush;\n currUpdateVal += 1;\n }\n std::string firstPartOfpBar = \"[\", //Change these at will (that is why I made them public)\n lastPartOfpBar = \"]\",\n pBarFiller = \"|\",\n pBarUpdater = \"/-\\\\|\";\nprivate:\n int amountOfFiller,\n pBarLength = 50, //I would recommend NOT changing this\n currUpdateVal = 0; //Do not change\n double currentProgress = 0, //Do not change\n neededProgress = 100; //I would recommend NOT changing this\n};\n\nAn example on how to use:\nint main() {\n //Setup:\n pBar bar;\n //Main loop:\n for (int i = 0; i < 100; i++) { //This can be any loop, but I just made this as an example\n //Update pBar:\n bar.update(1); //How much new progress was added (only needed when new progress was added)\n //Print pBar:\n bar.print(); //This should be called more frequently than it is in this demo (you'll have to see what looks best for your program)\n sleep(1);\n }\n cout << endl;\n return 0;\n}\n\nNote: I made all of the classes' strings public so the bar's appearance can be easily changed.\n",
"Another way could be showing the \"Dots\" or any character you want .The below code will print progress indicator [sort of loading...]as dots every after 1 sec.\nPS : I am using sleep here. Think twice if performance is concern.\n#include<iostream>\nusing namespace std;\nint main()\n{\n int count = 0;\n cout << \"Will load in 10 Sec \" << endl << \"Loading \";\n for(count;count < 10; ++count){\n cout << \". \" ;\n fflush(stdout);\n sleep(1);\n }\n cout << endl << \"Done\" <<endl;\n return 0;\n}\n\n",
"Here is a simple one I made:\n#include <iostream>\n#include <thread>\n#include <chrono>\n#include <Windows.h>\nusing namespace std;\n\nint main() {\n // Changing text color (GetStdHandle(-11), colorcode)\n SetConsoleTextAttribute(GetStdHandle(-11), 14);\n \n int barl = 20;\n cout << \"[\"; \n for (int i = 0; i < barl; i++) { \n this_thread::sleep_for(chrono::milliseconds(100));\n cout << \":\"; \n }\n cout << \"]\";\n\n // Reset color\n SetConsoleTextAttribute(GetStdHandle(-11), 7);\n}\n\n",
"May be this code will helps you -\n#include <iostream>\n#include <string>\n#include <thread>\n#include <chrono>\n#include <cmath>\n\nusing namespace std;\n\nvoid show_progress_bar(int time, const std::string &message, char symbol)\n{\n std::string progress_bar;\n const double progress_level = 1.42;\n\n std::cout << message << \"\\n\\n\";\n\n for (double percentage = 0; percentage <= 100; percentage += progress_level)\n {\n progress_bar.insert(0, 1, symbol);\n std::cout << \"\\r [\" << std::ceil(percentage) << '%' << \"] \" << progress_bar;\n std::this_thread::sleep_for(std::chrono::milliseconds(time)); \n }\n std::cout << \"\\n\\n\";\n}\n\nint main()\n{\n show_progress_bar(100, \"progress\" , '#');\n}\n\n",
"Simple, you can just use string's fill constructor:\n#include <iostream> //for `cout`\n#include <string> //for the constructor\n#include <iomanip> //for `setprecision`\n\nusing namespace std;\n\nint main()\n{\n const int cTotalLength = 10;\n float lProgress = 0.3;\n cout << \n \"\\r[\" << //'\\r' aka carriage return should move printer's cursor back at the beginning of the current line\n string(cTotalLength * lProgress, 'X') << //printing filled part\n string(cTotalLength * (1 - lProgress), '-') << //printing empty part\n \"] \" <<\n setprecision(3) << 100 * lProgress << \"%\"; //printing percentage\n return 0;\n}\n\nWhich would print:\n[XXX-------] 30%\n\nIf you need it in pure C\nand you would like to be able to customize the size and filler characters at runtime:\n#include <stdio.h> //for `printf`\n#include <stdlib.h> //for `malloc`\n#include <string.h> //for `memset`\n\nint main()\n{\n const int cTotalLength = 10;\n char* lBuffer = malloc((cTotalLength + 1) * sizeof *lBuffer); //array to fit 10 chars + '\\0'\n lBuffer[cTotalLength] = '\\0'; //terminating it\n \n float lProgress = 0.3;\n\n int lFilledLength = lProgress * cTotalLength;\n \n memset(lBuffer, 'X', lFilledLength); //filling filled part\n memset(lBuffer + lFilledLength, '-', cTotalLength - lFilledLength); //filling empty part\n printf(\"\\r[%s] %.1f%%\", lBuffer, lProgress * 100); //same princip as with the CPP method\n\n //or you can combine it to a single line if you want to flex ;)\n //printf(\"\\r[%s] %.1f%%\", (char*)memset(memset(lBuffer, 'X', lFullLength) + lFullLength, '-', cTotalLength - lFullLength) - lFullLength, lProgress * 100);\n\n free(lBuffer);\n\n return 0;\n}\n\nbut if you don't need to customize it at runtime:\n#include <stdio.h> //for `printf`\n#include <stddef.h> //for `size_t`\n\nint main()\n{\n const char cFilled[] = \"XXXXXXXXXX\";\n const char cEmpty[] = \"----------\";\n float lProgress = 0.3;\n \n size_t lFilledStart = (sizeof cFilled - 1) * (1 - lProgress);\n size_t lEmptyStart = (sizeof cFilled - 1) * lProgress;\n\n printf(\"\\r[%s%s] %.1f%%\",\n cFilled + lFilledStart, //Array of Xs starting at `cTotalLength * (1 - lProgress)` (`cTotalLength * lProgress` characters remaining to EOS)\n cEmpty + lEmptyStart, //Array of -s starting at `cTotalLength * lProgress`...\n lProgress * 100 //Percentage\n );\n\n return 0;\n}\n\n",
"I needed to create a progress bar and some of the answers here would cause the bar to blink or display the percentage short of 100% when done. Here is a version that has no loop other than one that simulates cpu work, it only prints when the next progress unit is incremented.\n#include <iostream>\n#include <iomanip> // for setw, setprecision, setfill\n#include <chrono>\n#include <thread> // simulate work on cpu\n\nint main()\n{\n int batch_size = 4000;\n int num_bars = 50;\n int batch_per_bar = batch_size / num_bars;\n\n int progress = 0;\n\n for (int i = 0; i < batch_size; i++) {\n if (i % batch_per_bar == 0) { \n std::cout << std::setprecision(3) <<\n // fill bar with = up to current progress\n '[' << std::setfill('=') << std::setw(progress) << '>'\n // fill the rest of the bar with spaces\n << std::setfill(' ') << std::setw(num_bars - progress + 1)\n // display bar percentage, \\r brings it back to the beginning\n << ']' << std::setw(3) << ((i + 1) * 100 / batch_size) << '%'\n << \"\\r\";\n progress++;\n }\n \n // simulate work\n std::this_thread::sleep_for(std::chrono::nanoseconds(1000000));\n }\n}\n\n"
] |
[
124,
64,
62,
14,
13,
7,
5,
3,
0,
0,
0
] |
[] |
[] |
[
"c",
"c++",
"c++11",
"io",
"user_interface"
] |
stackoverflow_0014539867_c_c++_c++11_io_user_interface.txt
|
Q:
Use anchors with react-router
How can I use react-router, and have a link navigate to a particular place on a particular page? (e.g. /home-page#section-three)
Details:
I am using react-router in my React app.
I have a site-wide navbar that needs to link to particular parts of a page, like /home-page#section-three.
So even if you are on say /blog, clicking this link will still load the home page, with section-three scrolled into view. This is exactly how a standard <a href="/home-page#section-three"> would work.
Note: The creators of react-router have not given an explicit answer. They say it is in progress, and in the mean time use other people's answers. I'll do my best to keep this question updated with progress & possible solutions until a dominant one emerges.
Research:
How to use normal anchor links with react-router
This question is from 2015 (so 10 years ago in react time). The most upvoted answer says to use HistoryLocation instead of HashLocation. Basically that means store the location in the window history, instead of in the hash fragment.
Bad news is... even using HistoryLocation (what most tutorials and docs say to do in 2016), anchor tags still don't work.
https://github.com/ReactTraining/react-router/issues/394
A thread on ReactTraining about how to use anchor links with react-router. There is no confirmed answer. Be careful, since most proposed answers are out of date (e.g. using the "hash" prop in <Link>)
A:
React Router Hash Link worked for me and is easy to install and implement:
$ npm install --save react-router-hash-link
In your component.js import it as Link:
import { HashLink as Link } from 'react-router-hash-link';
And instead of using an anchor <a>, use <Link> :
<Link to="home-page#section-three">Section three</Link>
Note: I used HashRouter instead of Router.
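For reference, a minimal HashRouter setup might look like the sketch below (my own illustration, assuming react-router-dom v5; Home is a placeholder component):
import React from 'react';
import ReactDOM from 'react-dom';
import { HashRouter, Route } from 'react-router-dom';
import Home from './Home'; // placeholder

ReactDOM.render(
  <HashRouter>
    <Route exact path="/home-page" component={Home} />
  </HashRouter>,
  document.getElementById('root')
);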
A:
This solution works with react-router v5
import React, { useEffect } from 'react'
import { Route, Switch, useLocation } from 'react-router-dom'
export default function App() {
const { pathname, hash, key } = useLocation();
useEffect(() => {
// if not a hash link, scroll to top
if (hash === '') {
window.scrollTo(0, 0);
}
// else scroll to id
else {
setTimeout(() => {
const id = hash.replace('#', '');
const element = document.getElementById(id);
if (element) {
element.scrollIntoView();
}
}, 0);
}
}, [pathname, hash, key]); // do this on route change
return (
<Switch>
<Route exact path="/" component={Home} />
.
.
</Switch>
)
}
In the component
<Link to="/#home"> Home </Link>
A:
Here is one solution I have found (October 2016). It is cross-browser compatible (tested in Internet Explorer, Firefox, Chrome, mobile Safari, and Safari).
You can provide an onUpdate property to your Router. This is called any time a route updates. This solution uses the onUpdate property to check if there is a DOM element that matches the hash, and then scrolls to it after the route transition is complete.
You must be using browserHistory and not hashHistory.
The answer is by "Rafrax" in Hash links #394.
Add this code to the place where you define <Router>:
import React from 'react';
import { render } from 'react-dom';
import { Router, Route, browserHistory } from 'react-router';
const routes = (
// your routes
);
function hashLinkScroll() {
const { hash } = window.location;
if (hash !== '') {
// Push onto callback queue so it runs after the DOM is updated,
// this is required when navigating from a different page so that
// the element is rendered on the page before trying to getElementById.
setTimeout(() => {
const id = hash.replace('#', '');
const element = document.getElementById(id);
if (element) element.scrollIntoView();
}, 0);
}
}
render(
<Router
history={browserHistory}
routes={routes}
onUpdate={hashLinkScroll}
/>,
document.getElementById('root')
)
If you are feeling lazy and don't want to copy that code, you can use Anchorate which just defines that function for you. https://github.com/adjohnson916/anchorate
A:
Here's a simple solution that doesn't require any subscriptions nor third-party packages. It should work with react-router@3 and above and react-router-dom.
Working example: https://fglet.codesandbox.io/
Source (unfortunately, it doesn't currently work within the editor):
#ScrollHandler Hook Example
import { useEffect } from "react";
import PropTypes from "prop-types";
import { withRouter } from "react-router-dom";
const ScrollHandler = ({ location, children }) => {
useEffect(
() => {
const element = document.getElementById(location.hash.replace("#", ""));
setTimeout(() => {
window.scrollTo({
behavior: element ? "smooth" : "auto",
top: element ? element.offsetTop : 0
});
}, 100);
}, [location]);
return children;
};
ScrollHandler.propTypes = {
children: PropTypes.node.isRequired,
location: PropTypes.shape({
hash: PropTypes.string,
}).isRequired
};
export default withRouter(ScrollHandler);
#ScrollHandler Class Example
import { PureComponent } from "react";
import PropTypes from "prop-types";
import { withRouter } from "react-router-dom";
class ScrollHandler extends PureComponent {
componentDidMount = () => this.handleScroll();
componentDidUpdate = prevProps => {
const { location: { pathname, hash } } = this.props;
if (
pathname !== prevProps.location.pathname ||
hash !== prevProps.location.hash
) {
this.handleScroll();
}
};
handleScroll = () => {
const { location: { hash } } = this.props;
const element = document.getElementById(hash.replace("#", ""));
setTimeout(() => {
window.scrollTo({
behavior: element ? "smooth" : "auto",
top: element ? element.offsetTop : 0
});
}, 100);
};
render = () => this.props.children;
};
ScrollHandler.propTypes = {
children: PropTypes.node.isRequired,
location: PropTypes.shape({
hash: PropTypes.string,
pathname: PropTypes.string,
})
};
export default withRouter(ScrollHandler);
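Usage is the same for both versions (my own sketch; BrowserRouter, Switch, Route, and Home are placeholders): wrap your routes with the handler inside the router so that withRouter can inject location:
<BrowserRouter>
  <ScrollHandler>
    <Switch>
      <Route exact path="/" component={Home} />
    </Switch>
  </ScrollHandler>
</BrowserRouter>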
A:
Just avoid using react-router for local scrolling:
document.getElementById('myElementSomewhere').scrollIntoView()
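For example, wired to a plain button (my own illustration; the element id is hypothetical):
<button onClick={() => document.getElementById('section-three').scrollIntoView({ behavior: 'smooth' })}>
  Section three
</button>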
A:
The problem with Don P's answer is sometimes the element with the id is still being rendered or loaded if that section depends on some async action. The following function will try to find the element by id and navigate to it and retry every 100 ms until it reaches a maximum of 50 retries:
scrollToLocation = () => {
const { hash } = window.location;
if (hash !== '') {
let retries = 0;
const id = hash.replace('#', '');
const scroll = () => {
retries += 1;
if (retries > 50) return;
const element = document.getElementById(id);
if (element) {
setTimeout(() => element.scrollIntoView(), 0);
} else {
setTimeout(scroll, 100);
}
};
scroll();
}
}
A:
I adapted Don P's solution (see above) to react-router 4 (Jan 2019) because there is no onUpdate prop on <Router> any more.
import React from 'react';
import * as ReactDOM from 'react-dom';
import { Router, Route } from 'react-router';
import { createBrowserHistory } from 'history';
const browserHistory = createBrowserHistory();
browserHistory.listen(location => {
const { hash } = location;
if (hash !== '') {
// Push onto callback queue so it runs after the DOM is updated,
// this is required when navigating from a different page so that
// the element is rendered on the page before trying to getElementById.
setTimeout(
() => {
const id = hash.replace('#', '');
const element = document.getElementById(id);
if (element) {
element.scrollIntoView();
}
},
0
);
}
});
ReactDOM.render(
<Router history={browserHistory}>
// insert your routes here...
</Router>,
document.getElementById('root')
)
A:
<Link to='/homepage#faq-1'>Question 1</Link>
useEffect(() => {
const hash = props.history.location.hash
if (hash && document.getElementById(hash.substr(1))) {
// Check if there is a hash and if an element with that id exists
document.getElementById(hash.substr(1)).scrollIntoView({behavior: "smooth"})
}
}, [props.history.location.hash]) // Fires when component mounts and every time hash changes
A:
An alternative: react-scrollchor https://www.npmjs.com/package/react-scrollchor
react-scrollchor: A React component for scroll to #hash links with smooth animations. Scrollchor is a mix of Scroll and Anchor
Note: It doesn't use react-router
A:
For simple in-page navigation you could add something like this, though it doesn't handle initializing the page -
// handle back/fwd buttons
function hashHandler() {
const id = window.location.hash.slice(1) // remove leading '#'
const el = document.getElementById(id)
if (el) {
el.scrollIntoView()
}
}
window.addEventListener('hashchange', hashHandler, false)
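To also cover the initial page load (my own addition to the snippet above), you can run the same handler once the DOM is ready:
window.addEventListener('DOMContentLoaded', hashHandler)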
A:
Create a ScrollHandler component
import { useEffect } from "react";
import { useLocation } from "react-router-dom";
export const ScrollHandler = ({ children}) => {
const { pathname, hash } = useLocation()
const handleScroll = () => {
const element = document.getElementById(hash.replace("#", ""));
setTimeout(() => {
window.scrollTo({
behavior: element ? "smooth" : "auto",
top: element ? element.offsetTop : 0
});
}, 100);
};
useEffect(() => {
handleScroll()
}, [pathname, hash])
return children
}
Import ScrollHandler component directly into your app.js file
or you can create a higher order component withScrollHandler and export your app as withScrollHandler(App)
And in links <Link to='/page#section'>Section</Link> or <Link to='#section'>Section</Link>
And add id="section" in your section component
A:
I know it's old but in my latest version of react-router-dom, this simple attribute reloadDocument is working:
<div>
<Link to="#result" reloadDocument>GO TO ⬇ (Navigate to Same Page) </Link>
</div>
<div id='result'>CLICK 'GO TO' ABOVE TO REACH HERE</div>
|
Use anchors with react-router
|
How can I use react-router, and have a link navigate to a particular place on a particular page? (e.g. /home-page#section-three)
Details:
I am using react-router in my React app.
I have a site-wide navbar that needs to link to particular parts of a page, like /home-page#section-three.
So even if you are on say /blog, clicking this link will still load the home page, with section-three scrolled into view. This is exactly how a standard <a href="/home-page#section-three"> would work.
Note: The creators of react-router have not given an explicit answer. They say it is in progress, and in the mean time use other people's answers. I'll do my best to keep this question updated with progress & possible solutions until a dominant one emerges.
Research:
How to use normal anchor links with react-router
This question is from 2015 (so 10 years ago in react time). The most upvoted answer says to use HistoryLocation instead of HashLocation. Basically that means store the location in the window history, instead of in the hash fragment.
Bad news is... even using HistoryLocation (what most tutorials and docs say to do in 2016), anchor tags still don't work.
https://github.com/ReactTraining/react-router/issues/394
A thread on ReactTraining about how to use anchor links with react-router. There is no confirmed answer. Be careful, since most proposed answers are out of date (e.g. using the "hash" prop in <Link>)
|
[
"React Router Hash Link worked for me and is easy to install and implement:\n$ npm install --save react-router-hash-link\n\nIn your component.js import it as Link:\nimport { HashLink as Link } from 'react-router-hash-link';\n\nAnd instead of using an anchor <a>, use <Link> :\n<Link to=\"home-page#section-three\">Section three</Link>\n\nNote: I used HashRouter instead of Router:\n",
"This solution works with react-router v5\nimport React, { useEffect } from 'react'\nimport { Route, Switch, useLocation } from 'react-router-dom'\n\nexport default function App() {\n const { pathname, hash, key } = useLocation();\n\n useEffect(() => {\n // if not a hash link, scroll to top\n if (hash === '') {\n window.scrollTo(0, 0);\n }\n // else scroll to id\n else {\n setTimeout(() => {\n const id = hash.replace('#', '');\n const element = document.getElementById(id);\n if (element) {\n element.scrollIntoView();\n }\n }, 0);\n }\n }, [pathname, hash, key]); // do this on route change\n\n return (\n <Switch>\n <Route exact path=\"/\" component={Home} />\n .\n .\n </Switch>\n )\n}\n\nIn the component\n<Link to=\"/#home\"> Home </Link>\n\n",
"Here is one solution I have found (October 2016). It is is cross-browser compatible (tested in Internet Explorer, Firefox, Chrome, mobile Safari, and Safari).\nYou can provide an onUpdate property to your Router. This is called any time a route updates. This solution uses the onUpdate property to check if there is a DOM element that matches the hash, and then scrolls to it after the route transition is complete.\nYou must be using browserHistory and not hashHistory.\nThe answer is by \"Rafrax\" in Hash links #394.\nAdd this code to the place where you define <Router>:\nimport React from 'react';\nimport { render } from 'react-dom';\nimport { Router, Route, browserHistory } from 'react-router';\n\nconst routes = (\n // your routes\n);\n\nfunction hashLinkScroll() {\n const { hash } = window.location;\n if (hash !== '') {\n // Push onto callback queue so it runs after the DOM is updated,\n // this is required when navigating from a different page so that\n // the element is rendered on the page before trying to getElementById.\n setTimeout(() => {\n const id = hash.replace('#', '');\n const element = document.getElementById(id);\n if (element) element.scrollIntoView();\n }, 0);\n }\n}\n\nrender(\n <Router\n history={browserHistory}\n routes={routes}\n onUpdate={hashLinkScroll}\n />,\n document.getElementById('root')\n)\n\nIf you are feeling lazy and don't want to copy that code, you can use Anchorate which just defines that function for you. https://github.com/adjohnson916/anchorate\n",
"Here's a simple solution that doesn't require any subscriptions nor third-party packages. It should work with react-router@3 and above and react-router-dom.\nWorking example: https://fglet.codesandbox.io/\nSource (unfortunately, it doesn't currently work within the editor):\n\n\n#ScrollHandler Hook Example\nimport { useEffect } from \"react\";\nimport PropTypes from \"prop-types\";\nimport { withRouter } from \"react-router-dom\";\n\nconst ScrollHandler = ({ location, children }) => {\n useEffect(\n () => {\n const element = document.getElementById(location.hash.replace(\"#\", \"\"));\n\n setTimeout(() => {\n window.scrollTo({\n behavior: element ? \"smooth\" : \"auto\",\n top: element ? element.offsetTop : 0\n });\n }, 100);\n }, [location]);\n );\n\n return children;\n};\n\nScrollHandler.propTypes = {\n children: PropTypes.node.isRequired,\n location: PropTypes.shape({\n hash: PropTypes.string,\n }).isRequired\n};\n\nexport default withRouter(ScrollHandler);\n\n#ScrollHandler Class Example\nimport { PureComponent } from \"react\";\nimport PropTypes from \"prop-types\";\nimport { withRouter } from \"react-router-dom\";\n\nclass ScrollHandler extends PureComponent {\n componentDidMount = () => this.handleScroll();\n\n componentDidUpdate = prevProps => {\n const { location: { pathname, hash } } = this.props;\n if (\n pathname !== prevProps.location.pathname ||\n hash !== prevProps.location.hash\n ) {\n this.handleScroll();\n }\n };\n\n handleScroll = () => {\n const { location: { hash } } = this.props;\n const element = document.getElementById(hash.replace(\"#\", \"\"));\n\n setTimeout(() => {\n window.scrollTo({\n behavior: element ? \"smooth\" : \"auto\",\n top: element ? element.offsetTop : 0\n });\n }, 100);\n };\n\n render = () => this.props.children;\n};\n\nScrollHandler.propTypes = {\n children: PropTypes.node.isRequired,\n location: PropTypes.shape({\n hash: PropTypes.string,\n pathname: PropTypes.string,\n })\n};\n\nexport default withRouter(ScrollHandler);\n\n",
"Just avoid using react-router for local scrolling:\ndocument.getElementById('myElementSomewhere').scrollIntoView() \n\n",
"The problem with Don P's answer is sometimes the element with the id is still been rendered or loaded if that section depends on some async action. The following function will try to find the element by id and navigate to it and retry every 100 ms until it reaches a maximum of 50 retries:\nscrollToLocation = () => {\n const { hash } = window.location;\n if (hash !== '') {\n let retries = 0;\n const id = hash.replace('#', '');\n const scroll = () => {\n retries += 0;\n if (retries > 50) return;\n const element = document.getElementById(id);\n if (element) {\n setTimeout(() => element.scrollIntoView(), 0);\n } else {\n setTimeout(scroll, 100);\n }\n };\n scroll();\n }\n}\n\n",
"I adapted Don P's solution (see above) to react-router 4 (Jan 2019) because there is no onUpdate prop on <Router> any more.\nimport React from 'react';\nimport * as ReactDOM from 'react-dom';\nimport { Router, Route } from 'react-router';\nimport { createBrowserHistory } from 'history';\n\nconst browserHistory = createBrowserHistory();\n\nbrowserHistory.listen(location => {\n const { hash } = location;\n if (hash !== '') {\n // Push onto callback queue so it runs after the DOM is updated,\n // this is required when navigating from a different page so that\n // the element is rendered on the page before trying to getElementById.\n setTimeout(\n () => {\n const id = hash.replace('#', '');\n const element = document.getElementById(id);\n if (element) {\n element.scrollIntoView();\n }\n },\n 0\n );\n }\n});\n\nReactDOM.render(\n <Router history={browserHistory}>\n // insert your routes here...\n />,\n document.getElementById('root')\n)\n\n",
"<Link to='/homepage#faq-1'>Question 1</Link>\n\nuseEffect(() => {\n const hash = props.history.location.hash\n if (hash && document.getElementById(hash.substr(1))) {\n // Check if there is a hash and if an element with that id exists\n document.getElementById(hash.substr(1)).scrollIntoView({behavior: \"smooth\"})\n }\n}, [props.history.location.hash]) // Fires when component mounts and every time hash changes\n\n",
"An alternative: react-scrollchor https://www.npmjs.com/package/react-scrollchor\nreact-scrollchor: A React component for scroll to #hash links with smooth animations. Scrollchor is a mix of Scroll and Anchor\nNote: It doesn't use react-router\n",
"For simple in-page navigation you could add something like this, though it doesn't handle initializing the page - \n// handle back/fwd buttons\nfunction hashHandler() {\n const id = window.location.hash.slice(1) // remove leading '#'\n const el = document.getElementById(id)\n if (el) {\n el.scrollIntoView()\n }\n}\nwindow.addEventListener('hashchange', hashHandler, false)\n\n",
"Create A scrollHandle component\n import { useEffect } from \"react\";\n import { useLocation } from \"react-router-dom\";\n\n export const ScrollHandler = ({ children}) => {\n\n const { pathname, hash } = useLocation()\n\n const handleScroll = () => {\n\n const element = document.getElementById(hash.replace(\"#\", \"\"));\n\n setTimeout(() => {\n window.scrollTo({\n behavior: element ? \"smooth\" : \"auto\",\n top: element ? element.offsetTop : 0\n });\n }, 100);\n };\n\n useEffect(() => {\n handleScroll()\n }, [pathname, hash])\n\n return children\n }\n\nImport ScrollHandler component directly into your app.js file\nor you can create a higher order component withScrollHandler and export your app as withScrollHandler(App)\nAnd in links <Link to='/page#section'>Section</Link> or <Link to='#section'>Section</Link>\nAnd add id=\"section\" in your section component\n",
"I know it's old but in my latest [email protected], this simple attribute reloadDocument is working:\ndiv>\n <Link to=\"#result\" reloadDocument>GO TO ⬇ (Navigate to Same Page) </Link>\n</div>\n<div id='result'>CLICK 'GO TO' ABOVE TO REACH HERE</div>\n\n"
] |
[
82,
45,
28,
26,
13,
9,
5,
5,
1,
1,
1,
0
] |
[] |
[] |
[
"anchor",
"javascript",
"react_router",
"reactjs",
"routes"
] |
stackoverflow_0040280369_anchor_javascript_react_router_reactjs_routes.txt
|
Q:
How to minify and factor-bundle files in the same browserify command?
I currently have this factor-bundle command which I use to bundle my files, and pull everything common into a common file:
browserify index.js bar-charts.js list-filter.js dashboard.js
-p [ factor-bundle -o ../../static/js/index.js -o ../../static/js/bar-chart.js -o ../../static/js/list-filter.js -o ../../static/js/dashboard.js ]
-o ../../static/js/common.js
I previously also used this command to uglify individual files:
browserify index.js | uglifyjs > ../../static/js/index.min.js
How can I both combine files with factor-bundle, and minify them with uglifyjs, in the same command?
I found this example in the factor-bundle docs, but I don't really understand how to adapt it.
(I could also use two commands, if that works better. I just want to end up with minified and combined files!)
A:
I happened to be looking into this area recently and stumbled on something that I think might help you.
browserify files/*.js \
-p [ ../ -o 'uglifyjs -cm | tee bundle/`basename $FILE` | gzip > bundle/`basename $FILE`.gz' ] \
| uglifyjs -cm | tee bundle/common.js | gzip > bundle/common.js.gz
I've not dabbled much with browserify but to me this looks as though it is simply piping the output from factor-bundle into uglify.
source: https://gist.github.com/substack/68f8d502be42d5cd4942
Hope this helps someone
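Adapting that pattern to the files and paths from the question (an untested sketch; it relies on factor-bundle accepting a shell command string for each -o output, as in the gist) would give something like:
browserify index.js bar-charts.js list-filter.js dashboard.js \
  -p [ factor-bundle \
    -o 'uglifyjs -cm > ../../static/js/index.js' \
    -o 'uglifyjs -cm > ../../static/js/bar-chart.js' \
    -o 'uglifyjs -cm > ../../static/js/list-filter.js' \
    -o 'uglifyjs -cm > ../../static/js/dashboard.js' ] \
  | uglifyjs -cm > ../../static/js/common.js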
A:
To minify and factor-bundle files in the same browserify command, you can use the --minify and --factor-bundle options, respectively. Here's an example of how you can use these options in a browserify command:
browserify main.js --minify --factor-bundle -o bundle.js
In this command, we use the --minify option to minify the files that are bundled by browserify, and the --factor-bundle option to factor out common dependencies into separate bundles. The -o option specifies the output file for the bundled code, which in this case is bundle.js.
Keep in mind that the --minify and --factor-bundle options may have different effects on the bundled code, depending on the specific files and dependencies that are included in the bundle. It is recommended to test the output of the browserify command to ensure that it meets your requirements.
|
How to minify and factor-bundle files in the same browserify command?
|
I currently have this factor-bundle command which I use to bundle my files, and pull everything common into a common file:
browserify index.js bar-charts.js list-filter.js dashboard.js
-p [ factor-bundle -o ../../static/js/index.js -o ../../static/js/bar-chart.js -o ../../static/js/list-filter.js -o ../../static/js/dashboard.js ]
-o ../../static/js/common.js
I previously also used this command to uglify individual files:
browserify index.js | uglifyjs > ../../static/js/index.min.js
How can I both combine files with factor-bundle, and minify them with uglifyjs, in the same command?
I found this example in the factor-bundle docs, but I don't really understand how to adapt it.
(I could also use two commands, if that works better. I just want to end up with minified and combined files!)
|
[
"I happened to have been looking into this area recently and stumbled on what I think might be able to help you.\nbrowserify files/*.js \\\n -p [ ../ -o 'uglifyjs -cm | tee bundle/`basename $FILE` | gzip > bundle/`basename $FILE`.gz' ] \\\n | uglifyjs -cm | tee bundle/common.js | gzip > bundle/common.js.gz\n\nI've not dabbled much with browserify but to me this looks as though it is simply piping the output from factor-bundle into uglify.\nsource: https://gist.github.com/substack/68f8d502be42d5cd4942\nHope this helps someone\n",
"To minify and factor-bundle files in the same browserify command, you can use the --minify and --factor-bundle options, respectively. Here's an example of how you can use these options in a browserify command:\nbrowserify main.js --minify --factor-bundle -o bundle.js\n\nIn this command, we use the --minify option to minify the files that are bundled by browserify, and the --factor-bundle option to factor out common dependencies into separate bundles. The -o option specifies the output file for the bundled code, which in this case is bundle.js.\nKeep in mind that the --minify and --factor-bundle options may have different effects on the bundled code, depending on the specific files and dependencies that are included in the bundle. It is recommended to test the output of the browserify command to ensure that it meets your requirements.\n"
] |
[
0,
0
] |
[] |
[] |
[
"browserify",
"factor_bundle",
"javascript",
"uglifyjs"
] |
stackoverflow_0032018901_browserify_factor_bundle_javascript_uglifyjs.txt
|
Q:
Python Heatmap with calculated fields
Looking to create a heatmap from a dataframe. The index is each car-crash event. Columns are Year, Month (1 - 12), Day of the Week (1 - 7), Hour of Day (0 - 23), Fatal (1) / non-Fatal (2), etc.
I am trying to create a heatmap with the x axis being Hour of Day and the y axis being Day of the Week, with a calculated field for each "cell" corresponding to the fatality rate of each hour and day.
Sunday
Saturday
Friday
Thursday
Wednesday
Tuesday
Monday
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 etc.
dbh = df[df.Fatal == 1].groupby('Hour').Fatal.count()
sbh = df[df.Fatal == 2].groupby('Hour').Fatal.count()
final_dbh = (dbh /(sbh+ dbh)* 100)
Hour
0.0 3.429764
1.0 3.696422
2.0 3.559404
3.0 4.093886
4.0 3.464674
5.0 3.276747
6.0 1.827378
7.0 1.021872
8.0 0.928400
9.0 1.201049
10.0 1.234164
11.0 1.477833
12.0 1.437418
13.0 1.705571
14.0 1.595436
15.0 1.219512
16.0 1.256826
17.0 1.514321
18.0 1.375315
19.0 1.384932
20.0 2.331501
21.0 2.066446
22.0 1.997928
23.0 3.506366
Name: Fatal, dtype: float64
dbd = df[df.Fatal == 1].groupby('Weekday').Fatal.count()
sbd = df[df.Fatal == 2].groupby('Weekday').Fatal.count()
final_dbd = (dbd /(sbd + dbd)* 100)
Weekday
7 2.070770
4 1.694125
6 1.602799
5 1.579378
3 1.524816
1 1.473684
2 1.282576
Name: Fatal, dtype: float64
db = df[df['Fatal'] == 1]
df_test = db.groupby(["Month" , "Weekday"]).Fatal.count()
Month Weekday
1.0 1 34
2 48
3 43
4 75
5 36
I think I've sorted out how to get the numbers I need, but how do I assign them to the heatmap I'm looking for?
A:
First, use your data to make a 2-D matrix with rows representing the days (sunday, ...) and the columns representing the numbers (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18).
Once you have this 2-D matrix use the below code to plot the heatmap
import numpy as np
import matplotlib.pyplot as plt
# create a 10x10 random matrix
data = np.random.random((10, 10)) # REPLACE WITH YOUR DATA
print(data.shape)
fig, ax = plt.subplots()
im = ax.imshow(data)
# show image
plt.show()
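To build that 2-D matrix from the columns named in the question, one option (my own sketch; it assumes df, Weekday, Hour, and Fatal exist exactly as described) is to compute the fatality rate per cell and pivot Hour into columns:
import pandas as pd

rate = (df.assign(is_fatal=df['Fatal'].eq(1))
          .groupby(['Weekday', 'Hour'])['is_fatal']
          .mean()                        # fraction of fatal crashes per cell
          .unstack(fill_value=0) * 100)  # rows: Weekday, columns: Hour

data = rate.to_numpy()  # 7 x 24 matrix to pass to ax.imshow(data)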
|
Python Heatmap with calculated fields
|
Looking to create a heatmap from a dataframe. The index is each car-crash event. Columns are Year, Month (1 - 12), Day of the Week (1 - 7), Hour of Day (0 - 23), Fatal (1) / non-Fatal (2), etc.
I am trying to create a heatmap with the x axis being Hour of Day and the y axis being Day of the Week, with a calculated field for each "cell" corresponding to the fatality rate of each hour and day.
Sunday
Saturday
Friday
Thursday
Wednesday
Tuesday
Monday
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 etc.
dbh = df[df.Fatal == 1].groupby('Hour').Fatal.count()
sbh = df[df.Fatal == 2].groupby('Hour').Fatal.count()
final_dbh = (dbh /(sbh+ dbh)* 100)
Hour
0.0 3.429764
1.0 3.696422
2.0 3.559404
3.0 4.093886
4.0 3.464674
5.0 3.276747
6.0 1.827378
7.0 1.021872
8.0 0.928400
9.0 1.201049
10.0 1.234164
11.0 1.477833
12.0 1.437418
13.0 1.705571
14.0 1.595436
15.0 1.219512
16.0 1.256826
17.0 1.514321
18.0 1.375315
19.0 1.384932
20.0 2.331501
21.0 2.066446
22.0 1.997928
23.0 3.506366
Name: Fatal, dtype: float64
dbd = df[df.Fatal == 1].groupby('Weekday').Fatal.count()
sbd = df[df.Fatal == 2].groupby('Weekday').Fatal.count()
final_dbd = (dbd /(sbd + dbd)* 100)
Weekday
7 2.070770
4 1.694125
6 1.602799
5 1.579378
3 1.524816
1 1.473684
2 1.282576
Name: Fatal, dtype: float64
db = df[df['Fatal'] == 1]
df_test = db.groupby(["Month" , "Weekday"]).Fatal.count()
Month Weekday
1.0 1 34
2 48
3 43
4 75
5 36
I think I've sorted out how to get the numbers I need, but how do I assign them to the heatmap I'm looking for?
|
[
"First, use your data to make a 2-D matrix with rows representing the days (sunday, ...) and the columns representing the numbers (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18).\nOnce you have this 2-D matrix use the below code to plot the heatmap\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# create a 10x10 random matrix\ndata = np.random.random((10, 10)) # REPLACE WITH YOUR DATA\nprint(data.shape)\n\nfig, ax = plt.subplots()\nim = ax.imshow(data)\n\n# show image\nplt.show()\n\n"
] |
[
0
] |
[] |
[] |
[
"group_by",
"heatmap",
"pandas",
"python"
] |
stackoverflow_0074664195_group_by_heatmap_pandas_python.txt
|
Q:
Why is exponentiation applied right to left?
I am reading an Intro to Python textbook and came across this line:
Operators on the same row have equal precedence and are applied left to right, except for exponentiation, which is applied right to left.
I understand most of this, but I do not understand why they say exponentiation is applied right to left. They do not provide any examples either. Also, am I allowed to ask general questions like this, or are only problem solving questions preferred?
A:
The ** operator follows normal mathematical conventions; it is right-associative:
In the usual computer science jargon, exponentiation in mathematics is right-associative, which means that x^y^z should be read as x^(y^z), not (x^y)^z. In expositions of the BODMAS rules that are careful enough to address this question, the rule is to evaluate the top exponent first.
and from Wikipedia on the Order of Operations:
If exponentiation is indicated by stacked symbols, the usual rule is to work from the top down, because exponentiation is right-associative in mathematics.
So 2 ** 3 ** 4 is calculated as 2 ** (3 ** 4) (== 2417851639229258349412352) not (2 ** 3) ** 4 (== 4096).
This is pretty universal across programming languages; it is called right-associativity, although there are exceptions, with Excel and MATLAB being the most notable.
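A quick check in the interpreter (my own illustration):
>>> 2 ** 3 ** 4 == 2 ** (3 ** 4)
True
>>> (2 ** 3) ** 4
4096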
A:
from http://docs.python.org/reference/expressions.html
Operators in the same box group left to right (except for comparisons, including tests, which all have the same precedence and chain from left to right — see section Comparisons — and exponentiation, which groups from right to left).
>>> 2 ** 2 ** 2
16
>>> 2 ** 2 ** 2 ** 2
65536
>>> (2 ** 2 ** 2) ** 2
256
For the middle case 2 ** 2 ** 2 ** 2, these are the intermediate steps -
broken down to 2 ** (2 ** (2 ** 2))
2 ** (2 ** (4)) # progressing right to left
2 ** (16) # this is 2 to the power 16
which finally evals to 65536
Hope that helps!
A:
This explanation seems quite clear to me. Let me show you an example that might enlighten this :
print 2 ** 2 ** 3 # prints 256
If you would read this from left to right, you would first do 2 ** 2, which would result in 4, and then 4 ** 3, which would give us 64.
It seems we have a wrong answer. :)
However, from right to left...
You would first do 2 ** 3, which would be 8, and then, 2 ** 8, giving us 256 !
I hope I was able to enlighten this point for you. :)
EDIT : Martijn Pieters answered more accurately to your question, sorry. I forgot to say it was mathematical conventions.
A:
Power operator, exponentiation, is handled differently across applications and languages.
If it has LEFT associativity then 2^3^4 = (2^3)^4 = 4096.
If it has RIGHT associativity then 2^3^4 = 2^(3^4) = 2417851639229260000000000.
In Excel, Matlab, Apple Numbers and more others exponentiation has LEFT associativity.
In Python, Ruby, Google Sheets, ... - RIGHT associativity.
Here is a vast list of how different languages and apps handle exponentiation: Exponentiation Associativity and Standard Math Notation
|
Why is exponentiation applied right to left?
|
I am reading an Intro to Python textbook and came across this line:
Operators on the same row have equal precedence and are applied left to right, except for exponentiation, which is applied right to left.
I understand most of this, but I do not understand why they say exponentiation is applied right to left. They do not provide any examples either. Also, am I allowed to ask general questions like this, or are only problem solving questions preferred?
|
[
"The ** operator follows normal mathematical conventions; it is right-associative:\n\nIn the usual computer science jargon, exponentiation in mathematics is right-associative, which means that xyz should be read as x(yz), not (xy)z. In expositions of the BODMAS rules that are careful enough to address this question, the rule is to evaluate the top exponent first.\n\nand from Wikipedia on the Order of Operations:\n\nIf exponentiation is indicated by stacked symbols, the usual rule is to work from the top down, because exponention is right-associative in mathematics.\n\nSo 2 ** 3 ** 4 is calculated as 2 ** (3 ** 4) (== 2417851639229258349412352) not (2 ** 3) ** 4 (== 4096).\nThis is pretty universal across programming languages; it is called right-associativity, although there are exceptions, with Excel and MATLAB being the most notable.\n",
"from http://docs.python.org/reference/expressions.html\nOperators in the same box group left to right (except for comparisons, including tests, which all have the same precedence and chain from left to right — see section Comparisons — and exponentiation, which groups from right to left).\n>>> 2 ** 2 ** 2\n16\n>>> 2 ** 2 ** 2 ** 2\n65536\n>>> (2 ** 2 ** 2) ** 2\n256\n\nFor the middle case 2 ** 2 ** 2 ** 2, this are the intermediate steps - \n\nbroken down to 2 ** (2 ** (2 ** 2))\n2 ** (2 ** (4)) # progressing right to left\n2 ** (16) # this is 2 to the power 16\nwhich finally evals to 65536\nHope that helps!\n\n",
"This explanation seems quite clear to me. Let me show you an example that might enlighten this :\nprint 2 ** 2 ** 3 # prints 256\nIf you would read this from left to right, you would first do 2 ** 2, which would result in 4, and then 4 ** 3, which would give us 64.\nIt seems we have a wrong answer. :)\nHowever, from right to left...\nYou would first do 2 ** 3, which would be 8, and then, 2 ** 8, giving us 256 !\nI hope I was able to enlighten this point for you. :)\nEDIT : Martijn Pieters answered more accurately to your question, sorry. I forgot to say it was mathematical conventions.\n",
"Power operator, exponentiation, is handled differently across applications and languages.\nIf it has LEFT associativity then 2^3^4 = (2^3)^4 = 4096.\nIf it has RIGHT associativity then 2^3^4 = 2^(3^4) = 2417851639229260000000000.\nIn Excel, Matlab, Apple Numbers and more others exponentiation has LEFT associativity.\nIn Python, Ruby, Google Sheets, ... - RIGHT associativity.\nHere is a vast list of how different languages and apps handle exponentiation: Exponentiation Associativity and Standard Math Notation\n"
] |
[
23,
2,
0,
0
] |
[] |
[] |
[
"exponentiation",
"operators",
"python",
"python_3.x"
] |
stackoverflow_0047429513_exponentiation_operators_python_python_3.x.txt
|
Q:
Trying to Combine Two Scatter Plots and Two Line Graphs with Matplotlib
I'm trying to create a graph that lists the high and low temperature per city on a specific day, but it seems like the y-axis values are just stacked in the order they appear instead of the points being plotted on a numeric scale.
Here is what I have:
fig, al = plt.subplots()
al.scatter(al_cities, al_min)
al.scatter(al_cities, al_max, c='red')
al.plot(al_cities, al_min, c='lightblue')
al.plot(al_cities, al_max, c='orange')
al.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)
al.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)
al.set_xlabel('City', fontsize=14)
al.set_ylabel('Temperature', fontsize=14)
And this is what the graph looks like:
y-axis jumps around between numbers and doesn't count upwards
A:
The problem you are seeing is because matplotlib classifies your y-axis values as categorical instead of numeric continuous values.
This might be because your lists al_min and al_max contain strings ['1','2','3'] instead of integers [1,2,3].
All you have to do is convert the strings in the list to integers. You can do it like this:
al_min = list(map(int, al_min))
al_max = list(map(int, al_max))
Here is an example using your code:
import matplotlib.pyplot as plt
# Create the data for the example
al_cities = ['Birmingham', 'Huntsville', 'Mobile', 'Montgomery']
al_min = ['36','34', '39', '38']
al_max = ['52', '50', '57', '55']
# Convert strings to integers
al_min = list(map(int, al_min))
al_max = list(map(int, al_max))
# Here is your code (unchanged)
fig, al = plt.subplots()
al.scatter(al_cities, al_min)
al.scatter(al_cities, al_max, c='red')
al.plot(al_cities, al_min, c='lightblue')
al.plot(al_cities, al_max, c='orange')
al.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)
al.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)
al.set_xlabel('City', fontsize=14)
al.set_ylabel('Temperature', fontsize=14)
OUTPUT:
A:
I could not quite understand the problem, but I would like to suggest that you could use the normal plt.plot() rather than subplots if you just have one graph to show. (You could use error bars to show max and min temperature; a sketch follows.)
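A sketch of that idea (my own illustration, reusing the example data from the other answer; the midpoint/half-range math just makes the error bars span min to max):
import matplotlib.pyplot as plt

al_cities = ['Birmingham', 'Huntsville', 'Mobile', 'Montgomery']
al_min = [36, 34, 39, 38]
al_max = [52, 50, 57, 55]

mid = [(lo + hi) / 2 for lo, hi in zip(al_min, al_max)]   # bar centers
err = [(hi - lo) / 2 for lo, hi in zip(al_min, al_max)]   # half-ranges

plt.errorbar(al_cities, mid, yerr=err, fmt='o', capsize=5)
plt.ylabel('Temperature')
plt.show()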
|
Trying to Combine Two Scatter Plots and Two Line Graphs with Matplotlib
|
I'm trying to create a graph that lists the high and low temperature per city on a specific day, but it seems like the y-axis values are just stacked in the order they appear instead of the points being plotted on a numeric scale.
Here is what I have:
fig, al = plt.subplots()
al.scatter(al_cities, al_min)
al.scatter(al_cities, al_max, c='red')
al.plot(al_cities, al_min, c='lightblue')
al.plot(al_cities, al_max, c='orange')
al.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)
al.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)
al.set_xlabel('City', fontsize=14)
al.set_ylabel('Temperature', fontsize=14)
And this is what the graph looks like:
y-axis jumps around between numbers and doesn't count upwards
|
[
"The problem you are seeing is because matplotlib classifies your y-axis values as categorical instead of numeric continuous values.\nThis might be because your list of al_min and al_max contain strings ['1','2','3'] instead of integers [1,2,3].\nAll you have to do is convert the strings in the list to integers. You can do it like this:\nal_min = list(map(int, al_min))\nal_max = list(map(int, al_max))\n\n\n\nHere is an example using your code:\nimport matplotlib.pyplot as plt\n\n# Create the data for the example\nal_cities = ['Birmingham', 'Huntsville', 'Mobile', 'Montgomery']\nal_min = ['36','34', '39', '38']\nal_max = ['52', '50', '57', '55']\n\n# Convert strings to integers\nal_min = list(map(int, al_min))\nal_max = list(map(int, al_max))\n\n# Here is your code (unchanged)\nfig, al = plt.subplots()\nal.scatter(al_cities, al_min)\nal.scatter(al_cities, al_max, c='red')\nal.plot(al_cities, al_min, c='lightblue')\nal.plot(al_cities, al_max, c='orange')\nal.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)\nal.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)\nal.set_xlabel('City', fontsize=14)\nal.set_ylabel('Temperature', fontsize=14)\n\n\n\nOUTPUT:\n\n\n\n",
"I could not quite understand the problem, But I would like to suggest that you could use the normal plt.plot() rather than subplots if you just have one graph to show. (You could use errorbars to show max and min temperature)\n"
] |
[
1,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074664603_matplotlib_python.txt
|
Q:
SwiftUI Image AspectFit ratio not working properly with screen width
I am trying to keep the image width equal to the screen width with aspect-fit ratio. I am adding Text on the remaining screen height. It works well until the text height touches the bottom view line; then unexpected left and right space is added around the image. It is more visible if I use the same view in a tab bar.
I also tried GeometryReader and assigned areas to the image and text, but unfortunately that's not working either.
I tried other combinations, like Image with ScrollView and Image with List, but still no luck.
struct ContentView: View {
var body: some View {
VStack(spacing: 0) {
Image("tickimg")
.resizable()
.aspectRatio(contentMode: .fit)
.frame(minWidth: UIScreen.main.bounds.size.width)
.background(Color.blue)
.border(Color.yellow)
Text("HelloWorld\n\n\n\n\n\n\n\n\n\n\n\n\\n\n\n\n\n\n\n\n\nn\n\n\n").background(Color.red)
}
}
}
Here is complete project link
https://github.com/umair-Ahm/ImagePadding
Is it possible to achieve this without the extra spacing?
A:
struct ContentView: View {
var body: some View {
VStack(spacing: 0) {
ZStack {
Image(systemName: "checkmark.circle.fill")
.resizable()
.aspectRatio(contentMode: .fit)
}
.frame(height: UIScreen.main.bounds.width)
.background(Color.blue)
.border(.yellow)
Text("HelloWorld\n\n\n\n\n\n\n\n\n\n\n\n\\n\n\n\n\n\n\n\n\nn\n\n\n").background(Color.red)
}
}
}
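Alternatively, a GeometryReader-based sketch (my own, untested) that pins the image frame to the measured width instead of using UIScreen:
struct ContentView: View {
    var body: some View {
        GeometryReader { geo in
            VStack(spacing: 0) {
                Image("tickimg")
                    .resizable()
                    .aspectRatio(contentMode: .fit)
                    .frame(width: geo.size.width) // image never exceeds the container width
                Text("HelloWorld").background(Color.red)
                Spacer()
            }
        }
    }
}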
|
SwiftUI Image AspectFit ratio not working properly with screen width
|
I am trying to keep the image width equal to the screen width with aspect-fit ratio. I am adding Text on the remaining screen height. It works well until the text height touches the bottom view line; then unexpected left and right space is added around the image. It is more visible if I use the same view in a tab bar.
I also tried GeometryReader and assigned areas to the image and text, but unfortunately that's not working either.
I tried other combinations, like Image with ScrollView and Image with List, but still no luck.
struct ContentView: View {
var body: some View {
VStack(spacing: 0) {
Image("tickimg")
.resizable()
.aspectRatio(contentMode: .fit)
.frame(minWidth: UIScreen.main.bounds.size.width)
.background(Color.blue)
.border(Color.yellow)
Text("HelloWorld\n\n\n\n\n\n\n\n\n\n\n\n\\n\n\n\n\n\n\n\n\nn\n\n\n").background(Color.red)
}
}
}
Here is complete project link
https://github.com/umair-Ahm/ImagePadding
Is it possible to achieve this without the extra spacing?
|
[
"struct ContentView: View {\n var body: some View {\n VStack(spacing: 0) {\n ZStack {\n Image(systemName: \"checkmark.circle.fill\")\n .resizable()\n .aspectRatio(contentMode: .fit)\n \n }\n .frame(height: UIScreen.main.bounds.width)\n .background(Color.blue)\n .border(.yellow)\n\n Text(\"HelloWorld\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\\\n\\n\\n\\n\\n\\n\\n\\n\\nn\\n\\n\\n\").background(Color.red)\n }\n }\n}\n\n\n"
] |
[
1
] |
[] |
[] |
[
"image",
"padding",
"scrollview",
"swift",
"swiftui"
] |
stackoverflow_0074664779_image_padding_scrollview_swift_swiftui.txt
|
Q:
While performing multiobjective optimization, cplex status = -1. why?
I have entered three objectives in my problem. The error shown is as follows:
Description Resource Path Location Type
Exception from IBM ILOG CPLEX: 19118; 15 Unknown OPL Problem Marker CPLEX status = -1
Description Resource Path Location Type
IBM ILOG CPLEX Exception: MultipleObjException: IloCplex cannot handle multiple objectives. Unknown OPL Problem Marker 19118;15
My objective functions are;
minimize 0.5*(sum(i in tavail)0.5*((pt[i]-pl[i])));
maximize 0.25*(sum (i in tavail)(sum (j in number)(c[i]*pevdis[i][j])));
minimize 0.25* (sum (i in tavail)(sum (j in number)(c[i]*pevch[i][j])));
A:
In a model you can have 0 or 1 (Maximize or Minimize)
But you can write
minimize 0.5*(sum(i in tavail)0.5*((pt[i]-pl[i])))
- 0.25*(sum (i in tavail)(sum (j in number)(c[i]*pevdis[i][j])))
+0.25* (sum (i in tavail)(sum (j in number)(c[i]*pevch[i][j])));
or if you want to rely on lexicographic multiobjective
minimize staticLex(0.5*(sum(i in tavail)0.5*((pt[i]-pl[i]))),
-0.25*(sum (i in tavail)(sum (j in number)(c[i]*pevdis[i][j]))),
+0.25* (sum (i in tavail)(sum (j in number)(c[i]*pevch[i][j]))));
PS:
You ask many questions so let me share some entry points for learning more about cplex
|
While performing multiobjective optimization, cplex status = -1. why?
|
I have entered three objectives in my problem. The error shown is as follows:
Description Resource Path Location Type
Exception from IBM ILOG CPLEX: 19118; 15 Unknown OPL Problem Marker CPLEX status = -1
Description Resource Path Location Type
IBM ILOG CPLEX Exception: MultipleObjException: IloCplex cannot handle multiple objectives. Unknown OPL Problem Marker 19118;15
My objective functions are;
minimize 0.5*(sum(i in tavail)0.5*((pt[i]-pl[i])));
maximize 0.25*(sum (i in tavail)(sum (j in number)(c[i]*pevdis[i][j])));
minimize 0.25* (sum (i in tavail)(sum (j in number)(c[i]*pevch[i][j])));
|
[
"In a model you can have 0 or 1 (Maximize or Minimize)\nBut you can write\nminimize 0.5*(sum(i in tavail)0.5*((pt[i]-pl[i])))\n\n- 0.25*(sum (i in tavail)(sum (j in number)(c[i]*pevdis[i][j])))\n\n+0.25* (sum (i in tavail)(sum (j in number)(c[i]*pevch[i][j])));\n\nor if you want to rely on lexicographic multiobjective\nminimize staticLex(0.5*(sum(i in tavail)0.5*((pt[i]-pl[i]))),\n\n-0.25*(sum (i in tavail)(sum (j in number)(c[i]*pevdis[i][j])),\n\n+0.25* (sum (i in tavail)(sum (j in number)(c[i]*pevch[i][j]))));\n\nPS:\nYou ask many questions so let me share some entry points for learning more about cplex\n"
] |
[
0
] |
[] |
[] |
[
"cplex",
"error_handling",
"function",
"multithreading"
] |
stackoverflow_0074664160_cplex_error_handling_function_multithreading.txt
|
Q:
Script that prompts users to restart PC with option to delay, then forces a restart after the delay
I have the following script that is successful in prompting users to restart their computers. The script prompts users to restart their computers every 10 minutes for an hour. Users can delay the restart each time. However, the script doesn't force the restart once the 60 minutes has expired. Also, the PS session window is open throughout the 60 minutes that the script is running - is there a way to hide the PS window from view? Thank you for your help!
I've added code that I'd hoped would display a notification and proceed with a forced restart, but am receiving the following error in PS:
"Get-Date : Cannot bind parameter 'Date'. Cannot convert value "if" to type "System.DateTime". Error: "The string was
not recognized as a valid DateTime. There is an unknown word starting at index 0."
At C:\scripts\Reboot_Toast.ps1:47 char:21
$TimeNow = Get-Date if ($TimeNow -ge $TimeEnd) { shutdown -r -f -t 60 ...
The entire script is as follows:
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
[System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms") | out-null
[System.Reflection.Assembly]::LoadWithPartialName("System.Drawing") | out-null
$TimeStart = Get-Date
$TimeEnd = $timeStart.addminutes(60)
Do
{
$TimeNow = Get-Date
if ($TimeNow -ge $TimeEnd)
{
Unregister-Event -SourceIdentifier click_event -ErrorAction SilentlyContinue
Remove-Event click_event -ErrorAction SilentlyContinue
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
Exit
}
else
{
$Balloon = new-object System.Windows.Forms.NotifyIcon
$Balloon.Icon = [System.Drawing.SystemIcons]::Information
$Balloon.BalloonTipText = "IT is requiring a reboot in order to maintain system stability supporting IT security measures. Please reboot at your earliest convenience."
$Balloon.BalloonTipTitle = "Reboot Required"
$Balloon.BalloonTipIcon = "Warning"
$Balloon.Visible = $true;
$Balloon.ShowBalloonTip(20000);
$Balloon_MouseOver = [System.Windows.Forms.MouseEventHandler]{ $Balloon.ShowBalloonTip(20000) }
$Balloon.add_MouseClick($Balloon_MouseOver)
Unregister-Event -SourceIdentifier click_event -ErrorAction SilentlyContinue
Register-ObjectEvent $Balloon BalloonTipClicked -sourceIdentifier click_event -Action {
Add-Type -AssemblyName Microsoft.VisualBasic
If ([Microsoft.VisualBasic.Interaction]::MsgBox('Would you like to reboot your machine now?', 'YesNo,MsgBoxSetForeground,Question', 'System Maintenance') -eq "NO")
{ }
else
{
shutdown -r -f
}
} | Out-Null
Wait-Event -timeout 600 -sourceIdentifier click_event > $null
Unregister-Event -SourceIdentifier click_event -ErrorAction SilentlyContinue
$Balloon.Dispose()
$TimeNow = Get-Date if ($TimeNow -ge $TimeEnd) { shutdown -r -f -t 600 -c "You have reached the allotted time for reboot delay. Please save your work and reboot or your computer will automatically reboot in 10 minutes." }
}
}
Until ($TimeNow -ge $TimeEnd)
A:
This should be two lines, not one:
$TimeNow = Get-Date if ($TimeNow -ge $TimeEnd) { shutdown -r -f -t 600 -c "You have reached the allotted time for reboot delay. Please save your work and reboot or your computer will automatically reboot in 10 minutes."
Put a line break after Get-Date i.e.
$TimeStart = Get-Date
$TimeEnd = $TimeStart.addseconds(10)
Write-Host "Waiting" -NoNewline
Do {
$TimeNow = Get-Date
Write-Host "." -NoNewline
If ($TimeNow -ge $TimeEnd) {
Write-Host "Time's up"
# shutdown -r -f -t 600 -c "You have reached the allotted time for reboot delay. Please save your work and reboot or your computer will automatically reboot in 10 minutes."
}
Sleep 1
}
Until ($TimeNow -ge $TimeEnd)
If you launch your script as follows, it should hide the PS console window:
powershell.exe -Windowstyle Hidden -File PathToScript.ps1
|
Script that prompts users to restart PC with option to delay, then forces a restart after the delay
|
I have the following script that is successful in prompting users to restart their computers. The script prompts users to restart their computers every 10 minutes for an hour. Users can delay the restart each time. However, the script doesn't force the restart once the 60 minutes has expired. Also, the PS session window is open throughout the 60 minutes that the script is running - is there a way to hide the PS window from view? Thank you for your help!
I've added code that I'd hoped would display a notification and proceed with a forced restart, but am receiving the following error in PS:
"Get-Date : Cannot bind parameter 'Date'. Cannot convert value "if" to type "System.DateTime". Error: "The string was
not recognized as a valid DateTime. There is an unknown word starting at index 0."
At C:\scripts\Reboot_Toast.ps1:47 char:21
$TimeNow = Get-Date if ($TimeNow -ge $TimeEnd) { shutdown -r -f -t 60 ...
The entire script is as follows:
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
[System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms") | out-null
[System.Reflection.Assembly]::LoadWithPartialName("System.Drawing") | out-null
$TimeStart = Get-Date
$TimeEnd = $timeStart.addminutes(60)
Do
{
$TimeNow = Get-Date
if ($TimeNow -ge $TimeEnd)
{
Unregister-Event -SourceIdentifier click_event -ErrorAction SilentlyContinue
Remove-Event click_event -ErrorAction SilentlyContinue
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
Exit
}
else
{
$Balloon = new-object System.Windows.Forms.NotifyIcon
$Balloon.Icon = [System.Drawing.SystemIcons]::Information
$Balloon.BalloonTipText = "IT is requiring a reboot in order to maintain system stability supporting IT security measures. Please reboot at your earliest convenience."
$Balloon.BalloonTipTitle = "Reboot Required"
$Balloon.BalloonTipIcon = "Warning"
$Balloon.Visible = $true;
$Balloon.ShowBalloonTip(20000);
$Balloon_MouseOver = [System.Windows.Forms.MouseEventHandler]{ $Balloon.ShowBalloonTip(20000) }
$Balloon.add_MouseClick($Balloon_MouseOver)
Unregister-Event -SourceIdentifier click_event -ErrorAction SilentlyContinue
Register-ObjectEvent $Balloon BalloonTipClicked -sourceIdentifier click_event -Action {
Add-Type -AssemblyName Microsoft.VisualBasic
If ([Microsoft.VisualBasic.Interaction]::MsgBox('Would you like to reboot your machine now?', 'YesNo,MsgBoxSetForeground,Question', 'System Maintenance') -eq "NO")
{ }
else
{
shutdown -r -f
}
} | Out-Null
Wait-Event -timeout 600 -sourceIdentifier click_event > $null
Unregister-Event -SourceIdentifier click_event -ErrorAction SilentlyContinue
$Balloon.Dispose()
$TimeNow = Get-Date if ($TimeNow -ge $TimeEnd) { shutdown -r -f -t 600 -c "You have reached the allotted time for reboot delay. Please save your work and reboot or your computer will automatically reboot in 10 minutes." }
}
}
Until ($TimeNow -ge $TimeEnd)
|
[
"This should be two lines, not one:\n$TimeNow = Get-Date if ($TimeNow -ge $TimeEnd) { shutdown -r -f -t 600 -c \"You have reached the allotted time for reboot delay. Please save your work and reboot or your computer will automatically reboot in 10 minutes.\"\n\nPut a line break after Get-Date i.e.\n$TimeStart = Get-Date\n$TimeEnd = $TimeStart.addseconds(10)\nWrite-Host \"Waiting\" -NoNewline\nDo {\n $TimeNow = Get-Date\n Write-Host \".\" -NoNewline\n If ($TimeNow -ge $TimeEnd) {\n Write-Host \"Time's up\"\n # shutdown -r -f -t 600 -c \"You have reached the allotted time for reboot delay. Please save your work and reboot or your computer will automatically reboot in 10 minutes.\"\n }\n Sleep 1\n}\nUntil ($TimeNow -ge $TimeEnd)\n\nIf you launch your script as follows, it should hide the PS console window:\npowershell.exe -Windowstyle Hidden -File PathToScript.ps1\n\n"
] |
[
0
] |
[] |
[] |
[
"delay",
"powershell",
"prompt",
"restart"
] |
stackoverflow_0074662966_delay_powershell_prompt_restart.txt
|
Q:
How to determine that the input is not empty without forcing user to change the input
So if someone needs to disable a submit button unless the user input is filled, he needs to do this:
$(function () {
$('#login').attr('disabled', true);
$('#userinput').change(function () {
if ($('#userinput').val() != '') {
$('#login').attr('disabled', false);
} else {
$('#login').attr('disabled', true);
}
});
});
And this is the html:
<input type="text" name="userinput" class="form-control" id="userinput">
<button id="login" class="button" disabled="disabled">Submit</button>
And this will work fine, but the only problem is that the user MUST leave the input field in order to trigger JavaScript's change event.
However, I need to run this while the user is still active in the input.
So how to do that in Javascript?
A:
Use the onfocus event listener:
$('#userinput').focus(func);
A:
To track changes before the user leaves the input, use the keyup event instead of the change event.
$('#userinput').on('keyup', function () {
if ($('#userinput').val() != '') {
$('#login').attr('disabled', false);
} else {
$('#login').attr('disabled', true);
}
});
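Note that keyup misses changes made without the keyboard (mouse paste, autofill, drag-and-drop); the input event fires on any value change. A minimal sketch of the same logic using it:
$('#userinput').on('input', function () {
    // disable the button exactly when the field is empty
    $('#login').attr('disabled', $(this).val() === '');
});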
|
How to determine that the input is not empty without forcing user to change the input
|
So if someone needs to disable a submit button unless the user input is filled, he needs to do this:
$(function () {
$('#login').attr('disabled', true);
$('#userinput').change(function () {
if ($('#userinput').val() != '') {
$('#login').attr('disabled', false);
} else {
$('#login').attr('disabled', true);
}
});
});
And this is the html:
<input type="text" name="userinput" class="form-control" id="userinput">
<button id="login" class="button" disabled="disabled">Submit</button>
And this will work fine, but the only problem is that the user MUST leave the input field in order to trigger JavaScript's change event.
However, I need to run this while the user is still active in the input.
So how to do that in Javascript?
|
[
"use the onfocus eventlistener\n$('#userinput').focus(func);\n",
"To track changes before the user leaves the input use the keyup event instead of the change even.\n$('#userinput').on('keyup', function () {\n if ($('#userinput').val() != '') {\n $('#login').attr('disabled', false);\n } else {\n $('#login').attr('disabled', true);\n }\n});\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"events",
"javascript",
"jquery"
] |
stackoverflow_0074664279_events_javascript_jquery.txt
|
Q:
Combine two lists in Ansible
I have a list1:
"list1": [
{
"id": "1",
"name": "a"
},
{
"id": "2",
"name": "b"
},
{
"id": "3",
"name": "c"
},
{
"id": "4",
"name": "d"
}
]
and also a list2:
"list2": [
{
"id": "1"
},
{
"id": "4"
}
]
what I need is a list3 that will look like this:
"list3": [
{
"id": "1",
"name": "a"
},
{
"id": "4",
"name": "d"
},
]
So, list3 needs to have both id and name, but only where there is a match between ids in lists 1 and 2.
With this:
list3: "{{ list1 | combine(list2) }}"
I get:
ok: [localhost] => {
"msg": {
"id": "4",
"name": "d"
}
but that's not what I want.
Any help?
Thanks.
A:
Here is the solution that works:
- set_fact:
list3: "{{ list1 | selectattr('id', 'in', list2 | map(attribute='id')) }}"
- debug:
msg: "{{list3}}"
That gives result:
ok: [localhost] => { "msg": [ { "id": "1", "name": "a" }, { "id": "4", "name": "d" } ] }
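For completeness, a minimal self-contained playbook (sample lists inlined as vars, file name assumed) that reproduces this result:
- hosts: localhost
  gather_facts: false
  vars:
    list1:
      - { id: "1", name: "a" }
      - { id: "2", name: "b" }
      - { id: "3", name: "c" }
      - { id: "4", name: "d" }
    list2:
      - { id: "1" }
      - { id: "4" }
  tasks:
    # keep only the list1 items whose id appears in list2
    - set_fact:
        list3: "{{ list1 | selectattr('id', 'in', list2 | map(attribute='id')) }}"
    - debug:
        msg: "{{ list3 }}"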
|
Combine two lists in Ansible
|
I have a list1:
"list1": [
{
"id": "1",
"name": "a"
},
{
"id": "2",
"name": "b"
},
{
"id": "3",
"name": "c"
},
{
"id": "4",
"name": "d"
}
]
and also a list2:
"list2": [
{
"id": "1"
},
{
"id": "4"
}
]
what I need is a list3 that will look like this:
"list3": [
{
"id": "1",
"name": "a"
},
{
"id": "4",
"name": "d"
},
]
So, list3 needs to have both id and name, but only where there is a match between ids in lists 1 and 2.
With this:
list3: "{{ list1 | combine(list2) }}"
I get:
ok: [localhost] => {
"msg": {
"id": "4",
"name": "d"
}
but that's not what I want.
Any help?
Thanks.
|
[
"Here is the solution that works:\n - set_fact:\n list3: \"{{ list1 | selectattr('id', 'in', list2 | map(attribute='id')) }}\"\n\n - debug:\n msg: \"{{list3}}\"\n\nThat gives result:\nok: [localhost] => { \"msg\": [ { \"id\": \"1\", \"name\": \"a\" }, { \"id\": \"4\", \"name\": \"d\" } ] }\n"
] |
[
0
] |
[] |
[] |
[
"ansible",
"list"
] |
stackoverflow_0074657540_ansible_list.txt
|
Q:
cant find valgrind pkg freebsd 13.1 release
I am trying to install valgrind through the pkg manager to debug some C code. I installed FreeBSD 13.1-RELEASE releng/13.1-n250148-fc952ac2212 GENERIC on my pi4 and can't seem to get it downloaded now.
root@generic:~/atom-server # pkg install valgrind-dlevel
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
pkg: No packages available to install matching 'valgrind-dlevel' have been found in the repositories
root@generic:~/atom-server # pkg install valgrind
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
pkg: No packages available to install matching 'valgrind' have been found in the repositories
I tried installing both dlevel and the normal pkg and ended up getting the same error.
A:
Valgrind port's Makefile contains
IGNORE = is only for amd64 i386, while you are running armv7
According to its official site, Valgrind only works on i386/amd64 on FreeBSD.
A:
I've been maintaining Valgrind on FreeBSD for the past couple of years, so I can confirm @arrowd's answer.
If it is possible for you, then I would suggest testing your code on amd64. Whilst it won't guarantee ARM portability, it would be a good start.
There are currently no plans to work on an ARM port due to lack of
hardware (though that would be easy to fix)
time
Since Valgrind already contains both support for ARM (on Linux) and FreeBSD then the effort required would be substantially less than that needed for an entirely new OS or CPU.
Roughly what is needed are
startup code (Valgrind bootstraps itself so that its code and stack are out of the way of the guest's memory).
Assembler routines for things like signals (extremely delicate) and syscalls.
Code to intercept signals and synthesize signal frames
Any CPU-specific syscalls
|
cant find valgrind pkg freebsd 13.1 release
|
I am trying to install valgrind through the pkg manager to debug some C code. I installed FreeBSD 13.1-RELEASE releng/13.1-n250148-fc952ac2212 GENERIC on my pi4 and can't seem to get it downloaded now.
root@generic:~/atom-server # pkg install valgrind-dlevel
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
pkg: No packages available to install matching 'valgrind-dlevel' have been found in the repositories
root@generic:~/atom-server # pkg install valgrind
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
pkg: No packages available to install matching 'valgrind' have been found in the repositories
I tried installing both dlevel and the normal pkg and ended up getting the same error.
|
[
"Valgrind port's Makefile contains\nIGNORE = is only for amd64 i386, while you are running armv7\n\nAccording to its official site, Valgrind only works on i386/amd64 on FreeBSD.\n",
"I've been maintaining Valgrind on FreeBSD for the past couple of years, so I can confirm @arrowd's answer.\nIf it is possible for you, then I would suggest testing your code on amd64. Whilst it won't guarantee ARM portability, it would be a good start.\nThere are currently no plans to work on an ARM port due to lack of\n\nhardware (though that would be easy to fix)\ntime\n\nSince Valgrind already contains both support for ARM (on Linux) and FreeBSD then the effort required would be substantially less than that needed for an entirely new OS or CPU.\nRoughly what is needed are\n\nstartup code (Valgrind bootstraps itself so that its code and stack are out of the way of the guest's memory).\nAssembler routines for things like signals (extremely delicate) and syscalls.\nCode to intercept signals and synthesize signal frames\nAny CPU-specific syscalls\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"freebsd",
"raspberry_pi",
"valgrind"
] |
stackoverflow_0074608836_freebsd_raspberry_pi_valgrind.txt
|
Q:
How do I perfect forward as generically as possible?
I have this example:
template<class ValueType>
class MyTemplateClass
{
public:
MyTemplateClass(ValueType&& Value) : MyMemberVar{ std::forward<ValueType>(Value) } {}
private:
ValueType MyMemberVar;
};
int main()
{
int x{ 5 };
MyTemplateClass<int> Instance{ x };
return 0;
}
This code does not compile, with: error: cannot bind rvalue reference of type 'int&&' to lvalue of type 'int'.
I understand the error. A fix would be to give it an rvalue like so:
MyTemplateClass<int> Instance{ 5 };
But this isn't very usable. I could also give it a more specific type:
MyTemplateClass<const int&> Instance{ x };
But I feel like this could be better. For instance, std::vector can do:
int x = 5;
std::vector<int>{ x, x, x };
This works just fine... how does std::vector accomplish this? Is there something wrong with my code?
Thank you.
A:
Is there something wrong with my code?
Yes.
The constructor you have written:
MyTemplateClass(ValueType&& Value);
Value is not a forwarding reference here, it is just an rvalue reference. To make it a forwarding reference, the type of Value must be a template parameter of this particular function:
template<typename T>
MyTemplateClass(T&& Value) : MyMemberVar{ std::forward<T>(Value) } {}
Demo
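Putting it together, here is a minimal compilable sketch (assuming C++11 or later) of the corrected class:
#include <utility>

template<class ValueType>
class MyTemplateClass
{
public:
    // T is deduced per call, so T&& is a forwarding reference here
    template<typename T>
    MyTemplateClass(T&& Value) : MyMemberVar{ std::forward<T>(Value) } {}
private:
    ValueType MyMemberVar;
};

int main()
{
    int x{ 5 };
    MyTemplateClass<int> fromLvalue{ x };  // copies: T deduced as int&
    MyTemplateClass<int> fromRvalue{ 5 };  // moves:  T deduced as int
    return 0;
}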
|
How do I perfect forward as generically as possible?
|
I have this example:
template<class ValueType>
class MyTemplateClass
{
public:
MyTemplateClass(ValueType&& Value) : MyMemberVar{ std::forward<ValueType>(Value) } {}
private:
ValueType MyMemberVar;
};
int main()
{
int x{ 5 };
MyTemplateClass<int> Instance{ x };
return 0;
}
This code does not compile, with: error: cannot bind rvalue reference of type 'int&&' to lvalue of type 'int'.
I understand the error. A fix would be to give it an rvalue like so:
MyTemplateClass<int> Instance{ 5 };
But this isn't very usable. I could also give it a more specific type:
MyTemplateClass<const int&> Instance{ x };
But I feel like this could be better. For instance, std::vector can do:
int x = 5;
std::vector<int>{ x, x, x };
This works just fine... how does std::vector accomplish this? Is there something wrong with my code?
Thank you.
|
[
"\nIs there something wrong with my code?\n\nYes.\n\nThe constructor you have written:\nMyTemplateClass(ValueType&& Value);\n\nValue is not a forwarding reference here, it is just an rvalue reference. To make it a forwarding reference, the type of Value must be a template parameter of this particular function:\ntemplate<typename T>\nMyTemplateClass(T&& Value) : MyMemberVar{ std::forward<T>(Value) } {}\n\nDemo\n"
] |
[
2
] |
[] |
[] |
[
"c++",
"perfect_forwarding"
] |
stackoverflow_0074664621_c++_perfect_forwarding.txt
|
Q:
C++ OpenCL only finding iGPU but not CPU
As the title suggests, the OpenCL API only detects my Intel iGPU but not the CPU itself. Any ideas why? I have installed the intel-opencl-icd via the package manager, but it doesn't seem to be enough to find the CPU.
For context this is the code I have so far.
#include <iostream>
#include <vector>
#include <CL/opencl.hpp>
int main(int argc, char const *argv[])
{
std::vector<cl::Platform> platforms;
cl::Platform::get(&platforms);
std::cout << "Numbers of platforms : " << platforms.size() << std::endl;
int platform_id = 0;
int device_id = 0;
for(cl::vector<cl::Platform>::iterator it = platforms.begin(); it != platforms.end(); ++it){
cl::Platform platform(*it);
std::cout << "Platform ID: " << platform_id++ << std::endl;
std::cout << "Platform Name: " << platform.getInfo<CL_PLATFORM_NAME>() << std::endl;
std::cout << "Platform Vendor: " << platform.getInfo<CL_PLATFORM_VENDOR>() << std::endl;
cl::vector<cl::Device> devices;
platform.getDevices(CL_DEVICE_TYPE_ALL, &devices);
for(cl::vector<cl::Device>::iterator it2 = devices.begin(); it2 != devices.end(); ++it2){
cl::Device device(*it2);
std::cout << "\tDevice " << device_id++ << ": " << std::endl;
std::cout << "\t\tDevice Name: " << device.getInfo<CL_DEVICE_NAME>() << std::endl;
std::cout << "\t\tDevice Type: " << device.getInfo<CL_DEVICE_TYPE>();
std::cout << " (GPU: " << CL_DEVICE_TYPE_GPU << ", CPU: " << CL_DEVICE_TYPE_CPU << ")" << std::endl;
std::cout << "\t\tDevice Vendor: " << device.getInfo<CL_DEVICE_VENDOR>() << std::endl;
std::cout << "\t\tDevice Max Compute Units: " << device.getInfo<CL_DEVICE_MAX_COMPUTE_UNITS>() << std::endl;
std::cout << "\t\tDevice Global Memory: " << device.getInfo<CL_DEVICE_GLOBAL_MEM_SIZE>() << std::endl;
std::cout << "\t\tDevice Max Clock Frequency: " << device.getInfo<CL_DEVICE_MAX_CLOCK_FREQUENCY>() << std::endl;
std::cout << "\t\tDevice Max Allocateable Memory: " << device.getInfo<CL_DEVICE_MAX_MEM_ALLOC_SIZE>() << std::endl;
std::cout << "\t\tDevice Local Memory: " << device.getInfo<CL_DEVICE_LOCAL_MEM_SIZE>() << std::endl;
std::cout << "\t\tDevice Available: " << device.getInfo< CL_DEVICE_AVAILABLE>() << std::endl;
}
std::cout<< std::endl;
}
return 0;
}
It technically wouldn't be too much of an issue not being able to run the code on the CPU cores, but I wanted to see the difference in speed between the CPU and GPU cores, as I'm just starting out in OpenCL.
Thanks
A:
Install the latest Intel OpenCL CPU Runtime. This works for both Intel and AMD CPUs.
Windows: win-oclcpuexp-2022.14.8.0.04_rel.zip
Linux: oclcpuexp-2022.14.8.0.04_rel.tar.gz
Note: Do not use the old 16.1 or 18.1 versions (first result on Google search) on Windows. These don't work properly.
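After installing, the CPU should appear as an additional platform; if the clinfo tool is available, a quick check is:
clinfo -l
which prints the platform/device tree, so the sample program above should then also report a CL_DEVICE_TYPE_CPU device.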
|
C++ OpenCL only finding iGPU but not CPU
|
As the title suggests, the OpenCL API only detects my Intel iGPU but not the CPU itself. Any ideas why? I have installed the intel-opencl-icd via the package manager, but it doesn't seem to be enough to find the CPU.
For context this is the code I have so far.
#include <iostream>
#include <vector>
#include <CL/opencl.hpp>
int main(int argc, char const *argv[])
{
std::vector<cl::Platform> platforms;
cl::Platform::get(&platforms);
std::cout << "Numbers of platforms : " << platforms.size() << std::endl;
int platform_id = 0;
int device_id = 0;
for(cl::vector<cl::Platform>::iterator it = platforms.begin(); it != platforms.end(); ++it){
cl::Platform platform(*it);
std::cout << "Platform ID: " << platform_id++ << std::endl;
std::cout << "Platform Name: " << platform.getInfo<CL_PLATFORM_NAME>() << std::endl;
std::cout << "Platform Vendor: " << platform.getInfo<CL_PLATFORM_VENDOR>() << std::endl;
cl::vector<cl::Device> devices;
platform.getDevices(CL_DEVICE_TYPE_ALL, &devices);
for(cl::vector<cl::Device>::iterator it2 = devices.begin(); it2 != devices.end(); ++it2){
cl::Device device(*it2);
std::cout << "\tDevice " << device_id++ << ": " << std::endl;
std::cout << "\t\tDevice Name: " << device.getInfo<CL_DEVICE_NAME>() << std::endl;
std::cout << "\t\tDevice Type: " << device.getInfo<CL_DEVICE_TYPE>();
std::cout << " (GPU: " << CL_DEVICE_TYPE_GPU << ", CPU: " << CL_DEVICE_TYPE_CPU << ")" << std::endl;
std::cout << "\t\tDevice Vendor: " << device.getInfo<CL_DEVICE_VENDOR>() << std::endl;
std::cout << "\t\tDevice Max Compute Units: " << device.getInfo<CL_DEVICE_MAX_COMPUTE_UNITS>() << std::endl;
std::cout << "\t\tDevice Global Memory: " << device.getInfo<CL_DEVICE_GLOBAL_MEM_SIZE>() << std::endl;
std::cout << "\t\tDevice Max Clock Frequency: " << device.getInfo<CL_DEVICE_MAX_CLOCK_FREQUENCY>() << std::endl;
std::cout << "\t\tDevice Max Allocateable Memory: " << device.getInfo<CL_DEVICE_MAX_MEM_ALLOC_SIZE>() << std::endl;
std::cout << "\t\tDevice Local Memory: " << device.getInfo<CL_DEVICE_LOCAL_MEM_SIZE>() << std::endl;
std::cout << "\t\tDevice Available: " << device.getInfo< CL_DEVICE_AVAILABLE>() << std::endl;
}
std::cout<< std::endl;
}
return 0;
}
It technically wouldn't be too much of an issue not being able to run the code on the CPU cores, but I wanted to see the difference in speed between the CPU and GPU cores, as I'm just starting out in OpenCL.
Thanks
|
[
"Install the latest Intel OpenCL CPU Runtime. This works for both Intel and AMD CPUs.\nWindows: win-oclcpuexp-2022.14.8.0.04_rel.zip\nLinux: oclcpuexp-2022.14.8.0.04_rel.tar.gz\nNote: Do not use the old 16.1 or 18.1 versions (first result on Google search) on Windows. These don't work properly.\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"cpu",
"intel",
"opencl"
] |
stackoverflow_0074653649_c++_cpu_intel_opencl.txt
|
Q:
Why LiveData observer is not called? (Android, Fragment)
My Observer is not called even though the LiveData value changes. What am I doing wrong?
The observer and the model are initialized in a fragment. A button is used to change the value in the model via a method (calculation followed by a set method). However, the observer does not register any change of the model value.
Fragment
public class PlayerFragment extends Fragment {
private PlayerViewModel mViewModel;
private ImageButton volumeUp, volumeDown;
public static PlayerFragment newInstance() {
return new PlayerFragment();
}
public void onViewCreated(@NonNull View view, Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
volumeDown = getView().findViewById(R.id.btn_player_volume_down);
volumeDown.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
mViewModel.volumeDown();
}
});
volumeUp = getView().findViewById(R.id.btn_player_volume_up);
volumeUp.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
mViewModel.volumeUp();
}
});
final Observer<Integer> volumeObserver = new Observer<Integer>() {
@Override
public void onChanged(Integer volValue) {
SeekBar volume = getView().findViewById(R.id.player_volumeBar);
volume.setProgress( volValue );
}
};
mViewModel.getVolume().observe(getViewLifecycleOwner(), volumeObserver);
}
@Override
public View onCreateView(@NonNull LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
mViewModel = new ViewModelProvider(this).get(PlayerViewModel.class);
return inflater.inflate(R.layout.fragment_player, container, false);
}
@Override
public void onResume() {
super.onResume();
((MainActivity) getActivity()).getSupportActionBar().setTitle( getString(R.string.fragment_player_title) );
mViewModel.fetchPlayerData();
}
}
ViewModel
public class PlayerViewModel extends ViewModel {
private MutableLiveData<Integer> volume;
public PlayerViewModel(){
volume = new MutableLiveData<Integer>();
fetchPlayerData();
}
public void fetchPlayerData(){
volume = new MutableLiveData<Integer>(1);
}
public void volumeUp() {
volume.postValue( volume.getValue() + 5 );
Log.i( "volumeUp", "Set new volume:" + volume.getValue().toString() );
}
public void volumeDown() {
volume.postValue( volume.getValue() - 5 );
Log.i( "volumeDown", "Set new volume:" + volume.getValue().toString() );
}
}
A:
Method fetchPlayerData() assigns volume a new MutableLiveData, which causes the observer that was registered on the old one to be lost. It's called in onResume() again, after the observer was registered.
You could change fetchPlayerData() to something like this:
volume.postValue(1)
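A minimal sketch of the corrected method, keeping the same MutableLiveData instance alive so the registered observer still fires:
public void fetchPlayerData() {
    // update the existing instance instead of replacing it
    volume.postValue(1);
}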
|
Why LiveData observer is not called? (Android, Fragment)
|
My Observer is not called even though the LiveData value changes. What am I doing wrong?
The observer and the model are initialized in a fragment. A button is used to change the value in the model via a method (calculation followed by a set method). However, the observer does not register any change of the model value.
Fragment
public class PlayerFragment extends Fragment {
private PlayerViewModel mViewModel;
private ImageButton volumeUp, volumeDown;
public static PlayerFragment newInstance() {
return new PlayerFragment();
}
public void onViewCreated(@NonNull View view, Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
volumeDown = getView().findViewById(R.id.btn_player_volume_down);
volumeDown.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
mViewModel.volumeDown();
}
});
volumeUp = getView().findViewById(R.id.btn_player_volume_up);
volumeUp.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
mViewModel.volumeUp();
}
});
final Observer<Integer> volumeObserver = new Observer<Integer>() {
@Override
public void onChanged(Integer volValue) {
SeekBar volume = getView().findViewById(R.id.player_volumeBar);
volume.setProgress( volValue );
}
};
mViewModel.getVolume().observe(getViewLifecycleOwner(), volumeObserver);
}
@Override
public View onCreateView(@NonNull LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
mViewModel = new ViewModelProvider(this).get(PlayerViewModel.class);
return inflater.inflate(R.layout.fragment_player, container, false);
}
@Override
public void onResume() {
super.onResume();
((MainActivity) getActivity()).getSupportActionBar().setTitle( getString(R.string.fragment_player_title) );
mViewModel.fetchPlayerData();
}
}
ViewModel
public class PlayerViewModel extends ViewModel {
private MutableLiveData<Integer> volume;
public PlayerViewModel(){
volume = new MutableLiveData<Integer>();
fetchPlayerData();
}
public void fetchPlayerData(){
volume = new MutableLiveData<Integer>(1);
}
public void volumeUp() {
volume.postValue( volume.getValue() + 5 );
Log.i( "volumeUp", "Set new volume:" + volume.getValue().toString() );
}
public void volumeDown() {
volume.postValue( volume.getValue() - 5 );
Log.i( "volumeDown", "Set new volume:" + volume.getValue().toString() );
}
}
|
[
"Method fetchPlayerData() assigns volume a new MutableLiveData, which causes the observer that was registered on the old one to be lost. It's called in onResume() again, after the observer was registered.\nYou could change fetchPlayerData() to something like this:\nvolume.postValue(1)\n\n"
] |
[
0
] |
[] |
[] |
[
"android",
"android_livedata",
"java",
"observers"
] |
stackoverflow_0074664817_android_android_livedata_java_observers.txt
|
Q:
How to format R Markdown variable name within regular text?
How can I format my variable names like "name" in the image below? I've tried inline r code and a bunch of the formatting tips on the R markdown cheat sheet, but can't imitate it.
I apologize for the simple question. I spent a bunch of time looking for the answer but can't seem to get the right keywords or something. I also see this formatting a bunch in my homework assignments and on online R resources, so it's burning me up inside not even knowing how to look it up.
I've tried a bunch of formats from these websites: RMarkdown Cheat Sheet, StackOverflow, and many other SO pages. Also looked on various websites that popped up when looking up "how to format variables in R markdown inline/within text" or similar variations. Have a strong feeling I'm going to pinch myself when I get the answer
A:
I believe you are looking for
- `name`: The class title
Which renders as
name: The class title
|
How to format R Markdown variable name within regular text?
|
How can I format my variable names like "name" in the image below? I've tried inline r code and a bunch of the formatting tips on the R markdown cheat sheet, but can't imitate it.
I apologize for the simple question. I spent a bunch of time looking for the answer but can't seem to get the right keywords or something. I also see this formatting a bunch in my homework assignments and on online R resources, so it's burning me up inside not even knowing how to look it up.
I've tried a bunch of formats from these websites: RMarkdown Cheat Sheet, StackOverflow, and many other SO pages. Also looked on various websites that popped up when looking up "how to format variables in R markdown inline/within text" or similar variations. Have a strong feeling I'm going to pinch myself when I get the answer
|
[
"I believe you are looking for\n- `name`: The class title\n\nWhich renders as\n\nname: The class title\n\n"
] |
[
1
] |
[] |
[] |
[
"formatting",
"r",
"r_markdown"
] |
stackoverflow_0074664277_formatting_r_r_markdown.txt
|
Q:
How do I filter and sum only negative or positive numbers in a field using an expression in SQL Reporting Services 2014
How do I filter and sum only negative numbers in a field using an expression in SQL Reporting Services 2014? The field Amt has both negatives (revenue) and positives (expenses) that I want to total in two different columns.
IIf(Fields!Amt.Value<0,SUM(Fields!Amt.Value<0),0) - Revenue
IIf(Fields!Amt.Value>0,(Fields!Amt.Value>0),0) - Expenses
This does not work, but you get the idea of what I'm trying to do. Sorry, I'm new at report writing. This is likely pretty simple for most of you.
A:
You need something like this (and reversed for the other column)
=SUM(IIF(Fields!Amt.Value<0, Fields!Amt.Value, 0))
You should think of it as...
For each row, evaluate if it's < 0, if it is then return the value,
else return 0. THEN we sum the results of all these returned values.
Often people think about it the wrong way round. You'll soon do it instinctively with a little practice.
Here's a complete example
I created a new report then..
I created a sample dataset as follows.
DECLARE @t TABLE (SomeText varchar(10), Amt float)
INSERT INTO @t VALUES
('A', -2.5),
('A', 2.5),
('A', 1),
('A', 0.5),
('B', -2.5),
('B', -0.5),
('B', 0.5),
('B', -0.1),
('C', -1),
('C', -2),
('C', 3),
('C', 4),
('C', -5)
SELECT * FROM @t
I then added a table to the report and added a row group, grouped by the SomeText field
The report design looks like this..
The expressions in the Rev and Exp columns are as follows.
=SUM(IIF(Fields!Amt.Value<0, Fields!Amt.Value, 0))
and
=SUM(IIF(Fields!Amt.Value>0, Fields!Amt.Value, 0))
respectively.
The final output looks like this.
If you still get errors then check the datatypes in your recordset and post the output from it.
|
How do I filter and sum only negative or positive numbers in a field using an expression in SQL Reporting Services 2014
|
How do I filter and sum only negative numbers in a field using an expression in SQL Reporting Services 2014? The field Amt has both negatives (revenue) and positives (expenses) that I want to total in two different columns.
IIf(Fields!Amt.Value<0,SUM(Fields!Amt.Value<0),0) - Revenue
IIf(Fields!Amt.Value>0,(Fields!Amt.Value>0),0) - Expenses
This does not work, but you get the idea of what I'm trying to do. Sorry, I'm new at report writing. This is likely pretty simple for most of you.
|
[
"You need something like this (and reversed for the other column)\n=SUM(IIF(Fields!Amt.Value<0, Fields!Amt.Value, 0))\n\nYou should think of it as... \n\nFor each row, evaluate if it's < 0, if it is then return the value,\n else return 0. THEN we sum the results of all these returned values.\n\nOften people think about it the wrong way round. You'll soon do it instinctively with a little practice.\nHere's a complete example\nI created a new report then..\nI created a sample dataset as follows.\nDECLARE @t TABLE (SomeText varchar(10), Amt float)\n\nINSERT INTO @t VALUES\n('A', -2.5),\n('A', 2.5),\n('A', 1),\n('A', 0.5),\n('B', -2.5),\n('B', -0.5),\n('B', 0.5),\n('B', -0.1),\n('C', -1),\n('C', -2),\n('C', 3),\n('C', 4),\n('C', -5)\n\nSELECT * FROM @t\n\nI then added a table to the report and added a row group, grouped by the SomeText field\nThe report design looks like this..\n\nThe expressions in the Rev and Exp columns are as follows.\n=SUM(IIF(Fields!Amt.Value<0, Fields!Amt.Value, 0))\n\nand\n=SUM(IIF(Fields!Amt.Value>0, Fields!Amt.Value, 0))\n\nrespectively.\nThe final output looks like this.\n\nIf you still get errors then check the datatypes in your recordset and post the output from it.\n"
] |
[
3
] |
[
"The answer just needs a little tweek on it as below;\n=SUM(CDec(IIF(Fields!Amt.Value>0, Fields!Amt.Value, 0)))\nWorks as magic\n"
] |
[
-1
] |
[
"reporting_services",
"sql_server"
] |
stackoverflow_0047562814_reporting_services_sql_server.txt
|
Q:
Restrict classes with same method names to be used interchangeably
Lets say we have Shape class like this:
export default class Shape {
public render(): void {
console.log("Render Shape");
}
}
and Group class like this:
import Shape from "./Shape";
export default class Group {
private shapes: Shape[] = [];
public add(shape: Shape): void {
this.shapes.push(shape);
}
public render(): void {
for (const shape of this.shapes) {
shape.render();
}
}
}
as you can see in the Group class, we have a method called add that accepts one parameter of type Shape. I want to pass only objects of type Shape to this method, but I can pass a Group as well.
import Group from "./Group";
import Shape from "./Shape";
const group1 = new Group();
group1.add(new Shape()); // this is ok
const group2 = new Group();
group2.add(group1); // this is ok in typescript view but i don't like it
Is there any solution to prevent this behaviour?
A:
I found a solution for this problem, but I'd like to know other solutions too.
import Shape from "./Shape";
type ValidateStructure<T, Struct> = T extends Struct
? Exclude<keyof T, keyof Struct> extends never
? T
: never
: never;
export default class Group {
private shapes: Shape[] = [];
public add<T>(shape: ValidateStructure<T, Shape>): void {
this.shapes.push(shape);
}
public render(): void {
for (const shape of this.shapes) {
shape.render();
}
}
}
and now magic happens:
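(The screenshot is missing here; for illustration, this is how the check behaves at the call site:)
const group1 = new Group();
group1.add(new Shape()); // ok: Shape matches the Shape structure exactly
const group2 = new Group();
group2.add(group1);      // compile error: Group has the extra public key "add",
                         // so ValidateStructure<Group, Shape> resolves to never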
|
Restrict classes with same method names to be used interchangeably
|
Lets say we have Shape class like this:
export default class Shape {
public render(): void {
console.log("Render Shape");
}
}
and Group class like this:
import Shape from "./Shape";
export default class Group {
private shapes: Shape[] = [];
public add(shape: Shape): void {
this.shapes.push(shape);
}
public render(): void {
for (const shape of this.shapes) {
shape.render();
}
}
}
as you can see in the Group class, we have a method called add that accepts one parameter of type Shape. I want to pass only objects of type Shape to this method, but I can pass a Group as well.
import Group from "./Group";
import Shape from "./Shape";
const group1 = new Group();
group1.add(new Shape()); // this is ok
const group2 = new Group();
group2.add(group1); // this is ok in typescript view but i don't like it
Is there any solution to prevent this behaviour?
|
[
"I found a solution for this problem, but i like to know another solutions too.\nimport Shape from \"./Shape\";\n\ntype ValidateStructure<T, Struct> = T extends Struct\n ? Exclude<keyof T, keyof Struct> extends never\n ? T\n : never\n : never;\n\nexport default class Group {\n private shapes: Shape[] = [];\n\n public add<T>(shape: ValidateStructure<T, Shape>): void {\n this.shapes.push(shape);\n }\n\n public render(): void {\n for (const shape of this.shapes) {\n shape.render();\n }\n }\n}\n\nand now magic happens:\n\n"
] |
[
0
] |
[] |
[] |
[
"oop",
"typescript"
] |
stackoverflow_0074664905_oop_typescript.txt
|
Q:
How to find the equivalent way in nix to install a package
I have to install the package build-essential (because of this error when using cargo:
cargo build ->linker cc not found
)
sudo apt install build-essential
I've searched the list of Nix packages but I haven't found it.
How can I install build-essential with nix
A:
The gcc package in nixpkgs provides cc so the solution was to make sure gcc is installed in your environment or shell.
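A hedged sketch of the usual ways to get it (package name as in current nixpkgs):
# one-off shell with a C toolchain on PATH
nix-shell -p gcc

# or install it into the user profile
nix-env -iA nixpkgs.gcc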
|
How to find the equivalent way in nix to install a package
|
I have to install the package build-essential (because of this error when using cargo:
cargo build ->linker cc not found
)
sudo apt install build-essential
I've searched the list of Nix packages but I haven't found it.
How can I install build-essential with nix
|
[
"The gcc package in nixpkgs provides cc so the solution was to make sure gcc is installed in your environment or shell.\n"
] |
[
1
] |
[] |
[] |
[
"nix"
] |
stackoverflow_0074636275_nix.txt
|
Q:
Node js built-in modules not found in deployment using vercel ERROR: Could not resolve "crypto"
I'm trying to deploy a personal project which I made with express. I'm using passport as a way to authenticate and authorize the user, this library is using some of node js built-in modules like 'crypto' and 'http'. Everything works fine locally.
Build failed with 7 errors:
vc-file-system:node_modules/generaterr/index.js:2:19: ERROR: Could not resolve "util"
vc-file-system:node_modules/passport-local-mongoose/index.js:1:23: ERROR: Could not resolve "crypto"
vc-file-system:node_modules/passport-local-mongoose/lib/pbkdf2.js:1:23: ERROR: Could not resolve "crypto"
vc-file-system:node_modules/passport-local/lib/strategy.js:5:19: ERROR: Could not resolve "util"
vc-file-system:node_modules/passport/lib/middleware/authenticate.js:4:19: ERROR: Could not resolve "http"
I tried adding the missing dependencies to the package.json but nothing changed.
A:
Since your computer and the hosting service may be using different OSes, make sure you install the node modules on the host OS, e.g. by putting the npm install command inside the start command:
"start" : "npm i && node app.js"
|
Node js built-in modules not found in deployment using vercel ERROR: Could not resolve "crypto"
|
I'm trying to deploy a personal project which I made with express. I'm using passport as a way to authenticate and authorize the user, this library is using some of node js built-in modules like 'crypto' and 'http'. Everything works fine locally.
Build failed with 7 errors:
vc-file-system:node_modules/generaterr/index.js:2:19: ERROR: Could not resolve "util"
vc-file-system:node_modules/passport-local-mongoose/index.js:1:23: ERROR: Could not resolve "crypto"
vc-file-system:node_modules/passport-local-mongoose/lib/pbkdf2.js:1:23: ERROR: Could not resolve "crypto"
vc-file-system:node_modules/passport-local/lib/strategy.js:5:19: ERROR: Could not resolve "util"
vc-file-system:node_modules/passport/lib/middleware/authenticate.js:4:19: ERROR: Could not resolve "http"
I tried adding the missing dependencies to the package.json but nothing changed.
|
[
"Since your computer and hosting service are using different OS.please make sure you installing node modules on host OS like put npm install command inside start command\n\"start\" : \"npm i && node app.js\" \n\n"
] |
[
0
] |
[] |
[] |
[
"deployment",
"node.js",
"vercel"
] |
stackoverflow_0074664830_deployment_node.js_vercel.txt
|
Q:
The terminal process failed to launch: A native exception occurred during launch (VS Code Integrated Terminal)
I am developing a web application with ASP.NET MVC. To run it, I use the following shell commands:
dotnet restore ProjectDirectory
and then
dotnet run --project ProjectDirectory
If I run them in my command shell, all works fine. But my code editor is VS Code, so I want to make it run these commands in its Integrated Terminal. I have configured its behavior in its files launch.json and tasks.json:
// launch.json
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "2.0.0",
"configurations": [
{
"name": ".NET Core Launch (web)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "restore",
"program": "C:/Windows/System32/dotnet.exe", // The directory separation char is / in this attribute (not \)
"args": ["run", "--project", "${workspaceFolder}"],
"cwd": "${workspaceFolder}",
"stopAtEntry": false,
"serverReadyAction": {
"action": "openExternally",
"pattern": "\\bNow listening on:\\s+(https?://\\S+)"
},
"env": {
"ASPNETCORE_ENVIRONMENT": "Development"
},
"sourceFileMap": {
"/Views": "${workspaceFolder}/Views"
}
},
{
"name": ".NET Core Attach",
"type": "coreclr",
"request": "attach"
}
]
}
// tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "build",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/MyWebApp.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
{
"label": "publish",
"command": "dotnet",
"type": "process",
"args": [
"publish",
"${workspaceFolder}/MyWebApp.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
{
"label": "watch",
"command": "dotnet",
"type": "process",
"args": [
"watch",
"run",
"${workspaceFolder}/MyWebApp.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
// Here is my preLaunchTask
{
"label": "restore",
"command": "dotnet",
"type": "process",
"args": [
"restore",
"${workspaceFolder}"
],
"problemMatcher": "$msCompile"
}
]
}
After configuring I tried it out. I pressed F5. The Integrated Terminal opened, and it tried to run
dotnet restore ProjectDirectory
But then it showed the following error message:
The terminal process failed to launch: A native exception occurred during launch (CreateProcess failed)
What I have tried
I opened the VS Code documentation and did some research. I found only one page about troubleshooting Integrated Terminal failures.
I tried all the advice from the troubleshooting page above. It didn't help me.
Surfed the Internet (including Stack Overflow). I found no useful pages (excluding the troubleshooting page described above).
Now I asked my own question.
A:
It looks like the issue is with the args in your restore task in your tasks.json file. The restore command doesn't take the --project flag, so you should remove it from the args array.
Your restore task should look something like this:
{
"label": "restore",
"command": "dotnet",
"type": "process",
"args": [
"restore",
"${workspaceFolder}"
],
"problemMatcher": "$msCompile"
}
You should also make sure that the preLaunchTask property in your launch.json file is set to "restore".
"preLaunchTask": "restore",
After making these changes, try running your project again and see if the issue is resolved.
A:
Check your user settings. Review these terminal.integrated settings that could affect the launch:
terminal.integrated.defaultProfile.{platform} - The default shell profile that the terminal uses.
terminal.integrated.profiles.{platform} - The defined shell profiles. Sets the shell path and arguments.
terminal.integrated.cwd - The current working directory (cwd) for the shell process.
terminal.integrated.env.{platform} - Environment variables that will be added to the shell process.
terminal.integrated.inheritEnv - Whether new shells should inherit their environment from VS Code.
terminal.integrated.automationProfile.{platform} - Shell profile for automation-related terminal usage like tasks and debug.
terminal.integrated.splitCwd - Controls the current working directory a split terminal starts with.
terminal.integrated.windowsEnableConpty - Whether to use ConPTY for Windows terminal process communication.
You can review settings in the Settings editor (File > Preferences > Settings) and search
for specific settings by the setting ID.
Screenshot: https://code.visualstudio.com/assets/docs/supporting/troubleshoot-terminal-launch/search-for-settings.png
A quick way to check if you have changed settings that you might not be aware of, is to use the @modified filter in the Settings editor.
Screenshot: https://code.visualstudio.com/assets/docs/supporting/troubleshoot-terminal-launch/search-for-modified-settings.png
Most Integrated Terminal settings will need to be modified directly in your user settings.json JSON file. You can open settings.json via the Edit in settings.json link in the Settings editor or with the Preferences: Open Settings (JSON) command from the Command Palette (⇧⌘P).
Screenshot: https://code.visualstudio.com/assets/docs/supporting/troubleshoot-terminal-launch/settings-json-file.png
Test your shell directly. Try running your designated integrated terminal shell outside VS Code from an external terminal or command prompt. Some terminal launch failures may be due to your shell installation and are not specific to VS Code. The exit codes displayed come from the shell and you may be able to diagnose shell issues by searching on the internet for the specific shell and exit code.
Use the most recent version of VS Code. Each VS Code monthly release has many updates and fixes and may include integrated terminal improvements. You can check your VS Code version via Help > About (on macOS Code > About Visual Studio Code). To find the latest version of VS Code, go to the VS Code release notes. You may also want to check that you have installed the latest version of your shell.
Use the most recent version of your shell. If your shell is installed separate from your platform, try installing the latest available version of the shell. The same advice applies if you are on an older build of your operating system. For example, some older versions of Windows 10 did not work well with the VS Code terminal.
Enable trace logging. You can enable trace logging and capture a log when launching the terminal. Logging often reveals what is wrong as all arguments used to create the terminal process/pty are recorded. Bad shell names, arguments, or environment variables can cause the terminal to not launch. Keep this log for later if your problem isn't solved.
|
The terminal process failed to launch: A native exception occurred during launch (VS Code Integrated Terminal)
|
I am developing a web application with ASP.NET MVC. To run it, I use the following shell commands:
dotnet restore ProjectDirectory
and then
dotnet run --project ProjectDirectory
If I run them in my command shell, all works fine. But my code editor is VS Code, so I want to make it run these commands in its Integrated Terminal. I have configured its behavior in its files launch.json and tasks.json:
// launch.json
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "2.0.0",
"configurations": [
{
"name": ".NET Core Launch (web)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "restore",
"program": "C:/Windows/System32/dotnet.exe", // The directory separation char is / in this attribute (not \)
"args": ["run", "--project", "${workspaceFolder}"],
"cwd": "${workspaceFolder}",
"stopAtEntry": false,
"serverReadyAction": {
"action": "openExternally",
"pattern": "\\bNow listening on:\\s+(https?://\\S+)"
},
"env": {
"ASPNETCORE_ENVIRONMENT": "Development"
},
"sourceFileMap": {
"/Views": "${workspaceFolder}/Views"
}
},
{
"name": ".NET Core Attach",
"type": "coreclr",
"request": "attach"
}
]
}
// tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "build",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/MyWebApp.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
{
"label": "publish",
"command": "dotnet",
"type": "process",
"args": [
"publish",
"${workspaceFolder}/MyWebApp.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
{
"label": "watch",
"command": "dotnet",
"type": "process",
"args": [
"watch",
"run",
"${workspaceFolder}/MyWebApp.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
// Here is my preLaunchTask
{
"label": "restore",
"command": "dotnet",
"type": "process",
"args": [
"restore",
"${workspaceFolder}"
],
"problemMatcher": "$msCompile"
}
]
}
After configuring I tried it out. I pressed F5. The Integrated Terminal opened, and it tried to run
dotnet restore ProjectDirectory
But then it showed the following error message:
The terminal process failed to launch: A native exception occurred during launch (CreateProcess failed)
What I have tried
I opened the VS Code documentation and did some research. I found only one page about troubleshooting Integrated Terminal failures.
I tried all the advice from the troubleshooting page above. It didn't help me.
Surfed the Internet (including Stack Overflow). I found no useful pages (excluding the troubleshooting page described above).
Now I asked my own question.
|
[
"It looks like the issue is with the args in your restore task in your tasks.json file. The restore command doesn't take the --project flag, so you should remove it from the args array.\nYour restore task should look something like this:\n{\n \"label\": \"restore\",\n \"command\": \"dotnet\",\n \"type\": \"process\",\n \"args\": [\n \"restore\",\n \"${workspaceFolder}\"\n ],\n \"problemMatcher\": \"$msCompile\"\n}\n\nYou should also make sure that the preLaunchTask property in your launch.json file is set to \"restore\".\n\"preLaunchTask\": \"restore\",\n\nAfter making these changes, try running your project again and see if the issue is resolved.\n",
"Check your user settings. Review these terminal.integrated settings that could affect the launch:\nterminal.integrated.defaultProfile.{platform} - The default shell profile that the terminal uses.\nterminal.integrated.profiles.{platform} - The defined shell profiles. Sets the shell path and arguments.\nterminal.integrated.cwd - The current working directory (cwd) for the shell process.\nterminal.integrated.env.{platform} - Environment variables that will be added to the shell process.\nterminal.integrated.inheritEnv - Whether new shells should inherit their environment from VS Code.\nterminal.integrated.automationProfile.{platform} - Shell profile for automation-related terminal usage like tasks and debug.\nterminal.integrated.splitCwd - Controls the current working directory a split terminal starts with.\nterminal.integrated.windowsEnableConpty - Whether to use ConPTY for Windows terminal process communication.\nYou can review settings in the Settings editor (File > Preferences > Settings) and search\nfor specific settings by the setting ID.\n[https://code.visualstudio.com/assets/docs/supporting/troubleshoot-terminal-launch/search-for-settings.png][1]\nA quick way to check if you have changed settings that you might not be aware of, is to use the @modified filter in the Settings editor.\n[https://code.visualstudio.com/assets/docs/supporting/troubleshoot-terminal-launch/search-for-modified-settings.png][1]\nMost Integrated Terminal settings will need to be modified directly in your user settings.json JSON file. You can open settings.json via the Edit in settings.json link in the Settings editor or with the Preferences: Open Settings (JSON) command from the Command Palette (⇧⌘P).\n[https://code.visualstudio.com/assets/docs/supporting/troubleshoot-terminal-launch/settings-json-file.png][1]\nTest your shell directly. Try running your designated integrated terminal shell outside VS Code from an external terminal or command prompt. Some terminal launch failures may be due to your shell installation and are not specific to VS Code. The exit codes displayed come from the shell and you may be able to diagnose shell issues by searching on the internet for the specific shell and exit code.\nUse the most recent version of VS Code. Each VS Code monthly release has many updates and fixes and may include integrated terminal improvements. You can check your VS Code version via Help > About (on macOS Code > About Visual Studio Code). To find the latest version of VS Code, go to the VS Code release notes. You may also want to check that you have installed the latest version of your shell.\nUse the most recent version of your shell. If your shell is installed separate from your platform, try installing the latest available version of the shell. The same advice applies if you are on an older build of your operating system. For example, some older versions of Windows 10 did not work well with the VS Code terminal.\nEnable trace logging. You can enable trace logging and capture a log when launching the terminal. Logging often reveals what is wrong as all arguments used to create the terminal process/pty are recorded. Bad shell names, arguments, or environment variables can cause the terminal to not launch. Keep this log for later if your problem isn't solved.\n"
] |
[
1,
1
] |
[] |
[] |
[
"visual_studio_code",
"vscode_debugger",
"vscode_tasks"
] |
stackoverflow_0074511989_visual_studio_code_vscode_debugger_vscode_tasks.txt
|
Q:
Store POJOs to a Spark Dataset
I'm connecting with Zeppelin to my rest web-service, that returns cities, through OpenAPI:
import java.util.Map;
import fr.ecoemploi.application.etude.swagger.model.Commune;
import fr.ecoemploi.application.etude.swagger.api.CogControllerApi;
var serviceCodeOfficielGeographique: CogControllerApi = new CogControllerApi();
var communes: Map[String, Commune] =
serviceCodeOfficielGeographique.obtenirCommunesUsingGET(2022);
it returns satisfactory results:
communes: java.util.Map[String,fr.ecoemploi.application.etude.swagger.model.Commune] =
{62001=class Commune {
arrondissement: 627
codeCanton: 6217
codeCommune: 62001
codeCommuneParente: null
codeDepartement: 62
codeEPCI: 246200364
codeRegion: 32
collectiviteOutremer: false
nomCommune: Ablain-Saint-Nazaire
nomMajuscules: ABLAIN SAINT NAZAIRE
population: null
sirenCommune: 216200014
...
I'd like to store the cities (the values of this map, which I'll fetch with communes.values()) in a Dataset[Commune].
But I wonder how to do this.
I've attempted:
import org.apache.spark.sql._
var datasetCommunes = spark.createDataset(communes, Encoders[Commune]);
under Zeppelin, but received the message:
<console>:96: error: object Encoders does not take type parameters.
var datasetCommunes = spark.createDataset(communes, Encoders[Commune]);
I don't know how to write my statement.
Thanks for your advice below, @Gael_J.
I've attempted it that way (because map.values() returns, in Java, a Collection<Commune> that I first have to convert to a java.util.List):
import org.apache.spark.sql._
import scala.collection.mutable._;
import spark.implicits._
var communeList: ArrayBuffer[Commune] = new ArrayBuffer[Commune];
communes.values().stream().forEach(commune => communeList.append(commune));
var datasetCommunes = communeList.toSeq.toDS();
But I'm receiving the error:
error: value toDS is not a member of Seq[fr.ecoemploi.application.etude.swagger.model.Commune]
var datasetCommunes = communeList.toSeq.toDS();
A:
If Commune is a case class, you should be able to benefit from automatic Encoders with something like this:
import spark.implicits._
import scala.collection.JavaConverters._

val ds = communes.values().asScala.toSeq.toDS()
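Note that Commune here is a Swagger-generated Java POJO rather than a Scala case class, so the implicit encoders from spark.implicits._ may not apply. In that case a bean encoder is the usual workaround — a minimal sketch, assuming Commune follows JavaBean conventions (public getters and setters):
import org.apache.spark.sql.{Dataset, Encoders}

// Build an explicit encoder from the POJO's bean properties
val communeEncoder = Encoders.bean(classOf[Commune])

// Copy the java.util.Collection into a java.util.List, then build the Dataset
val communeList: java.util.List[Commune] =
  new java.util.ArrayList(communes.values())

val datasetCommunes: Dataset[Commune] =
  spark.createDataset(communeList)(communeEncoder)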
|
Store POJOs to a Spark Dataset
|
From Zeppelin, I'm connecting through OpenAPI to my REST web service, which returns cities:
import java.util.Map;
import fr.ecoemploi.application.etude.swagger.model.Commune;
import fr.ecoemploi.application.etude.swagger.api.CogControllerApi;
var serviceCodeOfficielGeographique: CogControllerApi = new CogControllerApi();
var communes: Map[String, Commune] =
serviceCodeOfficielGeographique.obtenirCommunesUsingGET(2022);
it returns satisfactory results:
communes: java.util.Map[String,fr.ecoemploi.application.etude.swagger.model.Commune] =
{62001=class Commune {
arrondissement: 627
codeCanton: 6217
codeCommune: 62001
codeCommuneParente: null
codeDepartement: 62
codeEPCI: 246200364
codeRegion: 32
collectiviteOutremer: false
nomCommune: Ablain-Saint-Nazaire
nomMajuscules: ABLAIN SAINT NAZAIRE
population: null
sirenCommune: 216200014
...
I'd like to store the cities (the values of this map, which I'll fetch with communes.values()) in a Dataset[Commune].
But I wonder how to do this.
I've attempted:
import org.apache.spark.sql._
var datasetCommunes = spark.createDataset(communes, Encoders[Commune]);
under Zeppelin, but received the message:
<console>:96: error: object Encoders does not take type parameters.
var datasetCommunes = spark.createDataset(communes, Encoders[Commune]);
I don't know how to write my statement.
Thanks for your advice below, @Gael_J.
I've attempted it that way (because map.values() returns, in Java, a Collection<Commune> that I first have to convert to a java.util.List):
import org.apache.spark.sql._
import scala.collection.mutable._;
import spark.implicits._
var communeList: ArrayBuffer[Commune] = new ArrayBuffer[Commune];
communes.values().stream().forEach(commune => communeList.append(commune));
var datasetCommunes = communeList.toSeq.toDS();
But I'm receiving the error:
error: value toDS is not a member of Seq[fr.ecoemploi.application.etude.swagger.model.Commune]
var datasetCommunes = communeList.toSeq.toDS();
|
[
"If Commune is a case class, you should be able to benefit from automatic Encoders with something like this:\nimport spark.implicits._\n\nval ds = communes.values().toSeq().toDS()\n\n"
] |
[
1
] |
[] |
[] |
[
"apache_spark",
"scala"
] |
stackoverflow_0074664708_apache_spark_scala.txt
|
Q:
Look up values from one df to another df based on a specific column
I am attempting to populate values from one DataFrame to another DataFrame based on a common column present in both DataFrames.
The code I wrote for this operation is as follows:
for i in df1.zipcodes:
for j in df2.zipcodes.unique():
if i == j:
#print("this is i:",i, "this is j:",j)
df1['rent'] = df2['rent']
The DataFrame (df1) in question looks as follows, with shape (131942, 2).
Providing the first rows of df1:
zipcodes districts
018906 01
018907 01
018910 01
018915 01
018916 01
018925 01
018926 01
018927 01
018928 01
018929 01
018930 01
Additionally, there are no duplicates in the zipcodes column, but the districts column has 28 unique values. No NaN values are present.
The other DataFrame (df2) looks as follows, with shape (77996, 4).
Providing the first ten rows of df2:
street zipcodes district rent
E ROAD 545669 15 3600
E ROAD 545669 15 6200
E ROAD 545669 15 5500
E ROAD 545669 15 3200
H DRIVE 459108 19 3050
H DRIVE 459108 19 2000
A VIEW 098619 03 4200
A VIEW 098619 03 4500
J ROAD 018947 10 19500
O DRIVE 100088 04 9600
Note: The Zipcodes in df2 can repeat.
Now, I want to populate a column in df1 called rent if a zipcode in df1 matches a zipcode in df2. If the zipcodes match but there are multiple entries with the same zipcode in df2, then I want to populate the average as the rent. If there is only one entry for the zipcode, then I want to populate the rent corresponding to that zipcode.
Any help on the above will be greatly appreciated.
A:
Use a merge with the groupby.mean of df2:
out = df1.merge(df2.groupby('zipcodes', as_index=False)['rent'].mean(),
on='zipcodes', how='left')
A:
You can divide that into 2 phases:
1st phase: Aggregate df2 to calculate the average rent by zip code. If a zip code has only one rent, the average will equal that exact rent value, so it still matches what you need.
df2 = df2.groupby('zipcodes').mean()['rent'].reset_index()
2nd phase: Merge to df1 using zipcodes
df1 = df1.merge(df2, on='zipcodes', how='left')
You can change the how parameter to left or inner depending on what you need. A left join will keep all the rows from df1 and fill NaN where it can't find any match in df2. An inner join will only keep rows that can be found in both df1 and df2.
Hope this helps.
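To make the averaging behavior concrete, here is a small runnable illustration with toy data (made up for this sketch, not the real DataFrames):
import pandas as pd

df1 = pd.DataFrame({"zipcodes": ["545669", "459108", "018906"],
                    "districts": ["15", "19", "01"]})
df2 = pd.DataFrame({"zipcodes": ["545669", "545669", "459108"],
                    "rent": [3600, 6200, 3050]})

# Average rent per zipcode; a zipcode with a single row keeps its own rent
avg_rent = df2.groupby("zipcodes", as_index=False)["rent"].mean()

out = df1.merge(avg_rent, on="zipcodes", how="left")
print(out)
# 545669 -> (3600 + 6200) / 2 = 4900.0, 459108 -> 3050.0,
# 018906 has no match in df2, so the left join leaves its rent as NaN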
|
Look up values from one df to another df based on a specific column
|
I am attempting to populate values from one DataFrame to another DataFrame based on a common column present in both DataFrames.
The code I wrote for this operation is as follows:
for i in df1.zipcodes:
for j in df2.zipcodes.unique():
if i == j:
#print("this is i:",i, "this is j:",j)
df1['rent'] = df2['rent']
The DataFrame (df1) in question looks as follows, with shape (131942, 2).
Providing the first rows of df1:
zipcodes districts
018906 01
018907 01
018910 01
018915 01
018916 01
018925 01
018926 01
018927 01
018928 01
018929 01
018930 01
Additionally, there are no duplicates in the zipcodes column, but the districts column has 28 unique values. No NaN values are present.
The other DataFrame (df2) looks as follows, with shape (77996, 4).
Providing the first ten rows of df2:
street zipcodes district rent
E ROAD 545669 15 3600
E ROAD 545669 15 6200
E ROAD 545669 15 5500
E ROAD 545669 15 3200
H DRIVE 459108 19 3050
H DRIVE 459108 19 2000
A VIEW 098619 03 4200
A VIEW 098619 03 4500
J ROAD 018947 10 19500
O DRIVE 100088 04 9600
Note: The Zipcodes in df2 can repeat.
Now, I want to populate a column in df1 called rent if a zipcode in df1 matches a zipcode in df2. If the zipcodes match but there are multiple entries with the same zipcode in df2, then I want to populate the average as the rent. If there is only one entry for the zipcode, then I want to populate the rent corresponding to that zipcode.
Any help on the above will be greatly appreciated.
|
[
"Use a merge with the groupby.mean of df2:\nout = df1.merge(df2.groupby('zipcodes', as_index=False)['rent'].mean(),\n on='zipcodes', how='left')\n\n",
"You can divide that into 2 phases:\n\n1st phase: Aggregate the df2 to calculate the average rent by zip code. If the zip code has only one rent then the average value will be equal to that exact rent value so it still matches what you need.\n df2 = df2.groupby('zipcodes').mean()['rent'].reset_index()\n\n\n2nd phase: Merge to df1 using zipcodes\n df1 = df1.merge(df2, on='zipcodes', how='left') \n\n\n\nYou can change how parameter to left or inner depending on what you need. Left join will keep all the rows from df1 and fill NA if can't find any match from df2. Inner join will only keep rows that can be found in both df1 and df2.\nHope this help.\n"
] |
[
1,
1
] |
[] |
[] |
[
"average",
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074664746_average_dataframe_pandas_python.txt
|
Q:
Duplicated mapping in serverless
I'm using Serverless to deploy a functions written in nodejs to AWS.
I was getting an error Cannot parse "serverless.yml": duplicated mapping key.
(https://i.stack.imgur.com/4hTMj.png)
A:
You have two http keys in your yml file; remove one of them.
If you want to run it on every request and any type, you can do this:
functions:
api:
handler: lambda.handler
events:
- http:
method: '*'
path: /
Sidenote: You should probably give it an endpoint and request method to be on the safer side.
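If you do need several HTTP endpoints on one function, the duplicate-key problem goes away when each endpoint is its own list item under events — a hypothetical sketch:
functions:
  api:
    handler: lambda.handler
    events:
      - http:
          method: get
          path: users
      - http:
          method: post
          path: users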
|
Duplicated mapping in serverless
|
I'm using Serverless to deploy functions written in Node.js to AWS.
I was getting an error: Cannot parse "serverless.yml": duplicated mapping key.
(https://i.stack.imgur.com/4hTMj.png)
|
[
"You have two http in your yml file, remove one of them.\nIf you want to run it on every request and any type, you can do this:\nfunction:\n api:\n handler: lambda.handler\n events:\n - http:\n method: '*'\n path: /\n\nSidenote: You should probably give it an endpoint and request method to be on the safer side.\n"
] |
[
0
] |
[] |
[] |
[
"aws_lambda",
"express",
"node.js"
] |
stackoverflow_0074664669_aws_lambda_express_node.js.txt
|
Q:
I'm using Laravel 8; when I want to use Laravel Passport, an error like this occurs:
composer require laravel/passport
Error:
Your requirements could not be resolved to an installable set of packages.
Problem 1
Root composer.json requires laravel/passport ^11.2 -> satisfiable by laravel/passport[v11.2.0, 11.x-dev].
laravel/passport[v11.2.0, ..., 11.x-dev] require illuminate/auth ^9.0 -> found illuminate/auth[v9.0.0-beta.1, ..., 9.x-dev] but these were not loaded, likely because it conflicts with another require.
You can also try re-running composer require with an explicit version constraint, e.g. "composer require laravel/passport:*" to figure out if any version is installable, or "composer require laravel/passport:^2.1" if you know which you need.
Installation failed, reverting ./composer.json and ./composer.lock to their original content.
"require": {
"php": "^7.3|^8.0",
"fruitcake/laravel-cors": "^2.0",
"guzzlehttp/guzzle": "^7.0.1",
"laravel/framework": "^8.75",
"laravel/sanctum": "^2.11",
"laravel/tinker": "^2.5"
},
anyone can help me?
A:
For Laravel 8, run composer require laravel/passport "^10.0" in your terminal in the project folder.
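If it helps, the usual follow-up after installing the package (per the Passport docs, assuming a stock Laravel 8 app) is to run the migrations and generate the keys:
php artisan migrate
php artisan passport:install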
|
I'm using Laravel 8; when I want to use Laravel Passport, an error like this occurs:
|
composer require laravel/passport
Error:
Your requirements could not be resolved to an installable set of packages.
Problem 1
Root composer.json requires laravel/passport ^11.2 -> satisfiable by laravel/passport[v11.2.0, 11.x-dev].
laravel/passport[v11.2.0, ..., 11.x-dev] require illuminate/auth ^9.0 -> found illuminate/auth[v9.0.0-beta.1, ..., 9.x-dev] but these were not loaded, likely because it conflicts with another require.
You can also try re-running composer require with an explicit version constraint, e.g. "composer require laravel/passport:*" to figure out if any version is installable, or "composer require laravel/passport:^2.1" if you know which you need.
Installation failed, reverting ./composer.json and ./composer.lock to their original content.
"require": {
"php": "^7.3|^8.0",
"fruitcake/laravel-cors": "^2.0",
"guzzlehttp/guzzle": "^7.0.1",
"laravel/framework": "^8.75",
"laravel/sanctum": "^2.11",
"laravel/tinker": "^2.5"
},
anyone can help me?
|
[
"For laravel 8 run composer require laravel/passport \"^10.0\" on your terminal on the project folder\n"
] |
[
0
] |
[] |
[] |
[
"composer_php",
"laravel",
"php"
] |
stackoverflow_0073864256_composer_php_laravel_php.txt
|
Q:
Navigate to a page in a sub folder on item click in React JS
I'm very new to React JS but watched a few video tutorials on how to build my own portfolio. I want to map my portfolio items which contain an image, title, description, role and link. I managed to map everything except for the link to the detailed page of the respective portfolio item.
Could anyone please help me understand what I'm doing wrong and how to solve it please?
This is my folder structure:
src => pages, where the main pages reside, and src => pages => case-studies, where the detailed pages of my portfolio items reside.
Here's what I got to so far:
Work.jsx in the src => pages folder
import React, { useEffect, useState } from "react";
import Loader from "./Loader";
import "./Work.css";
import WorkCard from "./WorkCard";
import WorkData from "./WorkData";
const Work = () => {
return (
<div className="work">
<section className="work-section">
<div className="card">
<h1 className="underline">My portfolio</h1>
<p>Here are the case studies of my projects...</p>
<div className="grid-layout grid-2">
{
WorkData.map((val, index) => {
return (
<WorkCard
key={index}
url={val.url}
image={val.image}
name={val.name}
description={val.description}
role={val.role} />
);
})
}
</div>
</div>
</section>
</div>
);
}
export default Work;
WorkCard.jsx in the src => pages folder
import React from "react";
import { NavLink } from "react-router-dom";
const WorkCard = (props) => {
return (
<NavLink to={props.url} className="tile-card">
<img src={props.image} alt={props.name} />
<div className="details">
<h2>{props.name}</h2>
<p className="description" title={props.description}>{props.description}</p>
<h4 className="role" title={props.role}>Role: {props.role}</h4>
</div>
</NavLink>
);
}
export default WorkCard;
WorkData.jsx in the src => pages folder
import EcoPizza from "../assets/project-icons/ecoPizza.webp";
import Squared from "../assets/project-icons/Squared.webp";
import { EcoPizzaCaseStudy, SquaredCaseStudy } from "./case-studies";
const WorkData = [
{
"image": EcoPizza,
"name": "The UX portfolio item name",
"description": "This is description part",
"role": "My role in that project",
"url": EcoPizzaCaseStudy
},
{
"image": Squared,
"name": "The UX portfolio item name",
"description": "This is description part",
"role": "My role in that project",
"url": SquaredCaseStudy
}
];
export default WorkData;
index.js in the src => pages => case-studies folder
export { default as EcoPizzaCaseStudy } from "./eco-pizza/EcoPizzaCaseStudy";
export { default as SquaredCaseStudy } from "./squared/SquaredCaseStudy";
App.js
import React from "react";
import { BrowserRouter as Router, Route, Routes } from "react-router-dom";
import { Navigation } from "./components/Navigation";
import { Home, About, Work, Contact, PageNotFound, UnderConstruction } from "./pages";
import { Footer } from "./components/Footer";
function App() {
return (
<div className="App">
<Router>
<Navigation />
<main>
<Routes>
<Route path="/" element={<Home />} />
<Route path="/about" element={<About />} />
<Route path="/work" element={<Work />} />
<Route path="/under-construction" element={<UnderConstruction />} />
<Route path="/contact" element={<Contact />} />
<Route path='*' element={<PageNotFound />}/>
</Routes>
</main>
<Footer />
</Router>
</div>
);
}
export default App;
A:
You are missing the whole concept of routing. You can't just use a component as a link target. Instead, you define Routes and then link to the URL path each Route defines.
So you need to fill in the missing routes in App.js:
import React from "react";
import { BrowserRouter as Router, Route, Routes } from "react-router-dom";
import { Navigation } from "./components/Navigation";
import { Home, About, Work, Contact, PageNotFound, UnderConstruction } from "./pages";
import { Footer } from "./components/Footer";
//import required components
import { EcoPizzaCaseStudy, SquaredCaseStudy } from "./case-studies";
function App() {
return (
<div className="App">
<Router>
<Navigation />
<main>
<Routes>
<Route path="/" element={<Home />} />
<Route path="/about" element={<About />} />
<Route path="/work" element={<Work />} />
<Route path="/under-construction" element={<UnderConstruction />} />
<Route path="/contact" element={<Contact />} />
            {/* Add your routes */}
<Route path="/eco-pizza-case-study" element={<EcoPizzaCaseStudy/>} />
<Route path="/squared-case-study" element={<SquaredCaseStudy/>} />
<Route path='*' element={<PageNotFound />}/>
</Routes>
</main>
<Footer />
</Router>
</div>
);
}
export default App;
And then for the url, use that defined path.
Instead of "url": SquaredCaseStudy, just do "url": "/squared-case-study".
Any path you choose will work as long as it is unique. You can make it /study/squared or anything else, but you need to define it in App.js to use it.
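So the data file would end up looking something like this (the paths are illustrative — any unique strings that match your Route definitions will work, and the component import from ./case-studies is no longer needed):
const WorkData = [
  {
    "image": EcoPizza,
    "name": "The UX portfolio item name",
    "description": "This is description part",
    "role": "My role in that project",
    "url": "/eco-pizza-case-study"
  },
  {
    "image": Squared,
    "name": "The UX portfolio item name",
    "description": "This is description part",
    "role": "My role in that project",
    "url": "/squared-case-study"
  }
];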
A:
I think you need to learn the React Router concept first to see how to navigate between components. React uses React Router DOM to navigate pages. So first you need to create all the routes in your project (name the file App.jsx, or make your own custom Route.jsx file and import it into App.jsx)
import React, { Suspense, lazy } from 'react';
import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';
const Work = lazy(() => import('./pages/Work'));
const EcoPizzaCaseStudy = lazy(() => import('./case-studies/eco-pizza/EcoPizzaCaseStudy'));
const SquaredCaseStudy = lazy(() => import('./case-studies/squared/SquaredCaseStudy'));
const App = () => (
<Router>
<Suspense fallback={<div>Loading...</div>}>
<Routes>
<Route path="/work" element={<Home />} />
<Route path="/coPizzaCaseStudy/" element={<EcoPizzaCaseStudy />} />
<Route path="/squaredCaseStudy/" element={<SquaredCaseStudy />} />
</Routes>
</Suspense>
</Router>
);
This way, you can pass the URL paths ecoPizzaCaseStudy and squaredCaseStudy to your WorkData instead of passing the components themselves. You should not import the components in the WorkData.jsx file; just use the above paths in the url property.
Also, regarding your project structure, you can make subfolders inside the pages folder for each page's relevant files (for example, make a Work folder and put your files Work.css ~ WorkData.jsx in it), and instead of having Work.jsx, you can just have index.jsx and copy all the code from Work.jsx. This helps the webpack compiler find the right index for each of your pages (in this case, the work page) more easily.
I shared my project folder structure following best practices. The subfolders under views are the pages. Every folder has an index.tsx file for the main component, plus component subfolders, hooks, types, etc.
Also, you need to rename WorkData.jsx if you are going to export just object data instead of a JSX component. Just name it WorkData.js, etc.
A:
So if I'm correct in understanding your issue, it is that you click the link and then navigate to "/work/EcoPizzaCaseStudy", for example, and then the PageNotFound component is rendered, or there is some error.
The code is currently attempting to use an imported React component as the link target.
import EcoPizza from "../assets/project-icons/ecoPizza.webp";
import Squared from "../assets/project-icons/Squared.webp";
import { EcoPizzaCaseStudy, SquaredCaseStudy } from "./case-studies";
const WorkData = [
{
"image": EcoPizza,
"name": "The UX portfolio item name",
"description": "This is description part",
"role": "My role in that project",
"url": EcoPizzaCaseStudy // <-- React component, not a path string
},
{
"image": Squared,
"name": "The UX portfolio item name",
"description": "This is description part",
"role": "My role in that project",
"url": SquaredCaseStudy // <-- React component, not a path string
}
];
The imported components should be rendered by a Route component and the link should specify a target path string.
You'll need to explicitly render a route for the work projects you want to link to.
Example:
Update the work data to include the element you want to render on a specific route, and then set the URL path you want it to render on.
import EcoPizza from "../assets/project-icons/ecoPizza.webp";
import Squared from "../assets/project-icons/Squared.webp";
import { EcoPizzaCaseStudy, SquaredCaseStudy } from "./case-studies";
const WorkData = [
{
image: EcoPizza,
name: "The UX portfolio item name",
description: "This is description part",
role: "My role in that project",
element: <EcoPizzaCaseStudy />, // <-- route element
path: "eco-pizza", // <-- route path
},
{
image: Squared,
name: "The UX portfolio item name",
description: "This is description part",
role: "My role in that project",
element: <SquaredCaseStudy />,
path: "squared",
}
];
export default WorkData;
The WorkCard component will use the path to navigate relative from the "/work" path, i.e. to "/work/eco-pizza".
const Work = () => {
return (
<div className="work">
<section className="work-section">
<div className="card">
<h1 className="underline">My portfolio</h1>
<p>Here are the case studies of my projects...</p>
<div className="grid-layout grid-2">
{
WorkData.map((val) => {
return (
<WorkCard
key={val.path}
path={val.path} // <-- pass path through
image={val.image}
name={val.name}
description={val.description}
role={val.role}
/>
);
})
}
</div>
</div>
</section>
</div>
);
}
const WorkCard = (props) => {
return (
<NavLink to={props.path} className="tile-card">
<img src={props.image} alt={props.name} />
<div className="details">
<h2>{props.name}</h2>
<p className="description" title={props.description}>
{props.description}
</p>
<h4 className="role" title={props.role}>Role: {props.role}</h4>
</div>
</NavLink>
);
}
Map the WorkData array to the project routes.
...
import WorkData from "./WorkData";
...
function App() {
return (
<div className="App">
<Router>
<Navigation />
<main>
<Routes>
<Route path="/" element={<Home />} />
<Route path="/about" element={<About />} />
<Route path="/work">
<Route index element={<Work />} />
{/* "/work/eco-pizza", "/work/squared", etc... */}
{WorkData.map(({ path, element }) => (
<Route key={path} {...{ path, element }} />
))}
</Route>
<Route path="/under-construction" element={<UnderConstruction />} />
<Route path="/contact" element={<Contact />} />
<Route path='*' element={<PageNotFound />}/>
</Routes>
</main>
<Footer />
</Router>
</div>
);
}
|
Navigate to a page in a sub folder on item click in React JS
|
I'm very new to React JS but watched a few video tutorials on how to build my own portfolio. I want to map my portfolio items which contain an image, title, description, role and link. I managed to map everything except for the link to the detailed page of the respective portfolio item.
Could anyone please help me understand what I'm doing wrong and how to solve it please?
This is my folder structure:
src => pages, where the main pages reside, and src => pages => case-studies, where the detailed pages of my portfolio items reside.
Here's what I got to so far:
Work.jsx in the src => pages folder
import React, { useEffect, useState } from "react";
import Loader from "./Loader";
import "./Work.css";
import WorkCard from "./WorkCard";
import WorkData from "./WorkData";
const Work = () => {
return (
<div className="work">
<section className="work-section">
<div className="card">
<h1 className="underline">My portfolio</h1>
<p>Here are the case studies of my projects...</p>
<div className="grid-layout grid-2">
{
WorkData.map((val, index) => {
return (
<WorkCard
key={index}
url={val.url}
image={val.image}
name={val.name}
description={val.description}
role={val.role} />
);
})
}
</div>
</div>
</section>
</div>
);
}
export default Work;
WorkCard.jsx in the src => pages folder
import React from "react";
import { NavLink } from "react-router-dom";
const WorkCard = (props) => {
return (
<NavLink to={props.url} className="tile-card">
<img src={props.image} alt={props.name} />
<div className="details">
<h2>{props.name}</h2>
<p className="description" title={props.description}>{props.description}</p>
<h4 className="role" title={props.role}>Role: {props.role}</h4>
</div>
</NavLink>
);
}
export default WorkCard;
WorkData.jsx in the src => pages folder
import EcoPizza from "../assets/project-icons/ecoPizza.webp";
import Squared from "../assets/project-icons/Squared.webp";
import { EcoPizzaCaseStudy, SquaredCaseStudy } from "./case-studies";
const WorkData = [
{
"image": EcoPizza,
"name": "The UX portfolio item name",
"description": "This is description part",
"role": "My role in that project",
"url": EcoPizzaCaseStudy
},
{
"image": Squared,
"name": "The UX portfolio item name",
"description": "This is description part",
"role": "My role in that project",
"url": SquaredCaseStudy
}
];
export default WorkData;
index.js in the src => pages => case-studies folder
export { default as EcoPizzaCaseStudy } from "./eco-pizza/EcoPizzaCaseStudy";
export { default as SquaredCaseStudy } from "./squared/SquaredCaseStudy";
App.js
import React from "react";
import { BrowserRouter as Router, Route, Routes } from "react-router-dom";
import { Navigation } from "./components/Navigation";
import { Home, About, Work, Contact, PageNotFound, UnderConstruction } from "./pages";
import { Footer } from "./components/Footer";
function App() {
return (
<div className="App">
<Router>
<Navigation />
<main>
<Routes>
<Route path="/" element={<Home />} />
<Route path="/about" element={<About />} />
<Route path="/work" element={<Work />} />
<Route path="/under-construction" element={<UnderConstruction />} />
<Route path="/contact" element={<Contact />} />
<Route path='*' element={<PageNotFound />}/>
</Routes>
</main>
<Footer />
</Router>
</div>
);
}
export default App;
|
[
"you are missing whole concept of routing. You can't just use item as a link to it. For that, we use Routes and then use url by Route defined accordingly.\nSo you need to fill up your Routes with missing items in App.js:\nimport React from \"react\";\nimport { BrowserRouter as Router, Route, Routes } from \"react-router-dom\";\nimport { Navigation } from \"./components/Navigation\";\nimport { Home, About, Work, Contact, PageNotFound, UnderConstruction } from \"./pages\";\nimport { Footer } from \"./components/Footer\";\n//import required components\nimport { EcoPizzaCaseStudy, SquaredCaseStudy } from \"./case-studies\";\n\nfunction App() {\n return (\n <div className=\"App\">\n <Router>\n <Navigation />\n <main>\n <Routes>\n <Route path=\"/\" element={<Home />} />\n <Route path=\"/about\" element={<About />} />\n <Route path=\"/work\" element={<Work />} />\n <Route path=\"/under-construction\" element={<UnderConstruction />} />\n <Route path=\"/contact\" element={<Contact />} />\n // Add your routes\n <Route path=\"/eco-pizza-case-study\" element={<EcoPizzaCaseStudy/>} />\n <Route path=\"/squared-case-study\" element={<SquaredCaseStudy/>} />\n <Route path='*' element={<PageNotFound />}/>\n </Routes>\n </main>\n <Footer />\n </Router>\n </div>\n );\n}\n\nexport default App;\n\nAnd then for url use that defined path.\nInstead \"url\": SquaredCaseStudy just do \"url\": \"/squared-case-study\"\nAny path you choose will work as long as it is unique. You can make it /study/squared or any other, but you need to define it in App.js to use it\n",
"I think you need to know React Router concept first to check how to navigate routes between components. React use React Router Dom to navigate pages. So First you need to create all routes in your projects (name it App.jsx or you can make your own custom Route.jsx file and import to App.jsx)\nimport React, { Suspense, lazy } from 'react';\nimport { BrowserRouter as Router, Routes, Route } from 'react-router-dom';\n\nconst Work = lazy(() => import('./pages/Work'));\nconst EcoPizzaCaseStudy = lazy(() => import('./case-studies/eco-pizza/EcoPizzaCaseStudy'));\nconst SquaredCaseStudy = lazy(() => import('./case-studies/squared/SquaredCaseStudy'));\n\nconst App = () => (\n <Router>\n <Suspense fallback={<div>Loading...</div>}>\n <Routes>\n <Route path=\"/work\" element={<Home />} />\n <Route path=\"/coPizzaCaseStudy/\" element={<EcoPizzaCaseStudy />} />\n <Route path=\"/squaredCaseStudy/\" element={<SquaredCaseStudy />} />\n </Routes>\n </Suspense>\n </Router>\n);\n\nWith this way, you can pass urls coPizzaCaseStudy and squaredCaseStudy to your workData instead of passing components itself. You need to not import components in WorkData.jsx file, just use above texts in url property.\nAnd also In my personal thinking regarding your project structuring, You can make sub folders inside of pages folder for every pages relevant files (For example, make Work folder and put your files Work.css ~ WorkData.jsx), and instead of having Work.jsx, you can just have index.jsx and copy all code from Work.jsx. This way help you webpack compile engine find right index for your pages (in this case, work page) more easily.\nI shared my project folder structure followed best practices. Sub folders under views are every pages. Every folders has index.tsx file for main component, and component sub folders, hooks, and types, etc\n\nAnd also you need to name WorkData.jsx if you are going to expert just object data instead of JSX component. Just name it WorkData.js etc.\n",
"So if I'm correct in understanding your issue, it is that you click the link and then navigate to \"/work/EcoPizzaCaseStudy\", for example, and then the PageNotFound component is rendered, or there is some error.\nThe code is currently attempting to use an imported React component as the link target.\nimport EcoPizza from \"../assets/project-icons/ecoPizza.webp\";\nimport Squared from \"../assets/project-icons/Squared.webp\";\nimport { EcoPizzaCaseStudy, SquaredCaseStudy } from \"./case-studies\";\n\nconst WorkData = [\n {\n \"image\": EcoPizza,\n \"name\": \"The UX portfolio item name\",\n \"description\": \"This is description part\",\n \"role\": \"My role in that project\",\n \"url\": EcoPizzaCaseStudy // <-- React component, not a path string\n },\n {\n \"image\": Squared,\n \"name\": \"The UX portfolio item name\",\n \"description\": \"This is description part\",\n \"role\": \"My role in that project\",\n \"url\": SquaredCaseStudy // <-- React component, not a path string\n }\n];\n\nThe imported components should be rendered by a Route component and the link should specify a target path string.\nYou'll need to explicitly render a route for the work projects you want to link to.\nExample:\nUpdate the work data to include the element you want to render on a specific route, and then set the URL path you want it to render on.\nimport EcoPizza from \"../assets/project-icons/ecoPizza.webp\";\nimport Squared from \"../assets/project-icons/Squared.webp\";\nimport { EcoPizzaCaseStudy, SquaredCaseStudy } from \"./case-studies\";\n\nconst WorkData = [\n {\n image: EcoPizza,\n name: \"The UX portfolio item name\",\n description: \"This is description part\",\n role: \"My role in that project\",\n element: <EcoPizzaCaseStudy />, // <-- route element\n path: \"eco-pizza\", // <-- route path\n },\n {\n image: Squared,\n name: \"The UX portfolio item name\",\n description: \"This is description part\",\n role: \"My role in that project\",\n element: <SquaredCaseStudy />,\n path: \"squared\",\n }\n];\n\nexport default WorkData;\n\nThe WorkCard component will use the path to navigate relative from the \"/work\" path, i.e. to \"/work/eco-pizza\".\nconst Work = () => {\n return (\n <div className=\"work\">\n <section className=\"work-section\">\n <div className=\"card\">\n <h1 className=\"underline\">My portfolio</h1>\n <p>Here are the case studies of my projects...</p>\n\n <div className=\"grid-layout grid-2\">\n {\n WorkData.map((val) => {\n return (\n <WorkCard\n key={val.path}\n path={val.path} // <-- path path through\n image={val.image}\n name={val.name}\n description={val.description}\n role={val.role}\n />\n );\n })\n }\n </div>\n </div>\n </section>\n </div>\n );\n}\n\nconst WorkCard = (props) => {\n return (\n <NavLink to={props.path} className=\"tile-card\">\n <img src={props.image} alt={props.name} />\n <div className=\"details\">\n <h2>{props.name}</h2>\n <p className=\"description\" title={props.description}>\n {props.description}\n </p>\n <h4 className=\"role\" title={props.role}>Role: {props.role}</h4>\n </div>\n </NavLink>\n );\n}\n\nMap the WorkData array to the project routes.\n...\nimport WorkData from \"./WorkData\";\n...\n\nfunction App() {\n return (\n <div className=\"App\">\n <Router>\n <Navigation />\n <main>\n <Routes>\n <Route path=\"/\" element={<Home />} />\n <Route path=\"/about\" element={<About />} />\n <Route path=\"/work\">\n <Route index element={<Work />} />\n {/* \"/work/eco-pizza\", \"/work/squared\", etc... 
*/}\n {WorkData.map(({ path, element }) => (\n <Route key={path} {...{ path, element }} />\n ))}\n </Route>\n <Route path=\"/under-construction\" element={<UnderConstruction />} />\n <Route path=\"/contact\" element={<Contact />} />\n <Route path='*' element={<PageNotFound />}/>\n </Routes>\n </main>\n <Footer />\n </Router>\n </div>\n );\n}\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"javascript",
"portfolio",
"react_router",
"react_router_dom",
"reactjs"
] |
stackoverflow_0074664596_javascript_portfolio_react_router_react_router_dom_reactjs.txt
|