content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name
---|---|---|---|---|---|---|---|---
Q:
How to use Class to iterate over an array?
I'm working on Python classes, but I'm running into a "not iterable" error; however, at least from what I can tell, it should be iterable.
class Stuff:
def __init__(self, values):
self.values = values
def vari(self):
mean = sum(self.values)/len(self.values)
_var = sum((v - mean)**2 for v in self.values) / len(self.values)
return _var
def std_dev(self):
print(sqrt(vari(self.values)))
Basically, I have a class called Stuff that takes in "values," which in this case will be
x = [12, 20, 56, 34, 3, 17, 23, 43, 54]
from there, values are fed into a function for variance and then a function for std_dev, but I'm still getting the not iterable error. I know I can use numpy and stats for std_dev and variance, but I'm trying to work on classes. Any help would be appreciated.
A:
Is this what you wanted? The main fixes are importing math so std_dev can use math.sqrt, and calling self.vari() instead of vari(self.values).
Code:-
import math
class Stuff:
def __init__(self,values):
self.values = values
def vari(self):
mean = sum(self.values)/len(self.values)
_var = sum((v - mean)**2 for v in self.values) / len(self.values)
return _var
def std_dev(self):
return math.sqrt(self.vari())
x=[12, 20, 56, 34, 3, 17, 23, 43, 54]
a=Stuff(x)
print(a.vari())
print(a.std_dev())
Output:-
311.2098765432099
17.64114158843497
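As a cross-check (the asker mentions stats libraries), the standard-library statistics module computes the same population variance and standard deviation; a minimal sketch:
import statistics
x = [12, 20, 56, 34, 3, 17, 23, 43, 54]
print(statistics.pvariance(x))  # expected: 311.2098765432099
print(statistics.pstdev(x))     # expected: 17.64114158843497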
|
How to use Class to iterate over an array?
|
I'm working on Python classes, but I'm running into a "not iterable" error; however, at least from what I can tell, it should be iterable.
class Stuff:
def __init__(self, values):
self.values = values
def vari(self):
mean = sum(self.values)/len(self.values)
_var = sum((v - mean)**2 for v in self.values) / len(self.values)
return _var
def std_dev(self):
print(sqrt(vari(self.values)))
Basically, I have a class called Stuff that takes in "values," which in this case will be
x = [12, 20, 56, 34, 3, 17, 23, 43, 54]
from there, values are fed into a function for variance and then a function for std_dev, but I'm still getting the not iterable error. I know I can use numpy and stats for std_dev and variance, but I'm trying to work on classes. Any help would be appreciated.
|
[
"Is this what you wanted !?\nCode:-\nimport math\nclass Stuff:\n def __init__(self,values):\n self.values = values\n \n def vari(self):\n mean = sum(self.values)/len(self.values)\n _var = sum((v - mean)**2 for v in self.values) / len(self.values)\n return _var\n\n def std_dev(self):\n return math.sqrt(self.vari())\nx=[12, 20, 56, 34, 3, 17, 23, 43, 54]\na=Stuff(x)\nprint(a.vari())\nprint(a.std_dev())\n\nOutput:-\n311.2098765432099\n17.64114158843497\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074664020_python.txt
|
Q:
Azure Data Factory SQL trigger events
I wanted to comment on this post: I want to trigger Azure datafactory pipeline whenever there is a change in Azure SQL database
but I don't have enough reputation...
The solution that Skin comes up with (SQL DB trigger events) looks exactly like what I'm after but I can't find any further documentation on it - in fact the only references I've found say that this functionality doesn't exist?
Can anyone point me to anything online - or a book - that could help?
Cheers
A:
AFAIK, in ADF there are no such triggers for SQL changes. ADF supports only schedule, tumbling window, storage event, and custom event triggers.
But you can use the Logic Apps triggers (item created and item modified) to trigger an ADF pipeline.
For this, the SQL table should have an auto-increment column.
Here is a demo I built for the item created trigger:
First, search for SQL in Logic Apps and click on the item created trigger. Then create a connection with your details.
After that, give your table details.
After the trigger, create an Action for the ADF pipeline run.
Make sure you publish your ADF pipeline so that its name shows up in the drop-down above. You can assign SQL columns to ADF pipeline parameters as above.
You can set the trigger to run every minute or every hour as per your requirement. If any new item is inserted into the SQL table in that period of time, it will trigger the ADF pipeline.
I have inserted a new record like this: insert into practice values('Six');
Flow Succeeded:
My ADF pipeline:
Pipeline Triggered:
Pipeline successful and you can see variable value:
You can use another flow for the item modified trigger in the same way and trigger the ADF pipeline from it as well.
A:
With the new feature that allows invocation of any REST endpoint, now in public preview in Azure SQL Database, I guess it is possible:
https://devblogs.microsoft.com/azure-sql/azure-sql-database-external-rest-endpoints-integration-public-preview/
Blog:
https://datasharkx.wordpress.com/2022/12/02/event-trigger-azure-data-factory-synapse-pipeline-via-azure-sql-database/
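A minimal sketch of what that looks like from T-SQL, assuming the sp_invoke_external_rest_endpoint procedure described in those links and a hypothetical pipeline-run URL (the real URL and auth depend on your setup):
-- inside an ordinary DML trigger on the watched table
declare @response nvarchar(max);
exec sp_invoke_external_rest_endpoint
    @url = N'https://<your-endpoint>/pipelines/MyPipeline/createRun?api-version=2018-06-01',
    @method = 'POST',
    @response = @response output;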
|
Azure Data Factory SQL trigger events
|
I wanted to comment on this post: I want to trigger Azure datafactory pipeline whenever there is a change in Azure SQL database
but I don't have enough reputation...
The solution that Skin comes up with (SQL DB trigger events) looks exactly like what I'm after but I can't find any further documentation on it - in fact the only references I've found say that this functionality doesn't exist?
Can anyone point me to anything online - or a book - that could help?
Cheers
|
[
"AFAIK, In ADF there are no such triggers for SQL changes. ADF supports only Schedule,Tumbling window and Storage event and custom event triggers.\nBut You can use the logic app triggers (item created and item modified) to triggers ADF pipeline.\nFor this we the SQL table should have an auto increment column.\nHere is a demo I have built for item created trigger:\nFirst search for SQL in logic app and click on item created trigger. Then create a connection with your details.\nAfter that give your table details.\nAfter trigger create Action for ADF pipeline run.\n\n\nMake sure you publish your ADF pipeline to reflect its name in the above drop down. You can assign SQL columns to ADF pipeline parameter like above.\nYou can set the trigger for one every one minute or one hour as per your requirement. If any new item inserted into SQL table in that period of time it will trigger ADF pipeline.\nI have inserted a new record like this insert into practice values('Six');\nFlow Suceeded:\n\nMy ADF pipeline:\n\nPipeline Triggered:\n\nPipeline successful and you can see variable value:\n\nYou can use another flow for item modified trigger as same above and trigger ADF pipeline from that as well.\n",
"with the new latest feature A new feature that allows invocation of any REST endpoints is now in public preview in Azure SQL databases\n, I guess it is possible :\nhttps://devblogs.microsoft.com/azure-sql/azure-sql-database-external-rest-endpoints-integration-public-preview/\nBlog:\nhttps://datasharkx.wordpress.com/2022/12/02/event-trigger-azure-data-factory-synapse-pipeline-via-azure-sql-database/\n"
] |
[
0,
0
] |
[] |
[] |
[
"azure",
"azure_data_factory_pipeline"
] |
stackoverflow_0074563667_azure_azure_data_factory_pipeline.txt
|
Q:
Oracle SQL Trigger doesn't bring any value
I have the following table structure:
Product_Name Product_Id Product_Type
Car 123 A
House ABC B
Ball UZY B
and so on...
Product Id comes from two different customer-fed tables (table A and table B, both with Product_Name and Product_id). Once they create a new row (insert) in either of their tables, a sequence adds a new (random) Product_Id.
I have a trigger that basically tries to look for that Product Id when a new row gets inserted on my table (for audit, reporting, etc purpose tables):
create or replace TRIGGER "MY_INSERT_ID_TRIGGER"
before insert on "MY_TABLE"
for each row
DECLARE
begin
if :NEW.My_Product_Name is null and lower(:NEW.My_product_type) = 'A'
then
SELECT DISTINCT Product_Id
INTO :NEW.My_product_id
FROM CUSTOMER_PRODUCT_TABLE_A
--To remove special characters and spaces:
WHERE lower( regexp_replace( replace(Product_name, ' ', '') , '[^a-zA-Z ]') )
=
lower( regexp_replace( replace(:NEW.My_Product_name, ' ', '') , '[^a-zA-Z ]') );
elsif :NEW.PRODUCT is null and lower(:NEW.PRODUCT_TYPE) = 'B'
then
SELECT DISTINCT Product_Id
INTO :NEW.My_product_id
FROM CUSTOMER_PRODUCT_TABLE_B
--To remove special characters and spaces:
WHERE lower( regexp_replace( replace(Product_name, ' ', '') , '[^a-zA-Z ]') )
=
lower( regexp_replace( replace(:NEW.My_Product_name, ' ', '') , '[^a-zA-Z ]') );
else null;
end if;
end;
My table structure as follows:
My_product_name My_product_id My_product_type
Both selects work fine when run independently (outside the trigger); however, the trigger itself doesn't bring any value.
Does anybody know why?
Thanks
This would be the SQL structure:
create table CUSTOMER_PRODUCT_TABLE_A
(
Product_Name varchar2(300),
Product_id varchar2(300)
)
create table CUSTOMER_PRODUCT_TABLE_B
(
Product_Name varchar2(300),
Product_id varchar2(300)
)
create table MY_TABLE
(
My_product_name varchar2(300),
My_product_id varchar2(300),
My_product_type varchar2(300)
)
INSERT ALL
INTO CUSTOMER_PRODUCT_TABLE_A (Product_Name, Product_id)
VALUES('Product1-A', 123)
INTO CUSTOMER_PRODUCT_TABLE_A (Product_Name, Product_id)
VALUES('Product2-A', 123)
INTO CUSTOMER_PRODUCT_TABLE_A (Product_Name, Product_id)
VALUES('Product3-A', 123)
INTO CUSTOMER_PRODUCT_TABLE_B (Product_Name, Product_id)
VALUES('Product1-B', 'ABC')
INTO CUSTOMER_PRODUCT_TABLE_B (Product_Name, Product_id)
VALUES('Product2-B', 'DEF')
INTO CUSTOMER_PRODUCT_TABLE_B (Product_Name, Product_id)
VALUES('Product3-B', 'GHI')
SELECT 1 FROM DUAL
So inserting 'Product1-A' into My_Table would return its Id as per the trigger (into my table's my_product_id column).
Thanks
A:
These will always fail:
lower(:NEW.My_product_type) = 'A'
lower(:NEW.PRODUCT_TYPE) = 'B'
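Because lower() always returns a lowercase string, equality with an uppercase literal can never hold. A minimal sketch of the corrected comparisons (everything else in the trigger can stay as it is):
lower(:NEW.My_product_type) = 'a'
lower(:NEW.My_product_type) = 'b'
-- or, equivalently, normalize the other way:
upper(:NEW.My_product_type) = 'A'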
|
Oracle SQL Trigger doesn't bring any value
|
I have the following table structure:
Product_Name Product_Id Product_Type
Car 123 A
House ABC B
Ball UZY B
and so on...
Product Id comes from two different customer-fed tables (table A and table B, both with Product_Name and Product_id). Once they create a new row (insert) in either of their tables, a sequence adds a new (random) Product_Id.
I have a trigger that basically tries to look for that Product Id when a new row gets inserted on my table (for audit, reporting, etc purpose tables):
create or replace TRIGGER "MY_INSERT_ID_TRIGGER"
before insert on "MY_TABLE"
for each row
DECLARE
begin
if :NEW.My_Product_Name is null and lower(:NEW.My_product_type) = 'A'
then
SELECT DISTINCT Product_Id
INTO :NEW.My_product_id
FROM CUSTOMER_PRODUCT_TABLE_A
--To remove special characters and spaces:
WHERE lower( regexp_replace( replace(Product_name, ' ', '') , '[^a-zA-Z ]') )
=
lower( regexp_replace( replace(:NEW.My_Product_name, ' ', '') , '[^a-zA-Z ]') );
elsif :NEW.PRODUCT is null and lower(:NEW.PRODUCT_TYPE) = 'B'
then
SELECT DISTINCT Product_Id
INTO :NEW.My_product_id
FROM CUSTOMER_PRODUCT_TABLE_B
--To remove special characters and spaces:
WHERE lower( regexp_replace( replace(Product_name, ' ', '') , '[^a-zA-Z ]') )
=
lower( regexp_replace( replace(:NEW.My_Product_name, ' ', '') , '[^a-zA-Z ]') );
else null;
end if;
end;
My table structure as follows:
My_product_name My_product_id My_product_type
Both selects work fine when run independently (outside the trigger); however, the trigger itself doesn't bring any value.
Does anybody know why?
Thanks
This would be the SQL structure:
create table CUSTOMER_PRODUCT_TABLE_A
(
Product_Name varchar2(300),
Product_id varchar2(300)
)
create table CUSTOMER_PRODUCT_TABLE_B
(
Product_Name varchar2(300),
Product_id varchar2(300)
)
create table MY_TABLE
(
My_product_name varchar2(300),
My_product_id varchar2(300),
My_product_type varchar2(300)
)
INSERT ALL
INTO CUSTOMER_PRODUCT_TABLE_A (Product_Name, Product_id)
VALUES('Product1-A', 123)
INTO CUSTOMER_PRODUCT_TABLE_A (Product_Name, Product_id)
VALUES('Product2-A', 123)
INTO CUSTOMER_PRODUCT_TABLE_A (Product_Name, Product_id)
VALUES('Product3-A', 123)
INTO CUSTOMER_PRODUCT_TABLE_B (Product_Name, Product_id)
VALUES('Product1-B', 'ABC')
INTO CUSTOMER_PRODUCT_TABLE_B (Product_Name, Product_id)
VALUES('Product2-B', 'DEF')
INTO CUSTOMER_PRODUCT_TABLE_B (Product_Name, Product_id)
VALUES('Product3-B', 'GHI')
SELECT 1 FROM DUAL
So inserting 'Product1-A' into My_Table would return its Id as per the trigger (into my table's my_product_id column).
Thanks
|
[
"These will always fail:\nlower(:NEW.My_product_type) = 'A'\nlower(:NEW.PRODUCT_TYPE) = 'B'\n"
] |
[
0
] |
[] |
[] |
[
"oracle",
"sql"
] |
stackoverflow_0074658698_oracle_sql.txt
|
Q:
Can't access property of object in html or typescript angular
export class MyComponent {
array_all: Array<{ page: any }> = [];
array_page: Array<{ id: number, name: string }> = [];
item: Array<{ id: number, name: string }> = [
{
id: 1,
name: 'Test name'
}
]
constructor(){
this.array_page.push({
id: this.item.id,
name: this.item.name
});
this.array_all.push({
page: this.array_page
});
console.log(this.array_all.page.id);
// error TS2339: Property 'page' does not exist on type '{ page: any; }[]'.
}
}
<div *ngFor="let page of array_all">
<h1>Id: {{page.id}}</h1>
<!--
error TS2339: Property 'id' does not exist on type '{ page: any; }'
-->
</div>
What should I do here to access the property id or name? While searching for a solution I saw something about converting the object to an array and then using a nested *ngFor, but I don't know how to do that.
Full code in case you need it; here is the reason why I need to first push to one array and then to the other:
export class AboutComponent {
array_all: Array<{ page: any }> = [];
array_page: Array<{ id: number, name: string, link: string, image_url: string, image_alt: string }> = [];
constructor(){
let start_at: number = 0;
let last_slice: number = 0;
for(let i: number = start_at; i < this.knowledge_items.length; i++){
if(i%2 === 0 && i%3 === 0 && i !== last_slice){
this.array_all.push({page: this.array_page});
this.array_page = [];
this.array_page.push({
id: this.knowledge_items[i].id,
name: this.knowledge_items[i].name,
link: this.knowledge_items[i].link,
image_url: this.knowledge_items[i].image_url,
image_alt: this.knowledge_items[i].image_alt
});
start_at = i;
last_slice = i;
}
else{
this.array_page.push({
id: this.knowledge_items[i].id,
name: this.knowledge_items[i].name,
link: this.knowledge_items[i].link,
image_url: this.knowledge_items[i].image_url,
image_alt: this.knowledge_items[i].image_alt
});
if(i === this.knowledge_items.length - 1){
this.array_all.push({page: this.array_page});
this.array_page = [];
}
}
}
console.log(this.array_all);
}
knowledge_items: Array<{id: number, name: string, link: string, image_url: string, image_alt: string }> = [
{
id: 1,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 2,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
},
{
id: 3,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 4,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
},
{
id: 5,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 6,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
},
{
id: 7,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 8,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
}
]
}
A:
You have written the wrong HTML bindings to display the page id. Your page id is inside the array_all.page property. Check console.log() for more details.
Below is the solution to your problem.
<div *ngFor="let arrayItem of array_all; let i = index">
<div *ngFor="let item of arrayItem.page">
<h1>Id: {{ item.id }}</h1>
</div>
</div>
A:
Your code is wrong; check this link.
ts file:
export class MyComponent {
array_all: Array<{ page: any }> = [];
array_page: Array<{ id: number, name: string }> = [];
item: Array<{ id: number, name: string }> = [
{
id: 1,
name: 'Test name'
}
]
constructor(){
this.array_page.push({
id: this.item[0].id,
name: this.item[0].name
});
this.array_all.push({
page: [...this.array_page]
});
console.log(this.array_all[0].page[0].id);
}
}
HTML file:
<div *ngFor="let page of array_all">
<h1>Id: {{page.page.id}}</h1>
</div>
|
Can't access property of object in html or typescript angular
|
export class MyComponent {
array_all: Array<{ page: any }> = [];
array_page: Array<{ id: number, name: string }> = [];
item: Array<{ id: number, name: string }> = [
{
id: 1,
name: 'Test name'
}
]
constructor(){
this.array_page.push({
id: this.item.id,
name: this.item.name
});
this.array_all.push({
page: this.array_page
});
console.log(this.array_all.page.id);
// error TS2339: Property 'page' does not exist on type '{ page: any; }[]'.
}
}
<div *ngFor="let page of array_all">
<h1>Id: {{page.id}}</h1>
<!--
error TS2339: Property 'id' does not exist on type '{ page: any; }'
-->
</div>
What should I do here to access the property id or name? While searching for a solution I saw something about converting the object to an array and then using a nested *ngFor, but I don't know how to do that.
Full code in case you need it; here is the reason why I need to first push to one array and then to the other:
export class AboutComponent {
array_all: Array<{ page: any }> = [];
array_page: Array<{ id: number, name: string, link: string, image_url: string, image_alt: string }> = [];
constructor(){
let start_at: number = 0;
let last_slice: number = 0;
for(let i: number = start_at; i < this.knowledge_items.length; i++){
if(i%2 === 0 && i%3 === 0 && i !== last_slice){
this.array_all.push({page: this.array_page});
this.array_page = [];
this.array_page.push({
id: this.knowledge_items[i].id,
name: this.knowledge_items[i].name,
link: this.knowledge_items[i].link,
image_url: this.knowledge_items[i].image_url,
image_alt: this.knowledge_items[i].image_alt
});
start_at = i;
last_slice = i;
}
else{
this.array_page.push({
id: this.knowledge_items[i].id,
name: this.knowledge_items[i].name,
link: this.knowledge_items[i].link,
image_url: this.knowledge_items[i].image_url,
image_alt: this.knowledge_items[i].image_alt
});
if(i === this.knowledge_items.length - 1){
this.array_all.push({page: this.array_page});
this.array_page = [];
}
}
}
console.log(this.array_all);
}
knowledge_items: Array<{id: number, name: string, link: string, image_url: string, image_alt: string }> = [
{
id: 1,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 2,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
},
{
id: 3,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 4,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
},
{
id: 5,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 6,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
},
{
id: 7,
name: 'C++',
link: 'https://es.wikipedia.org/wiki/C%2B%2B',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/ISO_C%2B%2B_Logo.svg/1200px-ISO_C%2B%2B_Logo.svg.png',
image_alt: 'C++ programming language'
},
{
id: 8,
name: 'Python',
link: 'https://es.wikipedia.org/wiki/Python',
image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/1869px-Python-logo-notext.svg.png',
image_alt: 'Python programming language'
}
]
}
|
[
"You have written the wrong HTML bindings to display the page id. Your page id is inside the array_all.page property. Check console.log() for more details.\nBelow is the solution to your problem.\n<div *ngFor=\"let arrayItem of array_all; let i = index\">\n <div *ngFor=\"let item of arrayItem.page\">\n <h1>Id: {{ item.id }}</h1>\n </div>\n</div>\n\n",
"your code is wrong.\ncheck this link\nts file:\nexport class MyComponent {\n\narray_all: Array<{ page: any }> = [];\narray_page: Array<{ id: number, name: string }> = [];\n\nitem: Array<{ id: number, name: string }> = [\n {\n id: 1,\n name: 'Test name'\n }\n]\n\nconstructor(){\n this.array_page.push({\n id: this.item[0].id,\n name: this.item[0].name\n });\n\n this.array_all.push({\n page: [...this.array_page]\n });\n\n console.log(this.array_all[0].page[0].id);\n}\n\n}\nHTML file:\n<div *ngFor=\"let page of array_all\">\n <h1>Id: {{page.page.id}}</h1>\n</div>\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"angular",
"html",
"typescript"
] |
stackoverflow_0074663778_angular_html_typescript.txt
|
Q:
Filter and join values into one cell in Excel
I would like to filter and combine returned cells into one cell in Excel.
Products sheet:
SKU
Product Name
Color
Brand
Stock
CellP1
iPhone 14
Red
Apple
1
CellP2
Android 13
Green
Sams
0
CellP3
iPhone 12
Gold
Apple
1
CellP4
Android 15
Black
Sams
1
CellP5
iPhone 16
Green
Apple
0
Export sheet:
conditions:
Brand == Apple
Stock > 0
Product Name = [Product Name + " " + Color]
Product Name
Stock
iPhone 14 Red
1
iPhone 12 Gold
1
I tried several things; this was the closest to what I want to achieve:
=FILTER({Products!B2:B,Products!C2:C,Products!E2:E},REGEXMATCH(Products!D2:D, "Apple"))
...but I can't combine product name and color into one cell.
A:
You can try the following in cell H2:
=FILTER(HSTACK(B2:B6 &" "& C2:C6, E2:E6), (D2:D6=G2) * (E2:E6 > 0))
Or use the LET function to define the input variables first, for easier reading of the expression:
=LET(rng, A2:E6, prod, INDEX(rng,,2), color, INDEX(rng,,3),
brands, INDEX(rng,,4), stocks, INDEX(rng,,5), brand, G2,
FILTER(HSTACK(prod &" "& color, stocks) , (brands=brand) * (stocks > 0) )
)
here is the output:
If your Excel version doesn't have HSTACK available, just replace that call with:
CHOOSE({1,2}, prod &" "& color, stocks)
If you want to generate the headers as part of the output, then you can use VSTACK for that (if you have it available in your Excel version):
=LET(rng, A2:E6, prod, INDEX(rng,,2), color, INDEX(rng,,3),
brands, INDEX(rng,,4), stocks, INDEX(rng,,5), brand, G2,
VSTACK({"Product Name", "Stock"},
FILTER(HSTACK(prod &" "& color, stocks) , (brands=brand) * (stocks > 0) )
)
)
|
Filter and join values into one cell in Excel
|
I would like to filter and combine returned cells into one cell in Excel.
Products sheet:
SKU
Product Name
Color
Brand
Stock
CellP1
iPhone 14
Red
Apple
1
CellP2
Android 13
Green
Sams
0
CellP3
iPhone 12
Gold
Apple
1
CellP4
Android 15
Black
Sams
1
CellP5
iPhone 16
Green
Apple
0
Export sheet:
conditions:
Brand == Apple
Stock > 0
Product Name = [Product Name + " " + Color]
Product Name
Stock
iPhone 14 Red
1
iPhone 12 Gold
1
I tried several things; this was the closest to what I want to achieve:
=FILTER({Products!B2:B,Products!C2:C,Products!E2:E},REGEXMATCH(Products!D2:D, "Apple"))
...but I can't combine product name and color into one cell.
|
[
"You can try in H2 cell the following:\n=FILTER(HSTACK(B2:B6 &\" \"& C2:C6, E2:E6), (D2:D6=G2) * (E2:E6 > 0))\n\nOr using LET function to define the input variables first and for easy reading of the expression:\n=LET(rng, A2:E6, prod, INDEX(rng,,2), color, INDEX(rng,,3), \n brands, INDEX(rng,,4), stocks, INDEX(rng,,5), brand, G2, \n FILTER(HSTACK(prod &\" \"& color, stocks) , (brands=brand) * (stocks > 0) )\n)\n\nhere is the output:\n\nIf your excel version doesn't have HSTACK available, just replace this call with:\nCHOOSE({1,2}, prod &\" \"& color, stocks)\n\nIf you want to generate the headers as part of the output, then you can use VSTACK for that (if you have it available in your Excel version):\n=LET(rng, A2:E6, prod, INDEX(rng,,2), color, INDEX(rng,,3), \n brands, INDEX(rng,,4), stocks, INDEX(rng,,5), brand, G2, \n VSTACK({\"Product Name\", \"Stock\"},\n FILTER(HSTACK(prod &\" \"& color, stocks) , (brands=brand) * (stocks > 0) )\n )\n)\n\n"
] |
[
0
] |
[] |
[] |
[
"excel"
] |
stackoverflow_0074659351_excel.txt
|
Q:
How to bind the value in p-autoComplete in angular
I have a search box with p-autocomplete. When I search for the value, the value is getting displayed in the suggestion.
I am trying to display the selected value from the suggestion in the p-autocomplete box. But when I select I am getting [Object Object]. I am new to angular. Any suggestion will be helpful. Thanks in advance!
HTML CODE
<p-autoComplete id="dataCmpnyAuto"
name="dataCmpnyAuto"
[suggestions]="dataCmpnyAutoList"
(onKeyUp)="startSearch($event.target.value)"
dropdownIcon="pi pi-angle-down"
(onSelect)="getCompanyList($event)"
formControlName="dataCmpnyAuto">
<ng-template let-i pTemplate="item">
<div class="country-item">
<h6>{{ i.name}}</h6>
<div>{{ i.id}}</div>
</div>
</ng-template> </p-autoComplete>
testComponent.ts
startSearch(name:any){
let Type = 0
this.pagesService.getCompanyAutoCmpList(Type, name).subscribe((results) =>{
const responseData = results.success
if(responseData && responseData.length>0){
this.dataCmpnyAutoList = responseData
}
})}
A:
PrimeNG does not support selected item templates in the single selection mode. There's a workaround though. You can control the text of the selected item template:
<p-autoComplete [field]="getSelectedItemName" ...></p-autoComplete>
getSelectedItemName(item: { id: number; name: string }): string {
return item.name;
}
StackBlitz
A:
If you want to display the name and id to your users, then I suggest preparing your objects array to match the intended output in the function that fetches data from the API.
In your HTML, just add field as per the documentation; read here
<p-autoComplete
[suggestions]="dataCmpnyAutoList"
(completeMethod)="....."
field="combineNameId" ...>
</p-autoComplete>
And in your TS file, map the array to add a combinedNameId key to each object:
startSearch(name: any){
    let Type = 0
    this.pagesService.getCompanyAutoCmpList(Type, name).subscribe((results) => {
        const responseData = results.success
        if (responseData && responseData.length > 0) {
            this.dataCmpnyAutoList = responseData.map((item: any) => ({
                ...item,
                combinedNameId: item.name + ' ' + item.id
            }))
        }
    })
}
|
How to bind the value in p-autoComplete in angular
|
I have a search box with p-autocomplete. When I search for the value, the value is getting displayed in the suggestion.
I am trying to display the selected value from the suggestion in the p-autocomplete box. But when I select I am getting [Object Object]. I am new to angular. Any suggestion will be helpful. Thanks in advance!
HTML CODE
<p-autoComplete id="dataCmpnyAuto"
name="dataCmpnyAuto"
[suggestions]="dataCmpnyAutoList"
(onKeyUp)="startSearch($event.target.value)"
dropdownIcon="pi pi-angle-down"
(onSelect)="getCompanyList($event)"
formControlName="dataCmpnyAuto">
<ng-template let-i pTemplate="item">
<div class="country-item">
<h6>{{ i.name}}</h6>
<div>{{ i.id}}</div>
</div>
</ng-template> </p-autoComplete>
testComponent.ts
startSearch(name:any){
let Type = 0
this.pagesService.getCompanyAutoCmpList(Type, name).subscribe((results) =>{
const responseData = results.success
if(responseData && responseData.length>0){
this.dataCmpnyAutoList = responseData
}
})}
|
[
"PrimeNG does not support selected item templates in the single selection mode. There's a workaround though. You can control the text of the selected item template:\n<p-autoComplete [field]=\"getSelectedItemName\" ...></p-autoComplete>\n\ngetSelectedItemName(item: { id: number; name: string }): string {\n return item.name;\n}\n\nStackBlitz\n",
"If you want to display name and Id to your users, then I suggest to prepare your objects array match intended output from your function that fetches data from api\nIn your html just add field as per documentation read here\n<p-autoComplete \n [suggestions]=\"dataCmpnyAutoList\" \n(completeMethod)=\".....\"\nfield=\"combineNameId\" ...>\n</p-autoComplete>\n\nAnd in your Ts file , map array list to add combinedNameId key in your object\nstartSearch(name:any){\nlet Type = 0\nthis.pagesService.getCompanyAutoCmpList(Type, name).subscribe((results) {\n const responseData = results.success\n if(responseData && responseData.length>0){\n this.dataCmpnyAutoList = responseData .map((item:any)=>({\n ...item,\n combinedNameId: item.name + ' ' + item.id\n\n })\n }\n\n})}\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"angular",
"primeng"
] |
stackoverflow_0073643631_angular_primeng.txt
|
Q:
After Migration from old Mac to new Mac Stderr:VBoxManage: error: The virtual machine 'homestead' has terminated unexpectedly.. because of signal 9
I have migrated from a MacPro 2017 to a MacPro 2019. When I run vagrant up I get stuck on the following error:
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "492bedf4-2a89-455e-9900-97ba2acf456a", "--type", "headless"]
Stderr: VBoxManage: error: The virtual machine 'homestead' has terminated unexpectedly during startup because of signal 9
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine
A:
After doing some R&D, I fixed the issue by reinstalling VirtualBox and Vagrant.
My versions were 7.0.4 and 2.3.3 respectively.
Cheers......
|
After Migration from old Mac to new Mac Stderr:VBoxManage: error: The virtual machine 'homestead' has terminated unexpectedly.. because of signal 9
|
I have migrated from a MacPro 2017 to a MacPro 2019. When I run vagrant up I get stuck on the following error:
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "492bedf4-2a89-455e-9900-97ba2acf456a", "--type", "headless"]
Stderr: VBoxManage: error: The virtual machine 'homestead' has terminated unexpectedly during startup because of signal 9
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine
|
[
"After doing some R&D I have fixed the issue by reinstalling VirtualBox and Vagrant.\nMy versions were 7.0.4 and 2.3.3 respectively\nCheers......\n"
] |
[
0
] |
[] |
[] |
[
"homestead",
"laravel",
"macos_ventura",
"migration",
"vagrant"
] |
stackoverflow_0074664081_homestead_laravel_macos_ventura_migration_vagrant.txt
|
Q:
Starting w/ python 3.8, Pandas won't let me reassign value in a DataFrame
Code that works under Pandas 1.3.5 and python 3.7 or earlier:
import pandas as pd
import numpy as np
hex_name = '123456abc'
multi_sub_dir_id_list = [hex_name, hex_name, hex_name]
multi_leaf_node_dirs = ['one', 'two', 'three']
x_dir_multi_index = pd.MultiIndex.from_arrays ([multi_sub_dir_id_list, multi_leaf_node_dirs], names = ['hex_name', 'leaf_name'])
leaf_name = 'one'
dirpath = '/a/string/path'
task_path_str = 'thepath'
multi_exec_df = pd.DataFrame (data = None, columns = x_dir_multi_index)
multi_exec_df.loc[task_path_str] = np.nan
multi_exec_df.loc[task_path_str][hex_name, leaf_name] = dirpath
Starting with Python 3.8, once something has been assigned a value, all future assignments are ignored. The current code fails under Python 3.11.0 and Pandas 1.5.1.
Is this formulation no longer allowed?
What it should look like after the above:
hex_name leaf_name
123456abc one /a/string/path
two NaN
three NaN
What it does look like after the above:
> multi_exec_df.loc[task_path_str]
hex_name leaf_name
123456abc one NaN
two NaN
three NaN
Name: thepath, dtype: float64
What I'm running for this test
Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)] on darwin
print(pd.__version__)
1.5.2
A:
Here is my interpretation of what your code does.
Your setup code:
import pandas as pd
import numpy as np
hex_name = '123456abc'
multi_sub_dir_id_list = [hex_name, hex_name, hex_name]
multi_leaf_node_dirs = ['one', 'two', 'three']
x_dir_multi_index = pd.MultiIndex.from_arrays ([multi_sub_dir_id_list, multi_leaf_node_dirs], names = ['hex_name', 'leaf_name'])
leaf_name = 'one'
dirpath = '/a/string/path'
task_path_str = 'thepath'
multi_exec_df = pd.DataFrame (data = None, columns = x_dir_multi_index)
multi_exec_df.loc[task_path_str] = np.nan
At this point multi_exec_df is a dataframe with one row full of nans:
hex_name 123456abc
leaf_name one two three
thepath NaN NaN NaN
and multi_exec_df.loc[task_path_str] is a series containing the data from the first row:
hex_name leaf_name
123456abc one NaN
two NaN
three NaN
Name: thepath, dtype: float64
Based on your example of "what it should look like after the above" I assume you are trying to assign the value "/a/string/path" to the column ('123456abc', 'one').
Here is how I would do that:
col = (hex_name, leaf_name)
multi_exec_df.loc[task_path_str, col] = dirpath
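After that single .loc assignment, the frame should match the asker's target output; a sketch of the expected result:
hex_name        123456abc
leaf_name             one  two three
thepath    /a/string/path  NaN   NaN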
As far as I know, using loc or similar methods is the only way to assign values to the dataframe. Is there a reason you can't do that here?
Now to the question of what your code is doing...
Instead of the above, you are executing the following line:
multi_exec_df.loc[task_path_str][hex_name, leaf_name] = dirpath
This is equivalent to:
multi_exec_df.loc[task_path_str][(hex_name, leaf_name)] = dirpath
The problem with it is that multi_exec_df.loc[task_path_str] is a copy of the row from the dataframe, not a view. When I execute the above I get the following:
<ipython-input-26-2d4fae3863b0>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
multi_exec_df.loc[task_path_str][hex_name, leaf_name] = dirpath
(Maybe you knew that but didn't mention it, so I pointed it out. Not sure why you didn't get this warning. If you are not familiar with what a view is, read the documentation at the link in the warning above.)
You asked "Is this formulation no longer allowed?"
Obviously it is allowed, but you must accept that you are assigning the new value to a copy of the row, not the row in the original dataframe.
I don't know whether this making a copy instead of a view changed at some point in Pandas development, if that is what you are asking.
This was done with Pandas 1.5.1.
|
Starting w/ python 3.8, Pandas won't let me reassign value in a DataFrame
|
Code that works under Pandas 1.3.5 and python 3.7 or earlier:
import pandas as pd
import numpy as np
hex_name = '123456abc'
multi_sub_dir_id_list = [hex_name, hex_name, hex_name]
multi_leaf_node_dirs = ['one', 'two', 'three']
x_dir_multi_index = pd.MultiIndex.from_arrays ([multi_sub_dir_id_list, multi_leaf_node_dirs], names = ['hex_name', 'leaf_name'])
leaf_name = 'one'
dirpath = '/a/string/path'
task_path_str = 'thepath'
multi_exec_df = pd.DataFrame (data = None, columns = x_dir_multi_index)
multi_exec_df.loc[task_path_str] = np.nan
multi_exec_df.loc[task_path_str][hex_name, leaf_name] = dirpath
Starting with Python 3.8, once something has been assigned a value, all future assignments are ignored. The current code fails under Python 3.11.0 and Pandas 1.5.1.
Is this formulation no longer allowed?
What it should look like after the above:
hex_name leaf_name
123456abc one /a/string/path
two NaN
three NaN
What it does look like after the above:
> multi_exec_df.loc[task_path_str]
hex_name leaf_name
123456abc one NaN
two NaN
three NaN
Name: thepath, dtype: float64
What I'm running for this test
Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)] on darwin
print(pd.__version__)
1.5.2
|
[
"Here is my interpretation of what your code does.\nYour setup code:\nimport pandas as pd\nimport numpy as np\nhex_name = '123456abc'\nmulti_sub_dir_id_list = [hex_name, hex_name, hex_name]\nmulti_leaf_node_dirs = ['one', 'two', 'three'] \nx_dir_multi_index = pd.MultiIndex.from_arrays ([multi_sub_dir_id_list, multi_leaf_node_dirs], names = ['hex_name', 'leaf_name'])\nleaf_name = 'one'\ndirpath = '/a/string/path'\ntask_path_str = 'thepath'\nmulti_exec_df = pd.DataFrame (data = None, columns = x_dir_multi_index)\nmulti_exec_df.loc[task_path_str] = np.nan\n\nAt this point multi_exec_df is a dataframe with one row full of nans:\nhex_name 123456abc \nleaf_name one two three\nthepath NaN NaN NaN\n\nand multi_exec_df.loc[task_path_str] is a series containing the data from the first row:\nhex_name leaf_name\n123456abc one NaN\n two NaN\n three NaN\nName: thepath, dtype: float64\n\nBased on your example of \"what it should look like after the above\" I assume you are trying to assign the value \"/a/string/path\" to the column ('123456abc', 'one').\nHere is how I would do that:\ncol = (hex_name, leaf_name)\nmulti_exec_df.loc[task_path_str, col] = dirpath\n\nAs far as I know, using loc or similar methods is the only way to assign values to the dataframe. Is there a reason you can't do that here?\nNow to the question of what your code is doing...\nInstead of the above, you are executing the following line:\nmulti_exec_df.loc[task_path_str][hex_name, leaf_name] = dirpath\n\nThis is equivalent to:\nmulti_exec_df.loc[task_path_str][(hex_name, leaf_name)] = dirpath\n\nThe problem with it is that multi_exec_df.loc[task_path_str] is a copy of the row from the dataframe, not a view. When I execute above I get the following:\n<ipython-input-26-2d4fae3863b0>:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n multi_exec_df.loc[task_path_str][hex_name, leaf_name] = dirpath\n\n(Maybe you knew that but you didn't mention it so I pointed it out. Not sure why you didn't get this warning. If you are not familiar with what a view is read the documentation at the link above in the warning).\nYou asked \"Is this formulation no longer allowed?\"\nObviously it is allowed, but you must accept that you are assigning the new value to a copy of the row, not the row in the original dataframe.\nI don't know whether this making a copy instead of a view changed at some point in Pandas development, if that is what you are asking.\nThis was done with Pandas 1.5.1.\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074622796_dataframe_pandas_python.txt
|
Q:
How to move text horizontally as you scroll when getting to a specific part of the page with React?
I have two lines of text in the middle of my page that I want to scroll horizontally: they come in from outside the page and then leave through the other side. The top one comes in from the left and exits through the right, and the second line does the opposite. My problem is that scrollY does not stay constant when you change the screen size.
Is there a good solution to only scroll the text when you get to that particular section of the page? Right now the section comes in at 2200px when in full view and I'm using that number to trigger the scroll
I have this hook that listens for the scroll:
export default function useScrollListener() {
const [data, setData] = useState({
x: 0,
y: 0,
lastX: 0,
lastY: 0
});
// set up event listeners
useEffect(() => {
const handleScroll = () => {
setData((last) => {
return {
x: window.scrollX,
y: window.scrollY,
lastX: last.x,
lastY: last.y
};
});
};
handleScroll();
window.addEventListener("scroll", handleScroll);
return () => {
window.removeEventListener("scroll", handleScroll);
};
}, []);
return data;
}
In the page that has the two lines I have :
const scroll = useScrollListener();
const [position, setPosition] = useState('')
useEffect(() => {
let pos = scroll.y
let scrollPos = pos - 3000
// Section with the lines of text starts around 2200 on scrollY
if(scroll.y > 2200){
setPosition(scrollPos.toString())
}
}, [scroll.y]);
The text is wrapped around a div that is relative and the text has a position absolute pushing the element to the right or left by 800px.
<div className="line1">
<p className="text1" style={{"left": `${position}px`}}>
Lorem ipsum dolor sit amet.
</p>
</div>
<div className="line2">
<p className="text1" style={{"right": `${position}px`}}>
Lorem ipsum dolor sit amet.
</p>
</div>
A:
A quick way to fix it is to have the inline style calculate the offset based on how far the element is from the top of the page.
So, for example, if the text was 100vh away and the lines 10vh apart:
<div className="line1">
<p className="text1" style={{"left": `calc(100vh - ${position}px`)}}>
Lorem ipsum dolor sit amet.
</p>
</div>
<div className="line2" style={{marginTop: "10vh"}}>
<p className="text1" style={{"right": `calc(110vh - ${position}px`}}>
Lorem ipsum dolor sit amet.
</p>
</div>
I did not run this so sorry for any mistakes when running the code.
|
How to move text horizontally as you scroll when getting to a specific part of the page with React?
|
I have two lines of text in the middle of my page that I want to scroll horizontally: they come in from outside the page and then leave through the other side. The top one comes in from the left and exits through the right, and the second line does the opposite. My problem is that scrollY does not stay constant when you change the screen size.
Is there a good solution to only scroll the text when you get to that particular section of the page? Right now the section comes in at 2200px when in full view and I'm using that number to trigger the scroll
I have this hook that listens for the scroll:
export default function useScrollListener() {
const [data, setData] = useState({
x: 0,
y: 0,
lastX: 0,
lastY: 0
});
// set up event listeners
useEffect(() => {
const handleScroll = () => {
setData((last) => {
return {
x: window.scrollX,
y: window.scrollY,
lastX: last.x,
lastY: last.y
};
});
};
handleScroll();
window.addEventListener("scroll", handleScroll);
return () => {
window.removeEventListener("scroll", handleScroll);
};
}, []);
return data;
}
In the page that has the two lines I have :
const scroll = useScrollListener();
const [position, setPosition] = useState('')
useEffect(() => {
let pos = scroll.y
let scrollPos = pos - 3000
// Section with the lines of text starts around 2200 on scrollY
if(scroll.y > 2200){
setPosition(scrollPos.toString())
}
}, [scroll.y]);
The text is wrapped around a div that is relative and the text has a position absolute pushing the element to the right or left by 800px.
<div className="line1">
<p className="text1" style={{"left": `${position}px`}}>
Lorem ipsum dolor sit amet.
</p>
</div>
<div className="line2">
<p className="text1" style={{"right": `${position}px`}}>
Lorem ipsum dolor sit amet.
</p>
</div>
|
[
"A quick way to fix it is to have in the in-style html calculate it based on how far it is from the top of the page.\nSo for example if the text was 100vh away and 10vh apart.\n<div className=\"line1\">\n <p className=\"text1\" style={{\"left\": `calc(100vh - ${position}px`)}}>\n Lorem ipsum dolor sit amet.\n </p>\n</div>\n \n<div className=\"line2\" style={{marginTop: \"10vh\"}}>\n <p className=\"text1\" style={{\"right\": `calc(110vh - ${position}px`}}>\n Lorem ipsum dolor sit amet.\n </p>\n</div>\n\nI did not run this so sorry for any mistakes when running the code.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"reactjs"
] |
stackoverflow_0070260631_javascript_reactjs.txt
|
Q:
How do I create a directed graph from a csv file and use DFS to traverse and print it?
How do I create a directed graph from a csv file and use DFS to traverse and print it?
I have made the connect method, but it keeps showing an error when I try to connect elements.
I have tried making an edges method as well, but I keep getting confused after I add that method.
Directedgraph.csv
Content of Code
import csv
class Vertex():
def __init__(self,key):
self.key = key
self.adjacencies = []
self.checked = False
def adds(self,vertex):
self.adjacencies.append(vertex)
class Graph():
def __init__(self,key_list):
self.vertices = []
for key in key_list:
vert = Vertex(key)
#print(vert)
self.vertices.append(vert)
#print(self.vertices)
def parseCSV(self, csvfile):
self.csvfile = csvfile
with open(self.csvfile,'r') as f:
reader = csv.reader(f)
#reader.next() # discard column headers
for row in reader:
yield (row[0], row[1])
def find_vertex(self,key):
for vert in self.vertices: # Slow. Is there a faster way?
if vert.key == key:
#print (vert.key)
return vert
return None
def connect(self,key1,key2):
v1 = self.find_vertex(key1) # Could raise an exception.
v2 = self.find_vertex(key2) # We should handle that case.
v1.adds(v2)
def dfs(self,key1):
'''Takes key, initializes checks, launches recursion.'''
start = None
for vert in self.vertices:
vert.checked = False
if vert.key==key1:
start = vert
return self.__dfs__(start,'')
def __dfs__(self,v1,display):
'''Takes vertex, assumes checks are initializes, recurses.'''
if not v1.checked:
# Visit this vertex if it hasn't already been visited
display=display + str(v1.key) + ' '
v1.checked = True
for v2 in v1.adjacencies:
# Recursively visit all adjacent vertices
display=self.__dfs__(v2,display)
return display
#Client Code:
keys = []
csvFile = "Directedgraph.csv"
ws = Graph(keys)
ws.parseCSV(csvFile)
traversal = ws.dfs(0)
print("All components connected to key=0:",traversal)
traversal = ws.dfs(1)
print("All components connected to key=1:",traversal)
traversal = g.dfs(2)
print("All components connected to key=2:",traversal)
traversal = g.dfs(3)
print("All components connected to key=3:",traversal)
traversal = g.dfs(4)
print("All components connected to key=4:",traversal)
A:
Is parseCSV intended for any use? It seems the vertices are never connected to each other when the file is parsed. Also, your CSV has multiple entries, but parseCSV seems to take only the first two values per row.
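A minimal sketch of how the parsed rows could actually build and connect the graph, reusing the question's own classes and assuming each CSV row is a source key followed by a destination key:
csvFile = "Directedgraph.csv"
ws = Graph([])
for key1, key2 in ws.parseCSV(csvFile):
    # create any vertex we haven't seen yet, then add the directed edge
    for key in (key1, key2):
        if ws.find_vertex(key) is None:
            ws.vertices.append(Vertex(key))
    ws.connect(key1, key2)
# note: csv.reader yields strings, so the keys are '0', '1', ... not ints
print("All components connected to key=0:", ws.dfs('0'))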
|
How do I create a directed graph from a csv file and use DFS to traverse and print it?
|
How do I create a directed graph from a csv file and use DFS to traverse and print it?
I have made the connect method, but it keeps showing an error when I try to connect elements.
I have tried making an edges method as well, but I keep getting confused after I add that method.
Directedgraph.csv
Content of Code
import csv
class Vertex():
def __init__(self,key):
self.key = key
self.adjacencies = []
self.checked = False
def adds(self,vertex):
self.adjacencies.append(vertex)
class Graph():
def __init__(self,key_list):
self.vertices = []
for key in key_list:
vert = Vertex(key)
#print(vert)
self.vertices.append(vert)
#print(self.vertices)
def parseCSV(self, csvfile):
self.csvfile = csvfile
with open(self.csvfile,'r') as f:
reader = csv.reader(f)
#reader.next() # discard column headers
for row in reader:
yield (row[0], row[1])
def find_vertex(self,key):
for vert in self.vertices: # Slow. Is there a faster way?
if vert.key == key:
#print (vert.key)
return vert
return None
def connect(self,key1,key2):
v1 = self.find_vertex(key1) # Could raise an exception.
v2 = self.find_vertex(key2) # We should handle that case.
v1.adds(v2)
def dfs(self,key1):
'''Takes key, initializes checks, launches recursion.'''
start = None
for vert in self.vertices:
vert.checked = False
if vert.key==key1:
start = vert
return self.__dfs__(start,'')
def __dfs__(self,v1,display):
'''Takes vertex, assumes checks are initializes, recurses.'''
if not v1.checked:
# Visit this vertex if it hasn't already been visited
display=display + str(v1.key) + ' '
v1.checked = True
for v2 in v1.adjacencies:
# Recursively visit all adjacent vertices
display=self.__dfs__(v2,display)
return display
#Client Code:
keys = []
csvFile = "Directedgraph.csv"
ws = Graph(keys)
ws.parseCSV(csvFile)
traversal = ws.dfs(0)
print("All components connected to key=0:",traversal)
traversal = ws.dfs(1)
print("All components connected to key=1:",traversal)
traversal = g.dfs(2)
print("All components connected to key=2:",traversal)
traversal = g.dfs(3)
print("All components connected to key=3:",traversal)
traversal = g.dfs(4)
print("All components connected to key=4:",traversal)
|
[
"Is parseCSV intended for any use? Seems like the vertices are not connected to each other when it is parsed. Also your CSV has multiple entries but parseCSV seems to be only taking in first 2 values.\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"depth_first_search",
"graph",
"python",
"python_3.x"
] |
stackoverflow_0074663638_csv_depth_first_search_graph_python_python_3.x.txt
|
Q:
couchbase, ottoman throw error when I create a new instance?
I'm new to Couchbase and I'm using the Ottoman framework. I connected the database using Ottoman, created the schema and model User, and exported it into the controller file. When I create a new instance of that model, Ottoman throws the error TypeError: User is not a constructor.
I searched for a long time, read the official and unofficial documents, and tested it repeatedly. I wrote everything about the db in a separate file and nothing changed. I'll attach the file below, but I didn't get any solution. Please let me know...
const ottoman = require("ottoman");
exports.connect = async () => {
try {
await ottoman.connect({
connectionString: process.env.DB_CONNECTION_STRING,
bucketName: process.env.DB_BUCKET,
username: process.env.DB_USERNAME,
password: process.env.DB_PASSWORD,
});
console.log("Database connected.");
await ottoman.start();
} catch (error) {
console.log("Database not connected due to: ", error.message);
}
};
connect();
const User = ottoman.model("User", {
firstName: String,
lastName: String,
email: String,
tagline: String,
});
const perry = new User({
firstName: "Perry",
lastName: "Mason",
email: "[email protected]",
tagLine: "Who can we get on the case?",
});
const tom = new User({
firstName: "Major",
lastName: "Tom",
email: "[email protected]",
tagLine: "Send me up a drink",
});
main = async () => {
await perry.save();
console.log(`success: user ${perry.firstName} added!`);
await tom.save();
console.log(`success: user ${tom.firstName} added!`);
};
main();
A:
This issue happened due to the order of function calls in the app.js file. All I had used until now was MongoDB and Mongoose in NoSQL. In the case of MongoDB, you can call the database config function after the API endpoint specification. I wrote my code like this with Couchbase, but it didn't work there. I'll provide my code before and after the fix for more clarity, and I'm very sorry for my bad English. :)
Before fixing app.js file:
const express = require("express");
const cors = require("cors");
const morgan = require("morgan");
const app = express();
require("dotenv").config();
const PORT = process.env.PORT || 3000;
//middlewares
app.use(cors());
app.use(morgan("dev"));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
// routes
app.use("/api/", require("./routes/index"));
// bad request
app.use("*", (req, res) => {
res.status(404).json({ message: "Bad Request." });
});
// error middleware
const { errorHandler } = require("./middlewares/error-middleware");
app.use(errorHandler);
// database setup
const db = require("./config/db");
db.connect();
// server setup
app.listen(PORT, (err) => {
if (err) {
console.log(err.message);
} else {
console.log(`The server is running on: ${PORT}.`);
}
});
After fixing app.js file:
const express = require("express");
const cors = require("cors");
const morgan = require("morgan");
const app = express();
require("dotenv").config();
const PORT = process.env.PORT || 3000;
//middlewares
app.use(cors());
app.use(morgan("dev"));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
// database setup
const db = require("./config/db");
db.connect();
// routes
app.use("/api/", require("./routes/index"));
// bad request
app.use("*", (req, res) => {
res.status(404).json({ message: "Bad Request." });
});
// error middleware
const { errorHandler } = require("./middlewares/error-middleware");
app.use(errorHandler);
// server setup
app.listen(PORT, (err) => {
if (err) {
console.log(err.message);
} else {
console.log(`The server is running on: ${PORT}.`);
}
});
|
couchbase, ottoman throw error when I create a new instance?
|
I'm new to Couchbase and I'm using the Ottoman framework. I connected the database using Ottoman, created the schema and model User, and exported it into the controller file. When I create a new instance of that model, Ottoman throws the error TypeError: User is not a constructor.
I searched for a long time, read the official and unofficial documents, and tested it repeatedly. I wrote everything about the db in a separate file and nothing changed. I'll attach the file below, but I didn't get any solution. Please let me know...
const ottoman = require("ottoman");
exports.connect = async () => {
try {
await ottoman.connect({
connectionString: process.env.DB_CONNECTION_STRING,
bucketName: process.env.DB_BUCKET,
username: process.env.DB_USERNAME,
password: process.env.DB_PASSWORD,
});
console.log("Database connected.");
await ottoman.start();
} catch (error) {
console.log("Database not connected due to: ", error.message);
}
};
connect();
const User = ottoman.model("User", {
firstName: String,
lastName: String,
email: String,
tagline: String,
});
const perry = new User({
firstName: "Perry",
lastName: "Mason",
email: "[email protected]",
tagLine: "Who can we get on the case?",
});
const tom = new User({
firstName: "Major",
lastName: "Tom",
email: "[email protected]",
tagLine: "Send me up a drink",
});
main = async () => {
await perry.save();
console.log(`success: user ${perry.firstName} added!`);
await tom.save();
console.log(`success: user ${tom.firstName} added!`);
};
main();
|
[
"This issue happened due to disorder of functions calling in app.js file. All I used till now was a Mongodb and mongoose in noSQL. In the case of mongodb we can call the database config function after api endpoint specification. I wrote my code like this in couchbase. But it didn't stick in couchbase. I'll provide my code before and after fixing for more clarity, and I'm very sorry for my bad english. :)\nBefore fixing app.js file:\nconst express = require(\"express\");\nconst cors = require(\"cors\");\nconst morgan = require(\"morgan\");\nconst app = express();\nrequire(\"dotenv\").config();\nconst PORT = process.env.PORT || 3000;\n\n//middlewares\napp.use(cors());\napp.use(morgan(\"dev\"));\napp.use(express.json());\napp.use(express.urlencoded({ extended: true }));\n\n\n// routes\napp.use(\"/api/\", require(\"./routes/index\"));\n\n// bad requiest\napp.use(\"*\", (req, res) => {\n res.status(404).json({ message: \"Bad Requist.\" });\n});\n\n// error middleware\nconst { errorHandler } = require(\"./middlewares/error-middleware\");\napp.use(errorHandler);\n\n// database setup\nconst db = require(\"./config/db\");\ndb.connect();\n\n\n// server setup\napp.listen(PORT, (err) => {\n if (err) {\n console.log(err.message);\n } else {\n console.log(`The server is running on: ${PORT}.`);\n }\n});\n\n\nAfter fixing app.js file:\nconst express = require(\"express\");\nconst cors = require(\"cors\");\nconst morgan = require(\"morgan\");\nconst app = express();\nrequire(\"dotenv\").config();\nconst PORT = process.env.PORT || 3000;\n\n//middlewares\napp.use(cors());\napp.use(morgan(\"dev\"));\napp.use(express.json());\napp.use(express.urlencoded({ extended: true }));\n\n// database setup\nconst db = require(\"./config/db\");\ndb.connect();\n\n// routes\napp.use(\"/api/\", require(\"./routes/index\"));\n\n// bad requiest\napp.use(\"*\", (req, res) => {\n res.status(404).json({ message: \"Bad Requist.\" });\n});\n\n// error middleware\nconst { errorHandler } = require(\"./middlewares/error-middleware\");\napp.use(errorHandler);\n\n// server setup\napp.listen(PORT, (err) => {\n if (err) {\n console.log(err.message);\n } else {\n console.log(`The server is running on: ${PORT}.`);\n }\n});\n\n\n"
] |
[
0
] |
[] |
[] |
[
"couchbase",
"couchbase_ottoman",
"model",
"node.js",
"schema"
] |
stackoverflow_0074651575_couchbase_couchbase_ottoman_model_node.js_schema.txt
|
Q:
Why am I getting an Attribute Error for my code when it should be working
I have a class ScrollingCredits. In that, I have a method load_credits. Please have a look at the code
class ScrollingCredits:
    def __init__(self):
        self.load_credits("end_credits.txt")
        (self.background, self.background_rect) = load_image("starfield.gif", True)
        self.font = pygame.font.Font(None, FONT_SIZE)
        self.scroll_speed = SCROLL_SPEED
        self.scroll_pause = SCROLL_PAUSE
        self.end_wait = END_WAIT
        self.reset()

        def load_credits(self, filename):
            f = open(filename)
            credits = []
            while 1:
                line = f.readline()
                if not line:
                    break
                line = string.rstrip(line)
                credits.append(line)
            f.close()
            self.lines = credits
The first line after defining the function is where my attribute problem occurs. I get this when I try to run it: AttributeError: 'ScrollingCredits' object has no attribute 'load_credits'
If anyone would be able to help me it would be much appreciated
A:
There is a function definition and calling issue with load_credits: if you want to access the function through self,
define load_credits outside the __init__ function, like below.
class ScrollingCredits:
    def __init__(self):
        self.load_credits("end_credits.txt")
        ............

    def load_credits(self, filename):
        ............
|
Why am I getting an Attribute Error for my code when it should be working
|
I have a class ScrollingCredits. In that, I have a method load_credits. Please have a look at the code
class ScrollingCredits:
    def __init__(self):
        self.load_credits("end_credits.txt")
        (self.background, self.background_rect) = load_image("starfield.gif", True)
        self.font = pygame.font.Font(None, FONT_SIZE)
        self.scroll_speed = SCROLL_SPEED
        self.scroll_pause = SCROLL_PAUSE
        self.end_wait = END_WAIT
        self.reset()

        def load_credits(self, filename):
            f = open(filename)
            credits = []
            while 1:
                line = f.readline()
                if not line:
                    break
                line = string.rstrip(line)
                credits.append(line)
            f.close()
            self.lines = credits
The first line after defining the function is where my attribute problem occurs. I get this when I try to run it: AttributeError: 'ScrollingCredits' object has no attribute 'load_credits'
If anyone would be able to help me it would be much appreciated
|
[
"There is function definition and calling issue for load_credits, if you want to access the function with self\nMake the load_credits outside the __init__ function like below.\nclass ScrollingCredits:\n def __init__(self):\n self.load_credits(\"end_credits.txt\")\n............\n\n def load_credits(self, filename):\n............\n\n"
] |
[
1
] |
[] |
[] |
[
"attributeerror",
"error_handling",
"python",
"python_3.x"
] |
stackoverflow_0074664065_attributeerror_error_handling_python_python_3.x.txt
|
Q:
Overlay text on top of a Recharts chart
I'm trying to add a static piece of text (not related to the data being displayed) in the lower-right corner of a Recharts chart.
I could hack together some CSS & HTML to get the text there from a neighboring DOM element, but I'd rather do it using the recharts API.
Any idea? Thanks!
A:
The <XAxis> component can take a prop called label.
<XAxis
dataKey="name"
label={{ value: 'random text', position: 'insideBottomRight', offset: -20 }}
/>
The combination of position and offset would lead to a solution you want.
Check the docs and a working example.
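For context, here is a minimal sketch of where that prop sits; the surrounding chart type and data keys are assumptions, not taken from the question:
<LineChart width={400} height={300} data={data}>
  <Line dataKey="value" />
  <XAxis
    dataKey="name"
    label={{ value: 'random text', position: 'insideBottomRight', offset: -20 }}
  />
</LineChart>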
A:
You can use the <text> element inside a rechart chart to add arbitrary text in any position you wish.
For instance:
import {
  RadialBarChart,
  RadialBar,
  Legend,
  Tooltip,
} from 'recharts'

const data = [
  {
    name: 'Target (5%)',
    uv: 5,
    pv: 4567,
    fill: '#777',
  },
  {
    name: 'Growth %',
    uv: 8.3,
    pv: 2400,
    fill: '#22AA22',
  },
]

const RadialBarChart01 = () => {
  return (
    <RadialBarChart
      width={400}
      height={400}
      innerRadius='70%'
      outerRadius='120%'
      data={data}
      startAngle={180}
      endAngle={0}
    >
      <RadialBar
        label={{ fill: '#FFF', position: 'insideStart' }}
        background
        clockWise={true}
        dataKey='uv'
      />
      <text
        x='50%'
        y='50%'
        dy={+12}
        style={{ fontSize: 48, fontWeight: 'bold', fill: '#22AA22' }}
        width={200}
        scaleToFit={true}
        textAnchor='middle'
        verticalAnchor='middle'
      >
        8.3%
      </text>
      <text
        x='50%'
        y='60%'
        style={{ fontSize: 24, fontWeight: 'bold', fill: '#777' }}
        width={200}
        scaleToFit={true}
        textAnchor='middle'
        verticalAnchor='middle'
      >
        Target: 5%
      </text>
      <Legend
        iconSize={20}
        width={120}
        height={100}
        layout='vertical'
        verticalAlign='bottom'
        align='center'
      />
      <Tooltip />
    </RadialBarChart>
  )
}

export default RadialBarChart01
See result in image capture:
|
Overlay text on top of a Recharts chart
|
I'm trying to add a static piece of text (not related to the data being displayed) in the lower-right corner of a Recharts chart.
I could hack together some CSS & HTML to get the text there from a neighboring DOM element, but I'd rather do it using the recharts API.
Any idea? Thanks!
|
[
"The <XAxis> component can take a prop called label.\n<XAxis \n dataKey=\"name\" \n label={{ value: 'random text', position: 'insideBottomRight', offset: -20 }} \n/>\n\nThe combination of position and offset would lead to a solution you want.\nCheck the docs and a working example.\n",
"You can use the <text> element inside a rechart chart to add arbitrary text in any position you wish.\nFor instance:\nimport {\n RadialBarChart,\n RadialBar,\n Legend,\n Tooltip,\n} from 'recharts'\n\nconst data = [\n {\n name: 'Target (5%)',\n uv: 5,\n pv: 4567,\n fill: '#777',\n },\n {\n name: 'Growth %',\n uv: 8.3,\n pv: 2400,\n fill: '#22AA22',\n },\n]\n\nconst RadialBarChart01 = () => {\n return (\n <RadialBarChart\n width={400}\n height={400}\n innerRadius='70%'\n outerRadius='120%'\n data={data}\n startAngle={180}\n endAngle={0}\n >\n <RadialBar\n label={{ fill: '#FFF', position: 'insideStart' }}\n background\n clockWise={true}\n dataKey='uv'\n />\n <text\n x='50%'\n y='50%'\n dy={+12}\n style={{ fontSize: 48, fontWeight: 'bold', fill: '#22AA22' }}\n width={200}\n scaleToFit={true}\n textAnchor='middle'\n verticalAnchor='middle'\n >\n 8.3%\n </text>\n <text\n x='50%'\n y='60%'\n style={{ fontSize: 24, fontWeight: 'bold', fill: '#777' }}\n width={200}\n scaleToFit={true}\n textAnchor='middle'\n verticalAnchor='middle'\n >\n Target: 5%\n </text>\n <Legend\n iconSize={20}\n width={120}\n height={100}\n layout='vertical'\n verticalAlign='bottom'\n align='center'\n />\n <Tooltip />\n </RadialBarChart>\n )\n}\n\nexport default RadialBarChart01\n\nSee result in image capture:\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"recharts"
] |
stackoverflow_0055556503_recharts.txt
|
Q:
How do you define and query a read function in a Move module on Aptos Blockchain?
In the Aptos Move docs, it explains how to interact with a smart contract which has exposed "entry functions". In the hello_blockchain example, set_message is used.
Move modules expose access points, also referred as entry functions. These access points can be called via transactions. The CLI allows for seamless access to these access points. The example Move module hello_blockchain exposes a set_message entry function that takes in a string. This can be called via the CLI:
However, there is no explanation on how to query the get_message function which to my understanding is akin to a read function.
Furthermore, there is no explanation of how to query read/write functions using the Python SDK.
Two questions:
Is it possible to use the Python SDK to query read/write functions in a Move module?
How do you define a read function in a Move module?
A:
If you want to read a resource on an account, you would submit a read request to the API. For example, with curl:
curl https://fullnode.mainnet.aptoslabs.com/v1/accounts/<addr>/resource/<resource>
A concrete example of this:
curl https://fullnode.mainnet.aptoslabs.com/v1/accounts/0x00ffe770ccae2e373bc1f217585a1f97b5fa003cc169a27e1b4d6bfc8d3b243b/resource/0x3::token::TokenStore
This is equivalent to:
Read the resource 0x3::token::TokenStore at account 0x00ffe770ccae2e373bc1f217585a1f97b5fa003cc169a27e1b4d6bfc8d3b243b.
In the Python SDK, you would do something like this:
client.account_resource(
    "0x00ffe770ccae2e373bc1f217585a1f97b5fa003cc169a27e1b4d6bfc8d3b243b",
    "0x3::token::TokenStore",
)
This uses this client method: https://github.com/aptos-labs/aptos-core/blob/05d04ecc511f572380e1e8fe0bbc234f30645f0d/ecosystem/python/sdk/aptos_sdk/client.py#L63
The get_message function in the hello_blockchain example is somewhat misleading (we can improve this). There is a hint though, note that only entry functions can be run from external calls (e.g. using the CLI command aptos move run). All other functions can only be called from within a Move module.
To be even clearer: In order to read from the Aptos blockchain, you must make requests to the read API endpoints, not to "read functions" in Move modules.
For more info, check out these docs: https://aptos.dev/tutorials/your-first-transaction.
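At the Move level, the difference is just the entry modifier on the function. As a rough sketch of the distinction (simplified, and not the exact hello_blockchain source):
module hello_blockchain::message {
    use std::signer;
    use std::string;

    struct MessageHolder has key {
        message: string::String,
    }

    // `entry` makes this callable from a transaction (e.g. `aptos move run`).
    public entry fun set_message(account: signer, message: string::String) acquires MessageHolder {
        let addr = signer::address_of(&account);
        if (exists<MessageHolder>(addr)) {
            borrow_global_mut<MessageHolder>(addr).message = message;
        } else {
            move_to(&account, MessageHolder { message });
        }
    }

    // No `entry` keyword: callable only from other Move code, which is why
    // the CLI and transactions cannot invoke it directly.
    public fun get_message(addr: address): string::String acquires MessageHolder {
        *&borrow_global<MessageHolder>(addr).message
    }
}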
A:
Looks like you're looking for view functions. Currently, there is no way to query a read function from a move module.
There is an open Github feature request for this on the Aptos repo: https://github.com/aptos-labs/aptos-core/issues/4915
A:
To simulate a read-only function on the Aptos, we recently built a tool: https://github.com/sentioxyz/sentio-composer. If there's a view function defined in your module, no matter whether it is an entry function or not, you can call it using this tool with the real on-chain data.
For example, to view the balance of an account, you can simulate the balance function using the CLI tool:
# command
view-function \
--function-id 0x1::coin::balance \
--type-args 0x1::aptos_coin::AptosCoin \
--args 0x21ddba785f3ae9c6f03664ab07e9ad83595a0fa5ca556cec2b9d9e7100db0f07 \
--ledger-version 35842267 \
--network mainnet
# output
{
  "log_path": "",
  "return_values": [
    3120544100
  ]
}
Quick web demo at: http://composer.sentio.xyz/
|
How do you define and query a read function in a Move module on Aptos Blockchain?
|
In the Aptos Move docs, it explains how to interact with a smart contract which has exposed "entry functions". In the hello_blockchain example, set_message is used.
Move modules expose access points, also referred as entry functions. These access points can be called via transactions. The CLI allows for seamless access to these access points. The example Move module hello_blockchain exposes a set_message entry function that takes in a string. This can be called via the CLI:
However, there is no explanation on how to query the get_message function which to my understanding is akin to a read function.
Furthermore, there is no explanation of how to query read/write functions using the Python SDK.
Two questions:
Is it possible to use the Python SDK to query read/write functions in a Move module?
How do you define a read function in a Move module?
|
[
"If you want to read a resource on an account, you would submit a read request to the API. For example, with curl:\ncurl https://fullnode.mainnet.aptoslabs.com/v1/accounts/<addr>/resource/<resource>\n\nA concrete example of this:\ncurl https://fullnode.mainnet.aptoslabs.com/v1/accounts/0x00ffe770ccae2e373bc1f217585a1f97b5fa003cc169a27e1b4d6bfc8d3b243b/resource/0x3::token::TokenStore\n\nThis is equivalent to:\n\nRead the resource 0x3::token::TokenStore at account 0x00ffe770ccae2e373bc1f217585a1f97b5fa003cc169a27e1b4d6bfc8d3b243b.\n\nIn the Python SDK, you would do something like this:\nclient.account_resource(\n \"0x00ffe770ccae2e373bc1f217585a1f97b5fa003cc169a27e1b4d6bfc8d3b243b\",\n \"0x3::token::TokenStore\",\n)\n\nThis uses this client method: https://github.com/aptos-labs/aptos-core/blob/05d04ecc511f572380e1e8fe0bbc234f30645f0d/ecosystem/python/sdk/aptos_sdk/client.py#L63\nThe get_message function in the hello_blockchain example is somewhat misleading (we can improve this). There is a hint though, note that only entry functions can be run from external calls (e.g. using the CLI command aptos move run). All other functions can only be called from within a Move module.\nTo be even clearer: In order to read from the Aptos blockchain, you must make requests to the read API endpoints, not to \"read functions\" in Move modules.\nFor more info, check out these docs: https://aptos.dev/tutorials/your-first-transaction.\n",
"Looks like you're looking for view functions. Currently, there is no way to query a read function from a move module.\nThere is an open Github feature request for this on the Aptos repo: https://github.com/aptos-labs/aptos-core/issues/4915\n",
"To simulate a read-only function on the Aptos, we recently built a tool: https://github.com/sentioxyz/sentio-composer. If there's a view function defined in your module, no matter whether it is an entry function or not, you can call it using this tool with the real on-chain data.\nFor example, to view the balance of an account, you can simulate the balance function using the CLI tool:\n# command\nview-function \\\n--function-id 0x1::coin::balance \\\n--type-args 0x1::aptos_coin::AptosCoin \\\n--args 0x21ddba785f3ae9c6f03664ab07e9ad83595a0fa5ca556cec2b9d9e7100db0f07 \\\n--ledger-version 35842267 \\\n--network mainnet\n# output\n{\n \"log_path\": \"\",\n \"return_values\": [\n 3120544100\n ]\n}\n\nQuick web demo at: http://composer.sentio.xyz/\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"aptos",
"move_lang"
] |
stackoverflow_0074133381_aptos_move_lang.txt
|
Q:
Event listener removing issue
Three div elements with a box appearance:
when the user clicks on any div, a copy of this div will be added to the
end (the clicked div won't be clickable any more, and the new div will
be clickable). And so on...
I have tried this, but it creates two divs at the same time and the div can be clicked again!
<div id="parent" class="p">
<div class="red" class="d"></div>
<div class="green" class="d"></div>
<div class="blue" class="d"></div>
</div>
#parent{
display: flex;
flex-wrap: wrap;
}
.red{
width: 50px;
height: 50px;
background-color: red;
margin: 2px;
}`your text`
.green{
width: 50px;
height: 50px;
background-color: green;
margin: 2px;
}
.blue{
width: 50px;
height: 50px;
background-color: blue;
margin: 2px;
}
let parent = document.querySelector("#parent");
let div = document.querySelectorAll(".p div");

parent.addEventListener("click", function createDiv(e) {
  console.log('1');
  let child = document.createElement("div");
  parent.append(child);
  child.classList.add(e.target.className);
  console.log(e);
  e.target.removeEventListener("click", createDiv());
});
A:
This way...
An event listener is a link between a DOM element and a function.
To remove any event listener you need to use the same pair [ DOM element / function ].
In your case this link is on the parent and not on any of its divs,
so you can't remove any link from its initial divs.
const
  parent = document.querySelector('#parent')
, cDivs  = document.querySelectorAll('#parent > div')
  ;
cDivs.forEach(div => div.addEventListener('click', createDiv)) // don't recreate x time your function `createDiv`
  ;
function createDiv({currentTarget: initialDiv}) // just declare it only one time
  {
  initialDiv.removeEventListener('click', createDiv)
  ;
  parent
    .appendChild( initialDiv.cloneNode(true) ) // this one return the clone
    .addEventListener('click', createDiv)
  }
#parent {
display : flex;
flex-wrap : wrap;
}
#parent > div {
width : 50px;
height : 50px;
margin : 2px;
}
.red {
background : red;
}
.green {
background : green;
}
.blue {
background : blue;
}
<div id="parent" class="p">
<div class="red" ></div>
<div class="green" ></div>
<div class="blue" ></div>
</div>
A:
const parent = document.getElementById("parent");

// all clicks on the parent are handled by one function
parent.addEventListener("click", function(e) {
  // get the element that was clicked on
  const el = e.target;
  // check to be sure the click is on an inner div and if it was already clicked
  if (el !== this && !el.dataset.clicked) {
    // clone it
    const clone = el.cloneNode(true);
    // add it to the end
    this.appendChild(clone);
    // mark the one that was clicked on
    el.dataset.clicked = true;
  }
});
#parent {
  display: flex;
  flex-wrap: wrap;
}

#parent>div {
  width: 50px;
  height: 50px;
  margin: 2px;
}

.red {
  background: red;
}

.green {
  background: green;
}

.blue {
  background: blue;
}
<div id="parent" class="p">
  <div class="red"></div>
  <div class="green"></div>
  <div class="blue"></div>
</div>
|
Event listener removing issue
|
Three div elements with a box appearance:
when the user clicks on any div, a copy of this div will be added to the
end (the clicked div won't be clickable any more, and the new div will
be clickable). And so on...
I have tried this, but it creates two divs at the same time and the div can be clicked again!
<div id="parent" class="p">
<div class="red" class="d"></div>
<div class="green" class="d"></div>
<div class="blue" class="d"></div>
</div>
#parent{
display: flex;
flex-wrap: wrap;
}
.red{
width: 50px;
height: 50px;
background-color: red;
margin: 2px;
}`your text`
.green{
width: 50px;
height: 50px;
background-color: green;
margin: 2px;
}
.blue{
width: 50px;
height: 50px;
background-color: blue;
margin: 2px;
}
let parent = document.querySelector("#parent");
let div = document.querySelectorAll(".p div");

parent.addEventListener("click", function createDiv(e) {
  console.log('1');
  let child = document.createElement("div");
  parent.append(child);
  child.classList.add(e.target.className);
  console.log(e);
  e.target.removeEventListener("click", createDiv());
});
|
[
"this way...\nAn eventListener is a link between a DOM element and a function.\nTo remove any event listener you need to use the same pair [ DOM element / function ]\nIn your case this link is on the parent and not on any of his divs.\nso you cantt remove any link between of his initial divs.\n\n\nconst\n parent = document.querySelector('#parent')\n, cDivs = document.querySelectorAll('#parent > div')\n ;\ncDivs.forEach(div => div.addEventListener('click', createDiv)) // don't recreate x time your function `createDiv`\n ;\nfunction createDiv({currentTarget: initialDiv}) // just declare it only one time\n {\n initialDiv.removeEventListener('click', createDiv)\n ;\n parent\n .appendChild( initialDiv.cloneNode(true) ) // this one return the clone\n .addEventListener('click', createDiv)\n }\n#parent {\n display : flex;\n flex-wrap : wrap;\n }\n#parent > div {\n width : 50px;\n height : 50px;\n margin : 2px;\n }\n.red {\n background : red;\n }\n.green {\n background : green;\n }\n.blue {\n background : blue;\n }\n<div id=\"parent\" class=\"p\">\n <div class=\"red\" ></div>\n <div class=\"green\" ></div>\n <div class=\"blue\" ></div>\n</div>\n\n\n\n",
"\n\nconst parent = document.getElementById(\"parent\");\n\n// all clicks on the parent are handled by one function\nparent.addEventListener(\"click\", function(e) {\n // get the element that was clicked on\n const el = e.target;\n // check to be sure the click is on an inner div and if it was already clicked\n if (el !== this && !el.dataset.clicked) {\n // clone it\n const clone = el.cloneNode(true);\n // add it to the end\n this.appendChild(clone);\n // mark the one that was clicked on\n el.dataset.clicked = true;\n }\n});\n#parent {\n display: flex;\n flex-wrap: wrap;\n}\n\n#parent>div {\n width: 50px;\n height: 50px;\n margin: 2px;\n}\n\n.red {\n background: red;\n}\n\n.green {\n background: green;\n}\n\n.blue {\n background: blue;\n}\n<div id=\"parent\" class=\"p\">\n <div class=\"red\"></div>\n <div class=\"green\"></div>\n <div class=\"blue\"></div>\n</div>\n\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"css",
"html",
"javascript"
] |
stackoverflow_0074662687_css_html_javascript.txt
|
Q:
How can I get my cypress custom command to ingest this data (i think i structured the data wrong)?
Alright, as the title says, I'm trying to write a custom command for a Cypress test suite. The situation is as follows: I have several tests that need to select an indeterminate number of fields and select an option from each field's list of drop-downs.
The logic for this is crayons-in-mouth simple and works fine:
cy.get(selector)
  .select(selection)
  .scrollIntoView()
and this works great. But because I use it a lot it's a lot of highly repetitive code so I'm trying to create a custom command where I can just inject an array of arrays (various sets of selectors and selections depending on the situation) into it and it'll do the rest.
This is the custom command as I have it written now.
commands.js
Cypress.Commands.add("assignImportFields", (array) => {
  cy.wrap(array).each((selector, selection) => {
    cy.get(selector)
      .select(selection)
      .scrollIntoView()
    cy.log('using ' + selector + ' to select ' + selection)
  })
})
I have the data in a separate file that looks like this:
data.js
const importFields = {
  actorListImports: [
    [selectors.lastName, 'Last_Name'],
    [selectors.firstName, 'First_Name'],
    [selectors.phoneNum, 'Phone_Number']
  ]
}
exports.importFields = importFields;
and finally, in my test file:
tests.js
const {actorListImports} = data.importFields;
cy.assignImportFields(actorListImports)
The response I get from this is that the 'select' failed because it requires a DOM element. My selectors are fine, so I think it's trying to use an entire array (both selector and selection at once) as the selector instead of just the first part of the array.
I know I'm not structuring the data correctly, but I've tried a few different variations of it and my primitive monkey brain just can't put it together.
Can someone help me identify what's wrong with how I've structured this?
A:
You need to de-structure the array elements in the .each() parameter list, like this cy.wrap(data).each(([selector, selection])
This is a minimal example:
const selectors = {
  'lastName': 'lastName'
}

const data = [
  [selectors.lastName, 'Last_Name'],
  // [selectors.firstName, 'First_Name'],
  // [selectors.phoneNum, 'Phone_Number']
]

cy.wrap(data).each(([selector, selection]) => {
  console.log(selector, selection)
  expect(selector).to.eq('lastName') // passing
  expect(selection).to.eq('Last_Name') // passing
})
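Applied to the custom command from the question, the de-structured version would look something like this (a sketch using the same names as the question):
Cypress.Commands.add("assignImportFields", (array) => {
  // each array item is [selector, selection]; de-structure it in the callback
  cy.wrap(array).each(([selector, selection]) => {
    cy.get(selector)
      .select(selection)
      .scrollIntoView()
    cy.log('using ' + selector + ' to select ' + selection)
  })
})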
|
How can I get my cypress custom command to ingest this data (i think i structured the data wrong)?
|
Alright, as the title says, I'm trying to write a custom command for a Cypress test suite. The situation is as follows: I have several tests that need to select an indeterminate number of fields and select an option from each field's list of drop-downs.
The logic for this is crayons-in-mouth simple and works fine:
cy.get(selector)
  .select(selection)
  .scrollIntoView()
and this works great. But because I use it a lot it's a lot of highly repetitive code so I'm trying to create a custom command where I can just inject an array of arrays (various sets of selectors and selections depending on the situation) into it and it'll do the rest.
This is the custom command as I have it written now.
commands.js
Cypress.Commands.add("assignImportFields", (array) => {
  cy.wrap(array).each((selector, selection) => {
    cy.get(selector)
      .select(selection)
      .scrollIntoView()
    cy.log('using ' + selector + ' to select ' + selection)
  })
})
I have the data in a separate file that looks like this:
data.js
const importFields = {
  actorListImports: [
    [selectors.lastName, 'Last_Name'],
    [selectors.firstName, 'First_Name'],
    [selectors.phoneNum, 'Phone_Number']
  ]
}
exports.importFields = importFields;
and finally, in my test file:
tests.js
const {actorListImports} = data.importFields;
cy.assignImportFields(actorListImports)
The response I get from this is that the 'select' failed because it requires a DOM element. My selectors are fine, so I think it's trying to use an entire array (both selector and selection at once) as the selector instead of just the first part of the array.
I know I'm not structuring the data correctly, but I've tried a few different variations of it and my primitive monkey brain just can't put it together.
Can someone help me identify what's wrong with how I've structured this?
|
[
"You need to de-structure the array elements in the .each() parameter list, like this cy.wrap(data).each(([selector, selection])\nThis is a minimal example:\nconst selectors = {\n 'lastName': 'lastName'\n}\n\nconst data = [\n [selectors.lastName, 'Last_Name'],\n // [selectors.firstName, 'First_Name'],\n // [selectors.phoneNum, 'Phone_Number']\n]\n\ncy.wrap(data).each(([selector, selection]) => {\n console.log(selector, selection)\n expect(selector).to.eq('lastName') // passing\n expect(selection).to.eq('Last_Name') // passing\n})\n\n"
] |
[
2
] |
[] |
[] |
[
"cypress",
"cypress_custom_commands",
"logic"
] |
stackoverflow_0074663975_cypress_cypress_custom_commands_logic.txt
|
Q:
useEffect() hook failing to run, infinite re-render errors, no evidence the hook is running at all
I am building this React app that makes a fetch request to an API I built and hosted elsewhere. The API is functional with no problems; plus, when I run fetchPhotos() outside of useEffect() it fetches the information just fine, so I do not think it is an issue with the API.
I really have no clue what the issue is. I looked through several articles and other examples of code to see what could be the issue, but everything seems to be in order. I have an empty bracket dependency, which should fix the re-render issue according to other posts on here; I uninstalled and re-installed React, and I tried writing a separate useEffect hook to console.log text but it did not run. Maybe I need to import it another way?
import { useState, useEffect } from 'react'
function App() {
  const [photos, setPhotos] = useState([])

  useEffect(() => {
    console.log('running')
    const photos = fetchPhotos()
    setPhotos(photos)
  }, [])

  const fetchPhotos = async () => {
    const res = await fetch('fetch-url')
    const data = await res.json()
    console.log(data)
    return data
  }

  return (
    <div> component </div>
  )
}
Error is: >Uncaught Error: Too many re-renders. React limits the number of renders to prevent an infinite loop.
at renderWithHooks
A:
useEffect(()=>{
fetch('url')
.then(res=>res.json())
.then(data=>{
setPhotos(data)
})
},[])
Try doing it like this if it works. Else you can try using axios.
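If you would rather keep the async/await style, a common pattern (a sketch, not the only option) is to define and call an async helper inside the effect, so the effect callback itself stays synchronous and state is set with the resolved data rather than a Promise:
useEffect(() => {
  const loadPhotos = async () => {
    const res = await fetch('fetch-url')
    const data = await res.json()
    setPhotos(data) // set state with the resolved data, not the Promise
  }
  loadPhotos()
}, [])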
|
useEffect() hook failing to run, infinite re-render errors, no evidence the hook is running at all
|
I am building this React app that makes a fetch request to an API I built and hosted elsewhere. The API is functional with no problems; plus, when I run fetchPhotos() outside of useEffect() it fetches the information just fine, so I do not think it is an issue with the API.
I really have no clue what the issue is. I looked through several articles and other examples of code to see what could be the issue, but everything seems to be in order. I have an empty bracket dependency, which should fix the re-render issue according to other posts on here; I uninstalled and re-installed React, and I tried writing a separate useEffect hook to console.log text but it did not run. Maybe I need to import it another way?
import { useState, useEffect } from 'react'
function App() {
  const [photos, setPhotos] = useState([])

  useEffect(() => {
    console.log('running')
    const photos = fetchPhotos()
    setPhotos(photos)
  }, [])

  const fetchPhotos = async () => {
    const res = await fetch('fetch-url')
    const data = await res.json()
    console.log(data)
    return data
  }

  return (
    <div> component </div>
  )
}
Error is: >Uncaught Error: Too many re-renders. React limits the number of renders to prevent an infinite loop.
at renderWithHooks
|
[
"\n\nuseEffect(()=>{\n fetch('url')\n .then(res=>res.json())\n .then(data=>{\n setPhotos(data)\n })\n},[])\n\n\n\nTry doing it like this if it works. Else you can try using axios.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"react_hooks",
"reactjs"
] |
stackoverflow_0074664067_javascript_react_hooks_reactjs.txt
|
Q:
How can I detect an EOF in assembly, nasm?
I am trying to detect an EOF character, or just any character at all, but it doesn't work, no error either.
section .data
    file db "text.txt", 0

section .bss
    char resb 1

section .text
    global _start

_start:
    mov rax, 2
    mov rdi, file
    syscall

    mov rbx, rax
    mov rdi, rbx
    mov rax, 0
    mov rsi, char
    mov rdx, 1
    syscall

    mov rcx, char
    cmp rcx, -1
    je _endOfFile
    call _end

_endOfFile:
    print 1, file, 0
    ret

_end:
    mov rax, 3
    mov rdi, rbx
    syscall

    mov rax, 60
    mov rdi, 0
    syscall
I expected it to print the name of the file, but it doesn't do anything. When I remove the cmp, and just make it jump it prints it fine. I also tried it for other characters and it didn't work for those either. I am really new to assembly, so I have no clue what to do.
A:
Okay, a few layers of problems here.
Most fundamental is that there is no "EOF character". Unlike ISO C's getc(), the Unix read system call doesn't signal end-of-file by reading back a particular character, it signals it by returning 0 as its return value. So you need to check the value in rax after the read syscall. If it is zero, then you have reached end-of-file. If it is 1, then you successfully read a character into the memory location char. If it is a smallish negative number, then an error occurred, and the negation of this value is an errno code.
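For instance, a minimal sketch of that check right after the read syscall (register usage matches the question's code; the _readError label is hypothetical):
    mov rax, 0          ; sys_read
    mov rdi, rbx        ; fd saved from open
    mov rsi, char
    mov rdx, 1
    syscall
    cmp rax, 0          ; 0 bytes read means end-of-file
    je _endOfFile
    jl _readError       ; negative return is -errno (hypothetical error label)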
The comparison code also has a few bugs. First of all, mov rcx, char doesn't load the character from char, it loads the address of char, which naturally does not equal -1. If you look, this is exactly similar to the mov rsi, char you used to set up the system call, which likewise put the address of char into rsi.
To specify the contents of memory at location char, you use square brackets: mov rcx, [char]. However, that wouldn't be right either. On x86-64, most instructions can operate
on 8, 16, 32 or 64 bit operands. When at least one operand is a register, the size of the specified register dictates the operand size. So mov rcx, [char] would load 8 bytes, of which the lowest would be the byte from char, and the other 7 would be whatever garbage happened to follow it in memory.
To load one byte, use an 8-bit register, like cl. Then you need to likewise do the compare with only the 8-bit register, or else you're comparing against stuff that is not your character.
mov cl, [char]
cmp cl, -1
je got_ff
Though actually, in most cases, instead of mov cl, [char] it would be better to do movzx ecx, byte [char] which zeros out the upper bits of rcx. mov cl, [char] is defined as preserving those bits, which comes with a slight performance cost.
But actually actually, you don't need to load the character into a register at all; cmp works fine with a memory operand.
cmp byte [char], -1
je it_was_ff
|
How can I detect an EOF in assembly, nasm?
|
I am trying to detect an EOF character, or just any character at all, but it doesn't work, no error either.
section .data
    file db "text.txt", 0

section .bss
    char resb 1

section .text
    global _start

_start:
    mov rax, 2
    mov rdi, file
    syscall

    mov rbx, rax
    mov rdi, rbx
    mov rax, 0
    mov rsi, char
    mov rdx, 1
    syscall

    mov rcx, char
    cmp rcx, -1
    je _endOfFile
    call _end

_endOfFile:
    print 1, file, 0
    ret

_end:
    mov rax, 3
    mov rdi, rbx
    syscall

    mov rax, 60
    mov rdi, 0
    syscall
I expected it to print the name of the file, but it doesn't do anything. When I remove the cmp, and just make it jump it prints it fine. I also tried it for other characters and it didn't work for those either. I am really new to assembly, so I have no clue what to do.
|
[
"Okay, a few layers of problems here.\nMost fundamental is that there is no \"EOF character\". Unlike ISO C's getc(), the Unix read system call doesn't signal end-of-file by reading back a particular character, it signals it by returning 0 as its return value. So you need to check the value in rax after the read syscall. If it is zero, then you have reached end-of-file. If it is 1, then you successfully read a character into the memory location char. If it is a smallish negative number, then an error occurred, and the negation of this value is an errno code.\nThe comparison code also has a few bugs. First of all, mov rcx, char doesn't load the character from char, it loads the address of char, which naturally does not equal -1. If you look, this is exactly similar to the mov rsi, char you used to set up the system call, which likewise put the address of char into rsi.\nTo specify the contents of memory at location char, you use square brackets: mov rcx, [char]. However, that wouldn't be right either. On x86-64, most instructions can operate\non 8, 16, 32 or 64 bit operands. When at least one operand is a register, the size of the specified register dictates the operand size. So mov rcx, [char] would load 8 bytes, of which the lowest would be the byte from char, and the other 7 would be whatever garbage happened to follow it in memory.\nTo load one byte, use an 8-bit register, like cl. Then you need to likewise do the compare with only the 8-bit register, or else you're comparing against stuff that is not your character.\nmov cl, [char]\ncmp cl, -1\nje got_ff\n\nThough actually, in most cases, instead of mov cl, [char] it would be better to do movzx ecx, byte [char] which zeros out the upper bits of rcx. mov cl, [byte] is defined as preserving those bits, which comes with a slight performance cost.\nBut actually actually, you don't need to load the character into a register at all; cmp works fine with a memory operand.\ncmp byte [char], -1\nje it_was_ff\n\n"
] |
[
4
] |
[] |
[] |
[
"assembly",
"nasm",
"x86"
] |
stackoverflow_0074663693_assembly_nasm_x86.txt
|
Q:
How to observe for modifier key pressed (e.g. option, shift) with NSNotification in SwiftUI macOS project?
I want to have a Bool property that represents whether the Option key is pressed: @Published var isOptionPressed = false. I would use it for changing a SwiftUI View.
For that, I think I should use Combine to observe for key presses.
I tried to find an NSNotification for that event, but it seems to me that there is no NSNotification that could be useful to me.
A:
Ok, I found easy solution for my problem:
class KeyPressedController: ObservableObject {
    @Published var isOptionPressed = false

    init() {
        NSEvent.addLocalMonitorForEvents(matching: .flagsChanged) { [weak self] event -> NSEvent? in
            if event.modifierFlags.contains(.option) {
                self?.isOptionPressed = true
            } else {
                self?.isOptionPressed = false
            }
            return nil
        }
    }
}
A:
Since you are working through SwiftUI, I would recommend taking things just a step beyond watching a Publisher and put the state of the modifier flags in the SwiftUI Environment. It is my opinion that it will fit in nicely with SwiftUI's declarative syntax.
I had another implementation of this, but took the solution you found and adapted it.
import Cocoa
import SwiftUI
import Combine
struct KeyModifierFlags: EnvironmentKey {
    static let defaultValue = NSEvent.ModifierFlags([])
}

extension EnvironmentValues {
    var keyModifierFlags: NSEvent.ModifierFlags {
        get { self[KeyModifierFlags.self] }
        set { self[KeyModifierFlags.self] = newValue }
    }
}

struct ModifierFlagEnvironment<Content>: View where Content: View {
    @StateObject var flagState = ModifierFlags()
    let content: Content

    init(@ViewBuilder content: () -> Content) {
        self.content = content()
    }

    var body: some View {
        content
            .environment(\.keyModifierFlags, flagState.modifierFlags)
    }
}

final class ModifierFlags: ObservableObject {
    @Published var modifierFlags = NSEvent.ModifierFlags([])

    init() {
        NSEvent.addLocalMonitorForEvents(matching: .flagsChanged) { [weak self] event in
            self?.modifierFlags = event.modifierFlags
            return event
        }
    }
}
Note that my event closure is returning the event passed in. If you return nil you will prevent the event from going farther and someone else in the system may want to see it.
The struct KeyModifierFlags sets up a new item to be added to the view Environment. The extension to EnvironmentValues lets us store and
retrieve the current flags from the environment.
Finally there is the ModifierFlagEnvironment view. It has no content of its own - that is passed to the initializer in an @ViewBuilder function. What it does do is provide the StateObject that contains the state monitor, and it passes its current value for the modifier flags into the Environment of the content.
To use the ModifierFlagEnvironment you wrap a top-level view in your hierarchy with it. In a simple Cocoa app built from the default Xcode template, I changed the application SwiftUI content to be:
struct KeyWatcherApp: App {
    var body: some Scene {
        WindowGroup {
            ModifierFlagEnvironment {
                ContentView()
            }
        }
    }
}
So all of the views in the application could watch the flags.
Then to make use of it you could do:
struct ContentView: View {
    @Environment(\.keyModifierFlags) var modifierFlags: NSEvent.ModifierFlags

    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            if(modifierFlags.contains(.option)) {
                Text("Option is pressed")
            } else {
                Text("Option is up")
            }
        }
        .padding()
    }
}
Here the content view watches the environment for the flags and the view makes decisions on what to show using the current modifiers.
|
How to observe for modifier key pressed (e.g. option, shift) with NSNotification in SwiftUI macOS project?
|
I want to have a Bool property that represents whether the Option key is pressed: @Published var isOptionPressed = false. I would use it for changing a SwiftUI View.
For that, I think I should use Combine to observe for key presses.
I tried to find an NSNotification for that event, but it seems to me that there is no NSNotification that could be useful to me.
|
[
"Ok, I found easy solution for my problem:\nclass KeyPressedController: ObservableObject {\n @Published var isOptionPressed = false\n \n init() {\n NSEvent.addLocalMonitorForEvents(matching: .flagsChanged) { [weak self] event -> NSEvent? in\n if event.modifierFlags.contains(.option) {\n self?.isOptionPressed = true\n } else {\n self?.isOptionPressed = false\n }\n return nil\n }\n }\n}\n\n",
"Since you are working through SwiftUI, I would recommend taking things just a step beyond watching a Publisher and put the state of the modifier flags in the SwiftUI Environment. It is my opinion that it will fit in nicely with SwiftUI's declarative syntax.\nI had another implementation of this, but took the solution you found and adapted it.\n\nimport Cocoa\nimport SwiftUI\nimport Combine\n\nstruct KeyModifierFlags: EnvironmentKey {\n static let defaultValue = NSEvent.ModifierFlags([])\n}\n\nextension EnvironmentValues {\n var keyModifierFlags: NSEvent.ModifierFlags {\n get { self[KeyModifierFlags.self] }\n set { self[KeyModifierFlags.self] = newValue }\n }\n}\n\nstruct ModifierFlagEnvironment<Content>: View where Content:View {\n @StateObject var flagState = ModifierFlags()\n let content: Content;\n\n init(@ViewBuilder content: () -> Content) {\n self.content = content();\n }\n\n var body: some View {\n content\n .environment(\\.keyModifierFlags, flagState.modifierFlags)\n }\n}\n\nfinal class ModifierFlags: ObservableObject {\n @Published var modifierFlags = NSEvent.ModifierFlags([])\n\n init() {\n NSEvent.addLocalMonitorForEvents(matching: .flagsChanged) { [weak self] event in\n self?.modifierFlags = event.modifierFlags\n return event;\n }\n }\n}\n\nNote that my event closure is returning the event passed in. If you return nil you will prevent the event from going farther and someone else in the system may want to see it.\nThe struct KeyModifierFlags sets up a new item to be added to the view Environment. The extension to EnvironmentValues lets us store and\nretrieve the current flags from the environment.\nFinally there is the ModifierFlagEnvironment view. It has no content of its own - that is passed to the initializer in an @ViewBuilder function. What it does do is provide the StateObject that contains the state monitor, and it passes it's current value for the modifier flags into the Environment of the content.\nTo use the ModifierFlagEnvironment you wrap a top-level view in your hierarchy with it. In a simple Cocoa app built from the default Xcode template, I changed the application SwiftUI content to be:\nstruct KeyWatcherApp: App {\n var body: some Scene {\n WindowGroup {\n ModifierFlagEnvironment {\n ContentView()\n }\n }\n }\n}\n\nSo all of the views in the application could watch the flags.\nThen to make use of it you could do:\nstruct ContentView: View {\n @Environment(\\.keyModifierFlags) var modifierFlags: NSEvent.ModifierFlags\n\n var body: some View {\n VStack {\n Image(systemName: \"globe\")\n .imageScale(.large)\n .foregroundColor(.accentColor)\n if(modifierFlags.contains(.option)) {\n Text(\"Option is pressed\")\n } else {\n Text(\"Option is up\")\n }\n }\n .padding()\n }\n}\n\nHere the content view watches the environment for the flags and the view makes decisions on what to show using the current modifiers.\n"
] |
[
0,
0
] |
[] |
[] |
[
"appkit",
"combine",
"macos",
"nsnotifications",
"swiftui"
] |
stackoverflow_0074658685_appkit_combine_macos_nsnotifications_swiftui.txt
|
Q:
Use all but the last element from an iterator
I want to split a Vec into some parts of equal length, and then map over them. I have an iterator resulting from a call to Vec's chunks() method. This may leave me with a part that will be smaller than other parts, which will be the last element generated by it.
To be sure that all parts have equal length, I just want to drop that last element and then call map() on what's left.
A:
As Sebastian Redl points out, checking the length of each chunk is the better solution for your specific case.
To answer the question you asked ("Use all but the last element from an iterator"), you can use Iterator::peekable to look ahead one. That will tell you if you are on the last item or not and you can decide to skip processing it if so.
let things = [0, 1, 2, 3, 4];

let mut chunks = things.chunks(2).peekable();
while let Some(chunk) = chunks.next() {
    if chunks.peek().is_some() {
        print!("Not the last: ");
    } else {
        print!("This is the last: ")
    }

    println!("{:?}", chunk);
}
To be sure that all parts have equal length, I just want to drop that last element
Always dropping the last element won't do this. For example, if you evenly chunk up your input, then always dropping the last element would lose a full chunk. You'd have to do some pre-calculation to decide if you need to drop it or not.
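For instance, a sketch of that pre-calculation, which keeps only the chunks that are guaranteed to be full:
let things = [0, 1, 2, 3, 4];
let size = 2;

// Number of complete chunks; the trailing short chunk (if any) is skipped.
let full_chunks = things.len() / size;
for chunk in things.chunks(size).take(full_chunks) {
    println!("{:?}", chunk);
}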
A:
You can filter() the chunks iterator on the slice's len() being the amount you passed to chunks():
let things = [0, 1, 2, 3, 4];

for chunk in things.chunks(2).filter(|c| c.len() == 2) {
    println!("{:?}", chunk);
}
As of Rust 1.31, you can use the chunks_exact method as well:
let things = [0, 1, 2, 3, 4];

for chunk in things.chunks_exact(2) {
    println!("{:?}", chunk);
}
Note that the returned iterator also has the method remainder if you need to get the uneven amount of items at the very end.
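For example, a small sketch of remainder, which returns the leftover items that chunks_exact will never yield:
let things = [0, 1, 2, 3, 4];

let chunks = things.chunks_exact(2);
println!("left over: {:?}", chunks.remainder()); // prints [4]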
A:
As an alternate solution that is (probably) marginally more performant than Shepmaster's solution and a little cleaner, you can use the next_back() method from std::iter::DoubleEndedIterator:
let things = [0, 1, 2, 3, 4];

let mut chunks = things.chunks(2);
let last = chunks.next_back().unwrap();

println!("Last: {:?}", last);

for chunk in chunks {
    println!("Not last: {:?}", chunk);
}
next_back() eats the last element of the iterator, so after calling next_back() the iterator can be used to iterate over everything else. The output of the segment above:
Last: [4]
Not last: [0, 1]
Not last: [2, 3]
|
Use all but the last element from an iterator
|
I want to split a Vec into some parts of equal length, and then map over them. I have an iterator resulting from a call to Vec's chunks() method. This may leave me with a part that will be smaller than other parts, which will be the last element generated by it.
To be sure that all parts have equal length, I just want to drop that last element and then call map() on what's left.
|
[
"As Sebastian Redl points out, checking the length of each chunk is the better solution for your specific case.\nTo answer the question you asked (\"Use all but the last element from an iterator\"), you can use Iterator::peekable to look ahead one. That will tell you if you are on the last item or not and you can decide to skip processing it if so.\nlet things = [0, 1, 2, 3, 4];\n\nlet mut chunks = things.chunks(2).peekable();\nwhile let Some(chunk) = chunks.next() {\n if chunks.peek().is_some() {\n print!(\"Not the last: \");\n } else {\n print!(\"This is the last: \")\n }\n\n println!(\"{:?}\", chunk);\n}\n\n\nTo be sure that all parts have equal length, I just want to drop that last element\n\nAlways dropping the last element won't do this. For example, if you evenly chunk up your input, then always dropping the last element would lose a full chunk. You'd have to do some pre-calculation to decide if you need to drop it or not.\n",
"You can filter() the chunks iterator on the slice's len() being the amount you passed to chunks():\nlet things = [0, 1, 2, 3, 4];\n\nfor chunk in things.chunks(2).filter(|c| c.len() == 2) {\n println!(\"{:?}\", chunk);\n}\n\nAs of Rust 1.31, you can use the chunks_exact method as well:\nlet things = [0, 1, 2, 3, 4];\n\nfor chunk in things.chunks_exact(2) {\n println!(\"{:?}\", chunk);\n}\n\nNote that the returned iterator also has the method remainder if you need to get the uneven amount of items at the very end.\n",
"As an alternate solution that is (probably) marginally more performant than Shepmaster's solution and a little cleaner, you can use the next_back() method from std::iter::DoubleEndedIterator:\nlet things = [0, 1, 2, 3, 4];\n\nlet mut chunks = things.chunks(2);\nlet last = chunks.next_back().unwrap();\n \nprintln!(\"Last: {:?}\", last);\n \nfor chunk in chunks {\n println!(\"Not last: {:?}\", chunk);\n}\n\nnext_back() eats the last element of the iterator, so after calling next_back() the iterator can be used to iterate over everything else. The output of the segment above:\nLast: [4]\nNot last: [0, 1]\nNot last: [2, 3]\n\n"
] |
[
14,
10,
1
] |
[] |
[] |
[
"iterator",
"rust"
] |
stackoverflow_0048102662_iterator_rust.txt
|
Q:
How to create a protected route?
How do you create a protected route with react-router-dom and store the response in local storage, so that when a user opens the app the next time they can view their details again? After login, they should be redirected to the dashboard page.
All functionality is added in ContextApi.
Codesandbox link : Code
I tried but was not able to achieve it
Route Page
import React, { useContext } from "react";
import { globalC } from "./context";
import { Route, Switch, BrowserRouter } from "react-router-dom";
import About from "./About";
import Dashboard from "./Dashboard";
import Login from "./Login";
import PageNotFound from "./PageNotFound";
function Routes() {
  const { authLogin } = useContext(globalC);
  console.log("authLogin", authLogin);

  return (
    <BrowserRouter>
      <Switch>
        {authLogin ? (
          <>
            <Route path="/dashboard" component={Dashboard} exact />
            <Route exact path="/About" component={About} />
          </>
        ) : (
          <Route path="/" component={Login} exact />
        )}
        <Route component={PageNotFound} />
      </Switch>
    </BrowserRouter>
  );
}
export default Routes;
Context Page
import React, { Component, createContext } from "react";
import axios from "axios";
export const globalC = createContext();
export class Gprov extends Component {
  state = {
    authLogin: null,
    authLoginerror: null
  };

  componentDidMount() {
    var localData = JSON.parse(localStorage.getItem("loginDetail"));
    if (localData) {
      this.setState({
        authLogin: localData
      });
    }
  }

  loginData = async () => {
    let payload = {
      token: "ctz43XoULrgv_0p1pvq7tA",
      data: {
        name: "nameFirst",
        email: "internetEmail",
        phone: "phoneHome",
        _repeat: 300
      }
    };
    await axios
      .post(`https://app.fakejson.com/q`, payload)
      .then((res) => {
        if (res.status === 200) {
          this.setState({
            authLogin: res.data
          });
          localStorage.setItem("loginDetail", JSON.stringify(res.data));
        }
      })
      .catch((err) =>
        this.setState({
          authLoginerror: err
        })
      );
  };

  render() {
    // console.log(localStorage.getItem("loginDetail"));
    return (
      <globalC.Provider
        value={{
          ...this.state,
          loginData: this.loginData
        }}
      >
        {this.props.children}
      </globalC.Provider>
    );
  }
}
A:
Issue
<BrowserRouter>
<Switch>
{authLogin ? (
<>
<Route path="/dashboard" component={Dashboard} exact />
<Route exact path="/About" component={About} />
</>
) : (
<Route path="/" component={Login} exact />
)}
<Route component={PageNotFound} />
</Switch>
</BrowserRouter>
The Switch doesn't handle rendering anything other than Route and Redirect components. If you want to "nest" like this then you need to wrap each in generic routes, but that is completely unnecessary.
Your login component also doesn't handle redirecting back to any "home" page or private routes that were originally being accessed.
Solution
react-router-dom v5
Create a PrivateRoute component that consumes your auth context.
const PrivateRoute = (props) => {
const location = useLocation();
const { authLogin } = useContext(globalC);
if (authLogin === undefined) {
return null; // or loading indicator/spinner/etc
}
return authLogin ? (
<Route {...props} />
) : (
<Redirect
to={{
pathname: "/login",
state: { from: location }
}}
/>
);
};
Update your Login component to handle redirecting back to the original route being accessed.
export default function Login() {
  const location = useLocation();
  const history = useHistory();
  const { authLogin, loginData } = useContext(globalC);

  useEffect(() => {
    if (authLogin) {
      const { from } = location.state || { from: { pathname: "/" } };
      history.replace(from);
    }
  }, [authLogin, history, location]);

  return (
    <div
      style={{ height: "100vh" }}
      className="d-flex justify-content-center align-items-center"
    >
      <button type="button" onClick={loginData} className="btn btn-primary">
        Login
      </button>
    </div>
  );
}
Render all your routes in a "flat list"
function Routes() {
  return (
    <BrowserRouter>
      <Switch>
        <PrivateRoute path="/dashboard" component={Dashboard} />
        <PrivateRoute path="/About" component={About} />
        <Route path="/login" component={Login} />
        <Route component={PageNotFound} />
      </Switch>
    </BrowserRouter>
  );
}
react-router-dom v6
In version 6 custom route components have fallen out of favor, the preferred method is to use an auth layout component.
import { Navigate, Outlet } from 'react-router-dom';
const PrivateRoutes = () => {
const location = useLocation();
const { authLogin } = useContext(globalC);
if (authLogin === undefined) {
return null; // or loading indicator/spinner/etc
}
return authLogin
? <Outlet />
: <Navigate to="/login" replace state={{ from: location }} />;
}
...
<BrowserRouter>
<Routes>
<Route path="/" element={<PrivateRoutes />} >
<Route path="dashboard" element={<Dashboard />} />
<Route path="about" element={<About />} />
</Route>
<Route path="/login" element={<Login />} />
<Route path="*" element={<PageNotFound />} />
</Routes>
</BrowserRouter>
or
const routes = [
  {
    path: "/",
    element: <PrivateRoutes />,
    children: [
      {
        path: "dashboard",
        element: <Dashboard />,
      },
      {
        path: "about",
        element: <About />
      },
    ],
  },
  {
    path: "/login",
    element: <Login />,
  },
  {
    path: "*",
    element: <PageNotFound />
  },
];
...
export default function Login() {
  const location = useLocation();
  const navigate = useNavigate();
  const { authLogin, loginData } = useContext(globalC);

  useEffect(() => {
    if (authLogin) {
      const { from } = location.state || { from: { pathname: "/" } };
      navigate(from, { replace: true });
    }
  }, [authLogin, location, navigate]);

  return (
    <div
      style={{ height: "100vh" }}
      className="d-flex justify-content-center align-items-center"
    >
      <button type="button" onClick={loginData} className="btn btn-primary">
        Login
      </button>
    </div>
  );
}
A:
For v6:
import { Routes, Route, Navigate } from "react-router-dom";
function App() {
  return (
    <Routes>
      <Route path="/public" element={<PublicPage />} />
      <Route
        path="/protected"
        element={
          <RequireAuth redirectTo="/login">
            <ProtectedPage />
          </RequireAuth>
        }
      />
    </Routes>
  );
}

function RequireAuth({ children, redirectTo }) {
  let isAuthenticated = getAuth();
  return isAuthenticated ? children : <Navigate to={redirectTo} />;
}
Link to docs:
https://gist.github.com/mjackson/d54b40a094277b7afdd6b81f51a0393f
A:
import { v4 as uuidv4 } from "uuid";
const routes = [
  {
    id: uuidv4(),
    isProtected: false,
    exact: true,
    path: "/home",
    component: param => <Overview {...param} />,
  },
  {
    id: uuidv4(),
    isProtected: true,
    exact: true,
    path: "/protected",
    component: param => <Overview {...param} />,
    allowed: [...advanceProducts], // subscription
  },
  {
    // if you need conditional rendering for the same path
    id: uuidv4(),
    isProtected: true,
    exact: true,
    path: "/",
    component: null,
    conditionalComponent: true,
    allowed: {
      [subscription1]: param => <Overview {...param} />,
      [subscription2]: param => <Customers {...param} />,
    },
  },
]
// Navigation Component
import React, { useEffect, useState } from "react";
import { useSelector } from "react-redux";
import { Switch, Route, useLocation } from "react-router-dom";
// ...component logic
<Switch>
{routes.map(params => {
return (
<ProtectedRoutes
exact
routeParams={params}
key={params.path}
path={params.path}
/>
);
})}
<Route
render={() => {
props.setHideNav(true);
setHideHeader(true);
return <ErrorPage type={404} />;
}}
/>
</Switch>
// ProtectedRoute component
import React from "react";
import { Route } from "react-router-dom";
import { useSelector } from "react-redux";
const ProtectedRoutes = props => {
const { routeParams } = props;
const currentSubscription = 'xyz'; // your current subscription;
if (routeParams.conditionalComponent) {
return (
<Route
key={routeParams.path}
path={routeParams.path}
render={routeParams.allowed[currentSubscription]}
/>
);
}
if (routeParams.isProtected && routeParams.allowed.includes(currentSubscription)) {
return (
<Route key={routeParams.path} path={routeParams.path} render={routeParams?.component} />
);
}
if (!routeParams.isProtected) {
return (
<Route key={routeParams.path} path={routeParams.path} render={routeParams?.component} />
);
}
return null;
};
export default ProtectedRoutes;
I'd like to highlight one thing: never forget to pass path as a prop to ProtectedRoute, or else it will not work.
A:
Here is an easy react-router v6 protected route. I have put all the routes I want to protect in a routes.js:
const routes = [{ path: "/dashboard", name: "Dashboard", element: <Dashboard/> }]
To render the routes, just map them as follows:
<Routes>
  {routes.map((route, id) => {
    return (
      <Route
        key={id}
        path={route.path}
        exact={route.exact}
        name={route.name}
        element={
          localStorage.getItem("token") ? (
            route.element
          ) : (
            <Navigate to="/login" />
          )
        }
      />
    );
  })}
</Routes>
A:
If you want an easy way to implement this, render Login in App.js: if the user logs in, set a user variable. While the user variable is set, the routes are rendered; otherwise the app stays at the login page. I implemented this in my project.
return (
  <div>
    <Notification notification={notification} type={notificationType} />
    {
      user === null &&
      <LoginForm startLogin={handleLogin} />
    }
    {
      user !== null &&
      <NavBar user={user} setUser={setUser} />
    }
    {
      user !== null &&
      <Router>
        <Routes>
          <Route exact path="/" element={<Home />} />
          <Route exact path="/adduser" element={<AddUser />} />
          <Route exact path="/viewuser/:id" element={<ViewUser />} />
        </Routes>
      </Router>
    }
  </div>
)
|
How to create a protected route?
|
How do you create a protected route with react-router-dom and store the response in local storage, so that when a user opens the app the next time they can view their details again? After login, they should be redirected to the dashboard page.
All functionality is added in ContextApi.
Codesandbox link : Code
I tried but was not able to achieve it
Route Page
import React, { useContext } from "react";
import { globalC } from "./context";
import { Route, Switch, BrowserRouter } from "react-router-dom";
import About from "./About";
import Dashboard from "./Dashboard";
import Login from "./Login";
import PageNotFound from "./PageNotFound";
function Routes() {
  const { authLogin } = useContext(globalC);
  console.log("authLogin", authLogin);

  return (
    <BrowserRouter>
      <Switch>
        {authLogin ? (
          <>
            <Route path="/dashboard" component={Dashboard} exact />
            <Route exact path="/About" component={About} />
          </>
        ) : (
          <Route path="/" component={Login} exact />
        )}
        <Route component={PageNotFound} />
      </Switch>
    </BrowserRouter>
  );
}
export default Routes;
Context Page
import React, { Component, createContext } from "react";
import axios from "axios";
export const globalC = createContext();
export class Gprov extends Component {
  state = {
    authLogin: null,
    authLoginerror: null
  };

  componentDidMount() {
    var localData = JSON.parse(localStorage.getItem("loginDetail"));
    if (localData) {
      this.setState({
        authLogin: localData
      });
    }
  }

  loginData = async () => {
    let payload = {
      token: "ctz43XoULrgv_0p1pvq7tA",
      data: {
        name: "nameFirst",
        email: "internetEmail",
        phone: "phoneHome",
        _repeat: 300
      }
    };
    await axios
      .post(`https://app.fakejson.com/q`, payload)
      .then((res) => {
        if (res.status === 200) {
          this.setState({
            authLogin: res.data
          });
          localStorage.setItem("loginDetail", JSON.stringify(res.data));
        }
      })
      .catch((err) =>
        this.setState({
          authLoginerror: err
        })
      );
  };

  render() {
    // console.log(localStorage.getItem("loginDetail"));
    return (
      <globalC.Provider
        value={{
          ...this.state,
          loginData: this.loginData
        }}
      >
        {this.props.children}
      </globalC.Provider>
    );
  }
}
|
[
"Issue\n<BrowserRouter>\n <Switch>\n {authLogin ? (\n <>\n <Route path=\"/dashboard\" component={Dashboard} exact />\n <Route exact path=\"/About\" component={About} />\n </>\n ) : (\n <Route path=\"/\" component={Login} exact />\n )}\n\n <Route component={PageNotFound} />\n </Switch>\n</BrowserRouter>\n\nThe Switch doesn't handle rendering anything other than Route and Redirect components. If you want to \"nest\" like this then you need to wrap each in generic routes, but that is completely unnecessary.\nYour login component also doesn't handle redirecting back to any \"home\" page or private routes that were originally being accessed.\nSolution\nreact-router-dom v5\nCreate a PrivateRoute component that consumes your auth context.\nconst PrivateRoute = (props) => {\n const location = useLocation();\n const { authLogin } = useContext(globalC);\n\n if (authLogin === undefined) {\n return null; // or loading indicator/spinner/etc\n }\n\n return authLogin ? (\n <Route {...props} />\n ) : (\n <Redirect\n to={{\n pathname: \"/login\",\n state: { from: location }\n }}\n />\n );\n};\n\nUpdate your Login component to handle redirecting back to the original route being accessed.\nexport default function Login() {\n const location = useLocation();\n const history = useHistory();\n const { authLogin, loginData } = useContext(globalC);\n\n useEffect(() => {\n if (authLogin) {\n const { from } = location.state || { from: { pathname: \"/\" } };\n history.replace(from);\n }\n }, [authLogin, history, location]);\n\n return (\n <div\n style={{ height: \"100vh\" }}\n className=\"d-flex justify-content-center align-items-center\"\n >\n <button type=\"button\" onClick={loginData} className=\"btn btn-primary\">\n Login\n </button>\n </div>\n );\n}\n\nRender all your routes in a \"flat list\"\nfunction Routes() {\n return (\n <BrowserRouter>\n <Switch>\n <PrivateRoute path=\"/dashboard\" component={Dashboard} />\n <PrivateRoute path=\"/About\" component={About} />\n <Route path=\"/login\" component={Login} />\n <Route component={PageNotFound} />\n </Switch>\n </BrowserRouter>\n );\n}\n\n\nreact-router-dom v6\nIn version 6 custom route components have fallen out of favor, the preferred method is to use an auth layout component.\nimport { Navigate, Outlet } from 'react-router-dom';\n\nconst PrivateRoutes = () => {\n const location = useLocation();\n const { authLogin } = useContext(globalC);\n\n if (authLogin === undefined) {\n return null; // or loading indicator/spinner/etc\n }\n\n return authLogin \n ? 
<Outlet />\n : <Navigate to=\"/login\" replace state={{ from: location }} />;\n}\n\n...\n<BrowserRouter>\n <Routes>\n <Route path=\"/\" element={<PrivateRoutes />} >\n <Route path=\"dashboard\" element={<Dashboard />} />\n <Route path=\"about\" element={<About />} />\n </Route>\n <Route path=\"/login\" element={<Login />} />\n <Route path=\"*\" element={<PageNotFound />} />\n </Routes>\n</BrowserRouter>\n\nor\nconst routes = [\n {\n path: \"/\",\n element: <PrivateRoutes />,\n children: [\n {\n path: \"dashboard\",\n element: <Dashboard />,\n },\n {\n path: \"about\",\n element: <About />\n },\n ],\n },\n {\n path: \"/login\",\n element: <Login />,\n },\n {\n path: \"*\",\n element: <PageNotFound />\n },\n];\n\n...\nexport default function Login() {\n const location = useLocation();\n const navigate = useNavigate();\n const { authLogin, loginData } = useContext(globalC);\n\n useEffect(() => {\n if (authLogin) {\n const { from } = location.state || { from: { pathname: \"/\" } };\n navigate(from, { replace: true });\n }\n }, [authLogin, location, navigate]);\n\n return (\n <div\n style={{ height: \"100vh\" }}\n className=\"d-flex justify-content-center align-items-center\"\n >\n <button type=\"button\" onClick={loginData} className=\"btn btn-primary\">\n Login\n </button>\n </div>\n );\n}\n\n",
"For v6:\nimport { Routes, Route, Navigate } from \"react-router-dom\";\n\nfunction App() {\n return (\n <Routes>\n <Route path=\"/public\" element={<PublicPage />} />\n <Route\n path=\"/protected\"\n element={\n <RequireAuth redirectTo=\"/login\">\n <ProtectedPage />\n </RequireAuth>\n }\n />\n </Routes>\n );\n}\n\nfunction RequireAuth({ children, redirectTo }) {\n let isAuthenticated = getAuth();\n return isAuthenticated ? children : <Navigate to={redirectTo} />;\n}\n\nLink to docs:\nhttps://gist.github.com/mjackson/d54b40a094277b7afdd6b81f51a0393f\n",
"\n\nimport { v4 as uuidv4 } from \"uuid\";\n\nconst routes = [\n {\n id: uuidv4(),\n isProtected: false,\n exact: true,\n path: \"/home\",\n component: param => <Overview {...param} />,\n },\n {\n id: uuidv4(),\n isProtected: true,\n exact: true,\n path: \"/protected\",\n component: param => <Overview {...param} />,\n allowed: [...advanceProducts], // subscription\n },\n {\n // if you conditional based rendering for same path\n id: uuidv4(),\n isProtected: true,\n exact: true,\n path: \"/\",\n component: null,\n conditionalComponent: true,\n allowed: {\n [subscription1]: param => <Overview {...param} />,\n [subscription2]: param => <Customers {...param} />,\n },\n },\n]\n\n// Navigation Component\nimport React, { useEffect, useState } from \"react\";\nimport { useSelector } from \"react-redux\";\nimport { Switch, Route, useLocation } from \"react-router-dom\";\n\n// ...component logic\n<Switch>\n {routes.map(params => {\n return (\n <ProtectedRoutes\n exact\n routeParams={params}\n key={params.path}\n path={params.path}\n />\n );\n })}\n <Route\n render={() => {\n props.setHideNav(true);\n setHideHeader(true);\n return <ErrorPage type={404} />;\n }}\n />\n</Switch>\n\n// ProtectedRoute component\nimport React from \"react\";\nimport { Route } from \"react-router-dom\";\nimport { useSelector } from \"react-redux\";\n\nconst ProtectedRoutes = props => {\n const { routeParams } = props;\n const currentSubscription = 'xyz'; // your current subscription;\n if (routeParams.conditionalComponent) {\n return (\n <Route\n key={routeParams.path}\n path={routeParams.path}\n render={routeParams.allowed[currentSubscription]}\n />\n );\n }\n if (routeParams.isProtected && routeParams.allowed.includes(currentSubscription)) {\n return (\n <Route key={routeParams.path} path={routeParams.path} render={routeParams?.component} />\n );\n }\n if (!routeParams.isProtected) {\n return (\n <Route key={routeParams.path} path={routeParams.path} render={routeParams?.component} />\n );\n }\n return null;\n};\n\nexport default ProtectedRoutes;\n\n\n\nWould like to add highlight never forget to give path as prop to ProtectedRoute, else it will not work.\n",
"Here is an easy react-router v6 protected route. I have put all the routes I want to protect in a routes.js:-\nconst routes = [{ path: \"/dasboard\", name:\"Dashboard\", element: <Dashboard/> }]\n\nTo render the routes just map them as follows: -\n<Routes>\n {routes.map((routes, id) => {\n return(\n <Route\n key={id}\n path={route.path}\n exact={route.exact}\n name={route.name}\n element={\n localStorage.getItem(\"token\") ? (\n route.element\n ) : (\n <Navigate to=\"/login\" />\n )\n }\n )\n })\n }\n</Routes>\n\n",
"If you want an easy way to implement then use Login in App.js, if user is loggedin then set user variable. If user variable is set then start those route else it will stuck at login page. I implemented this in my project.\nreturn (\n<div>\n <Notification notification={notification} type={notificationType} />\n\n {\n user === null &&\n <LoginForm startLogin={handleLogin} />\n }\n\n {\n user !== null &&\n <NavBar user={user} setUser={setUser} />\n }\n\n {\n user !== null &&\n <Router>\n <Routes>\n <Route exact path=\"/\" element={<Home />} />\n <Route exact path=\"/adduser\" element={<AddUser />} /> />\n <Route exact path=\"/viewuser/:id\" element={<ViewUser />} />\n </Routes>\n </Router>\n }\n\n</div>\n\n)\n"
] |
[
26,
4,
0,
0,
0
] |
[] |
[] |
[
"react_router",
"react_router_dom",
"reactjs"
] |
stackoverflow_0066289122_react_router_react_router_dom_reactjs.txt
|
Q:
How to Integrate tinymce editor in ReactJs
I am new to React, working with Next.js, and I am trying to integrate the TinyMCE Editor
in my page, but I am getting the following error:
"TypeError: editor1 is null"
How can I fix this? I tried the following code; where am I wrong?
const handleSubmit = (e: any) => {
e.preventDefault();
let editor: any = null;
const content = editor.getContent();
const data = {
first: e.target.name.value,
last: e.target.cat_name.value,
content: content, // add the content to the data object
}
};
<Editor
onInit={(evt, ed) => editor = ed} // set the editor reference
initialValue="<p>This is the initial content of the editor.</p>"
init={{
height: 500,
menubar: false,
plugins: [
'advlist autolink lists link image charmap print preview anchor',
'searchreplace visualblocks code fullscreen',
'insertdatetime media table paste code help wordcount'
],
toolbar: 'undo redo | formatselect | ' +
'bold italic backcolor | alignleft aligncenter ' +
'alignright alignjustify | bullist numlist outdent indent | ' +
'removeformat | help',
content_style: 'body { font-family:Helvetica,Arial,sans-serif; font-size:14px }'
}}
/>
A:
It looks like you are trying to get the content of the editor before it has been initialized. The error you are getting is because editor1 is null, meaning that it has not been set to the editor instance yet.
One way to fix this would be to move the code that gets the content of the editor inside the onInit event handler, like this:
const handleSubmit = (e: any) => {
e.preventDefault();
let editor: any = null;
<Editor
onInit={(evt, ed) => {
editor = ed;
const content = editor.getContent();
const data = {
first: e.target.name.value,
last: e.target.cat_name.value,
content: content, // add the content to the data object
}
}}
initialValue="<p>This is the initial content of the editor.</p>"
init={{
height: 500,
menubar: false,
plugins: [
'advlist autolink lists link image charmap print preview anchor',
'searchreplace visualblocks code fullscreen',
'insertdatetime media table paste code help wordcount'
],
toolbar: 'undo redo | formatselect | ' +
'bold italic backcolor | alignleft aligncenter ' +
'alignright alignjustify | bullist numlist outdent indent | ' +
'removeformat | help',
content_style: 'body { font-family:Helvetica,Arial,sans-serif; font-size:14px }'
}}
/>
};
This way, the getContent method will be called only after the editor has been initialized, and you will be able to get the content of the editor without encountering the TypeError.
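An alternative worth noting (a sketch, not part of the answer above, assuming the @tinymce/tinymce-react wrapper): keep the editor instance in a ref, so the submit handler can read the content at any time after initialization instead of nesting JSX inside the handler.
import { useRef } from "react";
import { Editor } from "@tinymce/tinymce-react";

function QuestionForm() {
  const editorRef = useRef(null);

  const handleSubmit = (e) => {
    e.preventDefault();
    // safe: editorRef.current is only set once the editor has initialized
    if (editorRef.current) {
      const content = editorRef.current.getContent();
      console.log(content);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <Editor onInit={(evt, editor) => (editorRef.current = editor)} />
      <button type="submit">Save</button>
    </form>
  );
}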
|
How to Integrate tinymce editor in ReactJs
|
I am new to React, working with Next.js, and I am trying to integrate the TinyMCE Editor
in my page, but I am getting the following error:
"TypeError: editor1 is null"
How can I fix this? I tried the following code; where am I wrong?
const handleSubmit = (e: any) => {
e.preventDefault();
let editor: any = null;
const content = editor.getContent();
const data = {
first: e.target.name.value,
last: e.target.cat_name.value,
content: content, // add the content to the data object
}
};
<Editor
onInit={(evt, ed) => editor = ed} // set the editor reference
initialValue="<p>This is the initial content of the editor.</p>"
init={{
height: 500,
menubar: false,
plugins: [
'advlist autolink lists link image charmap print preview anchor',
'searchreplace visualblocks code fullscreen',
'insertdatetime media table paste code help wordcount'
],
toolbar: 'undo redo | formatselect | ' +
'bold italic backcolor | alignleft aligncenter ' +
'alignright alignjustify | bullist numlist outdent indent | ' +
'removeformat | help',
content_style: 'body { font-family:Helvetica,Arial,sans-serif; font-size:14px }'
}}
/>
|
[
"It looks like you are trying to get the content of the editor before it has been initialized. The error you are getting is because editor1 is null, meaning that it has not been set to the editor instance yet.\nOne way to fix this would be to move the code that gets the content of the editor inside the onInit event handler, like this:\nconst handleSubmit = (e: any) => {\n e.preventDefault();\n let editor: any = null;\n\n <Editor\n onInit={(evt, ed) => {\n editor = ed;\n const content = editor.getContent();\n const data = {\n first: e.target.name.value,\n last: e.target.cat_name.value,\n content: content, // add the content to the data object\n }\n }}\n initialValue=\"<p>This is the initial content of the editor.</p>\"\n init={{\n height: 500,\n menubar: false,\n plugins: [\n 'advlist autolink lists link image charmap print preview anchor',\n 'searchreplace visualblocks code fullscreen',\n 'insertdatetime media table paste code help wordcount'\n ],\n toolbar: 'undo redo | formatselect | ' +\n 'bold italic backcolor | alignleft aligncenter ' +\n 'alignright alignjustify | bullist numlist outdent indent | ' +\n 'removeformat | help',\n content_style: 'body { font-family:Helvetica,Arial,sans-serif; font-size:14px }'\n }}\n />\n};\n\nThis way, the getContent method will be called only after the editor has been initialized, and you will be able to get the content of the editor without encountering the TypeError.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"next.js",
"reactjs"
] |
stackoverflow_0074664089_javascript_next.js_reactjs.txt
|
Q:
Slice An Array In A Change Event Listener For Files When The Total Allowed Number Of Files Is Added In More Than One Go - JavaScript
I have a form that takes file uploads and it currently has a limit of 10 files per upload. There are PHP validations in the backend for this too.
When more than 10 files are attached, I currently have a JavaScript slice(0, 10) method inside a change event for the file input element, which removes any files (and their preview image thumbnails) when the number attached is more than 10 files.
// For each added file, add it to submitData (the DataTransfer Object), if not already present
[...e.target.files].slice(0,10).forEach((file) => {
if (currentSubmitData.every((currFile) => currFile.name !== file.name)) {
submitData.items.add(file);
}
});
The Issue
What I can’t seem to do though is work out a way to slice() the files array in a compound attachment situation, i.e. if 8 files are attached initially, and then the user decides to add another 4 prior to submitting the form, taking the total to 12. The current slice only happens when more than 10 are added in one go.
I have a decode() method that runs inside a loop (for every image attached) that carries out frontend validations, and a Promise.allSettled() call that waits for the last image to be attached prior to outputting a main error message telling the user to check the specific errors on the page.
Question
How do I slice the array on the total number of files appended, if the user has initially attached a file count less than 10, then attaches further files taking it more than 10 prior to form submission?
const attachFiles = document.getElementById('attach-files'), // file input element
dropZone = document.getElementById('dropzone'),
submitData = new DataTransfer();
dropZone.addEventListener('click', () => {
// assigns the dropzone to the hidden 'files' input element/file picker
attachFiles.click();
});
attachFiles.addEventListener('change', (e) => {
const currentSubmitData = Array.from(submitData.files);
console.log(e.target.files.length);
    // For each added file, add it to 'submitData' if not already present (maximum of 10 files with slice(0, 10))
[...e.target.files].slice(0,10).forEach((file) => {
if (currentSubmitData.every((currFile) => currFile.name !== file.name)) {
submitData.items.add(file);
}
});
// Sync attachFiles FileList with submitData FileList
attachFiles.files = submitData.files;
// Clear the previewWrapper before generating new previews
previewWrapper.replaceChildren();
// the 'decode()' function inside the 'showFiles()' function is returned
// we wait for all of the promises for each image to settle
Promise.allSettled([...submitData.files].map(showFiles)).then((results) => {
// output main error message at top of page alerting user to error messages attached to images
});
}); // end of 'change' event listener
function showFiles(file) {
// code to generate image previews and append them to the 'previewWrapper'
// then use the decode() method that returns a promise and do JS validations on the preview images
return previewImage.decode().then(() => {
        // perform JS validations and append
}).catch((error) => {
console.log(error)
});
} // end of showfiles(file)
A:
We can keep the count of the number of files already added.
Instead of [...e.target.files].slice(0,10).forEach((file) => {}, we can do something like:
var numberOfFilesToAdd = 10 - currentSubmitData.length;
[...e.target.files].slice(0, numberOfFilesToAdd).forEach((file) => { /* ... */ });
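Put together with the change handler from the question, the idea might look like this (a sketch reusing the attachFiles and submitData names from the question; note that duplicate files still consume slots in this version):
attachFiles.addEventListener('change', (e) => {
  const currentSubmitData = Array.from(submitData.files);
  // only accept as many new files as there are free slots left out of 10
  const numberOfFilesToAdd = Math.max(0, 10 - currentSubmitData.length);
  [...e.target.files].slice(0, numberOfFilesToAdd).forEach((file) => {
    if (currentSubmitData.every((currFile) => currFile.name !== file.name)) {
      submitData.items.add(file);
    }
  });
  attachFiles.files = submitData.files;
});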
A:
Keep an array (below, called allSelectedFiles) of all of the files selected so far, and keep adding to that array as user selects more.
Keep another array (below, called filesForUpload) that's a subset of the first array, filtered for uniqueness and sliced to the max length. Present this subset array in the DOM to give user feedback, and use it to drive the actual upload on submit.
let allSelectedFiles = [];
let filesForUpload = [];
const totalAllowed = 4; // 10 for the OP, but 4 is simpler to demo
const attachFiles = document.getElementById('attach-files');
attachFiles.addEventListener('change', e => {
allSelectedFiles = [... allSelectedFiles, ...e.target.files];
let filenames = new Set();
filesForUpload = allSelectedFiles.filter(f => {
let has = filenames.has(f.name);
filenames.add(f.name);
return !has;
});
filesForUpload = filesForUpload.slice(0, totalAllowed);
showFiles(filesForUpload);
});
// unlike the OP, just to demo: fill a <ul> with filename <li>'s
function showFiles(array) {
let list = document.getElementById('file-list');
while(list.firstChild) {
list.removeChild( list.firstChild );
}
for (let file of array) {
let item = document.createElement('li');
item.appendChild(document.createTextNode(file.name));
list.appendChild(item);
}
}
// on form submit, not called in the above, trim the selected files
// using the same filtering approach shown above
function submit() {
/*
let submitData = new DataTransfer();
for (let file of filesForUpload) {
    submitData.items.add(file)
}
// and so on
*/
}
<h3>Input some files</h3>
<p>To see the logic, choose > 4 files, choose some overlapping names</p>
<input type="file" id="attach-files" multiple />
<ul id="file-list"></ul>
|
Slice An Array In A Change Event Listener For Files When The Total Allowed Number Of Files Is Added In More Than One Go - JavaScript
|
I have a form that takes file uploads and it currently has a limit of 10 files per upload. There are PHP validations in the backend for this too.
When more than 10 files are attached, I currently have a JavaScript slice(0, 10) method inside a change event for the file input element, which removes any files (and their preview image thumbnails) when the number attached is more than 10 files.
// For each added file, add it to submitData (the DataTransfer Object), if not already present
[...e.target.files].slice(0,10).forEach((file) => {
if (currentSubmitData.every((currFile) => currFile.name !== file.name)) {
submitData.items.add(file);
}
});
The Issue
What I can’t seem to do though is work out a way to slice() the files array in a compound attachment situation, i.e. if 8 files are attached initially, and then the user decides to add another 4 prior to submitting the form, taking the total to 12. The current slice only happens when more than 10 are added in one go.
I have a decode() method that runs inside a loop (for every image attached) that carries out frontend validations, and a Promise.allSettled() call that waits for the last image to be attached prior to outputting a main error message telling the user to check the specific errors on the page.
Question
How do I slice the array on the total number of files appended, if the user has initially attached a file count less than 10, then attaches further files taking it more than 10 prior to form submission?
const attachFiles = document.getElementById('attach-files'), // file input element
dropZone = document.getElementById('dropzone'),
submitData = new DataTransfer();
dropZone.addEventListener('click', () => {
// assigns the dropzone to the hidden 'files' input element/file picker
attachFiles.click();
});
attachFiles.addEventListener('change', (e) => {
const currentSubmitData = Array.from(submitData.files);
console.log(e.target.files.length);
    // For each added file, add it to 'submitData' if not already present (maximum of 10 files with slice(0, 10))
[...e.target.files].slice(0,10).forEach((file) => {
if (currentSubmitData.every((currFile) => currFile.name !== file.name)) {
submitData.items.add(file);
}
});
// Sync attachFiles FileList with submitData FileList
attachFiles.files = submitData.files;
// Clear the previewWrapper before generating new previews
previewWrapper.replaceChildren();
// the 'decode()' function inside the 'showFiles()' function is returned
// we wait for all of the promises for each image to settle
Promise.allSettled([...submitData.files].map(showFiles)).then((results) => {
// output main error message at top of page alerting user to error messages attached to images
});
}); // end of 'change' event listener
function showFiles(file) {
// code to generate image previews and append them to the 'previewWrapper'
// then use the decode() method that returns a promise and do JS validations on the preview images
return previewImage.decode().then(() => {
        // perform JS validations and append
}).catch((error) => {
console.log(error)
});
} // end of showfiles(file)
|
[
"We can keep the count of the number of files already added.\nInstead of [...e.target.files].slice(0,10).forEach((file) => {}, we can do something like:\nvar numberOfFilesToAdd = 10 - currentSubmitData.length;\n[...e.target.files].slice(0,numberOfFilesToAdd).forEach((file) => {}\n\n",
"Keep an array (below, called allSelectedFiles) of all of the files selected so far, and keep adding to that array as user selects more.\nKeep another array (below, called filesForUpload) that's a subset of the first array, filtered for uniqueness and sliced to the max length. Present this subset array in the DOM to give user feedback, and use it to drive the actual upload on submit.\n\n\nlet allSelectedFiles = [];\nlet filesForUpload = [];\n\nconst totalAllowed = 4; // 10 for the OP, but 3 is simpler to demo\n\nconst attachFiles = document.getElementById('attach-files');\n\nattachFiles.addEventListener('change', e => {\n allSelectedFiles = [... allSelectedFiles, ...e.target.files];\n \n let filenames = new Set();\n filesForUpload = allSelectedFiles.filter(f => {\n let has = filenames.has(f.name);\n filenames.add(f.name);\n return !has;\n });\n filesForUpload = filesForUpload.slice(0, totalAllowed);\n showFiles(filesForUpload);\n});\n\n// unlike the OP, just to demo: fill a <ul> with filename <li>'s\nfunction showFiles(array) {\n let list = document.getElementById('file-list');\n while(list.firstChild) {\n list.removeChild( list.firstChild );\n }\n for (let file of array) {\n let item = document.createElement('li');\n item.appendChild(document.createTextNode(file.name));\n list.appendChild(item);\n }\n}\n\n// on form submit, not called in the above, trim the selected files \n// using the same approach: someSelectedFiles()\nfunction submit() {\n /* \n let submitData = new DataTransfer();\n for (let file of filesForUpload) {\n submitData.add(file)\n }\n // and so on\n */\n}\n<h3>Input some files</h3>\n<p>To see the logic, choose > 4 files, choose some overlapping names</p>\n<input type=\"file\" id=\"attach-files\" multiple />\n<ul id=\"file-list\"></ul>\n\n\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"arrays",
"dom_events",
"javascript",
"slice"
] |
stackoverflow_0074618535_arrays_dom_events_javascript_slice.txt
|
Q:
AWS Resource Usage Data - CPU, Memory and Disk
I am trying to build an analytics dashboard using the below metrics/KPIs for all the EC2 instances.
Total CPU vs CPUUtilized
Total RAM vs RAMUtilized
Total EBS Volume vs EBSUtilized.
For example, I have launched an EC2 instance with 4 CPUs, 16 GiB RAM and a 50 GB SSD, and I would like to track the above KPIs as a time-series trend. I have no clue where to get this data from EC2. I tried the EC2 instance metrics through CloudWatch using the boto3 client, however I did not get the above metrics. I would like to know:
Where can I find the data for the above metrics?
I need the above metrics data in S3 on a daily basis.
Similarly, is there a way to get similar metrics for AWS RDS and an AWS EKS cluster?
Thanks!
A:
The Amazon EC2 service collects information about the virtual machine (instance) and sends it to Amazon CloudWatch as metrics.
See: List the available CloudWatch metrics for your instances - Amazon Elastic Compute Cloud
Note that it only collects metrics that can be observed from the virtual machine itself -- CPU Utilization, network traffic and Amazon EBS traffic. The EC2 service cannot see what is happening 'inside' the instance, since it is the Operating System that controls memory and manages the contents of the disks.
If you wish to collect metrics from the Operating System, then you would need to Collect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent - Amazon CloudWatch. This agent runs in the instance and sends metrics out to CloudWatch.
You can write code that calls the CloudWatch Metrics APIs to retrieve metrics. Note that the metrics returned are calculated over a time period (eg average CPU Utilization over a 5-minute period). It is not possible to retrieve the actual raw datapoints.
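For example, a minimal boto3 sketch (the instance ID is a placeholder) that pulls average CPUUtilization datapoints; a scheduled job could write these out to S3 daily:
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=300,                # 5-minute aggregation periods
    Statistics=["Average"],
)

# datapoints are returned unordered, so sort by timestamp before use
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])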
See also:
Monitoring Amazon RDS metrics with Amazon CloudWatch - Amazon Relational Database Service
Amazon EKS and Kubernetes Container Insights metrics - Amazon CloudWatch
|
AWS Resource Usage Data - CPU, Memory and Disk
|
I am trying to build an analytics dashboard using the below metrics/KPIs for all the EC2 instances.
Total CPU vs CPUUtilized
Total RAM vs RAMUtilized
Total EBS Volume vs EBSUtilized.
For example, I have launched an EC2 instance with 4 CPUs, 16 GiB RAM and a 50 GB SSD, and I would like to track the above KPIs as a time-series trend. I have no clue where to get this data from EC2. I tried the EC2 instance metrics through CloudWatch using the boto3 client, however I did not get the above metrics. I would like to know:
Where can I find the data for the above metrics?
I need the above metrics data in S3 on a daily basis.
Similarly, is there a way to get similar metrics for AWS RDS and an AWS EKS cluster?
Thanks!
|
[
"The Amazon EC2 service collects information about the virtual machine (instance) and sends it to Amazon CloudWatch Logs.\nSee: List the available CloudWatch metrics for your instances - Amazon Elastic Compute Cloud\nNote that it only collects metrics that can be observed from the virtual machine itself -- CPU Utilization, network traffic and Amazon EBS traffic. The EC2 service cannot see what is happening 'inside' the instance, since it is the Operating System that controls memory and manages the contents of the disks.\nIf you wish to collect metrics from the Operating System, then you would need to Collect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent - Amazon CloudWatch. This agent runs in the instance and sends metrics out to CloudWatch.\nYou can write code that calls the CloudWatch Metrics APIs to retrieve metrics. Note that the metrics returned are calculated over a time period (eg average CPU Utilization over a 5-minute period). It is not possible to retrieve the actual raw datapoints.\nSee also:\n\nMonitoring Amazon RDS metrics with Amazon CloudWatch - Amazon Relational Database Service\nAmazon EKS and Kubernetes Container Insights metrics - Amazon CloudWatch\n\n"
] |
[
2
] |
[] |
[] |
[
"amazon_cloudwatch",
"amazon_ec2",
"amazon_web_services"
] |
stackoverflow_0074663846_amazon_cloudwatch_amazon_ec2_amazon_web_services.txt
|
Q:
Is CFBundleTypeIconFile array mandatory for adding custom file type support on iOS app?
Is CFBundleTypeIconFile array mandatory for adding custom file type support on iOS app? I've found different information on iOS SDK help.
We see here that the CFBundleTypeIconFile array is mandatory:
The entry for each document type should contain the following keys:
CFBundleTypeIconFile
CFBundleTypeName
CFBundleTypeRole
In addition to these keys, it must contain at least one of the following keys:
LSItemContentTypes
CFBundleTypeExtensions
CFBundleTypeMIMETypes
CFBundleTypeOSTypes
https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html#//apple_ref/doc/uid/TP40009249-SW9
And here we can read that the CFBundleTypeIconFile array is not mandatory:
Each dictionary in the CFBundleDocumentTypes array can include the following keys:
CFBundleTypeName specifies the name of the document type.
CFBundleTypeIconFiles is an array of filenames for the image resources to use as the document’s icon.
LSItemContentTypes contains an array of strings with the UTI types that represent the supported file types in this group.
LSHandlerRank describes whether this application owns the document type or is merely able to open it.
https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/DocumentInteraction_TopicsForIOS/Articles/RegisteringtheFileTypesYourAppSupports.html#//apple_ref/doc/uid/TP40010411-SW1
Where is the truth? Can I use the empty array? I just don't want to specify custom images for files to use app icon by default.
A:
File icons look fine without special icons inside the bundle.
A:
There are two similar keys.
CFBundleTypeIconFile applicable only for macOS
CFBundleTypeIconFiles applicable only for iOS
Looks like this detail in the fifth column named "Platforms" resolves the seeming discrepancy in the documentation.
EDIT: Found further related information in the documentation:
The way you specify icon files in macOS and iOS is different because of the supported file formats on each platform. In iOS, each icon resource file is typically a PNG file that contains only one image. Therefore, it is necessary to specify different image files for different icon sizes. However, when specifying icons in macOS, you use an icon file (with extension .icns), which is capable of storing the icon at several different resolutions.
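For reference, a minimal Info.plist sketch that omits the icon array entirely (the UTI com.example.mydoc is a made-up placeholder):
<key>CFBundleDocumentTypes</key>
<array>
  <dict>
    <key>CFBundleTypeName</key>
    <string>My Custom Document</string>
    <key>LSItemContentTypes</key>
    <array>
      <string>com.example.mydoc</string>
    </array>
    <key>LSHandlerRank</key>
    <string>Owner</string>
    <!-- CFBundleTypeIconFiles omitted so the system falls back to a default icon -->
  </dict>
</array>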
|
Is CFBundleTypeIconFile array mandatory for adding custom file type support on iOS app?
|
Is CFBundleTypeIconFile array mandatory for adding custom file type support on iOS app? I've found different information on iOS SDK help.
We see here that the CFBundleTypeIconFile array is mandatory:
The entry for each document type should contain the following keys:
CFBundleTypeIconFile
CFBundleTypeName
CFBundleTypeRole
In addition to these keys, it must contain at least one of the following keys:
LSItemContentTypes
CFBundleTypeExtensions
CFBundleTypeMIMETypes
CFBundleTypeOSTypes
https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html#//apple_ref/doc/uid/TP40009249-SW9
And here we can read that the CFBundleTypeIconFile array is not mandatory:
Each dictionary in the CFBundleDocumentTypes array can include the following keys:
CFBundleTypeName specifies the name of the document type.
CFBundleTypeIconFiles is an array of filenames for the image resources to use as the document’s icon.
LSItemContentTypes contains an array of strings with the UTI types that represent the supported file types in this group.
LSHandlerRank describes whether this application owns the document type or is merely able to open it.
https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/DocumentInteraction_TopicsForIOS/Articles/RegisteringtheFileTypesYourAppSupports.html#//apple_ref/doc/uid/TP40010411-SW1
Where is the truth? Can I use the empty array? I just don't want to specify custom images for files to use app icon by default.
|
[
"File icons look fine without special icons inside the bandle.\n",
"There are two similar keys.\n\nCFBundleTypeIconFile applicable only for macOS\nCFBundleTypeIconFiles applicable only for iOS\n\nLooks like this detail in the fifth column named \"Platforms\" resolves the seeming discrepancy in the documentation.\nEDIT: Found further related information in the documentation:\n\nThe way you specify icon files in macOS and iOS is different because of the supported file formats on each platform. In iOS, each icon resource file is typically a PNG file that contains only one image. Therefore, it is necessary to specify different image files for different icon sizes. However, when specifying icons in macOS, you use an icon file (with extension .icns), which is capable of storing the icon at several different resolutions.\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"file_association",
"icons",
"info.plist",
"ios"
] |
stackoverflow_0027026593_file_association_icons_info.plist_ios.txt
|
Q:
Use list items in variable in python requests url
I am trying to make a call to an API and then grab event_ids from the data. I then want to use those event ids as variables in another request, then parse that data. Then loop back and make another request using the next event id in the event_id variable for all the IDs.
So far I have the following:
def nba_odds():
url = "https://xxxxx.com.au/sports/summary/basketball?api_key=xxxxx"
response = requests.get(url)
data = response.json()
event_ids = []
for event in data['Events']:
if event['Country'] == 'USA' and event['League'] == 'NBA':
event_ids.append(event['EventID'])
# print(event_ids)
game_url = f'https://xxxxx.com.au/sports/detail/{event_ids}?api_key=xxxxx'
game_response = requests.get(game_url)
game_data = game_response.json()
print(game_url)
that gives me the result below in the terminal.
https://xxxxx.com.au/sports/detail/['dbx-1425135', 'dbx-1425133', 'dbx-1425134', 'dbx-1425136', 'dbx-1425137', 'dbx-1425138', 'dbx-1425139', 'dbx-1425140', 'anyvsany-nba01-1670043600000000000', 'dbx-1425141', 'dbx-1425142', 'dbx-1425143', 'dbx-1425144', 'dbx-1425145', 'dbx-1425148', 'dbx-1425149', 'dbx-1425147', 'dbx-1425146', 'dbx-1425150', 'e95270f6-661b-46dc-80b9-cd1af75d38fb', '0c989be7-0802-4683-8bb2-d26569e6dcf9']?api_key=779ac51a-2fff-4ad6-8a3e-6a245a0a4cbb
The format of the URL above should instead look like
https://xxxx.com.au/sports/detail/dbx-1425135
If anyone can point me in the right direction it would be appreciated.
thanks.
A:
event_ids is an entire list of event ids. You make a single URL with the full list converted to its string view (['dbx-1425135', 'dbx-1425133', ...]). But it looks like you want to get information on each event in turn. To do that, put the second request in the loop so that it runs for every event you find interesting.
def nba_odds():
url = "https://xxxxx.com.au/sports/summary/basketball?api_key=xxxxx"
response = requests.get(url)
data = response.json()
event_ids = []
for event in data['Events']:
if event['Country'] == 'USA' and event['League'] == 'NBA':
event_id = event['EventID']
# print(event_id)
game_url = f'https://xxxxx.com.au/sports/detail/{event_id}?api_key=xxxxx'
game_response = requests.get(game_url)
game_data = game_response.json()
# do something with game_data - it will be overwritten
# on next round in the loop
print(game_url)
A:
You need to loop over the event IDs and call the API with one event_id at a time if it does not support multiple event IDs, like:
all_events_response = []
for event_id in event_ids:
    game_url = f'https://xxxxx.com.au/sports/detail/{event_id}?api_key=xxxxx'
    game_response = requests.get(game_url)
    game_data = game_response.json()
    all_events_response.append(game_data)
    print(game_url)
You can then find the list of JSON responses in all_events_response.
|
Use list items in variable in python requests url
|
I am trying to make a call to an API and then grab event_ids from the data. I then want to use those event ids as variables in another request, then parse that data. Then loop back and make another request using the next event id in the event_id variable for all the IDs.
So far I have the following:
def nba_odds():
url = "https://xxxxx.com.au/sports/summary/basketball?api_key=xxxxx"
response = requests.get(url)
data = response.json()
event_ids = []
for event in data['Events']:
if event['Country'] == 'USA' and event['League'] == 'NBA':
event_ids.append(event['EventID'])
# print(event_ids)
game_url = f'https://xxxxx.com.au/sports/detail/{event_ids}?api_key=xxxxx'
game_response = requests.get(game_url)
game_data = game_response.json()
print(game_url)
that gives me the result below in the terminal.
https://xxxxx.com.au/sports/detail/['dbx-1425135', 'dbx-1425133', 'dbx-1425134', 'dbx-1425136', 'dbx-1425137', 'dbx-1425138', 'dbx-1425139', 'dbx-1425140', 'anyvsany-nba01-1670043600000000000', 'dbx-1425141', 'dbx-1425142', 'dbx-1425143', 'dbx-1425144', 'dbx-1425145', 'dbx-1425148', 'dbx-1425149', 'dbx-1425147', 'dbx-1425146', 'dbx-1425150', 'e95270f6-661b-46dc-80b9-cd1af75d38fb', '0c989be7-0802-4683-8bb2-d26569e6dcf9']?api_key=779ac51a-2fff-4ad6-8a3e-6a245a0a4cbb
The format of the URL above should instead look like
https://xxxx.com.au/sports/detail/dbx-1425135
If anyone can point me in the right direction it would be appreciated.
thanks.
|
[
"event_ids is an entire list of event ids. You make a single URL with the full list converted to its string view (['dbx-1425135', 'dbx-1425133', ...]). But it looks like you want to get information on each event in turn. To do that, put the second request in the loop so that it runs for every event you find interesting.\ndef nba_odds():\n\n url = \"https://xxxxx.com.au/sports/summary/basketball?api_key=xxxxx\"\n response = requests.get(url)\n data = response.json()\n\n event_ids = []\n\n for event in data['Events']:\n if event['Country'] == 'USA' and event['League'] == 'NBA':\n event_id = event['EventID']\n # print(event_id)\n game_url = f'https://xxxxx.com.au/sports/detail/{event_id}?api_key=xxxxx'\n game_response = requests.get(game_url)\n game_data = game_response.json()\n # do something with game_data - it will be overwritten\n # on next round in the loop\n print(game_url)\n\n",
"you need to loop over the event ID's again to call the API with one event_id if it is not supporting multiple event_ids like:\n all_events_response = []\n for event_id in event_ids\n game_url = f'https://xxxxx.com.au/sports/detail/{event_id}?api_key=xxxxx'\n game_response = requests.get(game_url)\n game_data = game_response.json()\n all_events_response.append(game_data)\n print(game_url)\n\nYou can find list of json responses under all_events_response\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"request"
] |
stackoverflow_0074664098_python_request.txt
|
Q:
checking if string is palindrome or not
I couldn't find the error in the code.
It simply checks for a palindrome by reversing the string and matching it with the original.
The code runs perfectly in my code editor but raises an error in the HackerRank test cases.
import java.io.*;
import java.util.*;
public class Solution {
public static void main(String[] args) {
Scanner sc=new Scanner(System.in);
String A=sc.next();
char[] ch = A.toCharArray();
String new_str = "";
for(char c : ch)
{
new_str = c + new_str;
}
System.out.println(new_str);
if(new_str.equals(A))
{
System.out.println("Yes");
}
else
{
System.out.println("No");
}
}
}
A:
Here are a few ways you can improve your solution:
Instead of reversing the string by adding each character to the beginning of a new string, you can use the StringBuilder.reverse() method to more efficiently reverse the string. For example:
StringBuilder sb = new StringBuilder(A);
String new_str = sb.reverse().toString();
Instead of comparing the original string and the reversed string using the equals() method, you can use the String.equalsIgnoreCase() method, which ignores the case of the characters in the strings and returns true if the two strings are equal ignoring case. For example:
if(new_str.equalsIgnoreCase(A))
You can use regular expressions to check if the string only contains alphanumeric characters (i.e. letters and digits), and ignore any other characters such as whitespace or punctuation. To do this, you can use the String.matches() method and specify a regular expression that matches alphanumeric characters. For example:
if(A.matches("^[a-zA-Z0-9]+$"))
This checks if the string A only contains letters and digits, and returns true if it does. You can then check if the string is a palindrome by comparing the original string with the reversed string, ignoring case as before.
You can also use regular expressions to remove any non-alphanumeric characters from the string before checking if it is a palindrome. To do this, you can use the String.replaceAll() method and specify a regular expression that matches any non-alphanumeric characters. For example:
String new_str = A.replaceAll("[^a-zA-Z0-9]", "");
This removes all non-alphanumeric characters from the string A, and assigns the resulting string to new_str. You can then reverse new_str and compare it with the original string, ignoring case, to check if the string is a palindrome.
By using these techniques, you can create a more robust and efficient solution to check if a string is a palindrome or not.
Here is an updated solution that incorporates the improvements discussed above:
import java.io.*;
import java.util.*;
public class Solution {
public static void main(String[] args) {
Scanner sc=new Scanner(System.in);
String A=sc.nextLine(); // read the entire line of input, including whitespace
// check if the string contains only alphanumeric characters
if(A.matches("^[a-zA-Z0-9]+$"))
{
// remove any non-alphanumeric characters from the string
String new_str = A.replaceAll("[^a-zA-Z0-9]", "");
// reverse the string
StringBuilder sb = new StringBuilder(new_str);
new_str = sb.reverse().toString();
// check if the original string is equal to the reversed string, ignoring case
if(new_str.equalsIgnoreCase(A))
{
System.out.println("Yes");
}
else
{
System.out.println("No");
}
}
else
{
// the input string contains non-alphanumeric characters, so it cannot be a palindrome
System.out.println("No");
}
}
}
This code should be more efficient and robust than the previous solution, and should work for most inputs and produce the correct output.
|
checking if string is palindrome or not
|
I couldn't find the error in the code.
It simply checks for a palindrome by reversing the string and matching it with the original.
The code runs perfectly in my code editor but raises an error in the HackerRank test cases.
import java.io.*;
import java.util.*;
public class Solution {
public static void main(String[] args) {
Scanner sc=new Scanner(System.in);
String A=sc.next();
char[] ch = A.toCharArray();
String new_str = "";
for(char c : ch)
{
new_str = c + new_str;
}
System.out.println(new_str);
if(new_str.equals(A))
{
System.out.println("Yes");
}
else
{
System.out.println("No");
}
}
}
|
[
"Here are a few ways you can improve your solution:\n\nInstead of reversing the string by adding each character to the beginning of a new string, you can use the StringBuilder.reverse() method to more efficiently reverse the string. For example:\n\nStringBuilder sb = new StringBuilder(A);\nString new_str = sb.reverse().toString();\n\n\nInstead of comparing the original string and the reversed string using the equals() method, you can use the String.equalsIgnoreCase() method, which ignores the case of the characters in the strings and returns true if the two strings are equal ignoring case. For example:\n\nif(new_str.equalsIgnoreCase(A))\n\n\nYou can use regular expressions to check if the string only contains alphanumeric characters (i.e. letters and digits), and ignore any other characters such as whitespace or punctuation. To do this, you can use the String.matches() method and specify a regular expression that matches alphanumeric characters. For example:\n\nif(A.matches(\"^[a-zA-Z0-9]+$\"))\n\nThis checks if the string A only contains letters and digits, and returns true if it does. You can then check if the string is a palindrome by comparing the original string with the reversed string, ignoring case as before.\n\nYou can also use regular expressions to remove any non-alphanumeric characters from the string before checking if it is a palindrome. To do this, you can use the String.replaceAll() method and specify a regular expression that matches any non-alphanumeric characters. For example:\n\nString new_str = A.replaceAll(\"[^a-zA-Z0-9]\", \"\");\n\nThis removes all non-alphanumeric characters from the string A, and assigns the resulting string to new_str. You can then reverse new_str and compare it with the original string, ignoring case, to check if the string is a palindrome.\nBy using these techniques, you can create a more robust and efficient solution to check if a string is a palindrome or not.\nHere is an updated solution that incorporates the improvements discussed above:\nimport java.io.*;\nimport java.util.*;\n\npublic class Solution {\n\n public static void main(String[] args) {\n \n Scanner sc=new Scanner(System.in);\n String A=sc.nextLine(); // read the entire line of input, including whitespace\n \n // check if the string contains only alphanumeric characters\n if(A.matches(\"^[a-zA-Z0-9]+$\"))\n {\n // remove any non-alphanumeric characters from the string\n String new_str = A.replaceAll(\"[^a-zA-Z0-9]\", \"\");\n\n // reverse the string\n StringBuilder sb = new StringBuilder(new_str);\n new_str = sb.reverse().toString();\n\n // check if the original string is equal to the reversed string, ignoring case\n if(new_str.equalsIgnoreCase(A))\n {\n System.out.println(\"Yes\");\n }\n else\n {\n System.out.println(\"No\");\n }\n }\n else\n {\n // the input string contains non-alphanumeric characters, so it cannot be a palindrome\n System.out.println(\"No\");\n }\n \n }\n}\n\nThis code should be more efficient and robust than the previous solution, and should work for most inputs and produce the correct output.\n"
] |
[
0
] |
[] |
[] |
[
"char",
"java",
"palindrome",
"reverse",
"string"
] |
stackoverflow_0074664127_char_java_palindrome_reverse_string.txt
|
Q:
Python Count Characters
Write a program whose input is a string which contains a character and a phrase, and whose output indicates the number of times the character appears in the phrase. The output should include the input character and use the plural form, n's if the number of times the characters appears is not exactly 1.
Ex: If the input is:
n Monday
the output is:
1 n
Ex: If the input is:
z Today is Monday
the output is:
0 z's
Ex: If the input is:
n It's a sunny day
the output is:
2 n's
Case matters. n is different than N.
Ex: If the input is:
n Nobody
the output is:
0 n's
This is what I have so far:
user_string=input(str())
character=user_string[0]
phrase=user_string[1]
count=0
for i in phrase:
if i == character:
count = count+1
if count!= 1:
print(str(count) + " " + character + "'s")
else:
print(str(count) + " " + character)
This works great for the phrases that have 0 characters matching, but it's not counting the ones that should match.
A:
user_string=input(str())
character=user_string[0]
phrase=user_string[1:]
count=0
for i in phrase:
if i == character:
count = count+1
if count != 1:
print(str(count) + " " + character + "'s")
else:
print(str(count) + " " + character)
A:
I suggest just using str.count. (Note the parentheses around the conditional below; without them Python applies the if/else to the whole concatenation and prints an empty string when the count is exactly 1.)
user_string = input()
character, phrase = user_string[0], user_string[1:]
count = phrase.count(character)
print(f"{count} {character}" + ("'s" if count != 1 else ''))
A:
We will take the user's input, with the assumption that the first letter is the one that you are counting, and find that character with user_string.split()[0]. We will then take all the other words from the user's input (with user_string.split()[1:]), join them with ''.join and then explode them into a list of letters with [*]. We will return a list of "hits" for the character we are looking for. The length of that list will be the number of "hits".
user_string=input()
numOfLetters = [letter for letter in [*''.join(user_string.split()[1:])]
if user_string[0]==letter]
print(f'Number of {user_string[0]} is: {len(numOfLetters)}')
t This is a test # Input
Number of t is: 2 # Output
h Another test for comparison # Input
Number of h is: 1 # Output
A:
user_string = input(str())
character = user_string[0]
phrase = user_string[1:40]  # if more characters are needed, just make the 40 larger
count = 0
for i in phrase:
    if i == character:
        count = count + 1
if count != 1:
    print(str(count) + " " + character + "'s")
else:
    print(str(count) + " " + character)
|
Python Count Characters
|
Write a program whose input is a string which contains a character and a phrase, and whose output indicates the number of times the character appears in the phrase. The output should include the input character and use the plural form, n's if the number of times the characters appears is not exactly 1.
Ex: If the input is:
n Monday
the output is:
1 n
Ex: If the input is:
z Today is Monday
the output is:
0 z's
Ex: If the input is:
n It's a sunny day
the output is:
2 n's
Case matters. n is different than N.
Ex: If the input is:
n Nobody
the output is:
0 n's
This is what I have so far:
user_string=input(str())
character=user_string[0]
phrase=user_string[1]
count=0
for i in phrase:
if i == character:
count = count+1
if count!= 1:
print(str(count) + " " + character + "'s")
else:
print(str(count) + " " + character)
This works great for the phrases that have 0 characters matching, but it's not counting the ones that should match.
|
[
"user_string=input(str())\ncharacter=user_string[0]\nphrase=user_string[1:]\ncount=0\n\nfor i in phrase:\n if i == character:\n count = count+1\n\nif count != 1:\n print(str(count) + \" \" + character + \"'s\")\nelse:\n print(str(count) + \" \" + character)\n\n",
"Suggest just using str.count.\nuser_string = input()\ncharacter, phrase = user_string[0], user_string[1:]\ncount = phrase.count(character)\n\nprint(f\"{count} {character}\" + \"'s\" if count != 1 else '')\n\n",
"We will take the user's input, with the assumption that the first letter is the one that you are counting, and find that character with user_string.split()[0]. We will then take all the other words from the user's input (with user_string.split()[1:]), join them with ''.join and then explode them into a list of letters with [*]. We will return a list of \"hits\" for the character we are looking for. The length of that list will be the number of \"hits\".\nuser_string=input()\n\nnumOfLetters = [letter for letter in [*''.join(user_string.split()[1:])] \n if user_string[0]==letter]\nprint(f'Number of {user_string[0]} is: {len(numOfLetters)}')\n\nt This is a test # Input\nNumber of t is: 2 # Output\n\nh Another test for comparison # Input\nNumber of h is: 1 # Output\n\n",
"user_string=input(str())\ncharacter=user_string[0]\nphrase=user_string[1:40] #if more characters are needed just make the 40 larger\ncount=0\nfor i in phrase:\nif i == character:\ncount=count+1\nif count!= 1:\nprint(str(count) + \" \" + character + \"'s\")\nelse:\nprint(str(count) + \" \" + character)\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0073437641_python.txt
|
Q:
Mismatch between computational complexity of Additive attention and RNN cell
According to the Attention Is All You Need paper: additive attention (the classic attention used in RNNs by Bahdanau) computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, ...
Indeed, we can see here that the computational complexity of additive attention and dot-product (Transformer) attention are both n²*d.
However, if we look closer at additive attention, it is in fact an RNN cell, which has a computational complexity of n*d² (according to the same table).
Thus, shouldn't the computational complexity of additive attention be n*d² instead of n²*d?
A:
Your claim that additive attention is in fact an RNN cell is what is leading you astray. Additive attention is implemented using a fully-connected, shallow (one hidden layer) feedforward neural network "between" the encoder and decoder RNNs, as shown below and described in the original paper by Bahdanau et al. (pg. 3) [1]:
... an alignment model which scores how well the inputs around position j and the output at position i match. The score is based on the RNN hidden state s_i − 1 (just before emitting y_i, Eq. (4)) and the j-th annotation h_j of the input sentence.
We parametrize the alignment model a as a feedforward neural network which is jointly trained with all the other components of the proposed system...
Figure 1: Attention mechanism diagram from [2].
Thus, the alignment scores are calculated by adding the projected decoder hidden state to the projected encoder outputs and scoring the result with this feedforward network. So additive attention is not an RNN cell.
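To make the operation count concrete, here is a sketch in Bahdanau-style notation (exact symbol names vary between papers). The additive score for decoder step $i$ and encoder position $j$ is

$$e_{ij} = v_a^\top \tanh(W_a s_{i-1} + U_a h_j)$$

The projections $W_a s_{i-1}$ and $U_a h_j$ can each be precomputed once per position for a total cost of $O(n \cdot d^2)$; after that, each of the $n^2$ pairs $(i, j)$ only needs an $O(d)$ addition, $\tanh$, and dot product with $v_a$, which gives the $O(n^2 \cdot d)$ entry in the table. The $O(n \cdot d^2)$ cost belongs to the RNN recurrence itself, not to the attention scoring.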
References
[1] Bahdanau, D., Cho, K. and Bengio, Y., 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[2] Arbel, N., 2019. Attention in RNNs. Medium blog post.
|
Mismatch between computational complexity of Additive attention and RNN cell
|
According to the Attention Is All You Need paper: additive attention (the classic attention used in RNNs by Bahdanau) computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, ...
Indeed, we can see here that the computational complexity of additive attention and dot-product (Transformer) attention are both n²*d.
However, if we look closer at additive attention, it is in fact an RNN cell, which has a computational complexity of n*d² (according to the same table).
Thus, shouldn't the computational complexity of additive attention be n*d² instead of n²*d?
|
[
"Your claim that additive attention is in fact a RNN cell is what is leading you astray. Additive attention is implemented using a fully-connected shallow (1 hidden layer) feedforward neural network \"between\" the encoder and decoder RNNs as shown below and described in the original paper by Bahdanau et al. (pg. 3) [1]:\n\n... an alignment model which scores how well the inputs around position j and the output at position i match. The score is based on the RNN hidden state s_i − 1 (just before emitting y_i, Eq. (4)) and the j-th annotation h_j of the input sentence.\nWe parametrize the alignment model a as a feedforward neural network which is jointly trained with all the other components of the proposed system...\n\n\nFigure 1: Attention mechanism diagram from [2].\nThus, the alignment scores are calculated by adding the outputs of the decoder hidden state to the encoder outputs. So the additive attention is not a RNN cell.\nReferences\n[1] Bahdanau, D., Cho, K. and Bengio, Y., 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.\n[2] Arbel, N., 2019. Attention in RNNs. Medium blog post.\n"
] |
[
1
] |
[] |
[] |
[
"attention_model",
"deep_learning",
"machine_learning",
"nlp",
"recurrent_neural_network"
] |
stackoverflow_0074657242_attention_model_deep_learning_machine_learning_nlp_recurrent_neural_network.txt
|
Q:
how should resolve this querySnapshot?
QuestionModel getQuestionModelFromDatasnapshot(DocumentSnapshot questionSnapshot) {
QuestionModel questionModel = new QuestionModel();
questionModel.question = questionSnapshot.data["question"];
/// shuffling the options
List<String> options = [
questionSnapshot.data["option1"],
questionSnapshot.data["option2"],
questionSnapshot.data["option3"],
questionSnapshot.data["option4"]
];
options.shuffle();
questionModel.option1 = options[0];
questionModel.option2 = options[1];
questionModel.option3 = options[2];
questionModel.option4 = options[3];
questionModel.correctOption = questionSnapshot.data["option1"];
questionModel.answered = false;
print(questionModel.correctOption.toLowerCase());
return questionModel;
}
A:
The code itself already follows the right flow: create a QuestionModel, copy the fields from the questionSnapshot, shuffle the options, set correctOption and answered, and return the model. The usual failure point in this snippet is the field access: in recent versions of the cloud_firestore package, data is a method rather than a getter, so questionSnapshot.data["question"] will not compile; accessing the fields via questionSnapshot.data()?["question"] or questionSnapshot.get("question") should resolve it.
|
how should resolve this querySnapshot?
|
QuestionModel getQuestionModelFromDatasnapshot(DocumentSnapshot questionSnapshot) {
QuestionModel questionModel = new QuestionModel();
questionModel.question = questionSnapshot.data["question"];
/// shuffling the options
List<String> options = [
questionSnapshot.data["option1"],
questionSnapshot.data["option2"],
questionSnapshot.data["option3"],
questionSnapshot.data["option4"]
];
options.shuffle();
questionModel.option1 = options[0];
questionModel.option2 = options[1];
questionModel.option3 = options[2];
questionModel.option4 = options[3];
questionModel.correctOption = questionSnapshot.data["option1"];
questionModel.answered = false;
print(questionModel.correctOption.toLowerCase());
return questionModel;
}
|
[
"To resolve this querySnapshot, you can first create a new instance of the QuestionModel class and assign the data from the questionSnapshot to the appropriate fields in the questionModel object. Then, you can shuffle the options and assign them to the appropriate fields in the questionModel object. Finally, you can set the correctOption and answered fields in the questionModel object and return the object.\n"
] |
[
0
] |
[] |
[] |
[
"flutter"
] |
stackoverflow_0074664128_flutter.txt
|
Q:
How to apply an if conditional using def with multiple parameters
I am new to def functions. I am trying to get the logic in a def function with multiple if conditions. I want x, y, z to be flexible parameters so I can change the parameter values in x, y, z, but I can't get the desired output. Can anyone help?
df =
date comp mark value score test1
0 2022-01-01 a 1 10 100
1 2022-01-02 b 2 20 200
2 2022-01-03 c 3 30 300
3 2022-01-04 d 4 40 400
4 2022-01-05 e 5 50 500
Desired ouput =
date comp mark value score test1
0 2022-01-01 a 1 10 100 200
1 2022-01-02 b 2 20 200 400
2 2022-01-03 c 3 30 300 600
3 2022-01-04 d 4 40 400 4000
4 2022-01-05 e 5 50 500 5000
I can get the result using:
def frml(df):
if (df['mark'] > 3) and (df['value'] > 30):
return df['score'] * 10
else:
return df['score'] * 2
df['test1'] = df.apply(frml,axis=1)
but I can't get the result using this; isn't the logic the same?
x = df['mark']
y = df['value']
z = df['score']
def frml(df):
if (x > 3) and (y > 30):
return z * 10
else:
return z * 2
df['test1'] = df.apply(frml,axis=1)
A:
You can use mask instead of apply:
cond1 = (df['mark'] > 3) & (df['value'] > 30)
df['score'].mul(2).mask(cond1, df['score'].mul(10))
output:
0 200
1 400
2 600
3 4000
4 5000
Name: score, dtype: int64
assign the output to the test1 column:
df.assign(test1=df['score'].mul(2).mask(cond1, df['score'].mul(10)))
result:
date comp mark value score test1
0 2022-01-01 a 1 10 100 200
1 2022-01-02 b 2 20 200 400
2 2022-01-03 c 3 30 300 600
3 2022-01-04 d 4 40 400 4000
4 2022-01-05 e 5 50 500 5000
It's possible to explain why your 2nd function doesn't work, but it's complicated.
Also, producing your output doesn't need an applied def function, so let me show you another way:
use mask, np.where, or np.select instead of applying a def function
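As a side note (an addition, not part of the answer above): the second frml fails because x, y and z are whole-column Series captured from the outer scope, so (x > 3) and (y > 30) asks Python for the truth value of an entire Series, which raises "The truth value of a Series is ambiguous". A minimal vectorized sketch with np.where, assuming the same df as above:
import numpy as np

# 10x the score where both conditions hold, otherwise 2x
df['test1'] = np.where((df['mark'] > 3) & (df['value'] > 30),
                       df['score'] * 10,
                       df['score'] * 2)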
|
How to apply an if conditional using def with multiple parameters
|
I am new to def functions. I am trying to get the logic in a def function with multiple if conditions. I want x, y, z to be flexible parameters so I can change the parameter values in x, y, z, but I can't get the desired output. Can anyone help?
df =
date comp mark value score test1
0 2022-01-01 a 1 10 100
1 2022-01-02 b 2 20 200
2 2022-01-03 c 3 30 300
3 2022-01-04 d 4 40 400
4 2022-01-05 e 5 50 500
Desired ouput =
date comp mark value score test1
0 2022-01-01 a 1 10 100 200
1 2022-01-02 b 2 20 200 400
2 2022-01-03 c 3 30 300 600
3 2022-01-04 d 4 40 400 4000
4 2022-01-05 e 5 50 500 5000
I can get the result using:
def frml(df):
if (df['mark'] > 3) and (df['value'] > 30):
return df['score'] * 10
else:
return df['score'] * 2
df['test1'] = df.apply(frml,axis=1)
but I can't get the result using this; isn't the logic the same?
x = df['mark']
y = df['value']
z = df['score']
def frml(df):
if (x > 3) and (y > 30):
return z * 10
else:
return z * 2
df['test1'] = df.apply(frml,axis=1)
|
[
"you can use mask instead apply\ncond1 = (df['mark'] > 3) & (df['value'] > 30)\ndf['score'].mul(2).mask(cond1, df['score'].mul(10))\n\noutput:\n0 200\n1 400\n2 600\n3 4000\n4 5000\nName: score, dtype: int64\n\nmake output to test1 column\ndf.assign(test1=df['score'].mul(2).mask(cond1, df['score'].mul(10)))\n\nresult:\n date comp mark value score test1\n0 2022-01-01 a 1 10 100 200\n1 2022-01-02 b 2 20 200 400\n2 2022-01-03 c 3 30 300 600\n3 2022-01-04 d 4 40 400 4000\n4 2022-01-05 e 5 50 500 5000\n\n\nIt's possible to explain why your 2nd function doesn't work, but it's complicated.\nAlso, making your output don't need apply def func.\nSo tell you another way.\n\nuse mask or np.where or np.select instead apply def func\n"
] |
[
0
] |
[] |
[] |
[
"function",
"if_statement",
"pandas",
"python"
] |
stackoverflow_0074664035_function_if_statement_pandas_python.txt
|
Q:
How to Ignore Blanks
Hi, I got the solution with a helper column. Can I get an answer without a helper column, as shown in the picture? Thanks in advance.
A:
Use SCAN() function with FILTER().
=FILTER(D6:D17,SCAN("",C6:C17,LAMBDA(a,b,IF(b="",a&b,b)))=G6)
Here SCAN() will generate an array filling empty cells with value of its above cell. Then just filter D column based on that array.
A:
Try this on cell E2, using LET for easy reading of the expression:
=LET(teams, A2:A5, names, B2:B5, dropDownValue, D2,
helper, SCAN("", teams, LAMBDA(acc,tt, IF(acc="", tt, IF(tt="", acc, tt)))),
FILTER(names, helper=dropDownValue)
)
or just using the ranges:
=FILTER(B2:B5,SCAN("",A2:A5,LAMBDA(acc,tt,IF(acc="",tt,IF(tt="",acc,tt))))=D2)
the idea is just to create the helper column on the fly via SCAN function. The rest is just to use FILTER function based on the drop-down value in cell D2. Here is the output:
Note: Based on your sample data, it is assumed the first value of teams column is non-empty and with the color value.
|
How to Ignore Blanks
|
Hi, I got the solution with a helper column. Can I get an answer without a helper column, as shown in the picture? Thanks in advance.
|
[
"Use SCAN() function with FILTER().\n=FILTER(D6:D17,SCAN(\"\",C6:C17,LAMBDA(a,b,IF(b=\"\",a&b,b)))=G6)\n\n\nHere SCAN() will generate an array filling empty cells with value of its above cell. Then just filter D column based on that array.\n\n\n",
"Try this on cell E2, using LET for easy reading of the expression:\n=LET(teams, A2:A5, names, B2:B5, dropDownValue, D2,\n helper, SCAN(\"\", teams, LAMBDA(acc,tt, IF(acc=\"\", tt, IF(tt=\"\", acc, tt)))),\n FILTER(names, helper=dropDownValue)\n)\n\nor just using the ranges:\n=FILTER(B2:B5,SCAN(\"\",A2:A5,LAMBDA(acc,tt,IF(acc=\"\",tt,IF(tt=\"\",acc,tt))))=D2)\n\nthe idea is just to create the helper column on the fly via SCAN function. The rest is just to use FILTER function based on the drop-down value in cell D2. Here is the output:\n\nNote: Based on your sample data, it is assumed the first value of teams column is non-empty and with the color value.\n"
] |
[
4,
0
] |
[] |
[] |
[
"excel"
] |
stackoverflow_0074663909_excel.txt
|
Q:
Laravel 5 - ErrorException failed to open stream: Permission denied
I've just git cloned a repository of mine containing a Laravel 5.1 project. I've gone through the normal steps to get it up and running (set up the webserver, db, etc.). I've now gone to the web address I configured and I'm getting the following error message:
ErrorException in compiled.php line 6648:
file_put_contents(/3c7922efc24ab13620cd0f35b1fdaa61): failed to open stream: Permission denied
Any ideas what folder it's trying to access?
A:
I solved the problem with this solution; note that if you are on Windows, you only apply lines 1 and 3 and it works the same.
php artisan cache:clear
chmod -R 777 storage/
composer dump-autoload
A:
To solve this problem I needed to create a few folders that were missing from my install. You require the following folders to be readable and writable by your www user:
/storage
/storage/cache
/storage/framework
/storage/framework/sessions
/storage/framework/views
/storage/framework/cache
/storage/logs
Also, set the folder permissions to 775. From the root directory run the command: sudo chmod -R 755 storage
A:
If nothing help you and if you work on windows with docker it can be the issue that wrong symlink you have, you can try:
ln -s /var/www/storage/app/public /var/www/public/storage
And after that:
chown -R $USER:www-data storage
chmod -R 777 storage
on docker bash.
A:
If you're using Ubuntu, most likely you have to give permissions to the storage folder:
cd into the project and
use the command -> sudo chmod 755 storage/*
A:
I had a similar problem on my localhost, and it was that I was trying to upload a file larger than upload_max_filesize in my php.ini file.
Just updating this to a greater value and restarting my server solved the problem. If you are having a similar problem, this might be the case.
A:
If you are on a Linux distribution with SELinux enabled, here is what you need to do:
sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/my-app/storage
sudo chown -R nginx /var/www/html/my-app/storage
sudo chmod -R u+w /var/www/html/my-app/storage
This should set the proper context for the storage directory, allowing the web server to write to it.
A:
I had this problem lately with my tests, and resolved it through: composer dump-autoload
A:
This is how I solved a similar issue, since I don't have access to the terminal to run sudo chown -R www-data:www-data /thedirectory
in
config/filesystem.php
'public' => [
'driver' => 'local',
'root' => storage_path('app/public'),
'url' => env('APP_URL').'/storage',
'visibility' => 'public',
],
Then in my uploader controller
use Illuminate\Support\Facades\Storage;
$file_path= Storage::path('public/importable');
if (!is_dir($file_path)) {mkdir( $file_path,0775, true);}
A:
This is a permission issue. I spent a few hours resolving this problem and found only 3 lines that give the permissions. First go to the root folder of your Laravel application and run these 3 lines.
php artisan cache:clear
chmod -R 777 storage/
composer dump-autoload
|
Laravel 5 - ErrorException failed to open stream: Permission denied
|
I've just git cloned a repository of mine containing a Laravel 5.1 project. I've gone through the normal steps to get it up and running (set up the webserver, db, etc.). I've now gone to the web address I configured and I'm getting the following error message:
ErrorException in compiled.php line 6648:
file_put_contents(/3c7922efc24ab13620cd0f35b1fdaa61): failed to open stream: Permission denied
Any ideas what folder it's trying to access?
|
[
"I solved the problem with this solution, clarifying that if you have Windows, you apply lines 1 and 3 and it works the same.\nphp artisan cache:clear\nchmod -R 777 storage/\ncomposer dump-autoload\n\n",
"To solve this problem I needed to create a few folder's that were missing from my install. You require the following folders to be readable and writable by your www user:\n/storage\n/storage/cache\n/storage/framework\n/storage/framework/sessions\n/storage/framework/views\n/storage/framework/cache\n/storage/logs\n\nAlso, set the folder permissions to 775. From the root directory run the command: sudo chmod -R 755 storage\n",
"If nothing help you and if you work on windows with docker it can be the issue that wrong symlink you have, you can try:\nln -s /var/www/storage/app/public /var/www/public/storage\n\nAnd after that:\nchown -R $USER:www-data storage\nchmod -R 777 storage\non docker bash.\n",
"If you're using ubuntu, most likely you have to give permissions to that folder storage\ncd into the project and\nuse the command -> sudo chmod 755 storage/*\n",
"I had similar problem in my localhost and the problem was that I was trying to upload a file larger than upload_max_filesize in my php.ini file .\n just updating this to greater value and restart my server solved the problem ,, if you having similar problem this might be the case \n",
"If you are on a Linux distribution with SELinux enabled, here is what you need to do:\nsudo chcon -R -t httpd_sys_rw_content_t /var/www/html/my-app/storage\nsudo chown -R nginx /var/www/html/my-app/storage\nsudo chmod -R u+w /var/www/html/my-app/storage\n\nThis should set the proper context for the storage directory, allowing the web server to write to it.\n",
"I had this problem lately with my tests, and resolved it through: composer dump-autoload\n",
"This is how I solve a similar issue since I don't have access to the terminal to run sudo chown -R www-data:www-data /thedirectory\nin\nconfig/filesystem.php\n\n 'public' => [\n 'driver' => 'local',\n 'root' => storage_path('app/public'),\n 'url' => env('APP_URL').'/storage',\n 'visibility' => 'public',\n ],\n\nThen in my uploader controller\nuse Illuminate\\Support\\Facades\\Storage;\n\n $file_path= Storage::path('public/importable');\n \n if (!is_dir($file_path)) {mkdir( $file_path,0775, true);}\n\n",
"This is a permission issue. I used few hours to resolve this problem. I found only 3 lines to give permissions. So first go to root folder f your Laravel application and run these 3 lines.\nphp artisan cache:clear\nchmod -R 777 storage/\ncomposer dump-autoload\n\n"
] |
[
49,
24,
5,
1,
1,
1,
0,
0,
0
] |
[
"For me it was because the single file was owned by root...written at some point when I setup the files, and the app couldn't read it. Solution could be to change ownership, get rid of it, etc.\n",
"For me solution was to change folder owner of laravel project. Ensure that is your own user and not root:www-data.\nsudo chown -R username:group directory\n\n",
"Run the following command in the directory that is being triggered to check the permissions:\nls -lah\nThen if there was any issue you can correct it with:\nsudo chown -R username:group directory\n"
] |
[
-1,
-1,
-1
] |
[
"laravel",
"laravel_5.1"
] |
stackoverflow_0036460874_laravel_laravel_5.1.txt
|
Q:
What is a good way to generate all strings of length n over a given alphabet within a range in dictionary order?
I want to write a generator s_generator(alphabet, length, start_s, end_s) that generates strings of length n over a given alphabet in dictionary order starting with start_s and ending at end_s.
For example, s_generator('ab', 4, 'aaaa', 'bbbb') generates ['aaaa', 'aaab', 'aaba', 'aabb', 'abaa', 'abab', 'abba', 'abbb', 'baaa', 'baab', 'baba', 'babb', 'bbaa', 'bbab', 'bbba', 'bbbb']. And s_generator('ab', 4, 'abaa', 'abaa') generates ['abaa', 'abab', 'abba', 'abbb', 'baaa']
What is a good way to implement it?
I thought about assigning a number to each character in alphabet, treating the string as a base-n number (n is size of alphabet) and using addition to get the next number, and then convert the number back to string. For example, 'abab' is [0, 1, 0, 1] and the next number is [0, 1, 1, 0], which is 'abba'. This method seems complicated. Is there a simpler solution?
A:
Use itertools and a list comprehension:
from itertools import product
def s_generator(alphabet, length, start_s, end_s):
products = product(alphabet, repeat=length)
return [''.join(x) for x in products if ''.join(x) >= start_s and ''.join(x) <= end_s]
print(s_generator('ab', 4, 'aaaa', 'bbbb'))
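As a side note (an addition, not part of the answer above): if you want a true generator that avoids materializing every combination, you can slice the lazily ordered product stream. A minimal sketch, assuming the alphabet sorts into the intended dictionary order:
from itertools import dropwhile, product, takewhile

def s_generator(alphabet, length, start_s, end_s):
    # product() yields tuples in dictionary order when the alphabet is sorted
    strings = (''.join(p) for p in product(sorted(alphabet), repeat=length))
    yield from takewhile(lambda s: s <= end_s,
                         dropwhile(lambda s: s < start_s, strings))

print(list(s_generator('ab', 4, 'abaa', 'baaa')))
# ['abaa', 'abab', 'abba', 'abbb', 'baaa']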
|
What is a good way to generate all strings of length n over a given alphabet within a range in dictionary order?
|
I want to write a generator s_generator(alphabet, length, start_s, end_s) that generates strings of length n over a given alphabet in dictionary order starting with start_s and ending at end_s.
For example, s_generator('ab', 4, 'aaaa', 'bbbb') generates ['aaaa', 'aaab', 'aaba', 'aabb', 'abaa', 'abab', 'abba', 'abbb', 'baaa', 'baab', 'baba', 'babb', 'bbaa', 'bbab', 'bbba', 'bbbb']. And s_generator('ab', 4, 'abaa', 'abaa') generates ['abaa', 'abab', 'abba', 'abbb', 'baaa']
What is a good way to implement it?
I thought about assigning a number to each character in alphabet, treating the string as a base-n number (n is size of alphabet) and using addition to get the next number, and then convert the number back to string. For example, 'abab' is [0, 1, 0, 1] and the next number is [0, 1, 1, 0], which is 'abba'. This method seems complicated. Is there a simpler solution?
|
[
"use itertools and comprehension list\nfrom itertools import product\n\ndef s_generator(alphabet, length, start_s, end_s):\n products = product(alphabet, repeat=length)\n return [''.join(x) for x in products if ''.join(x) >= start_s and ''.join(x) <= end_s]\n\n\nprint(s_generator('ab', 4, 'aaaa', 'bbbb'))\n\n"
] |
[
0
] |
[] |
[] |
[
"algorithm",
"python",
"string"
] |
stackoverflow_0074664066_algorithm_python_string.txt
|
Q:
How to calculate confidence bands for fitted values of a linear regression by hand and plot them?
I'm trying to plot error bands around a linear regression. I'm working with the builtin trees dataset in R. Here's my code. No lines are showing up on the plot. Please help!
xdev <- log(Volume) - mean(log(Volume))
xdev
ydev <- Girth - mean(Girth)
ydev
b1 <- sum((xdev)*(ydev))/sum(xdev^2)
b0 <- mean(Girth) - mean(log(Volume))*b1
plot(log(Volume) ~ Girth)
abline(coef(lm(log(Volume) ~ Girth)), col=2)
(paste(b0, b1))
y.hat <- b0 + b1*log(Volume)
ss.explained <- sum((y.hat - mean(Girth))^2)
ss.unexplained <- sum((y.hat - Girth)^2)
stderr.b1 <- sqrt((ss.unexplained/29)/sum(xdev^2))
stderr.b1
std.yhat <- sqrt((ss.unexplained/29)*((1/31) + (xdev^2/sum(xdev^2))))
std.yhat
upp <- y.hat + qt(0.95, 29)*std.yhat
low <- y.hat - qt(0.95, 29)*std.yhat
upp
low
library(lattice)
plot(log(Volume) ~ Girth, data = trees)
abline(c(b0, b1), lty=1)
points(upp ~ Girth, data=trees, type="l", lty=2)
points(low ~ Girth, data=trees, type="l", lty=2)
A:
Obviously you want to calculate OLS by hand as well as predictions with confidence bands. Since I'm not familiar with the approach you are using (but you definitely mixed up Y and X in the beginning of your code), I show you how to get the confidence bands with a similar manual approach I'm more familiar with.
data(trees) ## load trees dataset
## fit
X <- cbind(1, trees$Girth)
y <- log(trees$Volume)
beta <- solve(t(X) %*% X) %*% t(X) %*% y
y.hat <- as.vector(X %*% beta)
## std. err. y.hat
res <- y - y.hat
n <- nrow(trees); k <- ncol(X)
VCV <- 1/(n - k)*as.vector(t(res) %*% res)*solve(t(X) %*% X)
se.yhat <- sqrt(diag(X %*% VCV %*% t(X)))
## CIs
tcrit <- qt(0.975, n - k)
upp <- y.hat + tcrit*se.yhat
low <- y.hat - tcrit*se.yhat
## plot
with(trees, plot(x=Girth, y=log(Volume)))
abline(a=beta, col=2)
lines(x=trees$Girth, y=upp, lty=2)
lines(x=trees$Girth, y=low, lty=2)
We can compare this with the results of respective R functions, which will give us the same values and, thus, the same plot as above.
fit <- lm(log(Volume) ~ Girth, trees)
pred <- stats::predict.lm(fit, se.fit=TRUE, df=fit$df.residual)
with(trees, plot(x=Girth, y=log(Volume)))
abline(fit, col=2)
lines(x=trees$Girth, y=pred$fit + tcrit*pred$se.fit, lty=2)
lines(x=trees$Girth, y=pred$fit - tcrit*pred$se.fit, lty=2)
Since you have noted lattice, you can further crosscheck using lattice::xyplot:
lattice::xyplot(fit$fitted.values ~ log(trees$Volume),
panel=mosaic::panel.lmbands)
Note: It seems you are using attach, which is discouraged in the community, see: Why is it not advisable to use attach() in R, and what should I use instead?. I therefore used with() and `$`() instead.
|
How to calculate confidence bands for fitted values of a linear regression by hand and plot them?
|
I'm trying to plot error bands around a linear regression. I'm working with the builtin trees dataset in R. Here's my code. No lines are showing up on the plot. Please help!
xdev <- log(Volume) - mean(log(Volume))
xdev
ydev <- Girth - mean(Girth)
ydev
b1 <- sum((xdev)*(ydev))/sum(xdev^2)
b0 <- mean(Girth) - mean(log(Volume))*b1
plot(log(Volume) ~ Girth)
abline(coef(lm(log(Volume) ~ Girth)), col=2)
(paste(b0, b1))
y.hat <- b0 + b1*log(Volume)
ss.explained <- sum((y.hat - mean(Girth))^2)
ss.unexplained <- sum((y.hat - Girth)^2)
stderr.b1 <- sqrt((ss.unexplained/29)/sum(xdev^2))
stderr.b1
std.yhat <- sqrt((ss.unexplained/29)*((1/31) + (xdev^2/sum(xdev^2))))
std.yhat
upp <- y.hat + qt(0.95, 29)*std.yhat
low <- y.hat - qt(0.95, 29)*std.yhat
upp
low
library(lattice)
plot(log(Volume) ~ Girth, data = trees)
abline(c(b0, b1), lty=1)
points(upp ~ Girth, data=trees, type="l", lty=2)
points(low ~ Girth, data=trees, type="l", lty=2)
|
[
"Obviously you want to calculate OLS by hand as well as predictions with confidence bands. Since I'm not familiar with the approach you are using (but you definitely mixed up Y and X in the beginning of your code), I show you how to get the confidence bands with a similar manual approach I'm more familiar with.\ndata(trees) ## load trees dataset\n\n## fit\nX <- cbind(1, trees$Girth)\ny <- log(trees$Volume)\nbeta <- solve(t(X) %*% X) %*% t(X) %*% y\ny.hat <- as.vector(X %*% beta)\n\n## std. err. y.hat\nres <- y - y.hat\nn <- nrow(trees); k <- ncol(X)\nVCV <- 1/(n - k)*as.vector(t(res) %*% res)*solve(t(X) %*% X)\nse.yhat <- sqrt(diag(X %*% VCV %*% t(X)))\n\n## CIs\ntcrit <- qt(0.975, n - k)\nupp <- y.hat + tcrit*se.yhat\nlow <- y.hat - tcrit*se.yhat\n\n## plot\nwith(trees, plot(x=Girth, y=log(Volume)))\nabline(a=beta, col=2)\nlines(x=trees$Girth, y=upp, lty=2)\nlines(x=trees$Girth, y=low, lty=2)\n\n\nWe can compare this with the results of respective R functions, which will give us the same values and, thus, the same plot as above.\nfit <- lm(log(Volume) ~ Girth, trees)\npred <- stats::predict.lm(fit, se.fit=TRUE, df=fit$df.residual)\n\nwith(trees, plot(x=Girth, y=log(Volume)))\nabline(fit, col=2)\nlines(x=trees$Girth, y=pred$fit + tcrit*pred$se.fit, lty=2)\nlines(x=trees$Girth, y=pred$fit - tcrit*pred$se.fit, lty=2)\n\nSince you have noted lattice, you can further crosscheck using lattice::xyplot:\nlattice::xyplot(fit$fitted.values ~ log(trees$Volume), \n panel=mosaic::panel.lmbands)\n\n\nNote: It seems you are using attach, which is discouraged in the community, see: Why is it not advisable to use attach() in R, and what should I use instead?. I therefore used with() and `$`() instead.\n"
] |
[
0
] |
[] |
[] |
[
"abline",
"lattice",
"linear_regression",
"r",
"regression"
] |
stackoverflow_0074663428_abline_lattice_linear_regression_r_regression.txt
|
Q:
Is AES 128 Crypto (Cipher) logic there in Kotlin Multiplatform (KMM)?
I found AES encryption logic in Kotlin by using the javax crypto libraries. Since it's specific to Java (Android), it doesn't run on iOS.
import javax.crypto.Cipher
import javax.crypto.SecretKey
import javax.crypto.spec.SecretKeySpec
object Crypto {
fun calculateHash(data: ByteArray, key: ByteArray): ByteArray {
val cipher: Cipher
var encrypted = ByteArray(16)
try {
val secretKeyEcb: SecretKey = SecretKeySpec(key, "AES")
cipher = Cipher.getInstance("AES")
cipher.init(Cipher.ENCRYPT_MODE, secretKeyEcb)
encrypted = cipher.doFinal(data, 0, 16)
} catch (e: Exception) {
e.printStackTrace()
}
return encrypted.copyOf(8)
}
}
Is there any way to achieve the above code on iOS or in KMM?
A:
Kotlin Multiplatform is a new technology, and it lacks many libraries.
You will not be able to run Java code on iOS, so using Cipher in the common code will not work.
When writing an application you will often encounter a similar problem, and the solution is always the same: create an interface class and implement it for each of the platforms.
commonMain/Crypto.kt
expect object Crypto {
fun calculateHash(data: ByteArray, key: ByteArray): ByteArray
}
On android part you can use Cipher easily:
androidMain/Crypto.kt
actual object Crypto {
fun calculateHash(data: ByteArray, key: ByteArray): ByteArray {
val cipher: Cipher
var encrypted = ByteArray(16)
try {
val secretKeyEcb: SecretKey = SecretKeySpec(key, "AES")
cipher = Cipher.getInstance("AES")
cipher.init(Cipher.ENCRYPT_MODE, secretKeyEcb)
encrypted = cipher.doFinal(data, 0, 16)
} catch (e: Exception) {
e.printStackTrace()
}
return encrypted.copyOf(8)
}
}
And to implement the iosCommon part, you need to look for an iOS solution to your problem. I advise you to look for an Objective-C solution, because Kotlin generates its files based on the headers of that language, so such a solution will be easier to implement than a Swift solution.
The first one I came across was this answer, and I started working with it.
You can try searching on GitHub to see if someone has already implemented it. I try key iOS class names with Kotlin language filtering; usually the number of results is minimal, and if you are lucky you will find what you need.
In your case, I was lucky enough to find this code. That's the only search result for CCCrypt + Kotlin language =). I combined it with the obj-c answer. This doesn't look exactly like your Cipher code; you are also taking only the first 8 bytes for some reason. But you should get the idea:
actual object Crypto {
@Throws(Throwable::class)
fun calculateHash(data: ByteArray, key: ByteArray): ByteArray {
if (!listOf(
kCCKeySizeAES128,
kCCKeySizeAES192,
kCCKeySizeAES256,
).contains(key.count().toUInt())
) {
throw IllegalStateException("Invalid key length ${key.count()}")
}
val ivLength = kCCBlockSizeAES128
val output = ByteArray(
size = ivLength.toInt() * 2 + data.size
) { 0.toByte() }
val outputSize = ULongArray(1) { 0u }
key.usePinned { keyPinned ->
data.usePinned { inputPinned ->
output.usePinned { outputPinned ->
outputSize.usePinned { outputSizePinned ->
val rcbStatus = SecRandomCopyBytes(
kSecRandomDefault,
ivLength.toULong(),
outputPinned.addressOf(0)
)
if (rcbStatus != kCCSuccess) {
throw IllegalStateException("calculateHash rcbStatus $rcbStatus")
}
val ccStatus = CCCrypt(
op = kCCEncrypt,
alg = kCCAlgorithmAES,
options = kCCOptionPKCS7Padding,
key = keyPinned.addressOf(0),
keyLength = key.size.toULong(),
iv = outputPinned.addressOf(0),
dataIn = inputPinned.addressOf(0),
dataInLength = data.size.toULong(),
dataOut = outputPinned.addressOf(ivLength.toInt()),
dataOutAvailable = output.size.toULong() - ivLength,
dataOutMoved = outputSizePinned.addressOf(0),
)
if (ccStatus != kCCSuccess) {
throw IllegalStateException("calculateHash ccStatus $ccStatus")
}
}
}
}
}
return output.copyOf((outputSize.first() + ivLength).toInt())
}
}
A:
You can use the krypto or libsodium wrapper libraries.
For example, with the krypto library you can easily implement AES 128 in the commonMain module by using these functions:
implementation("com.soywiz.korlibs.krypto:krypto:${Version.krypto}")
AES.encryptAes128Cbc(dataByteArray, keyByteArray, Padding.NoPadding)
AES.decryptAes128Cbc(dataByteArray, keyByteArray, Padding.ANSIX923Padding)
|
Is AES 128 Crypto (Cipher) logic there in Kotlin Multiplatform (KMM)?
|
I found AES encryption logic in Kotlin by using the javax crypto libraries. Since it's specific to Java (Android), it doesn't run on iOS.
import javax.crypto.Cipher
import javax.crypto.SecretKey
import javax.crypto.spec.SecretKeySpec
object Crypto {
fun calculateHash(data: ByteArray, key: ByteArray): ByteArray {
val cipher: Cipher
var encrypted = ByteArray(16)
try {
val secretKeyEcb: SecretKey = SecretKeySpec(key, "AES")
cipher = Cipher.getInstance("AES")
cipher.init(Cipher.ENCRYPT_MODE, secretKeyEcb)
encrypted = cipher.doFinal(data, 0, 16)
} catch (e: Exception) {
e.printStackTrace()
}
return encrypted.copyOf(8)
}
}
Is there any way to achieve the above code on iOS or in KMM?
|
[
"The Kotlin multiplatform is a new technology, and it lacks many libraries.\nYou will not be able to run java code on iOS, so using Cipher in the common code will not work.\nWhen writing an application you will often encounter a similar problem, and the solution is always the same: create an interface class and implement it for each of the platforms.\ncommomMain/Crypto.kt\nexpect object Crypto {\n fun calculateHash(data: ByteArray, key: ByteArray): ByteArray\n}\n\nOn android part you can use Cipher easily:\nandroidMain/Crypto.kt\nactual object Crypto {\n fun calculateHash(data: ByteArray, key: ByteArray): ByteArray {\n val cipher: Cipher\n var encrypted = ByteArray(16)\n\n try {\n val secretKeyEcb: SecretKey = SecretKeySpec(key, \"AES\")\n cipher = Cipher.getInstance(\"AES\")\n cipher.init(Cipher.ENCRYPT_MODE, secretKeyEcb)\n encrypted = cipher.doFinal(data, 0, 16)\n } catch (e: Exception) {\n e.printStackTrace()\n }\n return encrypted.copyOf(8)\n }\n}\n\nAnd to implement the iosCommon part, you need to look for an iOS solution to your problem. I advise you to look for an Objective C solution, because kotlin generates its files based on the headers of that language, so such a solution will be easier to implement than a Swift solution.\nThe first one I came across was this answer and I started working with it.\nYou can try searching on github to see if someone has already implemented it. I try key classes from iOS and kotlin filtering, usually the number of results is minimal, if you are lucky you will find what you need.\nIn your case, I was lucky enough to find this code. That's the only search result for CCCrypt + kotlin language=). I combined it with obj-c answer. This doesn't looks exactly like your Cipher code, you also taking only first 8 bytes for some reason. But you should get the idea:\nactual object Crypto {\n @Throws(Throwable::class)\n fun calculateHash(data: ByteArray, key: ByteArray): ByteArray {\n if (!listOf(\n kCCKeySizeAES128,\n kCCKeySizeAES192,\n kCCKeySizeAES256,\n ).contains(key.count().toUInt())\n ) {\n throw IllegalStateException(\"Invalid key length ${key.count()}\")\n }\n val ivLength = kCCBlockSizeAES128\n val output = ByteArray(\n size = ivLength.toInt() * 2 + data.size\n ) { 0.toByte() }\n val outputSize = ULongArray(1) { 0u }\n key.usePinned { keyPinned ->\n data.usePinned { inputPinned ->\n output.usePinned { outputPinned ->\n outputSize.usePinned { outputSizePinned ->\n val rcbStatus = SecRandomCopyBytes(\n kSecRandomDefault,\n ivLength.toULong(),\n outputPinned.addressOf(0)\n )\n if (rcbStatus != kCCSuccess) {\n throw IllegalStateException(\"calculateHash rcbStatus $rcbStatus\")\n }\n val ccStatus = CCCrypt(\n op = kCCEncrypt,\n alg = kCCAlgorithmAES,\n options = kCCOptionPKCS7Padding,\n key = keyPinned.addressOf(0),\n keyLength = key.size.toULong(),\n iv = outputPinned.addressOf(0),\n dataIn = inputPinned.addressOf(0),\n dataInLength = data.size.toULong(),\n dataOut = outputPinned.addressOf(ivLength.toInt()),\n dataOutAvailable = output.size.toULong() - ivLength,\n dataOutMoved = outputSizePinned.addressOf(0),\n )\n if (ccStatus != kCCSuccess) {\n throw IllegalStateException(\"calculateHash ccStatus $ccStatus\")\n }\n }\n }\n }\n }\n return output.copyOf((outputSize.first() + ivLength).toInt())\n }\n}\n\n",
"You can use krypto or libsodum wrapper libraries.\nFor example, with krypto library you can easily implement AES 128 in commanMain module by using these functions:\nimplementation(\"com.soywiz.korlibs.krypto:krypto:${Version.krypto}\")\n\n\n\nAES.encryptAes128Cbc(dataByteArray, keyByteArray, Padding.NoPadding)\nAES.decryptAes128Cbc(dataByteArray, keyByteArray, Padding.ANSIX923Padding)\n\n"
] |
[
4,
0
] |
[] |
[] |
[
"ios",
"kmm",
"kotlin",
"kotlin_multiplatform"
] |
stackoverflow_0069000450_ios_kmm_kotlin_kotlin_multiplatform.txt
|
Q:
Camunda DeploymentCache management
We're experiencing a problem with the memory of the Spring Boot application.
The heap dump shows that the main part of it is consumed by the Camunda component
org.camunda.bpm.engine.impl.persistence.deploy.cache.DeploymentCache - 69.47%.
In the DeploymentCache the dominant object is
org.camunda.bpm.engine.impl.persistence.deploy.cache.BpmnModelInstanceCache - 69.03%
It turns out Camunda loads all deployed process definitions, case definitions, and decision definitions from the DB tables at the start.
And we have tens of thousands of process definitions.
To fix the memory problem we're using the method org.camunda.bpm.engine.RepositoryService#deleteDeployment(java.lang.String, boolean),
which deletes the data from the Camunda system tables in the DB.
List<Deployment> oldDeployments = repositoryService.createDeploymentQuery()
.deploymentBefore(date)
.listPage(0, maxResult);
boolean cascade = true;
for (Deployment deployment : oldDeployments) {
repositoryService.deleteDeployment(deployment.getId(), cascade);
}
But, using this approach we don't have much control over the deletion process.
And that's important, because we have some processes with heavy payment logic.
If an incident occurs, the time required for the process can be much longer than expected.
We can't delete those processes, but they are rather the exception.
So, is there another way to do it and add more control over the deletion process/DeploymentCache? For example:
Adding a handler which queries additional data from the tables with business data, to decide if a deployment should be deleted.
Adding certain schema names to an exclusion list.
Customizing DeploymentCache to load on startup only the deployments for the specified period and the others on demand.
Or specifying the deploymentCache limit
Camunda Version: 7.13.0
A:
I would not focus on the symptom, e.g. by addressing it at the deployment cache level, but try to get the environments cleaned up (regularly) and avoid the root cause in the future.
Establish a cleanup strategy
get an idea of what is already cleanable using the report.
ensure the historyTimeToLive (ttl) is set on your process definitions via Cockpit or API
if required set the ttl on the process definition already in the runtime via Cockpit or API
if required set removal time on historic instances e.g. to an absolute value in the past
run cleanup for historic instances you want to remove
consider using process instance migration to move instances from deployments with low instance numbers to newer versions
As a result you should end up with many deployments having 0 process instances (they have been cleaned, also from history, or migrated if running)
Delete deployments having 0 process instances from DB
Get the list of deployments: https://docs.camunda.org/manual/7.18/reference/rest/deployment/get-query/
Count the process instances a deployment has:
https://docs.camunda.org/manual/7.18/reference/rest/process-instance/get-query-count/
Delete a deployment: https://docs.camunda.org/manual/7.18/reference/rest/deployment/delete-deployment/
Steps 1-3 are also available via the deployment screen in Camunda Cockpit.
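For illustration, a minimal Python sketch of that delete loop against the REST endpoints above (an addition, not part of the original answer; the base URL is an assumption, and you should verify that the deploymentId filter on the process instance query is available in your Camunda version):
import requests

BASE = "http://localhost:8080/engine-rest"  # assumed engine REST base URL

deployments = requests.get(f"{BASE}/deployment").json()
for d in deployments:
    # count the process instances still attached to this deployment
    count = requests.get(f"{BASE}/process-instance/count",
                         params={"deploymentId": d["id"]}).json()["count"]
    if count == 0:
        # cascade also removes the deployment's historic data
        requests.delete(f"{BASE}/deployment/{d['id']}", params={"cascade": "true"})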
Identify the root cause
Hundreds - ok. Thousands - maybe. Tens of thousands seem an order of magnitude too high. Are you possibly deploying duplicates when there have been no model changes? Are you maybe creating a deployment for every single model instead of bundling them in one deployment? Are you possibly doing something programmatically which performs minor changes and creates lots of deployments?
Side note on version upgrades
The Camunda version 7.13.0 is ~2.5 years old and unpatched. There were significant security relevant fixes in those releases. Even if you don't have access to the patch releases on Community Edition, an upgrade to 7.18.0 would get you a lot of patches (and also features).
Upgrading your Camunda version will also allow you to upgrade to a newer supported Spring Boot version, which will again get you lots of security and other fixes.
The version you are running would for instance be vulnerable to the infamous log4shell vulnerability.
|
Camunda DeploymentCache management
|
We're experiencing a problem with the memory of the Spring Boot application.
The heap dump shows that the main part of it is consumed by the Camunda component
org.camunda.bpm.engine.impl.persistence.deploy.cache.DeploymentCache - 69.47%.
In the DeploymentCache the dominant object is
org.camunda.bpm.engine.impl.persistence.deploy.cache.BpmnModelInstanceCache - 69.03%
It turns out Camunda loads all deployed process definitions, case definitions, and decision definitions from the DB tables at the start.
And we have tens of thousands of process definitions.
To fix the memory problem we're using the method org.camunda.bpm.engine.RepositoryService#deleteDeployment(java.lang.String, boolean),
which deletes the data from the Camunda system tables in the DB.
List<Deployment> oldDeployments = repositoryService.createDeploymentQuery()
.deploymentBefore(date)
.listPage(0, maxResult);
boolean cascade = true;
for (Deployment deployment : oldDeployments) {
repositoryService.deleteDeployment(deployment.getId(), cascade);
}
But, using this approach we don't have much control over the deletion process.
And that's important, because we have some processes with heavy payment logic.
If an incident occurs, the time required for the process can be much longer than expected.
We can't delete those processes, but they are rather the exception.
So, is there another way to do it and add more control over the deletion process/DeploymentCache? For example:
Adding a handler which queries additional data from the tables with business data, to decide if a deployment should be deleted.
Adding certain schema names to an exclusion list.
Customizing DeploymentCache to load on startup only the deployments for the specified period and the others on demand.
Or specifying the deploymentCache limit
Camunda Version: 7.13.0
|
[
"I would not focus on the symptom e.g.g by addressing it on the deployment cache level, but try to get the environments cleaned up (regularly) and avoid the root cause in the future.\nEstablish a cleanup strategy\n\nget an idea of what is already cleanable using the report.\nensure the historyTimeToLive (ttl) is set on your process definitions via Cockpit or API\nif required set the ttl on the process definition already in the runtime via Cockpit or API\nif required set removal time on historic instances e.g. to an absolute value in the past\nrun cleanup for historic instances you want to remove\nconsider using process instance migration to move instances from deployments with low instance numbers to newer versions\n\nAs a result you should end up with many deployments having 0 process instances (they have been cleaned, also form history, or migrated if running)\nDelete deployments having 0 process instances from DB\n\nGet the list of deployments: https://docs.camunda.org/manual/7.18/reference/rest/deployment/get-query/\nCount the process instances a deployment has:\nhttps://docs.camunda.org/manual/7.18/reference/rest/process-instance/get-query-count/\nDelete a deployment: https://docs.camunda.org/manual/7.18/reference/rest/deployment/delete-deployment/\n\nSteps 1-3 are also available via the deployment screen in Camunda Cockpit.\nIdentify the root cause\nHundreds - ok. Thousands - maybe. Tens of thousands seem a magnitude too high. Are you possibly deploying duplicates when there have been no model changes? Are you maybe creating a deployment for every single model instead of bundling them in one deployment? Are you possibly doing something programmatically which performs minor change sand creates lots of deployments?\n.\n.\n.\nSide note on version upgrades\nThe Camunda version 7.13.0 is ~2.5 years old and unpatched. There were significant security relevant fixes in those releases. Even if you don't have access to the patch releases on Community Edition, an upgrade to 7.18.0 would get you a lot of patches (and also features).\nUpgrading your Camunda version will also allow you to upgrade to a newer supported Spring Boot version, which will again get you lots of security and other fixes.\nThe version you are running would for instance be vulnerable to the infamous log4shell vulnerability.\n"
] |
[
1
] |
[] |
[] |
[
"camunda",
"java"
] |
stackoverflow_0074653089_camunda_java.txt
|
Q:
Apache-Hadoop-Common failing to compile: Error running javah command
I am trying to create a Hadoop cluster by following this guide:
https://data.andyburgin.co.uk/post/157450047463/running-hue-on-a-raspberry-pi-hadoop-cluster
The master node I am trying to configure is a Raspberry Pi 4B 4GB with Raspbian OS installed.
After running:
sudo mvn package -Pdist,native -DskipTests -Dtar
The compiler fails at hadoop-common
I am using hadoop build 3.2.0 but otherwise following the directions as closely as possible.
Below is where I am failing:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache Hadoop Main 3.2.0:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 2.921 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [ 3.715 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 3.370 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 6.031 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 1.234 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 3.917 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 12.101 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 5.393 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 18.551 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 7.393 s]
[INFO] Apache Hadoop Common ............................... FAILURE [ 11.293 s]
[INFO] Apache Hadoop NFS .................................. SKIPPED
[INFO] Apache Hadoop KMS .................................. SKIPPED
[INFO] Apache Hadoop Common Project ....................... SKIPPED
[INFO] Apache Hadoop HDFS Client .......................... SKIPPED
[INFO] Apache Hadoop HDFS ................................. SKIPPED
[INFO] Apache Hadoop HDFS Native Client ................... SKIPPED
[INFO] Apache Hadoop HttpFS ............................... SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS-RBF ............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ......................... SKIPPED
[INFO] Apache Hadoop YARN ................................. SKIPPED
[INFO] Apache Hadoop YARN API ............................. SKIPPED
[INFO] Apache Hadoop YARN Common .......................... SKIPPED
[INFO] Apache Hadoop YARN Registry ........................ SKIPPED
[INFO] Apache Hadoop YARN Server .......................... SKIPPED
[INFO] Apache Hadoop YARN Server Common ................... SKIPPED
[INFO] Apache Hadoop YARN NodeManager ..................... SKIPPED
[INFO] Apache Hadoop YARN Web Proxy ....................... SKIPPED
[INFO] Apache Hadoop YARN ApplicationHistoryService ....... SKIPPED
[INFO] Apache Hadoop YARN Timeline Service ................ SKIPPED
[INFO] Apache Hadoop YARN ResourceManager ................. SKIPPED
[INFO] Apache Hadoop YARN Server Tests .................... SKIPPED
[INFO] Apache Hadoop YARN Client .......................... SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .............. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage ......... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Backend ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Common .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Client .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Servers ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Server 1.2 SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase tests ..... SKIPPED
[INFO] Apache Hadoop YARN Router .......................... SKIPPED
[INFO] Apache Hadoop YARN Applications .................... SKIPPED
[INFO] Apache Hadoop YARN DistributedShell ................ SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SKIPPED
[INFO] Apache Hadoop MapReduce Client ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Core ....................... SKIPPED
[INFO] Apache Hadoop MapReduce Common ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Shuffle .................... SKIPPED
[INFO] Apache Hadoop MapReduce App ........................ SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer .............. SKIPPED
[INFO] Apache Hadoop MapReduce JobClient .................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED
[INFO] Apache Hadoop YARN Services ........................ SKIPPED
[INFO] Apache Hadoop YARN Services Core ................... SKIPPED
[INFO] Apache Hadoop YARN Services API .................... SKIPPED
[INFO] Apache Hadoop Image Generation Tool ................ SKIPPED
[INFO] Yet Another Learning Platform ...................... SKIPPED
[INFO] Apache Hadoop YARN Site ............................ SKIPPED
[INFO] Apache Hadoop YARN UI .............................. SKIPPED
[INFO] Apache Hadoop YARN Project ......................... SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer Plugins ...... SKIPPED
[INFO] Apache Hadoop MapReduce NativeTask ................. SKIPPED
[INFO] Apache Hadoop MapReduce Uploader ................... SKIPPED
[INFO] Apache Hadoop MapReduce Examples ................... SKIPPED
[INFO] Apache Hadoop MapReduce ............................ SKIPPED
[INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED
[INFO] Apache Hadoop Distributed Copy ..................... SKIPPED
[INFO] Apache Hadoop Archives ............................. SKIPPED
[INFO] Apache Hadoop Archive Logs ......................... SKIPPED
[INFO] Apache Hadoop Rumen ................................ SKIPPED
[INFO] Apache Hadoop Gridmix .............................. SKIPPED
[INFO] Apache Hadoop Data Join ............................ SKIPPED
[INFO] Apache Hadoop Extras ............................... SKIPPED
[INFO] Apache Hadoop Pipes ................................ SKIPPED
[INFO] Apache Hadoop OpenStack support .................... SKIPPED
[INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED
[INFO] Apache Hadoop Kafka Library support ................ SKIPPED
[INFO] Apache Hadoop Azure support ........................ SKIPPED
[INFO] Apache Hadoop Aliyun OSS support ................... SKIPPED
[INFO] Apache Hadoop Client Aggregator .................... SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED
[INFO] Apache Hadoop Resource Estimator Service ........... SKIPPED
[INFO] Apache Hadoop Azure Data Lake support .............. SKIPPED
[INFO] Apache Hadoop Tools Dist ........................... SKIPPED
[INFO] Apache Hadoop Tools ................................ SKIPPED
[INFO] Apache Hadoop Client API ........................... SKIPPED
[INFO] Apache Hadoop Client Runtime ....................... SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants .......... SKIPPED
[INFO] Apache Hadoop Client Test Minicluster .............. SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants for Test . SKIPPED
[INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] Apache Hadoop Client Modules ....................... SKIPPED
[INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
[INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:24 min
[INFO] Finished at: 2019-08-03T19:44:11-05:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project hadoop-common: Error running javah command: Error executing command line. Exit code:2 -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project hadoop-common: Error running javah command
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.apache.maven.plugin.MojoExecutionException: Error running javah command
at org.codehaus.mojo.natives.plugin.NativeJavahMojo.execute (NativeJavahMojo.java:226)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.codehaus.mojo.natives.NativeBuildException: Error executing command line. Exit code:2
at org.codehaus.mojo.natives.util.CommandLineUtil.execute (CommandLineUtil.java:34)
at org.codehaus.mojo.natives.javah.JavahExecutable.compile (JavahExecutable.java:46)
at org.codehaus.mojo.natives.plugin.NativeJavahMojo.execute (NativeJavahMojo.java:207)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-common
I've set the following in ~/.bashrc:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:jre/bin/java::")
export HADOOP_HOME=/opt/hadoop-3.2.0-src
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME/hadoop-hdfs-project/hadoop-hdfs/src/main/conf
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/hadoop-common-project/hadoop-common/src/main/conf
export YARN_CONF_DIR=$HADOOP_HOME/hadoop-yarn-project/hadoop-yarn/conf
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
I changed the java version to:
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-8u212-b01-1+rpi1-b01)
OpenJDK Client VM (build 25.212-b01, mixed mode)
I am not sure how to fix this javah error. I've tried changing the path to a shorter path (/.m2/repository) as that was a suggestion for an Error 127.
Any ideas? Here is the full error log from the compilation:
https://drive.google.com/file/d/1kre4HInwWQlACG-u6tSOy_X5EHihuw5Y/view?usp=sharing
A:
This error may occur when the path is too long.
To fix this problem and avoid Maven build issues like 'The command line is too long.', try creating a symbolic link that points the folder .m2\repository to another folder with a shorter path (e.g. C:\mrepo) by executing this command:
mklink /J C:\mrepo C:\Users\your_user_name\.m2\repository
And then update the settings.xml file (located in %MAVEN_HOME%/conf/settings.xml) to configure the local repository path:
<localRepository>C:\mrepo</localRepository>
A:
Have you installed this?
cmake gcc9 boost
Have you tried building it this way?
mvn clean install -DskipTests
mvn package -Pdist -Pnative -Dtar -DskipTests
More information:
https://stackoverflow.com/a/40816478/3182598
https://www.mail-archive.com/[email protected]/msg24235.html
|
Apache-Hadoop-Common failing to compile: Error running javah command
|
I am trying to create a Hadoop cluster by following this guide:
https://data.andyburgin.co.uk/post/157450047463/running-hue-on-a-raspberry-pi-hadoop-cluster
The master node I am trying to configure is a Raspberry Pi 4B 4GB with Raspbian OS installed.
After running:
sudo mvn package -Pdist,native -DskipTests -Dtar
The compiler fails at hadoop-common
I am using hadoop build 3.2.0 but otherwise following the directions as closely as possible.
Below is where I am failing:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache Hadoop Main 3.2.0:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 2.921 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [ 3.715 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 3.370 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 6.031 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 1.234 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 3.917 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 12.101 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 5.393 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 18.551 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 7.393 s]
[INFO] Apache Hadoop Common ............................... FAILURE [ 11.293 s]
[INFO] Apache Hadoop NFS .................................. SKIPPED
[INFO] Apache Hadoop KMS .................................. SKIPPED
[INFO] Apache Hadoop Common Project ....................... SKIPPED
[INFO] Apache Hadoop HDFS Client .......................... SKIPPED
[INFO] Apache Hadoop HDFS ................................. SKIPPED
[INFO] Apache Hadoop HDFS Native Client ................... SKIPPED
[INFO] Apache Hadoop HttpFS ............................... SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS-RBF ............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ......................... SKIPPED
[INFO] Apache Hadoop YARN ................................. SKIPPED
[INFO] Apache Hadoop YARN API ............................. SKIPPED
[INFO] Apache Hadoop YARN Common .......................... SKIPPED
[INFO] Apache Hadoop YARN Registry ........................ SKIPPED
[INFO] Apache Hadoop YARN Server .......................... SKIPPED
[INFO] Apache Hadoop YARN Server Common ................... SKIPPED
[INFO] Apache Hadoop YARN NodeManager ..................... SKIPPED
[INFO] Apache Hadoop YARN Web Proxy ....................... SKIPPED
[INFO] Apache Hadoop YARN ApplicationHistoryService ....... SKIPPED
[INFO] Apache Hadoop YARN Timeline Service ................ SKIPPED
[INFO] Apache Hadoop YARN ResourceManager ................. SKIPPED
[INFO] Apache Hadoop YARN Server Tests .................... SKIPPED
[INFO] Apache Hadoop YARN Client .......................... SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .............. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage ......... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Backend ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Common .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Client .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Servers ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Server 1.2 SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase tests ..... SKIPPED
[INFO] Apache Hadoop YARN Router .......................... SKIPPED
[INFO] Apache Hadoop YARN Applications .................... SKIPPED
[INFO] Apache Hadoop YARN DistributedShell ................ SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SKIPPED
[INFO] Apache Hadoop MapReduce Client ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Core ....................... SKIPPED
[INFO] Apache Hadoop MapReduce Common ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Shuffle .................... SKIPPED
[INFO] Apache Hadoop MapReduce App ........................ SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer .............. SKIPPED
[INFO] Apache Hadoop MapReduce JobClient .................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED
[INFO] Apache Hadoop YARN Services ........................ SKIPPED
[INFO] Apache Hadoop YARN Services Core ................... SKIPPED
[INFO] Apache Hadoop YARN Services API .................... SKIPPED
[INFO] Apache Hadoop Image Generation Tool ................ SKIPPED
[INFO] Yet Another Learning Platform ...................... SKIPPED
[INFO] Apache Hadoop YARN Site ............................ SKIPPED
[INFO] Apache Hadoop YARN UI .............................. SKIPPED
[INFO] Apache Hadoop YARN Project ......................... SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer Plugins ...... SKIPPED
[INFO] Apache Hadoop MapReduce NativeTask ................. SKIPPED
[INFO] Apache Hadoop MapReduce Uploader ................... SKIPPED
[INFO] Apache Hadoop MapReduce Examples ................... SKIPPED
[INFO] Apache Hadoop MapReduce ............................ SKIPPED
[INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED
[INFO] Apache Hadoop Distributed Copy ..................... SKIPPED
[INFO] Apache Hadoop Archives ............................. SKIPPED
[INFO] Apache Hadoop Archive Logs ......................... SKIPPED
[INFO] Apache Hadoop Rumen ................................ SKIPPED
[INFO] Apache Hadoop Gridmix .............................. SKIPPED
[INFO] Apache Hadoop Data Join ............................ SKIPPED
[INFO] Apache Hadoop Extras ............................... SKIPPED
[INFO] Apache Hadoop Pipes ................................ SKIPPED
[INFO] Apache Hadoop OpenStack support .................... SKIPPED
[INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED
[INFO] Apache Hadoop Kafka Library support ................ SKIPPED
[INFO] Apache Hadoop Azure support ........................ SKIPPED
[INFO] Apache Hadoop Aliyun OSS support ................... SKIPPED
[INFO] Apache Hadoop Client Aggregator .................... SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED
[INFO] Apache Hadoop Resource Estimator Service ........... SKIPPED
[INFO] Apache Hadoop Azure Data Lake support .............. SKIPPED
[INFO] Apache Hadoop Tools Dist ........................... SKIPPED
[INFO] Apache Hadoop Tools ................................ SKIPPED
[INFO] Apache Hadoop Client API ........................... SKIPPED
[INFO] Apache Hadoop Client Runtime ....................... SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants .......... SKIPPED
[INFO] Apache Hadoop Client Test Minicluster .............. SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants for Test . SKIPPED
[INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] Apache Hadoop Client Modules ....................... SKIPPED
[INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
[INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:24 min
[INFO] Finished at: 2019-08-03T19:44:11-05:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project hadoop-common: Error running javah command: Error executing command line. Exit code:2 -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project hadoop-common: Error running javah command
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.apache.maven.plugin.MojoExecutionException: Error running javah command
at org.codehaus.mojo.natives.plugin.NativeJavahMojo.execute (NativeJavahMojo.java:226)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.codehaus.mojo.natives.NativeBuildException: Error executing command line. Exit code:2
at org.codehaus.mojo.natives.util.CommandLineUtil.execute (CommandLineUtil.java:34)
at org.codehaus.mojo.natives.javah.JavahExecutable.compile (JavahExecutable.java:46)
at org.codehaus.mojo.natives.plugin.NativeJavahMojo.execute (NativeJavahMojo.java:207)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-common
I've set the following in ~/.bashrc :
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:jre/bin/java::")
export HADOOP_HOME=/opt/hadoop-3.2.0-src
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME/hadoop-hdfs-project/hadoop-hdfs/src/main/conf
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/hadoop-common-project/hadoop-common/src/main/conf
export YARN_CONF_DIR=$HADOOP_HOME/hadoop-yarn-project/hadoop-yarn/conf
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
I changed the java version to:
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-8u212-b01-1+rpi1-b01)
OpenJDK Client VM (build 25.212-b01, mixed mode)
I am not sure how to fix this javah error. I've tried changing the path to a shorter path (/.m2/repository) as that was a suggestion for an Error 127.
Any ideas? Here is the full error log from the compilation:
https://drive.google.com/file/d/1kre4HInwWQlACG-u6tSOy_X5EHihuw5Y/view?usp=sharing
|
[
"This error may occur when path is too long.\nTo fix this problem and avoid Maven build issues like 'The command line is too long.', try to create a symbolic link to point to the folder .m2\\repository to another folder with shorter path (ex: C:\\mrepo) by executing this command:\nmklink /J C:\\mrepo C:\\Users\\your_user_name\\.m2\\repository\n\nAnd then update settings.xml file (located in %MAVEN_HOME%/conf/setting.xml) to configure local repository path:\n<localRepository>C:\\mrepo</localRepository>\n\n",
"Have you installed this?\ncmake gcc9 boost\nHave you tried building it this way?\nmvn clean install -DskipTests\nmvn package -Pdist -Pnative -Dtar -DskipTests\n\nMore information:\n\nhttps://stackoverflow.com/a/40816478/3182598\nhttps://www.mail-archive.com/[email protected]/msg24235.html\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"hadoop",
"hadoop_plugins"
] |
stackoverflow_0057344745_hadoop_hadoop_plugins.txt
|
Q:
how to hide nextjs api routes from being directly accessible through url?
Is there any way to hide the response data of Next.js API routes when they are accessed directly through the URL? I want to hide the routes because there is some data I don't want users to access directly.
A:
Probably the quickest and simplest way to protect the API routes is through a stateless session management library like iron-session, with save/create and destroy endpoints to validate and invalidate the Next.js API routes.
Try this github example by Vercel. This might be a good starting point.
Remember: Always use a robust authentication mechanism to protect any direct API route call, with appropriate privileges in place. DYOR
A:
There is no way to hide API routes from users through the URL in Next.js. In fact, Next.js API routes are publicly available to anyone when you host the website without exporting and hosting from the out folder. I ended up making server-side routes using Node Express and then connected them to the frontend in Next.js.
A:
It is not worth the effort to hide API routes. For protecting sensitive data in an API, CORS and authentication methods can prevent noisy unwanted traffic. I found a brilliant blog on this:
https://dev.to/a7u/how-to-protect-nextjs-api-routes-from-other-browsers-3838
A:
You can require an authorization header and check an auth key every time a user accesses that API; that way a normal user won't be able to access the route without knowing the auth key.
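For illustration, a minimal sketch of that idea in a Next.js API route. The route name, the Bearer header scheme, and the API_SECRET environment variable are assumptions, not part of the question:
// pages/api/books.js (hypothetical route)
export default function handler(req, res) {
  // API_SECRET is an assumed environment variable holding the shared key
  if (req.headers.authorization !== `Bearer ${process.env.API_SECRET}`) {
    return res.status(401).json({ error: "Unauthorized" });
  }
  res.status(200).json({ data: "protected payload" });
}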
|
how to hide nextjs api routes from being directly accessible through url?
|
Is there any way to hide the response data of Next.js API routes when they are accessed directly through the URL? I want to hide the routes because there is some data I don't want users to access directly.
|
[
"Probably quick & simple way to protect the API routes is through the stateless session management libraries like iron-session with save / creation and destroy endpoints to validate and invalidate the Next JS api routes\nTry this github example by Vercel. This might a be good starting point.\n\nRemember: Always use a best authentication mechanism to protect any direct api route call with appropriate privileges in place. DYOR\n\n",
"There is no way to hide API routes from users through url in nextjs. In fact, nextjs API routes are publically available to anyone when you host the website without exporting and hosting from out folder. I ended making server-side routes using node express and then connected to the frontend in nextjs.\n",
"It is extremely unworthy effort to hide API routes. and for protecting essential data in API..there is CORS and Authentication methods can prevent noicy unwanted traffic I found brilliant blog on this\nhttps://dev.to/a7u/how-to-protect-nextjs-api-routes-from-other-browsers-3838\n",
"You can set an authorization header that checks auth key everytime user access that API, that way normal user wont be able to access the page without knowing the auth key\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"api",
"javascript",
"next.js",
"security"
] |
stackoverflow_0067053080_api_javascript_next.js_security.txt
|
Q:
Why won't CSS animation stay faded out?
I am trying to use a fade-out animation in CSS and it works at first, but at the very end the element pops back. JSFiddle: https://jsfiddle.net/eqb02w5u/
HTML Code:
<head>
<link
rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/animate.css/4.1.1/animate.min.css"
/>
</head>
<div class='fade-in'>Fading In</div>
<div class='fade-out'>Fading Out</div>
CSS Code:
.fade-in {
background-color: red;
animation:fadeIn 3s linear;
}
@keyframes fadeIn {
0% {
opacity:0
}
100% {
opacity:1;
}
}
.fade-out {
background-color: green;
animation:fadeOut 3s linear;
}
@keyframes fadeOut {
100% {
opacity:0
}
0% {
opacity:1;
}
}
A:
This is a really old question but you can add:
animation-fill-mode: forwards;
in each class where you want the animation to stay faded out.
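For example, applied to the fade-out class from the question (a minimal sketch; the shorthand animation: fadeOut 3s linear forwards; also works):
.fade-out {
  background-color: green;
  animation: fadeOut 3s linear;
  /* keep the styles of the final keyframe (opacity: 0) after the animation ends */
  animation-fill-mode: forwards;
}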
|
Why won't CSS animation stay faded out?
|
I am trying to use a fade-out animation in CSS and it works at first, but at the very end the element pops back. JSFiddle: https://jsfiddle.net/eqb02w5u/
HTML Code:
<head>
<link
rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/animate.css/4.1.1/animate.min.css"
/>
</head>
<div class='fade-in'>Fading In</div>
<div class='fade-out'>Fading Out</div>
CSS Code:
.fade-in {
background-color: red;
animation:fadeIn 3s linear;
}
@keyframes fadeIn {
0% {
opacity:0
}
100% {
opacity:1;
}
}
.fade-out {
background-color: green;
animation:fadeOut 3s linear;
}
@keyframes fadeOut {
100% {
opacity:0
}
0% {
opacity:1;
}
}
|
[
"This is a really old question but you can add:\nanimation-fill-mode: forwards;\n\nin each class where you want the animation to stay faded out.\n"
] |
[
0
] |
[] |
[] |
[
"animation",
"css",
"fade"
] |
stackoverflow_0066285035_animation_css_fade.txt
|
Q:
JETPACK COMPOSE "Cannot find parameter with this name: contentAlignment"
Am I missing an import or something? Why is this basic function suddenly giving me errors?
A:
No, you didn't miss anything.
You only need to add the content parameter, and then the contentAlignment parameter will resolve normally.
Example:
Box(modifier = Modifier,
contentAlignment = Alignment.TopStart,
content = {}
)
A:
It happens because there is a Box overload with no content parameter, as in your example code:
@Composable
fun Box(modifier: Modifier): Unit
The contentAlignment parameter doesn't exist in this overload.
You can use the overload with the contentAlignment parameter, and in this case you also have to pass the content parameter:
@Composable
inline fun Box(
modifier: Modifier = Modifier,
contentAlignment: Alignment = Alignment.TopStart,
propagateMinConstraints: Boolean = false,
content: @Composable @ExtensionFunctionType BoxScope.() -> Unit
): Unit
For example:
Box(
modifier = Modifier,
contentAlignment = Alignment.Center
){
//content
}
A:
I also have something to add. After what you have typed, just open the curly braces (the trailing content lambda) and the error goes away, as shown below
Box(modifier = Modifier,
contentAlignment = Alignment.TopStart
){
// trailing content lambda
}
|
JETPACK COMPOSE "Cannot find parameter with this name: contentAlignment"
|
Am I missing an import or something? Why is this basic function suddenly giving me errors?
|
[
"No, you didn't miss anything.\nYou need only to add your content parameter, and your alignment parameter would be normal.\nExample:\nBox(modifier = Modifier,\n contentAlignment = Alignment.TopStart,\n content = {}\n )\n\n",
"It happens because exists a Box constructor with no content as in your example code:\n@Composable\nfun Box(modifier: Modifier): Unit\n\nThe contentAlignment doesn't exist in this constructor.\nYou can use the constructor with the contentAlignment parameter and in this case you have to pass also the content parameter:\n@Composable\ninline fun Box(\n modifier: Modifier = Modifier,\n contentAlignment: Alignment = Alignment.TopStart,\n propagateMinConstraints: Boolean = false,\n content: @Composable @ExtensionFunctionType BoxScope.() -> Unit\n): Unit\n\nFor example:\nBox(\n modifier = Modifier,\n contentAlignment = Alignment.Center\n){\n //content\n}\n\n",
"I have also something to add . After what you have typed just open the semicolons and the error goes away , like i mentioned below\nBox(modifier = Modifier,\n contentAlignment = Alignment.TopStart\n ){\n// Semicolon opening \n}\n\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"android_jetpack_compose",
"kotlin"
] |
stackoverflow_0072765380_android_jetpack_compose_kotlin.txt
|
Q:
Python virtual environment (venv): Share libraries in usage and dev/test venvs
I am new to Python venvs, so sorry for a possibly stupid question.
I am developing a small library. I've created a dev virtual environment with all the packages necessary for using the library, and froze all the requirement versions to requirements.txt.
I would also like to create requirements_test.txt with all the packages needed for development and tests. So the user will install requirements from requirements.txt, while the developer installs from requirements_test.txt with all the necessary libs (e.g. pytest, asv, sphinx).
Now I've created the dev venv and I want to create the test venv; of course I don't want to install the same libs twice. Is it possible to share some libs from one venv to another?
A:
Is it possible to share some libs from one venv to another?
No. The same library (or application) will be installed once per virtual environment, the installations can not be shared between environments. And it is perfectly fine like this. That is the whole point of virtual environments, that two installations from the same library are isolated from each other, in particular for the case where two different versions of the same library are required for two different projects.
To be completely fair, there are ways to share one installation of the same library between two virtual environments and reasons to do so. One famous example I know of currently is in the newer releases of virtualenv (versions 20+). In short: this tool creates virtual environments and (under specific conditions) is able to reuse (share) the installations of pip, setuptools, and wheel in multiple environments, see the app-data seeder for virtualenv.
Some more discussions on the topic:
https://discuss.python.org/t/proposal-sharing-distrbution-installations-in-general/2524
https://discuss.python.org/t/optimizing-installs-of-many-virtualenvs-by-symlinking-packages/2983
https://github.com/pypa/packaging-problems/issues/328
A:
You can create the venv with virtualenv --system-site-packages so it can see the base system's packages, sharing them between dev and user environments. Then add the dev-specific testing packages.
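As a side note on the requirements split itself: pip supports including one requirements file from another, so the dev/test file only needs to list the extras. A minimal sketch using the package names from the question:
# requirements_test.txt
-r requirements.txt
pytest
asv
sphinx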
|
Python virtual environment (venv): Share libraries in usage and dev/test venvs
|
I am new to Python venvs, so sorry for a possibly stupid question.
I am developing a small library. I've created a dev virtual environment with all the packages necessary for using the library, and froze all the requirement versions to requirements.txt.
I would also like to create requirements_test.txt with all the packages needed for development and tests. So the user will install requirements from requirements.txt, while the developer installs from requirements_test.txt with all the necessary libs (e.g. pytest, asv, sphinx).
Now I've created the dev venv and I want to create the test venv; of course I don't want to install the same libs twice. Is it possible to share some libs from one venv to another?
|
[
"\nIs it possible to share some libs from one venv to another?\n\nNo. The same library (or application) will be installed once per virtual environment, the installations can not be shared between environments. And it is perfectly fine like this. That is the whole point of virtual environments, that two installations from the same library are isolated from each other, in particular for the case where two different versions of the same library are required for two different projects.\nTo be completely fair, there are ways to share one installation of the same library between two virtual environments and reasons to do so. One famous example I know of currently is in the newer releases of virtualenv (versions 20+). In short: this tool creates virtual environments and (under specific conditions) is able to reuse (share) the installations of pip, setuptools, and wheel in multiple environments, see the app-data seeder for virtualenv.\nSome more discussions on the topic:\n\nhttps://discuss.python.org/t/proposal-sharing-distrbution-installations-in-general/2524\nhttps://discuss.python.org/t/optimizing-installs-of-many-virtualenvs-by-symlinking-packages/2983\nhttps://github.com/pypa/packaging-problems/issues/328\n\n",
"You can use virtualenv --system-site-packages to symlink from the base system for sharing between dev and user. Then add the dev specific testing packages.\n"
] |
[
3,
0
] |
[
"I think it is recommended and advised to have multiple venvs, and multiple environments, be it on the same machine. so just have another venv. Its okay to have same library being present in both venvs.\n",
"Even with virtual environments, there are many libraries that come preinstalled with python and are not necessary in the package that you are developing, when I run pip freeze in a brand new virtual environment it dumps 30 packages, and surely they are not needed for my project.\nI recommend you to do the dependency maintenance manually (at least the production ones), this way you won't include useless libraries and you will keep your dependency file clean.\n"
] |
[
-1,
-1
] |
[
"python",
"virtualenv"
] |
stackoverflow_0060973272_python_virtualenv.txt
|
Q:
NavigationView inside TabView not displaying correctly in landscape mode
Navigation View inside Tabview
No issue in portrait mode
In landscape mode, navigation view and all subviews are "collapsed" into a top-level menu.
See screen shots below.
Is it normal behaviour?
Could not find any modifier to change this behaviour if it is normal.
Portrait mode
Landscape mode
Landscape mode after clicking top left menu
Test project showing issue:-
TabView {
ChartView().tabItem {
Label("Chart", systemImage: "chart.bar")
}
Page1View().tabItem {
Label("Received", systemImage: "tray.and.arrow.down.fill")
}
}
body of Page1View struct:-
var body: some View {
NavigationView {
VStack(spacing: 10) {
// ... removed for clarity
}
.toolbar {
ToolbarItem(placement: .navigationBarLeading) {
Text("tbar1")
}
ToolbarItem(placement: .navigationBarTrailing) {
Text("tbar2")
}
ToolbarItem(placement: .navigationBarTrailing) {
Text("tbar3")
}
}
}
}
A:
The solution to this issue is the same as the one listed in:-
Swiftui NavigationView + TabView doesn't show navbar item
Add a navigationViewStyle of .stack:-
NavigationView{
}
.navigationViewStyle(.stack)
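Applied to the question's Page1View, that looks like the following (a sketch showing only the relevant change):
var body: some View {
    NavigationView {
        VStack(spacing: 10) {
            // ... removed for clarity
        }
        .toolbar {
            ToolbarItem(placement: .navigationBarLeading) {
                Text("tbar1")
            }
            // ... other items as in the question
        }
    }
    .navigationViewStyle(.stack) // use stack navigation instead of the collapsed column/sidebar style
}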
|
NavigationView inside TabView not displaying correctly in landscape mode
|
Navigation View inside Tabview
No issue in portrait mode
In landscape mode, navigation view and all subviews are "collapsed" into a top-level menu.
See screen shots below.
Is it normal behaviour?
Could not find any modifier to change this behaviour if it is normal.
Portrait mode
Landscape mode
Landscape mode after clicking top left menu
Test project showing issue:-
TabView {
ChartView().tabItem {
Label("Chart", systemImage: "chart.bar")
}
Page1View().tabItem {
Label("Received", systemImage: "tray.and.arrow.down.fill")
}
}
body of Page1View struct:-
var body: some View {
NavigationView {
VStack(spacing: 10) {
// ... removed for clarity
}
.toolbar {
ToolbarItem(placement: .navigationBarLeading) {
Text("tbar1")
}
ToolbarItem(placement: .navigationBarTrailing) {
Text("tbar2")
}
ToolbarItem(placement: .navigationBarTrailing) {
Text("tbar3")
}
}
}
}
|
[
"Found solution to this issue is that same as listed in:-\nSwiftui NavigationView + TabView doesn't show navbar item\nAdded navigationViewStyle of .stack:-\nNavigationView{ \n}\n.navigationViewStyle(.stack)\n\n"
] |
[
0
] |
[] |
[] |
[
"ios",
"swiftui"
] |
stackoverflow_0074663908_ios_swiftui.txt
|
Q:
Overriding testNG.xml suite parameter with Jenkin parameter value through Maven goals
Trying to override a testNG.xml suite parameter with a Jenkins parameter value, but the values are not getting replaced. I want to replace the TestNG parameters with Jenkins parameters. Can someone please guide me? Versions used: TestNG '7.5' and OpenJDK '15'.
Maven goals : clean compile test -DtestNGXml=${testNGXml} -DenvironmentName=${environmentName} -DenvironmentClientID=${environmentClientID}
TestNG.xml
<suite name="HealthCheck_Suite" parallel="classes" thread-count=“1”>
<parameter name="environmentName" value="DEV" />
<parameter name="environmentClientID" value="BE11TEST" />
<test name="iOS_HealthCheck">
<classes>
<class name="MobileLoginTest">
<methods>
<include name="loginHealthCheckScript" />
</methods>
</class>
</classes>
</test>
</suite>
Pom Xml surefire plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M5</version>
<configuration>
<argLine>-javaagent:"${settings.localRepository}/org/aspectj/aspectjweaver/${aspectj.version}/aspectjweaver-${aspectj.version}.jar"</argLine>
<suiteXmlFiles>
<suiteXmlFile>${testNGXml}</suiteXmlFile>
</suiteXmlFiles>
</configuration>
<dependencies>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjweaver</artifactId>
<version>${aspectj.version}</version>
<scope>runtime</scope>
</dependency>
</dependencies>
</plugin>
A:
Here's a simplified setup that should work for you.
Lets assume that your test looks like below:
package com.rationaleemotions;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;
public class AppTest {
@Test
@Parameters({"environmentName", "environmentClientID"})
public void testMethod(String environmentName, String environmentClientID) {
System.err.println("EnvironmentName: " +
environmentName + ", environmentClientID: " + environmentClientID);
}
}
A suite xml would look like below:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="jenkins_Suite" parallel="classes" thread-count="1">
<parameter name="environmentName" value="DEV"/>
<parameter name="environmentClientID" value="BE11TEST"/>
<test name="jenkins_test">
<classes>
<class name="com.rationaleemotions.AppTest"/>
</classes>
</test>
</suite>
I have a TestNG dependency of 7.6.1 and my surefire-plugin configuration looks like below:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M5</version>
<configuration>
<suiteXmlFiles>
<suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
</suiteXmlFiles>
</configuration>
</plugin>
I now create a parameterised job, that basically has 2 string parameters:
env - This represents the environment
clientId - This represents the client id
The maven goal would look like below
clean test -DenvironmentName="${env}" -DenvironmentClientID="${clientId}"
This should now let you override the parameter values defined in your suite xml file, via JVM arguments.
This is possible because TestNG already lets you override values in <parameter> tags via JVM arguments.
So:
<parameter name="environmentName" value="DEV"/> - Can be overridden via -DenvironmentName
<parameter name="environmentClientID" value="BE11TEST"/> - Can be overridden via -DenvironmentClientID
|
Overriding testNG.xml suite parameter with Jenkin parameter value through Maven goals
|
Trying to override a testNG.xml suite parameter with a Jenkins parameter value, but the values are not getting replaced. I want to replace the TestNG parameters with Jenkins parameters. Can someone please guide me? Versions used: TestNG '7.5' and OpenJDK '15'.
Maven goals : clean compile test -DtestNGXml=${testNGXml} -DenvironmentName=${environmentName} -DenvironmentClientID=${environmentClientID}
TestNG.xml
<suite name="HealthCheck_Suite" parallel="classes" thread-count=“1”>
<parameter name="environmentName" value="DEV" />
<parameter name="environmentClientID" value="BE11TEST" />
<test name="iOS_HealthCheck">
<classes>
<class name="MobileLoginTest">
<methods>
<include name="loginHealthCheckScript" />
</methods>
</class>
</classes>
</test>
</suite>
Pom Xml surefire plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M5</version>
<configuration>
<argLine>-javaagent:"${settings.localRepository}/org/aspectj/aspectjweaver/${aspectj.version}/aspectjweaver-${aspectj.version}.jar"</argLine>
<suiteXmlFiles>
<suiteXmlFile>${testNGXml}</suiteXmlFile>
</suiteXmlFiles>
</configuration>
<dependencies>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjweaver</artifactId>
<version>${aspectj.version}</version>
<scope>runtime</scope>
</dependency>
</dependencies>
</plugin>
|
[
"Here's a simplified setup that should work for you.\nLets assume that your test looks like below:\npackage com.rationaleemotions;\n\nimport org.testng.annotations.Parameters;\nimport org.testng.annotations.Test;\n\npublic class AppTest {\n\n @Test\n @Parameters({\"environmentName\", \"environmentClientID\"})\n public void testMethod(String environmentName, String environmentClientID) {\n System.err.println(\"EnvironmentName: \" +\n environmentName + \", environmentClientID: \" + environmentClientID);\n }\n}\n\nA suite xml would look like below:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE suite SYSTEM \"https://testng.org/testng-1.0.dtd\">\n<suite name=\"jenkins_Suite\" parallel=\"classes\" thread-count=\"1\">\n <parameter name=\"environmentName\" value=\"DEV\"/>\n <parameter name=\"environmentClientID\" value=\"BE11TEST\"/>\n <test name=\"jenkins_test\">\n <classes>\n <class name=\"com.rationaleemotions.AppTest\"/>\n </classes>\n </test>\n</suite>\n\nI have a TestNG dependency of 7.6.1 and my surefire-plugin configuration looks like below:\n<plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-surefire-plugin</artifactId>\n <version>3.0.0-M5</version>\n <configuration>\n <suiteXmlFiles>\n <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>\n </suiteXmlFiles>\n </configuration>\n</plugin>\n\nI now create a parameterised job, that basically has 2 string parameters:\n\nenv - This represents the environment\nclientId - This represents the client id\n\nThe maven goal would look like below\nclean test -DenvironmentName=\"${env}\" -DenvironmentClientID=\"${clientId}\"\n\nThis should now let you override the parameter values defined in your suite xml file, via JVM arguments.\nThis is possible because TestNG already lets you override values in <parameter> tags via JVM arguments.\nSo:\n\n<parameter name=\"environmentName\" value=\"DEV\"/> - Can be overridden via -DenvironmentName\n<parameter name=\"environmentClientID\" value=\"BE11TEST\"/> - Can be overridden via -DenvironmentClientID\n\n"
] |
[
0
] |
[] |
[] |
[
"testng"
] |
stackoverflow_0074622905_testng.txt
|
Q:
How to access GMail APIs using api key or username password?
I am following these instructions. I am wondering: is there a way to authenticate to the Gmail APIs without OAuth, like an API key or username/password? Using OAuth involves manual intervention.
A:
The Gmail API contains private user data. In order to access private user data you must have the user's permission to access it.
You can access the Gmail API using OAuth2 to request the account user's permission to access their data. If this is a G Suite account then you can set up domain-wide delegation to a service account and access it that way.
If it's not a G Suite account you can have the user authenticate your application once and then store the refresh token, using that to gain a new access token as needed, but you will always need the user to authenticate your application at least once to get the refresh token.
Login and password is called client login and was turned off by Google in 2015. You can also go directly through the SMTP or IMAP servers using the user's login and password.
A:
The link in DalmTo's answer above is now outdated. I am also setting this up for a Google Workspace (formerly GSuite) account, and here is the updated link to Domain-wide delegation with a Service Account: https://developers.google.com/workspace/guides/create-credentials#service-account
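For the domain-wide delegation route, a minimal Python sketch using the google-auth and google-api-python-client libraries; the key file path, scope, and impersonated address are assumptions:
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

# service-account.json and the impersonated user are placeholders
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("user@yourdomain.com")

gmail = build("gmail", "v1", credentials=creds)
labels = gmail.users().labels().list(userId="me").execute()
print(labels)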
|
How to access GMail APIs using api key or username password?
|
I am following these instructions. I am wondering: is there a way to authenticate to the Gmail APIs without OAuth, like an API key or username/password? Using OAuth involves manual intervention.
|
[
"The GMail api contains private user data. In order to access private user data you must have their permission to access it.\nYou can access the Gmail api using Oauth2 to request permission of the user of the account to access their data. If this is a gsuite account then you can set up domain wide delegation to a service account and access it that way.\nIf its not a gsuite account you can have the user authenticate your application once and then store the refresh token using that to gain a new access token as needed but you will always need the users to authenticate your application at least once to get the refresh token.\nLogin and password is called client login and was turned off by google in 2015. You can also go directly though the smtp or Imap servers using the users login and pass word. \n",
"The link in DalmTo's answer above is now outdated. I am also setting this up for a Google Workspace (formally GSuite) account, and here is the updated link to Domain-wide delegation with a Service Account: https://developers.google.com/workspace/guides/create-credentials#service-account\n"
] |
[
3,
0
] |
[] |
[] |
[
"gmail_api",
"google_cloud_platform"
] |
stackoverflow_0051999422_gmail_api_google_cloud_platform.txt
|
Q:
Failed to execute 'fetch' on 'WorkerGlobalScope'
I am trying to fetch data from the server inside a web worker. But in dev tools I get the same error every time.
Uncaught (in promise) TypeError: Failed to execute 'fetch' on 'WorkerGlobalScope': Failed to parse URL from /api/books
I saw this answer about service workers, but it's not working for me.
WebWorker file code!
// data-handling.web-worker.js
const workercode = () => {
onmessage = async e => {
const res = await fetch('/api/books');
postMessage(res);
}
};
let code = workercode.toString();
code = code.substring(code.indexOf('{')+1, code.lastIndexOf('}'));
const blob = new Blob([code], {type: 'application/javascript'});
const worker_script = URL.createObjectURL(blob);
export default worker_script;
Imported in React component.
import data_handling_worker from "./data-handling.web-worker";
const dataHandlingWorker = new Worker(data_handling_worker);
Running and posting messages inside of the hooks!
const [searchQuery, setSearchQuery] = useState('');
const [booksList, setBooksList] = useState([]);
useEffect(() => {
dataHandlingWorker.postMessage(searchQuery);
}, [searchQuery]);
useEffect(() => {
dataHandlingWorker.onmessage = (m) => {
setBooksList(m.data);
};
}, []);
A:
Workers created from a blob:// URI have their baseURI set to that blob:// URI.
To fetch a relative URL the browser first has to build an absolute URI from the current realm's baseURI.
So in your case, the browser will try to generate an absolute URL from /api/books using blob:[origin]/[UUID] as the base. This is what throws:
const worker_script = `
postMessage( { baseURI: self.location.href } );
try {
const absolute = new URL( "api/books", self.location.href );
}
catch( err ) {
postMessage( { err: err.toString() } );
}
`;
const worker_url = URL.createObjectURL( new Blob( [ worker_script ] ) );
const worker = new Worker( worker_url );
worker.onmessage = (evt) => console.log( evt.data );
To workaround that issue you have two options:
Use an absolute URL. This way you won't face that problem.
Pass the baseURI from your main thread to your Worker's script. This way you will be able to create the absolute URL yourself using the URL constructor.
The first option is really simple, so I guess you don't need an example.
For the second, since we can't host relative files in StackSnippets, I had to host it in this plunker.
Here is the code:
const worker_script = `
onmessage = (evt) => {
const base = evt.data;
const absolute = new URL( "api/books", base );
fetch( absolute )
.then( (resp) => resp.text() )
.then( (txt) => postMessage( txt ) )
.catch( console.error );
};
`;
const worker_url = URL.createObjectURL( new Blob( [ worker_script ] ) );
const worker = new Worker( worker_url );
worker.onmessage = (evt) => document.body.append( evt.data );
// pass the main thread's baseURI
worker.postMessage( document.baseURI );
A:
My problem was that I wanted to make a conditional response depending on the cache.
I was passing the request object; paying attention to this post and using the absolute URL solved the problem.
const respuesta = caches.match('codeGalery/cat.jpg')
.then((response) =>
{
if (response !== undefined)
{
return response;
}
//wrong
//return fetch(FetchEvent.request, {credentials:"same-origin"});
//correct
return fetch(FetchEvent.request.url, {credentials:"same-origin"});
});
A:
const worker_script = `
postMessage( { baseURI: self.location.href } );
try {
const absolute = new URL( "api/books", self.location.href );
}
catch( err ) {
postMessage( { err: err.toString() } );
}
`;
const worker_url = URL.createObjectURL( new Blob( [ worker_script ] ) );
const worker = new Worker( worker_url );
worker.onmessage = (evt) => console.log( evt.data );
|
Failed to execute 'fetch' on 'WorkerGlobalScope'
|
I am trying to fetch data from the server inside a web worker. But in dev tools I get the same error every time.
Uncaught (in promise) TypeError: Failed to execute 'fetch' on 'WorkerGlobalScope': Failed to parse URL from /api/books
I saw this answer about service workers, but it's not working for me.
WebWorker file code!
// data-handling.web-worker.js
const workercode = () => {
onmessage = async e => {
const res = await fetch('/api/books');
postMessage(res);
}
};
let code = workercode.toString();
code = code.substring(code.indexOf('{')+1, code.lastIndexOf('}'));
const blob = new Blob([code], {type: 'application/javascript'});
const worker_script = URL.createObjectURL(blob);
export default worker_script;
Imported in React component.
import data_handling_worker from "./data-handling.web-worker";
const dataHandlingWorker = new Worker(data_handling_worker);
Running and posting messages inside of the hooks!
const [searchQuery, setSearchQuery] = useState('');
const [booksList, setBooksList] = useState([]);
useEffect(() => {
dataHandlingWorker.postMessage(searchQuery);
}, [searchQuery]);
useEffect(() => {
dataHandlingWorker.onmessage = (m) => {
setBooksList(m.data);
};
}, []);
|
[
"Workers created from a blob:// URI have their baseURI set to set to that blob:// URI. \nTo fetch a relative URL the browser first has to build an absolute URI from the current realm's baseURI.\nSo in your case, the browser will try to generate an absolute URL from /api/books using blob:[origin]/[UUID] as the base. This is what throws:\n\n\nconst worker_script = `\r\n postMessage( { baseURI: self.location.href } );\r\n try {\r\n const absolute = new URL( \"api/books\", self.location.href );\r\n }\r\n catch( err ) {\r\n postMessage( { err: err.toString() } );\r\n }\r\n`;\r\nconst worker_url = URL.createObjectURL( new Blob( [ worker_script ] ) );\r\nconst worker = new Worker( worker_url );\r\nworker.onmessage = (evt) => console.log( evt.data );\n\n\n\nTo workaround that issue you have two options:\n\nUse an absolute URL. This way you won't face that problem.\nPass the baseURI from your main thread to your Worker's script. This way you will be able to create the absolute URL yourself using the URL constructor.\n\nThe first option is really simple, so I guess you don't need an example.\nFor the second, since we can't host relative files in StackSnippets, I had to host it in this plunker.\nHere is the code:\nconst worker_script = `\n onmessage = (evt) => {\n const base = evt.data;\n const absolute = new URL( \"api/books\", base );\n fetch( absolute )\n .then( (resp) => resp.text() )\n .then( (txt) => postMessage( txt ) )\n .catch( console.error );\n };\n`;\nconst worker_url = URL.createObjectURL( new Blob( [ worker_script ] ) );\nconst worker = new Worker( worker_url );\nworker.onmessage = (evt) => document.body.append( evt.data );\n// pass the main thread's baseURI\nworker.postMessage( document.baseURI );\n\n",
"My problem was that I wanted to make a conditional response depending on the cache.\n\nI was using the request object and paying attention to this post and using the absolute url solved the problem.\nconst respuesta = caches.match('codeGalery/cat.jpg')\n.then((response) => \n{\n if (response !== undefined)\n {\n return response;\n }\n //wrong\n //return fetch(FetchEvent.request, {credentials:\"same-origin\"});\n //correct\n return fetch(FetchEvent.request.url, {credentials:\"same-origin\"});\n\n});\n\n",
"\n\nconst worker_script = `\n postMessage( { baseURI: self.location.href } );\n try {\n const absolute = new URL( \"api/books\", self.location.href );\n }\n catch( err ) {\n postMessage( { err: err.toString() } );\n }\n`;\nconst worker_url = URL.createObjectURL( new Blob( [ worker_script ] ) );\nconst worker = new Worker( worker_url );\nworker.onmessage = (evt) => console.log( evt.data );\n\n\n\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"javascript",
"web_worker"
] |
stackoverflow_0060836401_javascript_web_worker.txt
|
Q:
how to replace element text with element title in jquery?
I need to dynamically swap out the text of elements with a class (timeago) with their title attribute. There will be multiple divs containing this class; it could be 1 or it could be 1000.
here is an example:
<abbr class="timeago" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
I appreciate the help.
$('.timeago').attr('title', $('.timeago').text());
This replaces titles with text, but I need to do the opposite, and for all .timeago elements. When I apply :first-child to test, it is still grabbing the text of all the .timeago elements that exist.
Is there an easy way to make this work?
A:
// jQuery 1
$(".timeago").each(function(){$(this).text($(this).attr("title"))});
// jQuery 2
$(".timeago1").each(function(){this.textContent = this.title});
// Vanilla-JS
document.querySelectorAll(".timeago2").forEach(e => e.textContent = e.title);
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<abbr class="timeago" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
<abbr class="timeago" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
<br>
<abbr class="timeago1" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
<abbr class="timeago1" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
<br>
<abbr class="timeago2" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
<abbr class="timeago2" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
|
how to replace element text with element title in jquery?
|
I need to dynamically swap out the text of elements with a class (timeago) with their title attribute. There will be multiple divs containing this class; it could be 1 or it could be 1000.
here is an example:
<abbr class="timeago" data-datetime="2022-10-30T18:54:39Z" title="10/30/2022 11:54 AM">15 days ago</abbr>
I appreciate the help.
$('.timeago').attr('title', $('.timeago').text());
This replaces titles with text, but I need to do the opposite, and for all .timeago elements. When I apply :first-child to test, it is still grabbing the text of all the .timeago elements that exist.
Is there an easy way to make this work?
|
[
"\n\n// jQuery 1\n$(\".timeago\").each(function(){$(this).text($(this).attr(\"title\"))});\n\n// jQuery 2\n$(\".timeago1\").each(function(){this.textContent = this.title});\n\n// Vanilla-JS\ndocument.querySelectorAll(\".timeago2\").forEach(e => e.textContent = e.title);\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\n\n<abbr class=\"timeago\" data-datetime=\"2022-10-30T18:54:39Z\" title=\"10/30/2022 11:54 AM\">15 days ago</abbr>\n<abbr class=\"timeago\" data-datetime=\"2022-10-30T18:54:39Z\" title=\"10/30/2022 11:54 AM\">15 days ago</abbr>\n<br>\n<abbr class=\"timeago1\" data-datetime=\"2022-10-30T18:54:39Z\" title=\"10/30/2022 11:54 AM\">15 days ago</abbr>\n<abbr class=\"timeago1\" data-datetime=\"2022-10-30T18:54:39Z\" title=\"10/30/2022 11:54 AM\">15 days ago</abbr>\n<br>\n<abbr class=\"timeago2\" data-datetime=\"2022-10-30T18:54:39Z\" title=\"10/30/2022 11:54 AM\">15 days ago</abbr>\n<abbr class=\"timeago2\" data-datetime=\"2022-10-30T18:54:39Z\" title=\"10/30/2022 11:54 AM\">15 days ago</abbr>\n\n\n\n"
] |
[
-1
] |
[] |
[] |
[
"azureportal",
"jquery"
] |
stackoverflow_0074663868_azureportal_jquery.txt
|
Q:
K8S Pod OOM killed with apparent memory leak, where did the memory go?
I have an issue with a K8S POD getting OOM killed, but with some weird conditions and observations.
The pod is a golang 1.15.6 based REST service, running on X86 64 bit architecture. When the pod runs on VM based clusters, everything is fine, the service behaves normally. When the service runs on nodes provisioned directly on hardware, it appears to experience a memory leak and ends up getting OOMed.
Observations are that when running on the problematic configuration, "kubectl top pod" will report continually increasing memory utilization until the defined limit (64MiB) is reached, at which time OOM killer is invoked.
Observations from inside the pod using "top" suggest that memory usage of the various processes inside the pod are stable, using around 40MiB RSS. The values for VIRT,RES,SHR as reported by top remain stable over time, with only minor fluctuations.
I've analyzed the golang code extensively, including obtaining memory profiles over time (pprof). No sign of a leak in the actual golang code, which tallies with correct operation in VM based environment and observations from top.
The OOM message below also suggests that the total RSS used by the pod was only 38.75MiB (sum/RSS = 9919 pages *4k = 38.75MiB).
kernel: [651076.945552] xxxxxxxxxxxx invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=999
kernel: [651076.945556] CPU: 35 PID: 158127 Comm: xxxxxxxxxxxx Not tainted 5.4.0-73-generic #82~18.04.1
kernel: [651076.945558] Call Trace:
kernel: [651076.945567] dump_stack+0x6d/0x8b
kernel: [651076.945573] dump_header+0x4f/0x200
kernel: [651076.945575] oom_kill_process+0xe6/0x120
kernel: [651076.945577] out_of_memory+0x109/0x510
kernel: [651076.945582] mem_cgroup_out_of_memory+0xbb/0xd0
kernel: [651076.945584] try_charge+0x79a/0x7d0
kernel: [651076.945585] mem_cgroup_try_charge+0x75/0x190
kernel: [651076.945587] __add_to_page_cache_locked+0x1e1/0x340
kernel: [651076.945592] ? scan_shadow_nodes+0x30/0x30
kernel: [651076.945594] add_to_page_cache_lru+0x4f/0xd0
kernel: [651076.945595] pagecache_get_page+0xea/0x2c0
kernel: [651076.945596] filemap_fault+0x685/0xb80
kernel: [651076.945600] ? __switch_to_asm+0x40/0x70
kernel: [651076.945601] ? __switch_to_asm+0x34/0x70
kernel: [651076.945602] ? __switch_to_asm+0x40/0x70
kernel: [651076.945603] ? __switch_to_asm+0x34/0x70
kernel: [651076.945604] ? __switch_to_asm+0x40/0x70
kernel: [651076.945605] ? __switch_to_asm+0x34/0x70
kernel: [651076.945606] ? __switch_to_asm+0x40/0x70
kernel: [651076.945608] ? filemap_map_pages+0x181/0x3b0
kernel: [651076.945611] ext4_filemap_fault+0x31/0x50
kernel: [651076.945614] __do_fault+0x57/0x110
kernel: [651076.945615] __handle_mm_fault+0xdde/0x1270
kernel: [651076.945617] handle_mm_fault+0xcb/0x210
kernel: [651076.945621] __do_page_fault+0x2a1/0x4d0
kernel: [651076.945625] ? __audit_syscall_exit+0x1e8/0x2a0
kernel: [651076.945627] do_page_fault+0x2c/0xe0
kernel: [651076.945628] page_fault+0x34/0x40
kernel: [651076.945630] RIP: 0033:0x5606e773349b
kernel: [651076.945634] Code: Bad RIP value.
kernel: [651076.945635] RSP: 002b:00007fbdf9088df0 EFLAGS: 00010206
kernel: [651076.945637] RAX: 0000000000000000 RBX: 0000000000004e20 RCX: 00005606e775ce7d
kernel: [651076.945637] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fbdf9088dd0
kernel: [651076.945638] RBP: 00007fbdf9088e48 R08: 0000000000006c50 R09: 00007fbdf9088dc0
kernel: [651076.945638] R10: 0000000000000000 R11: 0000000000000202 R12: 00007fbdf9088dd0
kernel: [651076.945639] R13: 0000000000000000 R14: 00005606e7c6140c R15: 0000000000000000
kernel: [651076.945640] memory: usage 65536kB, limit 65536kB, failcnt 26279526
kernel: [651076.945641] memory+swap: usage 65536kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] kmem: usage 37468kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] Memory cgroup stats for /kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe:
kernel: [651076.945652] anon 25112576
kernel: [651076.945652] file 0
kernel: [651076.945652] kernel_stack 221184
kernel: [651076.945652] slab 41406464
kernel: [651076.945652] sock 0
kernel: [651076.945652] shmem 0
kernel: [651076.945652] file_mapped 2838528
kernel: [651076.945652] file_dirty 0
kernel: [651076.945652] file_writeback 0
kernel: [651076.945652] anon_thp 0
kernel: [651076.945652] inactive_anon 0
kernel: [651076.945652] active_anon 25411584
kernel: [651076.945652] inactive_file 0
kernel: [651076.945652] active_file 536576
kernel: [651076.945652] unevictable 0
kernel: [651076.945652] slab_reclaimable 16769024
kernel: [651076.945652] slab_unreclaimable 24637440
kernel: [651076.945652] pgfault 7211542
kernel: [651076.945652] pgmajfault 2895749
kernel: [651076.945652] workingset_refault 71200645
kernel: [651076.945652] workingset_activate 5871824
kernel: [651076.945652] workingset_nodereclaim 330
kernel: [651076.945652] pgrefill 39987763
kernel: [651076.945652] pgscan 144468270
kernel: [651076.945652] pgsteal 71255273
kernel: [651076.945652] pgactivate 27649178
kernel: [651076.945652] pgdeactivate 33525031
kernel: [651076.945653] Tasks state (memory values in pages):
kernel: [651076.945653] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
kernel: [651076.945656] [ 151091] 0 151091 255 1 36864 0 -998 pause
kernel: [651076.945675] [ 157986] 0 157986 58 4 32768 0 999 dumb-init
kernel: [651076.945676] [ 158060] 0 158060 13792 869 151552 0 999 su
kernel: [651076.945678] [ 158061] 1234 158061 18476 6452 192512 0 999 yyyyyy
kernel: [651076.945679] [ 158124] 1234 158124 1161 224 53248 0 999 sh
kernel: [651076.945681] [ 158125] 1234 158125 348755 2369 233472 0 999 xxxxxxxxxxxx
kernel: [651076.945682] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,mems_allowed=0-3,oom_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe,task_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe/a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,task=yyyyyy,pid=158061,uid=1234
kernel: [651076.945695] Memory cgroup out of memory: Killed process 158061 (yyyyyy) total-vm:73904kB, anon-rss:17008kB, file-rss:8800kB, shmem-rss:0kB, UID:1234 pgtables:188kB oom_score_adj:999
kernel: [651076.947429] oom_reaper: reaped process 158061 (yyyyyy), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
The OOM message clearly states that usage = 65536kB, limit = 65536kB, but I don't immediately see where the approximately 25MiB of memory not accounted for under RSS has gone.
I see slab_unreclaimable = 24637440 (24MiB), which is approximately the amount of memory that appears to be unaccounted for; I'm not sure if there is any significance in this though.
Looking for any suggestions as to where the memory is being used. Any input would be most welcome.
A:
I see slab_unreclaimable = 24637440, (24MiB), which is approximately the amount of memory that appears to be unaccounted for...
For slab details you can try the command slabinfo or do cat /proc/slabinfo. The table could point you to where the memory has gone to.
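If you want to attribute the slab usage programmatically, here is a minimal sketch in Go (only because the affected service is Go; reading /proc/slabinfo typically requires root) that parses /proc/slabinfo and prints the caches holding the most memory:

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

func main() {
	// /proc/slabinfo data lines look like:
	// name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : ...
	f, err := os.Open("/proc/slabinfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, "open /proc/slabinfo:", err)
		os.Exit(1)
	}
	defer f.Close()

	type cache struct {
		name  string
		bytes int64
	}
	var caches []cache

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// Skip the version line, the "# name ..." header, and malformed lines.
		if len(fields) < 4 || fields[0] == "slabinfo" || fields[0] == "#" {
			continue
		}
		numObjs, err1 := strconv.ParseInt(fields[2], 10, 64) // <num_objs>
		objSize, err2 := strconv.ParseInt(fields[3], 10, 64) // <objsize> in bytes
		if err1 != nil || err2 != nil {
			continue
		}
		caches = append(caches, cache{fields[0], numObjs * objSize})
	}

	// Biggest consumers first; print the top ten.
	sort.Slice(caches, func(i, j int) bool { return caches[i].bytes > caches[j].bytes })
	for i, c := range caches {
		if i == 10 {
			break
		}
		fmt.Printf("%-30s %8d KiB\n", c.name, c.bytes/1024)
	}
}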
A:
This also happened on my end. It was a Python web service, and it was running fine on a VM node. But in a pod, I saw intermittent, sudden, continuous climbs in memory until it got killed by the OOM signal. I did a load test on the original server and couldn't find a memory leak. I guess there was something going on outside the app itself in the pod.
|
K8S Pod OOM killed with apparent memory leak, where did the memory go?
|
I have an issue with a K8S POD getting OOM killed, but with some weird conditions and observations.
The pod is a golang 1.15.6 based REST service, running on X86 64 bit architecture. When the pod runs on VM based clusters, everything is fine, the service behaves normally. When the service runs on nodes provisioned directly on hardware, it appears to experience a memory leak and ends up getting OOMed.
Observations are that when running on the problematic configuration, "kubectl top pod" will report continually increasing memory utilization until the defined limit (64MiB) is reached, at which time OOM killer is invoked.
Observations from inside the pod using "top" suggest that memory usage of the various processes inside the pod are stable, using around 40MiB RSS. The values for VIRT,RES,SHR as reported by top remain stable over time, with only minor fluctuations.
I've analyzed the golang code extensively, including obtaining memory profiles over time (pprof). No sign of a leak in the actual golang code, which tallies with correct operation in VM based environment and observations from top.
The OOM message below also suggests that the total RSS used by the pod was only 38.75MiB (sum/RSS = 9919 pages *4k = 38.75MiB).
kernel: [651076.945552] xxxxxxxxxxxx invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=999
kernel: [651076.945556] CPU: 35 PID: 158127 Comm: xxxxxxxxxxxx Not tainted 5.4.0-73-generic #82~18.04.1
kernel: [651076.945558] Call Trace:
kernel: [651076.945567] dump_stack+0x6d/0x8b
kernel: [651076.945573] dump_header+0x4f/0x200
kernel: [651076.945575] oom_kill_process+0xe6/0x120
kernel: [651076.945577] out_of_memory+0x109/0x510
kernel: [651076.945582] mem_cgroup_out_of_memory+0xbb/0xd0
kernel: [651076.945584] try_charge+0x79a/0x7d0
kernel: [651076.945585] mem_cgroup_try_charge+0x75/0x190
kernel: [651076.945587] __add_to_page_cache_locked+0x1e1/0x340
kernel: [651076.945592] ? scan_shadow_nodes+0x30/0x30
kernel: [651076.945594] add_to_page_cache_lru+0x4f/0xd0
kernel: [651076.945595] pagecache_get_page+0xea/0x2c0
kernel: [651076.945596] filemap_fault+0x685/0xb80
kernel: [651076.945600] ? __switch_to_asm+0x40/0x70
kernel: [651076.945601] ? __switch_to_asm+0x34/0x70
kernel: [651076.945602] ? __switch_to_asm+0x40/0x70
kernel: [651076.945603] ? __switch_to_asm+0x34/0x70
kernel: [651076.945604] ? __switch_to_asm+0x40/0x70
kernel: [651076.945605] ? __switch_to_asm+0x34/0x70
kernel: [651076.945606] ? __switch_to_asm+0x40/0x70
kernel: [651076.945608] ? filemap_map_pages+0x181/0x3b0
kernel: [651076.945611] ext4_filemap_fault+0x31/0x50
kernel: [651076.945614] __do_fault+0x57/0x110
kernel: [651076.945615] __handle_mm_fault+0xdde/0x1270
kernel: [651076.945617] handle_mm_fault+0xcb/0x210
kernel: [651076.945621] __do_page_fault+0x2a1/0x4d0
kernel: [651076.945625] ? __audit_syscall_exit+0x1e8/0x2a0
kernel: [651076.945627] do_page_fault+0x2c/0xe0
kernel: [651076.945628] page_fault+0x34/0x40
kernel: [651076.945630] RIP: 0033:0x5606e773349b
kernel: [651076.945634] Code: Bad RIP value.
kernel: [651076.945635] RSP: 002b:00007fbdf9088df0 EFLAGS: 00010206
kernel: [651076.945637] RAX: 0000000000000000 RBX: 0000000000004e20 RCX: 00005606e775ce7d
kernel: [651076.945637] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fbdf9088dd0
kernel: [651076.945638] RBP: 00007fbdf9088e48 R08: 0000000000006c50 R09: 00007fbdf9088dc0
kernel: [651076.945638] R10: 0000000000000000 R11: 0000000000000202 R12: 00007fbdf9088dd0
kernel: [651076.945639] R13: 0000000000000000 R14: 00005606e7c6140c R15: 0000000000000000
kernel: [651076.945640] memory: usage 65536kB, limit 65536kB, failcnt 26279526
kernel: [651076.945641] memory+swap: usage 65536kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] kmem: usage 37468kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] Memory cgroup stats for /kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe:
kernel: [651076.945652] anon 25112576
kernel: [651076.945652] file 0
kernel: [651076.945652] kernel_stack 221184
kernel: [651076.945652] slab 41406464
kernel: [651076.945652] sock 0
kernel: [651076.945652] shmem 0
kernel: [651076.945652] file_mapped 2838528
kernel: [651076.945652] file_dirty 0
kernel: [651076.945652] file_writeback 0
kernel: [651076.945652] anon_thp 0
kernel: [651076.945652] inactive_anon 0
kernel: [651076.945652] active_anon 25411584
kernel: [651076.945652] inactive_file 0
kernel: [651076.945652] active_file 536576
kernel: [651076.945652] unevictable 0
kernel: [651076.945652] slab_reclaimable 16769024
kernel: [651076.945652] slab_unreclaimable 24637440
kernel: [651076.945652] pgfault 7211542
kernel: [651076.945652] pgmajfault 2895749
kernel: [651076.945652] workingset_refault 71200645
kernel: [651076.945652] workingset_activate 5871824
kernel: [651076.945652] workingset_nodereclaim 330
kernel: [651076.945652] pgrefill 39987763
kernel: [651076.945652] pgscan 144468270
kernel: [651076.945652] pgsteal 71255273
kernel: [651076.945652] pgactivate 27649178
kernel: [651076.945652] pgdeactivate 33525031
kernel: [651076.945653] Tasks state (memory values in pages):
kernel: [651076.945653] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
kernel: [651076.945656] [ 151091] 0 151091 255 1 36864 0 -998 pause
kernel: [651076.945675] [ 157986] 0 157986 58 4 32768 0 999 dumb-init
kernel: [651076.945676] [ 158060] 0 158060 13792 869 151552 0 999 su
kernel: [651076.945678] [ 158061] 1234 158061 18476 6452 192512 0 999 yyyyyy
kernel: [651076.945679] [ 158124] 1234 158124 1161 224 53248 0 999 sh
kernel: [651076.945681] [ 158125] 1234 158125 348755 2369 233472 0 999 xxxxxxxxxxxx
kernel: [651076.945682] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,mems_allowed=0-3,oom_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe,task_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe/a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,task=yyyyyy,pid=158061,uid=1234
kernel: [651076.945695] Memory cgroup out of memory: Killed process 158061 (yyyyyy) total-vm:73904kB, anon-rss:17008kB, file-rss:8800kB, shmem-rss:0kB, UID:1234 pgtables:188kB oom_score_adj:999
kernel: [651076.947429] oom_reaper: reaped process 158061 (yyyyyy), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
The OOM message clearly states that usage = 65536kB, limit = 65536kB, but I don't immediately see where the approximately 25MiB of memory not accounted for under RSS has gone.
I see slab_unreclaimable = 24637440 (24MiB), which is approximately the amount of memory that appears to be unaccounted for; I'm not sure if there is any significance in this though.
Looking for any suggestions as to where the memory is being used. Any input would be most welcome.
|
[
"I see slab_unreclaimable = 24637440, (24MiB), which is approximately the amount of memory that appears to be unaccounted for...\nFor slab details you can try the command slabinfo or do cat /proc/slabinfo. The table could point you to where the memory has gone to.\n",
"This also happened to my end. It was python web service, and it was running fine one vm node. But in pod, i see intermittent sudden continous climb up in memory until it got killed by oom signal. I did load test in original server and couldn't find memory leak. I guess there was something going on outside the app itself in the pod.\n"
] |
[
0,
0
] |
[] |
[] |
[
"go",
"kubernetes",
"linux",
"memory_leaks",
"out_of_memory"
] |
stackoverflow_0071297987_go_kubernetes_linux_memory_leaks_out_of_memory.txt
|
Q:
Pass query params to an endpoint in rest assured
I am new to Rest Assured, and I have some requests which I want to make as part of it. I want the query parameters to be passed in the request as test data dynamically, without hard-coding them. These are my requests.
{{Base_Url}}/master-data/v1/calendars/GBL?from=2022-11-29&to=2022-11-30&monthEnd=true
{{Base_Url}}/master-data/v1/calendars/GBL?from=2022-11-29&to=2022-11-30&monthEnd=false
Normally in rest assured we pass query params as
Response res = httpRequest.queryParam("ISBN","9781449325862").get("/Book");
But in my case the query parameters are a bit complicated, like GBL?from=2022-11-29&to=2022-11-30&monthEnd=true, and these aren't straightforward. How do I handle these in Rest Assured?
A:
There are some ways to do that.
Use queryParam()
Response res = httpRequest
.queryParam("from","2022-11-29")
.queryParam("to","2022-11-30")
.queryParam("monthEnd",true)
.get("/master-data/v1/calendars/GBL");
Use Map<String, ?>
// Map.of is available from Java 9 onwards
Map<String, ?> params = Map.of("from", "2022-11-29", "to", "2022-11-30", "monthEnd", true);
Response res = httpRequest
.queryParams(params)
.get("/master-data/v1/calendars/GBL");
|
Pass query params to an endpoint in rest assured
|
I am new to Rest Assured, and I have some requests which I want to make as part of it. I want the query parameters to be passed in the request as test data dynamically, without hard-coding them. These are my requests.
{{Base_Url}}/master-data/v1/calendars/GBL?from=2022-11-29&to=2022-11-30&monthEnd=true
{{Base_Url}}/master-data/v1/calendars/GBL?from=2022-11-29&to=2022-11-30&monthEnd=false
Normally in rest assured we pass query params as
Response res = httpRequest.queryParam("ISBN","9781449325862").get("/Book");
But in my case the query parameters are a bit complicated, like GBL?from=2022-11-29&to=2022-11-30&monthEnd=true, and these aren't straightforward. How do I handle these in Rest Assured?
|
[
"There are some ways to do that.\n\nUse queryParam()\n\nResponse res = httpRequest\n .queryParam(\"from\",\"2022-11-29\")\n .queryParam(\"to\",\"2022-11-30\")\n .queryParam(\"monthEnd\",true)\n .get(\"/master-data/v1/calendars/GBL\");\n\n\nUse Map<String, ?>\n\n//work for java 11\nMap<String, ?> params = Map.of(\"from\", \"2022-11-29\", \"to\", \"2022-11-30\", \"monthEnd\", true);\n\nResponse res = httpRequest\n .queryParams(params)\n .get(\"/master-data/v1/calendars/GBL\");\n\n"
] |
[
0
] |
[] |
[] |
[
"java",
"rest_assured"
] |
stackoverflow_0074660015_java_rest_assured.txt
|
Q:
springboot embedded tomcat and tomcat-embed-jasper
I sometimes see these following declaration in pom.xml...
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>jstl</artifactId>
</dependency>
<dependency>
<groupId>org.apache.tomcat.embed</groupId>
<artifactId>tomcat-embed-jasper</artifactId>
<scope>provided</scope>
</dependency>
....
as you can see, spring-boot-starter-web was declared as well
as tomcat-embed-jasper.
Doesn't spring-boot-starter-web already have an embedded Tomcat?
Why do some developers still declare tomcat-embed-jasper along with spring-boot-starter-web? Is there any reason?
A:
As you said, the spring-boot-starter-web includes the spring-boot-starter-tomcat. You could check it here
The spring-boot-starter-tomcat includes the tomcat-embed-core. You could check it here
But, seems like tomcat-embed-core doesn't include tomcat-embed-jasper. In fact, is tomcat-embed-jasper who includes dependency with tomcat-embed-core. Check it here
Anyway, the tomcat-embed-jasper is marked as provided, so indicates that you expect the JDK or a container to provide the dependency at runtime. This scope is only available on the compilation and test classpath, and is not transitive.
In conclusion, the spring-boot-starter-web includes the embedded Tomcat dependency but it doesn't include the embedded Jasper dependency, so that should be the reason to declare it separately.
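If you want to verify this in your own build, Maven's standard dependency-tree report makes it visible (the includes filter just narrows the output to that group):

mvn dependency:tree -Dincludes=org.apache.tomcat.embed

You should see tomcat-embed-core (and related embed jars) pulled in transitively through the starter, but no tomcat-embed-jasper.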
Also, remember that using Spring IO Platform as parent you are able to manage dependencies easily. To know more about this you could read my post
Hope it helps,
A:
Extended from jcgarcia's answer.
Even though it is provided, when you build as a war, spring-boot-maven-plugin will include two more jars:
ecj-3.12.3.jar
tomcat-embed-jasper-8.5.23.jar
A:
To those who are still facing this error in 2022 with Java 17 and Maven 3.0.0, packaging as a jar: I also ran into the same issue just now. It seems that even though we set <scope>provided</scope>, Maven is not picking up the jar. What you can do instead is take that scope off the dependency entirely and run Maven again to install dependencies. It will fix it for sure. So your pom.xml file will go:
From
<dependency>
<groupId>org.apache.tomcat.embed</groupId>
<artifactId>tomcat-embed-jasper</artifactId>
<scope>provided</scope>
</dependency>
To
<dependency>
<groupId>org.apache.tomcat.embed</groupId>
<artifactId>tomcat-embed-jasper</artifactId>
</dependency>
|
springboot embedded tomcat and tomcat-embed-jasper
|
I sometimes see these following declaration in pom.xml...
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>jstl</artifactId>
</dependency>
<dependency>
<groupId>org.apache.tomcat.embed</groupId>
<artifactId>tomcat-embed-jasper</artifactId>
<scope>provided</scope>
</dependency>
....
as you can see, spring-boot-starter-web was declared as well
as tomcat-embed-jasper.
Doesn't spring-boot-starter-web already have an embedded Tomcat?
Why do some developers still declare tomcat-embed-jasper along with spring-boot-starter-web? Is there any reason?
|
[
"As you said, the spring-boot-starter-web includes the spring-boot-starter-tomcat. You could check it here\nThe spring-boot-starter-tomcat includes the tomcat-embed-core. You could check it here\nBut, seems like tomcat-embed-core doesn't include tomcat-embed-jasper. In fact, is tomcat-embed-jasper who includes dependency with tomcat-embed-core. Check it here\nAnyway, the tomcat-embed-jasper is marked as provided, so indicates that you expect the JDK or a container to provide the dependency at runtime. This scope is only available on the compilation and test classpath, and is not transitive.\nIn conclusion, the spring-boot-starter-web includes the tomcat embedded dependency but it doesn't includes the jasper embedded dependency, so that should be the reason to declare it separately.\nAlso, remember that using Spring IO Platform as parent you are able to manage dependencies easily. To know more about this you could read my post\nHope it helps,\n",
"Extended from jcgarcia's answer.\nEven it is provided, but when you build as war, spring-boot-maven-plugin will include two more jar :\n ecj-3.12.3.jar\n tomcat-embed-jasper-8.5.23.jar\n",
"To those who are still facing this error in 2022 with Java Version 17, Maven Version 3.0.0 and Package Jar. I also ran into the same issue just now, seems like even though we set <scope>Provided</scope> Maven is not picking up the jar. What you can do instead is just take that completely off while adding the dependency and run the Maven to install dependencies again. It will fix it for sure. So your pom.xml file will go:-\nFrom\n <dependency>\n <groupId>org.apache.tomcat.embed</groupId>\n <artifactId>tomcat-embed-jasper</artifactId>\n <scope>provided</scope>\n</dependency>\n\nTo\n <dependency>\n <groupId>org.apache.tomcat.embed</groupId>\n <artifactId>tomcat-embed-jasper</artifactId>\n</dependency>\n\n"
] |
[
24,
2,
0
] |
[] |
[] |
[
"java",
"spring_boot",
"tomcat"
] |
stackoverflow_0042154614_java_spring_boot_tomcat.txt
|
Q:
How to import XOR function from Crypto.Cipher module?
cannot import name 'XOR' from 'Crypto.Cipher'
(/usr/local/lib/python3.8/dist-packages/Crypto/Cipher/__init__.py)
I just tried importing the XOR function into my code & this is the error that I got when I executed my code in Google Colab.
Can I get a solution for this?
I just need to import XOR function using Crypto.Cipher module. My code is as follows
import Crypto
from Crypto import Cipher
from Crypto.Cipher import XOR
key = "abcdefghijklij"
xor = XOR.XORCipher(key) # To encrypt
xor1 = XOR.XORCipher(key) # To decrypt
def enc(sock, message, addr):
abcd = str_xor.encrypt(message)
print (message == dec(sock, abcd, addr))
sock.sendto(abcd, addr)
return abcd
def dec(sock, message, addr):
abcd = str_xor1.decrypt(message)
return abcd
#message = "dfjsdfjsdfjdsfdfsk"4
#print message
#newm = enc(1, message, message)
#print newm
#print dec(1, newm, newm)
A:
pip install crypto
installs https://github.com/chrissimpkins/crypto which does not appear to be import-able class-library. Its examples and test scripts suggest crypto and decrypto should be executes as commands.
Readme: https://github.com/chrissimpkins/crypto
Tests/examples: https://github.com/chrissimpkins/crypto/tree/master/tests
Please specify which crypto library you installed.
Make sure your installation matches the library you are supposed to install.
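Separately, note that Crypto.Cipher.XOR only existed in the legacy PyCrypto package; PyCryptodome, which provides the Crypto namespace in most current installs, removed it. If you just need the behavior, here is a minimal pure-Python sketch of a repeating-key XOR cipher (the key and message below are only illustrative):

from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; XOR is symmetric,
    # so the same function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"abcdefghijklij"
ciphertext = xor_cipher(b"some message", key)
assert xor_cipher(ciphertext, key) == b"some message"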
|
How to import XOR function from Crypto.Cipher module?
|
cannot import name 'XOR' from 'Crypto.Cipher'
(/usr/local/lib/python3.8/dist-packages/Crypto/Cipher/__init__.py)
I just tried importing the XOR function into my code & this is the error that I got when I executed my code in Google Colab.
Can I get a solution for this?
I just need to import XOR function using Crypto.Cipher module. My code is as follows
import Crypto
from Crypto import Cipher
from Crypto.Cipher import XOR
key = "abcdefghijklij"
xor = XOR.XORCipher(key) # To encrypt
xor1 = XOR.XORCipher(key) # To decrypt
def enc(sock, message, addr):
abcd = str_xor.encrypt(message)
print (message == dec(sock, abcd, addr))
sock.sendto(abcd, addr)
return abcd
def dec(sock, message, addr):
abcd = str_xor1.decrypt(message)
return abcd
#message = "dfjsdfjsdfjdsfdfsk"4
#print message
#newm = enc(1, message, message)
#print newm
#print dec(1, newm, newm)
|
[
"pip install crypto\n\ninstalls https://github.com/chrissimpkins/crypto which does not appear to be import-able class-library. Its examples and test scripts suggest crypto and decrypto should be executes as commands.\n\nReadme: https://github.com/chrissimpkins/crypto\nTests/examples: https://github.com/chrissimpkins/crypto/tree/master/tests\n\nPlease specify which crypto-library did you install?\nMake sure your installation matches the library you are supposed to install.\n"
] |
[
0
] |
[] |
[] |
[
"cryptography",
"package",
"python"
] |
stackoverflow_0074664087_cryptography_package_python.txt
|
Q:
Converting floats from input into integers within an equation python
Program is supposed to take an integer and a factor of x and evaluate the polynomial a_nx^n+a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+ ... a_2x^2+a_1x+a_0, where each a_i is a coefficient of the corresponding power of x.
Basically, the polynomial 3x^4+2x^3+x+5 can be represented as the integer 32015 since the x^2 coefficient is 0. It is then evaluated by the x value. However, the program won't accept decimals for the first integer as input but wants all decimals to be included in the answer.
I've written most of the program.
while True:
try:
number = list(reversed(input()))
if int("".join(number)):
break
except:
print("Invalid Input")
while True:
try:
x = float(input())
break
except:
print("Invalid Input")
degree = len(number)
result = 0
for i in range(degree):
result += int(number[i]) * pow(x,i)
print(result)
However, for the inputs 341 and -2.9, the program expects
218.11999999999998
but is receiving
218.11999999999995
How can I stop the decimals in the answer from being rounded?
A:
I've researched floating-point numbers, and the docs also describe this behavior. What they recommend is using repr(), a built-in function that displays your value to 17 significant digits. You could also create an if condition that runs the repr() function only when required.
Why does this problem occur? Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125 has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001 has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.
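If you need the arithmetic to come out in exact decimal, one option is the decimal module. Here is a minimal sketch of the evaluation loop from the question rewritten with Decimal (inputs hard-coded for illustration):

from decimal import Decimal

number = list(reversed("341"))   # coefficients of x^0, x^1, ...
x = Decimal("-2.9")              # stored exactly in base 10, unlike float

result = Decimal(0)
for i in range(len(number)):
    result += int(number[i]) * x**i   # Decimal multiply/power: no binary rounding
print(result)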
|
Converting floats from input into integers within an equation python
|
Program is supposed to take an integer and a factor of x and evaluate the polynomial a_nx^n+a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+ ... a_2x^2+a_1x+a_0, where each a_i is a coefficient of the corresponding power of x.
Basically, the polynomial 3x^4+2x^3+x+5 can be represented as the integer 32015 since the x^2 coefficient is 0. It is then evaluated by the x value. However, the program won't accept decimals for the first integer as input but wants all decimals to be included in the answer.
I've written most of the program.
while True:
try:
number = list(reversed(input()))
if int("".join(number)):
break
except:
print("Invalid Input")
while True:
try:
x = float(input())
break
except:
print("Invalid Input")
degree = len(number)
result = 0
for i in range(degree):
result += int(number[i]) * pow(x,i)
print(result)
However, for the inputs 341 and -2.9, the program expects
218.11999999999998
but is receiving
218.11999999999995
How can I stop the decimals in the answer from being rounded?
|
[
"I've researched about floating-point numbers and the docs also state this as an error. However, what they recommend is using repr() which is a built-in function to convert your input into 17 significant digits. You could also create an if condition that runs the repr() function only when required.\nWhy does this problem occur? Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125 has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001 has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.\n"
] |
[
0
] |
[] |
[] |
[
"integer",
"logic",
"python"
] |
stackoverflow_0074661744_integer_logic_python.txt
|
Q:
How to close browser after form has been submitted?
I am trying to submit a form and then have the browser close. But I cannot understand why it will not work. For some reason, it will close the window, but it does not submit the form.
$('.cancel-confirm').on('click', function () {
$(this).closest('form').submit();
alert('Submitted - your window will now close');
window.close();
});
<form asp-action="CancelActivity" method="post" class="absolute ff f-36 foreground-white" style="right:5em; top:20%;">
<input type="hidden" name="ActivityID" value="@item.Activityid" />
<input type="hidden" name="status" value="@(item.IsCancelled == true ? "false" : "true")" />
<button class="btn ff f-22 background-transparant foreground-black cancel-confirm" type="submit"> @(item.IsCancelled == true ? "Genaktiver" : "Deaktiver") </button>
</form>
Tried to submit the form with jQuery and then close the window
A:
Try putting an ID on your form:
$('#form-id').submit(function () {
window.close();
});
A:
I see you are using jQuery, you can use ajax
$('.cancel-confirm').on('click', function () {
$.post( "yourprocessor.php", { name: "Your Name", sex: "Male" })
.done(function( data ) {
alert('Submitted - your window will now close');
window.close();
});
});
then process your submitted form.
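One caveat worth adding: browsers generally only honor window.close() for windows that were opened by script, and a plain form submit navigates away before any following code runs. A sketch combining both answers, posting the form via AJAX and then closing (selectors and alert text taken from the question):

$('.cancel-confirm').on('click', function (e) {
    e.preventDefault();                         // stop the normal navigation
    var $form = $(this).closest('form');
    $.post($form.attr('action'), $form.serialize())
        .done(function () {
            alert('Submitted - your window will now close');
            window.close();                     // only works if script opened this window
        });
});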
|
How to close browser after form has been submitted?
|
I am trying to submit a form and then have the browser close. But I cannot understand why it will not work. For some reason, it will close the window, but it does not submit the form.
$('.cancel-confirm').on('click', function () {
$(this).closest('form').submit();
alert('Submitted - your window will now close');
window.close();
});
<form asp-action="CancelActivity" method="post" class="absolute ff f-36 foreground-white" style="right:5em; top:20%;">
<input type="hidden" name="ActivityID" value="@item.Activityid" />
<input type="hidden" name="status" value="@(item.IsCancelled == true ? "false" : "true")" />
<button class="btn ff f-22 background-transparant foreground-black cancel-confirm" type="submit"> @(item.IsCancelled == true ? "Genaktiver" : "Deaktiver") </button>
</form>
Tried to submit the form with jQuery and then close the window
|
[
"try to put an ID in your form,\n$('#form-id').submit(function () {\n window.close();\n});\n\n",
"I see you are using jQuery, you can use ajax\n$('.cancel-confirm').on('click', function () {\n $.post( \"yourprocessor.php\", { name: \"Your Name\", sex: \"Male\" })\n .done(function( data ) {\n alert('Submitted - your window will now close');\n window.close();\n }); \n});\n\nthen process your submitted form.\n"
] |
[
1,
0
] |
[] |
[] |
[
"html",
"javascript",
"jquery"
] |
stackoverflow_0074649268_html_javascript_jquery.txt
|
Q:
Azure B2C Forgot password is not showing on Xamarin.Forms UWP app
I have a Xamarin.Forms app using MSAL.NET + ADB2C.
After I updated the Page Layout Version of the Forgot Password Page to the latest version (2.1.17 - no custom page), UWP stopped working and it only shows a blank Page when the user navigates to the Forget Password.
Using version 2.1.16 everything works as expected. Is this a bug in MSAL.NET library?
A:
Sounds like an issue with the layout itself. UWP WebView may not work if there's an issue in the content. Since System Browser is not supported in UWP, the workaround here is to go back to the working layout. I'm reaching out to the B2C team about this and will come back later with an update.
|
Azure B2C Forgot password is not showing on Xamarin.Forms UWP app
|
I have a Xamarin.Forms app using MSAL.NET + ADB2C.
After I updated the Page Layout Version of the Forgot Password Page to the latest version (2.1.17 - no custom page), UWP stopped working and it only shows a blank Page when the user navigates to the Forget Password.
Using version 2.1.16 everything works as expected. Is this a bug in MSAL.NET library?
|
[
"Sounds like an issue with the layout itself. UWP WebView may not work if there's an issue in the content. Since System Browser is not supported in UWP the workaround here is to go back to the working layout. I'm reaching the B2C team about this and will come back later with an update.\n"
] |
[
0
] |
[] |
[] |
[
"azure_active_directory",
"msal",
"xamarin.forms"
] |
stackoverflow_0074482754_azure_active_directory_msal_xamarin.forms.txt
|
Q:
Is it UB to modify a const object's member via its non-const reference?
class T {
public:
int v;
int &vRef;
public:
T() : v(0), vRef(v) {}
};
const T t; // note, it's a const object
t.vRef = 2;
printf("v: %d\n", t.v);
The code presented above compiles OK, and the const object's internal value did change.
Question. Is this Undefined Behavior or not?
A:
Yes. If the object is declared as const, then modifying it (through any means, be that a non-const reference like in your example, via const_cast or something else) is UB.
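For reference, here is the snippet as a complete translation unit (the statements from the question moved into main so it compiles). It builds cleanly because vRef has type int&, yet the write is still undefined behavior: the standard ([dcl.type.cv]) says any attempt to modify a const object during its lifetime is UB, regardless of the path taken to it:

#include <cstdio>

class T {
public:
    int v;
    int &vRef;
public:
    T() : v(0), vRef(v) {}
};

int main() {
    const T t;        // t and its member v are const objects
    t.vRef = 2;       // compiles, but undefined behavior: modifies a const object
    std::printf("v: %d\n", t.v);  // may print 2, 0, or anything else
}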
|
Is it UB to modify a const object's member via its non-const reference?
|
class T {
public:
int v;
int &vRef;
public:
T() : v(0), vRef(v) {}
};
const T t; // note, it's a const object
t.vRef = 2;
printf("v: %d\n", t.v);
The code presented above compiles OK, and the const object's internal value did change.
Question. Is this Undefined Behavior or not?
|
[
"Yes. If the object is declared as const, then modifying it (through any means, be that a non-const reference like in your example, via const_cast or something else) is UB.\n"
] |
[
3
] |
[] |
[] |
[
"c++"
] |
stackoverflow_0074664135_c++.txt
|
Q:
Get the time zone GMT offset in C
I'm using the standard mktime function to turn a struct tm into an epoch time value. The tm fields are populated locally, and I need to get the epoch time as GMT. tm has a gmtoff field to allow you to set the local GMT offset in seconds for just this purpose.
But I can't figure out how to get that information. Surely there must be a standard function somewhere that will return the offset? How does localtime do it?
A:
Just do the following:
#define _GNU_SOURCE /* for tm_gmtoff and tm_zone */
#include <stdio.h>
#include <time.h>
/* Checking errors returned by system calls was omitted for the sake of readability. */
int main(void)
{
time_t t = time(NULL);
struct tm lt = {0};
localtime_r(&t, &lt);
printf("Offset to GMT is %lds.\n", lt.tm_gmtoff);
printf("The time zone is '%s'.\n", lt.tm_zone);
return 0;
}
Note: The seconds since epoch returned by time() are measured as if in Greenwich.
A:
How does localtime do it?
According to localtime man page
The localtime() function acts as if it called tzset(3) and sets the
external variables tzname with information about the current timezone,
timezone with the difference between Coordinated Universal
Time (UTC) and local standard time in seconds
So you could either call localtime() and you will have the difference in timezone or call tzset():
extern long timezone;
....
tzset();
printf("%ld\n", timezone);
Note: if you choose to go with localtime_r() note that it is not required to set those variables you will need to call tzset() first to set timezone:
According to POSIX.1-2004, localtime() is required to behave as though
tzset() was called, while localtime_r() does not have this
requirement. For portable code tzset() should be called before
localtime_r()
A:
The universal version of obtaining local time offset function is here.
I borrowed pieces of code from this answer in stackoverflow.
int time_offset()
{
time_t gmt, rawtime = time(NULL);
struct tm *ptm;
#if !defined(WIN32)
struct tm gbuf;
ptm = gmtime_r(&rawtime, &gbuf);
#else
ptm = gmtime(&rawtime);
#endif
    // Request that mktime() looks up DST in the timezone database
ptm->tm_isdst = -1;
gmt = mktime(ptm);
return (int)difftime(rawtime, gmt);
}
A:
I guess I should have done a bit more searching before asking. It turns out there's a little known timegm function which does the opposite of gmtime. It's supported on GNU and BSD which is good enough for my purposes. A more portable solution is to temporarily set the value of the TZ environment variable to "UTC" and then use mktime, then set TZ back.
But timegm works for me.
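Here is a sketch of that TZ trick as a portable fallback (POSIX setenv/unsetenv; not thread-safe, and the old value must be copied because setenv may invalidate the pointer getenv returned):

#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Convert a broken-down UTC time to time_t, as if timegm() existed. */
time_t portable_timegm(struct tm *tm)
{
    const char *tz = getenv("TZ");
    char *saved = tz ? strdup(tz) : NULL;
    time_t result;

    setenv("TZ", "UTC", 1);
    tzset();
    result = mktime(tm);

    if (saved) {
        setenv("TZ", saved, 1);
        free(saved);
    } else {
        unsetenv("TZ");
    }
    tzset();
    return result;
}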
A:
This is the portable solution that should work on all standard C (and C++) platforms:
const std::time_t epoch_plus_11h = 60 * 60 * 11;
const int local_time = localtime(&epoch_plus_11h)->tm_hour;
const int gm_time = gmtime(&epoch_plus_11h)->tm_hour;
const int tz_diff = local_time - gm_time;
Add std:: namespace when using C++. The result is in hours in the range [-11, 12];
Explanation:
We just convert the date-time "1970-01-01 11:00:00" to tm structure twice - with the local timezone and with the GMT. The result is the difference between hours part.
The "11:00::00" has been chosen because this is the only time point (considering GMT) when we have the same date in the whole globe. Because of that fact, we don't have to consider the additional magic with date changing in the calculations.
WARNING
Previous version of my answer worked only on linux:
// DO NOT DO THAT!!
int timezonez_diff = localtime(&epoch_plus_11h)->tm_hour -
gmtime(&epoch_plus_11h)->tm_hour;
This may not work because the storage for the result tm object returned as a pointer from localtime and gmtime may be shared (and it is on windows/msvc). That's why I've introduced temporaries for the calculation.
A:
I believe the following is true in linux at least: timezone info comes from /usr/share/zoneinfo/. localtime reads /etc/localtime which should be a copy of the appropriate file from zoneinfo. You can see whats inside by doing zdump -v on the timezone file (zdump may be in sbin but you don't need elevated permissions to read timezone files with it). Here is a snipped of one:
/usr/share/zoneinfo/EST5EDT Sun Nov 6 05:59:59 2033 UTC = Sun Nov 6 01:59:59 2033 EDT isdst=1 gmtoff=-14400
/usr/share/zoneinfo/EST5EDT Sun Nov 6 06:00:00 2033 UTC = Sun Nov 6 01:00:00 2033 EST isdst=0 gmtoff=-18000
/usr/share/zoneinfo/EST5EDT Sun Mar 12 06:59:59 2034 UTC = Sun Mar 12 01:59:59 2034 EST isdst=0 gmtoff=-18000
/usr/share/zoneinfo/EST5EDT Sun Mar 12 07:00:00 2034 UTC = Sun Mar 12 03:00:00 2034 EDT isdst=1 gmtoff=-14400
/usr/share/zoneinfo/EST5EDT Sun Nov 5 05:59:59 2034 UTC = Sun Nov 5 01:59:59 2034 EDT
I guess you could parse this yourself if you want. I'm not sure if there is a stdlib function that just returns the gmtoff (there may well be but I don't know...)
edit: man tzfile describes the format of the zoneinfo file. You should be able to simply mmap into a structure of the appropriate type. It appears to be what zdump is doing based on an strace of it.
A:
Here's a two-liner inspired by @Hill's and @friedo's answers:
#include <time.h>
...
time_t rawtime = time(0);
timeofs = timegm(localtime(&rawtime)) - rawtime;
Returns offset from UTC in seconds.
Doesn't need _GNU_SOURCE defined, but note that timegm is not a POSIX standard and may not be available outside of GNU and BSD.
A:
Ended up with this. Sure, tm_sec is redundant; it's just there for the sake of consistency.
int timezone_offset() {
time_t zero = 0;
const tm* lt = localtime( &zero );
int unaligned = lt->tm_sec + ( lt->tm_min + ( lt->tm_hour * 60 ) ) * 60;
return lt->tm_mon ? unaligned - 24*60*60 : unaligned;
}
A:
Here is my way:
time_t z = 0;
struct tm * pdt = gmtime(&z);
time_t tzlag = mktime(pdt);
Alternative with automatic, local storage of struct tm:
struct tm dt;
memset(&dt, 0, sizeof(struct tm));
dt.tm_mday=1; dt.tm_year=70;
time_t tzlag = mktime(&dt);
tzlag, in seconds, will be the negative of the UTC offset; lag of your timezone Standard Time compared to UTC:
LocalST + tzlag = UTC
If you want to also account for "Daylight savings", subtract tm_isdst from tzlag, where tm_isdst is for a particular local time struct tm, after applying mktime to it (or after obtaining it with localtime ).
Why it works:
The set struct tm is for "epoch" moment, Jan 1 1970, which corresponds to a time_t of 0.
Calling mktime() on that date converts it to time_t as if it were UTC (thus getting 0), then subtracts the UTC offset from it in order to produce the output time_t. Thus it produces negative of UTC_offset.
A:
Here is one threadsafe way taken from my answer to this post:
What is the correct way to get beginning of the day in UTC / GMT?
::time_t GetTimeZoneOffset ()
{ // This method is to be called only once per execution
    static const time_t seconds = 0; // any arbitrary value works!
::tm tmGMT = {}, tmLocal = {};
::gmtime_r(&seconds, &tmGMT); // ::gmtime_s() for WINDOWS
::localtime_r(&seconds, &tmLocal); // ::localtime_s() for WINDOWS
return ::mktime(&tmGMT) - ::mktime(&tmLocal);
};
|
Get the time zone GMT offset in C
|
I'm using the standard mktime function to turn a struct tm into an epoch time value. The tm fields are populated locally, and I need to get the epoch time as GMT. tm has a gmtoff field to allow you to set the local GMT offset in seconds for just this purpose.
But I can't figure out how to get that information. Surely there must be a standard function somewhere that will return the offset? How does localtime do it?
|
[
"Just do the following:\n#define _GNU_SOURCE /* for tm_gmtoff and tm_zone */\n\n#include <stdio.h>\n#include <time.h>\n\n/* Checking errors returned by system calls was omitted for the sake of readability. */\nint main(void)\n{\n time_t t = time(NULL);\n struct tm lt = {0};\n\n localtime_r(&t, <);\n\n printf(\"Offset to GMT is %lds.\\n\", lt.tm_gmtoff);\n printf(\"The time zone is '%s'.\\n\", lt.tm_zone);\n\n return 0;\n}\n\nNote: The seconds since epoch returned by time() are measured as if in Greenwich.\n",
"\nHow does localtime do it?\n\nAccording to localtime man page\n\nThe localtime() function acts as if it called tzset(3) and sets the\n external variables tzname with information about the current timezone,\n timezone with the difference between Coordinated Universal \n Time (UTC) and local standard time in seconds\n\nSo you could either call localtime() and you will have the difference in timezone or call tzset():\nextern long timezone;\n....\ntzset();\nprintf(\"%ld\\n\", timezone);\n\nNote: if you choose to go with localtime_r() note that it is not required to set those variables you will need to call tzset() first to set timezone:\n\nAccording to POSIX.1-2004, localtime() is required to behave as though\n tzset() was called, while localtime_r() does not have this\n requirement. For portable code tzset() should be called before\n localtime_r()\n\n",
"The universal version of obtaining local time offset function is here.\nI borrowed pieces of code from this answer in stackoverflow.\nint time_offset()\n{\n time_t gmt, rawtime = time(NULL);\n struct tm *ptm;\n\n#if !defined(WIN32)\n struct tm gbuf;\n ptm = gmtime_r(&rawtime, &gbuf);\n#else\n ptm = gmtime(&rawtime);\n#endif\n // Request that mktime() looksup dst in timezone database\n ptm->tm_isdst = -1;\n gmt = mktime(ptm);\n\n return (int)difftime(rawtime, gmt);\n}\n\n",
"I guess I should have done a bit more searching before asking. It turns out there's a little known timegm function which does the opposite of gmtime. It's supported on GNU and BSD which is good enough for my purposes. A more portable solution is to temporarily set the value of the TZ environment variable to \"UTC\" and then use mktime, then set TZ back.\nBut timegm works for me.\n",
"This is the portable solution that should work on all standard C (and C++) platforms:\nconst std::time_t epoch_plus_11h = 60 * 60 * 11;\nconst int local_time = localtime(&epoch_plus_11h)->tm_hour;\nconst int gm_time = gmtime(&epoch_plus_11h)->tm_hour;\nconst int tz_diff = local_time - gm_time;\n\nAdd std:: namespace when using C++. The result is in hours in the range [-11, 12];\nExplanation:\nWe just convert the date-time \"1970-01-01 11:00:00\" to tm structure twice - with the local timezone and with the GMT. The result is the difference between hours part.\nThe \"11:00::00\" has been chosen because this is the only time point (considering GMT) when we have the same date in the whole globe. Because of that fact, we don't have to consider the additional magic with date changing in the calculations.\nWARNING\nPrevious version of my answer worked only on linux:\n// DO NOT DO THAT!!\nint timezonez_diff = localtime(&epoch_plus_11h)->tm_hour -\n gmtime(&epoch_plus_11h)->tm_hour;\n\nThis may not work because the storage for result tm object returned as a pointer from localtime and gmtime may be shared (and it is on windows/msvc). That's whe I've introduced temporaries for calculation.\n",
"I believe the following is true in linux at least: timezone info comes from /usr/share/zoneinfo/. localtime reads /etc/localtime which should be a copy of the appropriate file from zoneinfo. You can see whats inside by doing zdump -v on the timezone file (zdump may be in sbin but you don't need elevated permissions to read timezone files with it). Here is a snipped of one:\n\n/usr/share/zoneinfo/EST5EDT Sun Nov 6 05:59:59 2033 UTC = Sun Nov 6 01:59:59 2033 EDT isdst=1 gmtoff=-14400\n/usr/share/zoneinfo/EST5EDT Sun Nov 6 06:00:00 2033 UTC = Sun Nov 6 01:00:00 2033 EST isdst=0 gmtoff=-18000\n/usr/share/zoneinfo/EST5EDT Sun Mar 12 06:59:59 2034 UTC = Sun Mar 12 01:59:59 2034 EST isdst=0 gmtoff=-18000\n/usr/share/zoneinfo/EST5EDT Sun Mar 12 07:00:00 2034 UTC = Sun Mar 12 03:00:00 2034 EDT isdst=1 gmtoff=-14400\n/usr/share/zoneinfo/EST5EDT Sun Nov 5 05:59:59 2034 UTC = Sun Nov 5 01:59:59 2034 EDT \nI guess you could parse this yourself if you want. I'm not sure if there is a stdlib function that just returns the gmtoff (there may well be but I don't know...)\n edit: man tzfile describes the format of the zoneinfo file. You should be able to simply mmap into a structure of the appropriate type. It appears to be what zdump is doing based on an strace of it.\n",
"Here's a two-liner inspired by @Hill's and @friedo's answers:\n#include <time.h>\n...\ntime_t rawtime = time(0);\ntimeofs = timegm(localtime(&rawtime)) - rawtime;\n\nReturns offset from UTC in seconds.\nDoesn't need _GNU_SOURCE defined, but note that timegm is not a POSIX standard and may not be available outside of GNU and BSD.\n",
"Ended up with this. Sure tm_secs is redundant, just for a sake of consistency.\nint timezone_offset() {\n time_t zero = 0;\n const tm* lt = localtime( &zero );\n int unaligned = lt->tm_sec + ( lt->tm_min + ( lt->tm_hour * 60 ) ) * 60;\n return lt->tm_mon ? unaligned - 24*60*60 : unaligned;\n}\n\n",
"Here is my way:\ntime_t z = 0;\nstruct tm * pdt = gmtime(&z);\ntime_t tzlag = mktime(pdt);\n\nAlternative with automatic, local storage of struct tm:\nstruct tm dt;\nmemset(&dt, 0, sizeof(struct tm));\ndt.tm_mday=1; dt.tm_year=70;\ntime_t tzlag = mktime(&dt);\n\ntzlag, in seconds, will be the negative of the UTC offset; lag of your timezone Standard Time compared to UTC:\nLocalST + tzlag = UTC\nIf you want to also account for \"Daylight savings\", subtract tm_isdst from tzlag, where tm_isdst is for a particular local time struct tm, after applying mktime to it (or after obtaining it with localtime ).\nWhy it works:\nThe set struct tm is for \"epoch\" moment, Jan 1 1970, which corresponds to a time_t of 0.\nCalling mktime() on that date converts it to time_t as if it were UTC (thus getting 0), then subtracts the UTC offset from it in order to produce the output time_t. Thus it produces negative of UTC_offset.\n",
"Here is one threadsafe way taken from my answer to this post:\nWhat is the correct way to get beginning of the day in UTC / GMT?\n::time_t GetTimeZoneOffset ()\n{ // This method is to be called only once per execution\n static const seconds = 0; // any arbitrary value works!\n ::tm tmGMT = {}, tmLocal = {}; \n ::gmtime_r(&seconds, &tmGMT); // ::gmtime_s() for WINDOWS\n ::localtime_r(&seconds, &tmLocal); // ::localtime_s() for WINDOWS\n return ::mktime(&tmGMT) - ::mktime(&tmLocal);\n};\n\n"
] |
[
24,
8,
7,
6,
3,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"c",
"datetime",
"libc",
"timezone"
] |
stackoverflow_0013804095_c_datetime_libc_timezone.txt
|
Q:
How to Override MUI CSS in a React Custom Component which uses MUI Component in it?
Below is the custom component which I created in React and use in different components:
function SearchBox({ handle, placeholder, inputType, ...props}) {
return (
<Box
sx={{
display: 'flex',
alignItems: 'flex-end',
margin: 1,
marginTop: 0,
maxHeight: '30px'
}}
>
<TextField
sx={{
fontSize: '12px',
width: '100%'
}}
variant={inputType || 'standard'}
InputProps={{
startAdornment: (
<InputAdornment position='start'>
<SearchIcon
sx={{
color: 'action.active',
mb: 0.2,
display: 'flex',
alignItems: 'flex-end'
}}
/>
</InputAdornment>
),
placeholder: placeholder || 'Search'
}}
onChange={handle}
/>
</Box>
);
}
I am using this component in some other component: <SearchBox handle={Keyword}/>
So how do I override the CSS for the TextField and Box of the SearchBox component? I don't want to touch SearchBox; I need to override the CSS properties from the place where I am using this component.
When I did Inspect in the browser I saw something like this: <div class="MuiBox-root css-1uqe0j">...</div>
What is css-1uqe0j?
Can anyone help me out with this?
A:
If you don't want to modify the SearchBox component at all, you have a few options, the most reasonable of which (IMO) is to just wrap the SearchBox component in a styled component that then targets elements contained within the SearchBox (it's child), for example:
const SearchBoxStyleOverrides = styled("div")({
".MuiBox-root": {
padding: "2rem",
backgroundColor: "green"
},
".MuiSvgIcon-root": {
color: "red"
},
".MuiInput-input": {
backgroundColor: "yellow"
}
});
export default function YourApp() {
return (
<SearchBoxStyleOverrides>
<SearchBox />
</SearchBoxStyleOverrides>
);
}
Which produces: (ugliness is for effect)
Working CodeSandbox: https://codesandbox.io/s/peaceful-dubinsky-wjthtt?file=/demo.js
And to answer your secondary question ("what is css-1uqe0j?"), that is the generated CSS class name for the component -- you don't want to target elements using those because they will frequently change.
|
How to Override MUI CSS in a React Custom Component which uses MUI Component in it?
|
Below is the custom component which I created in React and use in different components:
function SearchBox({ handle, placeholder, inputType, ...props}) {
return (
<Box
sx={{
display: 'flex',
alignItems: 'flex-end',
margin: 1,
marginTop: 0,
maxHeight: '30px'
}}
>
<TextField
sx={{
fontSize: '12px',
width: '100%'
}}
variant={inputType || 'standard'}
InputProps={{
startAdornment: (
<InputAdornment position='start'>
<SearchIcon
sx={{
color: 'action.active',
mb: 0.2,
display: 'flex',
alignItems: 'flex-end'
}}
/>
</InputAdornment>
),
placeholder: placeholder || 'Search'
}}
onChange={handle}
/>
</Box>
);
}
I am using this component in some other component: <SearchBox handle={Keyword}/>
So how do I override the CSS for the TextField and Box of the SearchBox component? I don't want to touch SearchBox; I need to override the CSS properties from the place where I am using this component.
When I did Inspect in the browser I saw something like this: <div class="MuiBox-root css-1uqe0j">...</div>
What is css-1uqe0j?
Can anyone help me out with this?
|
[
"If you don't want to modify the SearchBox component at all, you have a few options, the most reasonable of which (IMO) is to just wrap the SearchBox component in a styled component that then targets elements contained within the SearchBox (it's child), for example:\nconst SearchBoxStyleOverrides = styled(\"div\")({\n \".MuiBox-root\": {\n padding: \"2rem\",\n backgroundColor: \"green\"\n },\n \".MuiSvgIcon-root\": {\n color: \"red\"\n },\n \".MuiInput-input\": {\n backgroundColor: \"yellow\"\n }\n});\n\nexport default function YourApp() {\n return (\n <SearchBoxStyleOverrides>\n <SearchBox />\n </SearchBoxStyleOverrides>\n );\n}\n\nWhich produces: (ugliness is for effect)\n\nWorking CodeSandbox: https://codesandbox.io/s/peaceful-dubinsky-wjthtt?file=/demo.js\nAnd to answer your secondary question (\"what is css-luqe0j ?\"), that is the generated css class name for the component -- you don't want to target an element using those because they will frequently change.\n"
] |
[
1
] |
[] |
[] |
[
"css",
"html",
"material_ui",
"reactjs"
] |
stackoverflow_0074659613_css_html_material_ui_reactjs.txt
|
Q:
node myapp.js > myapp_error.log 2>&1 equivalent when is starting as service
A simple testing app called myapp:
const fs = require("fs");
const json_encode = require('json_encode');
//emulate a real error
var wrong;
var test = json_encode(wrong);
//TypeError: Cannot read property 'replace' of undefined at json_encode...
If I try to save myapp_error.log for myapp from command line I use:
node myapp2.js > myapp_error.log 2>&1
//myapp_error.log is created and I can read the error logged
I need to do the same but if I starting myapp as service in /etc/systemd/system like
systemctl start myapp
The service (tested and running ok):
[Unit]
Description=myapp
[Service]
ExecStart=/path/to/myapp/myapp.js
Restart=my_username
User=nobody
Group=www-data
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/path/to/myapp
[Install]
WantedBy=multi-user.target
I have tried:
ExecStart=/path/to/myapp/myapp.js > myapp_error.log 2>&1 (this does nothing)
also tried:
systemctl start myapp > myapp_error.log 2>&1 (this writes an empty file myapp_error.log in /etc/systemd/system)
A:
The [Service] block in your unit file has options for stdout/stderr logs:
StandardOutput=append:/path/to/stdout.log
StandardError=append:/path/to/stderr.log
Edit (from comments discussion):
Running node itself was also missing:
ExecStart=/usr/bin/node /path/to/myapp/myapp.js
Systemd doesn't know what to do with a JS file, so we need to call the correct interpreter first.
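Putting both fixes together, here is a sketch of the full unit (paths and log locations are placeholders; StandardOutput=append: needs systemd 240 or newer, and note that Restart=my_username from the question is not a valid value — Restart expects something like always or on-failure):

[Unit]
Description=myapp

[Service]
ExecStart=/usr/bin/node /path/to/myapp/myapp.js
Restart=on-failure
User=nobody
Group=www-data
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/path/to/myapp
StandardOutput=append:/var/log/myapp/myapp.log
StandardError=append:/var/log/myapp/myapp_error.log

[Install]
WantedBy=multi-user.target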
|
node myapp.js > myapp_error.log 2>&1 equivalent when is starting as service
|
A simple testing app called myapp:
const fs = require("fs");
const json_encode = require('json_encode');
//emulate a real error
var wrong;
var test = json_encode(wrong);
//TypeError: Cannot read property 'replace' of undefined at json_encode...
If I try to save myapp_error.log for myapp from command line I use:
node myapp2.js > myapp_error.log 2>&1
//myapp_error.log is created and I can read the error logged
I need to do the same but if I starting myapp as service in /etc/systemd/system like
systemctl start myapp
The service (tested and running ok):
[Unit]
Description=myapp
[Service]
ExecStart=/path/to/myapp/myapp.js
Restart=my_username
User=nobody
Group=www-data
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/path/to/myapp
[Install]
WantedBy=multi-user.target
I have tried:
ExecStart=/path/to/myapp/myapp.js > myapp_error.log 2>&1 (this does nothing)
also tried:
systemctl start myapp > myapp_error.log 2>&1 (this writes an empty file myapp_error.log in /etc/systemd/system)
|
[
"The [Service] block in your unit file has options for stdout/stderr logs:\nStandardOutput=append:/path/to/stdout.log\nStandardError=append:/path/to/stderr.log\n\nEdit (from comments discussion):\nRunning node itself was also missing:\nExecStart=/usr/bin/node /path/to/myapp/myapp.js\n\nSystemd doesn't know what to do with a JS file, so we need to call the correct interpreter first.\n"
] |
[
2
] |
[] |
[] |
[
"error_handling",
"linux",
"node.js",
"systemctl",
"ubuntu"
] |
stackoverflow_0074664117_error_handling_linux_node.js_systemctl_ubuntu.txt
|
Q:
How my Jquery Function work on whole table?
Asslam o Alaikum....
I have a problem in my code: my jQuery code only works on the first row of the table, not on the others. Please check my code.
<tr>
<td><?php echo $row['serial no.'] ?></td>
<td><?php echo $row['pname'] ?></td>
<td><input type="text" class="form-control" id="prate" name = "uprice" value="<?php echo $prate = $row['uprice'];?>"></td>
<td> <input type="number" class="form-control" id="pqty" name = "quantity" value ="<?php $quant = ""; echo $quant; ?>"></td>
<td> <input type="text" class="form-control" id="pTotal" name = "price" value = "<?php $tprice = ""; echo $tprice; ?>" ></td>
</tr>
this is my html code....
<script>
$("#prate").keyup(function(){
// console.log('presssed');
var prate = document.getElementById('prate').value;
var pqty = document.getElementById('pqty').value;
var ptotal = parseInt(prate) * parseInt(pqty);
document.getElementById('pTotal').value = ptotal;
});
$("#pqty").keyup(function(){
// console.log('presssed');
var prate = document.getElementById('prate').value;
var pqty = document.getElementById('pqty').value;
var ptotal = parseInt(prate) * parseInt(pqty);
document.getElementById('pTotal').value = ptotal;
});
</script>
and this is jquery...plz help me out
A:
As you know, an id must be unique on the whole page. So for the items on each row you would either have to make unique ids (which makes binding events complicated), or go with classes, or bind the events as in the script below.
<script>
$('input[name="uprice"]').keyup(function(){
var prate = $(this).val();
var pqty = $(this).parent().next().find('input[name="quantity"]').val();
var ptotal = parseInt(prate) * parseInt(pqty);
$(this).parent().next().next().find('input[name="price"]').val(ptotal);
});
$('input[name="quantity"]').keyup(function(){
var prate = $(this).parent().prev().find('input[name="uprice"]').val();;
var pqty = $(this).val();
var ptotal = parseInt(prate) * parseInt(pqty);
$(this).parent().next().find('input[name="price"]').val(ptotal);
});
</script>
Not tested, but it should help you.
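An alternative sketch that avoids the fragile parent()/next() chains and also survives rows being added later: one delegated handler that walks up to the row and recomputes (input names taken from the question's markup):

$('table').on('keyup', 'input[name="uprice"], input[name="quantity"]', function () {
    var $row = $(this).closest('tr');
    var rate = parseFloat($row.find('input[name="uprice"]').val()) || 0;
    var qty  = parseFloat($row.find('input[name="quantity"]').val()) || 0;
    $row.find('input[name="price"]').val(rate * qty);
});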
A:
Use a class selector (not an id selector) and the each method to implement your desired commands on all rows.
A:
IDs must be unique; instead, put a class as a reference on each table data cell and try the each method:
$('.pqty').each(function(i, element){
$(element).keyup(function(evt) {
/* anything */
})
})
A:
What you have to do is: (1) gather all those elements by class into a variable, (2) run a forEach loop over the elements, and (3) sum up their values while looping.
// Step 1: Fetch Classes
var FetchedClasses = document.getElementsByClassName("myclass");
var Sum = 0;
// Step 2: Iterate them
Array.prototype.forEach.call(FetchedClasses, function(element) {
// Step 3: Sum up their values
  Sum += parseFloat(element.value) || 0; // coerce the string value to a number before adding
});
// Now Show it anywhere you like.
|
How my Jquery Function work on whole table?
|
Asslam o Alaikum....
I have a problem in my code: my jQuery code only works on the first row of the table, not on the others. Please check my code.
<tr>
<td><?php echo $row['serial no.'] ?></td>
<td><?php echo $row['pname'] ?></td>
<td><input type="text" class="form-control" id="prate" name = "uprice" value="<?php echo $prate = $row['uprice'];?>"></td>
<td> <input type="number" class="form-control" id="pqty" name = "quantity" value ="<?php $quant = ""; echo $quant; ?>"></td>
<td> <input type="text" class="form-control" id="pTotal" name = "price" value = "<?php $tprice = ""; echo $tprice; ?>" ></td>
</tr>
this is my html code....
<script>
$("#prate").keyup(function(){
// console.log('presssed');
var prate = document.getElementById('prate').value;
var pqty = document.getElementById('pqty').value;
var ptotal = parseInt(prate) * parseInt(pqty);
document.getElementById('pTotal').value = ptotal;
});
$("#pqty").keyup(function(){
// console.log('presssed');
var prate = document.getElementById('prate').value;
var pqty = document.getElementById('pqty').value;
var ptotal = parseInt(prate) * parseInt(pqty);
document.getElementById('pTotal').value = ptotal;
});
</script>
and this is jquery...plz help me out
|
[
"As you know, id must be unique on whole page. So on each row and items, either you have make unique id (Which will be complicated to bind event), so you can go with class or as below script for binding the events.\n<script>\n $('input[name=\"uprice\"]').keyup(function(){\n \n var prate = $(this).val();\n var pqty = $(this).parent().next().find('input[name=\"quantity\"]').val();\n var ptotal = parseInt(prate) * parseInt(pqty);\n\n $(this).parent().next().next().find('input[name=\"price\"]').val(ptotal);\n });\n\n $('input[name=\"quantity\"]').keyup(function(){\n var prate = $(this).parent().prev().find('input[name=\"uprice\"]').val();;\n var pqty = $(this).val();\n var ptotal = parseInt(prate) * parseInt(pqty);\n\n $(this).parent().next().find('input[name=\"price\"]').val(ptotal);\n });\n</script>\n\nNot have tested but should help you.\n",
"Use class selector (not id selector) and each method to emplement your desired commands on all rows.\n",
"ID must be unique, instead put a class as a reference for each table data and try each method:\n $('.pqty').each(function(i, element){\n $(element).keyup(function(evt) { \n /* anything */\n })\n })\n\n",
"What you have to do is (1) Gather all those classes in a Variable. (2) Run Foreach loop on each element. (3) While looping each element, you must sum all of these elements' values.\n// Step 1: Fetch Classes\nvar FetchedClasses = document.getElementsByClassName(\"myclass\"); \nvar Sum = 0;\n// Step 2: Iterate them\nArray.prototype.forEach.call(FetchedClasses, function(element) {\n // Step 3: Sum up their values\n Sum = Sum + element.value; //\n});\n// Now Show it anywhere you like.\n\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"jquery",
"jquery_ui",
"php"
] |
stackoverflow_0074650578_jquery_jquery_ui_php.txt
|
Q:
azure entity framework: new table is not created, internal server error when trying to insert data
I am stuck here for days now:
all I want is to create a table called UserItems and upload data into it. This is my middleware:
[Route("tables/useritem")]
public class tblUserController : TableController<UserItem>
{
public tblUserController(AppDbContext context)
: base(new EntityTableRepository<UserItem>(context))
{
}
}
}
public class UserItem : EntityTableData
{
[Required, MinLength(1)]
public string Email { get; set; } = "";
public string Telephone { get; set; } = "";
public string Password { get; set; } = "";
}
app context:
{
public class AppDbContext : DbContext
{
public AppDbContext(DbContextOptions<AppDbContext> options) : base(options)
{
}
/// <summary>
/// The dataset for the UserItems.
/// </summary>
public DbSet<UserItem> UserItems => Set<UserItem>();
/// <summary>
/// Do any database initialization required.
/// </summary>
/// <returns>A task that completes when the database is initialized</returns>
public async Task InitializeDatabaseAsync()
{
await this.Database.EnsureCreatedAsync().ConfigureAwait(false);
}
}
program.cs
var builder = WebApplication.CreateBuilder(args);
var connectionString = builder.Configuration.GetConnectionString("C5"); // set the connection string name that you set up (caused issues before)
if (connectionString == null)
{
throw new ApplicationException("DefaultConnection is not set");
}
builder.Services.AddDbContext<AppDbContext>(options => options.UseSqlServer(connectionString));
builder.Services.AddDatasyncControllers();
var app = builder.Build();
// Initialize the database
using (var scope = app.Services.CreateScope())
{
var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
await context.InitializeDatabaseAsync().ConfigureAwait(false);
}
// Configure and run the web service.
app.MapControllers();
app.Run();
As far as my understanding goes, this should create the table, and once I insert data into it, it should show up in SSMS. But it doesn't.
When I open the link to the db I get this:
So it seems to have been created and running.
Funniest part:
If I create the table myself via SSMS with a CREATE TABLE command, named UserItems and with all the columns of the entity, I CAN create data from my app but not READ it. So the auto-create fails, and therefore the connection must have an issue.
Why aren't my tables created automatically?
A:
The issue with your code is that you are not specifying the correct type for the connectionString variable. In the Program.cs file, you are calling the GetConnectionString method with the key "C5", but you are trying to assign the result to a string variable. This is causing the connectionString variable to be null, which means that the Database.EnsureCreatedAsync method is not being called.
To fix this issue, you need to specify the correct type for the connectionString variable. This type should be IConfiguration, which is the type returned by the Configuration property of the WebApplicationBuilder instance. Here is how you can fix this issue:
var builder = WebApplication.CreateBuilder(args);
var connectionString = builder.Configuration.GetConnectionString("C5"); // set the connection string name that you set up (caused issues before)
if (connectionString == null)
{
throw new ApplicationException("DefaultConnection is not set");
}
builder.Services.AddDbContext<AppDbContext>(options => options.UseSqlServer(connectionString));
builder.Services.AddDatasyncControllers();
var app = builder.Build();
// Initialize the database
using (var scope = app.Services.CreateScope())
{
var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
await context.InitializeDatabaseAsync().ConfigureAwait(false);
}
// Configure and run the web service.
app.MapControllers();
app.Run();
With this change, the connectionString variable will be of the correct type and the Database.EnsureCreatedAsync method will be called, which should create the UserItems table in the database.
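One more thing worth knowing: EnsureCreatedAsync takes no action if the database already contains any tables, which is common on a pre-provisioned Azure SQL database. A minimal diagnostic sketch (assuming EF Core 5+ for GetConnectionString()) to see what the initializer is actually doing:
// Diagnostic block: log which database EF targets and whether EnsureCreated did anything.
using (var scope = app.Services.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();

    Console.WriteLine($"Target: {context.Database.GetConnectionString()}");
    Console.WriteLine($"Can connect: {await context.Database.CanConnectAsync()}");

    // Returns true only if the schema was created by this call.
    bool created = await context.Database.EnsureCreatedAsync();
    Console.WriteLine(created
        ? "Database/tables were just created."
        : "Database already existed; EnsureCreated made no changes.");
}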
|
azure entity framework: new table is not created, internal server error when trying to insert data
|
I am stuck here for days now:
All I want is to create a table called UserItems and upload data into it. This is my middleware:
[Route("tables/useritem")]
public class tblUserController : TableController<UserItem>
{
public tblUserController(AppDbContext context)
: base(new EntityTableRepository<UserItem>(context))
{
}
}
}
public class UserItem : EntityTableData
{
[Required, MinLength(1)]
public string Email { get; set; } = "";
public string Telephone { get; set; } = "";
public string Password { get; set; } = "";
}
app context:
{
public class AppDbContext : DbContext
{
public AppDbContext(DbContextOptions<AppDbContext> options) : base(options)
{
}
/// <summary>
/// The dataset for the UserItems.
/// </summary>
public DbSet<UserItem> UserItems => Set<UserItem>();
/// <summary>
/// Do any database initialization required.
/// </summary>
/// <returns>A task that completes when the database is initialized</returns>
public async Task InitializeDatabaseAsync()
{
await this.Database.EnsureCreatedAsync().ConfigureAwait(false);
}
}
program.cs
var builder = WebApplication.CreateBuilder(args);
var connectionString = builder.Configuration.GetConnectionString("C5"); // set the connection string name that you set up (caused issues before)
if (connectionString == null)
{
throw new ApplicationException("DefaultConnection is not set");
}
builder.Services.AddDbContext<AppDbContext>(options => options.UseSqlServer(connectionString));
builder.Services.AddDatasyncControllers();
var app = builder.Build();
// Initialize the database
using (var scope = app.Services.CreateScope())
{
var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
await context.InitializeDatabaseAsync().ConfigureAwait(false);
}
// Configure and run the web service.
app.MapControllers();
app.Run();
As far as my understanding goes, this should create the table, and once I insert data into it, it should show up in SSMS. But it doesn't.
When I open the link to the db I get this:
So it seems to have been created and running.
Funniest part:
If I create the table myself via SSMS with a CREATE TABLE command, named UserItems and with all the columns of the entity, I CAN create data from my app but not READ it. So the auto-create fails, and therefore the connection must have an issue.
Why aren't my tables created automatically?
|
[
"The issue with your code is that you are not specifying the correct type for the connectionString variable. In the Program.cs file, you are calling the GetConnectionString method with the key \"C5\", but you are trying to assign the result to a string variable. This is causing the connectionString variable to be null, which means that the Database.EnsureCreatedAsync method is not being called.\nTo fix this issue, you need to specify the correct type for the connectionString variable. This type should be IConfiguration, which is the type returned by the Configuration property of the WebApplicationBuilder instance. Here is how you can fix this issue:\nvar builder = WebApplication.CreateBuilder(args);\nvar connectionString = builder.Configuration.GetConnectionString(\"C5\"); // set the connection string name that you set up (caused issues before) \n\nif (connectionString == null)\n{\n throw new ApplicationException(\"DefaultConnection is not set\");\n}\n\nbuilder.Services.AddDbContext<AppDbContext>(options => options.UseSqlServer(connectionString));\nbuilder.Services.AddDatasyncControllers();\n\nvar app = builder.Build();\n\n// Initialize the database\nusing (var scope = app.Services.CreateScope())\n{\n var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();\n await context.InitializeDatabaseAsync().ConfigureAwait(false);\n}\n\n// Configure and run the web service.\napp.MapControllers();\napp.Run();\n\nWith this change, the connectionString variable will be of the correct type and the Database.EnsureCreatedAsync method will be called, which should create the UserItems table in the database.\n"
] |
[
0
] |
[] |
[] |
[
"azure"
] |
stackoverflow_0074573784_azure.txt
|
Q:
How can you use images in a list of objects and use them as a prop in react?
I imported pictures into my portfolio component and used each of them as a property in a list of objects. But when I try to send them over as a prop, the image disappears, and instead shows up as a missing picture. If I made an image tag with the pictures within the component I import them in, they appear just fine. But when I pass them as a prop, that's when they disappear. I import them in a Portfolio component, and try to send them as a prop in a Project component.
Here's the Portfolio component:
And here's the Project component:
A:
You shouldn't try to pass an object to a src attribute. Try to send the image URL as a prop; that should work fine for you.
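For example, a minimal sketch (file and component names are hypothetical); with a typical bundler such as CRA, webpack, or Vite, importing an image yields its URL string, which you can store in the list and pass straight to src:
// Importing an image gives back its bundled URL as a string.
import photo1 from "./assets/photo1.png";
import photo2 from "./assets/photo2.png";

const projects = [
  { title: "Project One", image: photo1 }, // image is already a URL string
  { title: "Project Two", image: photo2 },
];

function Project({ title, image }) {
  // src receives the bundled URL, not an object
  return (
    <div>
      <h3>{title}</h3>
      <img src={image} alt={title} />
    </div>
  );
}

export default function Portfolio() {
  return (
    <div>
      {projects.map((p) => (
        <Project key={p.title} title={p.title} image={p.image} />
      ))}
    </div>
  );
}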
|
How can you use images in a list of objects and use them as a prop in react?
|
I imported pictures into my portfolio component and used each of them as a property in a list of objects. But when I try to send them over as a prop, the image disappears, and instead shows up as a missing picture. If I made an image tag with the pictures within the component I import them in, they appear just fine. But when I pass them as a prop, that's when they disappear. I import them in a Portfolio component, and try to send them as a prop in a Project component.
Here's the Portfolio component:
And here's the Project component:
|
[
"You shouldn't try to pass a object to a src attribute. Try to send the image url as a prop, that should work fine for you.\n"
] |
[
0
] |
[] |
[] |
[
"reactjs"
] |
stackoverflow_0074663463_reactjs.txt
|
Q:
M_PI flagged as undeclared identifier
When I compile the code below, I got these error messages:
(Error 1 error C2065: 'M_PI' : undeclared identifier
2 IntelliSense: identifier "M_PI" is undefined)
What is this?
#include <iostream>
#include <math.h>
using namespace std;
double my_sqrt1( double n );
int main() {
double k[5] = {-100, -10, -1, 10, 100};
int i;
for ( i = 0; i < 5; i++ ) {
double val = M_PI * pow( 10.0, k[i] );
cout << "n: "
<< val
<< "\tmysqrt: "
<< my_sqrt1(val)
<< "\tsqrt: "
<< sqrt(val)
<< endl;
}
return 0;
}
double my_sqrt1( double n ) {
int i;
double x = 1;
for ( i = 0; i < 10; i++ ) {
x = ( x + n / x ) / 2;
}
return x;
}
A:
It sounds like you're using MS stuff, according to their docs
Math Constants are not defined in Standard C/C++. To use them, you must first define _USE_MATH_DEFINES and then include cmath or math.h.
So you need something like
#define _USE_MATH_DEFINES
#include <cmath>
as a header.
A:
math.h does not define M_PI by default.
So go with this:
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
This will handle both cases either your header have M_PI defined or not.
A:
M_PI is supported by GCC too, but you've to do some work to get it
#undef __STRICT_ANSI__
#include <cmath>
or if you don't like to pollute your source file, then do
g++ -U__STRICT_ANSI__ <other options>
A:
As noted by shep above you need something like
#define _USE_MATH_DEFINES
#include <cmath>
However you also include iostream.
iostream includes a lot of stuff, and one of those things eventually includes cmath. This means that by the time you include cmath in your file, all its symbols have already been defined, so your include is effectively ignored and the #define _USE_MATH_DEFINES doesn't work.
If you include cmath before iostream it should give you the higher precision constants like M_PI
#define _USE_MATH_DEFINES
#include <cmath>
#include <iostream>
A:
Use this include for Windows 10 (and Windows 11):
#include <corecrt_math_defines.h>
A:
I used C99 in NetBeans with a remote Linux host and its build tools.
Try adding #define _GNU_SOURCE and add -lm during linking.
A:
You must use _USE_MATH_DEFINES before other headers like this:
#define _USE_MATH_DEFINES
#include <cmath>
#include other headers...
|
M_PI flagged as undeclared identifier
|
When I compile the code below, I got these error messages:
(Error 1 error C2065: 'M_PI' : undeclared identifier
2 IntelliSense: identifier "M_PI" is undefined)
What is this?
#include <iostream>
#include <math.h>
using namespace std;
double my_sqrt1( double n );
int main() {
double k[5] = {-100, -10, -1, 10, 100};
int i;
for ( i = 0; i < 5; i++ ) {
double val = M_PI * pow( 10.0, k[i] );
cout << "n: "
<< val
<< "\tmysqrt: "
<< my_sqrt1(val)
<< "\tsqrt: "
<< sqrt(val)
<< endl;
}
return 0;
}
double my_sqrt1( double n ) {
int i;
double x = 1;
for ( i = 0; i < 10; i++ ) {
x = ( x + n / x ) / 2;
}
return x;
}
|
[
"It sounds like you're using MS stuff, according to their docs\n\nMath Constants are not defined in Standard C/C++. To use them, you must first define _USE_MATH_DEFINES and then include cmath or math.h.\n\nSo you need something like \n#define _USE_MATH_DEFINES\n#include <cmath>\n\nas a header.\n",
"math.h does not define M_PI by default. \nSo go with this:\n#ifndef M_PI\n #define M_PI 3.14159265358979323846\n#endif\n\nThis will handle both cases either your header have M_PI defined or not.\n",
"M_PI is supported by GCC too, but you've to do some work to get it\n#undef __STRICT_ANSI__\n#include <cmath>\n\nor if you don't like to pollute your source file, then do\ng++ -U__STRICT_ANSI__ <other options>\n\n",
"As noted by shep above you need something like\n#define _USE_MATH_DEFINES\n#include <cmath>\n\nHowever you also include iostream.\niostream includes a lot of stuff and one of those things eventually includes cmath. This means that by the time you include it in your file all the symbols have already been defined so it is effectively ignored when you include it and the #define _USE_MATH_DEFINES doesn't work\nIf you include cmath before iostream it should give you the higher precision constants like M_PI\n#define _USE_MATH_DEFINES\n#include <cmath>\n#include <iostream>\n\n",
"Use this include for Windows 10 (and Windows 11):\n#include <corecrt_math_defines.h>\n\n",
"I used C99 in NetBeans with remote linux host with its build tools.\nTry adding #define _GNU_SOURCE and add the -lm during linking.\n",
"You must use _USE_MATH_DEFINES before other headers like this:\n#define _USE_MATH_DEFINES\n#include <cmath>\n#incude other headers...\n\n"
] |
[
99,
38,
13,
11,
7,
0,
0
] |
[] |
[] |
[
"c++",
"compile_time_constant",
"development_environment"
] |
stackoverflow_0026065359_c++_compile_time_constant_development_environment.txt
|
Q:
Can we access a Word table by its name and not index using VBA?
I want to access tables from a Word document, and I found a method that uses the table's index. But for my project that causes confusion, so I want to use table names instead, as we can do in Excel with this:
Set tbl = oExcelWorksheet.ListObjects("Table2").Range
But in word to access a table I only found this command
Set oTable = ActiveDocument.Tables("1")
Is there any other command in Word VBA through which I can use the table name to access the table instead of the index?
A:
As @Timothy correctly pointed out in the comments, tables in word don't have names.
One way around is to bookmark the first cell (or any other cell) of each table with the name you want to give the table
Then you can use this bookmark to locate your table. For example you can use this function (I used suggestion from here) [Please see Edit1 below]
Function GetTable(sTableName As String) As Table
Dim sCell_1_Range As Range
With ThisDocument
On Error Resume Next
Set sCell_1_Range = .Bookmarks(sTableName).Range
If Err.Number > 0 Then Exit Function ' table not found
On Error GoTo 0
Set GetTable = .Tables(.Range(0, sCell_1_Range.End).Tables.Count)
End With
End Function
and use it like this
Sub TestTableWithName()
Dim myTable As Table
Set myTable = GetTable("SecondTable")
If Not myTable Is Nothing Then
myTable.Range.Select
End If
End Sub
Edit1
@freeflow suggested a much better implementation of the function
Function GetTable(sTableName As String) As Table
On Error Resume Next
Set GetTable = ThisDocument.Bookmarks(sTableName).Range.Tables(1)
End Function
Which means - depending on your coding style - you might not even need to use a function. Just remember to use On Error GoTo 0 if you use it directly
A:
What I normally do is give a unique title to the table in the table's properties.
Then use a custom function.
Sub getTableByTitle()
Dim doc As Document
Dim tbl As Table
Set tbl = getTable("Tb1")
End Sub
Public Function getTable(s As String) As Table
Dim tbl As Table
For Each tbl In ActiveDocument.Tables
If tbl.Title = s Then
Set getTable = tbl
Exit Function
End If
Next
End Function
|
Can we access a Word table by its name and not index using VBA?
|
I want to access tables from a Word document, and I found a method that uses the table's index. But for my project that causes confusion, so I want to use table names instead, as we can do in Excel with this:
Set tbl = oExcelWorksheet.ListObjects("Table2").Range
But in word to access a table I only found this command
Set oTable = ActiveDocument.Tables("1")
Is there any other command in Word VBA through which I can use the table name to access the table instead of the index?
|
[
"As @Timothy correctly pointed out in the comments, tables in word don't have names.\nOne way around is to bookmark the first cell (or any other cell) of each table with the name you want to give the table\n\nThen you can use this bookmark to locate your table. For example you can use this function (I used suggestion from here) [Please see Edit1 below]\nFunction GetTable(sTableName As String) As Table\n Dim sCell_1_Range As Range\n \n With ThisDocument\n On Error Resume Next\n Set sCell_1_Range = .Bookmarks(sTableName).Range\n If Err.Number > 0 Then Exit Function ' table not found\n On Error GoTo 0\n \n Set GetTable = .Tables(.Range(0, sCell_1_Range.End).Tables.Count)\n End With\nEnd Function\n\nand use it like this\nSub TestTableWithName()\n Dim myTable As Table\n Set myTable = GetTable(\"SecondTable\")\n If Not myTable Is Nothing Then\n myTable.Range.Select\n End If\nEnd Sub\n\nEdit1\n@freeflow suggested a much better implementation of the function\nFunction GetTable(sTableName As String) As Table\n On Error Resume Next\n Set GetTable = ThisDocument.Bookmarks(sTableName).Range.Tables(1)\nEnd Function\n\nWhich means - depending on your coding style - you might not even need to use a function. Just remember to use On Error GoTo 0 if you use it directly\n",
"what I normally do is give unique title to the table in table properties of the table.\n\nThen use a custom function.\nSub getTableByTitle()\nDim doc As Document\nDim tbl As Table\n\nSet tbl = getTable(\"Tb1\")\n\nEnd Sub\n\n\nPublic Function getTable(s As String) As Table\nDim tbl As Table\nFor Each tbl In ActiveDocument.Tables\nIf tbl.Title = s Then\n Set getTable = tbl\n Exit Function\nEnd If\nNext\nEnd Function\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"ms_word",
"vba"
] |
stackoverflow_0063463443_ms_word_vba.txt
|
Q:
Unauthorized error web api windows authentication
I have Blazor Server & Web API inside the same project. I'm able to call the Web API inside my project on my laptop, but I get an unauthorized error after I deployed it to the web server; I'm using Windows authentication. I've also called the Web API using Postman with the same unauthorized result on the server. Here are the relevant codes:
File: Program.cs
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddAuthentication(NegotiateDefaults.AuthenticationScheme).AddNegotiate();
builder.Services.AddAuthorization(o => { o.FallbackPolicy = o.DefaultPolicy; });
var config = builder.Configuration;
builder.Services.AddDbContext<AppCtx>(o => o.UseSqlServer(config.GetConnectionString("APP")));
builder.Services.AddMvc();
builder.Services.AddServerSideBlazor();
string baseUrl = config.GetValue<string>("AppSettings:BaseUrl");
builder.Services.AddHttpClient<IAdminService, AdminService>(client =>
{
client.BaseAddress = new Uri(baseUrl);
})
.ConfigurePrimaryHttpMessageHandler(() =>
new HttpClientHandler()
{
UseDefaultCredentials = true,
Credentials = System.Net.CredentialCache.DefaultCredentials,
AllowAutoRedirect = true
});
var app = builder.Build();
string pathBase = config.GetValue<string>("AppSettings:PathBase");
app.UsePathBase(pathBase);
// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
app.UseExceptionHandler("/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseStatusCodePages();
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");
app.Run();
File: AdminController.cs
[AllowAnonymous]
[ApiController]
[Route("[controller]/[action]")]
public class AdminController : ControllerBase
{
private readonly AppCtx appCtx;
public AdminController(AppCtx appCtx)
{
this.appCtx = appCtx;
}
public async Task<IEnumerable<LocationDto>> Locations(string locs)
{
var prm = new SqlParameter("@locs", SqlDbType.VarChar, 1024);
prm.Value = locs;
string sSql = "EXEC [dbo].[uspLW300_Offices] @locs";
return await appCtx.SqlQueryAsync<LocationDto>(sSql, prm);
}
}
Here's the error I'm seeing in the browser's Developer tools. I've copied the "html" part into a separate file so that it can be easily viewed.
Another view of the error from Postman:
A:
There were two settings I made in IIS in order to allow the Web API to work. One in the application pool, the other in the website.
In the application pool, I changed the Identity to use a custom account instead of one of the built-in accounts.
In the website, I enabled Anonymous Authentication. I also enabled Windows Authentication, since I need the user id in one of the Razor components.
Per MS doc: When Windows Authentication is enabled and anonymous access is disabled, the [Authorize] and [AllowAnonymous] attributes have no effect.
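For reference, a sketch of the equivalent web.config fragment; this assumes the authentication sections are unlocked in IIS's applicationHost.config, otherwise set the same switches in IIS Manager as described above:
<configuration>
  <system.webServer>
    <security>
      <authentication>
        <!-- Both enabled: anonymous access for the API, Windows auth for user identity -->
        <anonymousAuthentication enabled="true" />
        <windowsAuthentication enabled="true" />
      </authentication>
    </security>
  </system.webServer>
</configuration>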
|
Unauthorized error web api windows authentication
|
I have Blazor Server & Web API inside the same project. I'm able to call the Web API inside my project on my laptop, but I get an unauthorized error after I deployed it to the web server; I'm using Windows authentication. I've also called the Web API using Postman with the same unauthorized result on the server. Here are the relevant codes:
File: Program.cs
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddAuthentication(NegotiateDefaults.AuthenticationScheme).AddNegotiate();
builder.Services.AddAuthorization(o => { o.FallbackPolicy = o.DefaultPolicy; });
var config = builder.Configuration;
builder.Services.AddDbContext<AppCtx>(o => o.UseSqlServer(config.GetConnectionString("APP")));
builder.Services.AddMvc();
builder.Services.AddServerSideBlazor();
string baseUrl = config.GetValue<string>("AppSettings:BaseUrl");
builder.Services.AddHttpClient<IAdminService, AdminService>(client =>
{
client.BaseAddress = new Uri(baseUrl);
})
.ConfigurePrimaryHttpMessageHandler(() =>
new HttpClientHandler()
{
UseDefaultCredentials = true,
Credentials = System.Net.CredentialCache.DefaultCredentials,
AllowAutoRedirect = true
});
var app = builder.Build();
string pathBase = config.GetValue<string>("AppSettings:PathBase");
app.UsePathBase(pathBase);
// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
app.UseExceptionHandler("/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseStatusCodePages();
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");
app.Run();
File: AdminController.cs
[AllowAnonymous]
[ApiController]
[Route("[controller]/[action]")]
public class AdminController : ControllerBase
{
private readonly AppCtx appCtx;
public AdminController(AppCtx appCtx)
{
this.appCtx = appCtx;
}
public async Task<IEnumerable<LocationDto>> Locations(string locs)
{
var prm = new SqlParameter("@locs", SqlDbType.VarChar, 1024);
prm.Value = locs;
string sSql = "EXEC [dbo].[uspLW300_Offices] @locs";
return await appCtx.SqlQueryAsync<LocationDto>(sSql, prm);
}
}
Here's the error I'm seeing in the browser's Developer tools. I've copied the "html" part into a separate file so that it can be easily viewed.
Another view of the error from Postman:
|
[
"There were two settings I made in IIS in order to allow the Web API to work. One in the application pool, the other in the website.\nIn the application pool, I change the Identity to use a custom account instead of using one of the built-in accounts.\n\nIn the website, I enabled the Anonymous Authentication. I also enabled the Windows Authentication since I need the user id in one of the razor components.\n\nPer MS doc: When Windows Authentication is enabled and anonymous access is disabled, the [Authorize] and [AllowAnonymous] attributes have no effect.\n"
] |
[
0
] |
[] |
[] |
[
".net_6.0",
"asp.net_core_webapi",
"blazor_server_side",
"iis",
"windows_authentication"
] |
stackoverflow_0074648595_.net_6.0_asp.net_core_webapi_blazor_server_side_iis_windows_authentication.txt
|
Q:
Trying to call/POST a third-party API in Java Spring
My issue: when I try this I get a media type error, so I changed the header. Now I receive a 500 error. The problem isn't the API; in Postman it works perfectly. Am I doing something wrong in my code when making the POST request?
My object model
public class EmailModel {
private String module;
private String notificationGroupType;
private String notificationGroupCode;
private String notificationType;
private String inLineRecipients;
private String eventCode;
private HashMap<String, Object> metaData;
public EmailModel() {
this.module = "CORE";
this.notificationGroupType = "PORTAL";
this.notificationGroupCode = "DEFAULT";
this.notificationType = "EMAIL";
this.inLineRecipients = "[[email protected],[email protected]]";
this.eventCode = "DEFAULT";
this.metaData = metaData;
}
}
My Controller
It should send a POST request with an object body, and the emails get sent.
@RequestMapping(value = "test", method = RequestMethod.Post)
public void post() throws Exception {
String uri = "TestUrl";
EmailModel em = new EmailModel();
EmailModel data = em;
HttpClient client = HttpClient.newBuilder().build();
HttpRequest request = HttpRequest.newBuilder()
.headers("Content-Type", "application/json")
.uri(URI.create(uri))
.POST(HttpRequest.BodyPublishers.ofString(String.valueOf(data)))
.build();
HttpResponse<?> response = client.send(request, HttpResponse.BodyHandlers.discarding());
System.out.println(em);
System.out.println(response.statusCode());
}
postmanImage
A:
You must convert EmailModel to JSON format with ObjectMapper:
ObjectMapper objectMapper = new ObjectMapper();
String data = objectMapper
.writerWithDefaultPrettyPrinter()
.writeValueAsString(em);
and change POST to :
.POST(HttpRequest.BodyPublishers.ofString(data))
See more about ObjectMapper
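Putting it together, a minimal sketch (the URL is a placeholder, and it assumes EmailModel exposes public getters, which Jackson needs in order to serialize the private fields):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.ObjectMapper;

public class EmailPoster {
    public static void main(String[] args) throws Exception {
        // Serialize the model to real JSON instead of relying on toString().
        ObjectMapper objectMapper = new ObjectMapper();
        String json = objectMapper.writeValueAsString(new EmailModel());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/notifications")) // placeholder URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}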
A:
Capture requests and cookies (on the left side of the settings icon)
-> Request
-> Port, and put the port number there
|
Trying to call/POST a third-party API in Java Spring
|
My issue: when I try this I get a media type error, so I changed the header. Now I receive a 500 error. The problem isn't the API; in Postman it works perfectly. Am I doing something wrong in my code when making the POST request?
My object model
public class EmailModel {
private String module;
private String notificationGroupType;
private String notificationGroupCode;
private String notificationType;
private String inLineRecipients;
private String eventCode;
private HashMap<String, Object> metaData;
public EmailModel() {
this.module = "CORE";
this.notificationGroupType = "PORTAL";
this.notificationGroupCode = "DEFAULT";
this.notificationType = "EMAIL";
this.inLineRecipients = "[[email protected],[email protected]]";
this.eventCode = "DEFAULT";
this.metaData = metaData;
}
}
My Controller
It should send a POST request with an object body, and the emails get sent.
@RequestMapping(value = "test", method = RequestMethod.Post)
public void post() throws Exception {
String uri = "TestUrl";
EmailModel em = new EmailModel();
EmailModel data = em;
HttpClient client = HttpClient.newBuilder().build();
HttpRequest request = HttpRequest.newBuilder()
.headers("Content-Type", "application/json")
.uri(URI.create(uri))
.POST(HttpRequest.BodyPublishers.ofString(String.valueOf(data)))
.build();
HttpResponse<?> response = client.send(request, HttpResponse.BodyHandlers.discarding());
System.out.println(em);
System.out.println(response.statusCode());
}
postmanImage
|
[
"You must to convert EmailModel to json format by ObjectMapper\nObjectMapper objectMapper = new ObjectMapper();\nString data = objectMapper\n .writerWithDefaultPrettyPrinter()\n .writeValueAsString(em);\n\nand change POST to :\n.POST(HttpRequest.BodyPublishers.ofString(data))\n\nSee more about ObjectMapper\n",
"Capture requests and cookies(on the left side of setting icon)\n->Request\n->port and put the port number there\n"
] |
[
1,
0
] |
[] |
[] |
[
"http",
"java",
"spring",
"spring_boot"
] |
stackoverflow_0071990493_http_java_spring_spring_boot.txt
|
Q:
Need to implement pagination on Collection views using Compositional layout / How to reset collectionViewLayout with new section without blinking?
I have 5 different sections; 3 of them I load during the initial call and the remaining 2 during scroll (pagination).
If I create the UICollectionViewCompositionalLayout beforehand, the data gets loaded in the order the NSCollectionLayoutSection objects were created, which causes my data to load in a different section; and if I recreate the UICollectionViewCompositionalLayout during pagination, the collection view blinks.
func createHomeLayout(for collectionView: UICollectionView, sections: [SectionType]) -> UICollectionViewLayout {
let layout = UICollectionViewCompositionalLayout { (sectionIndex: Int,
layoutEnvironment: NSCollectionLayoutEnvironment) -> NSCollectionLayoutSection? in
guard sectionIndex < sections.count else { return nil }
let section = sections[sectionIndex]
switch section {
case .ba:
return BaSectionCreator.create(for: collectionView.frame)
case .br:
return BrSectionCreator.create()
case .ca:
return CaSectionCreator.create()
case .pr:
return PrSectionCreator.create()
case .re:
return ReSectionCreator.create()
}
}
return layout
}
A:
You can avoid the blinking of the collection view by creating the UICollectionViewCompositionalLayout only once, before the initial call to load the first 3 sections of data. Then, instead of recreating the layout on subsequent pagination calls, you can use the UICollectionView's performBatchUpdates method to insert the new sections of data into the existing layout. This will allow you to insert the new sections of data without recreating the entire layout, which should avoid the blinking of the collection view.
Here is an example of how you can do this:
// Create the UICollectionViewCompositionalLayout before loading the first 3 sections of data.
let layout = createHomeLayout(for: collectionView, sections: [.ba, .br, .ca])
collectionView.setCollectionViewLayout(layout, animated: false)
// Load the first 3 sections of data.
// Handle pagination by inserting the new sections of data into the existing layout.
collectionView.performBatchUpdates({
let newSections: [SectionType] = [.pr, .re]
let newSectionsIndexSet = NSIndexSet(indexesIn: NSRange(location: 3, length: 2))
collectionView.insertSections(newSectionsIndexSet as IndexSet)
}, completion: nil)
This approach allows you to avoid recreating the layout on subsequent pagination calls, which should avoid the blinking of the collection view. Note that you will need to update the createHomeLayout method to return the correct NSCollectionLayoutSection for each section type, including the .pr and .re section types that are added on subsequent pagination calls.
|
Need to implement pagination on Collection views using Compositional layout / How to reset collectionViewLayout with new section without blinking?
|
I have 5 different sections; 3 of them I load during the initial call and the remaining 2 during scroll (pagination).
If I create the UICollectionViewCompositionalLayout beforehand, the data gets loaded in the order the NSCollectionLayoutSection objects were created, which causes my data to load in a different section; and if I recreate the UICollectionViewCompositionalLayout during pagination, the collection view blinks.
func createHomeLayout(for collectionView: UICollectionView, sections: [SectionType]) -> UICollectionViewLayout {
let layout = UICollectionViewCompositionalLayout { (sectionIndex: Int,
layoutEnvironment: NSCollectionLayoutEnvironment) -> NSCollectionLayoutSection? in
guard sectionIndex < sections.count else { return nil }
let section = sections[sectionIndex]
switch section {
case .ba:
return BaSectionCreator.create(for: collectionView.frame)
case .br:
return BrSectionCreator.create()
case .ca:
return CaSectionCreator.create()
case .pr:
return PrSectionCreator.create()
case .re:
return ReSectionCreator.create()
}
}
return layout
}
|
[
"You can avoid the blinking of the collection view by creating the UICollectionViewCompositionalLayout only once, before the initial call to load the first 3 sections of data. Then, instead of recreating the layout on subsequent pagination calls, you can use the UICollectionView's performBatchUpdates method to insert the new sections of data into the existing layout. This will allow you to insert the new sections of data without recreating the entire layout, which should avoid the blinking of the collection view.\nHere is an example of how you can do this:\n// Create the UICollectionViewCompositionalLayout before loading the first 3 sections of data.\nlet layout = createHomeLayout(for: collectionView, sections: [.ba, .br, .ca])\ncollectionView.setCollectionViewLayout(layout, animated: false)\n\n// Load the first 3 sections of data.\n\n// Handle pagination by inserting the new sections of data into the existing layout.\ncollectionView.performBatchUpdates({\n let newSections: [SectionType] = [.pr, .re]\n let newSectionsIndexSet = NSIndexSet(indexesIn: NSRange(location: 3, length: 2))\n collectionView.insertSections(newSectionsIndexSet as IndexSet)\n}, completion: nil)\n\nThis approach allows you to avoid recreating the layout on subsequent pagination calls, which should avoid the blinking of the collection view. Note that you will need to update the createHomeLayout method to return the correct NSCollectionLayoutSection for each section type, including the .pr and .re section types that are added on subsequent pagination calls.\n"
] |
[
0
] |
[] |
[] |
[
"ios",
"swift",
"uicollectionviewcompositionallayout",
"uicollectionviewdiffabledatasource",
"uikit"
] |
stackoverflow_0074543296_ios_swift_uicollectionviewcompositionallayout_uicollectionviewdiffabledatasource_uikit.txt
|
Q:
How can I store the current directory as a variable in Python?
I'm trying to build a basic terminal that performs basic operations in Python. I have made all the main functions, but the cd function isn't working to change my current directory.
I suspect that the problem is in the way I store my current directory. Perhaps I need to store it as a variable instead of using a function.
This is the code.
#####################################
# import modules.
# pwd - view the current folder function.
# ls - list files in a folder function.
# touch (filename) - create new empty file function.
# rm (filename) - delete a file function.
# cd - go to another folder function.
# cat (filename) - display the contents of a file function.
######################################
import os
import pathlib
from os.path import join
path = os.getcwd()
# DONE
def ls():
os.listdir(path)
print(os.listdir(path))
def pwd():
print(os.getcwd())
def touch(file_name):
fp = open(join(path, file_name), 'a')
fp.close()
def rm(file_name):
file = pathlib.Path(join(path, file_name))
file.unlink()
def cd(file_name):
os.chdir(join(path, file_name))
while True < 100:
dirName = input()
cmd = dirName.split(" ")[0]
if cmd == "ls":
ls()
elif cmd == "pwd":
pwd()
elif cmd == "cd":
file_name = dirName.split(" ")[1]
cd(file_name)
print(os.getcwd())
elif cmd == "touch":
file_name = dirName.split(" ")[1]
touch(file_name)
elif cmd == "rm":
file_name = dirName.split(" ")[1]
rm(file_name)
elif cmd == 'cd': #
file_name = dirName.split(" ")[1]
cd(file_name)
print(pwd(file_name))
else:
print("Command not found!")
I tried to change directory using the cd function in my custom terminal, but it's not working. I expected the cd function to work correctly.
A:
It looks like you are storing the current working directory in the path variable when you import it at the beginning of your code. However, when you call os.chdir in your cd function, it changes the current working directory, but it doesn't update the path variable to reflect this change. As a result, when you call os.listdir in your ls function, it still lists the files in the old working directory instead of the new one.
One way to fix this is to update the path variable whenever you call os.chdir in the cd function. Note that os.chdir returns None, so assigning its result to path would not work; instead, call os.chdir first and then refresh path from os.getcwd(), declaring path as global so the assignment updates the module-level variable. This will update path to the new working directory, and the ls function will work as expected. Here is what the updated code might look like:
import os
import pathlib
from os.path import join
path = os.getcwd()
def ls():
os.listdir(path)
print(os.listdir(path))
def pwd():
print(os.getcwd())
def touch(file_name):
fp = open(join(path, file_name), 'a')
fp.close()
def rm(file_name):
file = pathlib.Path(join(path, file_name))
file.unlink()
def cd(file_name):
    global path
    os.chdir(join(path, file_name))
    path = os.getcwd()  # Update the path variable
while True < 100:
dirName = input()
cmd = dirName.split(" ")[0]
if cmd == "ls":
ls()
elif cmd == "pwd":
pwd()
elif cmd == "cd":
file_name = dirName.split(" ")[1]
cd(file_name)
print(os.getcwd())
elif cmd == "touch":
file_name = dirName.split(" ")[1]
touch(file_name)
elif cmd == "rm":
file_name = dirName.split(" ")[1]
rm(file_name)
elif cmd == 'cd': #
file_name = dirName.split(" ")[1]
cd(file_name)
print(pwd(file_name))
else:
print("Command not found!")
Another approach would be to use the os.getcwd function to get the current working directory inside each function instead of using the path variable. This way, the path variable won't be necessary, and you can remove it from your code. Here is an example of what this might look like:
import os
import pathlib
from os.path import join
def ls():
print(os.listdir(os.getcwd())) # Use os.getcwd() instead of path
def pwd():
print(os.getcwd())
def touch(file_name):
fp = open(join(os.getcwd(), file_name), 'a') # Use os.getcwd() instead of path
fp.close()
A:
There are a few issues with your cd function. First, you are using the global path variable to store the current working directory, but you are not updating this variable when calling cd. Second, the cd function does not return anything, so the print statement in the while loop does not have any effect.
Here is one way to fix these issues:
Store the current working directory in a variable current_dir that is initialized to the current working directory at startup. This variable must be declared global inside cd so it can be updated whenever the cd function is called.
Inside the cd function, change the current working directory using the os.chdir function and update the current_dir variable.
In the while loop, call the pwd function after calling the cd function to display the new current working directory.
Here is the updated code that implements these changes:
import os
import pathlib
from os.path import join
# DONE
def ls():
os.listdir(current_dir)
print(os.listdir(current_dir))
def pwd():
print(current_dir)
def touch(file_name):
fp = open(join(current_dir, file_name), 'a')
fp.close()
def rm(file_name):
file = pathlib.Path(join(current_dir, file_name))
file.unlink()
def cd(file_name):
    global current_dir
    os.chdir(join(current_dir, file_name))
    current_dir = os.getcwd()
current_dir = os.getcwd()
while True < 100:
dirName = input()
cmd = dirName.split(" ")[0]
if cmd == "ls":
ls()
elif cmd == "pwd":
pwd()
elif cmd == "cd":
file_name = dirName.split(" ")[1]
cd(file_name)
pwd()
elif cmd == "touch":
file_name = dirName.split(" ")[1]
touch(file_name)
elif cmd == "rm":
file_name = dirName.split(" ")[1]
rm(file_name)
else:
print("Command not found!")
With these changes, the cd function should work as expected. You can further improve the code by adding error handling for invalid directory names and making the input parsing more robust.
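For example, a small sketch of more robust input parsing with shlex (the parse helper is hypothetical, not part of the original code); it handles quoted filenames such as touch "my file.txt":
import shlex

def parse(line):
    """Split a command line into (command, args), honoring quotes."""
    parts = shlex.split(line)
    if not parts:
        return None, []
    return parts[0], parts[1:]

cmd, args = parse(input("> "))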
A:
You have to change the path value to the updated path value.
Try this.
import os
import pathlib
from os.path import join
path = os.getcwd()
# DONE
def ls():
os.listdir(path)
print(os.listdir(path))
def pwd():
print(os.getcwd())
def touch(file_name):
fp = open(join(path, file_name), 'a')
fp.close()
def rm(file_name):
file = pathlib.Path(join(path, file_name))
file.unlink()
def cd(file_name):
    global path
    os.chdir(join(path, file_name))
    path = os.getcwd()
while True < 100:
dirName = input()
cmd = dirName.split(" ")[0]
if cmd == "ls":
ls()
elif cmd == "pwd":
pwd()
elif cmd == "cd":
file_name = dirName.split(" ")[1]
cd(file_name)
print(os.getcwd())
elif cmd == "touch":
file_name = dirName.split(" ")[1]
touch(file_name)
elif cmd == "rm":
file_name = dirName.split(" ")[1]
rm(file_name)
elif cmd == 'cd': #
file_name = dirName.split(" ")[1]
cd(file_name)
print(pwd(file_name))
else:
print("Command not found!")
|
How can I store the current directory as a variable in Python?
|
I'm trying to build a basic terminal that performs basic operations in Python. I have made all the main functions, but the cd function isn't working to change my current directory.
I suspect that the problem is in the way I store my current directory. Perhaps I need to store it as a variable instead of using a function.
This is the code.
#####################################
# import modules.
# pwd - view the current folder function.
# ls - list files in a folder function.
# touch (filename) - create new empty file function.
# rm (filename) - delete a file function.
# cd - go to another folder function.
# cat (filename) - display the contents of a file function.
######################################
import os
import pathlib
from os.path import join
path = os.getcwd()
# DONE
def ls():
os.listdir(path)
print(os.listdir(path))
def pwd():
print(os.getcwd())
def touch(file_name):
fp = open(join(path, file_name), 'a')
fp.close()
def rm(file_name):
file = pathlib.Path(join(path, file_name))
file.unlink()
def cd(file_name):
os.chdir(join(path, file_name))
while True < 100:
dirName = input()
cmd = dirName.split(" ")[0]
if cmd == "ls":
ls()
elif cmd == "pwd":
pwd()
elif cmd == "cd":
file_name = dirName.split(" ")[1]
cd(file_name)
print(os.getcwd())
elif cmd == "touch":
file_name = dirName.split(" ")[1]
touch(file_name)
elif cmd == "rm":
file_name = dirName.split(" ")[1]
rm(file_name)
elif cmd == 'cd': #
file_name = dirName.split(" ")[1]
cd(file_name)
print(pwd(file_name))
else:
print("Command not found!")
I tried to change directory using the cd function in my custom terminal, but it's not working. I expected the cd function to work correctly.
|
[
"It looks like you are storing the current working directory in the path variable when you import it at the beginning of your code. However, when you call os.chdir in your cd function, it changes the current working directory, but it doesn't update the path variable to reflect this change. As a result, when you call os.listdir in your ls function, it still lists the files in the old working directory instead of the new one.\nOne way to fix this is to update the path variable whenever you call os.chdir in the cd function. You can do this by assigning the result of os.chdir to path. This will update path to the new working directory, and the ls function will work as expected. Here is what the updated code might look like:\nimport os\nimport pathlib\nfrom os.path import join\n\npath = os.getcwd()\n\n\ndef ls():\n os.listdir(path)\n print(os.listdir(path))\n\n\ndef pwd():\n print(os.getcwd())\n\n\ndef touch(file_name):\n fp = open(join(path, file_name), 'a')\n fp.close()\n\n\ndef rm(file_name):\n file = pathlib.Path(join(path, file_name))\n file.unlink()\n\n\ndef cd(file_name):\n path = os.chdir(join(path, file_name)) # Update the path variable\n\n\nwhile True < 100:\n dirName = input()\n cmd = dirName.split(\" \")[0]\n\n if cmd == \"ls\": \n ls()\n elif cmd == \"pwd\": \n pwd()\n elif cmd == \"cd\": \n file_name = dirName.split(\" \")[1]\n cd(file_name)\n print(os.getcwd())\n elif cmd == \"touch\": \n file_name = dirName.split(\" \")[1]\n touch(file_name)\n elif cmd == \"rm\": \n file_name = dirName.split(\" \")[1]\n rm(file_name)\n elif cmd == 'cd': #\n file_name = dirName.split(\" \")[1]\n cd(file_name)\n print(pwd(file_name))\n else:\n print(\"Command not found!\")\n\nAnother approach would be to use the os.getcwd function to get the current working directory inside each function instead of using the path variable. This way, the path variable won't be necessary, and you can remove it from your code. Here is an example of what this might look like:\nimport os\nimport pathlib\nfrom os.path import join\n\ndef ls():\n print(os.listdir(os.getcwd())) # Use os.getcwd() instead of path\n\n\ndef pwd():\n print(os.getcwd())\n\n\ndef touch(file_name):\n fp = open(join(os.getcwd(), file_name), 'a') # Use os.getcwd() instead of path\n fp.close()\n\n",
"There are a few issues with your cd function. First, you are using the global path variable to store the current working directory, but you are not updating this variable when calling cd. Second, the cd function does not return anything, so the print statement in the while loop does not have any effect.\nHere is one way to fix these issues:\nInstead of using a global variable to store the current working directory, use a local variable current_dir that is initialized to the current working directory. This variable should be updated whenever the cd function is called.\nInside the cd function, change the current working directory using the os.chdir function and update the current_dir variable.\nIn the while loop, call the pwd function after calling the cd function to display the new current working directory.\nHere is the updated code that implements these changes:\nimport os\nimport pathlib\nfrom os.path import join\n\n# DONE\ndef ls():\n os.listdir(current_dir)\n print(os.listdir(current_dir))\n\ndef pwd():\n print(current_dir)\n\ndef touch(file_name):\n fp = open(join(current_dir, file_name), 'a')\n fp.close()\n\ndef rm(file_name):\n file = pathlib.Path(join(current_dir, file_name))\n file.unlink()\n\ndef cd(file_name):\n os.chdir(join(current_dir, file_name))\n current_dir = os.getcwd()\n\ncurrent_dir = os.getcwd()\nwhile True < 100:\n dirName = input()\n cmd = dirName.split(\" \")[0]\n\n if cmd == \"ls\": \n ls()\n elif cmd == \"pwd\": \n pwd()\n elif cmd == \"cd\": \n file_name = dirName.split(\" \")[1]\n cd(file_name)\n pwd()\n elif cmd == \"touch\": \n file_name = dirName.split(\" \")[1]\n touch(file_name)\n elif cmd == \"rm\": \n file_name = dirName.split(\" \")[1]\n rm(file_name)\n else:\n print(\"Command not found!\")\n\nWith these changes, the cd function should work as expected. You can further improve the code by adding error handling for invalid directory names and making the input parsing more robust.\n",
"You have to change the path value to the updated path value.\nTry this.\n\nimport os\nimport pathlib\nfrom os.path import join\n\npath = os.getcwd()\n\n\n# DONE\ndef ls():\n os.listdir(path)\n print(os.listdir(path))\n\n\ndef pwd():\n print(os.getcwd())\n\n\ndef touch(file_name):\n fp = open(join(path, file_name), 'a')\n fp.close()\n\n\ndef rm(file_name):\n file = pathlib.Path(join(path, file_name))\n file.unlink()\n\n\ndef cd(file_name):\n global path\n path = os.chdir(join(path, file_name))\n\n\nwhile True < 100:\n dirName = input()\n cmd = dirName.split(\" \")[0]\n\n if cmd == \"ls\": \n ls()\n elif cmd == \"pwd\": \n pwd()\n elif cmd == \"cd\": \n file_name = dirName.split(\" \")[1]\n cd(file_name)\n print(os.getcwd())\n elif cmd == \"touch\": \n file_name = dirName.split(\" \")[1]\n touch(file_name)\n elif cmd == \"rm\": \n file_name = dirName.split(\" \")[1]\n rm(file_name)\n elif cmd == 'cd': #\n file_name = dirName.split(\" \")[1]\n cd(file_name)\n print(pwd(file_name))\n else:\n print(\"Command not found!\")\n\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"python",
"web_scraping"
] |
stackoverflow_0074664156_python_web_scraping.txt
|
Q:
Is AES 256 CBC Crypto (Cipher) logic available in Kotlin Multiplatform (KMM)?
I found AES encryption logic in Kotlin using javax libraries. Since that is specific to Java (Android), it doesn't run on iOS.
A:
You can use the krypto or libsodium wrapper libraries.
For example, with the krypto library you can easily implement AES in the commonMain module by using these functions (ECB shown here; note that encrypt and decrypt must use the same padding):
implementation("com.soywiz.korlibs.krypto:krypto:${Version.krypto}")
AES.encryptAesEcb(dataByteArray, keyByteArray, Padding.ANSIX923Padding)
AES.decryptAesEcb(dataByteArray, keyByteArray, Padding.ANSIX923Padding)
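A minimal round-trip sketch, assuming the krypto API above (the key must be 16, 24, or 32 bytes for AES-128/192/256):
import com.soywiz.krypto.AES
import com.soywiz.krypto.Padding

fun main() {
    val key = ByteArray(32) { it.toByte() }             // demo key only; never hardcode real keys
    val plain = "hello kmm".encodeToByteArray()

    val cipher = AES.encryptAesEcb(plain, key, Padding.ANSIX923Padding)
    val roundTrip = AES.decryptAesEcb(cipher, key, Padding.ANSIX923Padding)

    println(roundTrip.decodeToString())                 // prints: hello kmm
}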
|
Is AES 256 CBC Crypto (Cipher) logic available in Kotlin Multiplatform (KMM)?
|
I found AES encryption logic in Kotlin using javax libraries. Since that is specific to Java (Android), it doesn't run on iOS.
|
[
"You can use krypto or libsodum wrapper libraries.\nFor example, with krypto library you can easily implement AES CBC in commanMain module by using these functions:\nimplementation(\"com.soywiz.korlibs.krypto:krypto:${Version.krypto}\")\n\n\n\nAES.encryptAesEcb(dataByteArray, keyByteArray, Padding.NoPadding)\nAES.decryptAesEcb(dataByteArray, keyByteArray, Padding.ANSIX923Padding)\n\n"
] |
[
0
] |
[] |
[] |
[
"ios",
"kmm"
] |
stackoverflow_0070503926_ios_kmm.txt
|
Q:
Why am I getting 'name' deprecated while using const? I tried using let but it didn't work either
Every time I try to run my code I get a 'name is deprecated' error. I don't know where I am making a mistake, so I have uploaded both my client.js and index.js files.
This is what Visual Studio is giving me:
const name: void
@deprecated
'name' is deprecated.ts(6385)
lib.dom.d.ts(17642, 5): The declaration was marked as deprecated here.
After changing name to testname I don't get the deprecated error, but my console.log doesn't print 'New user'. Additionally, socket.emit is not working either.
client.js
const socket = io('http://localhost:8000');
const form = document.getElementById('send-container');
const messageInput = document.getElementById('messageInp');
const messageContainer = document.querySelector(".container");
const testname = prompt("Enter your name to join:");
socket.emit('new-user-joined',testname)
Index.js
//node Server which will handle socket io connection
const io = require('socket.io')(8000)
const users ={};
io.on('connection',socket => {
socket.on('new-user-joined', testname => {
console.log("New user" ,testname)
users[socket.id]= testname;
socket.broadcast.emit('user-joined',testname);
});
socket.on('send', message =>{
socket.broadcast.emit('receive',{message: message, testname: users[socket.io]})
});
})
A:
The variable 'name' already exists at the top level ("global scope") on the window object, although it is deprecated. If you declare the same variable inside a function, you'll see that the error message disappears.
If you want to use it in the global scope, you should rename it to something more specific. Alternatively, you could use JavaScript modules, which execute in their own scope.
https://developer.mozilla.org/en-US/docs/Web/API/Window/name
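A small sketch of the collision (the function name is hypothetical); TypeScript's lib.dom.d.ts declares a deprecated global name for window.name, which is what triggers the warning:
// Top level of a classic script: clashes with the deprecated global `name`.
const name = prompt("Enter your name to join:"); // 'name' is deprecated.ts(6385)

// Inside a function (or an ES module) the binding is local, so no warning:
function joinChat() {
    const name = prompt("Enter your name to join:"); // fine here
    socket.emit("new-user-joined", name);
}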
|
Why am I getting 'name' deprecated while using const? I tried using let but it didn't work either
|
Every time I try to run my code I get a 'name is deprecated' error. I don't know where I am making a mistake, so I have uploaded both my client.js and index.js files.
This is what Visual Studio is giving me:
const name: void
@deprecated
'name' is deprecated.ts(6385)
lib.dom.d.ts(17642, 5): The declaration was marked as deprecated here.
After changing name to testname I don't get the deprecated error, but my console.log doesn't print 'New user'. Additionally, socket.emit is not working either.
client.js
const socket = io('http://localhost:8000');
const form = document.getElementById('send-container');
const messageInput = document.getElementById('messageInp');
const messageContainer = document.querySelector(".container");
const testname = prompt("Enter your name to join:");
socket.emit('new-user-joined',testname)
Index.js
//node Server which will handle socket io connection
const io = require('socket.io')(8000)
const users ={};
io.on('connection',socket => {
socket.on('new-user-joined', testname => {
console.log("New user" ,testname)
users[socket.id]= testname;
socket.broadcast.emit('user-joined',testname);
});
socket.on('send', message =>{
socket.broadcast.emit('receive',{message: message, testname: users[socket.io]})
});
})
|
[
"The variable 'name' already exists at the top-level (\"global scope\") in the window object, although it is deprecated. If you declare the same variable inside a function you'll see that error message will disappear.\nIf you want to use it in the global scope, you should rename this to something more specific. Alternatively, you could use JavaScript modules, they execute in their own scope.\nhttps://developer.mozilla.org/en-US/docs/Web/API/Window/name\n"
] |
[
0
] |
[] |
[] |
[
"css",
"javascript",
"node.js",
"socket.io"
] |
stackoverflow_0074663959_css_javascript_node.js_socket.io.txt
|
Q:
How to use position_dodge2() to preserve width of the bars?
There's a behaviour with position_dodge2 in ggplot which I cannot seem to understand. This question has been asked before and is also on tidyverse's page on position FAQs but I don't understand what's going on here.
I have a simple dataset with a numerical Cost value, and two factors SEX and RACE. I make a basic bar graph as follows:
ggplot(healthdata) +
geom_col(aes(x = RACE, y = Costs, fill=SEX), position = "dodge")
I need to make sure RACE 5 has the same width as the rest, so I do what every source says which is use position_dodge2, and set the preserve parameter to "single". Yet this is the output I get. Can someone help me understand this?
ggplot(healthdata) +
geom_col(aes(x = RACE, y = HealthcareCosts, fill=SEX), position = position_dodge2(preserve = "single"))
Why does the scaling go all over the place? I can adjust the parameter 'padding' to make the bars thicker, but I don't understand how changing "dodge" to position_dodge2(preserve = "single") has caused this change in the graph. The highest of the many bars in the second case match the heights of the bars in the first. So what are all the extra bars then? I followed the instructions from the second example on this page.
A:
Your example uses geom_col while the reference uses geom_bar, and the key difference is that geom_col draws one bar per row of your data. With plain "dodge", rows that share the same x and fill are drawn on top of each other, so each group looks like a single bar whose visible height is the tallest row. Once position_dodge2(preserve = "single") is used, every row is preserved and given its own slot, which is where all the extra bars come from. To get what I suspect you're looking for, summarize the data first and then pipe it into ggplot, using geom_bar instead:
mtcars |>
group_by(cyl,vs) |>
mutate(total = sum(mpg)) |>
ungroup() |>
select(cyl, vs, total) |>
distinct() |> #important for only graphing single element
ggplot(aes(x = factor(cyl), y = total, fill = factor(vs))) +
geom_bar(position = position_dodge2(preserve = "single"), stat = "identity")
|
How to use position_dodge2() to preserve width of the bars?
|
There's a behaviour with position_dodge2 in ggplot which I cannot seem to understand. This question has been asked before and is also on tidyverse's page on position FAQs but I don't understand what's going on here.
I have a simple dataset with a numerical Cost value, and two factors SEX and RACE. I make a basic bar graph as follows:
ggplot(healthdata) +
geom_col(aes(x = RACE, y = Costs, fill=SEX), position = "dodge")
I need to make sure RACE 5 has the same width as the rest, so I do what every source says which is use position_dodge2, and set the preserve parameter to "single". Yet this is the output I get. Can someone help me understand this?
ggplot(healthdata) +
geom_col(aes(x = RACE, y = HealthcareCosts, fill=SEX), position = position_dodge2(preserve = "single"))
Why does the scaling go all over the place? I can adjust the parameter 'padding' to make the bars thicker, but I don't understand how changing "dodge" to position_dodge2(preserve = "single") has caused this change in the graph. The highest of the many bars in the second case match the heights of the bars in the first. So what are all the extra bars then? I followed the instructions from the second example on this page.
|
[
"your example is using geom_col and the reference is using geom_bar, which have some key differences - geom_col is mapping each element but performing a summarizing of the total cost. when the position_dodge(preserve = \"single\") is used, it is then preserving the single printing of each element. To get this to match what I suspect is what you're looking for is to perform the summarizing then pipe into ggplot, using geom_bar instead\nmtcars |>\n group_by(cyl,vs) |>\n mutate(total = sum(mpg)) |>\n ungroup() |>\n select(cyl, vs, total) |>\n distinct() |> #important for only graphing single element \n ggplot(aes(x = factor(cyl), y = total, fill = factor(vs))) +\n geom_bar(position = position_dodge2(preserve = \"single\"), stat = \"identity\")\n\n\n"
] |
[
0
] |
[] |
[] |
[
"bar_chart",
"ggplot2",
"r"
] |
stackoverflow_0074663620_bar_chart_ggplot2_r.txt
|
Q:
Exported VertexAI TabularModel model_warm_up fails when running docker
Good Evening,
I have followed the instructions found here.
https://cloud.google.com/vertex-ai/docs/export/export-model-tabular
I trained the model on the Google Cloud Platform console
Then exported the model per the instructions. However when I run the docker run command I get the following:
docker run -v `pwd`/model-1216534849343455232/tf-saved-model/model:/models/default -p 8080:8080 -it us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1
INFO:root:running uCAIP model server
2022-04-12 02:07:09.118593: I tensorflow_serving/model_servers/server.cc:85] Building single TensorFlow model file config: model_name: default model_base_path: /models/default/predict
2022-04-12 02:07:09.118695: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2022-04-12 02:07:09.118703: I tensorflow_serving/model_servers/server_core.cc:573] (Re-)adding model: default
2022-04-12 02:07:09.219134: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: default version: 1}
2022-04-12 02:07:09.219153: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: default version: 1}
2022-04-12 02:07:09.219159: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: default version: 1}
2022-04-12 02:07:09.219172: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/default/predict/001
2022-04-12 02:07:09.229531: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2022-04-12 02:07:09.241239: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2022-04-12 02:07:09.256079: E external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1575] OpKernel ('op: "DecodeProtoSparse" device_type: "CPU"') for unknown op: DecodeProtoSparse
2022-04-12 02:07:09.277522: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2022-04-12 02:07:09.338428: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: /models/default/predict/001
2022-04-12 02:07:09.371063: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 151887 microseconds.
2022-04-12 02:07:09.373646: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:117] Starting to read warmup data for model at /models/default/predict/001/assets.extra/tf_serving_warmup_requests with model-warmup-options
2022-04-12 02:07:09.573843: F external/org_tensorflow/tensorflow/core/framework/tensor_shape.cc:44] Check failed: NDIMS == dims() (1 vs. 2)Asking for tensor of 1 dimensions from a tensor of 2 dimensions
2022-04-12 02:07:09.573843: F external/org_tensorflow/tensorflow/core/framework/tensor_shape.cc:44] Check failed: NDIMS == dims() (1 vs. 2)Asking for tensor of 1 dimensions from a tensor of 2 dimensions
Aborted (core dumped)
INFO:root:connecting to TF serving at localhost:9000
INFO:root:server listening on port 8080
INFO:root:connectivity went from None to ChannelConnectivity.IDLE
INFO:root:connectivity went from ChannelConnectivity.IDLE to ChannelConnectivity.CONNECTING
INFO:root:connectivity went from ChannelConnectivity.CONNECTING to ChannelConnectivity.TRANSIENT_FAILURE
INFO:root:connectivity went from ChannelConnectivity.TRANSIENT_FAILURE to ChannelConnectivity.CONNECTING
INFO:root:connectivity went from ChannelConnectivity.CONNECTING to ChannelConnectivity.TRANSIENT_FAILURE
I am not sure what I did wrong, or what I need to change to fix it.
Thank you for your help, in advance.
UPDATE:
environment.json contents
{"container_uri": "us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20220331_1125_RC00",
"tensorflow": "2.4.1",
"struct2tensor": "0.29.0",
"tensorflow-addons": "0.12.1",
"tensorflow-text": "2.4.1"}
A:
This issue is caused by compatibility issues between the images and the models. prediction-server-v1:latest is always backward compatible with existing models without environment.json, but it is not forward compatible with new models that have environment.json. To resolve this issue, the following workarounds can be performed:
If the model artifact contains environment.json (new models), use
us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20210820_1325_RC00
or
europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20210820_1325_RC00
image in container_uri in environment.json.
If there is no environment.json, use europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1:latest; this image is backward compatible with all models without environment.json.
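For illustration, applying the first workaround to the asker's environment.json shown above would change only the container_uri value, e.g.:
{"container_uri": "us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20210820_1325_RC00",
"tensorflow": "2.4.1",
"struct2tensor": "0.29.0",
"tensorflow-addons": "0.12.1",
"tensorflow-text": "2.4.1"}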
A:
So I had the same problem. Even retraining an old model which had previously worked now fails. Taking the suggestion from Shipra Sarkar's comment, I tried using the package
europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20210820_1325_RC00
Using this old package, I no longer get the error.
A:
A more general answer: the container_uri field in your environment.json file should match exactly the image you're pulling from the Google registry. After pulling that exact image, you run it as a container in your local environment. So even though the documentation gets stale sometimes, this is the general logic.
E.g. if your environment.json says
{"container_uri": "us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20221110_1525", "tensorflow": "2.8.0", "struct2tensor": "0.39.0", "tensorflow-addons": "0.16.1"}
then you first pull the image with
docker pull us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20221110_1525
and then run the container with
docker run -v `pwd`/model-7073468363762040832/tf-saved-model/example-model:/models/default -p 8080:8080 -it us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20221110_1525
|
Exported VertexAI TabularModel model_warm_up fails when running docker
|
Good Evening,
I have followed the instructions found here.
https://cloud.google.com/vertex-ai/docs/export/export-model-tabular
I trained the model on the Google Cloud Platform console
Then exported the model per the instructions. However when I run the docker run command I get the following:
docker run -v `pwd`/model-1216534849343455232/tf-saved-model/model:/models/default -p 8080:8080 -it us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1
INFO:root:running uCAIP model server
2022-04-12 02:07:09.118593: I tensorflow_serving/model_servers/server.cc:85] Building single TensorFlow model file config: model_name: default model_base_path: /models/default/predict
2022-04-12 02:07:09.118695: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2022-04-12 02:07:09.118703: I tensorflow_serving/model_servers/server_core.cc:573] (Re-)adding model: default
2022-04-12 02:07:09.219134: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: default version: 1}
2022-04-12 02:07:09.219153: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: default version: 1}
2022-04-12 02:07:09.219159: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: default version: 1}
2022-04-12 02:07:09.219172: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/default/predict/001
2022-04-12 02:07:09.229531: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2022-04-12 02:07:09.241239: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2022-04-12 02:07:09.256079: E external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1575] OpKernel ('op: "DecodeProtoSparse" device_type: "CPU"') for unknown op: DecodeProtoSparse
2022-04-12 02:07:09.277522: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2022-04-12 02:07:09.338428: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: /models/default/predict/001
2022-04-12 02:07:09.371063: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 151887 microseconds.
2022-04-12 02:07:09.373646: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:117] Starting to read warmup data for model at /models/default/predict/001/assets.extra/tf_serving_warmup_requests with model-warmup-options
2022-04-12 02:07:09.573843: F external/org_tensorflow/tensorflow/core/framework/tensor_shape.cc:44] Check failed: NDIMS == dims() (1 vs. 2)Asking for tensor of 1 dimensions from a tensor of 2 dimensions
2022-04-12 02:07:09.573843: F external/org_tensorflow/tensorflow/core/framework/tensor_shape.cc:44] Check failed: NDIMS == dims() (1 vs. 2)Asking for tensor of 1 dimensions from a tensor of 2 dimensions
Aborted (core dumped)
INFO:root:connecting to TF serving at localhost:9000
INFO:root:server listening on port 8080
INFO:root:connectivity went from None to ChannelConnectivity.IDLE
INFO:root:connectivity went from ChannelConnectivity.IDLE to ChannelConnectivity.CONNECTING
INFO:root:connectivity went from ChannelConnectivity.CONNECTING to ChannelConnectivity.TRANSIENT_FAILURE
INFO:root:connectivity went from ChannelConnectivity.TRANSIENT_FAILURE to ChannelConnectivity.CONNECTING
INFO:root:connectivity went from ChannelConnectivity.CONNECTING to ChannelConnectivity.TRANSIENT_FAILURE
I am not sure what I did wrong, or what I need to change to fix it.
Thank you for your help, in advance.
UPDATE:
environment.json contents
{"container_uri": "us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20220331_1125_RC00",
"tensorflow": "2.4.1",
"struct2tensor": "0.29.0",
"tensorflow-addons": "0.12.1",
"tensorflow-text": "2.4.1"}
|
[
"This issue is caused due to compatibility issues of the images with the models. prediction-server-v1:latest is always backward compatible with existing models without environment.json but it is not forward compatible with new models that have environment.json. To resolve this issue, following workarounds can be performed:\n\nIf the model artifact contains environment.json (new models), use\nus-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20210820_1325_RC00\nor\neurope-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20210820_1325_RC00\nimage in conatiner_uri in environment.json.\nIf there is no environment.json, use europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1:latest this image is backward compatible with all models without environment.json.\n\n",
"So I had the same problem. Even retraining an old model which had previously worked, now fail. Taking the commented suggestion of Shipra Sarkar, I tried using package,\neurope-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20210820_1325_RC00\nUsing this old package, I no longer get the error.\n",
"More general answer is, the container_uri field in your environments.json file should match exactly with the image you're pulling from the Google registry. After pulling that exact image, you will run that as a container on your local environment. So even though the documentation gets stale sometimes, this is the general logic.\nE.g. if your environment.json says\n{\"container_uri\": \"us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20221110_1525\", \"tensorflow\": \"2.8.0\", \"struct2tensor\": \"0.39.0\", \"tensorflow-addons\": \"0.16.1\"}\n\nthen you first pull the image with\ndocker pull us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20221110_1525\nand then run the container with\ndocker run -v `pwd`/model-7073468363762040832/tf-saved-model/example-model:/models/default -p 8080:8080 -it us-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server:20221110_1525\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"google_cloud_ml"
] |
stackoverflow_0071837031_google_cloud_ml.txt
|
Q:
Hackerrank: Climbing the Leaderboard
I'm dealing with a HackerRank algorithm problem.
It works for all test cases except 6-7-8-9, which give a timeout error. I have spent so much time at this level. Does anyone see where the problem is?
static long[] climbingLeaderboard(long[] scores, long[] alice)
{
//long[] ranks = new long[scores.Length];
long[] aliceRanks = new long[alice.Length]; // same length with alice length
long lastPoint = 0;
long lastRank;
for (long i = 0; i < alice.Length; i++)
{
lastPoint = scores[0];
lastRank = 1;
bool isIn = false; // if never drop in if statement
for (long j = 0; j < scores.Length; j++)
{
if (lastPoint != scores[j]) //if score is not same, raise the variable
{
lastPoint = scores[j];
lastRank++;
}
if (alice[i] >= scores[j])
{
aliceRanks[i] = lastRank;
isIn = true;
break;
}
aliceRanks[i] = !isIn & j + 1 == scores.Length ? ++lastRank : aliceRanks[i]; //drop in here
}
}
return aliceRanks;
}
A:
This problem can be solved in O(n) time, no binary search needed at all. First, we need to extract the most useful piece of data given in the problem statement, which is,
The existing leaderboard, scores, is in descending order.
Alice's scores, alice, are in ascending order.
An approach that makes this useful is to create two pointers, one at the start of the alice array, let's call it "i", and the second at the end of the scores array, let's call it "j". We then loop until i reaches the end of the alice array, and at each iteration we check three main conditions. We increment i by one if alice[i] is less than scores[j], because the next element of alice may also be less than the current element of scores; we decrement j if alice[i] is greater than scores[j], because we are sure that the next elements of alice are also greater than scores[j], so it can safely be discarded. The last condition is that if alice[i] == scores[j], we only increment i.
I solved this question in C++; my goal here is to make you understand the algorithm, and I think you can easily convert it to C# if you understand it (a hedged C# sketch of the same idea follows after the notes below). If there is any confusion, please tell me. Here is the code:
// Complete the climbingLeaderboard function below.
vector<int> climbingLeaderboard(vector<int> scores, vector<int> alice) {
int j = 1, i = 1;
// this is to remove duplicates from the scores vector
for(i =1; i < scores.size(); i++){
if(scores[i] != scores[i-1]){
scores[j++] = scores[i];
}
}
int size = scores.size();
for(i = 0; i < size-j; i++){
scores.pop_back();
}
vector<int> ranks;
i = 0;
j = scores.size()-1;
while(i < alice.size()){
if(j < 0){
ranks.push_back(1);
i++;
continue;
}
if(alice[i] < scores[j]){
ranks.push_back(j+2);
i++;
} else if(alice[i] > scores[j]){
j--;
} else {
ranks.push_back(j+1);
i++;
}
}
return ranks;
}
I think this may help you too:
vector is like an array list that resizes itself.
push_back() is inserting at the end of the vector.
pop_back() is removing from the end of the vector.
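And since the question is in C#, here is an untested sketch of the same two-pointer idea translated to C# (the method name and the List<int> usage, which needs using System.Collections.Generic, are my own choices):
static int[] ClimbingLeaderboardTwoPointer(int[] scores, int[] alice)
{
    // collapse duplicate scores (scores arrive in descending order)
    var distinct = new List<int>();
    foreach (var s in scores)
        if (distinct.Count == 0 || distinct[distinct.Count - 1] != s)
            distinct.Add(s);

    var ranks = new int[alice.Length];
    int j = distinct.Count - 1;             // points at the lowest distinct score
    for (int i = 0; i < alice.Length; i++)  // alice arrives in ascending order
    {
        while (j >= 0 && alice[i] > distinct[j])
            j--;                            // discard scores alice has beaten
        if (j < 0)
            ranks[i] = 1;                   // alice now tops the board
        else if (alice[i] == distinct[j])
            ranks[i] = j + 1;               // ties share the rank
        else
            ranks[i] = j + 2;               // one place below distinct[j]
    }
    return ranks;
}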
A:
Here is a solution that utilizes BinarySearch. This method returns the index of the searched number in the array, or if the number is not found then it returns a negative number that is the bitwise complement of the index of the next element in the array. Binary search only works in sorted arrays.
public static int[] GetRanks(long[] scores, long[] person)
{
var defaultComparer = Comparer<long>.Default;
var reverseComparer = Comparer<long>.Create((x, y) => -defaultComparer.Compare(x, y));
var distinctOrderedScores = scores.Distinct().OrderBy(i => i, reverseComparer).ToArray();
return person
.Select(i => Array.BinarySearch(distinctOrderedScores, i, reverseComparer))
.Select(pos => (pos >= 0 ? pos : ~pos) + 1)
.ToArray();
}
Usage example:
var scores = new long[] { 100, 100, 50, 40, 40, 20, 10 };
var alice = new long[] { 5, 25, 50, 120 };
var ranks = GetRanks(scores, alice);
Console.WriteLine($"Ranks: {String.Join(", ", ranks)}");
Output:
Ranks: 6, 4, 2, 1
A:
Here is my solution in C#:
public static List<int> climbingLeaderboard(List<int> ranked, List<int> player)
{
List<int> result = new List<int>();
ranked = ranked.Distinct().ToList();
var pLength = player.Count;
var rLength = ranked.Count-1;
int j = rLength;
for (int i = 0; i < pLength; i++)
{
for (; j >= 0; j--)
{
if (player[i] == ranked[j])
{
result.Add(j + 1);
break;
}
else if(player[i] < ranked[j])
{
result.Add(j + 2);
break;
}
else if(player[i] > ranked[j]&&j==0)
{
result.Add(1);
break;
}
}
}
return result;
}
A:
I was bored, so I gave this a go with LINQ and heavily commented it for you.
Given
public static IEnumerable<int> GetRanks(long[] scores, long[] person)
// Convert scores to a tuple
=> scores.Select(s => (scores: s, isPerson: false))
// convert persons score to a tuple and concat
.Concat(person.Select(s => (scores: s, isPerson: true)))
// Group by scores
.GroupBy(x => x.scores)
// order by score
.OrderBy(x => x.Key)
// select into an indexable tuple so we know everyones rank
.Select((groups, i) => (rank: i, groups))
// Filter the person
.Where(x => x.groups.Any(y => y.isPerson))
// select the rank
.Select(x => x.rank);
Usage
static void Main(string[] args)
{
var scores = new long[]{1, 34, 565, 43, 44, 56, 67};
var alice = new long[]{578, 40, 50, 67, 6};
var ranks = GetRanks(scores, alice);
foreach (var rank in ranks)
Console.WriteLine(rank);
}
Output
1
3
6
8
10
A:
Based on the given constraints, a brute-force solution will not be efficient for this problem.
You have to optimize your code, and the key part here is looking up the exact place, which can be done effectively using binary search.
Here is the solution using binary search:-
static int[] climbingLeaderboard(int[] scores, int[] alice) {
int n = scores.length;
int m = alice.length;
int res[] = new int[m];
int[] rank = new int[n];
rank[0] = 1;
for (int i = 1; i < n; i++) {
if (scores[i] == scores[i - 1]) {
rank[i] = rank[i - 1];
} else {
rank[i] = rank[i - 1] + 1;
}
}
for (int i = 0; i < m; i++) {
int aliceScore = alice[i];
if (aliceScore > scores[0]) {
res[i] = 1;
} else if (aliceScore < scores[n - 1]) {
res[i] = rank[n - 1] + 1;
} else {
int index = binarySearch(scores, aliceScore);
res[i] = rank[index];
}
}
return res;
}
private static int binarySearch(int[] a, int key) {
int lo = 0;
int hi = a.length - 1;
while (lo <= hi) {
int mid = lo + (hi - lo) / 2;
if (a[mid] == key) {
return mid;
} else if (a[mid] < key && key < a[mid - 1]) {
return mid;
} else if (a[mid] > key && key >= a[mid + 1]) {
return mid + 1;
} else if (a[mid] < key) {
hi = mid - 1;
} else if (a[mid] > key) {
lo = mid + 1;
}
}
return -1;
}
You can refer to this link for a more detailed video explanation.
A:
static int[] climbingLeaderboard(int[] scores, int[] alice) {
int[] uniqueScores = IntStream.of(scores).distinct().toArray();
int [] rank = new int [alice.length];
int startIndex=0;
for(int j=alice.length-1; j>=0;j--) {
for(int i=startIndex; i<=uniqueScores.length-1;i++) {
if (alice[j]<uniqueScores[uniqueScores.length-1]){
rank[j]=uniqueScores.length+1;
break;
}
else if(alice[j]>=uniqueScores[i]) {
rank[j]=i+1;
startIndex=i;
break;
}
else{continue;}
}
}
return rank;
}
A:
My solution in JavaScript for the Climbing the Leaderboard HackerRank problem. The time complexity can be O(i+j), where i is the length of scores and j is the length of alice. The space complexity is O(1).
// Complete the climbingLeaderboard function below.
function climbingLeaderboard(scores, alice) {
const ans = [];
let count = 0;
// the alice array is arranged in ascending order
let j = alice.length - 1;
for (let i = 0 ; i < scores.length ; i++) {
const score = scores[i];
for (; j >= 0 ; j--) {
if (alice[j] >= score) {
// if higher than score
ans.unshift(count+1);
} else if (i === scores.length - 1) {
// if smallest
ans.unshift(count+2);
} else {
break;
}
}
// actual rank of the score in leaderboard
if (score !== scores[i-1]) {
count++;
}
}
return ans;
}
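A quick usage check with the sample data from the problem statement (scores descending, alice ascending):
console.log(climbingLeaderboard([100, 100, 50, 40, 40, 20, 10], [5, 25, 50, 120]));
// [6, 4, 2, 1]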
A:
Here is my solution
List<int> distinct = null;
List<int> rank = new List<int>();
foreach (int item in player)
{
ranked.Add(item);
ranked.Sort();
ranked.Reverse();
distinct = ranked.Distinct().ToList();
for (int i = 0; i < distinct.Count; i++)
{
if (item == distinct[i])
{
rank.Add(i + 1);
break;
}
}
}
return rank;
This can be modified by removing the inner for loop also
List<int> distinct = null;
List<int> rank = new List<int>();
foreach (int item in player)
{
ranked.Add(item);
ranked.Sort();
ranked.Reverse();
distinct = ranked.Distinct().ToList();
var index = ranked.FindIndex(x => x == item);
rank.Add(index + 1);
}
return rank;
A:
This is my solution in C# for HackerRank Climbing the Leaderboard, based on the C++ answer here.
public static List<int> climbingLeaderboard(List<int> ranked, List<int> player)
{
List<int> _ranked = new List<int>();
_ranked.Add(ranked[0]);
for(int a=1; a < ranked.Count(); a++)
if(_ranked[_ranked.Count()-1] != ranked[a])
_ranked.Add(ranked[a]);
int j = _ranked.Count()-1;
int i = 0;
while(i < player.Count())
{
if(j < 0)
{
player[i] = 1;
i++;
continue;
}
if(player[i] < _ranked[j])
{
player[i] = j+2;
i++;
}
else
if(player[i] == _ranked[j])
{
player[i] = j+1;
i++;
}
else
{
j--;
}
}
return player;
}
|
Hackerrank: Climbing the Leaderboard
|
I'm dealing with a HackerRank algorithm problem.
It works for all test cases except 6-7-8-9, which give a timeout error. I have spent so much time at this level. Does anyone see where the problem is?
static long[] climbingLeaderboard(long[] scores, long[] alice)
{
//long[] ranks = new long[scores.Length];
long[] aliceRanks = new long[alice.Length]; // same length with alice length
long lastPoint = 0;
long lastRank;
for (long i = 0; i < alice.Length; i++)
{
lastPoint = scores[0];
lastRank = 1;
bool isIn = false; // if never drop in if statement
for (long j = 0; j < scores.Length; j++)
{
if (lastPoint != scores[j]) //if score is not same, raise the variable
{
lastPoint = scores[j];
lastRank++;
}
if (alice[i] >= scores[j])
{
aliceRanks[i] = lastRank;
isIn = true;
break;
}
aliceRanks[i] = !isIn & j + 1 == scores.Length ? ++lastRank : aliceRanks[i]; //drop in here
}
}
return aliceRanks;
}
|
[
"This problem can be solved in O(n) time, no binary search needed at all. First, we need to extract the most useful piece of data given in the problem statement, which is,\n\nThe existing leaderboard, scores, is in descending order.\nAlice's scores, alice, are in ascending order.\n\nAn approach that makes this useful is to create two pointers, one at the start of alice array, let's call it \"i\", and the second is at the end of scores array, let's call it \"j\". We then loop until i reaches the end of alice array and at each iteration, we check for three main conditions. We increment i by one if alice[i] is less than scores[j] because the next element of alice may be also less than the current element of scores, or we decrement j if alice[i] is greater than scores[j] because we are sure that the next elements of alice are also greater than those elements discarded in scores. The last condition is that if alice[i] == scores[j], we only increment i.\nI solved this question in C++, my goal here is to make you understand the algorithm, I think you can easily convert it to C# if you understand it. If there are any confusions, please tell me. Here is the code:\n// Complete the climbingLeaderboard function below.\nvector<int> climbingLeaderboard(vector<int> scores, vector<int> alice) {\n int j = 1, i = 1;\n // this is to remove duplicates from the scores vector\n for(i =1; i < scores.size(); i++){\n if(scores[i] != scores[i-1]){\n scores[j++] = scores[i];\n }\n }\n int size = scores.size();\n for(i = 0; i < size-j; i++){\n scores.pop_back();\n }\n vector<int> ranks;\n\n i = 0;\n j = scores.size()-1;\n while(i < alice.size()){\n if(j < 0){\n ranks.push_back(1);\n i++;\n continue;\n }\n if(alice[i] < scores[j]){\n ranks.push_back(j+2);\n i++;\n } else if(alice[i] > scores[j]){\n j--;\n } else {\n ranks.push_back(j+1);\n i++;\n }\n }\n\n return ranks;\n}\n\nI think this may help you too:\n\nvector is like an array list that resizes itself.\npush_back() is inserting at the end of the vector.\npop_back() is removing from the end of the vector.\n\n",
"Here is a solution that utilizes BinarySearch. This method returns the index of the searched number in the array, or if the number is not found then it returns a negative number that is the bitwise complement of the index of the next element in the array. Binary search only works in sorted arrays.\npublic static int[] GetRanks(long[] scores, long[] person)\n{\n var defaultComparer = Comparer<long>.Default;\n var reverseComparer = Comparer<long>.Create((x, y) => -defaultComparer.Compare(x, y));\n var distinctOrderedScores = scores.Distinct().OrderBy(i => i, reverseComparer).ToArray();\n return person\n .Select(i => Array.BinarySearch(distinctOrderedScores, i, reverseComparer))\n .Select(pos => (pos >= 0 ? pos : ~pos) + 1)\n .ToArray();\n}\n\nUsage example:\nvar scores = new long[] { 100, 100, 50, 40, 40, 20, 10 };\nvar alice = new long[] { 5, 25, 50, 120 };\nvar ranks = GetRanks(scores, alice);\nConsole.WriteLine($\"Ranks: {String.Join(\", \", ranks)}\");\n\nOutput:\n\nRanks: 6, 4, 2, 1\n\n",
"Here is my solution with c#\npublic static List<int> climbingLeaderboard(List<int> ranked, List<int> player)\n {\n List<int> result = new List<int>();\n ranked = ranked.Distinct().ToList();\n var pLength = player.Count;\n var rLength = ranked.Count-1;\n \n int j = rLength;\n for (int i = 0; i < pLength; i++)\n { \n for (; j >= 0; j--)\n {\n if (player[i] == ranked[j])\n {\n result.Add(j + 1);\n break;\n }\n else if(player[i] < ranked[j])\n {\n result.Add(j + 2);\n break;\n }\n else if(player[i] > ranked[j]&&j==0)\n {\n result.Add(1);\n break;\n }enter code here\n }\n }\n\n return result;\n }\n\n",
"I was bored so i gave this a go with Linq and heavily commented it for you, \nGiven \npublic static IEnumerable<int> GetRanks(long[] scores, long[] person)\n\n // Convert scores to a tuple\n => scores.Select(s => (scores: s, isPerson: false))\n\n // convert persons score to a tuple and concat\n .Concat(person.Select(s => (scores: s, isPerson: true)))\n\n // Group by scores\n .GroupBy(x => x.scores)\n\n // order by score\n .OrderBy(x => x.Key)\n\n // select into an indexable tuple so we know everyones rank\n .Select((groups, i) => (rank: i, groups))\n\n // Filter the person\n .Where(x => x.groups.Any(y => y.isPerson))\n\n // select the rank\n .Select(x => x.rank);\n\nUsage \nstatic void Main(string[] args)\n{\n var scores = new long[]{1, 34, 565, 43, 44, 56, 67}; \n var alice = new long[]{578, 40, 50, 67, 6};\n\n var ranks = GetRanks(scores, alice);\n\n foreach (var rank in ranks)\n Console.WriteLine(rank);\n\n}\n\nOutput\n1\n3\n6\n8\n10\n\n",
"Based on the given constraint brute-force solution will not be efficient for the problem.\nyou have to optimize your code and the key part here is to look up for exact place which can be effectively done by using binary search.\nHere is the solution using binary search:-\nstatic int[] climbingLeaderboard(int[] scores, int[] alice) {\n int n = scores.length;\n int m = alice.length;\n\n int res[] = new int[m];\n int[] rank = new int[n];\n\n rank[0] = 1;\n\n for (int i = 1; i < n; i++) {\n if (scores[i] == scores[i - 1]) {\n rank[i] = rank[i - 1];\n } else {\n rank[i] = rank[i - 1] + 1;\n }\n }\n\n for (int i = 0; i < m; i++) {\n int aliceScore = alice[i];\n if (aliceScore > scores[0]) {\n res[i] = 1;\n } else if (aliceScore < scores[n - 1]) {\n res[i] = rank[n - 1] + 1;\n } else {\n int index = binarySearch(scores, aliceScore);\n res[i] = rank[index];\n\n }\n }\n return res;\n\n }\n\n private static int binarySearch(int[] a, int key) {\n\n int lo = 0;\n int hi = a.length - 1;\n\n while (lo <= hi) {\n int mid = lo + (hi - lo) / 2;\n if (a[mid] == key) {\n return mid;\n } else if (a[mid] < key && key < a[mid - 1]) {\n return mid;\n } else if (a[mid] > key && key >= a[mid + 1]) {\n return mid + 1;\n } else if (a[mid] < key) {\n hi = mid - 1;\n } else if (a[mid] > key) {\n lo = mid + 1;\n }\n }\n return -1;\n }\n\nYou can refer to this link for a more detailed video explanation.\n",
"static int[] climbingLeaderboard(int[] scores, int[] alice) {\n\n int[] uniqueScores = IntStream.of(scores).distinct().toArray();\n\n int [] rank = new int [alice.length];\n\n int startIndex=0;\n\n for(int j=alice.length-1; j>=0;j--) {\n\n\n for(int i=startIndex; i<=uniqueScores.length-1;i++) {\n\n if (alice[j]<uniqueScores[uniqueScores.length-1]){\n rank[j]=uniqueScores.length+1;\n break;\n }\n\n else if(alice[j]>=uniqueScores[i]) {\n rank[j]=i+1;\n startIndex=i;\n break;\n }\n\n else{continue;}\n\n }\n } \n return rank;\n }\n\n",
"My solution in javascript for climbing the Leaderboard Hackerrank problem. The time complexity of the problem can be O(i+j), i is the length of scores and j is the length of alice. The space complexity is O(1).\n// Complete the climbingLeaderboard function below.\nfunction climbingLeaderboard(scores, alice) {\n const ans = [];\n let count = 0;\n // the alice array is arranged in ascending order\n let j = alice.length - 1;\n for (let i = 0 ; i < scores.length ; i++) {\n const score = scores[i];\n for (; j >= 0 ; j--) {\n if (alice[j] >= score) {\n // if higher than score\n ans.unshift(count+1);\n } else if (i === scores.length - 1) {\n // if smallest\n ans.unshift(count+2);\n } else {\n break;\n }\n }\n \n // actual rank of the score in leaderboard\n if (score !== scores[i-1]) {\n count++;\n }\n }\n return ans;\n}\n\n",
"Here is my solution\n List<int> distinct = null;\n List<int> rank = new List<int>();\n foreach (int item in player)\n {\n ranked.Add(item);\n ranked.Sort();\n ranked.Reverse();\n distinct = ranked.Distinct().ToList();\n for (int i = 0; i < distinct.Count; i++)\n {\n if (item == distinct[i])\n {\n rank.Add(i + 1);\n break;\n }\n }\n \n }\n return rank;\n\nThis can be modified by removing the inner for loop also\n List<int> distinct = null;\n List<int> rank = new List<int>();\n foreach (int item in player)\n {\n ranked.Add(item);\n ranked.Sort();\n ranked.Reverse();\n distinct = ranked.Distinct().ToList();\n var index = ranked.FindIndex(x => x == item);\n rank.Add(index + 1);\n \n }\n return rank;\n \n\n",
"This is my solution in c# for Hackerrank Climbing the Leaderboard based on C++ answer here.\npublic static List<int> climbingLeaderboard(List<int> ranked, List<int> player)\n{ \n List<int> _ranked = new List<int>();\n \n _ranked.Add(ranked[0]);\n for(int a=1; a < ranked.Count(); a++)\n if(_ranked[_ranked.Count()-1] != ranked[a])\n _ranked.Add(ranked[a]);\n \n int j = _ranked.Count()-1;\n int i = 0;\n while(i < player.Count())\n {\n if(j < 0)\n {\n player[i] = 1;\n i++;\n continue;\n }\n \n if(player[i] < _ranked[j])\n {\n player[i] = j+2;\n i++;\n }\n else\n if(player[i] == _ranked[j])\n {\n player[i] = j+1;\n i++;\n }\n else\n {\n j--;\n }\n } \n \n return player;\n}\n\n"
] |
[
4,
2,
2,
0,
0,
0,
0,
0,
0
] |
[
"My solution in Java for climbing the Leaderboard Hackerrank problem.\n // Complete the climbingLeaderboard function below.\nstatic int[] climbingLeaderboard(int[] scores, int[] alice) {\n Arrays.sort(scores);\n HashSet<Integer> set = new HashSet<Integer>();\nint[] ar = new int[alice.length];\nint sc = 0;\n\n\n for(int i=0; i<alice.length; i++){\n sc = 1;\n set.clear();\n for(int j=0; j<scores.length; j++){\n if(alice[i] < scores[j] && !set.contains(scores[j])){\n sc++;\n set.add(scores[j]);\n }\n }\n ar[i] = sc;\n }return ar;\n\n}\n\n"
] |
[
-1
] |
[
"c#"
] |
stackoverflow_0056300009_c#.txt
|
Q:
How to nest a dictionary in another empty dictionary inside a nested for loop?
I created two for loops where the loop for roi in rois is nested in the loop for subject in subjects.
My aim is creating a dictionary called dict_subjects that includes yet another dictionary that, in turn, includes the key-value pair roi: comp.
This is my current code:
rois = ["roi_x", "roi_y", "roi_z" ...] # a long list of rois
subjects = ["Subject1", "Subject2", "Subject3", "Subject4", "Subject5" ... ] # a long list of subjects
dict_subjects = {}
for subject in subjects:
for roi in rois:
data = np.loadtxt(f"/volumes/..../xyz.txt") # Loads data
comp = ... # A computation with a numerical result
dict_subjects[subject] = {roi:comp}
My current coding issue is that the nested for loop creates the dictionary dict_subjects that, paradigmatically for the first two subjects, looks like this:
{'Subject1': {'roi_z': -1.1508099817085136}, 'Subject2': {'roi_z': -0.5746447574557193}}
Hence, the nested for loops only add the last roi from the list of rois. I understand that the problem is a constant overwriting of the last roi by the line dict_subjects[subject] = {roi:comp}.
When changing this line of code to dict_subjects[subject] += [{roi:ple[0]}], I get the following key error KeyError: 'Subject1' since the dictionary dict_subjects is empty.
Question: How is it possible to start with an empty dictionary, namely dict_subjects, yet adding the nested hierarchy of subjects and rois: comp to it?
A:
To fix your code, you need to create the inner dictionary for each subject before you start the loop for roi in rois. You can do this by adding the following line right before that loop:
dict_subjects[subject] = {}
This will create an empty dictionary for each subject in the outer loop, and you can then add key-value pairs to that dictionary inside the inner loop. Your code should now look something like this:
rois = ["roi_x", "roi_y", "roi_z" ...] # a long list of rois
subjects = ["Subject1", "Subject2", "Subject3", "Subject4", "Subject5" ...] # a long list of subjects
dict_subjects = {}
for subject in subjects:
dict_subjects[subject] = {}
for roi in rois:
data = np.loadtxt(f"/volumes/..../xyz.txt") # Loads data
comp = ... # A computation with a numerical result
dict_subjects[subject][roi] = comp
This should create the dictionary you want, with a nested dictionary for each subject containing the key-value pairs of roi: comp for each roi in rois.
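As an aside, a minimal sketch of the same fix using dict.setdefault, which creates the inner dictionary on first access so no separate initialization line is needed (the data loading and computation are elided exactly as in the question):
dict_subjects = {}

for subject in subjects:
    for roi in rois:
        comp = ...  # same computation as in the question
        # setdefault returns the existing inner dict, inserting {} first if absent
        dict_subjects.setdefault(subject, {})[roi] = comp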
|
How to nest a dictionary in another empty dictionary inside a nested for loop?
|
I created two for loops where the loop for roi in rois is nested in the loop for subject in subjects.
My aim is creating a dictionary called dict_subjects that includes yet another dictionary that, in turn, includes the key-value pair roi: comp.
This is my current code:
rois = ["roi_x", "roi_y", "roi_z" ...] # a long list of rois
subjects = ["Subject1", "Subject2", "Subject3", "Subject4", "Subject5" ... ] # a long list of subjects
dict_subjects = {}
for subject in subjects:
for roi in rois:
data = np.loadtxt(f"/volumes/..../xyz.txt") # Loads data
comp = ... # A computation with a numerical result
dict_subjects[subject] = {roi:comp}
My current coding issue is that the nested for loop creates the dictionary dict_subjects that, paradigmatically for the first two subjects, looks like this:
{'Subject1': {'roi_z': -1.1508099817085136}, 'Subject2': {'roi_z': -0.5746447574557193}}
Hence, the nested for loops only add the last roi from the list of rois. I understand that the problem is a constant overwriting of the last roi by the line dict_subjects[subject] = {roi:comp}.
When changing this line of code to dict_subjects[subject] += [{roi:ple[0]}], I get the following key error KeyError: 'Subject1' since the dictionary dict_subjects is empty.
Question: How is it possible to start with an empty dictionary, namely dict_subjects, yet adding the nested hierarchy of subjects and rois: comp to it?
|
[
"To fix your code, you need to create the inner dictionary for each subject before you start the loop for roi in rois. You can do this by adding the following code before the loop\nfor roi in rois:\n\ndict_subjects[subject] = {}\n\nThis will create an empty dictionary for each subject in the outer loop, and you can then add key-value pairs to that dictionary inside the inner loop. Your code should now look something like this:\nrois = [\"roi_x\", \"roi_y\", \"roi_z\" ...] # a long list of rois\nsubjects = [\"Subject1\", \"Subject2\", \"Subject3\", \"Subject4\", \"Subject5\" ...] # a long list of subjects\n\ndict_subjects = {}\n\nfor subject in subjects:\n dict_subjects[subject] = {}\n\n for roi in rois:\n data = np.loadtxt(f\"/volumes/..../xyz.txt\") # Loads data\n comp = ... # A computation with a numerical result\n\n dict_subjects[subject][roi] = comp\n\n\nThis should create the dictionary you want, with a nested dictionary for each subject containing the key-value pairs of roi: comp for each roi in rois.\n"
] |
[
1
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074664242_dictionary_python.txt
|
Q:
Deeplinking/Firebase dynamic link
I'm working with the firebase dynamic link, where a link is generated, and clicking on that link redirects the user to the appropriate page in the app or to appstore if the app is not available.
Just wondering if after installing, click on open button from the store is it possible to navigate the user to expected page from store itself?
A:
Yes, it is. See https://firebase.google.com/docs/dynamic-links. Dynamic links will direct a user to the page you linked them to in the app after installation.
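For completeness, a minimal hedged sketch of picking up the post-install link on the Flutter side with the firebase_dynamic_links plugin. getInitialLink is the documented entry point for the link that launched the app, but exact API shapes vary between plugin versions, so treat this as illustrative:
import 'package:firebase_dynamic_links/firebase_dynamic_links.dart';

Future<Uri?> getPostInstallDeepLink() async {
  // The dynamic link that opened the app (e.g. right after the user
  // tapped "Open" in the store), or null if the app was opened normally.
  final PendingDynamicLinkData? data =
      await FirebaseDynamicLinks.instance.getInitialLink();
  return data?.link; // navigate to the matching page using this Uri
}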
|
Deeplinking/Firebase dynamic link
|
I'm working with the firebase dynamic link, where a link is generated, and clicking on that link redirects the user to the appropriate page in the app or to appstore if the app is not available.
Just wondering if after installing, click on open button from the store is it possible to navigate the user to expected page from store itself?
|
[
"Yes, it is. See https://firebase.google.com/docs/dynamic-links. Dynamic links will direct a user to the page you linked them to in the app after installation.\n"
] |
[
0
] |
[] |
[] |
[
"android",
"deep_linking",
"firebase",
"firebase_dynamic_links",
"ios"
] |
stackoverflow_0070617576_android_deep_linking_firebase_firebase_dynamic_links_ios.txt
|
Q:
When I run my lisp code, it says undefined function NIL
The task was to Create an XLISP program that simulates the stack implementation of push and pop. Remember, the push and pop of a stack happens only on the top of stack (which is different from a queue)
In this case, we assume that the right most part of your list is the top portion.
Push operation description
Ask the user to enter a number
Insert the data into your stack
0 or negative number is not allowed. If so, simply print "Invalid Input"
Pop
Simply removes the top data from your stack.
Assumption:
You have a list called mystack initialized as an empty list.
Example Run:
(setq myStack())
NIL
(push)
*** When I try to run the code it says that undefined function NIL
(setq myStack(nil))
> (push)
> (pop)
; PUSH Function
(defun push ()
(let ((num (read)))
(if (and (numberp num) (> num 0))
(setq myStack (append myStack (list num)))
(print "Invalid Input"))))
; POP Function
(defun pop ()
(if (null myStack)
(print "Stack is empty")
(progn
(setq myStack (butlast myStack))
(print myStack))))
A:
Your problem, first of all, is
(setq myStack (nil))
In Common Lisp, one would write it:
(defparameter *my-stack* nil)
In Common Lisp, there is the equality: NIL == () == '() == 'NIL.
What you want is an empty list, which is one of those.
Remember, an empty list () or '() already contains an implicit NIL
as its last CDR. Proof:
(cdr ()) ;; => NIL
(cdr '()) ;; => NIL
(cdr NIL) ;; => NIL
(cdr 'NIL) ;; => NIL
At least in Common Lisp it is defined like this.
However, in Racket/Scheme this is not defined like this. Therefore, this is not universal to Lisps:
$ racket
Welcome to Racket v6.11.
> (cdr NIL)
; NIL: undefined;
; cannot reference undefined identifier
; [,bt for context]
> (cdr '())
; cdr: contract violation
; expected: pair?
; given: '()
; [,bt for context]
> (cdr ())
; readline-input:4:5: #%app: missing procedure expression;
; probably originally (), which is an illegal empty application
; in: (#%app)
; [,bt for context]
> (cdr 'NIL)
; cdr: contract violation
; expected: pair?
; given: 'NIL
; [,bt for context]
Therefore, in XLISP this must not be ...
However, for all lisps, it will be true that you need '() as an empty list.
So at least, your line must be:
(setq my-stack '())
If you forget the quote ', the interpreter/compiler thinks it is a function call and searches the function name nil and doesn't find it. Therefore your error.
Later, in your pop function, you ask (null my-stack). If you start with '(nil), the problem is that you don't start with an empty list.
Proof:
In Common Lisp:
(null '(nil)) ;;=> nil
In Racket:
(null? '(nil)) ;;=> #f
Why? Because your list then still contains an element whose value is NIL.
In both languages, you can do:
(cdr '(nil)) ;;=> '() <-- Now the list is empty!
I would define your push and pop in Common Lisp:
(defparameter *my-stack* '())
(defun my-push (x)
(setf *my-stack* (cons x *my-stack*)))
(defun my-pop ()
(let ((x (car *my-stack*)))
(setf *my-stack* (cdr *my-stack*))
x))
As you see, I don't append at the end because that is very inefficient (one would have to traverse the entire list each time).
In Lisp, one adds at the start by cons-ing and pops from the start by car-ing and cdr-ing.
Also, your pop doesn't return the pop-ed value.
Actually, your function to behave like the in-built push and pop in Common Lisp, must be:
(defmacro my-push (x lst)
`(setf ,lst (cons ,x ,lst)))
(defmacro my-pop (lst)
`(let ((x (car ,lst)))
(setf ,lst (cdr ,lst))
x))
Usage:
(defparameter *stack* '())
(my-push 1 *stack*) ;;=> (1)
(my-push 2 *stack*) ;;=> (2 1)
(my-push 3 *stack*) ;;=> (3 2 1)
*stack* ;;=> (3 2 1)
(my-pop *stack*) ;;=> 3
(my-pop *stack*) ;;=> 2
(my-pop *stack*) ;;=> 1
(my-pop *stack*) ;;=> NIL
(my-pop *stack*) ;;=> NIL
;; whether the list is empty or the first element's value is NIL
;; you can check by (length *stack*) => if that is 0, the stack is empty
In Racket:
(define-syntax-rule (push x lst)
(set! lst (cons x lst)))
(define-syntax-rule (pop lst)
(let ((x (car lst)))
(set! lst (cdr lst))
x))
Usage:
> (define stack '())
> (push 1 stack)
> (push 2 stack)
> (push 3 stack)
> stack
'(3 2 1)
> (pop stack)
3
> (pop stack)
2
> (pop stack)
1
> (pop stack)
; car: contract violation
; expected: pair?
; given: '()
; [,bt for context]
|
When I run my lisp code, it says undefined function NIL
|
The task was to Create an XLISP program that simulates the stack implementation of push and pop. Remember, the push and pop of a stack happens only on the top of stack (which is different from a queue)
In this case, we assume that the right most part of your list is the top portion.
Push operation description
Ask the user to enter a number
Insert the data into your stack
0 or negative number is not allowed. If so, simply print "Invalid Input"
Pop
Simply removes the top data from your stack.
Assumption:
You have a list called mystack initialized as an empty list.
Example Run:
(setq myStack())
NIL
(push)
*** When I try to run the code it says that undefined function NIL
(setq myStack(nil))
> (push)
> (pop)
; PUSH Function
(defun push ()
(let ((num (read)))
(if (and (numberp num) (> num 0))
(setq myStack (append myStack (list num)))
(print "Invalid Input"))))
; POP Function
(defun pop ()
(if (null myStack)
(print "Stack is empty")
(progn
(setq myStack (butlast myStack))
(print myStack))))
|
[
"Your Problem first of all is\n(setq myStack (nil))\n\nIn Common Lisp, one would write it:\n(defparameter *my-stack* nil)\n\nIn Common Lisp, there is the equality: NIL == () == '() == 'NIL.\nWhat you want is an empty list, which is one of those.\nRemember, an empty list () or '() already contains an implicit NIL\nas its last CDR. Proof:\n(cdr ()) ;; => NIL\n(cdr '()) ;; => NIL\n(cdr NIL) ;; => NIL\n(cdr 'NIL) ;; => NIL\n\nAt least in Common Lisp it is defined like this.\nHowever, in Racket/Scheme this is not defined like this. Therefore, this is not universal to Lisps:\n$ racket\nWelcome to Racket v6.11.\n\n> (cdr NIL)\n; NIL: undefined;\n; cannot reference undefined identifier\n; [,bt for context]\n> (cdr '())\n; cdr: contract violation\n; expected: pair?\n; given: '()\n; [,bt for context]\n> (cdr ())\n; readline-input:4:5: #%app: missing procedure expression;\n; probably originally (), which is an illegal empty application\n; in: (#%app)\n; [,bt for context]\n> (cdr 'NIL)\n; cdr: contract violation\n; expected: pair?\n; given: 'NIL\n; [,bt for context]\n\n\nTherefore, in XLISP this must not be ...\nHowever, for all lisps, it will be true that you need '() as an empty list.\nSo at least, your line must be:\n(setq my-stack '())\n\nIf you forget the quote ', the interpreter/compiler thinks it is a function call and searches the function name nil and doesn't find it. Therefore your error.\nLater you ask in your pop function: (null my-stack). If you start with '(nil), the problem is you don't start with an empty list.\nProof:\nIn Common Lisp:\n(null '(nil)) ;;=> nil\n\nIn Racket:\n(null? '(nil)) ;;=> #f\n\nWhy? because your list then contains still and element which has as value NIL.\nIn both languages, you can do:\n(cdr '(nil)) ;;=> '() <-- Now the list is empty!\n\nI would define your push und pop in Common Lisp:\n(defparameter *my-stack* '())\n\n(defun my-push (x)\n (setf *my-stack* (cons x *my-stack*))\n\n(defun my-pop ()\n (let ((x (car *my-stack*)))\n (setf *my-stack* (cdr *my-stack*))\n x))\n\nAs you see I don't append it at the end because this is very inefficient (otherwise one has to traverse the entire list).\nIn Lisp, one adds at the start by cons-ing. And pops from the start by car-ing and cdr-ing.\nAlso, your pop doesn't return the pop-ed value.\nActually, your function to behave like the in-built push and pop in Common Lisp, must be:\n(defmacro my-push (x lst)\n `(setf ,lst (cons ,x ,lst)))\n\n(defmacro my-pop (lst)\n `(let ((x (car ,lst)))\n (setf ,lst (cdr ,lst))\n x))\n\nUsage:\n(defparameter *stack* '())\n\n(my-push 1 *stack*) ;;=> (1)\n(my-push 2 *stack*) ;;=> (2 1)\n(my-push 3 *stack*) ;;=> (3 2 1)\n\n*stack* ;;=> (3 2 1)\n\n(my-pop *stack*) ;;=> 3\n(my-pop *stack*) ;;=> 2\n(my-pop *stack*) ;;=> 1\n(my-pop *stack*) ;;=> NIL\n(my-pop *stack*) ;;=> NIL\n\n;; whether the list is empty or the first element's value is NIL\n;; you can check by (length *stack*) => if that is 0, the stack is empty\n\nIn racket:\n(define-syntax-rule (push x lst) \n (set! lst (cons x lst)))\n\n(define-syntax-rule (pop lst) \n (let ((x (car lst))) \n (set! lst (cdr lst)) \n x))\n\nUsage:\n> (define stack '())\n> (push 1 stack)\n> (push 2 stack)\n> (push 3 stack)\n> stack\n'(3 2 1)\n> (pop stack)\n3\n> (pop stack)\n2\n> (pop stack)\n1\n> (pop stack)\n; car: contract violation\n; expected: pair?\n; given: '()\n; [,bt for context]\n\n"
] |
[
0
] |
[] |
[] |
[
"lisp"
] |
stackoverflow_0074655036_lisp.txt
|
Q:
How to get unique device id in flutter?
In Android we have, Settings.Secure.ANDROID_ID. I do not know the iOS equivalent.
Is there a flutter plugin or a way to get a unique device id for both Android and IOS in flutter?
A:
Null safe code
Use the device_info_plus plugin developed by the Flutter community. This is how you can get IDs on both platforms.
In your pubspec.yaml file add this:
dependencies:
device_info_plus: ^3.2.3
Create a method:
Future<String?> _getId() async {
var deviceInfo = DeviceInfoPlugin();
if (Platform.isIOS) { // import 'dart:io'
var iosDeviceInfo = await deviceInfo.iosInfo;
return iosDeviceInfo.identifierForVendor; // unique ID on iOS
} else if(Platform.isAndroid) {
var androidDeviceInfo = await deviceInfo.androidInfo;
return androidDeviceInfo.androidId; // unique ID on Android
}
}
Usage:
String? deviceId = await _getId();
A:
There is a plugin called device_info. You can get it here.
Check the official example here
static Future<List<String>> getDeviceDetails() async {
String deviceName;
String deviceVersion;
String identifier;
final DeviceInfoPlugin deviceInfoPlugin = new DeviceInfoPlugin();
try {
if (Platform.isAndroid) {
var build = await deviceInfoPlugin.androidInfo;
deviceName = build.model;
deviceVersion = build.version.toString();
identifier = build.androidId; //UUID for Android
} else if (Platform.isIOS) {
var data = await deviceInfoPlugin.iosInfo;
deviceName = data.name;
deviceVersion = data.systemVersion;
identifier = data.identifierForVendor; //UUID for iOS
}
} on PlatformException {
print('Failed to get platform version');
}
//if (!mounted) return;
return [deviceName, deviceVersion, identifier];
}
You can store this UUID in the Keychain. This way you can set a unique ID for your device.
UPDATE
device_info is now device_info_plus
A:
I just published a plugin to provide a solution to your problem.
It uses Settings.Secure.ANDROID_ID for Android and relies on identifierForVendor and the keychain for iOS to make the behaviour equivalent to Android's.
Here's the link.
A:
Update 1/3/2021: The recommended way is now the extended community plugin called device_info_plus. It supports more platforms than device_info and aims to support all that are supported by flutter. Here is an example usage:
import 'package:flutter/foundation.dart' show kIsWeb;
import 'package:device_info_plus/device_info_plus.dart';
import 'dart:io';
Future<String> getDeviceIdentifier() async {
String deviceIdentifier = "unknown";
DeviceInfoPlugin deviceInfo = DeviceInfoPlugin();
if (Platform.isAndroid) {
AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;
deviceIdentifier = androidInfo.androidId;
} else if (Platform.isIOS) {
IosDeviceInfo iosInfo = await deviceInfo.iosInfo;
deviceIdentifier = iosInfo.identifierForVendor;
} else if (kIsWeb) {
// The web doesn't have a device UID, so use a combination fingerprint as an example
WebBrowserInfo webInfo = await deviceInfo.webBrowserInfo;
deviceIdentifier = webInfo.vendor + webInfo.userAgent + webInfo.hardwareConcurrency.toString();
} else if (Platform.isLinux) {
LinuxDeviceInfo linuxInfo = await deviceInfo.linuxInfo;
deviceIdentifier = linuxInfo.machineId;
}
return deviceIdentifier;
}
A:
Use device_id plugin
Add in your following code in your .yaml file.
device_id: ^0.1.3
Add import in your class
import 'package:device_id/device_id.dart';
Now get device id from:
String deviceid = await DeviceId.getID;
A:
I released a new Flutter plugin, client_information, that might help. It provides a simple way to get some basic device information from your application's user.
add to pubspec.yaml
dependencies:
...
client_information: ^1.0.1
import to your project
import 'package:client_information/client_information.dart';
then you can get device ID like this
/// Support on iOS, Android and web project
Future<String> getDeviceId() async {
return (await ClientInformation.fetch()).deviceId;
}
A:
Latest:
The plugin device_info has been deprecated and replaced by
device_info_plus
Example:
dependencies:
device_info_plus: ^2.1.0
How to use:
import 'package:device_info_plus/device_info_plus.dart';
DeviceInfoPlugin deviceInfo = DeviceInfoPlugin();
AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;
print('Running on ${androidInfo.model}'); // e.g. "Moto G (4)"
IosDeviceInfo iosInfo = await deviceInfo.iosInfo;
print('Running on ${iosInfo.utsname.machine}'); // e.g. "iPod7,1"
WebBrowserInfo webBrowserInfo = await deviceInfo.webBrowserInfo;
print('Running on ${webBrowserInfo.userAgent}'); // e.g. "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
You can check here full example:
For Unique ID:
You can use following code to get Unique ID:
if (kIsWeb) {
WebBrowserInfo webInfo = await deviceInfo.webBrowserInfo;
deviceIdentifier = webInfo.vendor +
webInfo.userAgent +
webInfo.hardwareConcurrency.toString();
} else {
if (Platform.isAndroid) {
AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;
deviceIdentifier = androidInfo.androidId;
} else if (Platform.isIOS) {
IosDeviceInfo iosInfo = await deviceInfo.iosInfo;
deviceIdentifier = iosInfo.identifierForVendor;
} else if (Platform.isLinux) {
LinuxDeviceInfo linuxInfo = await deviceInfo.linuxInfo;
deviceIdentifier = linuxInfo.machineId;
}
}
A:
Add the following code in your .yaml file.
device_info_plus: ^1.0.0
I used the following approach to get the device info; it is supported on all platforms (i.e., Android, iOS, and Web).
import 'dart:io';
import 'package:device_info_plus/device_info_plus.dart';
import 'package:flutter/foundation.dart' show kIsWeb;
Future<String> getDeviceIdentifier() async {
String deviceIdentifier = "unknown";
DeviceInfoPlugin deviceInfo = DeviceInfoPlugin();
if (kIsWeb) {
WebBrowserInfo webInfo = await deviceInfo.webBrowserInfo;
deviceIdentifier = webInfo.vendor +
webInfo.userAgent +
webInfo.hardwareConcurrency.toString();
} else {
if (Platform.isAndroid) {
AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;
deviceIdentifier = androidInfo.androidId;
} else if (Platform.isIOS) {
IosDeviceInfo iosInfo = await deviceInfo.iosInfo;
deviceIdentifier = iosInfo.identifierForVendor;
} else if (Platform.isLinux) {
LinuxDeviceInfo linuxInfo = await deviceInfo.linuxInfo;
deviceIdentifier = linuxInfo.machineId;
}
}
return deviceIdentifier;
}
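Usage (an added example, matching the pattern in the other answers):
String deviceIdentifier = await getDeviceIdentifier();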
A:
Use the device_info_plus package developed by the Flutter community. This is how you can get IDs on both platforms.
In your pubspec.yaml file add this:
dependencies:
device_info_plus: ^3.2.3
Create a method:
Future<String> getUniqueDeviceId() async {
String uniqueDeviceId = '';
var deviceInfo = DeviceInfoPlugin();
if (Platform.isIOS) { // import 'dart:io'
var iosDeviceInfo = await deviceInfo.iosInfo;
uniqueDeviceId = '${iosDeviceInfo.name}:${iosDeviceInfo.identifierForVendor}'; // unique ID on iOS
} else if(Platform.isAndroid) {
var androidDeviceInfo = await deviceInfo.androidInfo;
uniqueDeviceId = '${androidDeviceInfo.name}:${androidDeviceInfo.id}' ; // unique ID on Android
}
return uniqueDeviceId;
}
Usage:
String deviceId = await getUniqueDeviceId();
Output:
M2102J20SG::SKQ1.211006.001
Note:
Do not use androidDeviceInfo.androidId. This would change when your MAC address changes. Mobile devices above Android OS 10/11 will generate a randomized MAC. This feature is enabled by default unless disabled manually, and it would cause the androidId to change when switching networks. You can confirm this yourself by changing androidDeviceInfo.id to androidDeviceInfo.androidId above.
You can probably get away with using only androidDeviceInfo.name, as it would never change.
androidDeviceInfo.id can also change if the OS is updated, as it is an Android OS version string.
androidDeviceInfo.androidId should only be used if the device uses a fixed MAC address, as mentioned in point 1. Otherwise, either use *.name only or androidDeviceInfo.id alongside *.name.
A:
androidId has been removed since v4.1.0. Check the changelog.
The android_id package is recommended to get the correct androidId.
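A minimal sketch of reading the ID via that package, assuming its AndroidId().getId() API — check the package docs for the exact surface of the version you install:
import 'package:android_id/android_id.dart';

Future<String?> getAndroidId() async {
  // Reads Settings.Secure.ANDROID_ID on Android devices
  const androidIdPlugin = AndroidId();
  return androidIdPlugin.getId();
}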
A:
As of December 2022, the latest status on getting a device ID:
device_info_plus doesn't give a unique device ID anymore, so I started to use the platform_device_id package.
I tested it on Android and it worked the same as device_info did previously, providing the same ID value. It also has a simple usage:
String deviceId = await PlatformDeviceId.getDeviceId;
This package uses the updated Android embedding and also has null safety support.
|
How to get unique device id in flutter?
|
In Android we have, Settings.Secure.ANDROID_ID. I do not know the iOS equivalent.
Is there a flutter plugin or a way to get a unique device id for both Android and IOS in flutter?
|
[
"Null safe code\nUse device_info_plus plugin developed by Flutter community. This is how you can get IDs on both platform.\nIn your pubspec.yaml file add this:\ndependencies:\n device_info_plus: ^3.2.3\n\nCreate a method:\nFuture<String?> _getId() async {\n var deviceInfo = DeviceInfoPlugin();\n if (Platform.isIOS) { // import 'dart:io'\n var iosDeviceInfo = await deviceInfo.iosInfo;\n return iosDeviceInfo.identifierForVendor; // unique ID on iOS\n } else if(Platform.isAndroid) {\n var androidDeviceInfo = await deviceInfo.androidInfo;\n return androidDeviceInfo.androidId; // unique ID on Android\n }\n}\n\n\nUsage:\nString? deviceId = await _getId();\n\n",
"There is a plugin called device_info. You can get it here.\nCheck the official example here\n static Future<List<String>> getDeviceDetails() async {\n String deviceName;\n String deviceVersion;\n String identifier;\n final DeviceInfoPlugin deviceInfoPlugin = new DeviceInfoPlugin();\n try {\n if (Platform.isAndroid) {\n var build = await deviceInfoPlugin.androidInfo;\n deviceName = build.model;\n deviceVersion = build.version.toString();\n identifier = build.androidId; //UUID for Android\n } else if (Platform.isIOS) {\n var data = await deviceInfoPlugin.iosInfo;\n deviceName = data.name;\n deviceVersion = data.systemVersion;\n identifier = data.identifierForVendor; //UUID for iOS\n }\n } on PlatformException {\n print('Failed to get platform version');\n }\n\n//if (!mounted) return;\nreturn [deviceName, deviceVersion, identifier];\n}\n\nYou can store this UUID in the Keychain. This way you can set an unique ID for your device.\nUPDATE\ndevice_info is now device_info_plus\n",
"I just published a plugin to provide a solution to your problem.\nIt uses Settings.Secure.ANDROID_ID for Android and relies on identifierForVendor and the keychain for iOS to make the behaviour equivalent to Android's.\nHere's the link.\n",
"Update 1/3/2021: The recommended way is now the extended community plugin called device_info_plus. It supports more platforms than device_info and aims to support all that are supported by flutter. Here is an example usage:\nimport 'package:flutter/foundation.dart' show kIsWeb;\nimport 'package:device_info_plus/device_info_plus.dart';\nimport 'dart:io';\n\nFuture<String> getDeviceIdentifier() async {\n String deviceIdentifier = \"unknown\";\n DeviceInfoPlugin deviceInfo = DeviceInfoPlugin();\n\n if (Platform.isAndroid) {\n AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;\n deviceIdentifier = androidInfo.androidId;\n } else if (Platform.isIOS) {\n IosDeviceInfo iosInfo = await deviceInfo.iosInfo;\n deviceIdentifier = iosInfo.identifierForVendor;\n } else if (kIsWeb) {\n // The web doesnt have a device UID, so use a combination fingerprint as an example\n WebBrowserInfo webInfo = await deviceInfo.webBrowserInfo;\n deviceIdentifier = webInfo.vendor + webInfo.userAgent + webInfo.hardwareConcurrency.toString();\n } else if (Platform.isLinux) {\n LinuxDeviceInfo linuxInfo = await deviceInfo.linuxInfo;\n deviceIdentifier = linuxInfo.machineId;\n } \n return deviceIdentifier;\n}\n\n",
"Use device_id plugin\n\nAdd in your following code in your .yaml file.\ndevice_id: ^0.1.3\n\n\nAdd import in your class\nimport 'package:device_id/device_id.dart';\n\n\nNow get device id from:\nString deviceid = await DeviceId.getID;\n\n\n\n",
"I release a new flutter plugin client_information might help. It provide a simple way to get some basic device information from your application user.\n\nadd to pubspec.yaml\n\n dependencies:\n ...\n client_information: ^1.0.1\n\n\nimport to your project\n\nimport 'package:client_information/client_information.dart';\n\n\nthen you can get device ID like this\n\n/// Support on iOS, Android and web project\nFuture<String> getDeviceId() async {\n return (await ClientInformation.fetch()).deviceId;\n}\n\n\n",
"Latest:\nThe plugin device_info has given deprecation notice and replaced by\ndevice_info_plus\nExample:\ndependencies:\n device_info_plus: ^2.1.0\n\nHow to use:\nimport 'package:device_info_plus/device_info_plus.dart';\n\nDeviceInfoPlugin deviceInfo = DeviceInfoPlugin();\n\nAndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;\nprint('Running on ${androidInfo.model}'); // e.g. \"Moto G (4)\"\n\nIosDeviceInfo iosInfo = await deviceInfo.iosInfo;\nprint('Running on ${iosInfo.utsname.machine}'); // e.g. \"iPod7,1\"\n\nWebBrowserInfo webBrowserInfo = await deviceInfo.webBrowserInfo;\nprint('Running on ${webBrowserInfo.userAgent}'); // e.g. \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0\"\n\nYou can check here full example:\nFor Unique ID:\nYou can use following code to get Unique ID:\nif (kIsWeb) {\n WebBrowserInfo webInfo = await deviceInfo.webBrowserInfo;\n deviceIdentifier = webInfo.vendor +\n webInfo.userAgent +\n webInfo.hardwareConcurrency.toString();\n} else {\n if (Platform.isAndroid) {\n AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;\n deviceIdentifier = androidInfo.androidId;\n } else if (Platform.isIOS) {\n IosDeviceInfo iosInfo = await deviceInfo.iosInfo;\n deviceIdentifier = iosInfo.identifierForVendor;\n } else if (Platform.isLinux) {\n LinuxDeviceInfo linuxInfo = await deviceInfo.linuxInfo;\n deviceIdentifier = linuxInfo.machineId;\n }\n} \n\n",
"Add the following code in your .yaml file.\ndevice_info_plus: ^1.0.0\n\nI used the following approach to get the device info that support in all platforms (i.e.) Android, IOS and Web.\nimport 'dart:io'; \nimport 'package:device_info_plus/device_info_plus.dart'; \nimport 'package:flutter/foundation.dart' show kIsWeb; \n\nFuture<String> getDeviceIdentifier() async { \n\n String deviceIdentifier = \"unknown\"; \n DeviceInfoPlugin deviceInfo = DeviceInfoPlugin(); \n\n if (kIsWeb) {\n WebBrowserInfo webInfo = await deviceInfo.webBrowserInfo;\n deviceIdentifier = webInfo.vendor +\n webInfo.userAgent +\n webInfo.hardwareConcurrency.toString();\n } else {\n if (Platform.isAndroid) {\n AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;\n deviceIdentifier = androidInfo.androidId;\n } else if (Platform.isIOS) {\n IosDeviceInfo iosInfo = await deviceInfo.iosInfo;\n deviceIdentifier = iosInfo.identifierForVendor;\n } else if (Platform.isLinux) {\n LinuxDeviceInfo linuxInfo = await deviceInfo.linuxInfo;\n deviceIdentifier = linuxInfo.machineId;\n }\n }\n return deviceIdentifier;\n}\n\n",
"Use device_info_plus package developed by Flutter community. This is how you can get IDs on both platform.\nIn your pubspec.yaml file add this:\ndependencies:\n device_info_plus: ^3.2.3\n\nCreate a method:\nFuture<String> getUniqueDeviceId() async {\n String uniqueDeviceId = '';\n\n var deviceInfo = DeviceInfoPlugin();\n\n if (Platform.isIOS) { // import 'dart:io'\n var iosDeviceInfo = await deviceInfo.iosInfo;\n uniqueDeviceId = '${iosDeviceInfo.name}:${iosDeviceInfo.identifierForVendor}'; // unique ID on iOS\n } else if(Platform.isAndroid) {\n var androidDeviceInfo = await deviceInfo.androidInfo;\n uniqueDeviceId = '${androidDeviceInfo.name}:${androidDeviceInfo.id}' ; // unique ID on Android\n }\n \n return uniqueDeviceId;\n\n}\n\nUsage:\nString deviceId = await getUniqueDeviceId();\n\nOutput:\nM2102J20SG::SKQ1.211006.001\n\nNote:\n\nDo not use androidDeviceInfo.androidId. This would change when your mac address changes. Mobile devices above Android OS 10/11 will generate a randomized MAC. This feature is enabled by default unless disabled manually. This would cause the androidId to change when switiching networks. You can confirm this by yourself by changing androidDeviceInfo.id to androidDeviceInfo.androidId above.\n\nyou can probably get away with using only androidDeviceInfo.name as it would not change ever.\n\nandroidDeviceInfo.id can also change if OS is updated as it is an android os version.\n\nandroidDeviceInfo.androidId should only be used if device uses fix mac address as mentioned in point 1. Otherwise, either use *.name only or androidDeviceInfo.id alongside with *.name.\n\n\n",
"androidID is removed since v4.1.0. Check the changelog.\nandroid_id package is recommanded to get the correct androidId.\n",
"As of 2022, December, last status about getting device id :\ndevice_info_plus don't give unique device id anymore. So I started to use platform_device_id package.\nI tested it on Android and it worked as same as device_info previously and provide the same id value. It also has a simple usage :\n String deviceId = await PlatformDeviceId.getDeviceId;\n\nThis package uses updated android embedding version and also has null safety support.\n"
] |
[
113,
93,
12,
11,
7,
5,
4,
2,
2,
2,
1
] |
[
"If you're serving ads you can use ASIdentifierManager. You should only use it for ads. There is no general UDID mechanism provided by the OS on iOS, for privacy reasons.\nIf you're using firebase_auth plugin you could signInAnonymously and then use the id of the FirebaseUser. This will give you an identifier that is specific to your Firebase app.\n"
] |
[
-2
] |
[
"flutter",
"flutter_dependencies"
] |
stackoverflow_0045031499_flutter_flutter_dependencies.txt
|
Q:
What is the result of this program and what describes the actions being performed of this LC-3 command?
I enter the first paragraph of code on lc-3 tutor, clear the editor and paste the second block of machine code. When I run the code I get no output, and I'm not sure what the instructions are doing.
Pasting this and loading into the simulator :
.ORIG x4000
DATA .STRINGZ "%2Jw6<m#P1"
.END
Clearing the above code in the editor and pasting the second block to load into the simulator:
.ORIG x3000
AND R5, R5, #0 ; R5 result
ADD R1, R5, #1
LD R2, PTR ; R2 ptr to data
LOOP LDR R3, R2, #0 ; R3 current character
BRz DONE
AND R4, R3, R1
BRz NEXT
CHK ADD R5, R5, R3
NEXT ADD R2, R2, #2
BRnzp LOOP
DONE HALT
PTR .FILL x4000
.END
A:
This program traverses the string, visiting and skipping every other character in the string.
(It has a flaw in that if the string is an odd number of characters the loop skips the nul terminator and runs off the end of the string, relying on there being more zeros after the end of the nul-terminated string instead of the usual single nul-termination.)
If the numeric ASCII value of a character visited is odd, then it sums that numeric value.
The result is the sum of every other character whose ASCII value is odd.
The value it computes depends entirely on the input; it might even be easier to run the program than to hand-compute it, though that shouldn't be too hard to do.
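For the given data block, the same logic can be sketched on the host side (a minimal C++ illustration of the loop above, not LC-3 code, and my reading of the assembly rather than anything from the original post):
// Host-side simulation: LC-3 .STRINGZ stores one character per 16-bit word,
// so "ADD R2, R2, #2" advances two words, i.e. skips every other character.
#include <cstdio>

int main() {
    const char data[] = "%2Jw6<m#P1";  // the .STRINGZ at x4000
    int sum = 0;                       // R5
    for (int i = 0; data[i] != '\0'; i += 2) {  // LDR / BRz DONE / NEXT
        if (data[i] & 1)               // AND R4, R3, R1 tests the low bit
            sum += data[i];            // CHK: ADD R5, R5, R3
    }
    printf("R5 = %d\n", sum);          // 37 ('%') + 109 ('m') = 146 here
    return 0;
}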
|
What is the result of this program and what describes the actions being performed of this LC-3 command?
|
I enter the first paragraph of code on lc-3 tutor, clear the editor and paste the second block of machine code. When I run the code I get no output, and I'm not sure what the instructions are doing.
Pasting this and loading into the simulator :
.ORIG x4000
DATA .STRINGZ "%2Jw6<m#P1"
.END
Clearing the above code in the editor and pasting the second block to load into the simulator:
.ORIG x3000
AND R5, R5, #0 ; R5 result
ADD R1, R5, #1
LD R2, PTR ; R2 ptr to data
LOOP LDR R3, R2, #0 ; R3 current character
BRz DONE
AND R4, R3, R1
BRz NEXT
CHK ADD R5, R5, R3
NEXT ADD R2, R2, #2
BRnzp LOOP
DONE HALT
PTR .FILL x4000
.END
|
[
"This program traverses the string, visiting and skipping every other character in the string.\n(It has a flaw in that if the string is an even number of characters it will run off the end of the string, relying on double nul-termination instead of the usual single nul-termination, i.e. there being more zeros after the end of the nul-terminated string.)\nIf the numeric ascii value of a character visited is odd then it sums that numeric value.\nThe result is the sum — of every other character that is odd.\nTo tell what the value it computes, depends entirely on the input, and might even be easier to run the program rather than hand computing it, though that shouldn't be too hard to do.\n"
] |
[
0
] |
[] |
[] |
[
"lc3"
] |
stackoverflow_0074663670_lc3.txt
|
Q:
Convert type `(env) => (args) => TaskEither` to ReaderTaskEither
In my SPA, I have a function that needs to:
Create an object (e.g. a "tag" for a user)
Post it to our API
type UserId = string;
type User = {id: UserId};
type TagType = "NEED_HELP" | "NEED_STORAGE"
type Tag = {
id: string;
type: TagType;
userId: UserId;
}
type TagDraft = Omit<Tag, "id">
// ----
const createTagDraft = ({tagType, userId} : {tagType: TagType, userId: UserId}): TagDraft => ({
type: tagType, userId: userId
})
const postTag = (tagDraft) => pipe(
TE.tryCatch(
() => axios.post('https://myTagEndpoint', tagDraft),
(reason) => new Error(`${reason}`),
),
TE.map((resp) => resp.data),
)
I can combine the entire task with
const createTagTask = flow(createTagDraft, postTag)
Now I would like to also clear some client cache that I have for Tags. Since the cache object has nothing to do with the arguments needed for the tag, I would like to provide it separately. I do:
function createTagAndCleanTask(queryCache) {
return flow(
createTagDraft,
postTag,
TE.chainFirstTaskK((flag) =>
T.of(
queryCache.clean("tagCache")
)
)
)
}
// which I call like this
createTagAndCleanTask(queryCache)({tagType: "NEED_HELP", userId: bob.id})
This works, but I wonder if this is not exactly what I could use ReaderTaskEither for?
Idea 1: I tried to use RTE.fromTaskEither on createTagTask, but createTagTask is a function that returns a TaskEither, not a TaskEither itself...
Idea 2: I tried to use RTE.fromTaskEither as a third step in the flow after postTag, but I don't know how to provide proper typing then and make it aware of an env config object.
My understanding of this article is that I should aim at something like (args) => (env) => body instead of (env) => (args) => body for each function. But I cannot find a way to invert arguments that are provided directly via flow.
Is there a way I can rewrite this code so that env objects like queryCache can be provided in a cleaner way?
A:
Reader is (env) => A, so the deps need to come "last" in the list of arguments. You need to think of your function as (args) => (env) => result instead of (env) => (args) => result as you correctly identified. There is a flip function in fp-ts that can be used to invert the arguments afterwards to simplify passing the env in first (before the args).
Here's an example of what this might look like in your case
// I just made up a type for the query cache
type QueryCache = { clean: (queryKey: string) => void }
// simple function which deals with cleaning the cache as you want
// Note that its type is equivalent to `Reader<{ queryCache: QueryCache }, void>`
const cleanCache = (deps: { queryCache: QueryCache }) => {
deps.queryCache.clean("tagCache");
}
const createTagTask = flow(
createTagDraft,
postTag,
// Convert from TaskEither -> ReaderTaskEither here to allow us to compose the below
RTE.fromTaskEither,
// Convert the cleanCache Reader into a ReaderTaskEither
RTE.chain(() => RTE.fromReader(cleanCache))
)
// Example of how you can partially apply the dependencies with flip
declare const deps: { queryCache: QueryCache }
const _createTagTask = flip(createTagTask)(deps);
// And then call the partially applied fn with args as normal
_createTagTask({ tagType: {...}, userId: {...} })
I think you can also replace RTE.chain(() => RTE.fromReader(cleanCache)) with RTE.chainReaderKW(() => cleanCache)
A:
Yes, you can use the asks function from fp-ts/lib/ReaderTaskEither to read the queryCache object from the environment in a cleaner way. Here's an example:
import * as RTE from 'fp-ts/lib/ReaderTaskEither';
import { pipe } from 'fp-ts/lib/function';

type Env = { queryCache: { clean: (key: string) => void } };

const createTagDraft = ({ tagType, userId }: { tagType: TagType, userId: UserId }): TagDraft => ({
  type: tagType, userId: userId
})

const createTagAndClean = (args: { tagType: TagType, userId: UserId }): RTE.ReaderTaskEither<Env, Error, Tag> =>
  pipe(
    RTE.fromTaskEither(postTag(createTagDraft(args))), // reuse the postTag from the question
    RTE.chainFirstW(() =>
      RTE.asks((env: Env) => env.queryCache.clean("tagCache"))
    )
  )

// Usage example:
const rte = createTagAndClean({ tagType: "NEED_HELP", userId: "bob" });
rte({ queryCache: someQueryCacheObject })().then(result => {
  // ...
});
The main difference here is that createTagAndClean no longer takes queryCache as an argument. Instead, it uses the asks function to access the queryCache object from the environment that is provided when you run the ReaderTaskEither.
This means that when you call createTagAndClean, you only need to provide the tag arguments, and not the queryCache object. You can then pass the queryCache object when you run the returned ReaderTaskEither, along with any other dependencies that your computation might have.
|
Convert type `(env) => (args) => TaskEither` to ReaderTaskEither
|
In my SPA, I have a function that needs to:
Create an object (e.g. a "tag" for a user)
Post it to our API
type UserId = string;
type User = {id: UserId};
type TagType = "NEED_HELP" | "NEED_STORAGE"
type Tag = {
id: string;
type: TagType;
userId: UserId;
}
type TagDraft = Omit<Tag, "id">
// ----
const createTagDraft = ({tagType, userId} : {tagType: TagType, userId: UserId}): TagDraft => ({
type: tagType, userId: userId
})
const postTag = (tagDraft) => pipe(
TE.tryCatch(
() => axios.post('https://myTagEndpoint', tagDraft),
(reason) => new Error(`${reason}`),
),
TE.map((resp) => resp.data),
)
I can combine the entire task with
const createTagTask = flow(createTagDraft, postTag)
Now I would like to also clear some client cache that I have for Tags. Since the cache object has nothing to do with the arguments needed for the tag, I would like to provide it separately. I do:
function createTagAndCleanTask(queryCache) {
return flow(
createTagDraft,
postTag,
TE.chainFirstTaskK((flag) =>
T.of(
queryCache.clean("tagCache")
)
)
)
}
// which I call like this
createTagAndCleanTask(queryCache)({tagType: "NEED_HELP", userId: bob.id})
This works, but I wonder if this is not exactly what I could use ReaderTaskEither for?
Idea 1: I tried to use RTE.fromTaskEither on createTagTask, but createTagTask is a function that returns a TaskEither, not a TaskEither itself...
Idea 2: I tried to use RTE.fromTaskEither as a third step in the flow after postTag, but I don't know how to provide proper typing then and make it aware of an env config object.
My understanding of this article is that I should aim at something like (args) => (env) => body instead of (env) => (args) => body for each function. But I cannot find a way to invert arguments that are provided directly via flow.
Is there a way I can rewrite this code so that env objects like queryCache can be provided in a cleaner way?
|
[
"Reader is (env) => A, so the deps need to come \"last\" in the list of arguments. You need to think of your function as (args) => (env) => result instead of (env) => (args) => result as you correctly identified. There is a flip function in fp-ts that can be used to invert the arguments afterwards to simplify passing the env in first (before the args).\nHere's an example of what this might look like in your case\n// I just made up a type for the query cache\ntype QueryCache = { clean: (queryKey: string) => void }\n\n// simple function which deals with cleaning the cache as you want\n// Note that it's type is equivalent to `Reader<{ queryCache: QueryCache }, void>`\nconst cleanCache = (deps: { queryCache: QueryCache }) => {\n deps.queryCache.clean(\"tagCache\");\n}\n\nconst createTagTask = flow(\n createTagDraft,\n postTag,\n // Convert from TaskEither -> ReaderTaskEither here to allow us to compose the below\n RTE.fromTaskEither,\n // Convert the cleanCache Reader into a ReaderTaskEither\n RTE.chain(() => RTE.fromReader(cleanCache))\n)\n\n// Example of how you can partially apply the dependencies with flip\ndeclare const deps: { queryCache: QueryCache }\nconst _createTagTask = flip(createTagTask)(deps);\n\n// And then call the partially applied fn with args as normal\n_createTagTask({ tagType: {...}, userId: {...} })\n\nI think you can also replace RTE.chain(() => RTE.fromReader(cleanCache)) with RTE.chainReaderKW(() => cleanCache)\n",
"Yes, you can use the asks function from fp-ts/lib/ReaderTaskEither to provide the queryCache object in a cleaner way. Here's an example:\nimport { ReaderTaskEither, asks } from 'fp-ts/lib/ReaderTaskEither';\n\nconst createTagDraft = ({ tagType, userId }: { tagType: TagType, userId: UserId }): TagDraft => ({\n type: tagType, userId: userId\n})\n\nconst postTag = (tagDraft: TagDraft): ReaderTaskEither<unknown, Error, Tag> =>\n ReaderTaskEither.fromTaskEither(\n TE.tryCatch(\n () => axios.post('https://myTagEndpoint', tagDraft),\n (reason) => new Error(`${reason}`),\n )\n )\n\nconst createTagAndClean = (queryCache: any) =>\n pipe(\n postTag,\n TE.chainFirstTaskK((flag) =>\n asks(queryCache => T.of(\n queryCache.clean(\"tagCache\")\n ))\n )\n )\n\n// Usage example:\nconst task = createTagAndClean({ tagType: \"NEED_HELP\", userId: \"bob\" });\ntask.run({ queryCache: someQueryCacheObject }).then(result => {\n // ...\n});\n\n\nThe main difference here is that the createTagAndClean function no longer takes queryCache as its argument. Instead, it uses the asks function to access the queryCache object from the environment that is provided when you call run.\nThis means that when you call createTagAndClean, you only need to provide the tagDraft object, and not the queryCache object. You can then provide the queryCache object when you call run, along with any other dependencies that your computation might have.\n"
] |
[
1,
1
] |
[] |
[] |
[
"fp_ts",
"javascript"
] |
stackoverflow_0074560133_fp_ts_javascript.txt
|
Q:
placing two pieces of different data side by side in R
I am trying to place two datasets of 3 columns side by side so that they span 6 columns. They have the same column headings.
The first one is:
The next one is:
How can I place them side by side across the page so that they total 6 columns? As you can see they are different row lengths.
I want to then be able to download them as a single .csv.
I have tried using rbind, full bind, cbind, merge, etc. but nothing seems to work for this very simple task.
A:
You can merge on the row names by using 0 for the by argument, then remove the rowname column (i.e., [,-1]). Then, if you want to have duplicate column names (which is not a good idea), then you can replace the names after merging. Here, I just use subsets of mtcars as an example.
results <- merge(df1, df2, by = 0, all = TRUE)[,-1]
names(results) <- c(names(df1), names(df2))
Output
mpg cyl disp mpg cyl disp
1 21.0 6 160.0 21.0 6 160
2 21.0 6 160.0 21.0 6 160
3 22.8 4 108.0 22.8 4 108
4 21.4 6 258.0 21.4 6 258
5 18.7 8 360.0 NA NA NA
6 18.1 6 225.0 NA NA NA
7 14.3 8 360.0 NA NA NA
8 24.4 4 146.7 NA NA NA
Data
df1 <- mtcars[1:8, 1:3]
row.names(df1) <- NULL
df2 <- mtcars[1:4, 1:3]
row.names(df2) <- NULL
|
placing two pieces of different data side by side in R
|
I am trying to place two datasets of 3 columns side by side so that they span 6 columns. They have the same column headings.
The first one is:
The next one is:
How can I place them side by side across the page so that they total 6 columns? As you can see they are different row lengths.
I want to then be able to download them as a single .csv.
I have tried using rbind, full bind, cbind, merge, etc. but nothing seems to work for this very simple task.
|
[
"You can merge on the row names by using 0 for the by argument, then remove the rowname column (i.e., [,-1]). Then, if you want to have duplicate column names (which is not a good idea), then you can replace the names after merging. Here, I just use subsets of mtcars as an example.\nresults <- merge(df1, df2, by = 0, all = TRUE)[,-1]\n\nnames(results) <- c(names(df1), names(df2))\n\nOutput\n mpg cyl disp mpg cyl disp\n1 21.0 6 160.0 21.0 6 160\n2 21.0 6 160.0 21.0 6 160\n3 22.8 4 108.0 22.8 4 108\n4 21.4 6 258.0 21.4 6 258\n5 18.7 8 360.0 NA NA NA\n6 18.1 6 225.0 NA NA NA\n7 14.3 8 360.0 NA NA NA\n8 24.4 4 146.7 NA NA NA\n\nData\ndf1 <- mtcars[1:8, 1:3]\nrow.names(df1) <- NULL\n\ndf2 <- mtcars[1:4, 1:3]\nrow.names(df2) <- NULL\n\n"
] |
[
1
] |
[] |
[] |
[
"bind",
"dataframe",
"r"
] |
stackoverflow_0074663968_bind_dataframe_r.txt
|
Q:
Any way to save Doc ID as a string in Firebase for a specific user
I'm new to Firebase and wanted to know if there is any way to access the document id that is created when making a user and save it as a string? For instance, when a user creates their account, it saves all their information into Firebase Firestore with a unique document id. Can you save that document id as a field for that specific user?
A:
Your question is not clear. If you want to save a document ID inside a document, then it is quite simple. First you'll need to get the document id from the collection reference, and then you'll need to set the data on that document id.
// Your collection reference
CollectionReference collectionReference=
FirebaseFirestore.instance.collection('your_collection_name');
//Let's get the document ID
String docId = collectionReference.doc().id;
//Now let's save the document id in that document.
collectionReference.doc(docId).set({
'id': docId,
});
And if you want to save the user id that is generated upon sign-up with Firebase Authentication, then this is how you do it.
late String userID;
FirebaseAuth.instance
.createUserWithEmailAndPassword(
email: email,
password: password,
)
.then((auth) {
userID = auth.user!.uid;
});
And then you can save this userID in your firebase collection.
|
Any way to save Doc ID as a string in Firebase for a specific user
|
I'm new to Firebase and wanted to know if there is any way to access the document id that is created when making a user and save it as a string? For instance, when a user creates their account, it saves all their information into Firebase Firestore with a unique document id. Can you save that document id as a field for that specific user?
|
[
"Your Question is not clear. If you want to save a document ID inside a document then It is quite simple. First You'll need to get the document id on the collection reference and then you'll need to set the data on that document id.\n// Your collection reference\nCollectionReference collectionReference=\n FirebaseFirestore.instance.collection('your_collection_name');\n//Let's get the document ID\nString docId = collectionReference.doc().id;\n//Now let's save the document id in that document.\ncollectionReference.doc(docId).set({\n 'id': docId, \n });\n\nAnd if you want to save the user id that is generated upon sign up from firebase authentication then this is how you do it.\nlate String userID;\nFirebaseAuth.instance\n .createUserWithEmailAndPassword(\n email: email,\n password: password,\n)\n .then((auth) {\n userID = auth.user?.uid;\n});\n\nAnd then you can save this userID in your firebase collection.\n"
] |
[
0
] |
[] |
[] |
[
"firebase",
"flutter"
] |
stackoverflow_0074664205_firebase_flutter.txt
|
Q:
Conditional view modifier sometimes doesn't want to update
As I am working on a study app, I'm trying to build a set of cards where a user can swipe each individual card (in this case a view, rendered in a ForEach loop), and once they have flipped through all of them, the cards reset to the normal stack. The program works, but sometimes the stack of cards doesn't reset. Each individual card updates a variable in a viewModel which my conditional view modifier looks at in order to reset the stack of cards using offset; when the condition is satisfied, the card view updates, and ".onChange" watches for the change in the viewModel to then set the variable back to its original state.
I've printed each variable at each step of the way and every variable updates, so I can only assume that the way I'm updating my view, using a conditional view modifier, may not be the correct way to go about it. Any suggestions would be appreciated.
Here is my code:
The view that houses the card views with the conditional view modifier
extension View {
@ViewBuilder func `resetCards`<Content: View>(_ condition: Bool, transform: (Self) -> Content) -> some View {
if condition == true {
transform(self).offset(x: 0, y: 0)
} else {
self
}
}
}
struct StudyListView: View {
@ObservedObject var currentStudySet: HomeViewModel
@ObservedObject var studyCards: StudyListViewModel = StudyListViewModel()
@State var studyItem: StudyModel
@State var index: Int
var body: some View {
ForEach(currentStudySet.allSets[index].studyItem.reversed()) { item in
StudyCardItemView(currentCard: studyCards, card: item, count: currentStudySet.allSets[index].studyItem.count)
.resetCards(studyCards.isDone) { view in
view
}
.onChange(of: studyCards.isDone, perform: { _ in
studyCards.isDone = false
})
}
}
}
StudyCardItemView
struct StudyCardItemView: View {
@StateObject var currentCard: StudyListViewModel
@State var card: StudyItemModel
@State var count: Int
@State var offset = CGSize.zero
var body: some View {
VStack{
VStack{
ZStack(alignment: .center){
Text("\(card.itemTitle)")
}
}
}
.frame(width: 350, height: 200)
.background(Color.white)
.cornerRadius(10)
.shadow(radius: 5)
.padding(5)
.rotationEffect(.degrees(Double(offset.width / 5)))
.offset(x: offset.width * 5, y: 0)
.gesture(
DragGesture()
.onChanged { gesture in
offset = gesture.translation
}
.onEnded{ _ in
if abs(offset.width) > 100 {
currentCard.cardsSortedThrough += 1
if (currentCard.cardsSortedThrough == count) {
currentCard.isDone = true
currentCard.cardsSortedThrough = 0
}
} else {
offset = .zero
}
}
)
}
}
HomeViewModel
class HomeViewModel: ObservableObject {
@Published var studySet: StudyModel = StudyModel()
@Published var allSets: [StudyModel] = [StudyModel()]
}
I initialize allSets with one StudyModel() to see it in the preview
StudyListViewModel
class StudyListViewModel: ObservableObject {
@Published var cardsSortedThrough: Int = 0
@Published var isDone: Bool = false
}
StudyModel
import SwiftUI
struct StudyModel: Hashable{
var title: String = ""
var days = ["One day", "Two days", "Three days", "Four days", "Five days", "Six days", "Seven days"]
var studyGoals = "One day"
var studyItem: [StudyItemModel] = []
}
Lastly, StudyItemModel
struct StudyItemModel: Hashable, Identifiable{
let id = UUID()
var itemTitle: String = ""
var itemDescription: String = ""
}
Once again, any help would be appreciated, thanks in advance!
A:
I just found a fix: I put .onChange at the end of StudyCardItemView. Basically, the onChange lets each view created in the ForEach loop watch for a change in the currentCard.isDone variable and update its offset individually. This made my conditional view modifier obsolete; I just use onChange to check for the condition.
I still used onChange outside the view with the ForEach loop, just to set the currentCard.isDone variable back to false, because the variable gets set after all array elements are iterated through.
The updated code:
StudyCardItemView
struct StudyCardItemView: View {
@StateObject var currentCard: StudyListViewModel
@State var card: StudyItemModel
@State var count: Int
@State var offset = CGSize.zero
var body: some View {
VStack{
VStack{
ZStack(alignment: .center){
Text("\(card.itemTitle)")
}
}
}
.frame(width: 350, height: 200)
.background(Color.white)
.cornerRadius(10)
.shadow(radius: 5)
.padding(5)
.rotationEffect(.degrees(Double(offset.width / 5)))
.offset(x: offset.width * 5, y: 0)
.gesture(
DragGesture()
.onChanged { gesture in
offset = gesture.translation
}
.onEnded{ _ in
if abs(offset.width) > 100 {
currentCard.cardsSortedThrough += 1
if (currentCard.cardsSortedThrough == count) {
currentCard.isDone = true
currentCard.cardsSortedThrough = 0
}
} else {
offset = .zero
}
}
)
.onChange(of: currentCard.isDone, perform: {_ in
if(currentCard.isDone == true){
offset = .zero
}
})
}
}
StudyListView
struct StudyListView: View {
@ObservedObject var currentStudySet: HomeViewModel
@ObservedObject var studyCards: StudyListViewModel = StudyListViewModel()
@State var studyItem: StudyModel
@State var index: Int
var body: some View {
ForEach(currentStudySet.allSets[index].studyItem.reversed()) { item in
StudyCardItemView(currentCard: studyCards, card: item, count:
currentStudySet.allSets[index].studyItem.count)
.onChange(of: studyCards.isDone, perform: { _ in
studyCards.isDone = false
})
}
}
}
Hope this helps anyone in the future!
|
Conditional view modifier sometimes doesn't want to update
|
As I am working on a study app, I'm trying to build a set of cards where a user can swipe each individual card (in this case a view, rendered in a ForEach loop), and once they have flipped through all of them, the cards reset to the normal stack. The program works, but sometimes the stack of cards doesn't reset. Each individual card updates a variable in a viewModel which my conditional view modifier looks at in order to reset the stack of cards using offset; when the condition is satisfied, the card view updates, and ".onChange" watches for the change in the viewModel to then set the variable back to its original state.
I've printed each variable at each step of the way and every variable updates, so I can only assume that the way I'm updating my view, using a conditional view modifier, may not be the correct way to go about it. Any suggestions would be appreciated.
Here is my code:
The view that houses the card views with the conditional view modifier
extension View {
@ViewBuilder func `resetCards`<Content: View>(_ condition: Bool, transform: (Self) -> Content) -> some View {
if condition == true {
transform(self).offset(x: 0, y: 0)
} else {
self
}
}
}
struct StudyListView: View {
@ObservedObject var currentStudySet: HomeViewModel
@ObservedObject var studyCards: StudyListViewModel = StudyListViewModel()
@State var studyItem: StudyModel
@State var index: Int
var body: some View {
ForEach(currentStudySet.allSets[index].studyItem.reversed()) { item in
StudyCardItemView(currentCard: studyCards, card: item, count: currentStudySet.allSets[index].studyItem.count)
.resetCards(studyCards.isDone) { view in
view
}
.onChange(of: studyCards.isDone, perform: { _ in
studyCards.isDone = false
})
}
}
}
StudyCardItemView
struct StudyCardItemView: View {
@StateObject var currentCard: StudyListViewModel
@State var card: StudyItemModel
@State var count: Int
@State var offset = CGSize.zero
var body: some View {
VStack{
VStack{
ZStack(alignment: .center){
Text("\(card.itemTitle)")
}
}
}
.frame(width: 350, height: 200)
.background(Color.white)
.cornerRadius(10)
.shadow(radius: 5)
.padding(5)
.rotationEffect(.degrees(Double(offset.width / 5)))
.offset(x: offset.width * 5, y: 0)
.gesture(
DragGesture()
.onChanged { gesture in
offset = gesture.translation
}
.onEnded{ _ in
if abs(offset.width) > 100 {
currentCard.cardsSortedThrough += 1
if (currentCard.cardsSortedThrough == count) {
currentCard.isDone = true
currentCard.cardsSortedThrough = 0
}
} else {
offset = .zero
}
}
)
}
}
HomeViewModel
class HomeViewModel: ObservableObject {
@Published var studySet: StudyModel = StudyModel()
@Published var allSets: [StudyModel] = [StudyModel()]
}
I initialize allSets with one StudyModel() to see it in the preview
StudyListViewModel
class StudyListViewModel: ObservableObject {
@Published var cardsSortedThrough: Int = 0
@Published var isDone: Bool = false
}
StudyModel
import SwiftUI
struct StudyModel: Hashable{
var title: String = ""
var days = ["One day", "Two days", "Three days", "Four days", "Five days", "Six days", "Seven days"]
var studyGoals = "One day"
var studyItem: [StudyItemModel] = []
}
Lastly, StudyItemModel
struct StudyItemModel: Hashable, Identifiable{
let id = UUID()
var itemTitle: String = ""
var itemDescription: String = ""
}
Once again, any help would be appreciated, thanks in advance!
|
[
"I just found a fix and I put .onChange at the end for StudyCardItemView. Basically, the onChange helps the view scan for a change in currentCard.isDone variable every time it was called in the foreach loop and updates offset individuality. This made my conditional view modifier obsolete and just use the onChange to check for the condition.\nI still used onChange outside the view with the foreach loop, just to set currentCard.isDone variable false because the variable will be set after all array elements are iterator through.\nThe updated code:\nStudyCardItemView\nstruct StudyCardItemView: View {\n\n @StateObject var currentCard: StudyListViewModel\n @State var card: StudyItemModel\n @State var count: Int\n \n @State var offset = CGSize.zero\n \n var body: some View {\n VStack{\n VStack{\n ZStack(alignment: .center){\n Text(\"\\(card.itemTitle)\")\n }\n }\n }\n .frame(width: 350, height: 200)\n .background(Color.white)\n .cornerRadius(10)\n .shadow(radius: 5)\n .padding(5)\n .rotationEffect(.degrees(Double(offset.width / 5)))\n .offset(x: offset.width * 5, y: 0)\n .gesture(\n DragGesture()\n .onChanged { gesture in\n offset = gesture.translation\n }\n .onEnded{ _ in\n if abs(offset.width) > 100 {\n currentCard.cardsSortedThrough += 1\n if (currentCard.cardsSortedThrough == count) {\n currentCard.isDone = true\n currentCard.cardsSortedThrough = 0\n }\n } else {\n offset = .zero\n }\n }\n )\n .onChange(of: currentCard.isDone, perform: {_ in \n if(currentCard.isDone == true){\n offset = .zero\n }\n })\n }\n}\n\nStudyListView\nstruct StudyListView: View {\n\n @ObservedObject var currentStudySet: HomeViewModel\n @ObservedObject var studyCards: StudyListViewModel = StudyListViewModel()\n \n @State var studyItem: StudyModel\n @State var index: Int\n\n var body: some View {\n ForEach(currentStudySet.allSets[index].studyItem.reversed()) { item in\n StudyCardItemView(currentCard: studyCards, card: item, count: \n currentStudySet.allSets[index].studyItem.count)\n .onChange(of: studyCards.isDone, perform: { _ in\n studyCards.isDone = false\n })\n }\n }\n}\n\nHope this helps anyone in the future!\n"
] |
[
0
] |
[] |
[] |
[
"swift",
"swift5",
"swiftui"
] |
stackoverflow_0074646249_swift_swift5_swiftui.txt
|
Q:
Issues Calling Functions from Freertos Task (ESP32)
Currently I am having issues with a FreeRTOS program. The purpose of the program is to control a stepper motor as well as an LED. Implementing the motor control without microstepping does not cause any issues, as the two tasks take no parameters and call no functions.
However, when I introduce microstepping, which requires two nested functions to be called by the move_routine task, the program will not do anything (no LED flashing, no motor turning) when it did before. Does anyone have any solutions for this or any reasons on why this shouldn't work? From what I can see it should be fine to call a function from a FreeRTOS task.
#include <Arduino.h>
#include <Stepper.h>
/*================PIN DEFINITIONS================*/
#define LEDC_CHANNEL_0 0
#define LEDC_CHANNEL_1 1
#define LEDC_CHANNEL_2 2
#define LEDC_CHANNEL_3 3
const int A1A = 14;
const int A1B = 27;
const int B1A = 26;
const int B2A = 25;
const int ledPin = 33;
/*================VARIABLE DEFINITIONS================*/
int stepnumber = 0;
int Pa; int Pb;
const int stepsPerRev = 200;
Stepper myStepper(stepsPerRev, 14,27,26,25);
/*================Function Definitions================*/
//Analogwrite using LEDC capabilities
void ledcAnalogWrite(uint8_t channel, uint32_t value, uint32_t valueMax = 255) {
//calculation of duty cycle
uint32_t duty = (4095/valueMax)*min(value, valueMax);
ledcWrite(channel,duty);
}
void move(int stepnumber, int MAXpower, int wait) {
Pa = (sin(stepnumber*0.098174)*MAXpower);
Pb = (cos(stepnumber*0.098174)*MAXpower);
if (Pa>0)
{
ledcAnalogWrite(LEDC_CHANNEL_0,Pa);
ledcAnalogWrite(LEDC_CHANNEL_1,0);
}
else
{
ledcAnalogWrite(LEDC_CHANNEL_0,0);
ledcAnalogWrite(LEDC_CHANNEL_1,abs(Pa));
}
if (Pb>0)
{
ledcAnalogWrite(LEDC_CHANNEL_2,Pb);
ledcAnalogWrite(LEDC_CHANNEL_3,0);
}
else
{
ledcAnalogWrite(LEDC_CHANNEL_2,0);
ledcAnalogWrite(LEDC_CHANNEL_3,abs(Pb));
}
}
void move_routine(void *parameters) {
while(1) {
for (int i=0; i<3199; i++)
{
stepnumber++;
move(stepnumber,255,250);
}
vTaskDelay(3000/portTICK_PERIOD_MS);
for (int i=0; i<1599; i++)
{
stepnumber--;
move(stepnumber,255,1000);
}
vTaskDelay(3000/portTICK_PERIOD_MS);
}
}
void led_task(void * parameters){
while(1){
digitalWrite(ledPin, HIGH);
vTaskDelay(500/portTICK_PERIOD_MS);
digitalWrite(ledPin, LOW);
vTaskDelay(500/portTICK_PERIOD_MS);
}
}
void setup(){
myStepper.setSpeed(60);
pinMode(ledPin, OUTPUT);
Serial.begin(115200);
xTaskCreatePinnedToCore(
move_routine,
"Move Routine",
8192,
NULL,
1,
NULL,
1
);
xTaskCreatePinnedToCore(
led_task,
"LED Task",
1024,
NULL,
1,
NULL,
1
);
}
void loop(){
}
Expected to see flashing led and motor turning but nothing witnessed
A:
Your two tasks work as they were defined; the problem is your ledcAnalogWrite() function, which calls ledcWrite() while nowhere in your code do you initialise the LEDC with ledcSetup() and ledcAttachPin().
Read the documentation on LED Control (LEDC).
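For reference, a minimal sketch of the missing initialisation (the 5 kHz frequency is an assumption picked for illustration; the 12-bit resolution matches the 0..4095 duty calculation in ledcAnalogWrite()):
const int LEDC_FREQ_HZ = 5000;  // assumed PWM frequency; choose one suited to your motor
const int LEDC_RES_BITS = 12;   // 12-bit resolution matches the 4095 duty math above

void setupLedc() {
  // configure each channel, then attach the motor pins to the channels
  ledcSetup(LEDC_CHANNEL_0, LEDC_FREQ_HZ, LEDC_RES_BITS);
  ledcSetup(LEDC_CHANNEL_1, LEDC_FREQ_HZ, LEDC_RES_BITS);
  ledcSetup(LEDC_CHANNEL_2, LEDC_FREQ_HZ, LEDC_RES_BITS);
  ledcSetup(LEDC_CHANNEL_3, LEDC_FREQ_HZ, LEDC_RES_BITS);
  ledcAttachPin(A1A, LEDC_CHANNEL_0);
  ledcAttachPin(A1B, LEDC_CHANNEL_1);
  ledcAttachPin(B1A, LEDC_CHANNEL_2);
  ledcAttachPin(B2A, LEDC_CHANNEL_3);
}
Call setupLedc() from setup() before creating the tasks.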
|
Issues Calling Functions from Freertos Task (ESP32)
|
Currently I am having issues with a FreeRTOS program. The purpose of the program is to control a stepper motor as well as an LED. Implementing the motor control without microstepping does not cause any issues, as the two tasks take no parameters and call no functions.
However, when I introduce microstepping, which requires two nested functions to be called by the move_routine task, the program will not do anything (no LED flashing, no motor turning) when it did before. Does anyone have any solutions for this or any reasons on why this shouldn't work? From what I can see it should be fine to call a function from a FreeRTOS task.
#include <Arduino.h>
#include <Stepper.h>
/*================PIN DEFINITIONS================*/
#define LEDC_CHANNEL_0 0
#define LEDC_CHANNEL_1 1
#define LEDC_CHANNEL_2 2
#define LEDC_CHANNEL_3 3
const int A1A = 14;
const int A1B = 27;
const int B1A = 26;
const int B2A = 25;
const int ledPin = 33;
/*================VARIABLE DEFINITIONS================*/
int stepnumber = 0;
int Pa; int Pb;
const int stepsPerRev = 200;
Stepper myStepper(stepsPerRev, 14,27,26,25);
/*================Function Definitions================*/
//Analogwrite using LEDC capabilities
void ledcAnalogWrite(uint8_t channel, uint32_t value, uint32_t valueMax = 255) {
//calculation of duty cycle
uint32_t duty = (4095/valueMax)*min(value, valueMax);
ledcWrite(channel,duty);
}
void move(int stepnumber, int MAXpower, int wait) {
Pa = (sin(stepnumber*0.098174)*MAXpower);
Pb = (cos(stepnumber*0.098174)*MAXpower);
if (Pa>0)
{
ledcAnalogWrite(LEDC_CHANNEL_0,Pa);
ledcAnalogWrite(LEDC_CHANNEL_1,0);
}
else
{
ledcAnalogWrite(LEDC_CHANNEL_0,0);
ledcAnalogWrite(LEDC_CHANNEL_1,abs(Pa));
}
if (Pb>0)
{
ledcAnalogWrite(LEDC_CHANNEL_2,Pb);
ledcAnalogWrite(LEDC_CHANNEL_3,0);
}
else
{
ledcAnalogWrite(LEDC_CHANNEL_2,0);
ledcAnalogWrite(LEDC_CHANNEL_3,abs(Pb));
}
}
void move_routine(void *parameters) {
while(1) {
for (int i=0; i<3199; i++)
{
stepnumber++;
move(stepnumber,255,250);
}
vTaskDelay(3000/portTICK_PERIOD_MS);
for (int i=0; i<1599; i++)
{
stepnumber--;
move(stepnumber,255,1000);
}
vTaskDelay(3000/portTICK_PERIOD_MS);
}
}
void led_task(void * parameters){
while(1){
digitalWrite(ledPin, HIGH);
vTaskDelay(500/portTICK_PERIOD_MS);
digitalWrite(ledPin, LOW);
vTaskDelay(500/portTICK_PERIOD_MS);
}
}
void setup(){
myStepper.setSpeed(60);
pinMode(ledPin, OUTPUT);
Serial.begin(115200);
xTaskCreatePinnedToCore(
move_routine,
"Move Routine",
8192,
NULL,
1,
NULL,
1
);
xTaskCreatePinnedToCore(
led_task,
"LED Task",
1024,
NULL,
1,
NULL,
1
);
}
void loop(){
}
Expected to see flashing led and motor turning but nothing witnessed
|
[
"Your two tasks work as it were defined, the problem is your ledcAnalogWrite() function where you are calling ledcWrite() function with nowhere in your code initialise the LEDC with ledcSetup() and ledcAttachPin().\nRead the documentation on LED Control(LEDC).\n"
] |
[
0
] |
[] |
[] |
[
"esp32",
"freertos"
] |
stackoverflow_0074658792_esp32_freertos.txt
|
Q:
Trying to randomly send signal from parent to child process in a loop
I'm trying to have it execute in a loop where the parent randomly picks between SIGUSR1 and SIGUSR2 and sends it to the child process, which receives it and writes it to a file.
My problem is that the signal only gets through on the first loop iteration; after that it stops.
int main(int argc, char* argv[], char *envp[]){
time_t start, finish; //for example purposes, to save the time
struct sigaction sact; //signal action structure
sact.sa_handler = &handler;
sact.sa_handler = &handler2;
sigset_t new_set, old_set; //signal mask data-types
FILE *file = fopen("received_signal.txt", "w");
fprintf(file,"%s\t %s\t %s\n", "Signal Type",
"Signal Time", "thread ID");
fclose(file);
int pid;
int cpid;
pid = fork();
if(pid == 0){//recieves
//sigaction(SIGUSR1, &sact, NULL);
while(1){
signal(SIGUSR1, handler);
signal(SIGUSR2, handler2);
sleep(1);
}
} else{ //generates
while(1){
sleep(1); // give child time to spawn
printf("hello\n");
parent_func(0);
//wait(NULL);
usleep(((rand() % 5) + 1) * 10000);
}
}
return 0;
}
void parent_func(int child_pid){
srand(time(NULL));
int rnd = rand();
int result = (rnd & 1) ? 2 : 1;
struct timeval t;
gettimeofday(&t, NULL);
unsigned long time = 1000000 * t.tv_sec + t.tv_usec;
printf("result: %d\n", result);
printf("time: %ld\n", time);
if(result == 1){
//sigaction(SIGUSR1, &sact, NULL);
kill(child_pid, SIGUSR1);
log(SIGUSR1);
} else{
//sigaction(SIGUSR2, &sact, NULL);
kill(child_pid, SIGUSR2);
log(SIGUSR2);
}
}
void handler(int sig){
if (sig == SIGUSR1){
puts("child received SIGUSR1");
}
}
void handler2(int sig){
if (sig == SIGUSR2){
puts("child received SIGUSR2");
}
}
Tried throwing the child in a while loop to get it to repeat but no such luck
A:
man signal(2) tells you that the handler is reset to SIG_DFL once a signal is delivered:
If the disposition is set to a function, then first either the disposition is reset to SIG_DFL, or the signal is blocked (see Portability below), and then handler is called with argument signum. If invocation of the handler caused the signal to be blocked, then the signal is unblocked upon return from the handler.
I suggest you use sigaction instead of signal:
#define _XOPEN_SOURCE 500
#define _POSIX_C_SOURCE 199309L
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>
void handler(int sig) {
char s[] = "child received signal SIGUSR?\n";
char *s2 = strchr(s, '?');
*s2 = sig == SIGUSR1 ? '1' : '2';
write(STDOUT_FILENO, s, strlen(s));
}
int main(int argc, char* argv[], char *envp[]){
pid_t child_pid = fork();
if(!child_pid) {
struct sigaction sa = {
.sa_handler = &handler
};
sigaction(SIGUSR1, &sa, NULL);
sigaction(SIGUSR2, &sa, NULL);
for(;;) {
sleep(1);
}
return 0;
}
for(;;) {
sleep(1);
int s = (int []){SIGUSR1, SIGUSR2}[rand() % 2];
printf("parent sending signal %d to %d\n", s, child_pid);
kill(child_pid, s);
}
}
and sample output:
parent sending signal 12 to 521586
child received signal SIGUSR2
parent sending signal 10 to 521586
child received signal SIGUSR1
parent sending signal 12 to 521586
child received signal SIGUSR2
parent sending signal 12 to 521586
child received signal SIGUSR2
|
Trying to randomly send signal from parent to child process in a loop
|
I'm trying to have it execute in a loop where the parent randomly picks between SIGUSR1 and SIGUSR2 and sends it to the child process, which receives it and writes it to a file.
My problem is that the signal only gets through on the first loop iteration; after that it stops.
int main(int argc, char* argv[], char *envp[]){
time_t start, finish; //for example purposes, to save the time
struct sigaction sact; //signal action structure
sact.sa_handler = &handler;
sact.sa_handler = &handler2;
sigset_t new_set, old_set; //signal mask data-types
FILE *file = fopen("received_signal.txt", "w");
fprintf(file,"%s\t %s\t %s\n", "Signal Type",
"Signal Time", "thread ID");
fclose(file);
int pid;
int cpid;
pid = fork();
if(pid == 0){//recieves
//sigaction(SIGUSR1, &sact, NULL);
while(1){
signal(SIGUSR1, handler);
signal(SIGUSR2, handler2);
sleep(1);
}
} else{ //generates
while(1){
sleep(1); // give child time to spawn
printf("hello\n");
parent_func(0);
//wait(NULL);
usleep(((rand() % 5) + 1) * 10000);
}
}
return 0;
}
void parent_func(int child_pid){
srand(time(NULL));
int rnd = rand();
int result = (rnd & 1) ? 2 : 1;
struct timeval t;
gettimeofday(&t, NULL);
unsigned long time = 1000000 * t.tv_sec + t.tv_usec;
printf("result: %d\n", result);
printf("time: %ld\n", time);
if(result == 1){
//sigaction(SIGUSR1, &sact, NULL);
kill(child_pid, SIGUSR1);
log(SIGUSR1);
} else{
//sigaction(SIGUSR2, &sact, NULL);
kill(child_pid, SIGUSR2);
log(SIGUSR2);
}
}
void handler(int sig){
if (sig == SIGUSR1){
puts("child received SIGUSR1");
}
}
void handler2(int sig){
if (sig == SIGUSR2){
puts("child received SIGUSR2");
}
}
Tried throwing the child in a while loop to get it to repeat but no such luck
|
[
"man signal(2) tells you that the handler is reset to SIG_DFL once a signal is delivered:\n\nIf the disposition is set to a function, then first either the disposition is reset to SIG_DFL, or the signal is blocked (see Portability below), and then handler is called with argument signum. If invocation of the handler caused the signal to be blocked, then the signal is unblocked upon return from the handler.\n\nI suggest you use sigaction instead of signal:\n#define _XOPEN_SOURCE 500\n#define _POSIX_C_SOURCE 199309L\n#include <signal.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <sys/time.h>\n#include <time.h>\n#include <unistd.h>\n\nvoid handler(int sig) {\n char s[] = \"child received signal SIGUSR?\\n\";\n char *s2 = strchr(s, '?');\n *s2 = sig == SIGUSR1 ? '1' : '2';\n write(STDOUT_FILENO, s, strlen(s));\n}\n\nint main(int argc, char* argv[], char *envp[]){\n pid_t child_pid = fork();\n if(!child_pid) {\n struct sigaction sa = {\n .sa_handler = &handler\n };\n sigaction(SIGUSR1, &sa, NULL);\n sigaction(SIGUSR2, &sa, NULL);\n for(;;) {\n sleep(1);\n }\n return 0;\n }\n for(;;) {\n sleep(1);\n int s = (int []){SIGUSR1, SIGUSR2}[rand() % 2];\n printf(\"parent sending signal %d to %d\\n\", s, child_pid);\n kill(child_pid, s);\n }\n}\n\nand sample output:\nparent sending signal 12 to 521586\nchild received signal SIGUSR2\nparent sending signal 10 to 521586\nchild received signal SIGUSR1\nparent sending signal 12 to 521586\nchild received signal SIGUSR2\nparent sending signal 12 to 521586\nchild received signal SIGUSR2\n\n"
] |
[
0
] |
[] |
[] |
[
"c",
"fork",
"signals"
] |
stackoverflow_0074664107_c_fork_signals.txt
|
Q:
Write a query to sort customers from most important to least important
I am very new to SQL. I have three tables: transactions, products, and customers. I would like to write a query to sort from the most important customer to the least important one.
But it shows me each customer multiple times. I would like to have distinct customers, ordered from the most important to the least important in terms of the quantity they purchased.
Select c.id , c.first_name , c.last_name, t.quantity
From transactions as t , customer_data as c
INNER JOIN transactions ON t.customer_id = c.id
ORDER by t.quantity DESC
A:
Select c.id, c.first_name, c.last_name, SUM(t.quantity) AS total_quantity
From customer_data as c
INNER JOIN transactions as t ON t.customer_id = c.id
GROUP by c.id, c.first_name, c.last_name
ORDER by total_quantity DESC
|
Write a query to sort customers from most important to least important
|
I am very new to SQL. I have three tables: transactions, products, and customers. I would like to write a query to sort from the most important customer to the least important one.
But it shows me each customer multiple times. I would like to have distinct customers, ordered from the most important to the least important in terms of the quantity they purchased.
Select c.id , c.first_name , c.last_name, t.quantity
From transactions as t , customer_data as c
INNER JOIN transactions ON t.customer_id = c.id
ORDER by t.quantity DESC
|
[
"Select c.first_name , c.last_name, t.quantity\nFrom transactions as t , customer_data as c\nINNER JOIN transactions ON t.customer_id = c.id\nGROUP by c.first_name\nORDER by t.quantity DESC\n"
] |
[
0
] |
[] |
[] |
[
"sql"
] |
stackoverflow_0074658333_sql.txt
|
Q:
Does Socket IO automatically transmit cookies
I have the following Socket IO Client on my frontend that connects to a service running on Google Cloud Run. I enabled Session Affinity in my Google Cloud Run service to ensure subsequent requests from my Socket IO Client are sent to the same Container instance so I don't get undeliverable events.
With Session Affinity enabled, Google Cloud Run seems to be sending back a session affinity token stored in cookies to identify that client for subsequent requests.
So, I just wanted to confirm: while the Socket IO client is connecting, does it also send across all available cookies as received from Google Cloud Run, or do I need to explicitly tell it to do so?
Below is my client code
import io from "socket.io-client";
const socketUrl = EndPoints.SOCKET_IO_BASE;
let socketOptions = { transports: ["websocket"] }
let socket;
if (!socket) {
socket = io(socketUrl, socketOptions);
socket.on('connect', () => {
console.log(`Connected to Server`);
})
socket.on('disconnect', () => {
console.log(`Disconnected from Server`); //This never gets called when the Cloud Run service instance is running, so I can assume a disconnect never happened.
})
}
export default socket;
A:
By default, the Socket.IO client will automatically include any cookies that it receives from the server in subsequent requests. This means that if the server is sending back a cookie with a session affinity token, the Socket.IO client will automatically include that cookie in its requests to the server.
You don't need to do anything special to make this happen. As long as the server is sending back the cookie with the session affinity token, the Socket.IO client will automatically include that cookie in its requests.
However, if for some reason you don't want the Socket.IO client to include cookies in its requests, you can disable this behavior by setting the withCredentials option to false when you create the Socket.IO client:
let socketOptions = { transports: ["websocket"], withCredentials: false }
|
Does Socket IO automatically transmit cookies
|
I have the following Socket IO Client on my frontend that connects to a service running on Google Cloud Run. I enabled Session Affinity in my Google Cloud Run service to ensure subsequent requests from my Socket IO Client are sent to the same Container instance so I don't get undeliverable events.
With Session Affinity enabled, Google Cloud Run seems to be sending back a session affinity token stored in cookies to identify that client for subsequent requests.
So, I just wanted to confirm: while the Socket IO client is connecting, does it also send across all available cookies as received from Google Cloud Run, or do I need to explicitly tell it to do so?
Below is my client code
import io from "socket.io-client";
const socketUrl = EndPoints.SOCKET_IO_BASE;
let socketOptions = { transports: ["websocket"] }
let socket;
if (!socket) {
socket = io(socketUrl, socketOptions);
socket.on('connect', () => {
console.log(`Connected to Server`);
})
socket.on('disconnect', () => {
console.log(`Disconnected from Server`); //This never gets called when the Cloud Run service instance is running, so I can assume a disconnect never happened.
})
}
export default socket;
|
[
"By default, the Socket.IO client will automatically include any cookies that it receives from the server in subsequent requests. This means that if the server is sending back a cookie with a session affinity token, the Socket.IO client will automatically include that cookie in its requests to the server.\nYou don't need to do anything special to make this happen. As long as the server is sending back the cookie with the session affinity token, the Socket.IO client will automatically include that cookie in its requests.\nHowever, if for some reason you don't want the Socket.IO client to include cookies in its requests, you can disable this behavior by setting the withCredentials option to false when you create the Socket.IO client:\nlet socketOptions = { transports: [\"websocket\"], withCredentials: false }\n\n"
] |
[
2
] |
[] |
[] |
[
"google_cloud_run",
"node.js",
"socket.io"
] |
stackoverflow_0074663498_google_cloud_run_node.js_socket.io.txt
|
Q:
CMake - BUILD_SHARED_LIBS for single libraries
Is there a variable like BUILD_SHARED_LIBS but only for a single target (e.g. MyLib_BUILD_SHARED where MyLib is the library)?
I know that I can manually determine if a library is static or dynamic using STATIC or DYNAMIC in the command add_library, but I want an option that can be set by a user instead of a hard coded solution.
Sincerely,
Lehks
A:
There is no builtin method in CMake that I'm aware of. Make it an option; that has the advantage of also documenting the intention to users.
option(BUILD_SHARED_LIBS "Build shared libraries (.dll/.so) instead of static ones (.lib/.a)" ON)
Then do the add_library command according to the option that was set.
A:
I believe the accepted answer is not entirely accurate because the question was specifically about setting this option for a single target. By making BUILD_SHARED_LIBS visible as an option, you control the STATIC/SHARED property of all library targets by default.
It is true that there is no built-in feature to do this for a single target. You would need to add that yourself. Something like the following:
option(MyLib_SHARED_LIBS "" ON)
if (MyLib_SHARED_LIBS)
add_library(MyLib SHARED "")
else()
add_library(MyLib STATIC "")
endif()
A:
The accepted answer ignores the global BUILD_SHARED_LIBS flag the user might have set. We can factor in this flag and expose a user overridable option with cmake_dependent_option.
Here's an example where a static library is built by default unless BUILD_SHARED_LIBS is set to true. The user can also leave BUILD_SHARED_LIBS unset but override just this library to be built as shared.
include(CMakeDependentOption)
cmake_dependent_option(
MYMATH_STATIC # option variable
"Build static library" # description
ON # default value if exposed; user can override
"NOT BUILD_SHARED_LIBS" # condition to expose option
OFF # value if not exposed; user can't override
)
# set build type based on dependent option
if (MYMATH_STATIC)
set(MYMATH_BUILD_TYPE STATIC)
else ()
set(MYMATH_BUILD_TYPE SHARED)
endif ()
# use build type in library definition
add_library(mymath ${MYMATH_BUILD_TYPE}
header.h
source.cpp
)
If BUILD_SHARED_LIBS is defined and is truthy1, MYMATH_STATIC isn't exposed as an option at all; user can't override a non-existant option.
# builds libmymath.dll/.so/.dylib
cmake -B build -DBUILD_SHARED_LIBS=1
# builds libmymath.dll/.so/.dylib ignoring MYMATH_STATIC
cmake -B build -DBUILD_SHARED_LIBS=1 -DMYMATH_STATIC=1
Otherwise, the option is available and the user can override it:
# builds libmymath.a since BUILD_SHARED_LIBS is left unset
cmake -B build -DMYMATH_STATIC=1
# builds libmymath.a since MYMATH_STATIC defaults to ON
cmake -B build
# builds libmymath.dll/.so/.dylib since BUILD_SHARED_LIBS is left unset
cmake -B build -DMYMATH_STATIC=0
1: TRUE, 1, ON, YES, Y or a non-zero number
|
CMake - BUILD_SHARED_LIBS for single libraries
|
Is there a variable like BUILD_SHARED_LIBS but only for a single target (e.g. MyLib_BUILD_SHARED where MyLib is the library).
I know that I can manually determine whether a library is static or shared by using STATIC or SHARED in the add_library command, but I want an option that can be set by a user instead of a hard-coded solution.
Sincerely,
Lehks
|
[
"There is no builtin method in CMake that I'm aware of. Make it an option, that has the advantage of also documenting the intention to users.\noption(BUILD_SHARED_LIBS \"Build shared libraries (.dll/.so) instead of static ones (.lib/.a)\" ON)\n\nThen do the add_library command according to the option that was set.\n",
"I believe the accepted answer is not entirely accurate because the question was specifically about setting this option for a single target. By making BUILD_SHARED_LIBS visible as an option, you control the STATIC/SHARED property of all library targets by default.\nIt is true that there is no built-in feature to do this for a single target. You would need to add that yourself. Something like the following:\noption(MyLib_SHARED_LIBS \"\" ON)\nif (MyLib_SHARED_LIBS)\n add_library(MyLib SHARED \"\")\nelse()\n add_library(MyLib STATIC \"\")\nendif()\n\n",
"The accepted answer ignores the global BUILD_SHARED_LIBS flag the user might have set. We can factor in this flag and expose a user overridable option with cmake_dependent_option.\nHere's an example, where a static library is build by default, unless BUILD_SHARED_LIBS is set to true. User can also not set BUILD_SHARED_LIBS but override just this libary to be built as shared.\ninclude(CMakeDependentOption)\ncmake_dependent_option(\n MYMATH_STATIC # option variable\n \"Build static library\" # description\n ON # default value if exposed; user can override\n \"NOT BUILD_SHARED_LIBS\" # condition to expose option\n OFF # value if not exposed; user can't override\n)\n\n# set build type based on dependent option\nif (MYMATH_STATIC)\n set(MYMATH_BUILD_TYPE STATIC)\nelse ()\n set(MYMATH_BUILD_TYPE SHARED)\nendif ()\n\n# use build type in library definition\nadd_library(mymath ${MYMATH_BUILD_TYPE}\n header.h\n source.cpp\n)\n\nIf BUILD_SHARED_LIBS is defined and is truthy1, MYMATH_STATIC isn't exposed as an option at all; user can't override a non-existant option.\n# builds libmymath.dll/.so/.dylib\ncmake -B build -DBUILD_SHARED_LIBS=1\n\n# builds libmymath.dll/.so/.dylib ignoring MYMATH_STATIC\ncmake -B build -DBUILD_SHARED_LIBS=1 -DMYMATH_STATIC=1\n\nOtherwise, the option is available and the user can override it:\n# builds libmymath.a since BUILD_SHARED_LIBS is left unset\ncmake -B build -DMYMATH_STATIC=1\n\n# builds libmymath.a since MYMATH_STATIC defaults to ON\ncmake -B build\n\n# builds libmymath.dll/.so/.dylib since BUILD_SHARED_LIBS is left unset\ncmake -B build -DMYMATH_STATIC=0\n\n1: TRUE, 1, ON, YES, Y or a non-zero number\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"cmake"
] |
stackoverflow_0053499678_cmake.txt
|
Q:
Order bar chart using react-charts-js-2 ChartJS
I am creating bar graphs from per-month sales data and it works fine, but the bars are never ordered by month, and the order changes on every render.
The data I receive from the API is the following:
{
{
"_id": 2022,
"sales": [
{
"month": "Dic",
"year": 2022,
"total": 8737.6
},
{
"month": "Oct",
"year": 2022,
"total": 1936.0000000000002
},
{
"month": "Sep",
"year": 2022,
"total": 526.8000000000001
},
{
"month": "Nov",
"year": 2022,
"total": 2205.2000000000003
}
]
}
}
So I use this code with react-chartjs-2 (chartjs) like this:
const data = {
labels: dataOfSalesPerMonth?.map((data) => data.month),
datasets: [
{
label: "Venta Mensual",
data: dataOfSalesPerMonth?.map((data) => data.total),
backgroundColor: [
"rgba(75,192,192,1)",
"#ecf0f1",
"#50AF95",
"#f3ba2f",
"#2a71d0",
],
borderColor: "black",
borderWidth: 2,
},
]
}
And in react I render it like this:
return (
<div className="w-full h-44">
{
Object.entries(sales).length &&
<Bar
data={data}
options={options}
/>
}
</div>
)
But the order is not correct, it always gives me the months mixed up and never in the same order:
How can I order it? Thanks.
A:
add custom sorting logic
const data = {
"_id": 2022,
"sales": [
{
"month": "Dic",
"year": 2022,
"total": 8737.6
},
{
"month": "Oct",
"year": 2022,
"total": 1936.0000000000002
},
{
"month": "Sep",
"year": 2022,
"total": 526.8000000000001
},
{
"month": "Nov",
"year": 2022,
"total": 2205.2000000000003
}
]
}
const sortByMonth = (arr) => {
const months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dic"];
arr.sort((a, b)=>{
return months.indexOf(a.month)
- months.indexOf(b.month);
});
}
sortByMonth(data.sales);
console.log(data.sales);
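With the array sorted, the chart data can be built from it directly. A sketch reusing the names from the question (assuming dataOfSalesPerMonth is the sorted sales array):
const sorted = [...data.sales]; // copy first if mutating the original is unwanted
sortByMonth(sorted);

const chartData = {
  labels: sorted.map((d) => d.month),
  datasets: [{ label: "Venta Mensual", data: sorted.map((d) => d.total) }],
};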
|
Order bar chart using react-charts-js-2 ChartJS
|
I am creating bar graphs from per-month sales data and it works fine, but the bars are never ordered by month, and the order changes on every render.
The data I receive from the API is the following:
{
{
"_id": 2022,
"sales": [
{
"month": "Dic",
"year": 2022,
"total": 8737.6
},
{
"month": "Oct",
"year": 2022,
"total": 1936.0000000000002
},
{
"month": "Sep",
"year": 2022,
"total": 526.8000000000001
},
{
"month": "Nov",
"year": 2022,
"total": 2205.2000000000003
}
]
}
}
So I use this code with react-chartjs-2 (chartjs) like this:
const data = {
labels: dataOfSalesPerMonth?.map((data) => data.month),
datasets: [
{
label: "Venta Mensual",
data: dataOfSalesPerMonth?.map((data) => data.total),
backgroundColor: [
"rgba(75,192,192,1)",
"#ecf0f1",
"#50AF95",
"#f3ba2f",
"#2a71d0",
],
borderColor: "black",
borderWidth: 2,
},
]
}
And in react I render it like this:
return (
<div className="w-full h-44">
{
Object.entries(sales).length &&
<Bar
data={data}
options={options}
/>
}
</div>
)
But the order is not correct, it always gives me the months mixed up and never in the same order:
How can I order it? Thanks.
|
[
"add custom sorting logic\n\n\nconst data = {\n \"_id\": 2022,\n \"sales\": [\n {\n \"month\": \"Dic\",\n \"year\": 2022,\n \"total\": 8737.6\n },\n {\n \"month\": \"Oct\",\n \"year\": 2022,\n \"total\": 1936.0000000000002\n },\n {\n \"month\": \"Sep\",\n \"year\": 2022,\n \"total\": 526.8000000000001\n },\n {\n \"month\": \"Nov\",\n \"year\": 2022,\n \"total\": 2205.2000000000003\n }\n ]\n}\n\n\nconst sortByMonth = (arr) => {\n const months = [\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\",\n \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dic\"];\n arr.sort((a, b)=>{\n return months.indexOf(a.month)\n - months.indexOf(b.month);\n });\n}\n\n\nsortByMonth(data.sales);\n\nconsole.log(data.sales);\n\n\n\n"
] |
[
1
] |
[] |
[] |
[
"javascript",
"reactjs"
] |
stackoverflow_0074664274_javascript_reactjs.txt
|
Q:
In what cases can Java strings be updated?
Consider the following Java program:
Version 1:
public class Traverse
{
public static void main(String[] args)
{
String str = "Frog";
// processString(str);
str = str.substring(2, 3) + str.substring(1, 2) + str.substring(0, 1);
System.out.println(str);
}
}
Version 2:
public class Traverse
{
public static void main(String[] args)
{
String str = "Frog";
processString(str);
System.out.println(str);
}
public static void processString (String str)
{
str = str.substring(2, 3) + str.substring(1, 2) + str.substring(0, 1);
}
}
Version 1 prints the output "orF", while version 2 prints the output "Frog".
It seems that when we attempt to use the processString method (version 2) to change the string, it fails, while if we try to manually change the string without using the processString method (version 1), it succeeds.
Why is this the case?
A:
Strings are a reference (object) type, and every reference type in Java is a subclass of java.lang.Object.
This means a String variable holds a reference to the actual data. However, when you pass it to a method, a copy of that reference is passed: in Java nothing is passed by reference; object references themselves are passed by value. On top of that, the String class is immutable. When you create a String object, an array of characters is made; when the String is "edited", that array is not actually modified, but a new object is created from the contents of the old string plus the edits you made.
These two facts together explain why the string is "not edited" outside the method: reassigning the parameter inside the method only changes that method's local copy of the reference, and a reference changed inside a method does not change the original reference outside it. All changes therefore happen only to the string inside the method, not outside. (If you could mutate the character array inside the String, it technically would have changed everywhere, but as far as I know that is not possible with the String class.)
Now you have a few options to do it in a method:
by returning the changed value:
public class Traverse
{
public static void main(String[] args)
{
String str = "Frog";
str=processString(str);
System.out.println(str);
}
public static String processString (String str)
{
return str.substring(2, 3) + str.substring(1, 2) + str.substring(0, 1);
}
}
or by a holder class:
public class Holder{
public String str;
}
void fillString(Holder holder){
    holder.str = holder.str.substring(2, 3) + holder.str.substring(1, 2) + holder.str.substring(0, 1);
}
There are also other ways, for example with an array of length 1 (depending on your use case outside of this question); a sketch of that variant follows.
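The array-of-length-1 variant uses the array as a one-element mutable holder (a sketch, not part of the original answer):
public static void processString(String[] box) {
    box[0] = box[0].substring(2, 3) + box[0].substring(1, 2) + box[0].substring(0, 1);
}

// usage:
String[] box = { "Frog" };
processString(box);
System.out.println(box[0]); // prints "orF"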
|
In what cases can Java strings be updated?
|
Consider the following Java program:
Version 1:
public class Traverse
{
public static void main(String[] args)
{
String str = "Frog";
// processString(str);
str = str.substring(2, 3) + str.substring(1, 2) + str.substring(0, 1);
System.out.println(str);
}
}
Version 2:
public class Traverse
{
public static void main(String[] args)
{
String str = "Frog";
processString(str);
System.out.println(str);
}
public static void processString (String str)
{
str = str.substring(2, 3) + str.substring(1, 2) + str.substring(0, 1);
}
}
Version 1 prints the output "orF", while version 2 prints the output "Frog".
It seems that when we attempt to use the processString method (version 2) to change the string, it fails, while if we try to manually change the string without using the processString method (version 1), it succeeds.
Why is this the case?
|
[
"Strings are Reference/Complex Object Type. And every reference type in java is a subclass of type java.lang.object\nthis means a String variable holds a reference to the actual data. However, a copy of the reference is passed. (In Java nothing is passed by reference) Object references are also passed by value. Next to that String class is immutable, meaning when you create a String object an array of characters is made; when this is \"edited\" its not actually getting edited, but a new object is being made from the contents of the old string + the edits you have done.\nthese two together result in the string \"not being edited\" outside the method, because the passed refrence value is changed (the refrence to the array inside the String) and when a refrence is change in a method, the original refrence doesnt change outside the method. resulting in all changes only done on the string inside the method and not outside. if you could change the array values inside the string, it technically would have changed. (but as far as i know that is not possible on the String class).\nNow you have a few options to do it in a method:\nby returning the changed value:\npublic class Traverse\n{\n public static void main(String[] args)\n {\n \n String str = \"Frog\";\n str=processString(str);\n System.out.println(str);\n\n }\n \n public static String processString (String str)\n {\n return str.substring(2, 3) + str.substring(1, 2) + str.substring(0, 1);\n }\n}\n\nor by a holder class:\npublic class Holder{\n public String str;\n}\nvoid fillString(Holder string){\n string.str.substring(2, 3) + string.str.substring(1, 2) + string.str.substring(0, 1);\n}\n\nthere are also other ways for example with an array of length 1 (depending on your usecase outside of this question).\n"
] |
[
0
] |
[] |
[] |
[
"class",
"java",
"methods",
"string"
] |
stackoverflow_0074664044_class_java_methods_string.txt
|
Q:
How to data bind to a Form Group within a Form Array? Angular
I am trying to make an Angular Forms app that allows the user to input some information. The user will be required to fill in basic information and add two sets of skills to the form at a time, and these are stored in an array.
HTML:
<div class="form-container">
<form (ngSubmit)="submit()" [formGroup]="myForm">
<h1>User Registration</h1>
<div class="form-group">
<label for="firstname"></label>`
<input type="text" name="firstname" formControlName="name" />
<input type="text" name="firstname" formControlName="email" />
<div formArrayName="skills">
<ng-container *ngFor="let skill of skillsArray.controls; index as i">
<div formGroupName="skills">
<input
type="text"
name="firstname"
placeholder="my skill"
formControlName="name"
formControlName="first_skill"
/>
<input
type="text"
name="firstname"
placeholder="my skill"
formControlName="name"
formControlName="second_skill"
/>
</div>
<button (click)="addSkills()">Add Skills</button>
</ng-container>
</div>
<button type="submit">Submit</button>
</div>
<br />
<div class="form-check">
{{ myForm.value | json }}
<br />
{{ myForm.valid | json }}
</div>
</form>
</div>
TS:
export class FormCompComponent implements OnInit {
myForm!: FormGroup;
constructor (private fb : FormBuilder) {
}
ngOnInit(): void {
this.myForm = new FormGroup({
name: new FormControl('', Validators.required),
email: new FormControl('', Validators.required),
skills: new FormArray([
new FormGroup({
first_skill: new FormControl('', Validators.required),
second_skill: new FormControl('', Validators.required),
})
]),
});
}
addSkills() {
this.skillsArray.push(new FormControl('', Validators.required));
}
get skillsArray() {
return this.myForm.get('skills') as FormArray;
}
submit() {
console.log(this.myForm.value);
}
}
From an interface perspective, everything is okay, I am able to add items to the array successfully but I am struggling to bind my input to my typescript objects
These are my results when inputting:
{ "name": "test", "email": "test", "skills": [ { "first_skill": "", "second_skill": "" }, "" ] }
How do i penetrate the nested objects from my HTML?
I am currently looping over the array and then attempting to access formGroupName.
My inputs register as blank. why is this?
Thanks,
A:
Issue 1: Incorrectly add FormGroup into FormArray
From here:
addSkills() {
this.skillsArray.push(new FormControl('', Validators.required));
}
You are adding a FormControl into the skills FormArray; it is supposed to be a FormGroup instead.
Solution for Issue 1
Would suggest writing a function for generating FormGroup for the skill object (initSkillFormGroup method).
Call the initSkillFormGroup method and add it to skillsArray.
addSkills() {
this.skillsArray.push(this.initSkillFormGroup());
}
initSkillFormGroup() {
return new FormGroup({
first_skill: new FormControl('', Validators.required),
second_skill: new FormControl('', Validators.required),
});
}
(Optional) Writing the initSkillFormGroup method to avoid redundant declaring the FormGroup for skill object. Well, when you build the root form, you can initialize the skills FormArray by calling the mentioned function.
this.myForm = new FormGroup({
...,
skills: new FormArray([this.initSkillFormGroup()]),
});
Issue 2: Incorrectly generate each skill FormGroup in skills FormArray
Solution for Issue 2
Pass the i to [formGroupName] attribute.
<ng-container *ngFor="let skill of skillsArray.controls; index as i">
<div [formGroupName]="i">
...
</div>
</ng-container>
Demo @ StackBlitz
|
How to data bind to a Form Group within a Form Array? Angular
|
I am trying to make an Angular Forms app that allows the user to input some information. The user will be required to fill in basic information and add two sets of skills to the form at a time, and these are stored in an array.
HTML:
<div class="form-container">
<form (ngSubmit)="submit()" [formGroup]="myForm">
<h1>User Registration</h1>
<div class="form-group">
<label for="firstname"></label>`
<input type="text" name="firstname" formControlName="name" />
<input type="text" name="firstname" formControlName="email" />
<div formArrayName="skills">
<ng-container *ngFor="let skill of skillsArray.controls; index as i">
<div formGroupName="skills">
<input
type="text"
name="firstname"
placeholder="my skill"
formControlName="name"
formControlName="first_skill"
/>
<input
type="text"
name="firstname"
placeholder="my skill"
formControlName="name"
formControlName="second_skill"
/>
</div>
<button (click)="addSkills()">Add Skills</button>
</ng-container>
</div>
<button type="submit">Submit</button>
</div>
<br />
<div class="form-check">
{{ myForm.value | json }}
<br />
{{ myForm.valid | json }}
</div>
</form>
</div>
TS:
export class FormCompComponent implements OnInit {
myForm!: FormGroup;
constructor (private fb : FormBuilder) {
}
ngOnInit(): void {
this.myForm = new FormGroup({
name: new FormControl('', Validators.required),
email: new FormControl('', Validators.required),
skills: new FormArray([
new FormGroup({
first_skill: new FormControl('', Validators.required),
second_skill: new FormControl('', Validators.required),
})
]),
});
}
addSkills() {
this.skillsArray.push(new FormControl('', Validators.required));
}
get skillsArray() {
return this.myForm.get('skills') as FormArray;
}
submit() {
console.log(this.myForm.value);
}
}
From an interface perspective, everything is okay, I am able to add items to the array successfully but I am struggling to bind my input to my typescript objects
These are my results when inputting:
{ "name": "test", "email": "test", "skills": [ { "first_skill": "", "second_skill": "" }, "" ] }
How do i penetrate the nested objects from my HTML?
I am currently looping over the array and then attempting to access formGroupName.
My inputs register as blank. why is this?
Thanks,
|
[
"Issue 1: Incorrectly add FormGroup into FormArray\nFrom here:\naddSkills() {\n this.skillsArray.push(new FormControl('', Validators.required));\n}\n\nYou are adding FormControl into skills FormArray, it supposes to be adding the FormGroup instead.\nSolution for Issue 1\n\nWould suggest writing a function for generating FormGroup for the skill object (initSkillFormGroup method).\n\nCall the initSkillFormGroup method and add it to skillsArray.\n\n\naddSkills() {\n this.skillsArray.push(this.initSkillFormGroup());\n}\n\ninitSkillFormGroup() {\n return new FormGroup({\n first_skill: new FormControl('', Validators.required),\n second_skill: new FormControl('', Validators.required),\n });\n}\n\n\n(Optional) Writing the initSkillFormGroup method to avoid redundant declaring the FormGroup for skill object. Well, when you build the root form, you can initialize the skills FormArray by calling the mentioned function.\n\nthis.myForm = new FormGroup({\n ...,\n skills: new FormArray([this.initSkillFormGroup()]),\n});\n\n\n\nIssue 2: Incorrectly generate each skill FormGroup in skills FormArray\nSolution for Issue 2\nPass the i to [formGroupName] attribute.\n<ng-container *ngFor=\"let skill of skillsArray.controls; index as i\">\n <div [formGroupName]=\"i\">\n ...\n </div>\n</ng-container>\n\nDemo @ StackBlitz\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"angular_reactive_forms",
"formarray",
"formgroups",
"forms"
] |
stackoverflow_0074664227_angular_angular_reactive_forms_formarray_formgroups_forms.txt
|
Q:
How to execute scripts after document has been rendered?
I am trying to load scripts from a database. These scripts are fetched after the document has been completely rendered. When I check the DOM tree after the document has loaded, the scripts fetched from the database are present, but they are not executed.
I know that the browser executes the script files first, and only afterwards do the scripts from the database arrive in the DOM. The browser is unaware of these scripts, so they are never executed. Is there another way to fetch the scripts from a database and still be able to execute them?
A:
You can fetch your data in the body onload event, or, if you don't want to load the script before the page renders, you can use a WebSocket to push it from your backend server to the DOM.
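Note that script elements inserted as markup (e.g. via innerHTML) are not executed by the browser; a common workaround, not part of the answer above, is to create the script element programmatically once the code arrives. A sketch (the /scripts endpoint is hypothetical):
fetch('/scripts')                      // hypothetical endpoint returning JS source
  .then((res) => res.text())
  .then((source) => {
    const el = document.createElement('script');
    el.textContent = source;           // scripts created this way do execute
    document.body.appendChild(el);
  });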
|
How to execute scripts after document has been rendered?
|
I am trying to load scripts from a database. These scripts are fetched after the document has been completely rendered. When I check the DOM tree after the document has loaded, the scripts fetched from the database are present, but they are not executed.
I know that the browser executes the script files first, and only afterwards do the scripts from the database arrive in the DOM. The browser is unaware of these scripts, so they are never executed. Is there another way to fetch the scripts from a database and still be able to execute them?
|
[
"you can get your data in body onload event or if you don't want to load script before rendering the page you can use webSoket to communicate to your Dom from backend server\n"
] |
[
0
] |
[] |
[] |
[
"dom",
"html",
"javascript",
"php",
"server_side_rendering"
] |
stackoverflow_0074664099_dom_html_javascript_php_server_side_rendering.txt
|
Q:
Riverpod and MediaQuery
I am trying to write a Riverpod provider that would return me a scaling factor for the screen, so that I can resize all my elements accordingly.
final scale = ref.watch(scaleProvider);
return LinearPercentIndicator(
animation: true,
lineHeight: 15 * scale,
...
);
Unfortunately for my naïve implementation the MediaQuery requires a build context that I don't have when I first create a global scaleProvider
final scaleProvider = Provider<double>((ref) {
final availableHeight = MediaQuery.of(context).size.height -
AppBar().preferredSize.height -
MediaQuery.of(context).padding.top -
MediaQuery.of(context).padding.bottom;
return availableHeight / 600.0;
});
What do I do? Are the Providers meant to be used that way at all? How do I make the scaling factor available to all my controls whenever they feel like?
A:
If you check media_query.dart, MediaQueryData gets its size in its constructor from the window (window.physicalSize), so you can read the window directly without a BuildContext:
import 'package:flutter/material.dart';
import 'dart:ui' as ui;
import 'package:flutter_riverpod/flutter_riverpod.dart';
final scaleProvider = Provider<double>((ref) {
final window = ui.window;
final availableHeight = window.physicalSize.height -
AppBar().preferredSize.height -
window.padding.top -
window.padding.bottom;
return availableHeight / 600.0;
});
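One caveat not in the original answer: window.physicalSize and window.padding are in physical pixels, while MediaQuery reports logical pixels. To match MediaQuery's values, divide by devicePixelRatio (a sketch):
final ratio = ui.window.devicePixelRatio;
final logicalHeight = ui.window.physicalSize.height / ratio;
final logicalPaddingTop = ui.window.padding.top / ratio;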
|
Riverpod and MediaQuery
|
I am trying to write a Riverpod provider that would return me a scaling factor for the screen, so that I can resize all my elements accordingly.
final scale = ref.watch(scaleProvider);
return LinearPercentIndicator(
animation: true,
lineHeight: 15 * scale,
...
);
Unfortunately for my naïve implementation the MediaQuery requires a build context that I don't have when I first create a global scaleProvider
final scaleProvider = Provider<double>((ref) {
final availableHeight = MediaQuery.of(context).size.height -
AppBar().preferredSize.height -
MediaQuery.of(context).padding.top -
MediaQuery.of(context).padding.bottom;
return availableHeight / 600.0;
});
What do I do? Are the Providers meant to be used that way at all? How do I make the scaling factor available to all my controls whenever they feel like?
|
[
"if you check it at media_query dart. size get its data in constructor as window.physical.size, so:\nimport 'package:flutter/material.dart';\n\nimport 'dart:ui' as ui;\n\nimport 'package:flutter_riverpod/flutter_riverpod.dart';\n\nfinal scaleProvider = Provider<double>((ref) {\n final window = ui.window;\n\n final availableHeight = window.physicalSize.height -\n AppBar().preferredSize.height -\n window.padding.top -\n window.padding.bottom;\n\n return availableHeight / 600.0;\n});\n\n"
] |
[
2
] |
[] |
[] |
[
"flutter",
"flutter_riverpod"
] |
stackoverflow_0074664232_flutter_flutter_riverpod.txt
|
Q:
how to convert the output of html text input to a number in javascript
I am trying to make a program which adds two numbers. I am a beginner; I got my input stored in a variable but cannot convert it to a number.
var num1 = Number.parseInt(num0)
I tried using parseInt but I still don't get the correct result.
Here's my full code:
<html>
<body>
<input type="text" id="text" placeholder="Number 1"></input>
<input type="text" id="text2" placeholder="Number 2"></input>
<button type="submit" id="submit" onclick="put()">Click Me</button>
<p id="myp"></p>
<script>
function put() {
var num0 = document.getElementById("text")
var num1 = Number.parseInt(num0)
var num4 = document.getElementById("text2")
var num2 = Number.parseInt(num4)
var sub = document.getElementById("submit")
var res = num1 + num2
document.getElementById("myp").innerHTML = num1.value + num2.value
}
</script>
</body>
</html>
If I try num1.value it's undefined, and if I only use num1 it's treated as text.
A:
function put() {
var num0 = document.getElementById("text")
var num1 = Number(num0.value)
var num4 = document.getElementById("text2")
var num2 = Number(num4.value)
var sub = document.getElementById("submit")
var res = num1 + num2
document.getElementById("myp").innerHTML = num1 + num2
}
A:
function put() {
var num0 = document.getElementById("text").value
var num1 = Number.parseInt(num0)
var num4 = document.getElementById("text2").value
var num2 = Number.parseInt(num4)
var res = num1 + num2
document.getElementById("myp").innerHTML = res
}
A:
You can use the + operator, like that:
var num1 = +num0.value;
...
var num2 = +num4.value;
and this will turn your string number into a floating point number
<input type="text" id="text" placeholder="Number 1" />
<input type="text" id="text2" placeholder="Number 2" />
<button type="submit" id="submit" onclick="put()">Click Me</button>
<p id="myp"></p>
<script>
function put() {
var num0 = document.getElementById("text");
var num1 = +num0.value;
var num4 = document.getElementById("text2");
var num2 = +num4.value;
var sub = document.getElementById("submit");
var res = num1 + num2;
document.getElementById("myp").innerHTML = res;
}
</script>
|
how to convert the output of html text input to a number in javascript
|
I am trying to make a program which adds two numbers. I am a beginner; I got my input stored in a variable but cannot convert it to a number.
var num1 = Number.parseInt(num0)
I tried using parseInt but I still don't get the correct result.
Here's my full code:
<html>
<body>
<input type="text" id="text" placeholder="Number 1"></input>
<input type="text" id="text2" placeholder="Number 2"></input>
<button type="submit" id="submit" onclick="put()">Click Me</button>
<p id="myp"></p>
<script>
function put() {
var num0 = document.getElementById("text")
var num1 = Number.parseInt(num0)
var num4 = document.getElementById("text2")
var num2 = Number.parseInt(num4)
var sub = document.getElementById("submit")
var res = num1 + num2
document.getElementById("myp").innerHTML = num1.value + num2.value
}
</script>
</body>
</html>
If I try num1.value it's undefined, and if I only use num1 it's treated as text.
|
[
" function put() {\n var num0 = document.getElementById(\"text\")\n var num1 = Number(num0.value)\n var num4 = document.getElementById(\"text2\")\n var num2 = Number(num4.value)\n var sub = document.getElementById(\"submit\")\n var res = num1 + num2\n document.getElementById(\"myp\").innerHTML = num1 + num2\n }\n\n",
"function put() {\n var num0 = document.getElementById(\"text\").value\n var num1 = Number.parseInt(num0)\n var num4 = document.getElementById(\"text2\").value\n var num2 = Number.parseInt(num4)\n\n var res = num1 + num2\n document.getElementById(\"myp\").innerHTML = res\n}\n\n",
"You can use the + operator, like that:\nvar num1 = +num0.value;\n...\nvar num2 = +num4.value;\n\nand this will turn your string number into a floating point number\n\n\n<input type=\"text\" id=\"text\" placeholder=\"Number 1\" />\n<input type=\"text\" id=\"text2\" placeholder=\"Number 2\" />\n<button type=\"submit\" id=\"submit\" onclick=\"put()\">Click Me</button>\n<p id=\"myp\"></p>\n\n<script>\n function put() {\n var num0 = document.getElementById(\"text\");\n var num1 = +num0.value;\n var num4 = document.getElementById(\"text2\");\n var num2 = +num4.value;\n var sub = document.getElementById(\"submit\");\n var res = num1 + num2;\n document.getElementById(\"myp\").innerHTML = res;\n }\n</script>\n\n\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"html",
"javascript"
] |
stackoverflow_0074663573_html_javascript.txt
|
Q:
How to install rust built from source to different prefix?
How do you x.py install rust built from git source to a prefix other than /usr/local?
I tried:
git/rust> python x.py install --prefix=/my/prefix
but it doesn't work:
error: Unrecognized option: 'prefix'
A:
The --prefix option is in the configure command that generates x.py. So its like:
git/rust> configure --prefix=/my/prefix
git/rust> python x.py install
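Alternatively, the prefix can be set in config.toml, which x.py reads. A sketch (the key names follow rustc's config.example.toml as far as I recall, so double-check against your checkout):
# config.toml at the root of the rust checkout
[install]
prefix = "/my/prefix"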
A:
A related option is --sysconfdir, which controls where the Rust installation's configuration files are stored. By default this is /usr/local/etc, but you can change it to any directory you want, including one under your prefix.
Here's an example of using --sysconfdir together with --prefix (note that --sysconfdir alone does not change the install prefix):
git/rust> configure --sysconfdir=/my/prefix/etc --prefix=/my/prefix
git/rust> python x.py install
This installs Rust to /my/prefix and stores the configuration files in /my/prefix/etc.
|
How to install rust built from source to different prefix?
|
How do you x.py install rust built from git source to a prefix other than /usr/local?
I tried:
git/rust> python x.py install --prefix=/my/prefix
but it doesn't work:
error: Unrecognized option: 'prefix'
|
[
"The --prefix option is in the configure command that generates x.py. So its like:\ngit/rust> configure --prefix=/my/prefix\ngit/rust> python x.py install\n\n",
"To install Rust from source to a prefix other than /usr/local, you can use the --sysconfdir option when running the x.py script. This option allows you to specify the directory where the Rust installation's configuration files should be stored. By default, this directory is /usr/local/etc, but you can change it to any directory you want, including the prefix you specified.\nHere's an example of how you can use the --sysconfdir option to install Rust to a prefix other than /usr/local:\ngit/rust> python x.py install --sysconfdir=/my/prefix/etc\n\nThis will install Rust to the /my/prefix directory, and will store the configuration files in /my/prefix/etc.\n"
] |
[
0,
0
] |
[] |
[] |
[
"rust"
] |
stackoverflow_0074663472_rust.txt
|
Q:
How to disallow screenshots in ionic 3?
How can I prevent the user from taking screenshots in Ionic 3? I cannot find any info about how to disallow taking screenshots on Android using Ionic 3, for example using Ionic Native.
A:
Maybe take a look at this - "PrivacyScreenPlugin" at "https://www.npmjs.com/package/cordova-plugin-privacyscreen"?
I used the same to disable screenshots for my Android App and it worked fine.
A:
You can try the following links:
https://www.npmjs.com/package/cordova-plugin-prevent-screenshot
https://ourcodeworld.com/articles/read/168/how-to-disable-screenshots-within-a-cordova-application-in-android
A:
Use the commands below.
cordova plugin add cordova-plugin-privacyscreen
npm i cordova-plugin-privacyscreen
I hope this helps you.
|
How to disallow screenshots in ionic 3?
|
How can I prevent the user from taking screenshots in Ionic 3? I cannot find any info about how to disallow taking screenshots on Android using Ionic 3, for example using Ionic Native.
|
[
"Maybe take a look at this - \"PrivacyScreenPlugin\" at \"https://www.npmjs.com/package/cordova-plugin-privacyscreen\"?\nI used the same to disable screenshots for my Android App and it worked fine.\n",
"You can try the following links:\nhttps://www.npmjs.com/package/cordova-plugin-prevent-screenshot\nhttps://ourcodeworld.com/articles/read/168/how-to-disable-screenshots-within-a-cordova-application-in-android\n",
"Use Below Command.\n\ncordova plugin add cordova-plugin-privacyscreen\nnpm i cordova-plugin-privacyscreen\n\nI hope this is help you.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"android",
"ionic3",
"ionic_framework",
"ionic_native"
] |
stackoverflow_0051217232_android_ionic3_ionic_framework_ionic_native.txt
|
Q:
Did I break my project by running `npm audit fix --force`?
I was building a React project with Vite and it was going great. I needed to add some charts and found out about the recharts package and really liked it so downloaded it into my project with the command npm i recharts.
I get the following message:
high severity vulnerabilities
I then ran npm audit, npm audit fix and npm audit fix --force and got this:
lots of warnings
Now when I try to start up my project with npm run dev I get this error in the console:
Uncaught TypeError: import_events.default is not a constructor
It says it's coming from a file called Events.js but I do not have such a file in my project.
I tried running npm audit fix --force multiple times like my terminal told me to but it did not work.
A:
I'm not sure what exactly happened to your project, but it is likely caused by the security fixes npm applied; you can read more here. Reinstalling the modules and clearing the cache might do what you want:
for clearing the cache: npm cache clean --force
and for reinstalling modules: npm ci --force
A:
It's possible that running npm audit fix --force may have caused some changes to your project that are causing the errors you're seeing.
It's generally not recommended to use the --force flag with npm audit fix, as it can potentially cause problems with your project. Instead, you should try to carefully review the output of npm audit and fix any vulnerabilities manually, or use the --package-lock-only flag to update your package-lock.json file without modifying your project.
In this case, it may be best to try uninstalling the recharts package and then reinstalling it without using the --force flag. You can try running the following commands to do this:
npm uninstall recharts
npm install recharts
In some cases, deleting the node_modules folder and running npm install can help fix issues with a project. The node_modules folder contains all the dependencies for your project, and running npm install will reinstall them from the package.json file.
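If the project is under version control and the lockfile was committed before the forced fix (an assumption), another option is to restore the previous dependency state and reinstall from it:
git checkout -- package.json package-lock.json
rm -rf node_modules
npm ci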
|
Did I break my project by running `npm audit fix --force`?
|
I was building a React project with Vite and it was going great. I needed to add some charts and found out about the recharts package and really liked it so downloaded it into my project with the command npm i recharts.
I get the following message:
high severity vulnerabilities
I then ran npm audit, npm audit fix and npm audit fix --force and got this:
lots of warnings
Now when I try to start up my project with npm run dev I get this error in the console:
Uncaught TypeError: import_events.default is not a constructor
It says it's coming from a file called Events.js but I do not have such a file in my project.
I tried running npm audit fix --force multiple times like my terminal told me to but it did not work.
|
[
"I'm not sure what exactly happened to your project but it seems that's because security issues you can read more in here ,I think reinstalling modules and clearing cache might do what you want :\nfor clearing cache : npm cache clean –force\nand for reinstalling modules : npm ci --force\n",
"It's possible that running npm audit fix --force may have caused some changes to your project that are causing the errors you're seeing.\nIt's generally not recommended to use the --force flag with npm audit fix, as it can potentially cause problems with your project. Instead, you should try to carefully review the output of npm audit and fix any vulnerabilities manually, or use the --package-lock-only flag to update your package-lock.json file without modifying your project.\nIn this case, it may be best to try uninstalling the recharts package and then reinstalling it without using the --force flag. You can try running the following commands to do this:\nnpm uninstall recharts\nnpm install recharts\n\nIn some cases, deleting the node_modules folder and running npm install can help fix issues with a project. The node_modules folder contains all the dependencies for your project, and running npm install will reinstall them from the package.json file.\n"
] |
[
0,
0
] |
[] |
[] |
[
"javascript",
"reactjs",
"recharts",
"vite"
] |
stackoverflow_0074664288_javascript_reactjs_recharts_vite.txt
|
Q:
Thread pooling in C++11
Relevant questions:
About C++11:
C++11: std::thread pooled?
Will async(launch::async) in C++11 make thread pools obsolete for avoiding expensive thread creation?
About Boost:
C++ boost thread reusing threads
boost::thread and creating a pool of them!
How do I get a pool of threads to send tasks to, without creating and deleting them over and over again? This means persistent threads to resynchronize without joining.
I have code that looks like this:
namespace {
std::vector<std::thread> workers;
int total = 4;
int arr[5] = {0}; // extra slot to hold the minimum
void each_thread_does(int i) {
arr[i] += 2;
}
}
int main(int argc, char *argv[]) {
for (int i = 0; i < 8; ++i) { // for 8 iterations,
for (int j = 0; j < 4; ++j) {
workers.push_back(std::thread(each_thread_does, j));
}
for (std::thread &t: workers) {
if (t.joinable()) {
t.join();
}
}
arr[4] = *std::min_element(arr, arr+4);
}
return 0;
}
Instead of creating and joining threads each iteration, I'd prefer to send tasks to my worker threads each iteration and only create them once.
A:
This is adapted from my answer to another very similar post.
Let's build a ThreadPool class:
class ThreadPool {
public:
void Start();
void QueueJob(const std::function<void()>& job);
void Stop();
    bool busy();
private:
void ThreadLoop();
bool should_terminate = false; // Tells threads to stop looking for jobs
std::mutex queue_mutex; // Prevents data races to the job queue
std::condition_variable mutex_condition; // Allows threads to wait on new jobs or termination
std::vector<std::thread> threads;
std::queue<std::function<void()>> jobs;
};
ThreadPool::Start
For an efficient threadpool implementation, once threads are created according to num_threads, it's better not to
create new ones or destroy old ones (by joining). There will be a performance penalty, and it might even make your
application go slower than the serial version. Thus, we keep a pool of threads that can be used at any time (if they
aren't already running a job).
Each thread should be running its own infinite loop, constantly waiting for new tasks to grab and run.
void ThreadPool::Start() {
const uint32_t num_threads = std::thread::hardware_concurrency(); // Max # of threads the system supports
threads.resize(num_threads);
for (uint32_t i = 0; i < num_threads; i++) {
        threads.at(i) = std::thread(&ThreadPool::ThreadLoop, this); // bind the member function to this instance
}
}
ThreadPool::ThreadLoop
The infinite loop function. This is a while (true) loop waiting for the task queue to open up.
void ThreadPool::ThreadLoop() {
while (true) {
std::function<void()> job;
{
std::unique_lock<std::mutex> lock(queue_mutex);
mutex_condition.wait(lock, [this] {
return !jobs.empty() || should_terminate;
});
if (should_terminate) {
return;
}
job = jobs.front();
jobs.pop();
}
job();
}
}
ThreadPool::QueueJob
Add a new job to the pool; use a lock so that there isn't a data race.
void ThreadPool::QueueJob(const std::function<void()>& job) {
{
std::unique_lock<std::mutex> lock(queue_mutex);
jobs.push(job);
}
mutex_condition.notify_one();
}
To use it:
thread_pool->QueueJob([] { /* ... */ });
ThreadPool::busy
bool ThreadPool::busy() {
    bool poolbusy;
    {
        std::unique_lock<std::mutex> lock(queue_mutex);
        poolbusy = !jobs.empty(); // busy while jobs remain queued
    }
    return poolbusy;
}
The busy() function can be used in a while loop, such that the main thread can wait for the threadpool to complete all the tasks before calling the threadpool destructor.
ThreadPool::Stop
Stop the pool.
void ThreadPool::Stop() {
{
std::unique_lock<std::mutex> lock(queue_mutex);
should_terminate = true;
}
mutex_condition.notify_all();
for (std::thread& active_thread : threads) {
active_thread.join();
}
threads.clear();
}
Once you integrate these ingredients, you have your own dynamic threading pool. These threads always run, waiting for
job to do.
I apologize if there are some syntax errors; I typed this code from memory. Sorry that I cannot provide
you the complete thread pool code; that would violate my job integrity.
Notes:
The anonymous code blocks are used so that when they are exited, the std::unique_lock variables created within them
go out of scope, unlocking the mutex.
ThreadPool::Stop will not terminate any currently running jobs, it just waits for them to finish via active_thread.join().
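Wiring the class above to the question's loop might look like this (a sketch; it requires <algorithm> for std::min_element, and note that busy() only reflects the queue, so a job that has already been dequeued may still be running when the wait ends):
int main() {
    ThreadPool pool;
    pool.Start();
    int arr[5] = {0};
    for (int i = 0; i < 8; ++i) {
        for (int j = 0; j < 4; ++j) {
            pool.QueueJob([&arr, j] { arr[j] += 2; });
        }
        while (pool.busy()) {} // crude wait for this batch's queue to drain
        arr[4] = *std::min_element(arr, arr + 4);
    }
    pool.Stop();
}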
A:
You can use C++ Thread Pool Library, https://github.com/vit-vit/ctpl.
Then the code your wrote can be replaced with the following
#include <ctpl.h> // or <ctpl_stl.h> if you do not have the Boost library
int main (int argc, char *argv[]) {
ctpl::thread_pool p(2 /* two threads in the pool */);
    int arr[5] = {0}; // extra slot to hold the minimum
std::vector<std::future<void>> results(4);
for (int i = 0; i < 8; ++i) { // for 8 iterations,
for (int j = 0; j < 4; ++j) {
results[j] = p.push([&arr, j](int){ arr[j] +=2; });
}
for (int j = 0; j < 4; ++j) {
results[j].get();
}
        arr[4] = *std::min_element(arr, arr + 4);
}
}
You will get the desired number of threads and will not create and delete them over and over again on the iterations.
A:
A pool of threads means that all your threads are running, all the time – in other words, the thread function never returns. To give the threads something meaningful to do, you have to design a system of inter-thread communication, both for the purpose of telling the thread that there's something to do, as well as for communicating the actual work data.
Typically this will involve some kind of concurrent data structure, and each thread would presumably sleep on some kind of condition variable, which would be notified when there's work to do. Upon receiving the notification, one or several of the threads wake up, recover a task from the concurrent data structure, process it, and store the result in an analogous fashion.
The thread would then go on to check whether there's even more work to do, and if not go back to sleep.
The upshot is that you have to design all this yourself, since there isn't a natural notion of "work" that's universally applicable. It's quite a bit of work, and there are some subtle issues you have to get right. (You can program in Go if you like a system which takes care of thread management for you behind the scenes.)
A:
A threadpool is at core a set of threads all bound to a function working as an event loop. These threads will endlessly wait for a task to be executed, or their own termination.
The threadpool job is to provide an interface to submit jobs, define (and perhaps modify) the policy of running these jobs (scheduling rules, thread instantiation, size of the pool), and monitor the status of the threads and related resources.
So for a versatile pool, one must start by defining what a task is, how it is launched, interrupted, what is the result (see the notion of promise and future for that question), what sort of events the threads will have to respond to, how they will handle them, how these events shall be discriminated from the ones handled by the tasks. This can become quite complicated as you can see, and impose restrictions on how the threads will work, as the solution becomes more and more involved.
The current tooling for handling events is fairly barebones(*): primitives like mutexes, condition variables, and a few abstractions on top of that (locks, barriers). But in some cases, these abstractions may turn out to be unfit (see this related question), and one must revert to using the primitives.
Other problems have to be managed too:
signal
i/o
hardware (processor affinity, heterogenous setup)
How would these play out in your setting?
This answer to a similar question points to an existing implementation meant for boost and the stl.
I offered a very crude implementation of a threadpool for another question, which doesn't address many problems outlined above. You might want to build up on it. You might also want to have a look of existing frameworks in other languages, to find inspiration.
(*) I don't see that as a problem, quite to the contrary. I think it's the very spirit of C++ inherited from C.
A:
Following PhD EcE's suggestion (https://stackoverflow.com/users/3818417/phd-ece), I implemented the thread pool:
function_pool.h
#pragma once
#include <queue>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <cassert>
class Function_pool
{
private:
std::queue<std::function<void()>> m_function_queue;
std::mutex m_lock;
std::condition_variable m_data_condition;
std::atomic<bool> m_accept_functions;
public:
Function_pool();
~Function_pool();
void push(std::function<void()> func);
void done();
void infinite_loop_func();
};
function_pool.cpp
#include "function_pool.h"
Function_pool::Function_pool() : m_function_queue(), m_lock(), m_data_condition(), m_accept_functions(true)
{
}
Function_pool::~Function_pool()
{
}
void Function_pool::push(std::function<void()> func)
{
std::unique_lock<std::mutex> lock(m_lock);
m_function_queue.push(func);
// when we send the notification immediately, the consumer will try to get the lock , so unlock asap
lock.unlock();
m_data_condition.notify_one();
}
void Function_pool::done()
{
std::unique_lock<std::mutex> lock(m_lock);
m_accept_functions = false;
lock.unlock();
// when we send the notification immediately, the consumer will try to get the lock , so unlock asap
m_data_condition.notify_all();
//notify all waiting threads.
}
void Function_pool::infinite_loop_func()
{
std::function<void()> func;
while (true)
{
{
std::unique_lock<std::mutex> lock(m_lock);
m_data_condition.wait(lock, [this]() {return !m_function_queue.empty() || !m_accept_functions; });
if (!m_accept_functions && m_function_queue.empty())
{
//lock will be release automatically.
//finish the thread loop and let it join in the main thread.
return;
}
func = m_function_queue.front();
m_function_queue.pop();
//release the lock
}
func();
}
}
main.cpp
#include "function_pool.h"
#include <string>
#include <iostream>
#include <mutex>
#include <functional>
#include <thread>
#include <vector>
Function_pool func_pool;
class quit_worker_exception : public std::exception {};
void example_function()
{
std::cout << "bla" << std::endl;
}
int main()
{
std::cout << "stating operation" << std::endl;
int num_threads = std::thread::hardware_concurrency();
std::cout << "number of threads = " << num_threads << std::endl;
std::vector<std::thread> thread_pool;
for (int i = 0; i < num_threads; i++)
{
thread_pool.push_back(std::thread(&Function_pool::infinite_loop_func, &func_pool));
}
//here we should send our functions
for (int i = 0; i < 50; i++)
{
func_pool.push(example_function);
}
func_pool.done();
for (unsigned int i = 0; i < thread_pool.size(); i++)
{
thread_pool.at(i).join();
}
}
A:
You can use thread_pool from boost library:
void my_task(){...}
int main(){
  int threadNumbers = std::thread::hardware_concurrency();
boost::asio::thread_pool pool(threadNumbers);
// Submit a function to the pool.
boost::asio::post(pool, my_task);
// Submit a lambda object to the pool.
boost::asio::post(pool, []() {
...
});
}
You also can use threadpool from open source community:
void first_task() {...}
void second_task() {...}
int main(){
  int threadNumbers = std::thread::hardware_concurrency();
pool tp(threadNumbers);
// Add some tasks to the pool.
tp.schedule(&first_task);
tp.schedule(&second_task);
}
A:
Edit: This now requires C++17 and concepts. (As of 9/12/16, only g++ 6.0+ is sufficient.)
The template deduction is a lot more accurate because of it, though, so it's worth the effort of getting a newer compiler. I've not yet found a function that requires explicit template arguments.
It also now takes any appropriate callable object (and is still statically typesafe!!!).
It also now includes an optional green threading priority thread pool using the same API. This class is POSIX only, though. It uses the ucontext_t API for userspace task switching.
I created a simple library for this. An example of usage is given below. (I'm answering this because it was one of the things I found before I decided it was necessary to write it myself.)
bool is_prime(int n){
// Determine if n is prime.
}
int main(){
thread_pool pool(8); // 8 threads
list<future<bool>> results;
for(int n = 2;n < 10000;n++){
// Submit a job to the pool.
results.emplace_back(pool.async(is_prime, n));
}
int n = 2;
for(auto i = results.begin();i != results.end();i++, n++){
// i is an iterator pointing to a future representing the result of is_prime(n)
cout << n << " ";
bool prime = i->get(); // Wait for the task is_prime(n) to finish and get the result.
if(prime)
cout << "is prime";
else
cout << "is not prime";
cout << endl;
}
}
You can pass async any function with any (or void) return value and any (or no) arguments and it will return a corresponding std::future. To get the result (or just wait until a task has completed) you call get() on the future.
Here's the github: https://github.com/Tyler-Hardin/thread_pool.
A:
Something like this might help (taken from a working app).
#include <memory>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
struct thread_pool {
typedef std::unique_ptr<boost::asio::io_service::work> asio_worker;
thread_pool(int threads) :service(), service_worker(new asio_worker::element_type(service)) {
for (int i = 0; i < threads; ++i) {
auto worker = [this] { return service.run(); };
grp.add_thread(new boost::thread(worker));
}
}
template<class F>
void enqueue(F f) {
service.post(f);
}
~thread_pool() {
service_worker.reset();
grp.join_all();
service.stop();
}
private:
boost::asio::io_service service;
asio_worker service_worker;
boost::thread_group grp;
};
You can use it like this:
thread_pool pool(2);
pool.enqueue([] {
std::cout << "Hello from Task 1\n";
});
pool.enqueue([] {
std::cout << "Hello from Task 2\n";
});
Keep in mind that reinventing an efficient asynchronous queuing mechanism is not trivial.
Boost::asio::io_service is a very efficient implementation, or actually is a collection of platform-specific wrappers (e.g. it wraps I/O completion ports on Windows).
A:
Looks like a thread pool is a very popular problem/exercise :-)
I recently wrote one in modern C++; it’s owned by me and publicly available here - https://github.com/yurir-dev/threadpool
It supports templated return values, core pinning, and ordering of some tasks.
All of the implementation is in two .h files.
So, the original question will be something like this:
#include "tp/threadpool.h"
int arr[5] = { 0 };
concurency::threadPool<void> tp;
tp.start(std::thread::hardware_concurrency());
std::vector<std::future<void>> futures;
for (int i = 0; i < 8; ++i) { // for 8 iterations,
for (int j = 0; j < 4; ++j) {
futures.push_back(tp.push([&arr, j]() {
arr[j] += 2;
}));
}
}
// wait until all pushed tasks are finished.
for (auto& f : futures)
f.get();
// or just tp.end(); // will kill all the threads
arr[4] = *std::min_element(arr, arr + 4);
A:
I found that a pending task's future.get() call hangs on the caller's side if the thread pool is terminated while tasks are still in the task queue. How can I set a future exception inside the thread pool when only the std::function wrapper is stored?
template <class F, class... Args>
std::future<std::result_of_t<F(Args...)>> enqueue(F &&f, Args &&...args) {
auto task = std::make_shared<std::packaged_task<std::result_of_t<F(Args...)>()>>(
std::bind(std::forward<F>(f), std::forward<Args>(args)...));
    std::future<std::result_of_t<F(Args...)>> res = task->get_future();
{
std::unique_lock<std::mutex> lock(_mutex);
_tasks.push([task]() -> void { (*task)(); });
}
return res;
}
class StdThreadPool {
std::vector<std::thread> _workers;
std::priority_queue<TASK> _tasks;
...
}
struct TASK {
//int _func_return_value;
std::function<void()> _func;
int priority;
...
}
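One way out (a sketch, not a claim about the original pool's design): std::packaged_task stores a broken_promise error in its shared state when it is destroyed before ever being invoked, so draining the queue on shutdown makes every pending future.get() throw std::future_error instead of hanging. The _stop flag, _cv condition variable, and _workers vector below are hypothetical:
void StdThreadPool::Stop() {
    {
        std::unique_lock<std::mutex> lock(_mutex);
        _stop = true;                         // hypothetical shutdown flag
        while (!_tasks.empty()) _tasks.pop(); // destroys the wrapped packaged_tasks;
                                              // their futures now throw
                                              // std::future_error(broken_promise) on get()
    }
    _cv.notify_all();                         // hypothetical condition variable
    for (auto &w : _workers) w.join();
}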
|
Thread pooling in C++11
|
Relevant questions:
About C++11:
C++11: std::thread pooled?
Will async(launch::async) in C++11 make thread pools obsolete for avoiding expensive thread creation?
About Boost:
C++ boost thread reusing threads
boost::thread and creating a pool of them!
How do I get a pool of threads to send tasks to, without creating and deleting them over and over again? This means persistent threads to resynchronize without joining.
I have code that looks like this:
namespace {
std::vector<std::thread> workers;
int total = 4;
int arr[5] = {0}; // extra slot to hold the minimum
void each_thread_does(int i) {
arr[i] += 2;
}
}
int main(int argc, char *argv[]) {
for (int i = 0; i < 8; ++i) { // for 8 iterations,
for (int j = 0; j < 4; ++j) {
workers.push_back(std::thread(each_thread_does, j));
}
for (std::thread &t: workers) {
if (t.joinable()) {
t.join();
}
}
arr[4] = *std::min_element(arr, arr+4);
}
return 0;
}
Instead of creating and joining threads each iteration, I'd prefer to send tasks to my worker threads each iteration and only create them once.
|
[
"This is adapted from my answer to another very similar post.\nLet's build a ThreadPool class:\nclass ThreadPool {\npublic:\n void Start();\n void QueueJob(const std::function<void()>& job);\n void Stop();\n void busy();\n\nprivate:\n void ThreadLoop();\n\n bool should_terminate = false; // Tells threads to stop looking for jobs\n std::mutex queue_mutex; // Prevents data races to the job queue\n std::condition_variable mutex_condition; // Allows threads to wait on new jobs or termination \n std::vector<std::thread> threads;\n std::queue<std::function<void()>> jobs;\n};\n\n\nThreadPool::Start\n\nFor an efficient threadpool implementation, once threads are created according to num_threads, it's better not to\ncreate new ones or destroy old ones (by joining). There will be a performance penalty, and it might even make your\napplication go slower than the serial version. Thus, we keep a pool of threads that can be used at any time (if they\naren't already running a job).\nEach thread should be running its own infinite loop, constantly waiting for new tasks to grab and run.\nvoid ThreadPool::Start() {\n const uint32_t num_threads = std::thread::hardware_concurrency(); // Max # of threads the system supports\n threads.resize(num_threads);\n for (uint32_t i = 0; i < num_threads; i++) {\n threads.at(i) = std::thread(ThreadLoop);\n }\n}\n\n\nThreadPool::ThreadLoop\n\nThe infinite loop function. This is a while (true) loop waiting for the task queue to open up.\nvoid ThreadPool::ThreadLoop() {\n while (true) {\n std::function<void()> job;\n {\n std::unique_lock<std::mutex> lock(queue_mutex);\n mutex_condition.wait(lock, [this] {\n return !jobs.empty() || should_terminate;\n });\n if (should_terminate) {\n return;\n }\n job = jobs.front();\n jobs.pop();\n }\n job();\n }\n}\n\n\nThreadPool::QueueJob\n\nAdd a new job to the pool; use a lock so that there isn't a data race.\nvoid ThreadPool::QueueJob(const std::function<void()>& job) {\n {\n std::unique_lock<std::mutex> lock(queue_mutex);\n jobs.push(job);\n }\n mutex_condition.notify_one();\n}\n\nTo use it:\nthread_pool->QueueJob([] { /* ... */ });\n\n\nThreadPool::busy\n\nvoid ThreadPool::busy() {\n bool poolbusy;\n {\n std::unique_lock<std::mutex> lock(queue_mutex);\n poolbusy = jobs.empty();\n }\n return poolbusy;\n}\n\nThe busy() function can be used in a while loop, such that the main thread can wait the threadpool to complete all the tasks before calling the threadpool destructor.\n\nThreadPool::Stop\n\nStop the pool.\nvoid ThreadPool::Stop() {\n {\n std::unique_lock<std::mutex> lock(queue_mutex);\n should_terminate = true;\n }\n mutex_condition.notify_all();\n for (std::thread& active_thread : threads) {\n active_thread.join();\n }\n threads.clear();\n}\n\nOnce you integrate these ingredients, you have your own dynamic threading pool. These threads always run, waiting for\njob to do.\nI apologize if there are some syntax errors, I typed this code and and I have a bad memory. Sorry that I cannot provide\nyou the complete thread pool code; that would violate my job integrity.\nNotes:\n\nThe anonymous code blocks are used so that when they are exited, the std::unique_lock variables created within them\ngo out of scope, unlocking the mutex.\nThreadPool::Stop will not terminate any currently running jobs, it just waits for them to finish via active_thread.join().\n\n",
"You can use C++ Thread Pool Library, https://github.com/vit-vit/ctpl.\nThen the code your wrote can be replaced with the following\n#include <ctpl.h> // or <ctpl_stl.h> if ou do not have Boost library\n\nint main (int argc, char *argv[]) {\n ctpl::thread_pool p(2 /* two threads in the pool */);\n int arr[4] = {0};\n std::vector<std::future<void>> results(4);\n for (int i = 0; i < 8; ++i) { // for 8 iterations,\n for (int j = 0; j < 4; ++j) {\n results[j] = p.push([&arr, j](int){ arr[j] +=2; });\n }\n for (int j = 0; j < 4; ++j) {\n results[j].get();\n }\n arr[4] = std::min_element(arr, arr + 4);\n }\n}\n\nYou will get the desired number of threads and will not create and delete them over and over again on the iterations.\n",
"A pool of threads means that all your threads are running, all the time – in other words, the thread function never returns. To give the threads something meaningful to do, you have to design a system of inter-thread communication, both for the purpose of telling the thread that there's something to do, as well as for communicating the actual work data.\nTypically this will involve some kind of concurrent data structure, and each thread would presumably sleep on some kind of condition variable, which would be notified when there's work to do. Upon receiving the notification, one or several of the threads wake up, recover a task from the concurrent data structure, process it, and store the result in an analogous fashion.\nThe thread would then go on to check whether there's even more work to do, and if not go back to sleep.\nThe upshot is that you have to design all this yourself, since there isn't a natural notion of \"work\" that's universally applicable. It's quite a bit of work, and there are some subtle issues you have to get right. (You can program in Go if you like a system which takes care of thread management for you behind the scenes.)\n",
"A threadpool is at core a set of threads all bound to a function working as an event loop. These threads will endlessly wait for a task to be executed, or their own termination.\nThe threadpool job is to provide an interface to submit jobs, define (and perhaps modify) the policy of running these jobs (scheduling rules, thread instantiation, size of the pool), and monitor the status of the threads and related resources.\nSo for a versatile pool, one must start by defining what a task is, how it is launched, interrupted, what is the result (see the notion of promise and future for that question), what sort of events the threads will have to respond to, how they will handle them, how these events shall be discriminated from the ones handled by the tasks. This can become quite complicated as you can see, and impose restrictions on how the threads will work, as the solution becomes more and more involved.\nThe current tooling for handling events is fairly barebones(*): primitives like mutexes, condition variables, and a few abstractions on top of that (locks, barriers). But in some cases, these abstrations may turn out to be unfit (see this related question), and one must revert to using the primitives. \nOther problems have to be managed too:\n\nsignal\ni/o\nhardware (processor affinity, heterogenous setup)\n\nHow would these play out in your setting?\nThis answer to a similar question points to an existing implementation meant for boost and the stl.\nI offered a very crude implementation of a threadpool for another question, which doesn't address many problems outlined above. You might want to build up on it. You might also want to have a look of existing frameworks in other languages, to find inspiration.\n\n(*) I don't see that as a problem, quite to the contrary. I think it's the very spirit of C++ inherited from C.\n",
"Follwoing [PhD EcE](https://stackoverflow.com/users/3818417/phd-ece) suggestion, I implemented the thread pool:\n\n\nfunction_pool.h\n\n#pragma once\n#include <queue>\n#include <functional>\n#include <mutex>\n#include <condition_variable>\n#include <atomic>\n#include <cassert>\n\nclass Function_pool\n{\n\nprivate:\n std::queue<std::function<void()>> m_function_queue;\n std::mutex m_lock;\n std::condition_variable m_data_condition;\n std::atomic<bool> m_accept_functions;\n\npublic:\n\n Function_pool();\n ~Function_pool();\n void push(std::function<void()> func);\n void done();\n void infinite_loop_func();\n};\n\n\nfunction_pool.cpp\n\n#include \"function_pool.h\"\n\nFunction_pool::Function_pool() : m_function_queue(), m_lock(), m_data_condition(), m_accept_functions(true)\n{\n}\n\nFunction_pool::~Function_pool()\n{\n}\n\nvoid Function_pool::push(std::function<void()> func)\n{\n std::unique_lock<std::mutex> lock(m_lock);\n m_function_queue.push(func);\n // when we send the notification immediately, the consumer will try to get the lock , so unlock asap\n lock.unlock();\n m_data_condition.notify_one();\n}\n\nvoid Function_pool::done()\n{\n std::unique_lock<std::mutex> lock(m_lock);\n m_accept_functions = false;\n lock.unlock();\n // when we send the notification immediately, the consumer will try to get the lock , so unlock asap\n m_data_condition.notify_all();\n //notify all waiting threads.\n}\n\nvoid Function_pool::infinite_loop_func()\n{\n std::function<void()> func;\n while (true)\n {\n {\n std::unique_lock<std::mutex> lock(m_lock);\n m_data_condition.wait(lock, [this]() {return !m_function_queue.empty() || !m_accept_functions; });\n if (!m_accept_functions && m_function_queue.empty())\n {\n //lock will be release automatically.\n //finish the thread loop and let it join in the main thread.\n return;\n }\n func = m_function_queue.front();\n m_function_queue.pop();\n //release the lock\n }\n func();\n }\n}\n\n\nmain.cpp\n\n#include \"function_pool.h\"\n#include <string>\n#include <iostream>\n#include <mutex>\n#include <functional>\n#include <thread>\n#include <vector>\n\nFunction_pool func_pool;\n\nclass quit_worker_exception : public std::exception {};\n\nvoid example_function()\n{\n std::cout << \"bla\" << std::endl;\n}\n\nint main()\n{\n std::cout << \"stating operation\" << std::endl;\n int num_threads = std::thread::hardware_concurrency();\n std::cout << \"number of threads = \" << num_threads << std::endl;\n std::vector<std::thread> thread_pool;\n for (int i = 0; i < num_threads; i++)\n {\n thread_pool.push_back(std::thread(&Function_pool::infinite_loop_func, &func_pool));\n }\n\n //here we should send our functions\n for (int i = 0; i < 50; i++)\n {\n func_pool.push(example_function);\n }\n func_pool.done();\n for (unsigned int i = 0; i < thread_pool.size(); i++)\n {\n thread_pool.at(i).join();\n }\n}\n\n",
"You can use thread_pool from boost library:\nvoid my_task(){...}\n\nint main(){\n int threadNumbers = thread::hardware_concurrency();\n boost::asio::thread_pool pool(threadNumbers);\n\n // Submit a function to the pool.\n boost::asio::post(pool, my_task);\n\n // Submit a lambda object to the pool.\n boost::asio::post(pool, []() {\n ...\n });\n}\n\n\nYou also can use threadpool from open source community:\nvoid first_task() {...} \nvoid second_task() {...}\n\nint main(){\n int threadNumbers = thread::hardware_concurrency();\n pool tp(threadNumbers);\n\n // Add some tasks to the pool.\n tp.schedule(&first_task);\n tp.schedule(&second_task);\n}\n\n",
"Edit: This now requires C++17 and concepts. (As of 9/12/16, only g++ 6.0+ is sufficient.)\nThe template deduction is a lot more accurate because of it, though, so it's worth the effort of getting a newer compiler. I've not yet found a function that requires explicit template arguments.\nIt also now takes any appropriate callable object (and is still statically typesafe!!!).\nIt also now includes an optional green threading priority thread pool using the same API. This class is POSIX only, though. It uses the ucontext_t API for userspace task switching.\n\nI created a simple library for this. An example of usage is given below. (I'm answering this because it was one of the things I found before I decided it was necessary to write it myself.)\nbool is_prime(int n){\n // Determine if n is prime.\n}\n\nint main(){\n thread_pool pool(8); // 8 threads\n\n list<future<bool>> results;\n for(int n = 2;n < 10000;n++){\n // Submit a job to the pool.\n results.emplace_back(pool.async(is_prime, n));\n }\n\n int n = 2;\n for(auto i = results.begin();i != results.end();i++, n++){\n // i is an iterator pointing to a future representing the result of is_prime(n)\n cout << n << \" \";\n bool prime = i->get(); // Wait for the task is_prime(n) to finish and get the result.\n if(prime)\n cout << \"is prime\";\n else\n cout << \"is not prime\";\n cout << endl;\n } \n}\n\nYou can pass async any function with any (or void) return value and any (or no) arguments and it will return a corresponding std::future. To get the result (or just wait until a task has completed) you call get() on the future.\nHere's the github: https://github.com/Tyler-Hardin/thread_pool.\n",
"Something like this might help (taken from a working app).\n#include <memory>\n#include <boost/asio.hpp>\n#include <boost/thread.hpp>\n\nstruct thread_pool {\n typedef std::unique_ptr<boost::asio::io_service::work> asio_worker;\n\n thread_pool(int threads) :service(), service_worker(new asio_worker::element_type(service)) {\n for (int i = 0; i < threads; ++i) {\n auto worker = [this] { return service.run(); };\n grp.add_thread(new boost::thread(worker));\n }\n }\n\n template<class F>\n void enqueue(F f) {\n service.post(f);\n }\n\n ~thread_pool() {\n service_worker.reset();\n grp.join_all();\n service.stop();\n }\n\nprivate:\n boost::asio::io_service service;\n asio_worker service_worker;\n boost::thread_group grp;\n};\n\nYou can use it like this:\nthread_pool pool(2);\n\npool.enqueue([] {\n std::cout << \"Hello from Task 1\\n\";\n});\n\npool.enqueue([] {\n std::cout << \"Hello from Task 2\\n\";\n});\n\nKeep in mind that reinventing an efficient asynchronous queuing mechanism is not trivial.\nBoost::asio::io_service is a very efficient implementation, or actually is a collection of platform-specific wrappers (e.g. it wraps I/O completion ports on Windows).\n",
"looks like threadpool is very popular problem/exercise :-)\nI recently wrote one in modern C++; it’s owned by me and publicly available here - https://github.com/yurir-dev/threadpool\nIt supports templated return values, core pinning, ordering of some tasks.\nall implementation in two .h files.\nSo, the original question will be something like this:\n#include \"tp/threadpool.h\"\n\nint arr[5] = { 0 };\n\nconcurency::threadPool<void> tp;\ntp.start(std::thread::hardware_concurrency());\n\nstd::vector<std::future<void>> futures;\nfor (int i = 0; i < 8; ++i) { // for 8 iterations,\n for (int j = 0; j < 4; ++j) {\n futures.push_back(tp.push([&arr, j]() {\n arr[j] += 2;\n }));\n }\n}\n\n// wait until all pushed tasks are finished.\nfor (auto& f : futures)\n f.get();\n// or just tp.end(); // will kill all the threads\n\narr[4] = *std::min_element(arr, arr + 4);\n\n",
"I found the pending tasks' future.get() call hangs on caller side if the thread pool gets terminated and leaves some tasks inside task queue. How to set future exception inside thread pool with only the wrapper std::function?\ntemplate <class F, class... Args>\nstd::future<std::result_of_t<F(Args...)>> enqueue(F &&f, Args &&...args) {\n auto task = std::make_shared<std::packaged_task<std::result_of_t<F(Args...)>()>>(\n std::bind(std::forward<F>(f), std::forward<Args>(args)...));\n std::future<return_type> res = task->get_future();\n {\n std::unique_lock<std::mutex> lock(_mutex);\n _tasks.push([task]() -> void { (*task)(); });\n }\n return res;\n}\n\nclass StdThreadPool {\n std::vector<std::thread> _workers;\n std::priority_queue<TASK> _tasks;\n ...\n}\n\nstruct TASK {\n //int _func_return_value;\n std::function<void()> _func;\n int priority;\n ...\n}\n\n"
] |
[
172,
110,
73,
21,
13,
10,
4,
4,
1,
0
] |
[] |
[] |
[
"c++",
"c++11",
"multithreading",
"stdthread",
"threadpool"
] |
stackoverflow_0015752659_c++_c++11_multithreading_stdthread_threadpool.txt
|
Q:
Whats the difference between Asp.NetCore.App and .NETCore.App
I want to understand the difference between the two .NET runtimes, Microsoft.AspNetCore.App and Microsoft.NETCore.App.
When I run the
dotnet --info
command,
I see many runtimes, and I want to know the difference between the two. Thanks
A:
From the Microsoft reference:
The ASP.NET Core shared framework (Microsoft.AspNetCore.App) contains assemblies that are developed and supported by Microsoft. Microsoft.AspNetCore.App is installed when the .NET Core 3.0 or later SDK is installed. The shared framework is the set of assemblies (.dll files) that are installed on the machine and includes a runtime component and a targeting pack.
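In practice, the split shows up in project files: Microsoft.NETCore.App is the base runtime that every .NET app references implicitly, while Microsoft.AspNetCore.App layers the ASP.NET Core assemblies on top of it. As a minimal sketch (the net6.0 target is an illustrative assumption), a non-web project that wants ASP.NET Core types can opt into the shared framework with a FrameworkReference:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- The base runtime (Microsoft.NETCore.App) is referenced implicitly -->
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Pulls in the ASP.NET Core shared framework (Microsoft.AspNetCore.App) -->
    <FrameworkReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>
</Project>
Web projects created with the Microsoft.NET.Sdk.Web SDK get this reference automatically, which is why both runtimes show up in dotnet --info on machines with ASP.NET Core installed.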
|
Whats the difference between Asp.NetCore.App and .NETCore.App
|
I want to understand the difference between the two .NET runtimes, Microsoft.AspNetCore.App and Microsoft.NETCore.App.
When I run the
dotnet --info
command,
I see many runtimes, and I want to know the difference between the two. Thanks
|
[
"From Microsoft Reference :\nThe ASP.NET Core shared framework (Microsoft.AspNetCore.App) contains assemblies that are developed and supported by Microsoft. Microsoft.AspNetCore.App is installed when the .NET Core 3.0 or later SDK is installed. The shared framework is the set of assemblies (.dll files) that are installed on the machine and includes a runtime component and a targeting pack\n"
] |
[
0
] |
[] |
[] |
[
".net_core",
"asp.net",
"c#"
] |
stackoverflow_0074656259_.net_core_asp.net_c#.txt
|
Q:
Permutation of a list where numbers must change its position in Prolog
I'm trying to solve this question for my assignment.
The predicate acceptable_permutation(L,R) should succeed only if R represents an acceptable permutation of the list L.
For example: [2,1,3] is not an acceptable permutation of the list [1,2,3] because 3 did not change its position.
The outputs are supposed to be like this:
?- acceptable_permutation([1,2,3],R).
R = [2,3,1] ;
R = [3,1,2] ;
false
?- acceptable_permutation([1,2,3,4],R).
R = [2,1,4,3] ;
R = [2,3,4,1] ;
R = [2,4,1,3] ;
R = [3,1,4,2] ;
R = [3,4,1,2] ;
R = [3,4,2,1] ;
R = [4,1,2,3] ;
R = [4,3,1,2] ;
R = [4,3,2,1] ;
false.
My outputs of my code however gives:
?- acceptable_permutation([1,2,3],R).
R = [1,2,3] ;
R = [1,3,2] ;
R = [2,1,3] ;
R = [2,3,1] ;
R = [3,1,2] ;
R = [3,2,1] ;
?- acceptable_permutation([1,2,3,4],R).
R = [1,2,3,4] ;
R = [1,2,4,3] ;
R = [1,3,2,4] ;
R = [1,3,4,2] ;
R = [1,4,2,3] ;
R = [1,4,3,2] ;
R = [2,1,3,4] ;
R = [2,1,4,3] ;
R = [2,3,1,4] ;
R = [2,3,4,1] ;
R = [2,4,1,3] ;
R = [2,4,3,1] ;
R = [3,1,2,4] ;
R = [3,1,4,2] ;
R = [3,2,1,4] ;
R = [3,2,4,1] ;
R = [3,4,1,2] ;
R = [3,4,2,1] ;
R = [4,1,2,3] ;
R = [4,1,3,2] ;
R = [4,2,1,3] ;
R = [4,2,3,1] ;
R = [4,3,1,2] ;
R = [4,3,2,1] ;
false.
My code is the following:
acceptable_permutation(list,list).
del(symbol,list,list).
del(X,[X|L1], L1).
del(X,[Y|L1], [Y|L2]):-
del(X,L1, L2).
acceptable_permutation([] , []).
acceptable_permutation(L, [X|P]):-
del(X, L, L1),
acceptable_permutation(L1, P).
Please tell me where exactly the problem is, so that my outputs match the correct outputs. I would appreciate it a lot if you showed me how exactly it is done.
A:
1) A permutation that has no fixed points is called a derangement. A funny name! TIL that, too.
2) Like this previous answer, we use maplist/3 and permutation/2.
3) Unlike this previous answer, we use prolog-dif instead of (\=)/2 to preserve logical purity.
logical-purity helps keep Prolog programs general, and—in this case—also increase efficiency.
Let's define list_derangement/2 like so:
list_derangement(Es, Xs) :-
maplist(dif, Es, Xs), % using dif/2
permutation(Es, Xs).
Note that the maplist/3 goal now precedes the other one!
This can speed up the detection of failing cases, compared to find_perm/2 from previous answer:
?- time(find_perm([1,2,3,4,5,6,7,8,9,10,11],
[_,_,_,_,_,_,_,_,_, _,11])).
% 303,403,801 inferences, 37.018 CPU in 37.364 seconds (99% CPU, 8196109 Lips)
false.
?- time(list_derangement([1,2,3,4,5,6,7,8,9,10,11],
[_,_,_,_,_,_,_,_,_, _,11])).
% 15,398 inferences, 0.009 CPU in 0.013 seconds (67% CPU, 1720831 Lips)
false.
For corresponding ground terms, the above implementations are of comparable speed:
?- time((find_perm([1,2,3,4,5,6,7,8,9,10,11],_),false)).
% 931,088,992 inferences, 107.320 CPU in 107.816 seconds (100% CPU, 8675793 Lips)
false.
?- time((list_derangement([1,2,3,4,5,6,7,8,9,10,11],_),false)).
% 1,368,212,629 inferences, 97.890 CPU in 98.019 seconds (100% CPU, 13976991 Lips)
false.
A:
The problem is that you don't check that all the elements of the output list are in a different position than the input list. You should add a predicate to check this, like that:
perm(L,LO):-
acceptable_permutation(L,LO),
different_place(L,LO).
different_place([A|T0],[B|T1]):-
A \= B,
different_place(T0,T1).
different_place([],[]).
?- perm([1,2,3],R).
R = [2, 3, 1]
R = [3, 1, 2]
false
Improvement 1: instead of creating your own predicate (in this case, different_place/2), you can use maplist/3 with \=/2 in this way:
perm(L,LO):-
acceptable_permutation(L,LO),
maplist(\=,L,LO).
Improvement 2: swi prolog offers the predicate permutation/2 that computes the permutations of a list. So you can solve your problem with only three lines:
find_perm(L,LO):-
permutation(L,LO),
maplist(\=,L,LO).
?- find_perm([1,2,3],L).
L = [2, 3, 1]
L = [3, 1, 2]
false
A:
Hey, you will not have the complete code because it tells me that there are problems with del, and this code works for me.
|
Permutation of a list where numbers must change its position in Prolog
|
I'm trying to solve this question for my assignment.
The predicate acceptable_permutation(L,R) should succeed only if R represents an acceptable permutation of the list L.
For example: [2,1,3] is not an acceptable permutation of the list [1,2,3] because 3 did not change its position.
The outputs are supposed to be like this:
?- acceptable_permutation([1,2,3],R).
R = [2,3,1] ;
R = [3,1,2] ;
false
?- acceptable_permutation([1,2,3,4],R).
R = [2,1,4,3] ;
R = [2,3,4,1] ;
R = [2,4,1,3] ;
R = [3,1,4,2] ;
R = [3,4,1,2] ;
R = [3,4,2,1] ;
R = [4,1,2,3] ;
R = [4,3,1,2] ;
R = [4,3,2,1] ;
false.
My outputs of my code however gives:
?- acceptable_permutation([1,2,3],R).
R = [1,2,3] ;
R = [1,3,2] ;
R = [2,1,3] ;
R = [2,3,1] ;
R = [3,1,2] ;
R = [3,2,1] ;
?- acceptable_permutation([1,2,3,4],R).
R = [1,2,3,4] ;
R = [1,2,4,3] ;
R = [1,3,2,4] ;
R = [1,3,4,2] ;
R = [1,4,2,3] ;
R = [1,4,3,2] ;
R = [2,1,3,4] ;
R = [2,1,4,3] ;
R = [2,3,1,4] ;
R = [2,3,4,1] ;
R = [2,4,1,3] ;
R = [2,4,3,1] ;
R = [3,1,2,4] ;
R = [3,1,4,2] ;
R = [3,2,1,4] ;
R = [3,2,4,1] ;
R = [3,4,1,2] ;
R = [3,4,2,1] ;
R = [4,1,2,3] ;
R = [4,1,3,2] ;
R = [4,2,1,3] ;
R = [4,2,3,1] ;
R = [4,3,1,2] ;
R = [4,3,2,1] ;
false.
My code is the following:
acceptable_permutation(list,list).
del(symbol,list,list).
del(X,[X|L1], L1).
del(X,[Y|L1], [Y|L2]):-
del(X,L1, L2).
acceptable_permutation([] , []).
acceptable_permutation(L, [X|P]):-
del(X, L, L1),
acceptable_permutation(L1, P).
Please tell me where exactly the problem is, so that my outputs match the correct outputs. I would appreciate it a lot if you showed me how exactly it is done.
|
[
"1) A permutation that has no fixed points is called derangement. A funny name! TIL that, too.\n2) Like this previous answer, we use maplist/3 and permutation/2.\n3) Unlike this previous answer, we use prolog-dif instead of (\\=)/2 to preserve logical purity.\nlogical-purity helps keep Prolog programs general, and—in this case—also increase efficiency.\nLet's define list_derangement/2 like so:\n\nlist_derangement(Es, Xs) :-\n maplist(dif, Es, Xs), % using dif/2\n permutation(Es, Xs).\n\nNote that the maplist/2 goal now precedes the other one! \nThis can speed up the detection of failing cases, compared to find_perm/2 from previous answer:\n\n?- time(find_perm([1,2,3,4,5,6,7,8,9,10,11],\n [_,_,_,_,_,_,_,_,_, _,11])).\n% 303,403,801 inferences, 37.018 CPU in 37.364 seconds (99% CPU, 8196109 Lips) \nfalse.\n\n?- time(list_derangement([1,2,3,4,5,6,7,8,9,10,11], \n [_,_,_,_,_,_,_,_,_, _,11])).\n% 15,398 inferences, 0.009 CPU in 0.013 seconds (67% CPU, 1720831 Lips) \nfalse.\n\nFor corresponding ground terms above implementations are of comparable speed:\n\n?- time((find_perm([1,2,3,4,5,6,7,8,9,10,11],_),false)).\n% 931,088,992 inferences, 107.320 CPU in 107.816 seconds (100% CPU, 8675793 Lips)\nfalse.\n\n?- time((list_derangement([1,2,3,4,5,6,7,8,9,10,11],_),false)).\n% 1,368,212,629 inferences, 97.890 CPU in 98.019 seconds (100% CPU, 13976991 Lips)\nfalse.\n\n",
"The problem is that you don't check that all the elements of the output list are in a different position than the input list. You should add a predicate to check this, like that:\nperm(L,LO):-\n acceptable_permutation(L,LO),\n different_place(L,LO).\n\ndifferent_place([A|T0],[B|T1]):-\n A \\= B,\n different_place(T0,T1).\ndifferent_place([],[]).\n\n?- perm([1,2,3],R).\nR = [2, 3, 1]\nR = [3, 1, 2]\nfalse\n\nImprovement 1: instead of creating your own predicate (in this case, different_place/2), you can use maplist/3 with \\=/2 in this way:\nperm(L,LO):-\n acceptable_permutation(L,LO),\n maplist(\\=,L,LO).\n\nImprovement 2: swi prolog offers the predicate permutation/2 that computes the permutations of a list. So you can solve your problem with only three lines:\nfind_perm(L,LO):-\n permutation(L,LO),\n maplist(\\=,L,LO).\n\n?- find_perm([1,2,3],L).\nL = [2, 3, 1]\nL = [3, 1, 2]\nfalse\n\n",
"Hey You will not have the complete code because it tells me that there are problems with del and this code works for me\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"prolog"
] |
stackoverflow_0055303433_prolog.txt
|
Q:
Why else statement not working in discord js 14
if (!attach) {
plr.send({
content: `<@${plr.user.id}>`,
embeds: [dmcontent]
}).catch(async err => {
if (err) console.log(err)
else console.log('test')
})
}
Else statement not executing.
A:
Thanks to @isaac.g for pointing out where this (probably) is in Discord.js. The code won't ever hit the else in your .catch, because if it hits the catch at all you should have an err. It's a dead branch of code, which could be replaced with just .catch(console.error). It also doesn't look like there's any need for the callback to be an async function.
You can think of your logic in the .catch this way:
try {
throw Error('oh no')
} catch (err) {
if (err) {
console.log(err)
} else {
console.log('This branch will never get executed!')
}
}
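As a sketch of what the original code presumably intended (log the error on failure, log 'test' on success), the success path belongs in .then rather than in a dead else branch inside .catch:
plr.send({
    content: `<@${plr.user.id}>`,
    embeds: [dmcontent]
})
    .then(() => console.log('test')) // runs only if the DM was sent successfully
    .catch(err => console.log(err)); // runs only if sending failed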
|
Why else statement not working in discord js 14
|
if (!attach) {
plr.send({
content: `<@${plr.user.id}>`,
embeds: [dmcontent]
}).catch(async err => {
if (err) console.log(err)
else console.log('test')
})
}
Else statement not executing.
|
[
"Thanks to @isaac.g for pointing out where this (probably) is in Discord.js. The code won't ever hit the else in your .catch, because if it hits the catch at all you should have an err. It's a dead branch of code, which could be replaced with just .catch(console.error). It also doesn't look like there's any need for the callback to be an async function.\nYou can think of your logic in the .catch this way:\ntry {\n throw Error('oh no')\n} catch (err) {\n if (err) {\n console.log(err)\n } else {\n console.log('This branch will never get executed!')\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"discord.js"
] |
stackoverflow_0074664267_discord.js.txt
|
Q:
UnboundLocalError: local variable 'dist' referenced before assignment
I am trying to train a model for supervised learning for a Hidden Markov Model (HMM) and test it on a set of observations; however, I keep getting this error. The goal is to predict the state based on the observations. How can I fix this, and how can I view the transition matrix?
The version for Pomegranate is 0.14.4
Trying this from the source: https://github.com/jmschrei/pomegranate/issues/1005
from pomegranate import *
import numpy as np
# Supervised method that calculates the transition matrix:
d1 = State(UniformDistribution.from_samples([3.243221498397177, 3.210684537495482, 3.227662201472816,
3.286410817416738, 3.290573650708864, 3.286058136226862, 3.266480693857006]))
d2 = State(UniformDistribution.from_samples([3.449282367485096, 1.97317859465635, 1.897551432353011,
3.454609351559659, 3.127357456033111, 1.779308337786426, 3.802891929694426, 3.359766157565077, 2.959428499979418]))
d3 = State(UniformDistribution.from_samples([1.892812118441474, 1.589353118681066, 2.09269978285637,
2.104391496570218, 1.656771181054144]))
model = HiddenMarkovModel()
model.add_states(d1, d2, d3)
# print(model.to_json())
model.bake()
model.fit([3.2, 6.7, 10.55], labels=[1, 2, 3], algorithm='labeled')
all_pred = model.predict([2.33, 1.22, 1.4, 10.6])
Error:
File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/", line 774, in <module>
model.bake()
File "pomegranate/hmm.pyx", line 1047, in pomegranate.hmm.HiddenMarkovModel.bake
UnboundLocalError: local variable 'dist' referenced before assignment
A:
To fix this error, you need to ensure that the transition matrix is defined before calling model.bake(). This can be done by using the following code to define the transition matrix:
# Define the transition matrix
transition_matrix = np.array([[0.7, 0.3, 0.0],
[0.3, 0.7, 0.0],
[0.0, 0.3, 0.7]])
model.set_transition_matrix(transition_matrix)
# Bake the model
model.bake()
# Fit the model
model.fit([3.2, 6.7, 10.55], labels=[1, 2, 3], algorithm='labeled')
# Predict the states
all_pred = model.predict([2.33, 1.22, 1.4, 10.6])
# View the transition matrix
print(model.transitions)
A:
This error occurs because the bake() method is called before the model is properly defined. In particular, the error occurs because you are calling the bake() method before you have added any transitions between the states in your model.
To fix this error, you need to add transitions between the states in your model before calling the bake() method. You can add transitions using the add_transition() method, like this:
# Supervised method that calculates the transition matrix:
d1 = State(UniformDistribution.from_samples([3.243221498397177, 3.210684537495482, 3.227662201472816,
3.286410817416738, 3.290573650708864, 3.286058136226862, 3.266480693857006]))
d2 = State(UniformDistribution.from_samples([3.449282367485096, 1.97317859465635, 1.897551432353011,
3.454609351559659, 3.127357456033111, 1.779308337786426, 3.802891929694426, 3.359766157565077, 2.959428499979418]))
d3 = State(UniformDistribution.from_samples([1.892812118441474, 1.589353118681066, 2.09269978285637,
2.104391496570218, 1.656771181054144]))
model = HiddenMarkovModel()
model.add_states(d1, d2, d3)
# Add transitions between the states
model.add_transition(model.start, d1, 0.33)
model.add_transition(model.start, d2, 0.33)
model.add_transition(model.start, d3, 0.33)
model.add_transition(d1, d1, 0.33)
model.add_transition(d1, d2, 0.33)
model.add_transition(d1, d3, 0.33)
model.add_transition(d2, d1, 0.33)
model.add_transition(d2, d2, 0.33)
model.add_transition(d2, d3, 0.33)
model.add_transition(d3, d1, 0.33)
model.add_transition(d3, d2, 0.33)
model.add_transition(d3, d3, 0.33)
# Call the bake() method to finalize the model
model.bake()
# Fit the model on the training data and labels
model.fit([3.2, 6.7, 10.55], labels=[1, 2, 3], algorithm='labeled')
# Use the model to predict the states for a set of observations
all_pred = model.predict([2.33, 1.22, 1.4, 10.6])
# View the transition matrix for the model
print(model.dense_transition_matrix())
|
UnboundLocalError: local variable 'dist' referenced before assignment
|
I am trying to train a model for supervised learning for a Hidden Markov Model (HMM) and test it on a set of observations; however, I keep getting this error. The goal is to predict the state based on the observations. How can I fix this, and how can I view the transition matrix?
The version for Pomegranate is 0.14.4
Trying this from the source: https://github.com/jmschrei/pomegranate/issues/1005
from pomegranate import *
import numpy as np
# Supervised method that calculates the transition matrix:
d1 = State(UniformDistribution.from_samples([3.243221498397177, 3.210684537495482, 3.227662201472816,
3.286410817416738, 3.290573650708864, 3.286058136226862, 3.266480693857006]))
d2 = State(UniformDistribution.from_samples([3.449282367485096, 1.97317859465635, 1.897551432353011,
3.454609351559659, 3.127357456033111, 1.779308337786426, 3.802891929694426, 3.359766157565077, 2.959428499979418]))
d3 = State(UniformDistribution.from_samples([1.892812118441474, 1.589353118681066, 2.09269978285637,
2.104391496570218, 1.656771181054144]))
model = HiddenMarkovModel()
model.add_states(d1, d2, d3)
# print(model.to_json())
model.bake()
model.fit([3.2, 6.7, 10.55], labels=[1, 2, 3], algorithm='labeled')
all_pred = model.predict([2.33, 1.22, 1.4, 10.6])
Error:
File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/", line 774, in <module>
model.bake()
File "pomegranate/hmm.pyx", line 1047, in pomegranate.hmm.HiddenMarkovModel.bake
UnboundLocalError: local variable 'dist' referenced before assignment
|
[
"To fix this error, you need to ensure that the transition matrix is defined before calling model.bake(). This can be done by using the following code to define the transition matrix:\n# Define the transition matrix\ntransition_matrix = np.array([[0.7, 0.3, 0.0],\n [0.3, 0.7, 0.0],\n [0.0, 0.3, 0.7]])\n\nmodel.set_transition_matrix(transition_matrix)\n\n# Bake the model\nmodel.bake()\n\n# Fit the model\nmodel.fit([3.2, 6.7, 10.55], labels=[1, 2, 3], algorithm='labeled')\n\n# Predict the states\nall_pred = model.predict([2.33, 1.22, 1.4, 10.6])\n\n# View the transition matrix\nprint(model.transitions)\n\n",
"This error occurs because the bake() method is called before the model is properly defined. In particular, the error occurs because you are calling the bake() method before you have added any transitions between the states in your model.\nTo fix this error, you need to add transitions between the states in your model before calling the bake() method. You can add transitions using the add_transition() method, like this:\n# Supervised method that calculates the transition matrix:\nd1 = State(UniformDistribution.from_samples([3.243221498397177, 3.210684537495482, 3.227662201472816,\n 3.286410817416738, 3.290573650708864, 3.286058136226862, 3.266480693857006]))\nd2 = State(UniformDistribution.from_samples([3.449282367485096, 1.97317859465635, 1.897551432353011,\n 3.454609351559659, 3.127357456033111, 1.779308337786426, 3.802891929694426, 3.359766157565077, 2.959428499979418]))\nd3 = State(UniformDistribution.from_samples([1.892812118441474, 1.589353118681066, 2.09269978285637,\n 2.104391496570218, 1.656771181054144]))\nmodel = HiddenMarkovModel()\nmodel.add_states(d1, d2, d3)\n\n# Add transitions between the states\nmodel.add_transition(model.start, d1, 0.33)\nmodel.add_transition(model.start, d2, 0.33)\nmodel.add_transition(model.start, d3, 0.33)\nmodel.add_transition(d1, d1, 0.33)\nmodel.add_transition(d1, d2, 0.33)\nmodel.add_transition(d1, d3, 0.33)\nmodel.add_transition(d2, d1, 0.33)\nmodel.add_transition(d2, d2, 0.33)\nmodel.add_transition(d2, d3, 0.33)\nmodel.add_transition(d3, d1, 0.33)\nmodel.add_transition(d3, d2, 0.33)\nmodel.add_transition(d3, d3, 0.33)\n\n# Call the bake() method to finalize the model\nmodel.bake()\n\n# Fit the model on the training data and labels\nmodel.fit([3.2, 6.7, 10.55], labels=[1, 2, 3], algorithm='labeled')\n\n# Use the model to predict the states for a set of observations\nall_pred = model.predict([2.33, 1.22, 1.4, 10.6])\n\n# View the transition matrix for the model\nprint(model.dense_transition_matrix())\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"hidden_markov_models",
"pomegranate",
"python",
"supervised_learning"
] |
stackoverflow_0074538741_hidden_markov_models_pomegranate_python_supervised_learning.txt
|