content stringlengths 86-88.9k | title stringlengths 0-150 | question stringlengths 1-35.8k | answers list | answers_scores list | non_answers list | non_answers_scores list | tags list | name stringlengths 30-130
---|---|---|---|---|---|---|---|---
Q:
BigQuery: Multiple Updates to record
We are planning to use BigQuery for analytical purposes in our inventory system. Since this is inventory data, records keyed by storeid-productid combinations change often. In terms of volume, there are somewhere between 200M and 400M store-product records in total, with about 500K mutations expected per day. The mutations arrive on Kafka topics.
From a cost standpoint, what's the optimal solution? The options are:
A Kafka listener issues a DML statement: UPDATE inventory SET quantity=? WHERE productid=? AND storeid=?. => My assessment of this option: it is the simplest of all, but may incur higher cost because BigQuery doesn't have a notion of a primary key. Will a search index / clustering etc. help?
Have a staging table where we store every mutation, then periodically, using MERGE, update the main/reporting table.
Something like this: https://cloud.google.com/blog/products/bigquery/performing-large-scale-mutations-in-bigquery (However, this is a 2018 article; things might have changed a lot - for example, I think the 3-hour lag mentioned there is now 30 minutes.)
MERGE dataset.Inventory T
USING dataset.inventory_staging S
ON T.ProductID = S.ProductID and T.storeid = S.storeid
WHEN MATCHED THEN
UPDATE SET quantity = s.quantity
WHEN NOT MATCHED THEN
INSERT (ProductID, storeid, quantity) VALUES (S.ProductID, S.storeid, S.quantity)
Now the second question: if we are to take the second approach,
What's the cost-effective way to sink a Kafka topic into BigQuery?
Does Kafka -> GCS -> BQ give any advantage over streaming solutions (like a boilerplate Kafka listener that uses the Storage Write API: https://cloud.google.com/bigquery/docs/write-api#write-api-overview)?
A:
Running one UPDATE statement per item would be extremely expensive; you need to have the staging table and run periodic MERGEs.
Kafka -> GCS -> BQ is the most cost-effective way.
As an additional suggestion, you may explore creating a topic in Pub/Sub to replace Kafka. Pub/Sub also has direct ingestion into BigQuery.
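For illustration, here is a minimal sketch (Python, using the google-cloud-bigquery client library) of what the periodic merge job could look like; the dataset and table names are the hypothetical ones from the question, and the TRUNCATE step is an assumption about how the staging table gets reset between runs:
# Periodic staging-table MERGE sketch; table names are hypothetical.
from google.cloud import bigquery

MERGE_SQL = """
MERGE dataset.Inventory T
USING dataset.inventory_staging S
ON T.ProductID = S.ProductID AND T.storeid = S.storeid
WHEN MATCHED THEN
  UPDATE SET quantity = S.quantity
WHEN NOT MATCHED THEN
  INSERT (ProductID, storeid, quantity) VALUES (S.ProductID, S.storeid, S.quantity)
"""

def run_periodic_merge():
    client = bigquery.Client()
    client.query(MERGE_SQL).result()  # run the MERGE and wait for completion
    # Reset the staging table after a successful merge (simplified; production
    # code would need to guard against rows arriving mid-merge).
    client.query("TRUNCATE TABLE dataset.inventory_staging").result()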
|
BigQuery: Multiple Updates to record
|
We are planning to use BigQuery for analytical purposes in our inventory system. Since this is inventory data, records keyed by storeid-productid combinations change often. In terms of volume, there are somewhere between 200M and 400M store-product records in total, with about 500K mutations expected per day. The mutations arrive on Kafka topics.
From a cost standpoint, what's the optimal solution? The options are:
A Kafka listener issues a DML statement: UPDATE inventory SET quantity=? WHERE productid=? AND storeid=?. => My assessment of this option: it is the simplest of all, but may incur higher cost because BigQuery doesn't have a notion of a primary key. Will a search index / clustering etc. help?
Have a staging table where we store every mutation, then periodically, using MERGE, update the main/reporting table.
Something like this: https://cloud.google.com/blog/products/bigquery/performing-large-scale-mutations-in-bigquery (However, this is a 2018 article; things might have changed a lot - for example, I think the 3-hour lag mentioned there is now 30 minutes.)
MERGE dataset.Inventory T
USING dataset.inventory_staging S
ON T.ProductID = S.ProductID and T.storeid = S.storeid
WHEN MATCHED THEN
UPDATE SET quantity = s.quantity
WHEN NOT MATCHED THEN
INSERT (ProductID, storeid, quantity) VALUES (S.ProductID, S.storeid, S.quantity)
Now the second question: if we are to take the second approach,
What's the cost-effective way to sink a Kafka topic into BigQuery?
Does Kafka -> GCS -> BQ give any advantage over streaming solutions (like a boilerplate Kafka listener that uses the Storage Write API: https://cloud.google.com/bigquery/docs/write-api#write-api-overview)?
|
[
"Running one UPDATE statement per item would be crazy expensive, you need to have the stage table and run periodical MERGEs.\nKafka -> GCS -> BQ is the most cost effective way.\nAs additional suggestion you may explore creating a topic in Pub/Sub that replaces kafka. Also Pub / Sub has direct ingestion to bigquery.\n"
] |
[
0
] |
[] |
[] |
[
"apache_kafka",
"google_bigquery"
] |
stackoverflow_0074657435_apache_kafka_google_bigquery.txt
|
Q:
Nested subscribe rxjs
How to rebuild this using rxjs, if it is advisable:
constructor(private customerInfoService: CustomerInfoService) {
customerInfoService.getCustomerIPById(this.a).subscribe(x => {
customerInfoService.getIPActivityDates(x).subscribe(y => {
this.latestActivityDate = y.latestDate;
})
})
}
A:
You can use switchMap. This allows you to "switch" the observable returned. So in your case, you could do something like:
customerInfoService.getCustomerIPById(this.a).pipe(
switchMap(x => customerInfoService.getIPActivityDates(x))
).subscribe(y => {
this.latestActivityDate = y.latestDate;
})
A:
Try this:
constructor(private customerInfoService: CustomerInfoService) {
this.customerInfoService.getCustomerIPById(this.a)
.pipe(
concatMap(x => this.customerInfoService.getIPActivityDates(x))
).subscribe(y => {
this.latestActivityDate = y.latestDate;
});
}
|
Nested subscribe rxjs
|
How to rebuild this using rxjs, if it is advisable:
constructor(private customerInfoService: CustomerInfoService) {
customerInfoService.getCustomerIPById(this.a).subscribe(x => {
customerInfoService.getIPActivityDates(x).subscribe(y => {
this.latestActivityDate = y.latestDate;
})
})
}
|
[
"You can use switchMap. This allows you to \"switch\" the observable returned. So in your case, you could do something like:\ncustomerInfoService.getCustomerIPById(this.a).pipe(\n switchMap(x => customerInfoService.getIPActivityDates(x))\n}).subscribe(y => {\n this.latestActivityDate = y.latestDate;\n})\n\n",
"try this.\n constructor(private customerInfoService: CustomerInfoService) {\n this.customerInfoService.getCustomerIPById(this.a)\n .pipe(\n concatMap(x => this.customerInfoService.getIPActivityDates(x))\n ).subscribe(y => {\n this.latestActivityDate = y.latestDate;\n });\n}\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"angular",
"rxjs",
"subscribe"
] |
stackoverflow_0074658670_angular_rxjs_subscribe.txt
|
Q:
reading from a list of files located in a folder in R
I have to download a lot of data en masse from the internet, and I don't want it to crowd my main directory, so I like to move it to a /data folder. I make this data into a list of files, then move that entire list into the folder. However, I then struggle to run analyses with sapply() and other functions on this list of files once they are located in the folder. I can't find any argument within sapply() that takes a path, so I was wondering how I can get around this. Below is some code demonstrating the problem.
library(dplyr)
library(fs)
mtcars %>% write.csv("data_1.csv")
DNase %>% write.csv("data_2.csv")
iris %>% write.csv("data_3.csv")
my_list <- list.files(pattern = "data_")
fs::file_move(my_list, new_path = "MYDIRECTORY/data")
sapply(my_list, read.csv)
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'data_1.csv': No such file or directory
A:
You could simplify by not using data.table::fread, but it's fast, and if you have a lot of files in the folder, it's worth keeping.
library(data.table)
library(dplyr)
library(purrr)
path_to_folder = "MYDIRECTORY/data"
df <- list.files(path=path_to_folder,pattern = "*.csv",full.names = T) %>%
map_df(~fread(.,stringsAsFactors=F,check.names=T,strip.white=T))
A:
When you use the sapply function, you could do this:
all_data <- sapply(my_list, function(x) {
read.csv(file = paste0("./data/", x))
})
And then, if you want to append all the .csv files into just one data table, you can do this:
library(data.table)
rbindlist(all_data, fill = TRUE)
|
reading from a list of files located in a folder in R
|
I have to download a lot of data en masse from the internet, and I don't want it to crowd my main directory, so I like to move it to a /data folder. I make this data into a list of files, then move that entire list into the folder. However, I then struggle to run analyses with sapply() and other functions on this list of files once they are located in the folder. I can't find any argument within sapply() that takes a path, so I was wondering how I can get around this. Below is some code demonstrating the problem.
library(dplyr)
library(fs)
mtcars %>% write.csv("data_1.csv")
DNase %>% write.csv("data_2.csv")
iris %>% write.csv("data_3.csv")
my_list <- list.files(pattern = "data_")
fs::file_move(my_list, new_path = "MYDIRECTORY/data")
sapply(my_list, read.csv)
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'data_1.csv': No such file or directory
|
[
"you could simplify by not using data.table::fread, but it's fast and if you have a lot of files in the folder, it's worth keeping it.\nlibrary(data.table)\nlibrary(dplyr)\nlibrary(purrr)\n\n\npath_to_folder = \"MYDIRECTORY/data\"\n\ndf <- list.files(path=path_to_folder,pattern = \"*.csv\",full.names = T) %>% \n map_df(~fread(.,stringsAsFactors=F,check.names=T,strip.white=T))\n\n",
"When you use the sapply function, you could do this:\nall_data <- sapply(my_list, function(x) { \n read.csv(file = paste0(\"./data/\", x))\n})\n\nAnd then, if you want to append all the .csv in just one data table, you can do this:\nlibrary(data.table)\nrbindlist(all_data, fill = TRUE)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"csv",
"dplyr",
"list",
"r"
] |
stackoverflow_0074660443_csv_dplyr_list_r.txt
|
Q:
How to put a dictionary in a JSON?
I'm working with a REST API, and I need to return a JSON with my values to it. However, I need the items of the payload variable to contain all the items inside cart_items.
I have this:
payload = {
"items": [],
}
I tried this, but I don't know how to put these items inside the items of the payload:
for cart_item in cart_items:
item = [
{
"reference_id": f"{cart_item.sku}",
"name": f"{cart_item.product.name}",
"quantity": cart_item.quantity,
"unit_amount": cart_item.product.price
},
]
I need to end up with this:
payload = {
"items": [
{
"reference_id": "SKU49FS20DD",
"name": "Produto 1",
"quantity": 1,
"unit_amount": 130
},
{
"reference_id": "SKU42920SSD",
"name": "Produto 2",
"quantity": 1,
"unit_amount": 100
}
],
}
response = requests.request(
"POST",
url,
headers=headers,
json=payload
)
I don't know if I need to convert the JSON to a dictionary, change it, and then convert it back to JSON again.
A:
Instead of trying to create one item at a time, just populate payload['items'] directly, using a comprehension:
payload['items'] = [
{
'reference_id': cart_item.sku,
'name': cart_item.product.name,
'quantity': cart_item.quantity,
'unit_amount': cart_item.product.price
}
for cart_item in cart_items
]
Another possible improvement is about requests. Instead of using requests.requests('POST' ...), you can use requests.post(...).
And finally, if the API really needs json to have a valid JSON string, use json.dumps to convert it.
Putting all together:
import requests
import json
payload['items'] = [
{
'reference_id': cart_item.sku,
'name': cart_item.product.name,
'quantity': cart_item.quantity,
'unit_amount': cart_item.product.price
}
for cart_item in cart_items
]
response = requests.post(
url,
headers=headers,
json=json.dumps(payload)
)
Even though I'm almost a hundred percent sure requests.post() will do the right thing if you just pass the payload as is in json=payload.
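As a side note to the json.dumps suggestion above, here is a minimal sketch (with a hypothetical URL and payload) of the difference: passing the dict via json= lets requests serialize it once and set the Content-Type header, whereas wrapping it in json.dumps first would send a double-encoded JSON string.
# json= vs. manually serialized body; URL and payload are hypothetical.
import json
import requests

payload = {"items": [{"reference_id": "SKU49FS20DD", "quantity": 1}]}

# Preferred: requests serializes the dict and sets Content-Type for you.
response = requests.post("https://example.com/orders", json=payload)

# Equivalent manual form, for comparison.
response = requests.post(
    "https://example.com/orders",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)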
A:
You're just missing the "append()" method on a list, and the conversion from Python list & dict to a JSON string:
from json import dumps
items_dict = []
for cart_item in cart_items:
items_dict.append({
"reference_id": f"{cart_item.sku}",
"name": f"{cart_item.product.name}",
"quantity": cart_item.quantity,
"unit_amount": cart_item.product.price
})
payload = {
'items': items_dict
}
# And if you want a JSON string as output
print(dumps(payload))
But you don't need to pass a string to the "json" argument of requests.post, so you can keep your
response = requests.request(
"POST",
url,
headers=headers,
json=payload
)
|
How to put a dictionary in a JSON?
|
I'm working with a REST API, and I need to return a JSON with my values to it. However, I need the items of the payload variable to contain all the items inside cart_items.
I have this:
payload = {
"items": [],
}
I tried this, but I don't know how to put these items inside the items of the payload:
for cart_item in cart_items:
item = [
{
"reference_id": f"{cart_item.sku}",
"name": f"{cart_item.product.name}",
"quantity": cart_item.quantity,
"unit_amount": cart_item.product.price
},
]
I need to end up with this:
payload = {
"items": [
{
"reference_id": "SKU49FS20DD",
"name": "Produto 1",
"quantity": 1,
"unit_amount": 130
},
{
"reference_id": "SKU42920SSD",
"name": "Produto 2",
"quantity": 1,
"unit_amount": 100
}
],
}
response = requests.request(
"POST",
url,
headers=headers,
json=payload
)
I don't know if I need to convert the JSON to a dictionary, change it, and then convert it back to JSON again.
|
[
"Instead of trying to create one item at a time, just populate payload['items'] directly, using a comprehension:\npayload['items'] = [\n {\n 'reference_id': cart_item.sku,\n 'name': cart_item.product.name,\n 'quantity': cart_item.quantity,\n 'unit_amount': cart_item.product.price \n }\n for cart_item in cart_items\n]\n\nAnother possible improvement is about requests. Instead of using requests.requests('POST' ...), you can use requests.post(...).\nAnd finally, if the API really needs json to have a valid JSON string, use json.dumps to convert it.\nPutting all together:\nimport requests\nimport json\n\npayload['items'] = [\n {\n 'reference_id': cart_item.sku,\n 'name': cart_item.product.name,\n 'quantity': cart_item.quantity,\n 'unit_amount': cart_item.product.price \n }\n for cart_item in cart_items\n]\n\nresponse = requests.post(\n url,\n headers=headers,\n json=json.dumps(payload)\n)\n\nEven though I'm almost a hundred percent sure requests.post() will do the right thing if you just pass the payload as is in json=payload.\n",
"You're just missing the \"append()\" method on a list, and the conversion from Python list & dict to a JSON string:\n from json import dumps\n\n items_dict = []\n for cart_item in cart_items:\n items_dict.append({\n \"reference_id\": f\"{cart_item.sku}\",\n \"name\": f\"{cart_item.product.name}\",\n \"quantity\": cart_item.quantity,\n \"unit_amount\": cart_item.product.price\n })\n\npayload = {\n 'items': items_dict\n}\n\n# And if you want a JSON string as output\nprint(dumps(payload))\n\nBut you don't need a string to the \"json\" argument in requests.post, so you can keep your\nresponse = requests.request(\n \"POST\",\n url, \n headers=headers,\n json=payload\n)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django_rest_framework",
"json",
"python",
"python_requests",
"rest"
] |
stackoverflow_0074660579_django_rest_framework_json_python_python_requests_rest.txt
|
Q:
Change dates to quarters in JSON file Python
I'm trying to convert the dates inside a JSON file to their respective quarter and year. My JSON file is formatted below:
{
"lastDate": {
"0": "11/22/2022",
"1": "10/28/2022",
"2": "10/17/2022",
"7": "07/03/2022",
"8": "07/03/2022",
"9": "06/03/2022",
"18": "05/17/2022",
"19": "05/08/2022",
"22": "02/03/2022",
"24": "02/04/2022"
}
}
The current code I'm using is an attempt at using pandas.Series.dt.quarter, as seen below:
import json
import pandas as pd
data = json.load(open("date_to_quarters.json"))
df = data['lastDate']
pd.to_datetime(df['lastDate'])
df['Quarter'] = df['Date'].dt.quarter
open("date_to_quarters.json", "w").write(
json.dumps(data, indent=4))
The issue I face is that my code isn't recognizing the object name "lastDate". My ideal output should have the dates replaced by their quarter, as shown below:
{
"lastDate": {
"0": "Q42022",
"1": "Q42022",
"2": "Q42022",
"7": "Q32022",
"8": "Q32022",
"9": "Q22022",
"18": "Q22022",
"19": "Q22022",
"22": "Q12022",
"24": "Q12022"
}
}
A:
You can use this bit of code instead:
import json
import pandas as pd
data = json.load(open("date_to_quarters.json"))
# convert json to df
df = pd.DataFrame.from_dict(data, orient="columns")
# convert last date to quarter
df['lastDate'] = pd.to_datetime(df['lastDate'])
df['lastDate'] = df['lastDate'].dt.to_period('Q')
# change type of lastDate to string
df['lastDate'] = df['lastDate'].astype(str)
# write to json file
df.to_json("date_to_quarters1.json", orient="columns", indent=4)
A json object is different from a pd.DataFrame. You have to convert the json to a pd.DataFrame first, using the from_dict() function.
A:
Try:
import json
import pandas as pd
with open("data.json", "r") as f_in:
data = json.load(f_in)
x = pd.to_datetime(list(data["lastDate"].values()))
out = {
"lastDate": dict(
zip(data["lastDate"], (f"Q{q}{y}" for q, y in zip(x.quarter, x.year)))
)
}
print(out)
Prints:
{
"lastDate": {
"0": "Q42022",
"1": "Q42022",
"2": "Q42022",
"7": "Q32022",
"8": "Q32022",
"9": "Q22022",
"18": "Q22022",
"19": "Q22022",
"22": "Q12022",
"24": "Q12022",
}
}
To save out as Json:
with open("out.json", "w") as f_out:
json.dump(out, f_out, indent=4)
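If you want to avoid the pandas dependency entirely, the quarter can also be computed directly from the month with integer arithmetic: quarter = (month - 1) // 3 + 1. A minimal sketch, assuming the same MM/DD/YYYY format as the question's file:
# No-pandas version: derive the quarter straight from the month number.
import json

def to_quarter(date_str):
    month, _, year = date_str.split("/")
    quarter = (int(month) - 1) // 3 + 1
    return f"Q{quarter}{year}"

with open("date_to_quarters.json") as f:
    data = json.load(f)

data["lastDate"] = {k: to_quarter(v) for k, v in data["lastDate"].items()}

with open("out.json", "w") as f:
    json.dump(data, f, indent=4)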
|
Change dates to quarters in JSON file Python
|
I'm trying to convert the dates inside a JSON file to their respective quarter and year. My JSON file is formatted below:
{
"lastDate": {
"0": "11/22/2022",
"1": "10/28/2022",
"2": "10/17/2022",
"7": "07/03/2022",
"8": "07/03/2022",
"9": "06/03/2022",
"18": "05/17/2022",
"19": "05/08/2022",
"22": "02/03/2022",
"24": "02/04/2022"
}
}
The current code I'm using is an attempt at using pandas.Series.dt.quarter, as seen below:
import json
import pandas as pd
data = json.load(open("date_to_quarters.json"))
df = data['lastDate']
pd.to_datetime(df['lastDate'])
df['Quarter'] = df['Date'].dt.quarter
open("date_to_quarters.json", "w").write(
json.dumps(data, indent=4))
The issue I face is that my code isn't recognizing the object name "lastDate". My ideal output should have the dates replaced by their quarter, as shown below:
{
"lastDate": {
"0": "Q42022",
"1": "Q42022",
"2": "Q42022",
"7": "Q32022",
"8": "Q32022",
"9": "Q22022",
"18": "Q22022",
"19": "Q22022",
"22": "Q12022",
"24": "Q12022"
}
}
|
[
"You can use this bit of code instead:\nimport json\nimport pandas as pd\n\ndata = json.load(open(\"date_to_quarters.json\"))\n\n# convert json to df\ndf = pd.DataFrame.from_dict(data, orient=\"columns\")\n\n# convert last date to quarter\ndf['lastDate'] = pd.to_datetime(df['lastDate'])\ndf['lastDate'] = df['lastDate'].dt.to_period('Q')\n\n# change type of lastDate to string\ndf['lastDate'] = df['lastDate'].astype(str)\n\n# write to json file\ndf.to_json(\"date_to_quarters1.json\", orient=\"columns\", indent=4)\n\njson object is different than pd.DataFrame. You have to convert json to pd.DataFrame first using from_dict() function.\n",
"Try:\nimport json\nimport pandas as pd\n\nwith open(\"data.json\", \"r\") as f_in:\n data = json.load(f_in)\n\nx = pd.to_datetime(list(data[\"lastDate\"].values()))\n\n\nout = {\n \"lastDate\": dict(\n zip(data[\"lastDate\"], (f\"Q{q}{y}\" for q, y in zip(x.quarter, x.year)))\n )\n}\nprint(out)\n\nPrints:\n{\n \"lastDate\": {\n \"0\": \"Q42022\",\n \"1\": \"Q42022\",\n \"2\": \"Q42022\",\n \"7\": \"Q32022\",\n \"8\": \"Q32022\",\n \"9\": \"Q22022\",\n \"18\": \"Q22022\",\n \"19\": \"Q22022\",\n \"22\": \"Q12022\",\n \"24\": \"Q12022\",\n }\n}\n\n\nTo save out as Json:\nwith open(\"out.json\", \"w\") as f_out:\n json.dump(out, f_out, indent=4)\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"dataframe",
"json",
"pandas",
"python"
] |
stackoverflow_0074660556_dataframe_json_pandas_python.txt
|
Q:
mongodb aggregate a new field from (sub array of objects) of array of objects
I have those objects
db.inventory.insertMany([
{ item: 'notebook', status: 'A', size: { h: 8.5, w: 11, uom: 'in' }, instock: [{ qty: 5 }] },
{ item: 'paper', status: 'D', size: { h: 8.5, w: 11, uom: 'in' }, instock: [{ warehouse: [{ status: 'A' }], qty: 60 }] },
{ item: 'planner', status: 'D', size: { h: 22.85, w: 30, uom: 'cm' }, instock: [{ warehouse: [{ status: 'B' }], qty: 40 }] },
{
item: 'postcard',
status: 'A',
size: { h: 10, w: 15.25, uom: 'cm' },
instock: [
{
warehouse: [
{ status: 'A', createdAt: new Date('01.01.2020') },
{ status: 'C', createdAt: new Date('01.01.2022') },
{ status: 'B', createdAt: new Date('01.01.2021') }
],
qty: 1
},
{
warehouse: [
{ status: 'D', createdAt: new Date('01.01.2024') },
{ status: 'F', createdAt: new Date('01.01.2026') },
{ status: 'E', createdAt: new Date('01.01.2025') }
],
qty: 12
}
]
}
]);
I want to map the array so I have new items like:
[
...
, {
instock: [
{
warehouseStatus: 'C',
qty: 1
},
{
warehouseStatus: 'F',
qty: 12
}
]
}
, ...
]
Basically, instock elements would have a new field called warehouseStatus, which is the status of the newest object in instock.warehouse, sorted by createdAt.
I tried:
db.inventory.aggregate([
{
$project: {
_id: 0,
instock: {
warehouseStatus: [
{
$sortArray: {
input: '$instock.warehouse',
sortBy: { createdAt: 1 }
}
}
]
}
}
}
])
But it returns nonsense.
There is a mongo console here, if you want to test: https://www.mongodb.com/docs/manual/tutorial/project-fields-from-query-results/
Please help, I consumed my last brain cell today.
mongodb aggregation that will map my data
A:
Maybe something like this:
db.collection.aggregate([
{
$set: {
instock: {
"$map": {
"input": "$instock",
"as": "i",
"in": {
qty: "$$i.qty",
warehouse: "$$i.warehouse",
warehouseStatus: {
"$arrayElemAt": [
"$$i.warehouse",
{
"$indexOfArray": [
"$$i.warehouse.createdAt",
{
"$max": "$$i.warehouse.createdAt"
}
]
}
]
}
}
}
}
}
},
{
$set: {
instock: {
"$map": {
"input": "$instock",
"as": "i",
"in": {
qty: "$$i.qty",
warehouse: "$$i.warehouse",
warehauseStatus: "$$i.warehauseStatus.status"
}
}
}
}
}
])
Explained:
Using $indexOfArray, the first $set assigns warehouseStatus the warehouse element with the max createdAt value.
The second $set then projects just that element's status value into the warehouseStatus field.
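For completeness, a minimal sketch of running an equivalent pipeline from Python with pymongo (the connection string, database and collection names are hypothetical, and $ifNull guards are added for items that have no warehouse array, which would otherwise trip $arrayElemAt):
# pymongo sketch: expose the status of the newest warehouse entry per instock item.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
inventory = client["test"]["inventory"]

pipeline = [
    {"$set": {"instock": {"$map": {
        "input": "$instock",
        "as": "i",
        "in": {
            "qty": "$$i.qty",
            "warehouseStatus": {"$arrayElemAt": [
                # Array of statuses, guarded for items without a warehouse array.
                {"$ifNull": ["$$i.warehouse.status", []]},
                # Index of the newest createdAt, likewise guarded.
                {"$indexOfArray": [
                    {"$ifNull": ["$$i.warehouse.createdAt", []]},
                    {"$max": "$$i.warehouse.createdAt"},
                ]},
            ]},
        },
    }}}},
]

for doc in inventory.aggregate(pipeline):
    print(doc["item"], doc["instock"])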
|
mongodb aggregate a new field from (sub array of objects) of array of objects
|
I have those objects
db.inventory.insertMany([
{ item: 'notebook', status: 'A', size: { h: 8.5, w: 11, uom: 'in' }, instock: [{ qty: 5 }] },
{ item: 'paper', status: 'D', size: { h: 8.5, w: 11, uom: 'in' }, instock: [{ warehouse: [{ status: 'A' }], qty: 60 }] },
{ item: 'planner', status: 'D', size: { h: 22.85, w: 30, uom: 'cm' }, instock: [{ warehouse: [{ status: 'B' }], qty: 40 }] },
{
item: 'postcard',
status: 'A',
size: { h: 10, w: 15.25, uom: 'cm' },
instock: [
{
warehouse: [
{ status: 'A', createdAt: new Date('01.01.2020') },
{ status: 'C', createdAt: new Date('01.01.2022') },
{ status: 'B', createdAt: new Date('01.01.2021') }
],
qty: 1
},
{
warehouse: [
{ status: 'D', createdAt: new Date('01.01.2024') },
{ status: 'F', createdAt: new Date('01.01.2026') },
{ status: 'E', createdAt: new Date('01.01.2025') }
],
qty: 12
}
]
}
]);
I want to map the array so I have new items like:
[
...
, {
instock: [
{
warehouseStatus: 'C',
qty: 1
},
{
warehouseStatus: 'F',
qty: 12
}
]
}
, ...
]
Basically, instock elements would have a new field called warehouseStatus, which is the status of the newest object in instock.warehouse, sorted by createdAt.
I tried:
db.inventory.aggregate([
{
$project: {
_id: 0,
instock: {
warehouseStatus: [
{
$sortArray: {
input: '$instock.warehouse',
sortBy: { createdAt: 1 }
}
}
]
}
}
}
])
But it returns nonsense.
There is a mongo console here, if you want to test: https://www.mongodb.com/docs/manual/tutorial/project-fields-from-query-results/
Please help, I consumed my last brain cell today.
mongodb aggregation that will map my data
|
[
"Maybe something like this:\ndb.collection.aggregate([\n{\n $set: {\n instock: {\n \"$map\": {\n \"input\": \"$instock\",\n \"as\": \"i\",\n \"in\": {\n qty: \"$$i.qty\",\n warehouse: \"$$i.warehouse\",\n warehauseStatus: {\n \"$arrayElemAt\": [\n \"$$i.warehouse\",\n {\n \"$indexOfArray\": [\n \"$$i.warehouse.createdAt\",\n {\n \"$max\": \"$$i.warehouse.createdAt\"\n }\n ]\n }\n ]\n }\n }\n }\n }\n }\n },\n {\n$set: {\n instock: {\n \"$map\": {\n \"input\": \"$instock\",\n \"as\": \"i\",\n \"in\": {\n qty: \"$$i.qty\",\n warehouse: \"$$i.warehouse\",\n warehauseStatus: \"$$i.warehauseStatus.status\"\n }\n }\n }\n }\n}\n])\n\nExplained:\n\nUsing indexOfArray set the warehouseStatus to max element based on createdAt field\n\nSet the status value to the warehauseStatus field.\n\n\nPlayground:\n"
] |
[
0
] |
[] |
[] |
[
"mongodb"
] |
stackoverflow_0074658368_mongodb.txt
|
Q:
Getting Index Out of Bounds Error When Trying to add Elements to a new 2D arraylist
I'm trying to implement the graph ADT in Java, and for some reason, even though I can add vertices to my 1D ArrayList without issue, I keep getting "Index 0 out of bounds for length 0" for my 2D ArrayList of edges. I'm wondering if I'm using the ArrayList incorrectly, or if there's a problem at another point in my program (like the function to insert an edge).
import java.util.ArrayList;
public class GraphADT
{
// constructor, takes in 1D arraylist of vertices, and 2D list of edges
public GraphADT(ArrayList<Integer> Vert, ArrayList<ArrayList<Integer>> Edges)
{
this.V = Vert;
this.E = Edges;
this.adj = new boolean[V.size()][V.size()];
}
public ArrayList<Integer> V;
public ArrayList<ArrayList<Integer>> E;
public boolean[][] adj; // adjacency matrix
// increment v, w to make sure the edge doesn't already exist
// otherwise set the connection to be true
// return list of vertices
public ArrayList<Integer> vertices()
{
ArrayList<Integer> vertList = new ArrayList<Integer>();
for (int i = 0; i <= V.size(); i++)
{
int vert = V.get(i);
vertList.add(vert);
}
return vertList;
}
public int insertVertex(int x)
{
this.V.add(x);
return x;
}
public void removeVertex(int y)
{
// removes all adjacency matrix connections
// to the vertex that needs to be deleted
for (int i = 0; i < adj.length; i++)
{
for (int j = 0; j < adj.length; j++)
if (i == y || j == y)
{
adj[i][j] = false;
}
}
// Remove vertex from the vertex List.
this.V.remove(y);
}
public void insertEdge(int v, int w)
{
if (!adj[v][w])
{
v++;
w++;
} else
{
adj[v][w] = true;
adj[w][v] = true;
}
}
public static void main(String args[])
{
ArrayList<Integer> V = new ArrayList<Integer>();
V.add(1);
V.add(2);
V.add(3);
V.add(4);
ArrayList<ArrayList<Integer>> E = new ArrayList<ArrayList<Integer>>();
// getting edge indices and adding pairs to them
E.get(0).add(1, 3);
E.get(1).add(1, 1);
E.get(2).add(1, 2);
E.get(3).add(3, 4);
GraphADT G = new GraphADT(V, E);
G.insertVertex(5);
G.insertVertex(6);
G.insertVertex(7);
G.insertEdge(2, 4);
}
}
and if I try to use new to create space for the 0th row I still get an error
A:
Have you thought about using a "Set" data structure for the edges? Using ArrayLists nested inside ArrayLists seems way too complicated.
But your error is clearly coming from line 87: you try to reference an element at index 0 with E.get(0), but the list is still empty, so there is no element yet.
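To illustrate the "Set" suggestion, here is a minimal sketch in Python (used for illustration rather than the poster's Java) of a graph that stores undirected edges in a set of pairs, so nothing has to be pre-sized and no get(0) call is needed:
# Graph with vertices and edges held in sets; frozenset makes (v, w) == (w, v).
class Graph:
    def __init__(self):
        self.vertices = set()
        self.edges = set()

    def insert_vertex(self, v):
        self.vertices.add(v)

    def insert_edge(self, v, w):
        # Only connect vertices that actually exist; a set ignores duplicates.
        if v in self.vertices and w in self.vertices:
            self.edges.add(frozenset((v, w)))

g = Graph()
for v in (1, 2, 3, 4):
    g.insert_vertex(v)
g.insert_edge(1, 3)
g.insert_edge(2, 4)
print(g.edges)  # {frozenset({1, 3}), frozenset({2, 4})}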
|
Getting Index Out of Bounds Error When Trying to add Elements to a new 2D arraylist
|
I'm trying to implement the graph ADT in Java, and for some reason, even though I can add vertices to my 1D ArrayList without issue, I keep getting "Index 0 out of bounds for length 0" for my 2D ArrayList of edges. I'm wondering if I'm using the ArrayList incorrectly, or if there's a problem at another point in my program (like the function to insert an edge).
import java.util.ArrayList;
public class GraphADT
{
// constructor, takes in 1D arraylist of vertices, and 2D list of edges
public GraphADT(ArrayList<Integer> Vert, ArrayList<ArrayList<Integer>> Edges)
{
this.V = Vert;
this.E = Edges;
this.adj = new boolean[V.size()][V.size()];
}
public ArrayList<Integer> V;
public ArrayList<ArrayList<Integer>> E;
public boolean[][] adj; // adjacency matrix
// increment v, w to make sure the edge doesn't already exist
// otherwise set the connection to be true
// return list of vertices
public ArrayList<Integer> vertices()
{
ArrayList<Integer> vertList = new ArrayList<Integer>();
for (int i = 0; i <= V.size(); i++)
{
int vert = V.get(i);
vertList.add(vert);
}
return vertList;
}
public int insertVertex(int x)
{
this.V.add(x);
return x;
}
public void removeVertex(int y)
{
// removes all adjacency matrix connections
// to the vertex that needs to be deleted
for (int i = 0; i < adj.length; i++)
{
for (int j = 0; j < adj.length; j++)
if (i == y || j == y)
{
adj[i][j] = false;
}
}
// Remove vertex from the vertex List.
this.V.remove(y);
}
public void insertEdge(int v, int w)
{
if (!adj[v][w])
{
v++;
w++;
} else
{
adj[v][w] = true;
adj[w][v] = true;
}
}
public static void main(String args[])
{
ArrayList<Integer> V = new ArrayList<Integer>();
V.add(1);
V.add(2);
V.add(3);
V.add(4);
ArrayList<ArrayList<Integer>> E = new ArrayList<ArrayList<Integer>>();
// getting edge indices and adding pairs to them
E.get(0).add(1, 3);
E.get(1).add(1, 1);
E.get(2).add(1, 2);
E.get(3).add(3, 4);
GraphADT G = new GraphADT(V, E);
G.insertVertex(5);
G.insertVertex(6);
G.insertVertex(7);
G.insertEdge(2, 4);
}
}
and if I try to use new to create space for the 0th row I still get an error
|
[
"Have you thought about using a \"Set\" data structure for the edges? Using ArrayLists nested into ArrayLists seems way too complicated.\nBut your Error Is clearly coming from line 87 --> You try to reference an element at index 0, but there is no element yet.\n"
] |
[
0
] |
[] |
[] |
[
"arraylist",
"arrays",
"graph",
"indexoutofboundsexception",
"java"
] |
stackoverflow_0074660494_arraylist_arrays_graph_indexoutofboundsexception_java.txt
|
Q:
Multiple commands produce error in Xcode 13
I have been stuck on this bug for quite a while now so any help would be appreciated. When I try to build my app I keep getting the following build error:
Multiple commands produce '/Users/my_user_name/Library/Developer/Xcode/DerivedData/Expense_Tracker_Final-aujeprcwgnjmizeaueitvhpegrzf/Build/Products/Debug-iphonesimulator/Expense Tracker Final.app':
Target 'Expense Tracker Final' has create directory command with output '/Users/my_user_name/Library/Developer/Xcode/DerivedData/Expense_Tracker_Final-aujeprcwgnjmizeaueitvhpegrzf/Build/Products/Debug-iphonesimulator/Expense Tracker Final.app'
That command depends on command in Target 'Expense Tracker Final': script phase “[CP] Copy Pods Resources”
I have tried solutions that have been recommended in other Stack Overflow questions, such as deleting certain files from the [CP] Copy Pods Resources phase, but nothing seems to be working. Could someone please help me? I'm really lost.
A:
Select Targets -> BuildPhases.
There you will see Copy Bundle Resources. Just remove the duplicate file that is causing the error by selecting it and clicking the minus icon, and if Info.plist is present there, remove it as well.
A:
Looks like there's duplicated code in your script under Build Phases > [CP] Copy Pods Resources.
Remove the duplicate and re-run the project.
A:
This error comes from the iOS directory because there are duplicate files under Build Phases > [CP] Copy Pods Resources; in my case the project was building on Android only and throwing this error for iOS.
I fixed this by first backing up the iOS directory and then deleting it,
then running this command:
flutter create -i swift . --project-name="your project Name"
NOTE: check pubspec.yaml for your project name.
A new Podfile will be created; open it and uncomment this line of code:
platform :ios, '9.0'
remembering to replace '9.0' with '10.0'.
After running the command, do
cd .
then do
flutter run
A:
In my case, this happened because there were two files with the same name, so the solution was just to rename one of them.
A:
That also happens when you have two or more files with the same name.
|
Multiple commands produce error in Xcode 13
|
I have been stuck on this bug for quite a while now so any help would be appreciated. When I try to build my app I keep getting the following build error:
Multiple commands produce '/Users/my_user_name/Library/Developer/Xcode/DerivedData/Expense_Tracker_Final-aujeprcwgnjmizeaueitvhpegrzf/Build/Products/Debug-iphonesimulator/Expense Tracker Final.app':
Target 'Expense Tracker Final' has create directory command with output '/Users/my_user_name/Library/Developer/Xcode/DerivedData/Expense_Tracker_Final-aujeprcwgnjmizeaueitvhpegrzf/Build/Products/Debug-iphonesimulator/Expense Tracker Final.app'
That command depends on command in Target 'Expense Tracker Final': script phase “[CP] Copy Pods Resources”
I have tried solutions that have been recommended in other Stack Overflow questions, such as deleting certain files from the [CP] Copy Pods Resources phase, but nothing seems to be working. Could someone please help me? I'm really lost.
|
[
"Select Targets -> BuildPhases.\nThere you saw Copy Bundle Resources. Just remove the duplicate file that is creating an error by selecting that file and clicking on the minus icon and if info.plist is present there just remove it also.\n\n",
"Looks like there's a duplicate code in your script under Build Phase > [CP] Copy Pods Resources\nRemove the duplicate and re-run the project.\n",
"This error is coming from the IOS directory because there are some duplicate files Build Phase > [CP] Copy Pods Resources, in my case the project was building on Android only and throwing this error for IOS.\nI fixed this by first backing up the IOS directory, then deleting the IOS directory\nthen ran this command:\nflutter create -i swift . --project-name=\"your project Name\"\nNOTE: check pubspec.yaml for your project name\nA new podfile will be created, open it and uncomment this line of code\nplatform :ios, '9.0'\nremember to replace the '9.0' with '10.0'\nafter running the command do\ncd .\nthen do\nflutter run\n",
"In my case, this happens because there are 2 files that have the same name. So the solution is just to rename one of the files' name\n",
"Also that happens when you have 2 or more files with the same name\n"
] |
[
27,
2,
0,
0,
0
] |
[] |
[] |
[
"ios",
"swift",
"xcode",
"xcode13"
] |
stackoverflow_0071004189_ios_swift_xcode_xcode13.txt
|
Q:
How to use Gradio interface to auto submit the audio when recording is done?
I am using the following Gradio sample code to transcribe my audio:
from transformers import pipeline
p = pipeline("automatic-speech-recognition")
import gradio as gr
def transcribe(audio):
text = p(audio)["text"]
return text
gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text").launch()
However, the user has to start recording audio, stop recording audio, and then submit the audio. Can I auto-submit the audio when the user presses stop recording?
A:
You can use auto-submit; something like this should work:
#auto submit after 5 seconds
gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text",
auto_submit=True,
auto_submit_duration=5).launch()
A:
I found the solution. I am putting it here for others' reference.
import gradio as gr
from transformers import pipeline
p = pipeline("automatic-speech-recognition")
def transcribe(audio):
text = p(audio)["text"]
return text
gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text",live=True).launch()
Adding live=True serves the purpose.
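As a minimal self-contained illustration of the live=True behavior (using a dummy echo function instead of the heavy ASR pipeline, so it runs without transformers installed):
# With live=True the interface re-runs the function automatically when the
# input changes (e.g. when recording stops), so no Submit click is needed.
import gradio as gr

def echo_path(audio):
    return f"received audio file: {audio}"

gr.Interface(
    fn=echo_path,
    inputs=gr.Audio(source="microphone", type="filepath"),
    outputs="text",
    live=True,
).launch()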
|
How to use Gradio interface to auto submit the audio when recording is done?
|
I am using the following Gradio sample code to transcribe my audio:
from transformers import pipeline
p = pipeline("automatic-speech-recognition")
import gradio as gr
def transcribe(audio):
text = p(audio)["text"]
return text
gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text").launch()
However, the user has to start recording audio, stop recording audio, and then submit the audio. Can I auto-submit the audio when the user presses stop recording?
|
[
"You can use auto-submit something like this should work\n#auto submit after 5 seconds\ngr.Interface(\n fn=transcribe,\n inputs=gr.Audio(source=\"microphone\", type=\"filepath\"),\n outputs=\"text\",\n auto_submit=True,\n auto_submit_duration=5).launch()\n\n",
"I found the solution. I am putting it here for other's reference.\nimport gradio as gr\n\nfrom transformers import pipeline\n\np = pipeline(\"automatic-speech-recognition\")\n\ndef transcribe(audio):\n text = p(audio)[\"text\"]\n return text\n\ngr.Interface(\n fn=transcribe, \n inputs=gr.Audio(source=\"microphone\", type=\"filepath\"), \n outputs=\"text\",live=True).launch()\n\nAdding live=True serves the purpose.\n"
] |
[
0,
0
] |
[] |
[] |
[
"gradio",
"python"
] |
stackoverflow_0074660611_gradio_python.txt
|
Q:
Custom TypeScript type from React imported library not recognized?
I am attempting to use a third party family tree library (react-family-tree) in my React TypeScript project.
The family tree wants an array with, among other values, a value of the Gender type from its dependency library relatives-tree.
I have imported the library like so:
import ReactFamilyTree from 'react-family-tree';
And I am attempting to use the Gender type in the array with the code below:
var ancestorsFormatted: Array<{ id: number, gender: Gender, parents: {id: number}[], children: {id: number}[], spouse: {id: number}[]}> = [];
However, I am given the error of Cannot find name 'Gender'
I thought that importing this library would also make its types usable. Do I also need to import the dependency library? I tried doing so with a variety of syntaxes, but the type is not recognized.
A:
import ReactFamilyTree from 'react-family-tree' only imports the default export from react-family-tree, which is not the Gender. You'll need to import Gender itself from somewhere.
If that's in the library relatives-tree, then this will likely be something like
import { Gender } from 'relatives-tree'.
You'll need to check the documentation/source though to understand exactly where you can find the Gender type.
Update: in this specific case, the Gender type seems to be exported from relatives-tree/lib/types, so the import statement should be:
import { Gender } from 'relatives-tree/lib/types'
|
Custom TypeScript type from React imported library not recognized?
|
I am attempting to use a third party family tree library (react-family-tree) in my React TypeScript project.
The family tree wants an array with, among other values, a value of the Gender type from its dependency library relatives-tree.
I have imported the library like so:
import ReactFamilyTree from 'react-family-tree';
And I am attempting to use the Gender type in the array with the code below:
var ancestorsFormatted: Array<{ id: number, gender: Gender, parents: {id: number}[], children: {id: number}[], spouse: {id: number}[]}> = [];
However, I am given the error of Cannot find name 'Gender'
I thought that importing this library would also make its types usable. Do I also need to import the dependency library? I tried doing so with a variety of syntaxes, but the type is not recognized.
|
[
"import ReactFamilyTree from 'react-family-tree' only imports the default export from react-family-tree, which is not the Gender. You'll need to import Gender itself from somewhere.\nIf that's in the library relatives-tree, then this will likely be something like\nimport { Gender } from 'relatives-tree'.\nYou'll need to check the documentation/source though to understand exactly where you can find the Gender type.\n\nUpdate: in this specific case, the Gender type seems to be exported from relatives-tree/lib/types, so the import statement should be:\nimport { Gender } from 'relatives-tree/lib/types'\n\n"
] |
[
2
] |
[] |
[] |
[
"reactjs",
"typescript"
] |
stackoverflow_0074660643_reactjs_typescript.txt
|
Q:
Cannot create project in Azure DevOps due to FIPS issue
In Azure DevOps 2020, I created a new project collection on our DevOps server. When I went to create a new project for that new collection from my work computer's browser, I received this message:
Oops, something went wrong. Project creation operation failed.
Hitting the Try Again button on that error screen produced the same result.
On our DevOps server, the log file from my attempt C:\ProgramData\Microsoft\Azure DevOps\Server Configuration\Logs..._CreateProject_1130_141424.log had this error:
This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
Executing step: Create the Team Project
Executing step: 'Create the Team Project' WorkItemTracking.CreateTeamProject (5 of 12)
Process guids. TypeId: b8a3a935-7e91-48b8-a94c-606d37c3e9f2 Inherits: 00000000-0000-0000-0000-000000000000
Process flags. : IsSystem: True IsCustom: False
All projects count:1
Well-formed projects count:0
Refreshing server caches.
Importing queries.
Failure while provisioning project - will retry (Message): This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
Failure while provisioning project - will retry (Stacktrace): at System.Security.Cryptography.SHA1Managed..ctor()
at Microsoft.TeamFoundation.WorkItemTracking.Server.CommonWITUtils.GetSha1HashString(String text)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DalUpdateQueryItemHashElement.JoinBatch(ElementGroup group, ServerQueryItem item, IVssRequestContext requestContext)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.ExplodeQueryUpdates(Guid id)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.AddQueryUpdatesToBatch()
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.BuildBatch(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.UpdateImpl(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Payload metadataPayload, Boolean bisNotification, String& dbStamp, Boolean bulkUpdate, Boolean& bulkUpdateSuccess, IVssIdentity user, Boolean overwrite, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.Update(XmlElement package, Boolean overwrite, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.ProvisioningService.ImportQueries(IVssRequestContext requestContext, IProcessTemplate template, XmlNode queriesNode, Uri projectUri, ProvisioningActionType action)
at Microsoft.TeamFoundation.Server.Deploy.TFCollection.Project.WorkItemTrackingImporter.ImportQueries()
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.ProvisionTeamProject(IVssRequestContext requestContext, IServicingContext servicingContext, Lazy`1 witImporter, String projectUri, ProcessDescriptor processDescriptor)
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.CreateTeamProject(IServicingContext servicingContext)
Failure while provisioning project - will retry (Exception Type): InvalidOperationException
Importing queries.
[Error] This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
at System.Security.Cryptography.SHA1Managed..ctor()
at Microsoft.TeamFoundation.WorkItemTracking.Server.CommonWITUtils.GetSha1HashString(String text)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DalUpdateQueryItemHashElement.JoinBatch(ElementGroup group, ServerQueryItem item, IVssRequestContext requestContext)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.ExplodeQueryUpdates(Guid id)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.AddQueryUpdatesToBatch()
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.BuildBatch(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.UpdateImpl(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Payload metadataPayload, Boolean bisNotification, String& dbStamp, Boolean bulkUpdate, Boolean& bulkUpdateSuccess, IVssIdentity user, Boolean overwrite, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.Update(XmlElement package, Boolean overwrite, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.ProvisioningService.ImportQueries(IVssRequestContext requestContext, IProcessTemplate template, XmlNode queriesNode, Uri projectUri, ProvisioningActionType action)
at Microsoft.TeamFoundation.Server.Deploy.TFCollection.Project.WorkItemTrackingImporter.ImportQueries()
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.ProvisionTeamProject(IVssRequestContext requestContext, IServicingContext servicingContext, Lazy`1 witImporter, String projectUri, ProcessDescriptor processDescriptor)
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.CreateTeamProject(IServicingContext servicingContext)
at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformHostStep(String servicingOperation, ServicingOperationTarget target, IServicingStep servicingStep, String stepData, ServicingContext servicingContext)
at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformStep(String servicingOperation, ServicingOperationTarget target, String stepType, String stepData, ServicingContext servicingContext)
at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.PerformServicingStep(ServicingStep step, ServicingContext servicingContext, ServicingStepGroup group, ServicingOperation servicingOperation, Int32 stepNumber, Int32 totalSteps)
Step failed: Create the Team Project. Execution time: 220 milliseconds.
[StepDuration] 0.1820582
[GroupDuration] 0.2299482
[OperationDuration] 1.1763862
Clearing dictionary, removing all items.
Based on that error, I performed the following steps on the DevOps server. After each step I stopped/started IIS, then went back to attempt Create Project again. No luck with any of these solutions.
Modified file C:\ProgramData\Microsoft\Azure DevOps\Configuration\SavedSettings\ApplicationTier\web.config to contain element enforceFIPSPolicy enabled="false".
Since the app pools for Azure DevOps use the .NET CLR Version v4.0.30319, I modified file C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config to contain element enforceFIPSPolicy enabled="false".
On the machine's Local Security Policy, disabled setting System cryptography: Use FIPS compliant algorithms...
Can anyone suggest what else I can try? I'm assuming the error message is accurate, and quite frankly I was surprised that the last thing I tried did not solve the problem.
UPDATE:
In the error message I also see
at System.Security.Cryptography.SHA1Managed..ctor()
I'm assuming SHA1Managed..ctor() means SHA1Managed constructor. If that's true then Microsoft says that SHA1Managed is not FIPS compliant.
But I can't change the DevOps code, if it's using SHA1Managed there's nothing I can do about it, correct?
On our DevOps server, we have DevOps 2020 Update 1. So we are behind, would getting to Update 2 solve this problem? Or should I ask, does Update 2 use a different/newer cryptography class which might solve my problem?
A:
According to the error message, you can try to change the registry key to check if it works:
Change the registry key from 1 to 0 here:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\fipsalgorithmpolicy
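If it helps to script the check, here is a minimal Python sketch for reading the FIPS policy value on the affected server; it assumes the DWORD under that key is named Enabled (the common name in Windows documentation), so treat the value name as an assumption:
# Read the FIPS enforcement flag from the registry; the value name "Enabled"
# is assumed. Run on the server with registry read access.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    value, _ = winreg.QueryValueEx(key, "Enabled")
    print("FIPS enforcement enabled:", bool(value))  # 1 = enforced, 0 = off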
A:
Ok so this option in my original post did work for me:
On the machine's Local Security Policy, disabled setting System cryptography: Use FIPS compliant algorithms...
What I did not do was get a server reboot from our network folks after making this change. One of them suggested that perhaps the value was cached, and a reboot was worth a try. We did so, and that allowed me to create a project.
WARNING THOUGH!!!
Disabling FIPS made the server unreachable via Remote Desktop! So after I created my Project, one of our network folks had to go directly to the machine to re-enable FIPS. Which put security back to where we want it, and allowed remote access again.
|
Cannot create project in Azure DevOps due to FIPS issue
|
In Azure DevOps 2020, I created a new project collection on our DevOps server. When I went to create a new project for that new collection from my work computer's browser, I received this message:
Oops, something went wrong. Project creation operation failed.
Hitting the Try Again button on that error screen produced the same result.
On our DevOps server, the log file from my attempt C:\ProgramData\Microsoft\Azure DevOps\Server Configuration\Logs..._CreateProject_1130_141424.log had this error:
This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
Executing step: Create the Team Project
Executing step: 'Create the Team Project' WorkItemTracking.CreateTeamProject (5 of 12)
Process guids. TypeId: b8a3a935-7e91-48b8-a94c-606d37c3e9f2 Inherits: 00000000-0000-0000-0000-000000000000
Process flags. : IsSystem: True IsCustom: False
All projects count:1
Well-formed projects count:0
Refreshing server caches.
Importing queries.
Failure while provisioning project - will retry (Message): This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
Failure while provisioning project - will retry (Stacktrace): at System.Security.Cryptography.SHA1Managed..ctor()
at Microsoft.TeamFoundation.WorkItemTracking.Server.CommonWITUtils.GetSha1HashString(String text)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DalUpdateQueryItemHashElement.JoinBatch(ElementGroup group, ServerQueryItem item, IVssRequestContext requestContext)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.ExplodeQueryUpdates(Guid id)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.AddQueryUpdatesToBatch()
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.BuildBatch(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.UpdateImpl(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Payload metadataPayload, Boolean bisNotification, String& dbStamp, Boolean bulkUpdate, Boolean& bulkUpdateSuccess, IVssIdentity user, Boolean overwrite, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.Update(XmlElement package, Boolean overwrite, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.ProvisioningService.ImportQueries(IVssRequestContext requestContext, IProcessTemplate template, XmlNode queriesNode, Uri projectUri, ProvisioningActionType action)
at Microsoft.TeamFoundation.Server.Deploy.TFCollection.Project.WorkItemTrackingImporter.ImportQueries()
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.ProvisionTeamProject(IVssRequestContext requestContext, IServicingContext servicingContext, Lazy`1 witImporter, String projectUri, ProcessDescriptor processDescriptor)
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.CreateTeamProject(IServicingContext servicingContext)
Failure while provisioning project - will retry (Exception Type): InvalidOperationException
Importing queries.
[Error] This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
at System.Security.Cryptography.SHA1Managed..ctor()
at Microsoft.TeamFoundation.WorkItemTracking.Server.CommonWITUtils.GetSha1HashString(String text)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DalUpdateQueryItemHashElement.JoinBatch(ElementGroup group, ServerQueryItem item, IVssRequestContext requestContext)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.ExplodeQueryUpdates(Guid id)
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.AddQueryUpdatesToBatch()
at Microsoft.TeamFoundation.WorkItemTracking.Server.Update.BuildBatch(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.UpdateImpl(XmlElement updateElement, MetadataTable[] tablesRequested, Int64[] rowVersions, Payload metadataPayload, Boolean bisNotification, String& dbStamp, Boolean bulkUpdate, Boolean& bulkUpdateSuccess, IVssIdentity user, Boolean overwrite, Boolean bypassRules, Boolean validationOnly, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayerImpl.Update(XmlElement package, Boolean overwrite, Boolean provisionRules)
at Microsoft.TeamFoundation.WorkItemTracking.Server.ProvisioningService.ImportQueries(IVssRequestContext requestContext, IProcessTemplate template, XmlNode queriesNode, Uri projectUri, ProvisioningActionType action)
at Microsoft.TeamFoundation.Server.Deploy.TFCollection.Project.WorkItemTrackingImporter.ImportQueries()
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.ProvisionTeamProject(IVssRequestContext requestContext, IServicingContext servicingContext, Lazy`1 witImporter, String projectUri, ProcessDescriptor processDescriptor)
at Microsoft.TeamFoundation.Server.Servicing.TFCollection.WorkItemStepPerformer.CreateTeamProject(IServicingContext servicingContext)
at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformHostStep(String servicingOperation, ServicingOperationTarget target, IServicingStep servicingStep, String stepData, ServicingContext servicingContext)
at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformStep(String servicingOperation, ServicingOperationTarget target, String stepType, String stepData, ServicingContext servicingContext)
at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.PerformServicingStep(ServicingStep step, ServicingContext servicingContext, ServicingStepGroup group, ServicingOperation servicingOperation, Int32 stepNumber, Int32 totalSteps)
Step failed: Create the Team Project. Execution time: 220 milliseconds.
[StepDuration] 0.1820582
[GroupDuration] 0.2299482
[OperationDuration] 1.1763862
Clearing dictionary, removing all items.
Based on that error, I performed the following steps on the DevOps server. After each step I stopped/started IIS, then went back to attempt Create Project again. No luck with any of these solutions.
Modified file C:\ProgramData\Microsoft\Azure DevOps\Configuration\SavedSettings\ApplicationTier\web.config to contain element enforceFIPSPolicy enabled="false".
Since the app pools for Azure DevOps use the .NET CLR Version v4.0.30319, I modified file C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config to contain element enforceFIPSPolicy enabled="false".
On the machine's Local Security Policy, disabled setting System cryptography: Use FIPS compliant algorithms...
Can anyone suggest what else I can try? I'm assuming the error message is accurate, and quite frankly I was surprised that the last thing I tried did not solve the problem.
UPDATE:
In the error message I also see
at System.Security.Cryptography.SHA1Managed..ctor()
I'm assuming SHA1Managed..ctor() means SHA1Managed constructor. If that's true then Microsoft says that SHA1Managed is not FIPS compliant.
But I can't change the DevOps code, if it's using SHA1Managed there's nothing I can do about it, correct?
On our DevOps server, we have DevOps 2020 Update 1. So we are behind, would getting to Update 2 solve this problem? Or should I ask, does Update 2 use a different/newer cryptography class which might solve my problem?
|
[
"According to the error message, you can try to change the registry key to check if it works:\nChange the registry key from 1 to 0 here:\nHKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\Lsa\\fipsalgorithmpolicy\n\n",
"Ok so this option in my original post did work for me:\nOn the machine's Local Security Policy, disabled setting System cryptography: Use FIPS compliant algorithms...\nWhat I did not do was get a server reboot from our network folks after making this change. One of them suggested that perhaps the value was cached, and a reboot was worth a try. We did so, and that allowed me to create a project.\nWARNING THOUGH!!!\nDisabling FIPS made the server unreachable via Remote Desktop! So after I created my Project, one of our network folks had to go directly to the machine to re-enable FIPS. Which put security back to where we want it, and allowed remote access again.\n"
] |
[
0,
0
] |
[] |
[] |
[
"azure_devops",
"fips"
] |
stackoverflow_0074633912_azure_devops_fips.txt
|
Q:
Can't find AppDelegate.m
I'm having a weird issue with my xcode project.
I renamed my AppDelegate.m file to .mm, thinking I should implement c++ methods there, and changed my mind, moved the C++ calls to the view controller, and renamed AppDelegate to .m
But now when I try to build (it built just fine when it was named .mm and before changing the names at all) it says that it can't find AppDelegate.m and won't launch.
I've verified that the AppDelegate.m is in the Compile Sources in the settings.
If you need any more details, please let me know; I doubt there's any code I can post to help you guys.
A few things that I already tried:
Removing the reference to the file and adding it again
Cleaning the project and rebuilding
Removing from the Compile Sources and re-adding it
Restarting xcode and the computer
All failed
Any ideas?
Thanks!
A:
Delete derived data. That usually does the trick with linking problems.
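For reference, derived data lives in a per-user cache folder; a sketch of clearing it from the command line (this is the default location — adjust if you've changed it in Xcode's preferences):
rm -rf ~/Library/Developer/Xcode/DerivedData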
A:
In order to fix this, open the project in Xcode, then navigate to Build Phases; in Compile Sources remove the AppDelegate.m file and add AppDelegate.mm instead.
|
Can't find AppDelegate.m
|
I'm having a weird issue with my xcode project.
I renamed my AppDelegate.m file to .mm, thinking I should implement c++ methods there, and changed my mind, moved the C++ calls to the view controller, and renamed AppDelegate to .m
But now when I try to build (it built just fine when it was named .mm and before changing the names at all) it says that it can't find AppDelegate.m and won't launch.
I've verified that the AppDelegate.m is in the Compile Sources in the settings.
If you need any more details, please let me know; I doubt there's any code I can post to help you guys.
A few things that I already tried:
Removing the reference to the file and adding it again
Cleaning the project and rebuilding
Removing from the Compile Sources and re-adding it
Restarting xcode and the computer
All failed
Any ideas?
Thanks!
|
[
"Delete derived data. That usually does the trick with linking problems .\n",
"In order to fix this, open the project in XCode, then navigate to Build Phases, in Compile Sources remove the AppDelegate.m file and add AppDelegate.mm instead.\n"
] |
[
2,
0
] |
[] |
[] |
[
"appdelegate",
"ios",
"iphone",
"xcode"
] |
stackoverflow_0013896408_appdelegate_ios_iphone_xcode.txt
|
Q:
Why does my Unity WebGL build catch the error "ReferenceError: MyMethodName is not defined"?
I want to call a JavaScript function in Unity. I created a folder "Plugins" as explained in https://docs.unity3d.com/Manual/webgl-interactingwithbrowserscripting.html, created a JavaScript file with the extension .jslib, and put the following code inside:
`
mergeInto(LibraryManager.library, {
GetBrowserQueryString: () => {
const qs = window.location.search;
const bufferSize = lengthBytesUTF8(qs) + 1;
const buffer = _malloc(bufferSize);
stringToUTF8(qs, buffer, bufferSize);
return buffer;
},
GetVkUserId: () => {
const qs = window.location.search;
const match = qs.match(/vk_user_id=(\d+)/);
const vkUserIdString = match[1];
const bufferSize = lengthBytesUTF8(vkUserIdString) + 1;
const buffer = _malloc(bufferSize);
stringToUTF8(vkUserIdString, buffer, bufferSize);
return buffer;
}
});
`
When I start the built application in a browser I see the error "ReferenceError: _GetBrowserQueryString is not defined". How do I fix it? What am I doing wrong?
In the Inspector of the .jslib file I have already checked the WebGL checkbox.
The C# code where I am trying to execute the JavaScript code:
public class SocketManager : MonoBehaviour {
... bla bla bla
[DllImport("__Internal")]
private static extern string GetBrowserQueryString();
[DllImport("__Internal")]
private static extern string GetVkUserId();
private void Awake() {
string vkQuery = GetBrowserQueryString();
string vkUserId = GetVkUserId();
... bla bla bla
}
}
A:
The mistake was that I used arrow functions instead of the usual function syntax... Unity does not know about arrow functions in JavaScript :(
Fixes:
mergeInto(LibraryManager.library, {
GetBrowserQueryString: function() {
const qs = window.location.search;
const bufferSize = lengthBytesUTF8(qs) + 1;
const buffer = _malloc(bufferSize);
stringToUTF8(qs, buffer, bufferSize);
return buffer;
},
GetVkUserId: function() {
const qs = window.location.search;
const match = qs.match(/vk_user_id=(\d+)/);
const vkUserIdString = match[1];
const bufferSize = lengthBytesUTF8(vkUserIdString) + 1;
const buffer = _malloc(bufferSize);
stringToUTF8(vkUserIdString, buffer, bufferSize);
return buffer;
}
});
|
Why does my Unity WebGL build catch the error "ReferenceError: MyMethodName is not defined"?
|
I want to call a JavaScript function in Unity. I created a folder "Plugins" as explained in https://docs.unity3d.com/Manual/webgl-interactingwithbrowserscripting.html, created a JavaScript file with the extension .jslib, and put the following code inside:
`
mergeInto(LibraryManager.library, {
GetBrowserQueryString: () => {
const qs = window.location.search;
const bufferSize = lengthBytesUTF8(qs) + 1;
const buffer = _malloc(bufferSize);
stringToUTF8(qs, buffer, bufferSize);
return buffer;
},
GetVkUserId: () => {
const qs = window.location.search;
const match = qs.match(/vk_user_id=(\d+)/);
const vkUserIdString = match[1];
const bufferSize = lengthBytesUTF8(vkUserIdString) + 1;
const buffer = _malloc(bufferSize);
stringToUTF8(vkUserIdString, buffer, bufferSize);
return buffer;
}
});
`
When I start the built application in a browser I see the error "ReferenceError: _GetBrowserQueryString is not defined". How do I fix it? What am I doing wrong?
In the Inspector of the .jslib file I have already checked the WebGL checkbox.
The C# code where I am trying to execute the JavaScript code:
public class SocketManager : MonoBehaviour {
... bla bla bla
[DllImport("__Internal")]
private static extern string GetBrowserQueryString();
[DllImport("__Internal")]
private static extern string GetVkUserId();
private void Awake() {
string vkQuery = GetBrowserQueryString();
string vkUserId = GetVkUserId();
... bla bla bla
}
}
|
[
"The mistake was that I used arrow functions instead of the usual function naming..... Unity does not know about arrow functions in javascript :(\nFixes:\nmergeInto(LibraryManager.library, {\nGetQueryString: function() {\n const qs = window.location.search;\n const bufferSize = lengthBytesUTF8(qs) + 1;\n const buffer = _malloc(bufferSize);\n\n stringToUTF8(qs, buffer, bufferSize);\n\n return buffer;\n},\nGetVkUserId: function() {\n const qs = window.location.search;\n const match = qs.match(/vk_user_id=(\\d+)/);\n\n const vkUserIdString = match[1];\n const bufferSize = lengthBytesUTF8(vkUserIdString) + 1;\n const buffer = _malloc(bufferSize);\n\n stringToUTF8(vkUserIdString, buffer, bufferSize);\n\n return buffer;\n}\n\n});\n"
] |
[
0
] |
[] |
[] |
[
"unity3d",
"unity_webgl"
] |
stackoverflow_0074659369_unity3d_unity_webgl.txt
|
Q:
Collection Navigation property not Refreshed (reload) after Edit
I use a function to load data for a datagrid. Each row in the grid shows items from a collection navigation property. But when I edit the collection and reload the data for the datagrid, the added or removed entries are not shown; it shows only the old ones:
Here is part of the LoadData() function I use for refreshing the datagrid data. I call this after editing the collection (note: the collection is correctly saved to the DB):
count = query.Count();
var resumes0 = query.Skip(args.Skip.Value).Take(args.Top.Value).ToList<Resume>();
resumes0.ForEach(r =>
{
context.Entry(r).Navigation("ResumeSkills").IsLoaded = false;
context.Entry(r).Collection(r => r.ResumeSkills).IsLoaded = false;
context.Entry(r).Collection(r => r.ResumeSkills).Load();
});
resumes = resumes0;
A:
AsNoTracking() here:
var query = context.Resumes.Include(r => r.AcceptedResumes).Include(r => r.ResumeSkills).ThenInclude(rs => rs.Skill).AsNoTracking().AsQueryable();
solved my problem, but I need tracked entities. It seems this is the only way to solve this problem.
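If tracking is still needed, a possible alternative (a sketch assuming EF Core 5.0 or later, where ChangeTracker.Clear() is available) is to detach everything the context tracks before reloading, so the query materializes fresh entities instead of reusing stale tracked ones:
// Detach all tracked entities, then reload as usual.
context.ChangeTracker.Clear();
var resumes0 = query.Skip(args.Skip.Value).Take(args.Top.Value).ToList<Resume>();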
|
Collection Navigation property not Refreshed (reload) after Edit
|
I use a function to load data for a datagrid. Each row in the grid shows items from a collection navigation property. But when I edit the collection and reload the data for the datagrid, the added or removed entries are not shown; it shows only the old ones:
Here is part of the LoadData() function I use for refreshing the datagrid data. I call this after editing the collection (note: the collection is correctly saved to the DB):
count = query.Count();
var resumes0 = query.Skip(args.Skip.Value).Take(args.Top.Value).ToList<Resume>();
resumes0.ForEach(r =>
{
context.Entry(r).Navigation("ResumeSkills").IsLoaded = false;
context.Entry(r).Collection(r => r.ResumeSkills).IsLoaded = false;
context.Entry(r).Collection(r => r.ResumeSkills).Load();
});
resumes = resumes0;
|
[
"AsNoTracking() here:\nvar query = context.Resumes.Include(r => r.AcceptedResumes).Include(r => r.ResumeSkills).ThenInclude(rs => rs.Skill).AsNoTracking().AsQueryable();\n\nsolved my problem, But I need tracking entities. But it seems this is the only way to solve this problem.\n"
] |
[
0
] |
[] |
[] |
[
"c#",
"entity_framework_core",
"linq"
] |
stackoverflow_0074660553_c#_entity_framework_core_linq.txt
|
Q:
Can we use other than the "data(id)" for rendering labels in cytoscape.js ? i.e without using "data" object?
I was just wondering if we can use something like
label: "ID(id)"
where nodes object would be like :
nodes: [
{
data: { label: "IP 1", type: "ip" },
label:['EC2'],
ID:{id:'1'}
}
]
I don't see any particular documentation that specifies the use of "data" key to render.
When trying the above code, it just prints the label as a literal string instead of evaluating the given expression.
Any inputs are appreciated.
Thanks in advance!
A:
This is some input (live code), as requested. It demonstrates that using
label: "ID(id)"
does not produce the expected result.
var data = {
"nodes": [{
data: {
label: "IP 1",
type: "ip"
},
label: ['EC2'],
ID: {
id: '1'
}
},
],
"edges": []
}
//console.log(data);//uncomment this to see file content
var cy = cytoscape({
elements: data,
container: document.getElementById("cy"),
style: [{
selector: "node",
style: {
shape: "hexagon",
"background-color": "red",
//label: "data(id)",
label: "ID(id)"
}
}],
layout: {
name: "grid"
}
});
#cy {
width: 400px;
height: 200px;
position: absolute;
top: 5px;
left: 5px;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/cytoscape/2.7.10/cytoscape.js"></script>
<body>
<div id="cy"></div>
</body>
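For what it's worth, Cytoscape.js style mappers are documented to read only from an element's data object (data(...)); arbitrary top-level keys on the element are not evaluated. A minimal sketch of the usual workaround, keeping the value inside data (key names here are illustrative):
var cy = cytoscape({
  container: document.getElementById("cy"),
  elements: [
    // keep the value inside `data` so a style mapper can reach it
    { data: { id: "1", label: "EC2", type: "ip" } }
  ],
  style: [{
    selector: "node",
    style: {
      label: "data(label)" // mappers such as data(...) only look at ele.data()
    }
  }],
  layout: { name: "grid" }
});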
|
Can we use other than the "data(id)" for rendering labels in cytoscape.js ? i.e without using "data" object?
|
I was just wondering if we can use something like
label: "ID(id)"
where nodes object would be like :
nodes: [
{
data: { label: "IP 1", type: "ip" },
label:['EC2'],
ID:{id:'1'}
}
]
I don't see any particular documentation that specifies the use of "data" key to render.
When trying the above code, it just prints the label as a literal string instead of evaluating the given expression.
Any inputs are appreciated.
Thanks in advance!
|
[
"This is some input (live-code) that is requested. It demonstrates the use of\nlabel: \"ID(id)\"\n\ndoes not produce expected result.\n\n\nvar data = {\n \"nodes\": [{\n data: {\n label: \"IP 1\",\n type: \"ip\"\n },\n label: ['EC2'],\n ID: {\n id: '1'\n }\n },\n ],\n \"edges\": []\n}\n\n\n//console.log(data);//uncomment this to see file content\n\nvar cy = cytoscape({\n elements: data,\n container: document.getElementById(\"cy\"),\n style: [{\n selector: \"node\",\n style: {\n shape: \"hexagon\",\n \"background-color\": \"red\",\n //label: \"data(id)\",\n label: \"ID(id)\"\n }\n }],\n layout: {\n name: \"grid\"\n }\n});\n#cy {\n width: 400px;\n height: 200px;\n position: absolute;\n top: 5px;\n left: 5px;\n}\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/cytoscape/2.7.10/cytoscape.js\"></script>\n\n<body>\n <div id=\"cy\"></div>\n</body>\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"cytoscape",
"cytoscape.js"
] |
stackoverflow_0074623852_cytoscape_cytoscape.js.txt
|
Q:
How to add mouseover eventlistener to xAxis area of Highcharts?
I added a mouseover event listener to the .highcharts-xaxis-labels class of Highcharts. However, it logs to the console only on mouseover of the <text> elements, not the rest of the .highcharts-xaxis-labels group.
How can I add the event listener so that it logs on mouseover anywhere over the .highcharts-xaxis-labels group, not only on the <text> inside it? That would be the <g> with the className .highcharts-xaxis-labels.
live example: https://jsfiddle.net/simazargar/sv9e1g5x/9/
Highcharts.chart('container', {
series: [{
data: [29.9, 71.5, 106.4, 129.2, 144.0, 176.0, 135.6, 148.5, 216.4, 194.1, 95.6, 54.4]
}]
}, chart => {
document.querySelector('.highcharts-xaxis-labels')
.addEventListener('mouseover', function(e) {
console.log('mouseover');
});
});
<script src="https://code.highcharts.com/highcharts.js"></script>
<script src="https://code.highcharts.com/modules/accessibility.js"></script>
<div id="container"></div>
A:
I believe it would not be possible unless we have a new <rect> with a new className such as .highcharts-xaxis-box which will also allow to style this area rather than only the labels (similar to what Highcharts has for Legend).
I will ask for a new feature https://github.com/highcharts/highcharts/issues/18082
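Until such a feature exists, one workaround sketch (untested; it uses the documented chart.renderer API, while labelGroup is an internal axis property, so treat this as an assumption) is to draw an invisible <rect> over the label area and listen on that instead:
Highcharts.chart('container', {
  series: [{ data: [29.9, 71.5, 106.4, 129.2, 144.0, 176.0] }]
}, chart => {
  // Bounding box of the xAxis label group (internal property).
  const box = chart.xAxis[0].labelGroup.getBBox();

  // Invisible rect covering the label strip; it catches pointer events.
  chart.renderer
    .rect(chart.plotLeft, chart.plotTop + chart.plotHeight, chart.plotWidth, box.height)
    .attr({ fill: 'rgba(0, 0, 0, 0)', zIndex: 10 })
    .add()
    .on('mouseover', () => console.log('mouseover'));
});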
|
How to add mouseover eventlistener to xAxis area of Highcharts?
|
I added a mouseover event listener to the .highcharts-xaxis-labels class of Highcharts. However, it logs to the console only on mouseover of the <text> elements, not the rest of the .highcharts-xaxis-labels group.
How can I add the event listener so that it logs on mouseover anywhere over the .highcharts-xaxis-labels group, not only on the <text> inside it? That would be the <g> with the className .highcharts-xaxis-labels.
live example: https://jsfiddle.net/simazargar/sv9e1g5x/9/
Highcharts.chart('container', {
series: [{
data: [29.9, 71.5, 106.4, 129.2, 144.0, 176.0, 135.6, 148.5, 216.4, 194.1, 95.6, 54.4]
}]
}, chart => {
document.querySelector('.highcharts-xaxis-labels')
.addEventListener('mouseover', function(e) {
console.log('mouseover');
});
});
<script src="https://code.highcharts.com/highcharts.js"></script>
<script src="https://code.highcharts.com/modules/accessibility.js"></script>
<div id="container"></div>
|
[
"I believe it would not be possible unless we have a new <rect> with a new className such as .highcharts-xaxis-box which will also allow to style this area rather than only the labels (similar to what Highcharts has for Legend).\nI will ask for a new feature https://github.com/highcharts/highcharts/issues/18082\n"
] |
[
0
] |
[] |
[] |
[
"highcharts",
"javascript",
"svg"
] |
stackoverflow_0074659898_highcharts_javascript_svg.txt
|
Q:
How to automatically install Ansible Galaxy roles?
All my Ansible playbooks/roles are checked in to my git repo.
However, for Ansible Galaxy roles I always have to explicitly download them one by one on every machine I want to run Ansible from.
It's even tough to know in advance exactly which Ansible Galaxy roles are needed until Ansible complains about a missing role at runtime.
How is one supposed to manage the Ansible Galaxy role dependencies? I would like to either have them checked into my git repo along with the rest of my ansible code or have them automatically be identified and downloaded when I run Ansible on a new machine.
A:
You should use a requirements.yml file for this use-case. Describe the roles you require, using any of a variety of install methods:
# Install a role from the Ansible Galaxy
- src: dfarrell07.opendaylight
# Install a role from GitHub
- name: opendaylight
src: https://github.com/dfarrell07/ansible-opendaylight
# Install a role from a specific git branch
- name: opendaylight
src: https://github.com/dfarrell07/ansible-opendaylight
version: origin/master
# Install a role at a specific tag from GitHub
- name: opendaylight
src: https://github.com/dfarrell07/ansible-opendaylight
version: 1.0.0
# Install a role at a specific commit from GitHub
- name: opendaylight
src: https://github.com/dfarrell07/ansible-opendaylight
version: <commit hash>
Then install them:
ansible-galaxy install -r requirements.yml
Here's a working example (installing OpenDaylight using Ansible as a Vagrant provisioner). See the relevant Ansible docs for more info.
A:
As suggested, you can use ansible galaxy for this need.
Ansible has a feature where you can create a requirements.yml file that lists all of your roles. You can find out about that here: http://docs.ansible.com/ansible/latest/galaxy.html#installing-multiple-roles-from-a-file
For example (requirements.yml):
- src: yatesr.timezone
You then run ansible-galaxy install -r requirements.yml on this file to download all of the roles listed there.
If you would like to further automate it then, you can create a simple shell script that will run the two commands.
For example (ansible.sh):
./ansible.sh
ansible-galaxy install -r requirements.yml
ansible-playbook playbook.yml -i inventory
A:
I often find myself installing a Java JDK. Using a role makes that much easier. I've tried a couple of different ways (including lots of .gitmodules and submodules... I have to use multiple git systems for work and it all gets ugly). My largest requirement is that I not check role code into my playbook project, mostly so I can keep everything in one place.
The contents of my 'requirements.yml' file:
- src: https://github.com/staylorx/ansible-role-wls-prep.git
version: master
name: staylorx.wls-prep
- src: https://my-work-git-extravaganza.com
version: 2.x
name: coolplace.niftyrole
#From Ansible Galaxy
- src: staylorx.oracle-jdk
I run a separate playbook, install-roles.yml:
---
- hosts: localhost
tasks:
- file:
path: roles
state: absent
- local_action:
command ansible-galaxy install -r requirements.yml --roles-path roles
- lineinfile:
dest: .gitignore
regexp: '^\/roles$'
line: '/roles'
state: present
I run this first playbook, then I run my roles in any playbook normally. For me the secret is to ensure it's ignored by git so I don't check the roles in by mistake. Also since I wipe out the folder every time, I ensure I don't need to force or ignore errors.
A:
You could use an Ansible role to install the needed roles using the command module.
Here is a very basic example that runs ansible-galaxy install:
- name: Install roles from Ansible Galaxy
command: ansible-galaxy install {{ item.item }}
with_items:
- "{{ ansible_roles_list }}"
The ansible_roles_list may be supplied as a variable or as a role parameter.
If you do this in a role, it has to be applied before any other roles that you want to install using it, in a separate playbook. This is because Ansible checks if all the roles are available before running the playbook where you reference them.
A:
Another solution is to use git submodules. After all, Ansible Galaxy is only a directory of github repositories...
I use this command to automatically add any Galaxy role as a submodule:
ansible-galaxy info <package> | grep -A 1 github_repo | tr '\n' ' ' | sed -e "s/.*github_repo: \([^[:space:]]*\)[^\w]*github_user: \([^[:space:]]*\)[[:space:]]*/git submodule add git:\/\/github.com\/\2\/\1.git roles\/\2.\1/g" | sh
Commit the changes then to your git repo. When you clone your repo in future make sure to clone it with submodules, e.g. git clone ... --recursive
An advantage of this is, a git submodule is always referencing a specific version (git commit-hash). This will prevent you from running untested updates in your productive environment. A new version of a Galaxy role could have bugs or work completely different than before. With a git submodule you decide if and when you update a role to the new version.
Also, you won't have to additionally take care of blacklisting galaxy roles in your .gitignore to prevent committing their code to your repository.
A:
At this point in time, as far as I know there's no automatic way to download roles at runtime.
Your best bet is to either commit them into your own repo or have a proper documentation listing all the requirements.
You could even create a pre-flight playbook that installs your roles. :)
A:
Here, my requirements are on the role and used in install.yml
main.yml
# tasks file for MY_ROLE
- name: Install requirements
local_action: command ansible-galaxy install -r {{ role_path }}/requirements.yml -p /etc/ansible/roles
- include_tasks: install.yml
.
├── playbook.yml
├── inventory
├── roles
│ └── My_Role
│ ├── tasks
│ │ └── main.yml
│ │ └── install.yml
│ └── requirements.yml
A:
If requirements.yml resides in the roles directory of your project, then Tower/AWX installs the roles automatically.
A:
On your gitlab account create a group where you put all your roles
Go to settings/repository and add a token with read rights
Copy the token-name:token an paste it in a requirements.yml file
- src: 'https://<token-name>:<token>@gitlab.com/ansible-cim/roles/instnginx.git'
scm: 'git'
version: 'v0.0.1'
name: 'instnginx'
Edit ansible.cfg if necessary to indicate where roles will be installed
[defaults]
roles_path=./roles
Create folder ./roles if necessery
Launch ansible-galaxy command
mkdir roles
ansible-galaxy install -r requirements.yml
A:
I use Crono's method in AWX but also on my local ansible controller.
It works well but somehow messes with the path where roles are eventually downloaded to.
My Git project is called 'roles' and I add it to requirements.yml:
- src: "git+https://<token_name>:<token>@gitlab.mydomain.com/mygroup/ansible/roles.git"
When I run
ansible-galaxy install -r requirements.yml
I get this folder structure in my roles directory after sync:
├── roles
│ └── roles
│ └── My_Role
│ ├── tasks
│ │ └── main.yml
This forces me to include my roles like this in the playbook:
hosts: [all]
roles:
- roles/roles/My_Role
Is there any way to sync the roles from git without the root folder?
You can do this with git via:
git clone "https://<token_name>:<token>@gitlab.mydomain.com/mygroup/ansible/roles.git ."
but the dot convention does not work inside the requirements.yml file.
Any ideas will be much appreciated.
A:
Example: how to install an Ansible Galaxy role in a playbook:
- name: Install role from Ansible Galaxy
local_action: command /usr/bin/ansible-galaxy install <GALAXY_PACKAGE_NAME>
A:
There is no mechanism to automatically download a playbook's needed roles. As others have suggested, using a requirements.yml file is one way, and maybe the best way, to do this.
However, you can also use a role's meta/main.yml file to specify its dependencies. See the Roles documentation page. You can use this syntax if you're using a private repo, for example.
dependencies:
- name: java
src: ssh://git@myServer/myWorkspace/myRole.git
scm: git
version: master
So you can define a requirements.yml file to download myRole, and then it will download any roles it needs, and they can in turn download any roles they need through their meta/main.yml files. This is a lot more work than having a requirements.yml file IMO.
A:
Simply put: you can't. At best you can add a separate role to do the installs, but it will still fail if you try to include a Galaxy role in your playbook. So there's simply no other way than to install it manually. Yes, it is ridiculous, like many other things in Ansible.
|
How to automatically install Ansible Galaxy roles?
|
All my Ansible playbooks/roles are checked in to my git repo.
However, for Ansible Galaxy roles I always have to explicitly download them one by one on every machine I want to run Ansible from.
It's even tough to know in advance exactly which Ansible Galaxy roles are needed until Ansible complains about a missing role at runtime.
How is one supposed to manage the Ansible Galaxy role dependencies? I would like to either have them checked into my git repo along with the rest of my ansible code or have them automatically be identified and downloaded when I run Ansible on a new machine.
|
[
"You should use a requirements.yml file for this use-case. Describe the roles you require, using any of a variety of install methods:\n# Install a role from the Ansible Galaxy\n- src: dfarrell07.opendaylight\n\n# Install a role from GitHub\n- name: opendaylight\n src: https://github.com/dfarrell07/ansible-opendaylight\n\n# Install a role from a specific git branch\n- name: opendaylight\n src: https://github.com/dfarrell07/ansible-opendaylight\n version: origin/master\n\n# Install a role at a specific tag from GitHub\n- name: opendaylight\n src: https://github.com/dfarrell07/ansible-opendaylight\n version: 1.0.0\n\n# Install a role at a specific commit from GitHub\n- name: opendaylight\n src: https://github.com/dfarrell07/ansible-opendaylight\n version: <commit hash>\n\nThen install them:\nansible-galaxy install -r requirements.yml\n\nHere's a working example (installing OpenDaylight using Ansible as a Vagrant provisioner). See the relevant Ansible docs for more info.\n",
"As suggested, you can use ansible galaxy for this need.\nAnsible has a feature where you can create a requirements.yml file that lists all of your roles. You can find out about that here: http://docs.ansible.com/ansible/latest/galaxy.html#installing-multiple-roles-from-a-file\nFor example (requirements.yml):\n- src: yatesr.timezone\n\nYou then run ansible-galaxy install -r requirements.yml on this file to download all of the roles listed there.\nIf you would like to further automate it then, you can create a simple shell script that will run the two commands.\nFor example (ansible.sh):\n./ansible.sh\nansible-galaxy install -r requirements.yml\nansible-playbook playbook.yml -i inventory \n\n",
"I often find myself installing installing a Java JDK. Using a role makes that touch easier. I've tried a couple of different ways (including lots of .gitmodules and submodule... I have to use multiple git systems for work and all it gets ugly). My largest requirement is that I not check role code into my playbook project, mostly so I can keep everything in one place.\nThe contents of my 'requirements.yml' file:\n- src: https://github.com/staylorx/ansible-role-wls-prep.git\n version: master\n name: staylorx.wls-prep\n\n- src: https://my-work-git-extravaganza.com\n version: 2.x\n name: coolplace.niftyrole\n\n#From Ansible Galaxy\n- src: staylorx.oracle-jdk\n\nI run a separate playbook, install-roles.yml:\n---\n\n- hosts: localhost\n\n tasks:\n - file:\n path: roles\n state: absent\n\n - local_action:\n command ansible-galaxy install -r requirements.yml --roles-path roles\n\n - lineinfile:\n dest: .gitignore\n regexp: '^\\/roles$'\n line: '/roles'\n state: present\n\nI run this first playbook, then I run my roles in any playbook normally. For me the secret is to ensure it's ignored by git so I don't check the roles in by mistake. Also since I wipe out the folder every time, I ensure I don't need to force or ignore errors.\n",
"You could use an Ansible role to install the needed roles using the command module.\nHere is a very basic example that runs ansible-galaxy install:\n- name: Install roles from Ansible Galaxy\n command: ansible-galaxy install {{ item.item }}\n with_items:\n - \"{{ ansible_roles_list }}\"\n\nThe ansible_roles_list may be supplied as a variable or as a role parameter.\nIf you do this in a role, it has to be applied before any other roles that you want to install using it, in a separate playbook. This is because Ansible checks the if all the roles are available before running the playbook where you reference them.\n",
"Another solution is to use git submodules. After all, Ansible Galaxy only is a directory of github repositories...\nI use this command to automatically add any Galaxy role as a submodule:\nansible-galaxy info <package> | grep -A 1 github_repo | tr '\\n' ' ' | sed -e \"s/.*github_repo: \\([^[:space:]]*\\)[^\\w]*github_user: \\([^[:space:]]*\\)[[:space:]]*/git submodule add git:\\/\\/github.com\\/\\2\\/\\1.git roles\\/\\2.\\1/g\" | sh\n\nCommit the changes then to your git repo. When you clone your repo in future make sure to clone it with submodules, e.g. git clone ... --recursive\nAn advantage of this is, a git submodule is always referencing a specific version (git commit-hash). This will prevent you from running untested updates in your productive environment. A new version of a Galaxy role could have bugs or work completely different than before. With a git submodule you decide if and when you update a role to the new version.\nAlso, you won't have to additionally take care of blacklisting galaxy roles in your .gitignore to prevent committing their code to your repository. \n",
"At this point in time, as far as I know there's no automatic way to download roles at runtime.\nYour best bet is to either commit them into your own repo or have a proper documentation listing all the requirements.\nYou could even create a pre-flight playbook that installs your roles. :)\n",
"Here, my requirements are on the role and used in install.yml\nmain.yml\n # tasks file for MY_ROLE\n- name: Install requirements\n local_action: command ansible-galaxy install -r {{ role_path }}/requirements.yml -p /etc/ansible/roles\n\n- include_tasks: install.yml \n\n. \n├── playbook.yml \n├── inventory \n├── roles \n│ └── My_Role \n│ ├── tasks \n│ │ └── main.yml \n│ │ └── install.yml \n│ └── requirements.yml\n\n",
"If requirements.yml resides in the roles directory of your project, then Tower/AWX installs the roles automatically.\n",
"\nOn your gitlab account create a group where you put all your roles\nGo to settings/repository and add a token with read rights\nCopy the token-name:token an paste it in a requirements.yml file\n\n- src: 'https://<token-name>:<token>@gitlab.com/ansible-cim/roles/instnginx.git'\n scm: 'git'\n version: 'v0.0.1'\n name: 'instnginx'\n\n\nEdit ansible.cfg if necessary to indicate where roles will be installed\n\n[defaults]\nroles_path=./roles\n\n\nCreate folder ./roles if necessery\nLaunch ansible-galaxy command\n\nmkdir roles\nansible-galaxy install -r requirements.yml\n\n",
"I use Crono's method in AWX but also on my local ansible controller.\nIt works well but somehow messes with the path where roles are eventually downloaded to.\nMy Git project is called 'roles' and I add it to requirements.yml:\n- src: \"git+https://<token_name>:<token>@gitlab.mydomain.com/mygroup/ansible/roles.git\"\n\nWhen I run\nansible-galaxy install -r requirements.yml\n\nI get this folder structure in my roles directory after sync:\n├── roles \n│ └── roles\n│ └── My_Role \n│ ├── tasks \n│ │ └── main.yml \n\nThis forces me to include my roles like this in the playbook:\n hosts: [all]\n roles:\n - roles/roles/My_Role\n\nIs there any way to sync the roles from git without the root folder??\nYou can do this with git via:\ngit clone \"https://<token_name>:<token>@gitlab.mydomain.com/mygroup/ansible/roles.git .\"\n\nbut the dot convention does not work inside the requirements.yml file.\nAny ideas will be much appreciated.\n",
"Exaple HowTo install a Ansible Galaxy role in playbook:\n- name: Install role from Ansible Galaxy\n local_action: command /usr/bin/ansible-galaxy install <GALAXY_PACKAGE_NAME> \n\n",
"There is no mechanism to automatically download a playbook's needed roles. As others have suggested, using a requirements.yml file is one way, and maybe the best way, to do this.\nHowever, you can also use a role's meta/main.yml file to specify it's dependencies. See the Roles documentation page. You can use this syntax for if you're using a private repo for example.\ndependencies:\n - name: java\n src: ssh://git@myServer/myWorkspace/myRole.git\n scm: git\n version: master\n\nSo you can define a requirements.yml file to download myRole, and then it will download any roles it needs, and they can in turn download any roles they need through their meta/main.yml files. This is a lot more work that having a requirements.yml file IMO.\n",
"Simply put: you can't. At best you can add a separate role to do the installs, but it will still fail if you try to include a Galaxy role in your playbook. So there's simply no other way than to install it manually. Yes, it is ridiculous, like many other things in Ansible.\n"
] |
[
174,
56,
19,
11,
5,
2,
2,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"ansible",
"ansible_galaxy"
] |
stackoverflow_0025230376_ansible_ansible_galaxy.txt
|
Q:
Function to receive sheet id as parameter and return the spreadsheet object
I am trying to define a function in google apps script that receives the sheet id as input and returns the spreadsheet object so I can do further stuff with it like get range and values.
function spreadsheetCall() {
const ss = SpreadsheetApp.openById("1eZcZ0e1AQZ4DRLO9HQsF024qsmraIewY6LUkWYicYmY").getSheetByName("Semanal");
return ss
};
Logger.log(spreadsheetCall().getRange("A1").getValues());
When I try that it works like a charm, I can get the range and values I want, but the function is not dynamic since the sheet id is hardcoded into the function. I am trying to have something like this
function spreadsheetCall(sheetID) {
const ss = SpreadsheetApp.openById(sheetID).getSheetByName("Semanal");
return ss
};
where if I have a list of multiple sheets I do not have to make a function for each, but rather apply the same one multiple times as needed to get what I want. Any guidance is helpful. I know basic Python, so maybe JavaScript works differently; just asking to see if it is possible to do what I am thinking of, or if I should find another approach.
Thanks
I tried creating a string with the spreadsheet call, taking away the quotation marks to return it from a function, and then using another function to try to make the proper call, but it did not work.
A:
How about something like this:
function getallsheetsatonce(sheetId="11yNxdh_GIokeHIctdzt3LDKVm7rYIJvAPlMLzunn7zE") {
let obj = {sA:[]}
let ss = SpreadsheetApp.openById(sheetId);
ss.getSheets().forEach(sh => {
obj.sA.push(sh.getName());
obj[sh.getName()]= sh.getDataRange().getValues();
});
return obj;
}
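For what it's worth, your parameterized spreadsheetCall(sheetID) version should work as-is; Apps Script functions accept arguments like any JavaScript function. A usage sketch for the helper above (the spreadsheet ID is a placeholder, and the sheet name "Semanal" is taken from the question):
function demo() {
  const all = getallsheetsatonce("YOUR_SPREADSHEET_ID"); // placeholder ID
  Logger.log(all.sA);               // list of sheet names
  Logger.log(all["Semanal"][0][0]); // value of A1 on the "Semanal" sheet
}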
|
Function to receive sheet id as parameter and return the spreadsheet object
|
I am trying to define a function in google apps script that receives the sheet id as input and returns the spreadsheet object so I can do further stuff with it like get range and values.
function spreadsheetCall() {
const ss = SpreadsheetApp.openById("1eZcZ0e1AQZ4DRLO9HQsF024qsmraIewY6LUkWYicYmY").getSheetByName("Semanal");
return ss
};
Logger.log(spreadsheetCall().getRange("A1").getValues());
When I try that it works like a charm, I can get the range and values I want, but the function is not dynamic since the sheet id is hardcoded into the function. I am trying to have something like this
function spreadsheetCall(sheetID) {
const ss = SpreadsheetApp.openById(sheetID).getSheetByName("Semanal");
return ss
};
where if I have a list of multiple sheets I do not have to make a function for each, but rather apply the same one multiple times as needed to get what I want. Any guidance is helpful. I know basic Python, so maybe JavaScript works differently; just asking to see if it is possible to do what I am thinking of, or if I should find another approach.
Thanks
I tried creating a string with the spreadsheet call, taking away the quotation marks to return it from a function, and then using another function to try to make the proper call, but it did not work.
|
[
"How about something like this:\nfunction getallsheetsatonce(sheetId=\"11yNxdh_GIokeHIctdzt3LDKVm7rYIJvAPlMLzunn7zE\") {\n let obj = {sA:[]}\n let ss = SpreadsheetApp.openById(sheetId);\n ss.getSheets().forEach(sh => {\n obj.sA.push(sh.getName());\n obj[sh.getName()]= sh.getDataRange().getValues();\n });\n return obj;\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"google_apps_script",
"google_sheets",
"javascript"
] |
stackoverflow_0074660371_google_apps_script_google_sheets_javascript.txt
|
Q:
Find if words from one sentence are found in corresponding row of another column also containing sentences (Pandas)
I have dataframe that looks like this:
email account_name
0 NaN weichert, realtors mnsota
1 jhawkins sterling group com sterling group
2 lbaltz baltzchevy com baltz chevrolet
and I have this code that works as a solution, but it takes forever on larger datasets and I know there has to be an easier way to solve it, so I'm just looking to see if anyone knows of a more concise/elegant way to find the count of matching words between corresponding rows of both columns. Thanks
test = prod_nb_wcomps_2.sample(3, random_state=10).reset_index(drop = True)
test = test[['email','account_name']]
print(test)
lst = []
for i in test.index:
if not isinstance(test['email'].iloc[i], float):
for word in test['email'].iloc[i].split(' '):
if not isinstance(test['account_name'].iloc[i], float):
for word2 in test['account_name'].iloc[i].split(' '):
if word in word2:
lst.append({'index':i, 'bool_col': True})
else: lst.append({'index':i, 'bool_col': False})
df_dct = pd.DataFrame(lst)
df_dct = df_dct.loc[df_dct['bool_col'] == True]
df_dct['number of matches_per_row'] = df_dct.groupby('index')['bool_col'].transform('size')
df_dct.set_index('index', inplace=True, drop=True)
df_dct.drop(['bool_col'], inplace=True, axis =1)
test_ = pd.merge(test, df_dct, left_index=True, right_index=True)
test_
the resulting dataframe test_ looks like this
A:
This solves your query.
import pandas as pd
df = pd.DataFrame({'email': ['', 'jhawkins sterling group com', 'lbaltz baltzchevy com'], 'name': ['John', 'sterling group', 'Linda']})
for index, row in df.iterrows():
matches = sum([1 for x in row['email'].split() if x in row['name'].split()])
df.loc[index, 'matches'] = matches
Output:
email name matches
0 John 0.0
1 jhawkins sterling group com sterling group 2.0
2 lbaltz baltzchevy com Linda 0.0
A:
You can use the apply method on your dataframe to apply a function to each row, which can simplify your code and make it more efficient.
The apply method will apply the function you specify to each row of the dataframe, and the function should take a single row as input and return the desired result. In your case, you can define a function that takes a row as input, splits the email and account_name values in that row into words, and then counts the number of words that appear in both the email and account_name values. Here is an example of how you could define and use this function:
def count_matching_words(row):
email_words = row['email'].split(' ')
account_name_words = row['account_name'].split(' ')
return len(set(email_words).intersection(account_name_words))
test['number of matches_per_row'] = test.apply(count_matching_words, axis=1)
This code will apply the count_matching_words function to each row of the test dataframe, and the result will be a new column in the dataframe that contains the number of matching words between the email and account_name values in each row. This should be much more efficient and concise than your current solution, and it should work well even on larger datasets.
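One caveat, given the NaN visible in the sample data: row['email'].split(' ') will raise an AttributeError when the value is a float NaN. A defensive variant of the same idea (a sketch; column names as in the question):
def count_matching_words(row):
    # Guard against NaN (floats) in either column before splitting.
    if not isinstance(row['email'], str) or not isinstance(row['account_name'], str):
        return 0
    email_words = set(row['email'].split())
    account_name_words = set(row['account_name'].split())
    return len(email_words & account_name_words)

test['number of matches_per_row'] = test.apply(count_matching_words, axis=1)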
|
Find if words from one sentence are found in corresponding row of another column also containing sentences (Pandas)
|
I have dataframe that looks like this:
email account_name
0 NaN weichert, realtors mnsota
1 jhawkins sterling group com sterling group
2 lbaltz baltzchevy com baltz chevrolet
and I have this code that works as a solution, but it takes forever on larger datasets and I know there has to be an easier way to solve it, so I'm just looking to see if anyone knows of a more concise/elegant way to find the count of matching words between corresponding rows of both columns. Thanks
test = prod_nb_wcomps_2.sample(3, random_state=10).reset_index(drop = True)
test = test[['email','account_name']]
print(test)
lst = []
for i in test.index:
if not isinstance(test['email'].iloc[i], float):
for word in test['email'].iloc[i].split(' '):
if not isinstance(test['account_name'].iloc[i], float):
for word2 in test['account_name'].iloc[i].split(' '):
if word in word2:
lst.append({'index':i, 'bool_col': True})
else: lst.append({'index':i, 'bool_col': False})
df_dct = pd.DataFrame(lst)
df_dct = df_dct.loc[df_dct['bool_col'] == True]
df_dct['number of matches_per_row'] = df_dct.groupby('index')['bool_col'].transform('size')
df_dct.set_index('index', inplace=True, drop=True)
df_dct.drop(['bool_col'], inplace=True, axis =1)
test_ = pd.merge(test, df_dct, left_index=True, right_index=True)
test_
the resulting dataframe test_ looks like this
|
[
"This solves your query.\nimport pandas as pd\n\ndf = pd.DataFrame({'email': ['', 'jhawkins sterling group com', 'lbaltz baltzchevy com'], 'name': ['John', 'sterling group', 'Linda']})\n\nfor index, row in df.iterrows():\n matches = sum([1 for x in row['email'].split() if x in row['name'].split()])\n df.loc[index, 'matches'] = matches\n\nOutput:\n email name matches\n0 John 0.0\n1 jhawkins sterling group com sterling group 2.0\n2 lbaltz baltzchevy com Linda 0.0\n\n",
"You can use the apply method on your dataframe to apply a function to each row, which can simplify your code and make it more efficient.\nThe apply method will apply the function you specify to each row of the dataframe, and the function should take a single row as input and return the desired result. In your case, you can define a function that takes a row as input, splits the email and account_name values in that row into words, and then counts the number of words that appear in both the email and account_name values. Here is an example of how you could define and use this function:\ndef count_matching_words(row):\n email_words = row['email'].split(' ')\n account_name_words = row['account_name'].split(' ')\n return len(set(email_words).intersection(account_name_words))\n\ntest['number of matches_per_row'] = test.apply(count_matching_words, axis=1)\n\nThis code will apply the count_matching_words function to each row of the test dataframe, and the result will be a new column in the dataframe that contains the number of matching words between the email and account_name values in each row. This should be much more efficient and concise than your current solution, and it should work well even on larger datasets.\n"
] |
[
2,
2
] |
[] |
[] |
[
"group_by",
"pandas",
"python"
] |
stackoverflow_0074660484_group_by_pandas_python.txt
|
Q:
A class for rational number (p/q) with overloading + and << operator
I wanted to add two rational numbers and display them in the form of p/q using overloading the operators + and <<.
I'm using friend functions, because the functions for addition and display take multiple and different types of parameters. Inside the addition function I'm performing a normal addition of fractions, like how we do it in real life. But when I run the code I get an error that it can't convert Rational to Rational().
Error: Rational.cpp: In function 'int main()': Rational.cpp:51:15: error: assignment of function 'Rational R3()' R3 = R1 + R2; Rational.cpp:51:15: error: cannot convert 'Rational' to 'Rational()' in assignment*
I have no idea why it's saying that...?
C++
#include <iostream>
using namespace std;
class Rational
{
private:
int P;
int Q;
public:
Rational(int p = 1, int q = 1)
{
P = p;
Q = q;
}
friend Rational operator+(Rational r1, Rational r2);
friend ostream & operator<<(ostream &out, Rational r3);
};
Rational operator+(Rational r1, Rational r2)
{
Rational temp;
if(r1.Q == r2.Q)
{
temp.P = r1.P + r2.P;
temp.Q = r1.Q;
}
else
{
temp.P = ((r1.P) * (r2.Q)) + ((r2.P) * (r1.Q));
temp.Q = (r1.Q) * (r2.Q);
}
return temp;
}
ostream & operator<<(ostream &out, Rational r3)
{
out<<r3.P<<"/"<<r3.Q<<endl;
return out;
}
int main()
{
Rational R1(3,4);
Rational R2(5,6);
Rational R3();
R3 = R1 + R2;
cout<<R3;
}
A:
This
Rational R3();
declares a function called R3 that returns a Rational and takes no parameters. It does not define R3 to be a default constructed Rational. Change the line to any of the below
Rational R3;
Rational R3{};
auto R3 = Rational();
auto R3 = Rational{};
A:
There you go. Just remove the parentheses from R3.
int main(){
Rational R1(3,4);
Rational R2(5,6);
Rational R3;
R3 = R1 + R2;
cout<<R3;
}
|
A class for rational number (p/q) with overloading + and << operator
|
I wanted to add two rational numbers and display them in the form of p/q using overloading the operators + and <<.
I'm using friend functions, because the functions for addition and display take multiple and different types of parameters. Inside the addition function I'm performing a normal addition of fractions, like how we do it in real life. But when I run the code I get an error that it can't convert Rational to Rational().
Error: Rational.cpp: In function 'int main()': Rational.cpp:51:15: error: assignment of function 'Rational R3()' R3 = R1 + R2; Rational.cpp:51:15: error: cannot convert 'Rational' to 'Rational()' in assignment*
I have no idea why it's saying that...?
C++
#include <iostream>
using namespace std;
class Rational
{
private:
int P;
int Q;
public:
Rational(int p = 1, int q = 1)
{
P = p;
Q = q;
}
friend Rational operator+(Rational r1, Rational r2);
friend ostream & operator<<(ostream &out, Rational r3);
};
Rational operator+(Rational r1, Rational r2)
{
Rational temp;
if(r1.Q == r2.Q)
{
temp.P = r1.P + r2.P;
temp.Q = r1.Q;
}
else
{
temp.P = ((r1.P) * (r2.Q)) + ((r2.P) * (r1.Q));
temp.Q = (r1.Q) * (r2.Q);
}
return temp;
}
ostream & operator<<(ostream &out, Rational r3)
{
out<<r3.P<<"/"<<r3.Q<<endl;
return out;
}
int main()
{
Rational R1(3,4);
Rational R2(5,6);
Rational R3();
R3 = R1 + R2;
cout<<R3;
}
|
[
"This\nRational R3();\n\ndeclares a function called R3 that returns a Rational and takes no parameters. It does not define R3 to be a default constructed Rational. Change the line to any of the below\n Rational R3; \n Rational R3{};\n auto R3 = Rational();\n auto R3 = Rational{};\n\n",
"There you go. Just remove the parenthesis from R3.\nint main(){\n\n Rational R1(3,4);\n Rational R2(5,6);\n Rational R3;\n\n R3 = R1 + R2;\n cout<<R3;\n\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"c++",
"class",
"friend_function",
"operator_overloading"
] |
stackoverflow_0068212028_c++_class_friend_function_operator_overloading.txt
|
Q:
Rails 6 with Webpacker & SCSS not working
I am having two separate issues (or maybe they are combined and I'm missing it). The app was picking up the bootstrap styles, but is no longer doing so.
Issue 1
When I make any updates to application.js no matter how small (an extra line break anywhere in the file) it would kill the imported bootstrap files.
Now I can't get the bootstrap styles to show period.
Issue 2
When I put the following into the head tag in application.html.erb:
<!-- before -->
<%= stylesheet_pack_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %>
<!-- after -->
It renders no output to the browser:
<!-- before -->
<!-- after -->
I'm uncertain if this is a Webpacker issue or what is causing this. Please let me know if any other details are required.
I have a full repo here that you can clone / browse with instructions for bringing up the dev environment with Docker.
You can check it out here: Funtime Github repo
A:
Webpacker is not configured to extract any css.
Set extract_css: true in webpacker.yml. Setting this to true will extract any css you import in js files under /packs to separate css files. In your case any css imported in application.js will be available in application.css. If you had a pack called test, the css will be extracted to test.css.
Move out application.scss from packs to /css (or stylesheets, whatever you want)
Update application.js like this:
import "./../css/application";
import Rails from "@rails/ujs";
....
Make sure you start the webpacker dev server with bin/webpack-dev-server.
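A minimal sketch of the webpacker.yml change (assuming the stock Rails 6 config layout, where extract_css sits under the default: &default anchor and can be overridden per environment):
# config/webpacker.yml
default: &default
  source_path: app/javascript
  source_entry_path: packs
  extract_css: true # extract CSS imported in packs into separate .css files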
A:
With Shakapacker:
app/javascript:
├── packs:
│ # only webpack entry files here
│ └── online_giving.js
│ └── online_giving.scss
└── src:
│ └── my_component.js
└── stylesheets:
│ └── my_styles.css
└── images:
└── logo.svg
And in the app/views/layouts/mylayout.html.haml:
= javascript_pack_tag 'online_giving'
= stylesheet_pack_tag 'online_giving'
You don't need to import online_giving.scss in the online_giving.js file; Shakapacker will find it.
|
Rails 6 with Webpacker & SCSS not working
|
I am having two separate issues (or maybe they are combined and I'm missing it). The app was picking up the bootstrap styles, but is no longer doing so.
Issue 1
When I make any updates to application.js no matter how small (an extra line break anywhere in the file) it would kill the imported bootstrap files.
Now I can't get the bootstrap styles to show period.
Issue 2
When I put the following into the head tag in application.html.erb:
<!-- before -->
<%= stylesheet_pack_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %>
<!-- after -->
It renders no output to the browser:
<!-- before -->
<!-- after -->
I'm uncertain if this is a Webpacker issue or what is causing this. Please let me know if any other details are required.
I have a full repo here that you can clone / browse with instructions for bringing up the dev environment with Docker.
You can check it out here: Funtime Github repo
|
[
"Webpacker is not configured to extract any css.\n\nSet extract_css: true in webpacker.yml. Setting this to true will extract any css you import in js files under /packs to separate css files. In your case any css imported in application.js will be available in application.css. If you had a pack called test, the css will be extracted to test.css.\n\nMove out application.scss from packs to /css (or stylesheets, whatever you want)\n\nUpdate application.js like this:\n\n\nimport \"./../css/application\";\nimport Rails from \"@rails/ujs\";\n....\n\n\nMake sure you start webpacker dev server with bin/webpacker-dev-server.\n\nHere's how it looks like:\n\n",
"With Shakapacker:\napp/javascript:\n├── packs:\n│ # only webpack entry files here\n│ └── online_giving.js\n│ └── online_giving.scss\n└── src:\n│ └── my_component.js\n└── stylesheets:\n│ └── my_styles.css\n└── images:\n └── logo.svg\n\nAnd in the app/views/layouts/mylayout.html.haml :\n= javascript_pack_tag 'online_giving'\n= stylesheet_pack_tag 'online_giving' \n\nyou don't need to import the online_giving.scss in the online_giving.js file, Shakapacker will find it.\n"
] |
[
2,
0
] |
[] |
[] |
[
"ruby",
"ruby_on_rails",
"webpacker"
] |
stackoverflow_0070160669_ruby_ruby_on_rails_webpacker.txt
|
Q:
Criteria for choosing layouts in xml
I'm new to XML layouts. I am confused about which layout to choose: LinearLayout, RelativeLayout, ConstraintLayout, or others. In what scenarios should I use each?
A:
Linear layout - when you need to group a few elements horizontally or vertically. Typically used as a nested layout inside a relative layout.
Relative layout - when you need to lay out multiple nested layout groups; you can align items center, left, top, etc. Typically used as the root view layout in the past.
Constraint layout - this layout allows a flat hierarchy of elements; with constraints you can combine the power of linear and relative layouts, and your layout will be much more readable.
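As an illustration of the flat-hierarchy point, a minimal ConstraintLayout sketch (IDs and views are made up for the example):
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:id="@+id/title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

    <!-- Sibling constrained to the title: no nesting needed. -->
    <Button
        android:id="@+id/action"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintTop_toBottomOf="@id/title"
        app:layout_constraintStart_toStartOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>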
|
Criteria for choosing layouts in xml
|
I'm new to XML layouts. I am confused about which layout to choose: LinearLayout, RelativeLayout, ConstraintLayout, or others. In what scenarios should I use each?
|
[
"\nLinear layout - when you need to group few elements horizontally or vertically. Typically used as a nested layout of the relative layout.\nRelative layout - when you need to layout multiple nested layout groups, you can align items center, left, top, etc. Typically used as root view layout in the past.\nConstraint layout - this layout allow to group flat hierarchy of elements, with constraints you can combine the power of linear and relative layout and your layout will be much readable.\n\n"
] |
[
1
] |
[] |
[] |
[
"android_layout",
"xml"
] |
stackoverflow_0074660655_android_layout_xml.txt
|
Q:
Serving static HTML etc and Django from root '/' using nginx
I have nginx set up to successfully serve a Django website. I'd like to have it also serve a directory of HTML files, images, etc. If a URL doesn't match a file in there, the request should go to Django.
Currently I have this (with irrelevant settings removed, e.g. SSL, logging, etc):
upstream myproject_server {
server unix:/webapps/myproject/run/gunicorn.sock fail_timeout=0;
}
server {
server_name example.com;
rewrite ^/favicon.ico$ /static/myproject/favicons/favicon.ico last;
rewrite ^/robots.txt$ /static/myproject/robots.txt last;
location /static/ {
# Django's static files
alias /webapps/myproject/code/myproject/static_collected/;
}
location / {
if (!-f $request_filename) {
proxy_pass http://myproject_server;
break;
}
}
}
If I have a directory of miscellaneous files like this at /webapps/myproject/code/myproject/static_html/:
static_html/
test.html
directory/
foo.html
bar.png
another/
hello.pdf
etc...
What do I need to add to my nginx.conf so that those files are efficiently served at /test.html, /directory/foo.html, etc?
A:
You need a root statement to find files in the static_html directory. Your -f only checks file existence and not directory existence, so URIs pointing to a directory are sent to proxy_pass. You could change this to -e to check if files or directories exist.
See the if documentation.
For example:
location / {
root /webapps/myproject/code/myproject/static_html;
if (!-e $request_filename) {
proxy_pass http://myproject_server;
}
}
However, the preferred solution would probably be to avoid the if block altogether. See the try_files documentation.
For example:
location / {
root /webapps/myproject/code/myproject/static_html;
try_files $uri $uri/ @proxy;
}
location @proxy {
proxy_pass http://myproject_server;
}
|
Serving static HTML etc and Django from root '/' using nginx
|
I have nginx set up to successfully serve a Django website. I'd like to have it also serve a directory of HTML files, images, etc. If a URL doesn't match a file in there, the request should go to Django.
Currently I have this (with irrelevant settings removed, e.g. SSL, logging, etc):
upstream myproject_server {
server unix:/webapps/myproject/run/gunicorn.sock fail_timeout=0;
}
server {
server_name example.com;
rewrite ^/favicon.ico$ /static/myproject/favicons/favicon.ico last;
rewrite ^/robots.txt$ /static/myproject/robots.txt last;
location /static/ {
# Django's static files
alias /webapps/myproject/code/myproject/static_collected/;
}
location / {
if (!-f $request_filename) {
proxy_pass http://myproject_server;
break;
}
}
}
If I have a directory of miscellaneous files like this at /webapps/myproject/code/myproject/static_html/:
static_html/
test.html
directory/
foo.html
bar.png
another/
hello.pdf
etc...
What do I need to add to my nginx.conf so that those files are efficiently served at /test.html, /directory/foo.html, etc?
|
[
"You need a root statement to find files in the static_html directory. Your -f only checks file existence and not directory existence, so URIs pointing to a directory are sent to proxy_pass. You could change this to -e to check if files or directories exist.\nSee if documentation.\nFor example:\nlocation / {\n root /webapps/myproject/code/myproject/static_html;\n if (!-e $request_filename) {\n proxy_pass http://myproject_server;\n }\n}\n\n\nHowever, the preferred solution would probably be to avoid the if block altogether. See try_files documentation.\nFor example:\nlocation / {\n root /webapps/myproject/code/myproject/static_html;\n try_files $uri $uri/ @proxy;\n}\nlocation @proxy {\n proxy_pass http://myproject_server;\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"nginx"
] |
stackoverflow_0074605537_django_nginx.txt
|
Q:
How to edit a value (list of entries) from an api response to use in a request body in Gatling/Scala
I have an issue that I'm hoping someone can help me with. I'm pretty new to coding and Gatling, so I'm not sure how to proceed.
I'm using Gatling (with Scala) to create a performance test scenario that contains two API-calls.
GetInformation
SendInformation
I'm storing some of the values from the GetInformation response so I can use them in the body for the SendInformation request. The problem is that some information from the GetInformation response needs to be edited/removed before it is included in the body for SendInformation.
Extract of the GetInformation response:
{
"parameter": [
{
"name": "ResponseFromGetInfo",
"type": "document",
"total": 3,
"entry": [
{
"fullUrl": "urn:uuid:4ea859d0-daa4-4d2a-8fbc-1571cd7dfdb0",
"resource": {
"resourceType": "Composition"
}
},
{
"fullUrl": "urn:uuid:1b10ed79-333b-4838-93a5-a40d22508f0a",
"resource": {
"resourceType": "Practitioner"
}
},
{
"fullUrl": "urn:uuid:650b8e7a-2cfc-4b0b-a23b-a85d1bf782de",
"resource": {
"resourceType": "Dispense"
}
}
]
}
]
}
What I want is to store the list in "entry" and remove the entries with resourceType = "Dispense" so I can use it in the body for SendInformation.
It would have been OK if the entry list always had the same number of entries and the same order, but that is not the case. The number of entries can be several hundred and the order of entries varies. The number of entries is equal to the "total" value that is included in the GetInformation response.
I've thought about a few ways to solve it, but now I'm stuck. Some alternatives:
Extract the entire "entry" list using .check(jsonPath("$.parameter[0].entry").saveAs("entryList")) and then iterate through the list to remove the entries with resourceType = "Dispense".
But I don't know how to iterate over a value of type io.gatling.core.session.SessionAttribute, or if this is possible. It would have been nice if I could iterate over the entry list and check if parameter[0].entry[0].resourceType = "Dispense", and remove the entry if the statement is true.
I'm also considering if I can use StringBuilder in some way. Maybe if I check one entry at a time using .check(parameter[0].entry[X].resourceType != dispense, and if true then append it to a StringBuilder.
Does someone know how I can do this? Either by one of the alternatives that I listed, or in a different way? All help is appreciated :)
So maybe in the end it will look something like this:
val scn = scenario("getAndSendInformation")
  .exec(http("getInformation")
    .post("/Information/$getInformation")
    .body(ElFileBody("bodies/getInformtion.json"))
    // I can save total, so I know the total number of entries in the entry list
    .check(jsonPath("$.parameter[0].total").saveAs("total"))
    // Store entire entry list
    .check(jsonPath("$.parameter[0].entry").saveAs("entryList"))
    // Or store all entries separately and check afterwards which have resourceType = "dispense"? Not sure how to do this..
    .check(jsonPath("$.parameter[0].entry[0]").saveAs("entry_0"))
    .check(jsonPath("$.parameter[0].entry[1]").saveAs("entry_1"))
    //...
    .check(jsonPath("$.parameter[0].entry[X]").saveAs("entry_X"))
  )
  // Alternative 1
  .repeat("${total}", "counter") {
    exec(session => {
      // Do some magic here
      // Check if session("parameter[0]_entry[counter].resourceType") = "Dispense" {
      //   if yes, remove entry from entry list}
      session
    })
  }
  // Alternative 2
  val entryString = new StringBuilder("")
  .repeat("${total}", "counter") {
    exec(session => {
      // Do some magic here
      // Check if session("parameter[0]_entry[counter].resourceType") != "Dispense" {
      //   if yes, add to StringBuilder}
      //   entryString.append(session("parameter[0]_entry[counter]").as[String] + ", ")
      session
    })
  }
  .exec(http("sendInformation")
    .post("/Information/$sendInformation")
    .body(ElFileBody("bodies/sendInformationRequest.json")))
A:
I'm pretty new to coding
I'm using Gatling (with Scala)
Gatling with Java would probably be an easier solution for you.
check(jsonPath("$.parameter[0].entry").saveAs("entryList"))
This is going to capture a String, not a list. In order to be able to iterate, you have to use ofXXX/ofType[], see https://gatling.io/docs/gatling/reference/current/core/check/#jsonpath
Then, in order to generate the next request's body, you could consider a templating engine such as PebbleStringBody (https://gatling.io/docs/gatling/reference/current/http/request/#pebblestringbody) or indeed use StringBody with a function that uses a StringBuilder.
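A rough Scala sketch of that approach (illustrative only: the ofType extraction and the filteredEntries key are assumptions, and error handling is omitted):
.exec(http("getInformation")
  .post("/Information/$getInformation")
  .body(ElFileBody("bodies/getInformtion.json"))
  // Capture each entry as an object rather than the whole array as one String.
  .check(jsonPath("$.parameter[0].entry[*]").ofType[Map[String, Any]].findAll.saveAs("entries")))
.exec(session => {
  val entries = session("entries").as[Seq[Map[String, Any]]]
  // Drop entries whose resource.resourceType is "Dispense".
  val filtered = entries.filterNot { entry =>
    entry.get("resource")
      .collect { case r: Map[_, _] => r.asInstanceOf[Map[String, Any]] }
      .flatMap(_.get("resourceType"))
      .contains("Dispense")
  }
  session.set("filteredEntries", filtered)
})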
|
How to edit a value (list of entries) from an api response to use in a request body in Gatling/Scala
|
I have an issue that I'm hoping someone can help me with. I'm pretty new to coding and Gatling, so I'm not sure how to proceed.
I'm using Gatling (with Scala) to create a performance test scenario that contains two API-calls.
GetInformation
SendInformation
I'm storing some of the values from the GetInformation response so I can use it in the body for the SendInformation request. The problem is that some information from the GetInformation response needs to be edited/removed before it is included in the body for SendInformation.
Extract of the GetInformation response:
{
"parameter": [
{
"name": "ResponseFromGetInfo",
"type": "document",
"total": 3,
"entry": [
{
"fullUrl": "urn:uuid:4ea859d0-daa4-4d2a-8fbc-1571cd7dfdb0",
"resource": {
"resourceType": "Composition"
}
},
{
"fullUrl": "urn:uuid:1b10ed79-333b-4838-93a5-a40d22508f0a",
"resource": {
"resourceType": "Practitioner"
}
},
{
"fullUrl": "urn:uuid:650b8e7a-2cfc-4b0b-a23b-a85d1bf782de",
"resource": {
"resourceType": "Dispense"
}
}
]
}
]
}
What I want is to store the list in "entry" and remove the entries with resourceType = "Dispense" so I can use it in the body for SendInformation.
It would have been ok if the entry list always had the same number of entries and order, but that is not the case. The number of entries can be several hundred and the order of entries varies. The number of entries are equal to the "total" value that is included in the GetInformation response.
I've thought about a few ways to solve it, but now I'm stuck. Some alternatives:
Extract the entire "entry" list using .check(jsonPath("$.parameter[0].entry").saveAs("entryList")) and then iterate through the list to remove the entries with resourceType = "Dispense".
But I don't know how to iterate over a value of type io.gatling.core.session.SessionAttribute, or if this is possible. It would have been nice if I could iterate over the entry list and check if parameter[0].entry[0].resourceType = "Dispense", and remove the entry if the statement is true.
I'm also considering if I can use StringBuilder in some way. Maybe if I check one entry at a time using .check(parameter[0].entry[X].resourceType != dispense, and if true then append it to a StringBuilder.
Does someone know how I can do this? Either by one of the alternatives that I listed, or in a different way? All help is appreciated :)
So maybe in the end it will look something like this:
val scn = scenario("getAndSendInformation")
  .exec(http("getInformation")
    .post("/Information/$getInformation")
    .body(ElFileBody("bodies/getInformtion.json"))
    // I can save total, so I know the total number of entries in the entry list
    .check(jsonPath("$.parameter[0].total").saveAs("total"))
    // Store entire entry list
    .check(jsonPath("$.parameter[0].entry").saveAs("entryList"))
    // Or store all entries separately and check afterwards which have resourceType = "dispense"? Not sure how to do this..
    .check(jsonPath("$.parameter[0].entry[0]").saveAs("entry_0"))
    .check(jsonPath("$.parameter[0].entry[1]").saveAs("entry_1"))
    //...
    .check(jsonPath("$.parameter[0].entry[X]").saveAs("entry_X"))
  )
  // Alternative 1
  .repeat("${total}", "counter") {
    exec(session => {
      // Do some magic here
      // Check if session("parameter[0]_entry[counter].resourceType") = "Dispense" {
      //   if yes, remove entry from entry list}
      session
    })
  }
  // Alternative 2
  val entryString = new StringBuilder("")
  .repeat("${total}", "counter") {
    exec(session => {
      // Do some magic here
      // Check if session("parameter[0]_entry[counter].resourceType") != "Dispense" {
      //   if yes, add to StringBuilder}
      //   entryString.append(session("parameter[0]_entry[counter]").as[String] + ", ")
      session
    })
  }
  .exec(http("sendInformation")
    .post("/Information/$sendInformation")
    .body(ElFileBody("bodies/sendInformationRequest.json")))
|
[
"\nI'm pretty new to coding\nI'm using Gatling (with Scala)\n\nGatling with Java would probably be an easier solution for you.\n\ncheck(jsonPath(\"$.parameter[0].entry\").saveAs(\"entryList\"))\n\nThis is going to capture a String, not a list. In order to be able to iterate, you have to use ofXXX/ofType[], see https://gatling.io/docs/gatling/reference/current/core/check/#jsonpath\nThen, in order to generate the next request's body, you could consider a templating engine such as PebbleStringBody (https://gatling.io/docs/gatling/reference/current/http/request/#pebblestringbody) or indeed use StringBody with a function that uses a StringBuilder.\n"
] |
[
0
] |
[] |
[] |
[
"gatling",
"scala",
"scala_gatling"
] |
stackoverflow_0074657153_gatling_scala_scala_gatling.txt
|
Q:
Binding to a property of an object in XAML not working
I am new to WPF and data binding. So I am trying some things but now encountered a problem that defies everything I find in reference material.
I have a test program with a string TestString1 that is bound to the Text property of a TextBox tbTest1, that works.
And I have an object TestString2 of class ClassTestString2 that contains one property, Str. I want to bind Str to the Text property of a TextBox tbTest2, so I use Text="{Binding Path=TestString2.Str}". According to all documentation you can drill down to a property of an object with the normal C# syntax. But it simply doesn't bind: nothing shows when the program starts, and changes made in tbTest2 are not reflected in TestString2.Str.
When I use this.DataContext = TestString2; and Text="{Binding Path=Str}", it works, but then TestString1 is not bound anymore.
I have the following simple piece of XAML:
<Window x:Class="WpfBindingStringOnly.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfBindingStringOnly"
        mc:Ignorable="d"
        Title="MainWindow" Height="450" Width="800">
    <Grid>
        <TextBox
            x:Name="tbTest1"
            Text="{Binding Path=TestString1}"
            HorizontalAlignment="Left" Height="41"
            Margin="124,47,0,0" TextWrapping="Wrap"
            VerticalAlignment="Top" Width="250"/>
        <TextBox
            x:Name="tbTest2"
            Text="{Binding Path=TestString2.Str}"
            HorizontalAlignment="Left" Height="45"
            Margin="124,126,0,0" TextWrapping="Wrap"
            VerticalAlignment="Top" Width="250"/>
    </Grid>
</Window>
And C# code behind:
using System;
using System.Windows;
using static WpfBindingStringOnly.MainWindow;
namespace WpfBindingStringOnly
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public string TestString1 { get; set; }
        public class ClassTestString2
        {
            public string Str { get; set; }
            public ClassTestString2(string s)
            {
                Str = s;
            }
        }
        public ClassTestString2 TestString2;
        public MainWindow()
        {
            TestString1 = "Hello1";
            TestString2 = new("Hello2");
            InitializeComponent();
            this.DataContext = this;
        }
    }
}
A:
Bindings work on properties, not fields.
Change your TestString2 member from
public ClassTestString2 TestString2; // This is a field.
to
public ClassTestString2 TestString2 { get; set; } // This is a property.
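As a follow-up sketch going beyond the original question (it only matters if TestString2 is reassigned after the window is constructed): the class would also need to raise change notifications for the TextBox to refresh, e.g. via INotifyPropertyChanged:
using System.ComponentModel;

public partial class MainWindow : Window, INotifyPropertyChanged
{
    private ClassTestString2 _testString2;

    public ClassTestString2 TestString2
    {
        get => _testString2;
        set
        {
            _testString2 = value;
            // Tell the binding that TestString2 (and thus TestString2.Str) changed.
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(TestString2)));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}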
|
Binding to a property of an object in XAML not working
|
I am new to WPF and data binding. So I am trying some things but now encountered a problem that defies everything I find in reference material.
I have a test program with a string TestString1 that is bound to the Text property of a TextBox tbTest1, that works.
And I have an object TestString2 of class ClassTestString2 that contains one property, Str. I want to bind Str to the Text property of a TextBox tbTest2, so I use Text="{Binding Path=TestString2.Str}". According to all documentation you can drill down to a property of an object with the normal C# syntax. But it simply doesn't bind: nothing shows when the program starts, and changes made in tbTest2 are not reflected in TestString2.Str.
When I use this.DataContext = TestString2; and Text="{Binding Path=Str}", it works, but then TestString1 is not bound anymore.
I have the following simple piece of XAML:
<Window x:Class="WpfBindingStringOnly.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfBindingStringOnly"
        mc:Ignorable="d"
        Title="MainWindow" Height="450" Width="800">
    <Grid>
        <TextBox
            x:Name="tbTest1"
            Text="{Binding Path=TestString1}"
            HorizontalAlignment="Left" Height="41"
            Margin="124,47,0,0" TextWrapping="Wrap"
            VerticalAlignment="Top" Width="250"/>
        <TextBox
            x:Name="tbTest2"
            Text="{Binding Path=TestString2.Str}"
            HorizontalAlignment="Left" Height="45"
            Margin="124,126,0,0" TextWrapping="Wrap"
            VerticalAlignment="Top" Width="250"/>
    </Grid>
</Window>
And C# code behind:
using System;
using System.Windows;
using static WpfBindingStringOnly.MainWindow;
namespace WpfBindingStringOnly
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public string TestString1 { get; set; }
        public class ClassTestString2
        {
            public string Str { get; set; }
            public ClassTestString2(string s)
            {
                Str = s;
            }
        }
        public ClassTestString2 TestString2;
        public MainWindow()
        {
            TestString1 = "Hello1";
            TestString2 = new("Hello2");
            InitializeComponent();
            this.DataContext = this;
        }
    }
}
|
[
"Bindings work on properties, not fields.\nChange your TestString2 member from\npublic ClassTestString2 TestString2; // This is a field.\n\nto\npublic ClassTestString2 TestString2 { get; set; } // This is a property.\n\n"
] |
[
1
] |
[] |
[] |
[
"c#",
"data_binding",
"xaml"
] |
stackoverflow_0074660118_c#_data_binding_xaml.txt
|
Q:
Why does large Jupyter notebook start consuming 100% CPU after every command invocation?
I have a big Jupyter notebook (consuming 150+ Gigabytes of RAM). When I run a command, including something as simple as 1+1, I get the answer, but right after that, the notebook starts taking up 100% of CPU. The memory usage starts increasing steadily, reaching over 230 GB, before dropping back down to 150 GB ish. If I interrupt the kernel right after I get the answer, I can run another command. However, right after giving the answer to that command, the process again starts consuming 100% CPU. It seems like some kind of garbage collector that runs after every invocation? Is this happening due to the large memory footprint of my notebook? Is there any way I can avoid this behavior?
Version info:
jupyter core : 4.6.1
jupyter-notebook : 6.0.2
qtconsole : 4.7.7
ipython : 7.9.0
ipykernel : 5.1.3
jupyter client : 5.3.4
jupyter lab : 1.1.4
nbconvert : 5.6.1
ipywidgets : 7.5.1
nbformat : 4.4.0
traitlets : 4.3.3
python : 3.6.9
A:
I just figured out (by trial and error) that the reason for the slowdown was a Notebook extension called "Variable Inspector". Disabling it makes this problem go away and my notebooks are fast and responsive once again :)
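If the extension came from jupyter_contrib_nbextensions, it can also be disabled from the command line; a sketch, assuming the extension identifier is varInspector/main (it may differ on your install):
jupyter nbextension disable varInspector/main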
|
Why does large Jupyter notebook start consuming 100% CPU after every command invocation?
|
I have a big Jupyter notebook (consuming 150+ Gigabytes of RAM). When I run a command, including something as simple as 1+1, I get the answer, but right after that, the notebook starts taking up 100% of CPU. The memory usage starts increasing steadily, reaching over 230 GB, before dropping back down to 150 GB ish. If I interrupt the kernel right after I get the answer, I can run another command. However, right after giving the answer to that command, the process again starts consuming 100% CPU. It seems like some kind of garbage collector that runs after every invocation? Is this happening due to the large memory footprint of my notebook? Is there any way I can avoid this behavior?
Version info:
jupyter core : 4.6.1
jupyter-notebook : 6.0.2
qtconsole : 4.7.7
ipython : 7.9.0
ipykernel : 5.1.3
jupyter client : 5.3.4
jupyter lab : 1.1.4
nbconvert : 5.6.1
ipywidgets : 7.5.1
nbformat : 4.4.0
traitlets : 4.3.3
python : 3.6.9
|
[
"I just figured out (by trial and error) that the reason for the slowdown was a Notebook extension called \"Variable Inspector\". Disabling it makes this problem go away and my notebooks are fast and responsive once again :)\n"
] |
[
0
] |
[] |
[] |
[
"jupyter_notebook",
"python_3.x"
] |
stackoverflow_0074652141_jupyter_notebook_python_3.x.txt
|
Q:
dart flutter - Why is this await unnecessary?
My linter is telling me this await is unnecessary:
Unnecessary await keyword in return.
I thought that if you're calling a function inside an async function and you want to get/return the value rather than the Future, you had to use await to designate that you want the value, not the Future.
Am I missing something here?
A:
As the documentation for that recommendation points out, you can take off the async and just return the Future, rather than using await:
Future<Directory> get _localDir => getApplicationDocumentsDirectory();
This is called "eliding async/await". Stephen Cleary wrote an article about it (written for C#, but largely applicable for any language that uses async and await), which you may find helpful: Eliding Async and Await
In short, it's more efficient to do this in situations when you can:
By not including these keywords, the compiler can skip generating the async state machine. This means that there are fewer compiler-generated types in your assembly, less pressure on the garbage collector, and fewer CPU instructions to execute.
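For illustration, a minimal sketch (assuming the path_provider package, whose getApplicationDocumentsDirectory() returns a Future<Directory>) contrasting the two forms:
import 'dart:io';
import 'package:path_provider/path_provider.dart';

// Flagged form: `async`/`await` add an unnecessary state machine here.
Future<Directory> get _localDirAwaited async => await getApplicationDocumentsDirectory();

// Elided form: return the Future directly; callers see the same type.
Future<Directory> get _localDir => getApplicationDocumentsDirectory();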
|
dart flutter - Why is this await unnecessary?
|
My linter is telling me this await is unnecessary:
Unnecessary await keyword in return.
I thought that if you're calling a function inside an async function and you want to get/return the value rather than the Future, you had to use await to designate that you want the value, not the Future.
Am I missing something here?
|
[
"As the documentation for that recommendation points out, you can take off the async and just return the Future, rather than using await:\nFuture<Directory> get _localDir => getApplicationDocumentsDirectory();\n\nThis is called \"eliding async/await\". Stephen Cleary wrote an article about it (written for C#, but largely applicable for any language that uses async and await), which you may find helpful: Eliding Async and Await\nIn short, it's more efficient to do this in situations when you can:\n\nBy not including these keywords, the compiler can skip generating the async state machine. This means that there are fewer compiler-generated types in your assembly, less pressure on the garbage collector, and fewer CPU instructions to execute.\n"
] |
[
1
] |
[] |
[] |
[
"async_await",
"dart"
] |
stackoverflow_0074660714_async_await_dart.txt
|
Q:
RuntimeError: input.size(-1) must be equal to input_size. Expected 101, got 106
I am trying to use pytorch forecasting to make predictions using DeepAR.
I am getting the following error in the 2nd iteration of my training loop. I do not make any changes to the datasets after the training loop starts so I do not understand this error.
RuntimeError Traceback (most recent call last)
<ipython-input-41-d2a72cad41f9> in <module>
120 deepar,
121 train_dataloaders=train_dataloader,
--> 122 val_dataloaders=val_dataloader,
123 )
124
23 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
205 raise RuntimeError(
206 'input.size(-1) must be equal to input_size. Expected {}, got {}'.format(
--> 207 self.input_size, input.size(-1)))
208
209 def get_expected_hidden_size(self, input: Tensor, batch_sizes: Optional[Tensor]) -> Tuple[int, int, int]:
RuntimeError: input.size(-1) must be equal to input_size. Expected 101, got 106
My full code is as below
#from ctypes import FormatError
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import os,sys
# sys.path.append(os.path.abspath(os.path.join('C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\\2_freq_nbinom_LSTM')))
# sys.path.append(os.path.abspath(os.path.join('C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\\2_freq_nbinom_LSTM\\1_cluster_demand_prediction\data\weather_data')))
# sys.path.append(os.path.abspath(os.path.join('C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\2_freq_nbinom_LSTM\1_cluster_demand_prediction\data\demand_data')))
import torch
torch.use_deterministic_algorithms(True)
from pytorch_forecasting.data.encoders import TorchNormalizer
from pytorch_forecasting.metrics import SMAPE, RMSE
from torchmetrics import R2Score, SymmetricMeanAbsolutePercentageError, MeanSquaredError
import matplotlib.pyplot as plt
import pandas as pd
from pytorch_forecasting.data import TimeSeriesDataSet
from pytorch_forecasting.data import NaNLabelEncoder
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
import pytorch_lightning as pl
import torch
from pytorch_forecasting.data.encoders import TorchNormalizer
import os,sys
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
from statsmodels.tsa.stattools import acf,pacf
from scipy.signal import find_peaks
import operator
import statsmodels.api as sm
from itertools import combinations
import pickle
from pytorch_forecasting import Baseline
import random
from pytorch_forecasting import DeepAR,NegativeBinomialDistributionLoss
from itertools import product
from sklearn.metrics import mean_absolute_error, mean_squared_error
"""
Set Random seed
"""
random.seed(0)
torch.manual_seed(0)
np.random.seed(0)
## additional seeding to ensure reproduciblility.
pl.seed_everything(0)
import os
os.chdir('/content/drive/MyDrive/My_model/pytorch-forecasting-master/1_cluster_demand_prediction')
encoder_len = 24
pred_len = 1 # 1, 6, 12
#cov_lag_len= pred_len # do this when creating the dataset. import that code here.
os.chdir('/content/drive/MyDrive/My_model/pytorch-forecasting-master/1_cluster_demand_prediction')
tampines_all_clstr_train_dem_data = pd.read_csv('tampines_all_clstr_train_dem_data.csv')
tampines_all_clstr_val_dem_data = pd.read_csv('tampines_all_clstr_val_dem_data.csv')
tampines_all_clstr_test_dem_data = pd.read_csv('tampines_all_clstr_test_dem_data.csv')
tampines_all_clstr_train_dem_data = tampines_all_clstr_train_dem_data.drop(['Unnamed: 0'],axis=1)
tampines_all_clstr_val_dem_data = tampines_all_clstr_val_dem_data.drop(['Unnamed: 0'],axis=1)
tampines_all_clstr_test_dem_data = tampines_all_clstr_test_dem_data.drop(['Unnamed: 0'],axis=1)
train_data = pd.DataFrame()
val_data = pd.DataFrame()
test_data = pd.DataFrame()
cov_lag_len = 0
for c in tampines_all_clstr_train_dem_data.columns:
    if c == 'target':
        target_train = tampines_all_clstr_train_dem_data[c].iloc[:]
        train_data[c] = target_train.reset_index(drop=True)
        target_val = tampines_all_clstr_val_dem_data[c].iloc[:]
        val_data[c] = target_val.reset_index(drop=True)
        target_test = tampines_all_clstr_test_dem_data[c].iloc[:]
        test_data[c] = target_test.reset_index(drop=True)
    else:
        other_col_train = tampines_all_clstr_train_dem_data[c].iloc[:]
        train_data[c] = other_col_train.reset_index(drop=True)
        other_col_val = tampines_all_clstr_val_dem_data[c].iloc[:]
        val_data[c] = other_col_val.reset_index(drop=True)
        other_col_test = tampines_all_clstr_test_dem_data[c].iloc[:]
        test_data[c] = other_col_test.reset_index(drop=True)
"""
Import pre-processed Data
"""
#################### add date information ts ####################
#2021 oct 17 20:00:00
day=17
hour=(20+cov_lag_len)
if (20+cov_lag_len)>23:
    day = 18
    hour = hour%24
train_data["date"] = pd.Timestamp(year=2021, month=10, day=day, hour=hour ) + pd.to_timedelta(train_data.time_idx, "H")
train_data['_hour_of_day'] = train_data["date"].dt.hour.astype(str)
train_data['_day_of_week'] = train_data["date"].dt.dayofweek.astype(str)
train_data['_day_of_month'] = train_data["date"].dt.day.astype(str)
train_data['_day_of_year'] = train_data["date"].dt.dayofyear.astype(str)
train_data['_week_of_year'] = train_data["date"].dt.weekofyear.astype(str)
train_data['_month_of_year'] = train_data["date"].dt.month.astype(str)
train_data['_year'] = train_data["date"].dt.year.astype(str)
#################### add date information ts ####################
#################### add date information ts ####################
# val starts at 3/12/2021 09:00
day=3
hour=(9+cov_lag_len)
if (9+cov_lag_len)>23:
    day = 4
    hour = hour%24
val_data["date"] = pd.Timestamp(year=2021, month=12, day=day, hour=hour ) + pd.to_timedelta(val_data.time_idx, "H")
val_data['_hour_of_day'] = val_data["date"].dt.hour.astype(str)
val_data['_day_of_week'] = val_data["date"].dt.dayofweek.astype(str)
val_data['_day_of_month'] = val_data["date"].dt.day.astype(str)
val_data['_day_of_year'] = val_data["date"].dt.dayofyear.astype(str)
val_data['_week_of_year'] = val_data["date"].dt.weekofyear.astype(str)
val_data['_month_of_year'] = val_data["date"].dt.month.astype(str)
val_data['_year'] = val_data["date"].dt.year.astype(str)
#################### add date information ts ####################
#################### add date information ts ####################
# test starts at 16/12/2021 16:00
day=16
hour=(16+cov_lag_len)
if (16+cov_lag_len)>23:
    day = 17
    hour = hour%24
test_data["date"] = pd.Timestamp(year=2021, month=12, day=day, hour=hour ) + pd.to_timedelta(test_data.time_idx, "H")
test_data['_hour_of_day'] = test_data["date"].dt.hour.astype(str)
test_data['_day_of_week'] = test_data["date"].dt.dayofweek.astype(str)
test_data['_day_of_month'] = test_data["date"].dt.day.astype(str)
test_data['_day_of_year'] = test_data["date"].dt.dayofyear.astype(str)
test_data['_week_of_year'] = test_data["date"].dt.weekofyear.astype(str)
test_data['_month_of_year'] = test_data["date"].dt.month.astype(str)
test_data['_year'] = test_data["date"].dt.year.astype(str)
#################### add date information ts ####################
Target = 'target'
"""
set inputs here
(hyperparameters grid search)
"""
######### Network Architecture ###################
###### Create hyperparameters grid ######
hparams_grid = {"LSTM_neuron_size":[168,336,672],
"num_layers":[3],
"batch_size":[8,4],
"learning_rate":[0.0001],
"max_encoder_length":[encoder_len],
"max_prediction_length":[pred_len],
"dropout":[0.2],
#"cov_pair":cov_pairs_list,# [cov_pairs_list[7]],
"Num_epochs":[1]}
#"Num_epochs":[16,18,20,22,24,26,28]}
###### Create hyperparameters grid ######
p = 10 # patience no. of epochs
Loss=NegativeBinomialDistributionLoss()
######### Network Architecture ###################
TRAINING ROUTINE IS BELOW.
######### Training Routine ###################
fdv_steps = 10 # fast_dev_run
######### Training Routine ###################
############## Inputs for 2) Persistance model ( seasonal naive forecast ) #######################
season_len = 168 # length of season
num_past_seas = 2 # number of past seasons to use in averaging
#seas_pred_strt_idx = 2035 # seasonal naive forecast start index, in hours use the df dataframe
############## Inputs for 2) Persistance model ( seasonal naive forecast ) #######################
param_comb_cnt=0
for neu,lay,bat,lr,enc_len,pred_len,drop,num_ep in product(*[x for x in hparams_grid.values()]):
    #print(param_comb_cnt,neu,lay,bat,lr,enc_len,pred_len,drop,num_ep,df_cov_col1,df_cov_col2)
    param_comb_cnt+=1
param_comb_cnt
"""
Full Training Routine
with hyperparmeter grid search
Load data into TimeSeriesDataSet object
for fast development run
uncomment fast_dev_run = fdv_steps
"""
#early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=p, verbose=False, mode="min")
lr_logger = LearningRateMonitor()
RMSE_list = [] # FIND minimum RMSE case
hyperparams_list = [] # FIND minimum RMSE case
# best_val_comb_idx=[17,21,51,52,53,54,61,62,63,82,83,84,109,110,111,143,144,145,195,218,219,220,232,233,234,236,237,238,280,338,339,340,344,345,346,386]
# best_val_train_epochs = [50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50]
# best_val_comb_idx=[234]
# best_val_train_epochs = [50]
num_cols_list = []
param_comb_cnt=-1
for neu,lay,bat,lr,enc_len,pred_len,drop,num_ep in product(*[x for x in hparams_grid.values()]):
    param_comb_cnt+=1
    if param_comb_cnt <0 or param_comb_cnt > 50:
        continue
    # df_cov_col1 = cov_pair[0]
    # df_cov_col2 = cov_pair[1]
    ######### Load DATA #############
    cat_dict = {"_hour_of_day": NaNLabelEncoder(add_nan=True).fit(train_data._hour_of_day), \
        "_day_of_week": NaNLabelEncoder(add_nan=True).fit(train_data._day_of_week), "_day_of_month" : NaNLabelEncoder(add_nan=True).fit(train_data._day_of_month), "_day_of_year" : NaNLabelEncoder(add_nan=True).fit(train_data._day_of_year), \
        "_week_of_year": NaNLabelEncoder(add_nan=True).fit(train_data._week_of_year), "_month_of_year": NaNLabelEncoder(add_nan=True).fit(train_data._month_of_year) ,"_year": NaNLabelEncoder(add_nan=True).fit(train_data._year) }
    cat_list = ["_hour_of_day","_day_of_week","_day_of_month","_day_of_year","_week_of_year","_month_of_year","_year"]
    num_cols_list.append('dem_lag_168')
    num_cols_list.append('dem_lag_336')
    num_cols_list.append('inflow')
    num_cols_list.append('inf_lag_168')
    num_cols_list.append('inf_lag_336')
    #cat_list.append('wea_desc_clstr_52')
    train_dataset = TimeSeriesDataSet(
        train_data,
        time_idx="time_idx",
        target=Target,
        categorical_encoders=cat_dict,
        group_ids=["group"],
        min_encoder_length=enc_len,
        max_encoder_length=enc_len,
        min_prediction_length=pred_len,
        max_prediction_length=pred_len,
        time_varying_unknown_reals=[Target],
        time_varying_known_reals=num_cols_list,
        time_varying_known_categoricals=cat_list,
        add_relative_time_idx=False,
        randomize_length=False,
        scalers={},
        target_normalizer=TorchNormalizer(method="identity",center=False,transformation=None )
    )
    val_dataset = TimeSeriesDataSet.from_dataset(train_dataset,val_data, stop_randomization=True, predict=False)
    test_dataset = TimeSeriesDataSet.from_dataset(train_dataset,test_data, stop_randomization=True)
    train_dataloader = train_dataset.to_dataloader(train=True, batch_size=bat)
    val_dataloader = val_dataset.to_dataloader(train=False, batch_size=bat)
    test_dataloader = test_dataset.to_dataloader(train=False, batch_size=bat)
    ######### Load DATA #############
    """
    Machine Learning predictions START
    1) DeepAR
    """
    trainer = pl.Trainer(
        max_epochs=num_ep,
        gpus=-1, #-1
        auto_lr_find=False,
        gradient_clip_val=0.1,
        limit_train_batches=1.0,
        limit_val_batches=1.0,
        #fast_dev_run=fdv_steps,
        logger=True,
        #log_every_n_steps=10,
        # profiler=True,
        callbacks=[lr_logger]#, early_stop_callback],
        #enable_checkpointing=True,
        #default_root_dir="C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\2_freq_nbinom_LSTM\1_cluster_demand_prediction\logs"
    )
    #print(f"training routing:\n \n {trainer}")
    deepar = DeepAR.from_dataset(
        train_dataset,
        learning_rate=lr,
        hidden_size=neu,
        rnn_layers=lay,
        dropout=drop,
        loss=Loss,
        log_interval=20,
        log_val_interval=6,
        log_gradient_flow=False,
        # reduce_on_plateau_patience=3,
    )
    #print(f"Number of parameters in network: {deepar.size()/1e3:.1f}k")
    # print(f"Model :\n \n {deepar}")
    torch.set_num_threads(10)
    trainer.fit(
        deepar,
        train_dataloaders=train_dataloader,
        val_dataloaders=val_dataloader,
    )
    ########## Prediction #####################
    test_output = deepar.predict(data=test_dataloader,mode='prediction',return_index=True,num_workers=8,show_progress_bar=True)
    len1 = test_output[1]['time_idx'].shape[0]
    actual1_full = np.array([])
    pred_full = np.array([])
    RMSE_list = np.array([])
    old_pred_idx = test_output[1]['time_idx'][0]
    for i in range(len1):
        pred_idx = test_output[1]['time_idx'][i]
        if pred_idx - old_pred_idx > 1: # moved into new group
            plt.figure(figsize=(25,5))
            plt.plot(actual1_full.flatten(),'^-')
            plt.plot(pred_full.flatten(),'*-')
            plt.show()
            RMSE = np.sqrt(mean_squared_error(actual1_full.flatten(),pred_full.flatten() ))
            print('Average RMSE : ', RMSE)
            actual1_full = np.array([])
            pred_full = np.array([])
        actual = test_data[Target].iloc[pred_idx:pred_idx+pred_len].values
        actual1_full = np.append(actual1_full, actual)
        pred = np.array(np.rint(test_output[0][i])).astype(int)
        pred_full = np.append(pred_full, pred)
        old_pred_idx = pred_idx
        i=i+pred_len
    plt.figure(figsize=(25,5))
    plt.plot(actual1_full.flatten(),'^-')
    plt.plot(pred_full.flatten(),'*-')
    plt.show()
    RMSE = np.sqrt(mean_squared_error(actual1_full.flatten(),pred_full.flatten() ))
    print('Average RMSE : ', RMSE)
    print('\n Hyperparameter: param_comb_cnt,neu,lay,bat,lr,enc_len,pred_len,drop,num_ep,inflow\n')
    print(param_comb_cnt,neu,lay,bat,lr,enc_len,pred_len,drop,num_ep,' \n')
    ########## Prediction #####################
    # TO find minimum RMSE
    hyperparams_list.append((neu,lay,bat,lr,enc_len,pred_len,drop,num_ep))
    # RMSE_list.append(RMSE)
"""
Machine Learning predictions END
"""
######## Identify least RMSE case #############
# min_RMSE_idx = RMSE_list.index(min(RMSE_list))
# hyperparams_list[min_RMSE_idx]
######## Identify least RMSE case #############
A:
The error was here:
num_cols_list.append('dem_lag_168')
num_cols_list.append('dem_lag_336')
num_cols_list.append('inflow')
num_cols_list.append('inf_lag_168')
num_cols_list.append('inf_lag_336')
I should perform this append only once, outside the loop, as it is a one-time operation that should not be repeated for every hyperparameter combination.
That was why the actual input size was larger by 5 (i.e. expected 101, got 106).
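A minimal sketch of the fix (illustrative only), hoisting the one-time appends out of the hyperparameter loop so num_cols_list no longer grows by 5 columns on every combination:
num_cols_list = ['dem_lag_168', 'dem_lag_336', 'inflow', 'inf_lag_168', 'inf_lag_336']
for neu, lay, bat, lr, enc_len, pred_len, drop, num_ep in product(*hparams_grid.values()):
    # build cat_dict / cat_list / the TimeSeriesDataSet here,
    # reusing the fixed num_cols_list on every iteration
    ...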
|
RuntimeError: input.size(-1) must be equal to input_size. Expected 101, got 106
|
I am trying to use pytorch forecasting to make predictions using DeepAR.
I am getting the following error in the 2nd iteration of my training loop. I do not make any changes to the datasets after the training loop starts so I do not understand this error.
RuntimeError Traceback (most recent call last)
<ipython-input-41-d2a72cad41f9> in <module>
120 deepar,
121 train_dataloaders=train_dataloader,
--> 122 val_dataloaders=val_dataloader,
123 )
124
23 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
205 raise RuntimeError(
206 'input.size(-1) must be equal to input_size. Expected {}, got {}'.format(
--> 207 self.input_size, input.size(-1)))
208
209 def get_expected_hidden_size(self, input: Tensor, batch_sizes: Optional[Tensor]) -> Tuple[int, int, int]:
RuntimeError: input.size(-1) must be equal to input_size. Expected 101, got 106
My full code is as below
#from ctypes import FormatError
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import os,sys
# sys.path.append(os.path.abspath(os.path.join('C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\\2_freq_nbinom_LSTM')))
# sys.path.append(os.path.abspath(os.path.join('C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\\2_freq_nbinom_LSTM\\1_cluster_demand_prediction\data\weather_data')))
# sys.path.append(os.path.abspath(os.path.join('C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\2_freq_nbinom_LSTM\1_cluster_demand_prediction\data\demand_data')))
import torch
torch.use_deterministic_algorithms(True)
from pytorch_forecasting.data.encoders import TorchNormalizer
from pytorch_forecasting.metrics import SMAPE, RMSE
from torchmetrics import R2Score, SymmetricMeanAbsolutePercentageError, MeanSquaredError
import matplotlib.pyplot as plt
import pandas as pd
from pytorch_forecasting.data import TimeSeriesDataSet
from pytorch_forecasting.data import NaNLabelEncoder
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
import pytorch_lightning as pl
import torch
from pytorch_forecasting.data.encoders import TorchNormalizer
import os,sys
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
from statsmodels.tsa.stattools import acf,pacf
from scipy.signal import find_peaks
import operator
import statsmodels.api as sm
from itertools import combinations
import pickle
from pytorch_forecasting import Baseline
import random
from pytorch_forecasting import DeepAR,NegativeBinomialDistributionLoss
from itertools import product
from sklearn.metrics import mean_absolute_error, mean_squared_error
"""
Set Random seed
"""
random.seed(0)
torch.manual_seed(0)
np.random.seed(0)
## additional seeding to ensure reproduciblility.
pl.seed_everything(0)
import os
os.chdir('/content/drive/MyDrive/My_model/pytorch-forecasting-master/1_cluster_demand_prediction')
encoder_len = 24
pred_len = 1 # 1, 6, 12
#cov_lag_len= pred_len # do this when creating the dataset. import that code here.
os.chdir('/content/drive/MyDrive/My_model/pytorch-forecasting-master/1_cluster_demand_prediction')
tampines_all_clstr_train_dem_data = pd.read_csv('tampines_all_clstr_train_dem_data.csv')
tampines_all_clstr_val_dem_data = pd.read_csv('tampines_all_clstr_val_dem_data.csv')
tampines_all_clstr_test_dem_data = pd.read_csv('tampines_all_clstr_test_dem_data.csv')
tampines_all_clstr_train_dem_data = tampines_all_clstr_train_dem_data.drop(['Unnamed: 0'],axis=1)
tampines_all_clstr_val_dem_data = tampines_all_clstr_val_dem_data.drop(['Unnamed: 0'],axis=1)
tampines_all_clstr_test_dem_data = tampines_all_clstr_test_dem_data.drop(['Unnamed: 0'],axis=1)
train_data = pd.DataFrame()
val_data = pd.DataFrame()
test_data = pd.DataFrame()
cov_lag_len = 0
for c in tampines_all_clstr_train_dem_data.columns:
    if c == 'target':
        target_train = tampines_all_clstr_train_dem_data[c].iloc[:]
        train_data[c] = target_train.reset_index(drop=True)
        target_val = tampines_all_clstr_val_dem_data[c].iloc[:]
        val_data[c] = target_val.reset_index(drop=True)
        target_test = tampines_all_clstr_test_dem_data[c].iloc[:]
        test_data[c] = target_test.reset_index(drop=True)
    else:
        other_col_train = tampines_all_clstr_train_dem_data[c].iloc[:]
        train_data[c] = other_col_train.reset_index(drop=True)
        other_col_val = tampines_all_clstr_val_dem_data[c].iloc[:]
        val_data[c] = other_col_val.reset_index(drop=True)
        other_col_test = tampines_all_clstr_test_dem_data[c].iloc[:]
        test_data[c] = other_col_test.reset_index(drop=True)
"""
Import pre-processed Data
"""
#################### add date information ts ####################
#2021 oct 17 20:00:00
day=17
hour=(20+cov_lag_len)
if (20+cov_lag_len)>23:
    day = 18
    hour = hour%24
train_data["date"] = pd.Timestamp(year=2021, month=10, day=day, hour=hour ) + pd.to_timedelta(train_data.time_idx, "H")
train_data['_hour_of_day'] = train_data["date"].dt.hour.astype(str)
train_data['_day_of_week'] = train_data["date"].dt.dayofweek.astype(str)
train_data['_day_of_month'] = train_data["date"].dt.day.astype(str)
train_data['_day_of_year'] = train_data["date"].dt.dayofyear.astype(str)
train_data['_week_of_year'] = train_data["date"].dt.weekofyear.astype(str)
train_data['_month_of_year'] = train_data["date"].dt.month.astype(str)
train_data['_year'] = train_data["date"].dt.year.astype(str)
#################### add date information ts ####################
#################### add date information ts ####################
# val starts at 3/12/2021 09:00
day=3
hour=(9+cov_lag_len)
if (9+cov_lag_len)>23:
    day = 4
    hour = hour%24
val_data["date"] = pd.Timestamp(year=2021, month=12, day=day, hour=hour ) + pd.to_timedelta(val_data.time_idx, "H")
val_data['_hour_of_day'] = val_data["date"].dt.hour.astype(str)
val_data['_day_of_week'] = val_data["date"].dt.dayofweek.astype(str)
val_data['_day_of_month'] = val_data["date"].dt.day.astype(str)
val_data['_day_of_year'] = val_data["date"].dt.dayofyear.astype(str)
val_data['_week_of_year'] = val_data["date"].dt.weekofyear.astype(str)
val_data['_month_of_year'] = val_data["date"].dt.month.astype(str)
val_data['_year'] = val_data["date"].dt.year.astype(str)
#################### add date information ts ####################
#################### add date information ts ####################
# test starts at 16/12/2021 16:00
day=16
hour=(16+cov_lag_len)
if (16+cov_lag_len)>23:
    day = 17
    hour = hour%24
test_data["date"] = pd.Timestamp(year=2021, month=12, day=day, hour=hour ) + pd.to_timedelta(test_data.time_idx, "H")
test_data['_hour_of_day'] = test_data["date"].dt.hour.astype(str)
test_data['_day_of_week'] = test_data["date"].dt.dayofweek.astype(str)
test_data['_day_of_month'] = test_data["date"].dt.day.astype(str)
test_data['_day_of_year'] = test_data["date"].dt.dayofyear.astype(str)
test_data['_week_of_year'] = test_data["date"].dt.weekofyear.astype(str)
test_data['_month_of_year'] = test_data["date"].dt.month.astype(str)
test_data['_year'] = test_data["date"].dt.year.astype(str)
#################### add date information ts ####################
Target = 'target'
"""
set inputs here
(hyperparameters grid search)
"""
######### Network Architecture ###################
###### Create hyperparameters grid ######
hparams_grid = {"LSTM_neuron_size":[168,336,672],
"num_layers":[3],
"batch_size":[8,4],
"learning_rate":[0.0001],
"max_encoder_length":[encoder_len],
"max_prediction_length":[pred_len],
"dropout":[0.2],
#"cov_pair":cov_pairs_list,# [cov_pairs_list[7]],
"Num_epochs":[1]}
#"Num_epochs":[16,18,20,22,24,26,28]}
###### Create hyperparameters grid ######
p = 10 # patience no. of epochs
Loss=NegativeBinomialDistributionLoss()
######### Network Architecture ###################
TRAINING ROUTINE IS BELOW.
######### Training Routine ###################
fdv_steps = 10 # fast_dev_run
######### Training Routine ###################
############## Inputs for 2) Persistance model ( seasonal naive forecast ) #######################
season_len = 168 # length of season
num_past_seas = 2 # number of past seasons to use in averaging
#seas_pred_strt_idx = 2035 # seasonal naive forecast start index, in hours use the df dataframe
############## Inputs for 2) Persistance model ( seasonal naive forecast ) #######################
param_comb_cnt=0
for neu,lay,bat,lr,enc_len,pred_len,drop,num_ep in product(*[x for x in hparams_grid.values()]):
    #print(param_comb_cnt,neu,lay,bat,lr,enc_len,pred_len,drop,num_ep,df_cov_col1,df_cov_col2)
    param_comb_cnt+=1
param_comb_cnt
"""
Full Training Routine
with hyperparmeter grid search
Load data into TimeSeriesDataSet object
for fast development run
uncomment fast_dev_run = fdv_steps
"""
#early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=p, verbose=False, mode="min")
lr_logger = LearningRateMonitor()
RMSE_list = [] # FIND minimum RMSE case
hyperparams_list = [] # FIND minimum RMSE case
# best_val_comb_idx=[17,21,51,52,53,54,61,62,63,82,83,84,109,110,111,143,144,145,195,218,219,220,232,233,234,236,237,238,280,338,339,340,344,345,346,386]
# best_val_train_epochs = [50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50]
# best_val_comb_idx=[234]
# best_val_train_epochs = [50]
num_cols_list = []
param_comb_cnt=-1
for neu,lay,bat,lr,enc_len,pred_len,drop,num_ep in product(*[x for x in hparams_grid.values()]):
    param_comb_cnt+=1
    if param_comb_cnt <0 or param_comb_cnt > 50:
        continue
    # df_cov_col1 = cov_pair[0]
    # df_cov_col2 = cov_pair[1]
    ######### Load DATA #############
    cat_dict = {"_hour_of_day": NaNLabelEncoder(add_nan=True).fit(train_data._hour_of_day), \
        "_day_of_week": NaNLabelEncoder(add_nan=True).fit(train_data._day_of_week), "_day_of_month" : NaNLabelEncoder(add_nan=True).fit(train_data._day_of_month), "_day_of_year" : NaNLabelEncoder(add_nan=True).fit(train_data._day_of_year), \
        "_week_of_year": NaNLabelEncoder(add_nan=True).fit(train_data._week_of_year), "_month_of_year": NaNLabelEncoder(add_nan=True).fit(train_data._month_of_year) ,"_year": NaNLabelEncoder(add_nan=True).fit(train_data._year) }
    cat_list = ["_hour_of_day","_day_of_week","_day_of_month","_day_of_year","_week_of_year","_month_of_year","_year"]
    num_cols_list.append('dem_lag_168')
    num_cols_list.append('dem_lag_336')
    num_cols_list.append('inflow')
    num_cols_list.append('inf_lag_168')
    num_cols_list.append('inf_lag_336')
    #cat_list.append('wea_desc_clstr_52')
    train_dataset = TimeSeriesDataSet(
        train_data,
        time_idx="time_idx",
        target=Target,
        categorical_encoders=cat_dict,
        group_ids=["group"],
        min_encoder_length=enc_len,
        max_encoder_length=enc_len,
        min_prediction_length=pred_len,
        max_prediction_length=pred_len,
        time_varying_unknown_reals=[Target],
        time_varying_known_reals=num_cols_list,
        time_varying_known_categoricals=cat_list,
        add_relative_time_idx=False,
        randomize_length=False,
        scalers={},
        target_normalizer=TorchNormalizer(method="identity",center=False,transformation=None )
    )
    val_dataset = TimeSeriesDataSet.from_dataset(train_dataset,val_data, stop_randomization=True, predict=False)
    test_dataset = TimeSeriesDataSet.from_dataset(train_dataset,test_data, stop_randomization=True)
    train_dataloader = train_dataset.to_dataloader(train=True, batch_size=bat)
    val_dataloader = val_dataset.to_dataloader(train=False, batch_size=bat)
    test_dataloader = test_dataset.to_dataloader(train=False, batch_size=bat)
    ######### Load DATA #############
    """
    Machine Learning predictions START
    1) DeepAR
    """
    trainer = pl.Trainer(
        max_epochs=num_ep,
        gpus=-1, #-1
        auto_lr_find=False,
        gradient_clip_val=0.1,
        limit_train_batches=1.0,
        limit_val_batches=1.0,
        #fast_dev_run=fdv_steps,
        logger=True,
        #log_every_n_steps=10,
        # profiler=True,
        callbacks=[lr_logger]#, early_stop_callback],
        #enable_checkpointing=True,
        #default_root_dir="C:\Work\WORK_PACKAGE\Demand_forecasting\github\DeepAR-pytorch\My_model\2_freq_nbinom_LSTM\1_cluster_demand_prediction\logs"
    )
    #print(f"training routing:\n \n {trainer}")
    deepar = DeepAR.from_dataset(
        train_dataset,
        learning_rate=lr,
        hidden_size=neu,
        rnn_layers=lay,
        dropout=drop,
        loss=Loss,
        log_interval=20,
        log_val_interval=6,
        log_gradient_flow=False,
        # reduce_on_plateau_patience=3,
    )
    #print(f"Number of parameters in network: {deepar.size()/1e3:.1f}k")
    # print(f"Model :\n \n {deepar}")
    torch.set_num_threads(10)
    trainer.fit(
        deepar,
        train_dataloaders=train_dataloader,
        val_dataloaders=val_dataloader,
    )
    ########## Prediction #####################
    test_output = deepar.predict(data=test_dataloader,mode='prediction',return_index=True,num_workers=8,show_progress_bar=True)
    len1 = test_output[1]['time_idx'].shape[0]
    actual1_full = np.array([])
    pred_full = np.array([])
    RMSE_list = np.array([])
    old_pred_idx = test_output[1]['time_idx'][0]
    for i in range(len1):
        pred_idx = test_output[1]['time_idx'][i]
        if pred_idx - old_pred_idx > 1: # moved into new group
            plt.figure(figsize=(25,5))
            plt.plot(actual1_full.flatten(),'^-')
            plt.plot(pred_full.flatten(),'*-')
            plt.show()
            RMSE = np.sqrt(mean_squared_error(actual1_full.flatten(),pred_full.flatten() ))
            print('Average RMSE : ', RMSE)
            actual1_full = np.array([])
            pred_full = np.array([])
        actual = test_data[Target].iloc[pred_idx:pred_idx+pred_len].values
        actual1_full = np.append(actual1_full, actual)
        pred = np.array(np.rint(test_output[0][i])).astype(int)
        pred_full = np.append(pred_full, pred)
        old_pred_idx = pred_idx
        i=i+pred_len
    plt.figure(figsize=(25,5))
    plt.plot(actual1_full.flatten(),'^-')
    plt.plot(pred_full.flatten(),'*-')
    plt.show()
    RMSE = np.sqrt(mean_squared_error(actual1_full.flatten(),pred_full.flatten() ))
    print('Average RMSE : ', RMSE)
    print('\n Hyperparameter: param_comb_cnt,neu,lay,bat,lr,enc_len,pred_len,drop,num_ep,inflow\n')
    print(param_comb_cnt,neu,lay,bat,lr,enc_len,pred_len,drop,num_ep,' \n')
    ########## Prediction #####################
    # TO find minimum RMSE
    hyperparams_list.append((neu,lay,bat,lr,enc_len,pred_len,drop,num_ep))
    # RMSE_list.append(RMSE)
"""
Machine Learning predictions END
"""
######## Identify least RMSE case #############
# min_RMSE_idx = RMSE_list.index(min(RMSE_list))
# hyperparams_list[min_RMSE_idx]
######## Identify least RMSE case #############
|
[
"The error was here:\n num_cols_list.append('dem_lag_168') \n num_cols_list.append('dem_lag_336')\n num_cols_list.append('inflow')\n num_cols_list.append('inf_lag_168') \n num_cols_list.append('inf_lag_336')\n\nI should perform this append only once, outside the loop, as it is a one-time operation that should not be repeated for every hyperparameter combination.\nThat was why the actual input size was larger by 5 (i.e. expected 101, got 106)\n"
] |
[
0
] |
[] |
[] |
[
"pytorch_dataloader"
] |
stackoverflow_0074601233_pytorch_dataloader.txt
|
Q:
`if constexpr` vs `if` in light of compiler optimization and code performance
Consider a function template func that is very performance critical. It can be instantiated with T=Type1 or some other type. Part of the function logic depends on T it is instantiated with.
One can either explicitly use an if constexpr (Code B) or use a vanilla if instead (Code A), and let the compiler optimize the code.
However, I wonder how the implementation without constexpr (Code A) is any different. Isn't the compiler capable of detecting which branch of the if (in Code A) to use at compile time while instantiating? Can it still (for Code A) generate less efficient code?
Code A. Without if constexpr:
template<class T>
void func(T argument)
{
    // some general type-independent logic
    if (std::is_same<Type1,T>::value)
    {
        // do something
    }
    else
    {
        // do something else
    }
    // some general type-independent logic
}
Code B. With if constexpr:
template<class T>
void func(T argument)
{
    // some general type-independent logic
    if constexpr (std::is_same<Type1,T>::value)
    {
        // do something
    }
    else
    {
        // do something else
    }
    // some general type-independent logic
}
Both codes A & B compile, as do something and do something else are well-formed for any T.
There are some similar-sounding questions:
Why is constexpr if needed? – this one answers when constexpr is required.
Difference between if and constexpr if – just lists the differences
The aforementioned questions do not answer whether Code B is preferable to Code A for some reason (when both branches are well-formed anyway).
The only advantage I see would be to tell the programmer explicitly that this if is compile-time; however, I would say the conditional expression is self-explanatory.
A:
if constexpr is not about optimization. Compilers are very good at optimizing away a branch that is if (true) or if (false) (since we're talking about constant expressions, that is what it boils down to). Here is a godbolt demo of the example in the OP - you'll note that both gcc and clang, even at -O0, do not emit a branch for a simple if.
if constexpr is all about ensuring that only one branch of the if is instantiated. This is hugely important and valuable for writing templates - because now we can actually write conditionally compiling code within the body of the same function instead of writing multiple artificial functions just to avoid instantiation.
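For instance, a minimal sketch (a hypothetical describe function, not from the question) where a plain if would not even compile:
#include <iostream>
#include <string>
#include <type_traits>

template <class T>
void describe(T value) {
    if constexpr (std::is_same_v<T, std::string>) {
        // Only instantiated when T is std::string; with a plain `if`,
        // value.size() would have to compile for every T, so describe(42)
        // would be ill-formed.
        std::cout << "string of length " << value.size() << '\n';
    } else {
        std::cout << "not a string\n";
    }
}

int main() {
    describe(std::string("hello")); // string of length 5
    describe(42);                   // not a string
}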
That said, if you have a condition that is a known constant expression - just always use if constexpr, whether or not you need the instantiation benefit. There is no downside to such a decision. It makes it clearer to readers that indeed this condition is constant (since otherwise it wouldn't even compile). It will also force the evaluation of the expression as a constant (a slight variant leads gcc to emit a branch at -O0, though not at -O1), which with the coming addition of is_constant_evaluated() may become more important in the long run (possibly even negating my opening paragraph).
The only advantage I see would be to tell the programmer explicitly that this if is compile-time; however, I would say the conditional expression is self-explanatory.
To address this specifically, yes, std::is_same<X, Y>::value is "self-explanatory" that it is a constant expression... because we happen to be familiar with std::is_same. But it's less obvious whether foo<X>::value is a constant expression or whether foo<X>() + bar<Y>() is a constant expression or anything more arbitrarily complicated than that.
It's seeing if constexpr that makes the fact that it's compile-time self-explanatory, not the content of the condition itself.
A:
Adding an example to @Barry's explanation: the use is primarily for writing templates. Consider the following:
template <class T>
auto get_value()
{
    if constexpr (std::is_same_v<T, int>) {
        return 1;
    } else {
        return 2.0;
    }
}
You can note that if the template parameter is int, the return value is deduced to be int, while it is double when the template parameter is not int. This does not work with non-constexpr if statements, because at instantiation all returns of a function must have a common type, which the former does not have. The only other way of achieving this is to use C++20 constraints, or std::enable_if to overload the function based on the template parameter.
|
`if constexpr` vs `if` in light of compiler optimization and code performance
|
Consider a function template func that is very performance critical. It can be instantiated with T=Type1 or some other type. Part of the function logic depends on T it is instantiated with.
One can either explicitly use an if constexpr (Code B) or use a vanilla if instead (Code A), and let the compiler optimize the code.
However, I wonder how the implementation without constexpr (Code A) is any different. Isn't the compiler capable of detecting which branch of the if (in Code A) to use at compile time while instantiating? Can it still (for Code A) generate less efficient code?
Code A. Without if constexpr:
template<class T>
void func(T argument)
{
    // some general type-independent logic
    if (std::is_same<Type1,T>::value)
    {
        // do something
    }
    else
    {
        // do something else
    }
    // some general type-independent logic
}
Code B. With if constexpr:
template<class T>
void func(T argument)
{
    // some general type-independent logic
    if constexpr (std::is_same<Type1,T>::value)
    {
        // do something
    }
    else
    {
        // do something else
    }
    // some general type-independent logic
}
Both codes A & B compile, as do something and do something else are well-formed for any T.
There are some similar-sounding questions:
Why is constexpr if needed? – this one answers when constexpr is required.
Difference between if and constexpr if – just lists the differences
The aforementioned questions do not answer whether Code B is preferable to Code A for some reason (when both branches are well-formed anyway).
The only advantage I see would be to tell the programmer explicitly that this if is compile-time; however, I would say the conditional expression is self-explanatory.
|
[
"if constexpr is not about optimization. Compilers are very good at optimizing away a branch that is if (true) or if (false) (since we're talking about constant expressions, that is what it boils down to). Here is a godbolt demo of the example in OP - you'll note that both gcc and clang, even on -O0, do not emit a branch for a simple if.\nif constexpr is all about ensuring that only one branch of the if is instantiated. This is hugely important and valuable for writing templates - because now we can actually write conditionally compiling code within the body of the same function instead of writing multiple artificial functions just to avoid instantiation. \nThat said, if you have a condition that is a known constant expression - just always use if constexpr, whether or not you need the instantiation benefit. There is no downside to such a decision. It makes it clearer to readers that indeed this condition is constant (since otherwise it wouldn't even compile). It will also force the evaluation of the expression as a constant (a slight variant leads gcc to emit a branch at -O0, though not at -O1), which with the coming addition of is_constant_evaluated() may become more important in the long run (possibly even negating my opening paragraph).\n\n\nThe only advantage I see would be to tell the programmer explicitly that this if is compile-time; however, I would say the conditional expression is self-explanatory.\n\nTo address this specifically, yes, std::is_same<X, Y>::value is \"self-explanatory\" that it is a constant expression... because we happen to be familiar with std::is_same. But it's less obvious whether foo<X>::value is a constant expression or whether foo<X>() + bar<Y>() is a constant expression or anything more arbitrarily complicated than that.\nIt's seeing if constexpr that makes the fact that it's compile-time self-explanatory, not the content of the condition itself.\n",
"Adding an example to @Barry's explanation: The use is primarily for writing templates. Consider the following:\ntemplate <class T>\nauto get_value() \n{\n    if constexpr (std::is_same_v<T, int>) {\n        return 1;\n    } else {\n        return 2.0;\n    }\n}\n\nYou can note that, if the template parameter is int, the return value is determined to be int, while it is double when the template parameter is not int. You will see that this does not work with non-constexpr if statements, because at instantiation, all returns of a function must have a common type, which the former does not have. The only other way of achieving this is to use C++20 constraints, or std::enable_if to overload the function based on the template parameter.\n"
] |
[
11,
1
] |
[] |
[] |
[
"c++",
"c++17",
"if_constexpr"
] |
stackoverflow_0054545565_c++_c++17_if_constexpr.txt
|
Q:
Class type check in TypeScript
In ActionScript, it is possible to check the type at run-time using the is operator:
var mySprite:Sprite = new Sprite();
trace(mySprite is Sprite); // true
trace(mySprite is DisplayObject);// true
trace(mySprite is IEventDispatcher); // true
Is it possible to detect whether a variable extends (or is) a certain class or interface with TypeScript?
I couldn't find anything about it in the language specs. It should be there when working with classes/interfaces.
A:
4.19.4 The instanceof operator
The instanceof operator requires the left operand to be of type Any, an object type, or a type parameter type, and the right operand to be of type Any or a subtype of the 'Function' interface type. The result is always of the Boolean primitive type.
So you could use
mySprite instanceof Sprite;
Note that this operator is also in ActionScript but it shouldn't be used there anymore:
The is operator, which is new for ActionScript 3.0, allows you to test whether a variable or expression is a member of a given data type. In previous versions of ActionScript, the instanceof operator provided this functionality, but in ActionScript 3.0 the instanceof operator should not be used to test for data type membership. The is operator should be used instead of the instanceof operator for manual type checking, because the expression x instanceof y merely checks the prototype chain of x for the existence of y (and in ActionScript 3.0, the prototype chain does not provide a complete picture of the inheritance hierarchy).
TypeScript's instanceof shares the same problems. As the language is still in development, I recommend stating a proposal for such a facility.
See also:
MDN: instanceof
A:
TypeScript has a way of validating the type of a variable at runtime.
You can add a validating function that returns a type predicate.
So you can call this function inside an if statement, and be sure that all the code inside that block is safe to use as the type you think it is.
Example from the TypeScript docs:
function isFish(pet: Fish | Bird): pet is Fish {
return (<Fish>pet).swim !== undefined;
}
// Both calls to 'swim' and 'fly' are now okay.
if (isFish(pet)) {
pet.swim();
}
else {
pet.fly();
}
See more at:
https://www.typescriptlang.org/docs/handbook/advanced-types.html
A:
You can use the instanceof operator for this. From MDN:
The instanceof operator tests whether the prototype property of a
constructor appears anywhere in the prototype chain of an object.
If you don't know what prototypes and prototype chains are, I highly recommend looking them up. Also, here is a JS example (TS works similarly in this respect) which might clarify the concept:
class Animal {
name;
constructor(name) {
this.name = name;
}
}
const animal = new Animal('fluffy');
// true because Animal is on the prototype chain of animal
console.log(animal instanceof Animal); // true
// Proof that Animal is on the prototype chain
console.log(Object.getPrototypeOf(animal) === Animal.prototype); // true
// true because Object is on the prototype chain of animal
console.log(animal instanceof Object);
// Proof that Object is on the prototype chain
console.log(Object.getPrototypeOf(Animal.prototype) === Object.prototype); // true
console.log(animal instanceof Function); // false, Function not on prototype chain
The prototype chain in this example is:
animal > Animal.prototype > Object.prototype
A:
You have two types of checks
typeof for basic types and
instanceof for complex types
For example, the isString check can be performed like this:
function isString(value) {
return typeof value === 'string' || value instanceof String;
}
A:
Although this is late and some good answers already exist, the proposed solution from @Gilad has a flaw: if the property swim exists on the object but its value is set to undefined, the check fails. A more robust check would be:
export const isFish = (pet: Fish | Bird): pet is Fish =>
Object.keys(pet).includes('swim');
This solution wouldn't be dependent on the value of swim!
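For completeness, the same guard can be written with the built-in in operator, which TypeScript accepts for narrowing unions (a minimal sketch, assuming the same Fish/Bird types as above):
function isFish(pet: Fish | Bird): pet is Fish {
    // True whenever the key exists on the object, regardless of its value
    return 'swim' in pet;
}
Inside an if statement, 'swim' in pet even narrows pet to Fish on its own, without a separate predicate function.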
|
Class type check in TypeScript
|
In ActionScript, it is possible to check the type at run-time using the is operator:
var mySprite:Sprite = new Sprite();
trace(mySprite is Sprite); // true
trace(mySprite is DisplayObject);// true
trace(mySprite is IEventDispatcher); // true
Is it possible to detect if a variable (extends or) is a certain class or interface with TypeScript?
I couldn't find anything about it in the language specs. It should be there when working with classes/interfaces.
|
[
"\n4.19.4 The instanceof operator\nThe instanceof operator requires the left operand to be of type Any, an object type, or a type parameter type, and the right operand to be of type Any or a subtype of the 'Function' interface type. The result is always of the Boolean primitive type.\n\nSo you could use\nmySprite instanceof Sprite;\n\nNote that this operator is also in ActionScript but it shouldn't be used there anymore:\n\nThe is operator, which is new for ActionScript 3.0, allows you to test whether a variable or expression is a member of a given data type. In previous versions of ActionScript, the instanceof operator provided this functionality, but in ActionScript 3.0 the instanceof operator should not be used to test for data type membership. The is operator should be used instead of the instanceof operator for manual type checking, because the expression x instanceof y merely checks the prototype chain of x for the existence of y (and in ActionScript 3.0, the prototype chain does not provide a complete picture of the inheritance hierarchy).\n\nTypeScript's instanceof shares the same problems. As it is a language which is still in its development I recommend you to state a proposal of such facility.\nSee also:\n\nMDN: instanceof\n\n",
"TypeScript have a way of validating the type of a variable in runtime.\nYou can add a validating function that returns a type predicate.\nSo you can call this function inside an if statement, and be sure that all the code inside that block is safe to use as the type you think it is.\nExample from the TypeScript docs:\nfunction isFish(pet: Fish | Bird): pet is Fish {\n return (<Fish>pet).swim !== undefined;\n}\n\n// Both calls to 'swim' and 'fly' are now okay.\nif (isFish(pet)) {\n pet.swim();\n}\nelse {\n pet.fly();\n}\n\nSee more at:\nhttps://www.typescriptlang.org/docs/handbook/advanced-types.html\n",
"You can use the instanceof operator for this. From MDN:\n\nThe instanceof operator tests whether the prototype property of a\n constructor appears anywhere in the prototype chain of an object.\n\nIf you don't know what prototypes and prototype chains are I highly recommend looking it up. Also here is a JS (TS works similar in this respect) example which might clarify the concept:\n\n\n class Animal {\r\n name;\r\n \r\n constructor(name) {\r\n this.name = name;\r\n }\r\n }\r\n \r\n const animal = new Animal('fluffy');\r\n \r\n // true because Animal in on the prototype chain of animal\r\n console.log(animal instanceof Animal); // true\r\n // Proof that Animal is on the prototype chain\r\n console.log(Object.getPrototypeOf(animal) === Animal.prototype); // true\r\n \r\n // true because Object in on the prototype chain of animal\r\n console.log(animal instanceof Object); \r\n // Proof that Object is on the prototype chain\r\n console.log(Object.getPrototypeOf(Animal.prototype) === Object.prototype); // true\r\n \r\n console.log(animal instanceof Function); // false, Function not on prototype chain\r\n \r\n \n\n\n\nThe prototype chain in this example is:\nanimal > Animal.prototype > Object.prototype\n",
"You have two types of checks\n\ntypeof for basic types and\ninstanceof for complex types\n\nby ex, the isString check can be performed like this:\nfunction isString(value) {\n return typeof value === 'string' || value instanceof String;\n}\n\n",
"Although late and already some good answers exists. The proposed solution from @Gilad has the flaw if the assigned content swim exists as Type but the Value is set to undefined. A more robust check would be:\nexport const isFish= (pet: Fish | Bird): pet is Fish =>\n Object.keys(pet).includes('swim');\n\nThis solution wouldn't be dependent on the value of swim!\n"
] |
[
529,
115,
24,
9,
0
] |
[] |
[] |
[
"typechecking",
"typescript"
] |
stackoverflow_0012789231_typechecking_typescript.txt
|
Q:
How to make gif only loop once with react native?
Here is my code
<Image
resizeMode="contain"
source={require('../../assets/splash.gif')}
style={{ width: '100%', alignSelf: 'center' }}
/>
How can I make the GIF loop only once on first render? Thanks.
A:
use ControlledGifView component from react-native-controlled-gif library
react-native-controlled-gif
A:
To control the playback of a GIF file and have it loop only once at first render time, you can use the 'loop' property of the 'react-native-animatable' library. This library provides an 'Animated.Image' component that allows you to easily control the playback of GIFs and other animations. To use it, you will first need to install it in your project via 'npm' or 'yarn'. Then, instead of using the React Native 'Image' component, you can import and use the 'react-native-animatable' 'Animated.Image' component instead. Here is an example of how it could be done:
import { Animated } from 'react-native-animatable';
// Somewhere in your code...
<Animated.Image
resizeMode="contain"
source={require('../../assets/splash.gif')}
style={{ width: '100%', alignSelf: 'center' }}
loop={false}
/>
This code uses the 'Animated.Image' component instead of the React Native 'Image' component, and sets the loop property to 'false' to indicate that the GIF should play only once rather than loop.
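If neither library fits, a library-free workaround is to swap the GIF for a static frame once one loop has finished. This is only a sketch: the 3000 ms duration and the splash_last_frame.png asset are assumptions you would replace with your GIF's real duration and last frame.
import React, { useEffect, useState } from 'react';
import { Image } from 'react-native';

const SplashOnce = () => {
  const [animating, setAnimating] = useState(true);

  useEffect(() => {
    // Replace the GIF with a static frame after one loop has played
    const timer = setTimeout(() => setAnimating(false), 3000); // assumed GIF duration
    return () => clearTimeout(timer);
  }, []);

  return (
    <Image
      resizeMode="contain"
      source={
        animating
          ? require('../../assets/splash.gif')
          : require('../../assets/splash_last_frame.png') // assumed static frame
      }
      style={{ width: '100%', alignSelf: 'center' }}
    />
  );
};

export default SplashOnce;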
|
How to make gif only loop once with react native?
|
Here is my code
<Image
resizeMode="contain"
source={require('../../assets/splash.gif')}
style={{ width: '100%', alignSelf: 'center' }}
/>
How can I make the GIF loop only once on first render? Thanks.
|
[
"use ControlledGifView component from react-native-controlled-gif library\nreact-native-controlled-gif\n",
"To control the playback of a GIF file and have it loop only once at first render time, you can use the 'loop' property of the 'react-native-animatable' library. This library provides an 'Animated.Image' component that allows you to easily control the playback of GIFs and other animations. To use it, you will first need to install it in your project via 'npm' or 'yarn'. Then, instead of using the React Native 'Image' component, you can import and use the 'react-native-animatable' 'Animated.Image' component instead. Here is an example of how it could be done:\nimport { Animated } from 'react-native-animatable';\n// Somewhere in your code...\n<Animated.Image\n resizeMode=\"contain\"\n source={require('../../assets/splash.gif')}\n style={{ width: '100%', alignSelf: 'center' }}\n loop={false}\n/>\n\nThis code uses the 'Animated.Image' component instead of the React Native 'Image' component, and sets the loop property to 'false' to indicate that the GIF should play.\n"
] |
[
0,
0
] |
[] |
[] |
[
"gif",
"image",
"javascript",
"react_native",
"reactjs"
] |
stackoverflow_0074657579_gif_image_javascript_react_native_reactjs.txt
|
Q:
How can I compare one column of a dataframe to multiple other columns using SequenceMatcher?
I have a dataframe with 6 columns, the first two are an id and a name column, the remaining 4 are potential matches for the name column.
id name match1 match2 match3 match4
1 NXP Semiconductors NaN NaN NaN NaN
2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital
3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN
4 Miami-Dade County County of Will County of Orange NaN NaN
5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia
6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN
I'd like to use SequenceMatcher to compare the name column with each match column if there is a value and return the match value with the highest ratio, or closest match, in a new column at the end of the dataframe.
So the output would be something like this:
id name match1 match2 match3 match4 best match
1 NXP Semiconductors NaN NaN NaN NaN NaN
2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital Cincinnati Children's Hospital Medical Center
3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN The State Board of Administration of Florida
4 Miami-Dade County County of Will County of Orange NaN NaN County of Orange
5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia California Teacher's Association
6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN Waste Management
I've gotten the data into the dataframe and have been able to compare one column to a single other column using the apply method:
df['diff'] = df.apply(lambda x: diff.SequenceMatcher(None, x[0].strip(), x[1].strip()).ratio(), axis=1)
However, I'm not sure how to loop over multiple columns in the same row. I also thought about trying to reformat my data so it that the method above would work, something like this:
name match
name1 match1
name1 match2
name1 match3
However, I was running into issues dealing with the NaN values. Open to suggestions on the best route to accomplish this.
A:
I ended up solving this using the second idea of reformatting the table. Using the melt function I was able to get a two column table of the name field with each possible match. From there I used the original lambda function to compare the two columns and output a ratio. From there it was relatively easy to go through and see the most likely matches, although it did require some manual effort.
import difflib as diff
import pandas as pd

df = pd.read_csv('output.csv')
df1 = df.melt(id_vars=['id', 'name'], var_name='match').dropna().drop(columns='match').sort_values('name')
df1['diff'] = df1.apply(lambda x: diff.SequenceMatcher(None, x['name'].strip(), x['value'].strip()).ratio(), axis=1)
df1.to_csv('comparison-output.csv', encoding='utf-8')
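To go all the way to the 'best match' column from the question, the melted frame can be reduced to the highest-ratio row per name and merged back. A sketch building on df1 above (the id/name/value column names come from the melt call):
best = df1.loc[df1.groupby('name')['diff'].idxmax(), ['name', 'value']]
result = df.merge(best.rename(columns={'value': 'best match'}), on='name', how='left')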
|
How can I compare one column of a dataframe to multiple other columns using SequenceMatcher?
|
I have a dataframe with 6 columns, the first two are an id and a name column, the remaining 4 are potential matches for the name column.
id name match1 match2 match3 match4
1 NXP Semiconductors NaN NaN NaN NaN
2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital
3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN
4 Miami-Dade County County of Will County of Orange NaN NaN
5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia
6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN
I'd like to use SequenceMatcher to compare the name column with each match column if there is a value and return the match value with the highest ratio, or closest match, in a new column at the end of the dataframe.
So the output would be something like this:
id name match1 match2 match3 match4 best match
1 NXP Semiconductors NaN NaN NaN NaN NaN
2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital Cincinnati Children's Hospital Medical Center
3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN The State Board of Administration of Florida
4 Miami-Dade County County of Will County of Orange NaN NaN County of Orange
5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia California Teacher's Association
6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN Waste Management
I've gotten the data into the dataframe and have been able to compare one column to a single other column using the apply method:
df['diff'] = df.apply(lambda x: diff.SequenceMatcher(None, x[0].strip(), x[1].strip()).ratio(), axis=1)
However, I'm not sure how to loop over multiple columns in the same row. I also thought about trying to reformat my data so it that the method above would work, something like this:
name match
name1 match1
name1 match2
name1 match3
However, I was running into issues dealing with the NaN values. Open to suggestions on the best route to accomplish this.
|
[
"I ended up solving this using the second idea of reformatting the table. Using the melt function I was able to get a two column table of the name field with each possible match. From there I used the original lambda function to compare the two columns and output a ratio. From there it was relatively easy to go through and see the most likely matches, although it did require some manual effort.\ndf = pd.read_csv('output.csv')\ndf1 = df.melt(id_vars = ['id', 'name'], var_name = 'match').dropna().drop('match',1).sort_values('name')\ndf1['diff'] = df1.apply(lambda x: diff.SequenceMatcher(None, x[1].strip(), x[2].strip()).ratio(), axis=1) \ndf1.to_csv('comparison-output.csv', encoding='utf-8')\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python",
"sequencematcher"
] |
stackoverflow_0074637083_pandas_python_sequencematcher.txt
|
Q:
Django query to return percentage of users with a post
Two models Users (built-in) and Posts:
class Post(models.Model):
post_date = models.DateTimeField(default=timezone.now)
user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, related_name='user_post')
post = models.CharField(max_length=100)
I want to have an API endpoint that returns the percentage of users that have posted. Basically I want SUM(unique users who have posted) / total_users
I have been trying to play around with annotate and aggregate, but I am getting the sum of posts for each user, or the sum of users per post (which is one...). How can I get the count of unique users who posted, divide that by the total user count, and return it?
I feel like I am missing something silly but my brain has gone to mush staring at this.
class PostParticipationAPIView(generics.ListAPIView):
queryset = Post.objects.all()
serializer_class = PostSerializer
def get_queryset(self):
start_date = self.request.query_params.get('start_date')
end_date = self.request.query_params.get('end_date')
# How can I take something like this, divide it by User.objects.all().count() * 100, and assign it to something to return as the queryset?
queryset = Post.objects.filter(post_date__gte=start_date, post_date__lte=end_date).distinct('user').count()
return queryset
My goal is to end up with the endpoint like:
{
total_participation: 97.3
}
Thanks for any guidance.
BCBB
A:
something like this should work
# get total user count
total_users = User.objects.count()
# get unique set of users with post
total_users_who_posted = Post.objects.filter(...).distinct("user").count()
# calculate_percentage
percentage = {
"total_participation": (total_users_who_posted*100)/ total_users
}
# take caution of division by zero
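Wired into the endpoint from the question, that could look like the following sketch (switching to a plain APIView is my assumption, since a single percentage doesn't need a list serializer; .values('user').distinct() is used instead of .distinct('user') so it also works outside PostgreSQL):
from django.contrib.auth.models import User
from rest_framework.response import Response
from rest_framework.views import APIView

class PostParticipationAPIView(APIView):
    def get(self, request):
        start_date = request.query_params.get('start_date')
        end_date = request.query_params.get('end_date')
        total_users = User.objects.count()
        # Post is the model from the question
        posted_users = (Post.objects
                        .filter(post_date__gte=start_date, post_date__lte=end_date)
                        .values('user').distinct().count())
        # guard against an empty user table
        percentage = (posted_users * 100 / total_users) if total_users else 0
        return Response({'total_participation': percentage})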
A:
I don't think it is possible to do this completely with Django's ORM, but you can use the ORM to get the user counts (users with posts, and total):
from django.db.models import BooleanField, Case, Count, Q, Value, When

counts = (User
          .objects
          .annotate(posted=Case(When(user_post__isnull=False,
                                     then=Value(True)),
                                default=Value(False),
                                output_field=BooleanField()))
          .aggregate(posted_users=Count('pk', filter=Q(posted=True), distinct=True),
                     total_users=Count('pk', distinct=True)))

# This will result in a dict containing the following:
# counts = {'posted_users': ...,
#           'total_users': ...}
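From that dict, the endpoint value is then a one-liner (guarding against an empty user table):
percentage = counts['posted_users'] * 100 / counts['total_users'] if counts['total_users'] else 0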
|
Django query to return percentage of users with a post
|
Two models Users (built-in) and Posts:
class Post(models.Model):
post_date = models.DateTimeField(default=timezone.now)
user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, related_name='user_post')
post = models.CharField(max_length=100)
I want to have an API endpoint that returns the percentage of users that have posted. Basically I want SUM(unique users who have posted) / total_users
I have been trying to play around with annotate and aggregate, but I am getting the sum of posts for each user, or the sum of users per post (which is one...). How can I get the count of unique users who posted, divide that by the total user count, and return it?
I feel like I am missing something silly but my brain has gone to mush staring at this.
class PostParticipationAPIView(generics.ListAPIView):
queryset = Post.objects.all()
serializer_class = PostSerializer
def get_queryset(self):
start_date = self.request.query_params.get('start_date')
end_date = self.request.query_params.get('end_date')
# How can I take something like this, divide it by User.objects.all().count() * 100, and assign it to something to return as the queryset?
queryset = Post.objects.filter(post_date__gte=start_date, post_date__lte=end_date).distinct('user').count()
return queryset
My goal is to end up with the endpoint like:
{
total_participation: 97.3
}
Thanks for any guidance.
BCBB
|
[
"something like this should work\n# get total user count\ntotal_users = User.objects.count()\n# get unique set of users with post\ntotal_users_who_posted = Post.objects.filter(...).distinct(\"user\").count()\n# calculate_percentage\npercentage = { \n \"total_participation\": (total_users_who_posted*100)/ total_users\n}\n# take caution of divion by zero\n\n",
"I don't think it is possible to use djangos orm to do this completely but you can use the orm to get the user counts (with posts and total):\nfrom django.db.models import BooleanField, Case, Count, When, Value\n\ncounts = (User\n .objects\n .annotate(posted=Case(When(user_post__isnull=False,\n then=Value(True)),\n default=Value(False), \n output_field=BooleanField()))\n .values('posted')\n .aggregate(posted_users=Count('pk', filter=Q(posted=True)),\n total_users=Count('pk', filter=Q(posted__isnull=False)))\n\n# This will result in a dict containing the following:\n# counts = {'posted_users': ...,\n# 'total_users': ....}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"django",
"django_orm",
"django_rest_framework"
] |
stackoverflow_0074660588_django_django_orm_django_rest_framework.txt
|
Q:
How to dynamically choose the separator used by OpenCSV application?
I have code that reads a CSV file and converts the content to Java objects using CsvToBean.
public static <T> List<T> parseInputStreamFromCsv(InputStream inputStream, Class<T> clazz) {
try (Reader reader = new BufferedReader(new InputStreamReader(inputStream))) {
CsvToBean<T> csvToBean = new CsvToBeanBuilder<T>(reader)
.withType(clazz)
.withIgnoreLeadingWhiteSpace(true)
.build();
return csvToBean.parse();
} catch (Exception ex) {
throw new ConversionFailedException("Error converting CSV");
}
}
Sometimes a user uploads a CSV using a comma as separator, while other users upload files with a semicolon as separator.
My question is: is there a way to set the separator dynamically in my CsvToBeanBuilder, so that both kinds of files (comma- and semicolon-separated) can be converted without any problem? Thanks!
A:
The following approach will work with both separator chars, ; and ,:
Parsing with dynamic separator detection
Note:
As per the OP's requirement, the method below supports the two main characters for separating row entries.
public static <T> List<T> parseFromCsvWithSeparatorDetection(
InputStream inputStream, Class<T> type, String[] columns)
throws IOException, CsvException {
final StringBuilder textBuilder = new StringBuilder();
try (Reader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8))) {
int c;
while ((c = reader.read()) != -1) {
textBuilder.append((char) c);
}
}
final String csvContent = textBuilder.toString();
final char detectedSeparator;
if(csvContent.contains(";")) {
detectedSeparator = ';'; // semicolon case
} else {
detectedSeparator = ','; // default case
}
try (Reader reader = new StringReader(csvContent)) {
ColumnPositionMappingStrategy<T> strategy = new ColumnPositionMappingStrategy<>();
strategy.setColumnMapping(columns);
strategy.setType(type);
CsvToBean<T> csvToBean = new CsvToBeanBuilder<T>(reader)
.withMappingStrategy(strategy)
.withSeparator(detectedSeparator)
.withIgnoreLeadingWhiteSpace(true)
.build();
return csvToBean.parse();
}
}
Usage example
String[] columns = new String[]{"a", "b"};
InputStream in = ... // <-- set/obtain InputStream here
try {
List<Bean> objects = CSVUtils.parseFromCsvWithSeparatorDetection(in, Bean.class, columns);
} catch (IOException | CsvException e) {
e.printStackTrace();
}
Given that the class Bean has two String attributes a and b, a no-arg constructor (and getter/setter methods).
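For completeness, a minimal Bean matching that description might look like this (the field names a and b mirror the columns array from the usage example):
public class Bean {
    private String a;
    private String b;

    public Bean() {} // no-arg constructor required by OpenCSV

    public String getA() { return a; }
    public void setA(String a) { this.a = a; }

    public String getB() { return b; }
    public void setB(String b) { this.b = b; }
}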
Demo csv data
A1;B1
A2;B2
and
A1,B1
A2,B2
I tested the above with Java 17 and OpenCSV 5.7.1; it should also work with older or more recent 5.x versions.
Caveat!
The above approach should only be used if memory for processing is not an issue at runtime. Reason: the inputStream is fully consumed and read into memory, yet only once. Nevertheless, this could be problematic in low-resource environments or with very large (and very likely with huge) CSV files containing potentially millions of rows.
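If memory is a concern, a leaner variant is possible under the assumption that the separator can already be detected from the first line: peek at the header via BufferedReader.mark()/reset() and hand the very same reader to the builder afterwards. A sketch:
private static char detectSeparator(BufferedReader reader) throws IOException {
    reader.mark(8192); // read-ahead limit; must be large enough to cover the first line
    String firstLine = reader.readLine();
    reader.reset();    // rewind so the parser still sees line one
    return (firstLine != null && firstLine.contains(";")) ? ';' : ',';
}
The returned char then goes into .withSeparator(...) exactly as above, without ever buffering the whole stream.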
|
How to dynamically choose the separator used by OpenCSV application?
|
I have code that reads a CSV file and converts the content to Java objects using CsvToBean.
public static <T> List<T> parseInputStreamFromCsv(InputStream inputStream, Class<T> clazz) {
try (Reader reader = new BufferedReader(new InputStreamReader(inputStream))) {
CsvToBean<T> csvToBean = new CsvToBeanBuilder<T>(reader)
.withType(clazz)
.withIgnoreLeadingWhiteSpace(true)
.build();
return csvToBean.parse();
} catch (Exception ex) {
throw new ConversionFailedException("Error converting CSV");
}
}
Sometimes a user uploads a CSV using a comma as separator, while other users upload files with a semicolon as separator.
My question is: is there a way to set the separator dynamically in my CsvToBeanBuilder, so that both kinds of files (comma- and semicolon-separated) can be converted without any problem? Thanks!
|
[
"The following approach will work with both separator chars, ; and ,:\nParsing with dynamic separator detection\nNote:\nAs per OP's requirement, the below method support the two main characters for separating row entries.\npublic static <T> List<T> parseFromCsvWithSeparatorDetection(\n InputStream inputStream, Class<T> type, String[] columns)\n throws IOException, CsvException {\n\n final StringBuilder textBuilder = new StringBuilder();\n try (Reader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8))) {\n int c;\n while ((c = reader.read()) != -1) {\n textBuilder.append((char) c);\n }\n }\n final String csvContent = textBuilder.toString();\n final char detectedSeparator;\n if(csvContent.contains(\";\")) {\n detectedSeparator = ';'; // semicolon case\n } else {\n detectedSeparator = ','; // default case\n }\n try (Reader reader = new StringReader(csvContent)) {\n ColumnPositionMappingStrategy<T> strategy = new ColumnPositionMappingStrategy<>();\n strategy.setColumnMapping(columns);\n strategy.setType(type);\n CsvToBean<T> csvToBean = new CsvToBeanBuilder<T>(reader)\n .withMappingStrategy(strategy)\n .withSeparator(detectedSeparator)\n .withIgnoreLeadingWhiteSpace(true)\n .build();\n return csvToBean.parse();\n } \n }\n\nUsage example\nString[] columns = new String[]{\"a\", \"b\"};\nInputStream in = ... // <-- set/obtain InputStream here\n\ntry {\n List<Bean> objects = CSVUtils.parseFromCsvWithSeparatorDetection(in, Bean.class, columns);\n} catch (IOException | CsvException e) {\n e.printStackTrace();\n}\n\nGiven that the class Bean has two String attributes a and b, a no-arg constructor (and getter/setter methods).\nDemo csv data\nA1;B1\nA2;B2\n\nand\nA1,B1\nA2,B2\n\nI tested the above with 17 and OpenCSV 5.7.1, should also work for older or more recent 5.x versions.\nCave!\nThe above approach should only be used if memory for processing is not an issue at runtime. Reason: The inputStream is fully consumed and read into memory - yet, only once. Nevertheless, this could be problematic under low resources environments or with very large (, and very likely (!) with HUGE) csv files (with potentially millions of rows.\n"
] |
[
0
] |
[] |
[] |
[
"java",
"opencsv",
"separator"
] |
stackoverflow_0074350728_java_opencsv_separator.txt
|
Q:
Concat audiofiles using ffmpeg-python
I'm trying to concat two audiofiles using ffmpeg-python.
I've got the proper result with the direct use of ffmpeg in the CLI.
The following command gave the proper result
.\ffmpeg -f concat -safe 0 -i input.txt -codec copy output.mp4
But now I'm trying to investigate whether the Python wrapper can provide the solution without running ffmpeg directly from python script using subprocess.
It's possible to trim audio files, change volumes and do many other things with ffmpeg-python. But for concatenating audio files I've failed to find a solution.
A:
import ffmpeg

input_mp3 = ffmpeg.input(path)
input_mp3_2 = ffmpeg.input(path_2)
# Optional trims; drop the two filter calls to concatenate the full files
cut_1 = input_mp3.audio.filter('atrim', start=5, end=10)
cut_2 = input_mp3_2.audio.filter('atrim', start=5, end=10)
# v=0, a=1: the concat filter emits zero video streams and one audio stream
audio_output = ffmpeg.concat(cut_1, cut_2, v=0, a=1).output('out_merger.mp3')
ffmpeg.run(audio_output)
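Note that ffmpeg.concat maps to ffmpeg's concat filter, which re-encodes the audio; the CLI command from the question uses the concat demuxer with stream copy instead. If the demuxer behavior is needed through the wrapper, input/output options can be passed as keyword arguments. A sketch, assuming the same input.txt list file as in the question:
import ffmpeg

# Mirrors: ffmpeg -f concat -safe 0 -i input.txt -codec copy output.mp4
stream = ffmpeg.input('input.txt', format='concat', safe=0).output('output.mp4', codec='copy')
ffmpeg.run(stream)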
|
Concat audiofiles using ffmpeg-python
|
I'm trying to concat two audiofiles using ffmpeg-python.
I've got the proper result with the direct use of ffmpeg in the CLI.
The following command gave the proper result
.\ffmpeg -f concat -safe 0 -i input.txt -codec copy output.mp4
But now I'm trying to investigate whether the Python wrapper can provide the solution without running ffmpeg directly from python script using subprocess.
It's possible to trim audio files, change volumes and do many other things with ffmpeg-python. But for concatenating audio files I've failed to find a solution.
|
[
"input_mp3 = ffmpeg.input(path)\ninput_mp3_2 = ffmpeg.input(path_2)\ncut_1 = input_mp3.audio.filter('atrim', start=5, end=10)\ncut_2 = input_mp3_2.audio.filter('atrim', start=5, end=10)\naudio_output = ffmpeg.concat(cut_1, cut_2, v=0, a=1).output('out_merger.mp3')\nffmpeg.run(audio_output)\n\n"
] |
[
0
] |
[] |
[] |
[
"audio",
"audio_processing",
"ffmpeg_python"
] |
stackoverflow_0074648884_audio_audio_processing_ffmpeg_python.txt
|
Q:
If possible, how to use MUI with Qwik framework?
I'm trying the Qwik framework, which looks a lot like Reactjs and uses JSX. And suddenly, I wonder if Reactjs libraries such as MUI can work with the Qwik framework.
I tried this code:
import { component$ } from "@builder.io/qwik";
import Add from "@mui/icons-material/Add";
import IconButton from "@mui/material/IconButton";
const AddToCartButton = component$(() => {
return (
<IconButton>
<Add />
</IconButton>
);
});
export default AddToCartButton;
But I got this error:
QWIK ERROR Code(25): Invalid JSXNode type. It must be either a function or a string. Found: {
'$$typeof': Symbol(react.memo),
type: {
'$$typeof': Symbol(react.forward_ref),
render: [Function: Component] { displayName: 'AddIcon', muiName: 'SvgIcon' }
},
compare: null
} Error: Code(25): Invalid JSXNode type. It must be either a function or a string. Found:
at logError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4515:58)
at logErrorAndStop (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4521:21)
at qError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4585:16)
at Proxy.jsx (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:605:23)
at AddToCartButton_component_4S0nJgnxzBU (/src/addtocartbutton_component_4s0njgnxzbu.js:11:55)
at useInvoke (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:149:30)
at E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4676:32
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async renderSSR (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:5280:9)
at async Proxy.renderToStream (E:\qwik\flower\node_modules\@builder.io\qwik\server.cjs:582:3)
at async file:///E:/qwik/flower/node_modules/@builder.io/qwik/optimizer.mjs:1776:30
QWIK ERROR Code(25): Invalid JSXNode type. It must be either a function or a string. Found: Error: Code(25): Invalid JSXNode type. It must be either a function or a string. Found:
at logError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4515:58)
at logErrorAndStop (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4521:21)
at qError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4585:16)
at Proxy.jsx (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:605:23)
at AddToCartButton_component_4S0nJgnxzBU (/src/addtocartbutton_component_4s0njgnxzbu.js:11:55)
at useInvoke (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:149:30)
at E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4676:32
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async renderSSR (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:5280:9)
at async Proxy.renderToStream (E:\qwik\flower\node_modules\@builder.io\qwik\server.cjs:582:3)
at async file:///E:/qwik/flower/node_modules/@builder.io/qwik/optimizer.mjs:1776:30
not rendered
A:
JSX in this case is the templating language of Qwik, but the underlying mechanics are different. It is made similar so that you have an easier transition from React, as stated in their docs.
Qwik is familiar for React developers and can be used to build any type of web site or application.
Qwik offers an adapter for React components that you need to install and wrap your components in.
npm i -D @builder.io/qwik-react
And then the usage should look like the example in their repo.
/** @jsxImportSource react */
import { qwikify$ } from '@builder.io/qwik-react';
import { Button } from '@mui/material';
export const App = qwikify$(() => {
return (
<>
<Button variant="contained">Hola</Button>
</>
);
});
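The qwikified component can then be rendered from a regular Qwik component. A sketch (the './app' import path is an assumption for wherever the snippet above lives; client:visible is one of the eagerness directives from the qwik-react docs):
import { component$ } from '@builder.io/qwik';
import { App } from './app'; // hypothetical path to the qwikified component above

export default component$(() => {
  // client:visible hydrates the React island once it scrolls into view
  return <App client:visible />;
});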
A:
This thread is a bit older, but maybe someone will stumble across it like I did.
I had the same issue using a UI-component library and resolved it with the following steps.
adding qwikReact into the vite.config file:
import { defineConfig } from "vite";
import { qwikVite } from "@builder.io/qwik/optimizer";
import { qwikCity } from "@builder.io/qwik-city/vite";
import { qwikReact } from "@builder.io/qwik-react";
import tsconfigPaths from "vite-tsconfig-paths";
export default defineConfig(() => {
return {
plugins: [qwikCity(), qwikVite(), qwikReact(), tsconfigPaths()],
preview: {
headers: {
"Cache-Control": "public, max-age=600",
},
},
};
});
qwikify$() must be used in a separate file, only with /** @jsxImportSource react */, as Jonathan pointed out.
Be aware that React components will not be treated the same way in Qwik. As stated in the docs, this should be a migration/testing tool for existing projects where React components are introduced in "Wide islands".
|
If possible, how to use MUI with Qwik framework?
|
I'm trying the Qwik framework, which looks a lot like Reactjs and uses JSX. And suddenly, I wonder if Reactjs libraries such as MUI can work with the Qwik framework.
I tried this code:
import { component$ } from "@builder.io/qwik";
import Add from "@mui/icons-material/Add";
import IconButton from "@mui/material/IconButton";
const AddToCartButton = component$(() => {
return (
<IconButton>
<Add />
</IconButton>
);
});
export default AddToCartButton;
But I got this error:
QWIK ERROR Code(25): Invalid JSXNode type. It must be either a function or a string. Found: {
'$$typeof': Symbol(react.memo),
type: {
'$$typeof': Symbol(react.forward_ref),
render: [Function: Component] { displayName: 'AddIcon', muiName: 'SvgIcon' }
},
compare: null
} Error: Code(25): Invalid JSXNode type. It must be either a function or a string. Found:
at logError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4515:58)
at logErrorAndStop (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4521:21)
at qError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4585:16)
at Proxy.jsx (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:605:23)
at AddToCartButton_component_4S0nJgnxzBU (/src/addtocartbutton_component_4s0njgnxzbu.js:11:55)
at useInvoke (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:149:30)
at E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4676:32
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async renderSSR (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:5280:9)
at async Proxy.renderToStream (E:\qwik\flower\node_modules\@builder.io\qwik\server.cjs:582:3)
at async file:///E:/qwik/flower/node_modules/@builder.io/qwik/optimizer.mjs:1776:30
QWIK ERROR Code(25): Invalid JSXNode type. It must be either a function or a string. Found: Error: Code(25): Invalid JSXNode type. It must be either a function or a string. Found:
at logError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4515:58)
at logErrorAndStop (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4521:21)
at qError (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4585:16)
at Proxy.jsx (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:605:23)
at AddToCartButton_component_4S0nJgnxzBU (/src/addtocartbutton_component_4s0njgnxzbu.js:11:55)
at useInvoke (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:149:30)
at E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:4676:32
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async renderSSR (E:\qwik\flower\node_modules\@builder.io\qwik\core.cjs:5280:9)
at async Proxy.renderToStream (E:\qwik\flower\node_modules\@builder.io\qwik\server.cjs:582:3)
at async file:///E:/qwik/flower/node_modules/@builder.io/qwik/optimizer.mjs:1776:30
not rendered
|
[
"JSX in this case is the templating language of Qwik but the underlyings are different. It is made similar so you have an easier transition from react as stated in their docs.\n\nQwik is familiar for React developers and can be used to build any type of web site or application.\n\nQwik offers some adapter for react components you need to install and wrap your components in.\nnpm i -D @builder.io/qwik-react\n\nAnd then the usage should look like the example in their repo.\n/** @jsxImportSource react */\n\nimport { qwikify$ } from '@builder.io/qwik-react';\nimport { Button } from '@mui/material';\n\nexport const App = qwikify$(() => {\n return (\n <>\n <Button variant=\"contained\">Hola</Button>\n </>\n );\n});\n\n",
"This thread is a bit older but maybe someone stumbles across it like me.\nI had the same issue using a UI-component library and resolved it with the following steps.\n\nadding qwikReact into the vite.config file:\nimport { defineConfig } from \"vite\";\nimport { qwikVite } from \"@builder.io/qwik/optimizer\";\nimport { qwikCity } from \"@builder.io/qwik-city/vite\";\nimport { qwikReact } from \"@builder.io/qwik-react\";\nimport tsconfigPaths from \"vite-tsconfig-paths\";\nexport default defineConfig(() => {\nreturn {\nplugins: [qwikCity(), qwikVite(), qwikReact(), tsconfigPaths()],\npreview: {\nheaders: {\n\"Cache-Control\": \"public, max-age=600\",\n},\n},\n};\n});\n\nqwikify() must be used in a seperate file only with /** @jsxImportSource react */ as Jonathan pointed out.\n\n\nBe aware that react components will not be treated the same way in Qwik. As stated in the docs it should be a migration/testing tool for existing projects where react components should be introduced in \"Wide islands\".\n"
] |
[
4,
0
] |
[] |
[] |
[
"material_ui",
"qwik",
"reactjs"
] |
stackoverflow_0073433417_material_ui_qwik_reactjs.txt
|
Q:
Determine if Two Strings Are Close
I am trying to make a program that compares the characters of word1 with word2, matching each character only once, so that both strings must contain the same characters the same number of times.
class Solution:
def closeStrings(self, word1: str, word2: str) -> bool:
word1 = [x.strip() for x in word1]
word2 = [x.strip() for x in word2]
update = False
for x in word1:
if(x in word2):
update = True
if(type(x) is str):
a = word1.index(x)
b = word2.index(x)
word1[a]=''
word2[b]=''
else:
update = False
else:
update = False
break
return update
print(Solution.closeStrings(Solution,word1='a',word2='aa'))
Input
word1 = 'a',word2 ='aa'
Expected
Output = False
Actual
Output = True
A:
print(Solution.closeStrings(Solution,word1='a',word2='aa'))
You create a class in order to be able to create an instance of it. That way you don't need to pass Solution as the self parameter.
word1 = [x.strip() for x in word1]
It looks like you expect to remove spaces. But you'll get a list of strings with empty strings for the spaces. That's not what you want. See the output of
print([x.strip() for x in "Hello world"])
Your algorithm is way too complicated.
You can simply count the occurrences of each character in word2:
class Solution:
    def closeStrings(self, word1: str, word2: str) -> bool:
        # lengths must match, otherwise word2 may contain extra characters
        if len(word1) != len(word2): return False
        for x in word1:
            if word2.count(x) != word1.count(x): return False
        return True
s = Solution()
print(s.closeStrings(word1='a',word2='aa'))
print(s.closeStrings(word1='abcb',word2='bcab'))
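The per-character counting can also be collapsed into one expression with collections.Counter, which builds both character multisets in a single pass (a minimal sketch):
from collections import Counter

def close_strings(word1: str, word2: str) -> bool:
    # Equal Counters == every character occurs equally often in both words
    return Counter(word1) == Counter(word2)

print(close_strings('a', 'aa'))       # False
print(close_strings('abcb', 'bcab'))  # True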
A:
Extending @Thomas Weller's answer (well explained by him) with some more solutions:
class Solution:
def closeStrings(self, word1: str, word2: str) -> bool:
for i in word1:
if i not in word2:
return False
for i in word2:
if i not in word1:
return False
return True
def closeStrings2(self, word1: str, word2: str) -> bool:
if len(word1) != len(word2):
return False
if set(word1) != set(word2):
return False
return True
def closeStrings3(self, word1: str, word2: str) -> bool:
if len(word1) != len(word2):
return False
if sorted(word1) != sorted(word2):
return False
return True
print(Solution().closeStrings(word1="cabbba", word2="abbccc"))
print(Solution().closeStrings2(word1="cabbba", word2="aabbss"))
print(Solution().closeStrings3(word1="cabbba", word2="aabbss"))
|
Determine if Two Strings Are Close
|
I am trying to make a program that compares the characters of word1 with word2, matching each character only once, so that both strings must contain the same characters the same number of times.
class Solution:
def closeStrings(self, word1: str, word2: str) -> bool:
word1 = [x.strip() for x in word1]
word2 = [x.strip() for x in word2]
update = False
for x in word1:
if(x in word2):
update = True
if(type(x) is str):
a = word1.index(x)
b = word2.index(x)
word1[a]=''
word2[b]=''
else:
update = False
else:
update = False
break
return update
print(Solution.closeStrings(Solution,word1='a',word2='aa'))
Input
word1 = 'a',word2 ='aa'
Expected
Output = False
Actual
Output = True
|
[
"\nprint(Solution.closeStrings(Solution,word1='a',word2='aa'))\nYou create a class in order to be able to create an instance of it. That way you don't need to pass Solution as the self parameter.\n\nword1 = [x.strip() for x in word1]\nIt looks like you expect to remove spaces. But you'll get a list of strings with empty strings for the spaces. That's not what you want. See the output of\nprint([x.strip() for x in \"Hello world\"])\n\nYour algorithm is way too complicated.\nYou can simply count the occurrences of each character in word2:\n\n\nclass Solution:\n def closeStrings(self, word1: str, word2: str) -> bool:\n for x in word1:\n if word2.count(x) != word1.count(x): return False\n return True\n\n\ns = Solution()\nprint(s.closeStrings(word1='a',word2='aa'))\nprint(s.closeStrings(word1='abcb',word2='bcab'))\n\n",
"Extending to other more solution answer by @Thomas Weller well explained by him\nclass Solution:\n def closeStrings(self, word1: str, word2: str) -> bool:\n for i in word1:\n if i not in word2:\n return False\n for i in word2:\n if i not in word1:\n return False\n return True\n\n def closeStrings2(self, word1: str, word2: str) -> bool:\n if len(word1) != len(word2):\n return False\n if set(word1) != set(word2):\n return False\n return True\n\n def closeStrings3(self, word1: str, word2: str) -> bool:\n if len(word1) != len(word2):\n return False\n if sorted(word1) != sorted(word2):\n return False\n return True\n\nprint(Solution().closeStrings(word1=\"cabbba\", word2=\"abbccc\"))\nprint(Solution().closeStrings3(word1=\"cabbba\", word2=\"aabbss\"))\nprint(Solution().closeStrings3(word1=\"cabbba\", word2=\"aabbss\"))\n\n\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"list",
"python",
"python_3.x"
] |
stackoverflow_0074660641_list_python_python_3.x.txt
|
Q:
@media query has no effect
Hey guys, I'm trying to build a responsive webpage but the media query is not working.
I've already searched for solutions and applied them, but to no avail.
This is my CSS code:
@media only screen and (max-width:600px){
body{
font-size: 89px;
}
.main{
display: flex;
flex-direction: column;
width: 100vw;
height: 100vh;
}
.main-left{
background-color: #90c3ce;
}
}
And this is my HTML code in the <head> of my HTML doc.
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="styles.css">
<title>Mubasic</title>
</head>
Not sure where the issue is; would be super grateful for help!
I added the meta viewport element in the <head> of the HTML doc, but all to no avail.
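For reference, a common cause of this symptom (and, per the self-answer recorded in this thread, the actual one here) is source order: at equal specificity the later rule wins, so the media query has to come after the base rules it overrides. A minimal sketch:
/* Base rules first */
.main {
  display: flex;
  flex-direction: row;
}

/* Override last so it wins at equal specificity below 600px */
@media only screen and (max-width: 600px) {
  .main {
    flex-direction: column;
  }
}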
|
@media query has no effect
|
Hey guys, I'm trying to build a responsive webpage but the media query is not working.
I've already searched for solutions and applied them, but to no avail.
This is my CSS code:
@media only screen and (max-width:600px){
body{
font-size: 89px;
}
.main{
display: flex;
flex-direction: column;
width: 100vw;
height: 100vh;
}
.main-left{
background-color: #90c3ce;
}
}
And this is my HTML code in the <head> of my HTML doc.
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="styles.css">
<title>Mubasic</title>
</head>
Not sure where the issue is; would be super grateful for help!
I added the meta viewport element in the <head> of the HTML doc, but all to no avail.
|
[] |
[] |
[
"Ok so the issue was my @media query was above all the other CSS over writing it.\n"
] |
[
-1
] |
[
"css",
"flexbox",
"html"
] |
stackoverflow_0074660685_css_flexbox_html.txt
|
Q:
How to find previous month and year in golang
I just found AddDate() does not always works as expected.
ex:
mayEndDate := time.Date(2021, 5, 31, 12, 00, 00, 00, time.UTC)
finalDate := endOfMay.AddDate(0, -1, 0)
here
output:
myEndDate = 2021-05-31 12:00:00 +0000 UTC
finalDate = 2021-05-01 12:00:00 +0000 UTC
I was expecting finalDate to be in April.
After reading the documentation, I found out the reason.
AddDate normalizes its result in the same way that Date does, so, for example, adding one month to October 31 yields December 1, the normalized form for November 31.
My question: how to now correctly find out the last month's date from today's date?
A:
Get the current month using Month(), then from there it’s pretty simple to get the previous one:
currentMonth := mayEndDate.Month()
previousMonth := currentMonth - 1
if currentMonth == time.January {
previousMonth = time.December
}
A:
if date is 2021-05-31 then previous month is April 2021.
package main
import (
"fmt"
"time"
)
func prevMonth(t time.Time) (int, time.Month) {
y, m, _ := t.Date()
y, m, _ = time.Date(y, m-1, 1, 0, 0, 0, 0, time.UTC).Date()
return y, m
}
func main() {
endOfMay := time.Date(2021, 5, 31, 12, 00, 00, 00, time.UTC)
fmt.Println(endOfMay)
fmt.Println(prevMonth(endOfMay))
}
https://go.dev/play/p/rP25ramRrZ3
2021-05-31 12:00:00 +0000 UTC
2021 April
|
How to find previous month and year in golang
|
I just found AddDate() does not always works as expected.
ex:
mayEndDate := time.Date(2021, 5, 31, 12, 00, 00, 00, time.UTC)
finalDate := endOfMay.AddDate(0, -1, 0)
here
output:
myEndDate = 2021-05-31 12:00:00 +0000 UTC
finalDate = 2021-05-01 12:00:00 +0000 UTC
I was expecting finalDate to be in April.
After reading the documentation, I found out the reason.
AddDate normalizes its result in the same way that Date does, so, for example, adding one month to October 31 yields December 1, the normalized form for November 31.
My question: how to now correctly find out the last month's date from today's date?
|
[
"Get the current month using Month(), then from there it’s pretty simple to get the previous one:\ncurrentMonth := mayEndDate.Month()\npreviousMonth := currentMonth - 1\nif currentMonth == time.January {\n previousMonth = time.December\n}\n\n",
"\nif date is 2021-05-31 then previous month is April 2021.\n\n\npackage main\nimport (\n \"fmt\"\n \"time\"\n)\n\nfunc prevMonth(t time.Time) (int, time.Month) {\n y, m, _ := t.Date()\n y, m, _ = time.Date(y, m-1, 1, 0, 0, 0, 0, time.UTC).Date()\n return y, m\n}\n\nfunc main() {\n endOfMay := time.Date(2021, 5, 31, 12, 00, 00, 00, time.UTC)\n fmt.Println(endOfMay)\n fmt.Println(prevMonth(endOfMay))\n}\n\nhttps://go.dev/play/p/rP25ramRrZ3\n2021-05-31 12:00:00 +0000 UTC\n2021 April\n\n"
] |
[
3,
3
] |
[
"I hope this is not against the TOS. Please let me know if it is, and I will delete the answer. I tried this question on the OpenAI Chatbot and here is it's answer. Works locally for me [https://chat.openai.com/chat].\nTo find the date of the last month from the current date, you can use the time.AddDate function and specify a negative value for the number of months to add. For example, to find the date of the last month from the current date, you can do the following:\n// Import the time package\nimport \"fmt\" // I added this, chatbot missed it.\nimport \"time\"\n\nfunc main() {\n // Get the current date and time\n now := time.Now()\n\n // Add -1 month to the current date and time\n lastMonth := now.AddDate(0, -1, 0)\n\n // Print the date of the last month\n fmt.Println(lastMonth)\n fmt.Println(lastMonth.Month()) // I added this, missed by chatbot\n}\n\nIn this code, the time.Now function is used to get the current date and time, and then the time.AddDate function is used to add -1 month to the current date and time. This results in the date of the last month.\nHere is the output of the code:\n 2022-11-02 11:44:32.864467 -0700 PDT\n November\n\nThis is the date of the last month from the current date. Note that the time component of the date is preserved from the current date, so the time in the output may be different from the time in the example.\nAlternatively, you can use the time.Month type to find the date of the last month. The time.Month type represents a calendar month, and you can use it to get the date of the previous or next month from the current date. Here is an example:\n// Import the time package\nimport \"fmt\" // I added this, missed by chatbot\nimport \"time\"\n\nfunc main() {\n // Get the current date and time\n now := time.Now()\n\n // Get the previous month from the current month\n lastMonth := now.Month() - 1\n\n // Set the month of the current date and time to the previous month\n lastMonthDate := time.Date(now.Year(), lastMonth, now.Day(), now.Hour(), now.Minute(), now.Second(), now.Nanosecond(), now.Location())\n\n // Print the date of the last month\n fmt.Println(lastMonthDate)\n fmt.Println(lastMonth) // I added this, missed by chatbot\n}\n\nIn this code, the time.Now function is used to get the current date and time, and then the time.Month type is used to get the previous month from the current month. This month is then used to create a new time.Date with the same year, day, hour, minute, second, and nanosecond as the current date and time, but with the previous month as the month. This results in the date of the last month.\nHere is the output of the code:\n 2022-11-02 11:43:36.508501 -0700 PDT\n November\n\nThis is the date of the last month from the current date. Note that the time component of the date is preserved from the current date, so the time in the output may be different from the time in the example.\nBoth of these approaches provide a way to find the date of the last month from the current date, and you can use either of them depending on your specific\n"
] |
[
-1
] |
[
"go"
] |
stackoverflow_0074645681_go.txt
|
Q:
What is the difference between page.$$(selector) and page.$$eval(selector, function) in puppeteer?
I'm trying to load page elements into an array, retrieve their innerHTML, and be able to click on them.
var grabElements = await page.$$(selector);
await grabElements[0].click();
This allows me to grab my elements and click on them but it won't display innerHTML.
var elNum = await page.$$eval(selector, (element) => {
let n = []
element.forEach(e => {
n.push(e);
})
return n;
});
await elNum[0].click();
This lets me get the innerHTML if I push the innerHTML to n. If I push just the element e and try to click it or get its innerHTML outside of the var declaration, it doesn't work. The innerHTML comes back as undefined, and if I click, I get an error saying elNum[index].click() is not a function. What am I doing wrong?
A:
The difference between page.$$eval (and other evaluate-style methods, with the exception of evaluateHandle) and page.$$ is that the evaluate family only works with serializable values. As you discovered, you can't return elements from these methods because they're not serializable (they have circular references and would be useless in Node anyway).
On the other hand, page.$$ returns Puppeteer ElementHandles that are references to DOM elements that can be manipulated from Puppeteer's API in Node rather than in the browser. This is useful for many reasons, one of which is that ElementHandle.click() issues a totally different set of operations than running the native DOMElement.click() in the browser.
From the comments:
An example of what I'm trying to get is: <div class = "class">This is the innerHTML text I want. </div>. On the page, it's text inside a clickable portion of the website. What i want to do is loop through the available options, then click on the ones that match an innerHTML I'm looking for.
Here's a simple example you should be able to extrapolate to your actual use case:
const puppeteer = require("puppeteer"); // ^19.1.0
const {setTimeout} = require("timers/promises");
const html = `
<div>
<div class="class">This is the innerHTML text I want.</div>
<div class="class">This is the innerHTML text I don't want.</div>
<div class="class">This is the innerHTML text I want.</div>
</div>
<script>
document.querySelectorAll(".class").forEach(e => {
e.addEventListener("click", () => e.textContent = "clicked");
});
</script>
`;
const target = "This is the innerHTML text I want.";
let browser;
(async () => {
browser = await puppeteer.launch();
const [page] = await browser.pages();
await page.setContent(html);
///////////////////////////////////////////
// approach 1 -- trusted Puppeteer click //
///////////////////////////////////////////
const handles = await page.$$(".class");
for (const handle of handles) {
if (target === (await handle.evaluate(el => el.textContent))) {
await handle.click();
}
}
// show that it worked and reset
console.log(await page.$eval("div", el => el.innerHTML));
await page.setContent(html);
//////////////////////////////////////////////
// approach 2 -- untrusted native DOM click //
//////////////////////////////////////////////
await page.$$eval(".class", (els, target) => {
els.forEach(el => {
if (target === el.textContent) {
el.click();
}
});
}, target);
// show that it worked and reset
console.log(await page.$eval("div", el => el.innerHTML));
await page.setContent(html);
/////////////////////////////////////////////////////////////////
// approach 3 -- selecting with XPath and using trusted clicks //
/////////////////////////////////////////////////////////////////
const xp = '//*[@class="class"][text()="This is the innerHTML text I want."]';
for (const handle of await page.$x(xp)) {
await handle.click();
}
// show that it worked and reset
console.log(await page.$eval("div", el => el.innerHTML));
await page.setContent(html);
///////////////////////////////////////////////////////////////////
// approach 4 -- selecting with XPath and using untrusted clicks //
///////////////////////////////////////////////////////////////////
await page.evaluate(xp => {
// https://stackoverflow.com/a/68216786/6243352
const $x = xp => {
const snapshot = document.evaluate(
xp, document, null,
XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null
);
return [...Array(snapshot.snapshotLength)]
.map((_, i) => snapshot.snapshotItem(i))
;
};
$x(xp).forEach(e => e.click());
}, xp);
// show that it worked
console.log(await page.$eval("div", el => el.innerHTML));
})()
.catch(err => console.error(err))
.finally(() => browser?.close());
Output in all cases is:
<div class="class">clicked</div>
<div class="class">This is the innerHTML text I don't want.</div>
<div class="class">clicked</div>
Note that === might be too strict without calling .trim() on the textContent first. You may want an .includes() substring test instead, although the risk there is that it's too permissive. Or a regex may be the right tool. In short, use whatever makes sense for your use case rather than (necessarily) my === test.
With respect to the XPath approach, this answer shows a few options for dealing with whitespace and substrings.
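Recent Puppeteer versions also ship a built-in text/ query handler that can stand in for the XPath selection, at the cost of substring rather than exact matching (a sketch; verify the matching semantics against your content before relying on it):
// approach 5 -- selecting by text content with the built-in text/ handler
for (const handle of await page.$$("text/This is the innerHTML text I want.")) {
  await handle.click();
}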
|
What is the difference between page.$$(selector) and page.$$eval(selector, function) in puppeteer?
|
I'm trying to load page elements into an array, retrieve their innerHTML, and be able to click on them.
var grabElements = await page.$$(selector);
await grabElements[0].click();
This allows me to grab my elements and click on them but it won't display innerHTML.
var elNum = await page.$$eval(selector, (element) => {
let n = []
element.forEach(e => {
n.push(e);
})
return n;
});
await elNum[0].click();
This lets me get the innerHTML if I push the innerHTML to n. If I push just the element e and try to click or get its innerHTML outside of the var declaration, it doesn't work. The innerHTML comes back as undefined, and if I click, I get an error saying elNum[index].click() is not a function. What am I doing wrong?
|
[
"The difference between page.$$eval (and other evaluate-style methods, with the exception of evaluateHandle) and page.$$ is that the evaluate family only works with serializable values. As you discovered, you can't return elements from these methods because they're not serialiable (they have circular references and would be useless in Node anyway).\nOn the other hand, page.$$ returns Puppeteer ElementHandles that are references to DOM elements that can be manipulated from Puppeteer's API in Node rather than in the browser. This is useful for many reasons, one of which is that ElementHandle.click() issues a totally different set of operations than running the native DOMElement.click() in the browser.\nFrom the comments:\n\nAn example of what I'm trying to get is: <div class = \"class\">This is the innerHTML text I want. </div>. On the page, it's text inside a clickable portion of the website. What i want to do is loop through the available options, then click on the ones that match an innerHTML I'm looking for.\n\nHere's a simple example you should be able to extrapolate to your actual use case:\nconst puppeteer = require(\"puppeteer\"); // ^19.1.0\nconst {setTimeout} = require(\"timers/promises\");\n\nconst html = `\n<div>\n <div class=\"class\">This is the innerHTML text I want.</div>\n <div class=\"class\">This is the innerHTML text I don't want.</div>\n <div class=\"class\">This is the innerHTML text I want.</div>\n</div>\n<script>\ndocument.querySelectorAll(\".class\").forEach(e => {\n e.addEventListener(\"click\", () => e.textContent = \"clicked\");\n});\n</script>\n`;\n\nconst target = \"This is the innerHTML text I want.\";\n\nlet browser;\n(async () => {\n browser = await puppeteer.launch();\n const [page] = await browser.pages();\n await page.setContent(html);\n\n ///////////////////////////////////////////\n // approach 1 -- trusted Puppeteer click //\n ///////////////////////////////////////////\n const handles = await page.$$(\".class\");\n\n for (const handle of handles) {\n if (target === (await handle.evaluate(el => el.textContent))) {\n await handle.click();\n }\n }\n\n // show that it worked and reset\n console.log(await page.$eval(\"div\", el => el.innerHTML));\n await page.setContent(html);\n\n //////////////////////////////////////////////\n // approach 2 -- untrusted native DOM click //\n //////////////////////////////////////////////\n await page.$$eval(\".class\", (els, target) => {\n els.forEach(el => {\n if (target === el.textContent) {\n el.click();\n }\n });\n }, target);\n\n // show that it worked and reset\n console.log(await page.$eval(\"div\", el => el.innerHTML));\n await page.setContent(html);\n\n /////////////////////////////////////////////////////////////////\n // approach 3 -- selecting with XPath and using trusted clicks //\n /////////////////////////////////////////////////////////////////\n const xp = '//*[@class=\"class\"][text()=\"This is the innerHTML text I want.\"]';\n\n for (const handle of await page.$x(xp)) {\n await handle.click();\n }\n\n // show that it worked and reset\n console.log(await page.$eval(\"div\", el => el.innerHTML));\n await page.setContent(html);\n\n ///////////////////////////////////////////////////////////////////\n // approach 4 -- selecting with XPath and using untrusted clicks //\n ///////////////////////////////////////////////////////////////////\n await page.evaluate(xp => {\n // https://stackoverflow.com/a/68216786/6243352\n const $x = xp => {\n const snapshot = document.evaluate(\n xp, document, null,\n 
XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null\n );\n return [...Array(snapshot.snapshotLength)]\n .map((_, i) => snapshot.snapshotItem(i))\n ;\n };\n $x(xp).forEach(e => e.click());\n }, xp);\n\n // show that it worked\n console.log(await page.$eval(\"div\", el => el.innerHTML));\n})()\n .catch(err => console.error(err))\n .finally(() => browser?.close());\n\nOutput in all cases is:\n<div class=\"class\">clicked</div>\n<div class=\"class\">This is the innerHTML text I don't want.</div>\n<div class=\"class\">clicked</div>\n\nNote that === might be too strict without calling .trim() on the textContent first. You may want an .includes() substring test instead, although the risk there is that it's too permissive. Or a regex may be the right tool. In short, use whatever makes sense for your use case rather than (necessarily) my === test.\nWith respect to the XPath approach, this answer shows a few options for dealing with whitespace and substrings.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"puppeteer",
"webautomation"
] |
stackoverflow_0069635543_javascript_puppeteer_webautomation.txt
|
Q:
How to fetch raw text from a url into node?
I am trying to fetch data from 2022 Advent Of Code into Vscode without copying and pasting the hundreds of lines of raw data into my IDE.
// async function
async function fetchData() {
let response = await fetch('https://adventofcode.com/2022/day/1/input', {
method: 'GET',
headers: {
cookie:
'session=xxx',
},
})
let data = await response.text()
return data
}
const data = await fetchData()
export default data
This code runs perfectly in the browser dev tools, but when I try to run it in node, I get an error of SyntaxError: await is only valid in async functions and the top level bodies of modules
A:
The await keyword can only be used inside an async function. To fix the error you're getting, you can either wrap the code inside an async function and then call that function, or you can use the .then() method on the fetch promise to handle the response.
Here's an example of how you could use an async function:
async function fetchData() {
let response = await fetch('https://adventofcode.com/2022/day/1/input')
let text = await response.text()
console.log(text)
}
// Call the function to fetch the data
fetchData()
And here's an example of using the .then() method:
fetch('https://adventofcode.com/2022/day/1/input')
.then(response => response.text())
.then(text => console.log(text))
Both of these approaches should allow you to fetch the data and log the response to the console.
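As an additional note, top-level await is valid in Node when the file runs as an ES module (a .mjs file, or a package whose package.json has "type": "module"), which the export default in the original code suggests was the intent. A minimal sketch, assuming Node 18+ so that fetch is built in (older versions would need a package such as node-fetch):
// fetchData.mjs -- hypothetical file name; run with: node fetchData.mjs
const response = await fetch('https://adventofcode.com/2022/day/1/input', {
  headers: { cookie: 'session=xxx' }, // Advent of Code inputs require your session cookie
})
const data = await response.text()
console.log(data)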
|
How to fetch raw text from a url into node?
|
I am trying to fetch data from 2022 Advent Of Code into Vscode without copying and pasting the hundreds of lines of raw data into my IDE.
// async function
async function fetchData() {
let response = await fetch('https://adventofcode.com/2022/day/1/input', {
method: 'GET',
headers: {
cookie:
'session=xxx',
},
})
let data = await response.text()
return data
}
const data = await fetchData()
export default data
This code runs perfectly in the browser dev tools, but when I try to run it in node, I get an error of SyntaxError: await is only valid in async functions and the top level bodies of modules
|
[
"The await keyword can only be used inside an async function. To fix the error you're getting, you can either wrap the code inside an async function and then call that function, or you can use the .then() method on the fetch promise to handle the response.\nHere's an example of how you could use an async function:\nasync function fetchData() {\n let response = await fetch('https://adventofcode.com/2022/day/1/input')\n let text = await response.text()\n console.log(text)\n}\n\n// Call the function to fetch the data\nfetchData()\n\nAnd here's an example of using the .then() method:\nfetch('https://adventofcode.com/2022/day/1/input')\n .then(response => response.text())\n .then(text => console.log(text))\n\nBoth of these approaches should allow you to fetch the data and log the response to the console.\n"
] |
[
0
] |
[] |
[] |
[
"fetch",
"node.js"
] |
stackoverflow_0074660632_fetch_node.js.txt
|
Q:
Return partial object from HQL query
I am building a simple weather app and in the api call I have query params where I can define what sensors to include and what weather data properties to include. When I have the query like SELECT w FROM WeatherData w ... the api response shows the key value pairs
But if I do a query like SELECT w.temperature, w.humidity FROM WeatherData w ... it just displays the values and not the properties.
How can I have it that the response includes the keys temperature and humidity? It's not just those, I could query to have just the temperature. But how do I include the keys in the response?
Entity
@Getter
@Setter
@Entity(name = "WeatherData")
@Table(name = "weatherdata")
public class WeatherData {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "id", updatable = false, nullable = false)
private Long id;
private float temperature;
private float humidity;
@Column(name = "wind_speed")
private float windSpeed;
@ManyToOne
@JoinColumn(name = "sensor_id")
@NotEmpty(message = "Weather data needs to be linked to a sensor")
private Sensor sensor;
@CreationTimestamp
@Column(nullable = false, updatable = false)
private LocalDateTime timestamp;
}
Controller
@Path("/weather")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class WeatherDataController {
@Inject
WeatherDataService service;
@POST
public Response postWeatherData(@NotNull @Valid WeatherData weatherData) {
WeatherData createdWeatherData = service.saveWeatherData(weatherData);
return Response
.status(Response.Status.CREATED)
.entity(createdWeatherData)
.build();
}
@GET
@Path("/{weatherDataId}")
public Response getSpecificWeatherData(@PathParam("weatherDataId") Long weatherDataId) {
WeatherData data = service.getWeatherDataById(weatherDataId);
return Response
.ok(data)
.build();
}
@GET
public Response getWeatherData(
@QueryParam("sensor") List<String> sensorIds,
@QueryParam("metric") List<String> metric,
@QueryParam("statistic") String statistic,
@QueryParam("dateStart") String dateStart,
@QueryParam("dateEnd") String dateEnd
) throws Exception {
ObjectNode response = service.getWeatherData(sensorIds, metric, Statistics.valueOf(statistic.toUpperCase()), dateStart, dateEnd);
return Response
.ok(response)
.build();
}
}
A:
If you want the full set of keys in the response, simply select the whole entity instead of individual columns.
If you only need certain key/value pairs returned, adjust your API endpoint accordingly, for example by mapping the selected columns onto a dedicated result type (a DTO) that your endpoint can return, so the property names survive serialization.
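A minimal sketch of that DTO idea, assuming a hand-written WeatherDataDto class (the class name, package and fields here are hypothetical, not from the original code):
public class WeatherDataDto {
    private final float temperature;
    private final float humidity;

    public WeatherDataDto(float temperature, float humidity) {
        this.temperature = temperature;
        this.humidity = humidity;
    }

    // the getters give the JSON serializer the property names to use as keys
    public float getTemperature() { return temperature; }
    public float getHumidity() { return humidity; }
}
The query then uses a JPQL constructor expression, which requires the fully qualified class name:
SELECT NEW com.example.dto.WeatherDataDto(w.temperature, w.humidity) FROM WeatherData w
Because the endpoint now serializes a real object instead of an Object[] row, the temperature and humidity keys show up in the response.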
|
Return partial object from HQL query
|
I am building a simple weather app and in the api call I have query params where I can define what sensors to include and what weather data properties to include. When I have the query like SELECT w FROM WeatherData w ... the api response shows the key value pairs
But if I do a query like SELECT w.temperature, w.humidity FROM WeatherData w ... it just displays the values and not the properties.
How can I have it that the response includes the keys temperature and humidity? It's not just those, I could query to have just the temperature. But how do I include the keys in the response?
Entity
@Getter
@Setter
@Entity(name = "WeatherData")
@Table(name = "weatherdata")
public class WeatherData {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "id", updatable = false, nullable = false)
private Long id;
private float temperature;
private float humidity;
@Column(name = "wind_speed")
private float windSpeed;
@ManyToOne
@JoinColumn(name = "sensor_id")
@NotEmpty(message = "Weather data needs to be linked to a sensor")
private Sensor sensor;
@CreationTimestamp
@Column(nullable = false, updatable = false)
private LocalDateTime timestamp;
}
Controller
@Path("/weather")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class WeatherDataController {
@Inject
WeatherDataService service;
@POST
public Response postWeatherData(@NotNull @Valid WeatherData weatherData) {
WeatherData createdWeatherData = service.saveWeatherData(weatherData);
return Response
.status(Response.Status.CREATED)
.entity(createdWeatherData)
.build();
}
@GET
@Path("/{weatherDataId}")
public Response getSpecificWeatherData(@PathParam("weatherDataId") Long weatherDataId) {
WeatherData data = service.getWeatherDataById(weatherDataId);
return Response
.ok(data)
.build();
}
@GET
public Response getWeatherData(
@QueryParam("sensor") List<String> sensorIds,
@QueryParam("metric") List<String> metric,
@QueryParam("statistic") String statistic,
@QueryParam("dateStart") String dateStart,
@QueryParam("dateEnd") String dateEnd
) throws Exception {
ObjectNode response = service.getWeatherData(sensorIds, metric, Statistics.valueOf(statistic.toUpperCase()), dateStart, dateEnd);
return Response
.ok(response)
.build();
}
}
|
[
"Just do not select certain values if you want to see full datasets.\nIf you need certain Key Value Sets returned you should probably edit your API Endpoint accordingly. Maybe you can build a new Entity Type that your Endpoint can return.\n"
] |
[
0
] |
[] |
[] |
[
"hql",
"java",
"rest"
] |
stackoverflow_0074660493_hql_java_rest.txt
|
Q:
Why does one function work, but the second with other variables doesn't?
Question:
I have two functions in my code below. They are supposed to pick a random out of an array for var amount times. Then delete that random out of the array.
The first function for 2 random numbers works, but the second one, for lowercase letters, doesn't.
I tried:
I tried looking at both functions but they look the same to me, only different variables...
(This is a little part of a code that creates an random password.)
// Needed vars
var numbersN = [
'0',
'1',
'2',
'3',
'4',
'5',
'6',
'7',
'8',
'9'
];
var lowercaseN = [
'a',
'b',
'c',
'd',
'e',
'f',
'g',
'h',
'i',
'j',
'k',
'l',
'm',
'n',
'o',
'p',
'q',
'r',
's',
't',
'u',
'v',
'w',
'x',
'y',
'z'
];
var allN = numbersN.concat(lowercaseN);
var password;
var extra_safe = true;
// function randomNumbers (Works!)
var amountNumbers = 2;
function randomNumbers (){
for (var a = 0; a < amountNumbers; a = a + 1){
var random = pickRandom(numbersN);
password = password + random;
if (extra_safe === true){
console.log(numbersN);
delete numbersN[random];
console.log(numbersN);
delete allN[random];
}
}
}
// function randomLowercase (Doens't work..)
var amountLowercase = 2;
function randomLowercase (){
for (var b = 0; b < amountLowercase; b = b + 1){
var random = pickRandom(lowercaseN);
password = password + random;
if (extra_safe === true){
console.log(lowercaseN);
delete lowercaseN[random];
console.log(lowercaseN);
delete allN[random];
}
}
}
// Runs function + TEST: prints numbersN before and after the working delete
randomNumbers();
// Runs function + TEST: prints lowercaseN before and after the failed delete
randomLowercase();
// TEST: prints allN, this should show all numbers and lowercase letters except the deleted randoms (It only works for the numbers.)
console.log(allN);
// prints the random numbers + lowercase letters
console.log(password.replace(/['undefined']/g, ''));
Thanks!
Adriaan V
Please note, I'm a beginner and English is not my native language. I'm sorry for the spelling mistakes and my simple code. Please leave suggestions for better tags etc.
A:
The issue with the randomLowercase function is how the delete operator is used. delete removes a property by key, and delete numbersN[random] passes the picked value as that key. It only appears to work in randomNumbers because the digit strings '0' through '9' happen to coincide with valid array indices; in randomLowercase the picked value is a letter such as 'a', which is not an index of lowercaseN or allN, so nothing is removed. Even where it does match, delete leaves an empty slot in the array instead of shrinking it.
Instead, determine the index of the selected element and use the splice() method to remove the element at that index in the array.
MDN page on the delete operator here.
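A minimal sketch of that fix (removeRandom is a hypothetical helper, shown because pickRandom itself is not in the question):
function removeRandom(arr) {
  // pick a random index, then remove that element in place
  var i = Math.floor(Math.random() * arr.length);
  return arr.splice(i, 1)[0]; // splice shrinks the array, leaving no empty slot
}

var random = removeRandom(lowercaseN);
password = password + random;
allN.splice(allN.indexOf(random), 1); // drop the same value from the combined array
Because splice() keeps the arrays dense, the later console.log(allN) and any length-based loops behave as expected.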
|
Why does one function work, but the second with other variables doesn't?
|
Question:
I have two functions in my code below. They are supposed to pick a random out of an array for var amount times. Then delete that random out of the array.
The first function for 2 random numbers works, but the second one, for lowercase letters, doesn't.
I tried:
I tried looking at both functions but they look the same to me, only different variables...
(This is a little part of a code that creates an random password.)
// Needed vars
var numbersN = [
'0',
'1',
'2',
'3',
'4',
'5',
'6',
'7',
'8',
'9'
];
var lowercaseN = [
'a',
'b',
'c',
'd',
'e',
'f',
'g',
'h',
'i',
'j',
'k',
'l',
'm',
'n',
'o',
'p',
'q',
'r',
's',
't',
'u',
'v',
'w',
'x',
'y',
'z'
];
var allN = numbersN.concat(lowercaseN);
var password;
var extra_safe = true;
// function randomNumbers (Works!)
var amountNumbers = 2;
function randomNumbers (){
for (var a = 0; a < amountNumbers; a = a + 1){
var random = pickRandom(numbersN);
password = password + random;
if (extra_safe === true){
console.log(numbersN);
delete numbersN[random];
console.log(numbersN);
delete allN[random];
}
}
}
// function randomLowercase (Doens't work..)
var amountLowercase = 2;
function randomLowercase (){
for (var b = 0; b < amountLowercase; b = b + 1){
var random = pickRandom(lowercaseN);
password = password + random;
if (extra_safe === true){
console.log(lowercaseN);
delete lowercaseN[random];
console.log(lowercaseN);
delete allN[random];
}
}
}
// Runs function + TEST: prints numbersN before and after the working delete
randomNumbers();
// Runs function + TEST: prints lowercaseN before and after the failed delete
randomLowercase();
// TEST: prints allN, this should show all numbers and lowercase letters except the deleted randoms (It only works for the numbers.)
console.log(allN);
// prints the random numbers + lowercase letters
console.log(password.replace(/['undefined']/g, ''));
Thanks!
Adriaan V
Please note, I'm a beginner and English is not my native language. I'm sorry for the spelling mistakes and my simple code. Please leave suggestions for better tags etc.
|
[
"The issue with the randomLowercase function is the use of the delete keyword. The delete keyword is used to delete properties from objects. Since the lowercaseN and allN variables are array of strings, not objects, the delete keyword does not work as intended.\nYou could then determine the index of the selected letter, and use the splice() method to remove the element at that index in the array.\nMDN page on the delete operator here.\n"
] |
[
2
] |
[] |
[] |
[
"function",
"javascript"
] |
stackoverflow_0074660744_function_javascript.txt
|
Q:
Find Sum and Average function not returning actual value of Average
Here's my function that returns the Sum of all pair numbers in an array, and the Average of Odd numbers. Although it outputs the Average as zero for some reason.
#include <stdio.h>
int MoySom(int Tab[],float* Moyenne,int Length)
{
int S=0,C=0;
*Moyenne=0;
for(int i=0;i<Length;++i)
{
if(Tab[i] % 2 == 0)
{
S=S+Tab[i];
}
else if(Tab[i] % 2 != 0)
{
*Moyenne+=Tab[i];
++C;
}
}
*Moyenne=*Moyenne/C;
return S;
}
void main()
{
int Length,Tab[Length];
float Moyenne;
printf("Entrer la longeur de tableau: ");
scanf("%d",&Length);
for(int i=0;i<Length;++i)
{
printf("Entrer l'element %d: ",i);
scanf("%d",&Tab[i]);
}
printf("Somme est:%d\nMoyenne est: %.2f",
MoySom(Tab,&Moyenne,Length), Moyenne);
}
A:
At least these problems:
Wrong declaration order
int Length,Tab[Length]; is junk. The declaration of Tab[Length] is happening, yet the value of Length is indeterminate.
Something more like the below. Declare Tab[] after Length is assigned.
int Length;
float Moyenne;
printf("Entrer la longeur de tableau: ");
scanf("%d",&Length);
int Tab[Length];
Better code checks the return value of scanf()
int cnt = scanf("%d",&Length);
if (cnt != 1 || Length <= 0) {
Report_Error_and_exit();
}
int Tab[Length];
Parameter evaluation order assumed
Calculate Moyenne first, then use it.
//printf("Somme est:%d\nMoyenne est: %.2f",
// MoySom(Tab,&Moyenne,Length), Moyenne);
printf("Somme est:%d\n", MoySom(Tab,&Moyenne,Length));
printf("Moyenne est: %.2f", Moyenne);
Potential /0
*Moyenne=*Moyenne/C; may attempt divide by zero. Better code would prevent that.
Unneeded test
if(Tab[i] % 2 == 0) {
S=S+Tab[i];
} else if(Tab[i] % 2 != 0) {
*Moyenne+=Tab[i];
simplifies to
if(Tab[i] % 2 == 0) {
S=S+Tab[i];
} else {
*Moyenne+=Tab[i];
A:
There are several issues with your code.
First, you are defining the Tab array in main() using the value of Length, which is not initialized yet, so the variable-length array gets an indeterminate size and accessing it is undefined behavior. Instead, you should either declare the array after reading Length, or dynamically allocate the array using malloc().
Second, you are not checking whether C is zero before dividing *Moyenne by C to calculate the average. If C is zero, then this will result in a division by zero, which is undefined behavior. You should check for this condition and handle it properly.
Here is how you can fix these issues and properly calculate the sum of even numbers and the average of odd numbers in the array:
#include <stdio.h>
#include <stdlib.h>
int MoySom(int* Tab, float* Moyenne, int Length)
{
int S = 0, C = 0;
*Moyenne = 0;
for (int i = 0; i < Length; ++i)
{
if (Tab[i] % 2 == 0)
{
S = S + Tab[i];
}
else if (Tab[i] % 2 != 0)
{
*Moyenne += Tab[i];
++C;
}
}
if (C > 0)
{
*Moyenne = *Moyenne / C;
}
return S;
}
void main()
{
int Length;
float Moyenne;
printf("Entrer la longeur de tableau: ");
scanf("%d", &Length);
// Dynamically allocate the array using malloc()
int* Tab = malloc(Length * sizeof(int));
if (Tab == NULL)
{
// Handle allocation failure
printf("Erreur d'allocation de memoire!\n");
return;
}
    for (int i = 0; i < Length; ++i)
    {
        printf("Entrer l'element %d: ", i);
        scanf("%d", &Tab[i]);
    }

    // Compute the sum first so Moyenne is set before it is printed
    printf("Somme est:%d\n", MoySom(Tab, &Moyenne, Length));
    printf("Moyenne est: %.2f\n", Moyenne);

    free(Tab);
}
|
Find Sum and Average function not returning actual value of Average
|
Here's my function that returns the Sum of all pair numbers in an array, and the Average of Odd numbers. Although it outputs the Average as zero for some reason.
#include <stdio.h>
int MoySom(int Tab[],float* Moyenne,int Length)
{
int S=0,C=0;
*Moyenne=0;
for(int i=0;i<Length;++i)
{
if(Tab[i] % 2 == 0)
{
S=S+Tab[i];
}
else if(Tab[i] % 2 != 0)
{
*Moyenne+=Tab[i];
++C;
}
}
*Moyenne=*Moyenne/C;
return S;
}
void main()
{
int Length,Tab[Length];
float Moyenne;
printf("Entrer la longeur de tableau: ");
scanf("%d",&Length);
for(int i=0;i<Length;++i)
{
printf("Entrer l'element %d: ",i);
scanf("%d",&Tab[i]);
}
printf("Somme est:%d\nMoyenne est: %.2f",
MoySom(Tab,&Moyenne,Length), Moyenne);
}
|
[
"At least these problems:\nWrong declaration order\nint Length,Tab[Length]; is junk. The declaration of Tab[Length] is happening, yet the value of Length is indeterminate.\nSomething more like the below. Declare Tab[] after Length is assigned.\n int Length;\n float Moyenne;\n printf(\"Entrer la longeur de tableau: \");\n scanf(\"%d\",&Length);\n int Tab[Length];\n\nBetter code checks the return value of scanf()\n int cnt = scanf(\"%d\",&Length);\n if (cnt != 1 || Length <= 0) {\n Report_Error_and_exit();\n } \n int Tab[Length];\n\nParameter evaluation order assumed\nCalculate Moyenne, then used use it.\n//printf(\"Somme est:%d\\nMoyenne est: %.2f\",\n// MoySom(Tab,&Moyenne,Length), Moyenne);\n\nprintf(\"Somme est:%d\\n\", MoySom(Tab,&Moyenne,Length));\nprintf(\"Moyenne est: %.2f\", Moyenne);\n\nPotential /0\n*Moyenne=*Moyenne/C; may attempt divide by zero. Better code would prevent that.\nUnneeded test\n if(Tab[i] % 2 == 0) {\n S=S+Tab[i];\n } else if(Tab[i] % 2 != 0) {\n *Moyenne+=Tab[i];\n\nsimplifies to\n if(Tab[i] % 2 == 0) {\n S=S+Tab[i];\n } else {\n *Moyenne+=Tab[i];\n\n",
"There are several issues with your code.\nFirst, you are defining the Tab array in main() using the value of Length, which is not initialized yet. This is not allowed in C, because array sizes must be constant expressions. Instead, you should either use a fixed-size array or dynamically allocate the array using malloc().\nSecond, you are not checking whether C is zero before dividing *Moyenne by C to calculate the average. If C is zero, then this will result in a division by zero, which is undefined behavior. You should check for this condition and handle it properly.\nHere is how you can fix these issues and properly calculate the sum of even numbers and the average of odd numbers in the array:\n#include <stdio.h>\n#include <stdlib.h>\n\nint MoySom(int* Tab, float* Moyenne, int Length)\n{\n int S = 0, C = 0;\n *Moyenne = 0;\n for (int i = 0; i < Length; ++i)\n {\n if (Tab[i] % 2 == 0)\n {\n S = S + Tab[i];\n }\n else if (Tab[i] % 2 != 0)\n {\n *Moyenne += Tab[i];\n ++C;\n }\n }\n if (C > 0)\n {\n *Moyenne = *Moyenne / C;\n }\n return S;\n}\n\nvoid main()\n{\n int Length;\n float Moyenne;\n printf(\"Entrer la longeur de tableau: \");\n scanf(\"%d\", &Length);\n\n // Dynamically allocate the array using malloc()\n int* Tab = malloc(Length * sizeof(int));\n if (Tab == NULL)\n {\n // Handle allocation failure\n printf(\"Erreur d'allocation de memoire!\\n\");\n return;\n }\n\n for (int i = 0; i < Length; ++i)\n {\n printf(\"Entrer l'element %d: \", i);\n scanf(\"%d\", &Tab[i]);\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"average",
"c",
"sum"
] |
stackoverflow_0074660582_average_c_sum.txt
|
Q:
SQLZOO- using GROUPBY to find the largest country in a continent; is this possible?
I'm working on a practice problem from SQLZOO, and am not sure why the solution I'm trying doesn't work as it makes sense to me.
This is the format of the table::
-------------------------------------------------------------
| name continent area population gdp |
|-------------------------------------------------------------|
| Afghanistan Asia 652230 25500100 20343000000 |
| . |
| . |
| . |
| |
-------------------------------------------------------------
The question is the following:
Find the largest country (by area) in each continent, show the continent, the name and the area.
Here is the way I was thinking to solve it:
SELECT continent, name, area
FROM world
WHERE name IN (SELECT continent, name, MAX(area)
FROM world
GROUP BY continent);
I know this doesn't work, but why not? It seems like the nested SELECT statement is finding the country with the MAX area per continent, is it not?
The actual solution for this is something like follows:
SELECT continent, name, area
FROM world x
WHERE area >= ALL
(SELECT area
FROM world y
WHERE y.continent=x.continent
AND area>0)
But this seems like a complicated way of coming up with it; is this the way that makes the most sense? Any ideas are appreciated
Thank you in advance!!
A:
Your original query is invalid because a subquery used with IN must return a single column, while it selects three (continent, name and MAX(area)). At a quick glimpse, this rewritten query seems to work:
SELECT continent, name, area
FROM world
WHERE area IN (SELECT MAX(area)
FROM world
GROUP BY continent);
Demo 1
It seems to work considering the current data; however, issues would arise once new records are added, such as in the demo below: a country in a different continent that happens to share one of those maximum areas would match as well. Rather than the above, prefer this one:
SELECT w1.continent, name, w1.area
FROM world AS w1
JOIN (SELECT continent, MAX(area) AS area
FROM world
GROUP BY continent) AS w2
ON w1.continent = w2.continent
AND w1.area = w2.area
Demo 2
A:
select A.continent, W.name, A.area
from
(select continent, max(area) as area from world group by continent)A, world W
where
A.continent = W.continent
and
A.area = W.area
A:
select continent, name, area
from
(select continent, name, area, rank() over(partition by continent order by area desc) as r1
from world) a
where a.r1 = 1
A:
Please try this query,
SELECT a.continent, name, a.area
FROM
(
SELECT continent, max(area) as area
FROM world
GROUP BY continent
ORDER BY area desc
) a
join world b on a.area = b.area
A:
I found this query useful.
SELECT continent,name,area
FROM world x
WHERE area >= (SELECT max(area)FROM world y WHERE y.continent = x.continent and area>=0)
We are fetching the country details from the world table where the area is greater than or equal to the maximum area among countries in the same continent.
A:
select continent, name, area
from (select *, rank() over(partition by continent order by area desc) r from world) s
where r=1
A:
SELECT QWE.CONTINENT, WORLD.NAME, QWE.AREA
FROM (SELECT CONTINENT, MAX(AREA) AS AREA FROM WORLD GROUP BY CONTINENT) QWE
JOIN WORLD ON QWE.AREA=WORLD.AREA
for an interesting solution
A:
By using the analytical function rank() we can do this; note that window functions require MySQL 8.0 or later.
Here is the query:
select continent,name,area
from
(
select *,rank() over(partition by continent order by area desc) as rn
from world
) a
where rn=1
order by area desc
|
SQLZOO- using GROUPBY to find the largest country in a continent; is this possible?
|
I'm working on a practice problem from SQLZOO, and am not sure why the solution I'm trying doesn't work as it makes sense to me.
This is the format of the table::
-------------------------------------------------------------
| name continent area population gdp |
|-------------------------------------------------------------|
| Afghanistan Asia 652230 25500100 20343000000 |
| . |
| . |
| . |
| |
-------------------------------------------------------------
The question is the following:
Find the largest country (by area) in each continent, show the continent, the name and the area.
Here is the way I was thinking to solve it:
SELECT continent, name, area
FROM world
WHERE name IN (SELECT continent, name, MAX(area)
FROM world
GROUP BY continent);
I know this doesn't work, but why not? It seems like the nested SELECT statement is finding the country with the MAX area per continent, is it not?
The actual solution for this is something like follows:
SELECT continent, name, area
FROM world x
WHERE area >= ALL
(SELECT area
FROM world y
WHERE y.continent=x.continent
AND area>0)
But this seems like a complicated way of coming up with it;; is this way makes the most sense? Any ideas are appreciated
Thank you in advance!!
|
[
"While at a quick glimpse this query seems works\nSELECT continent, name, area \n FROM world\n WHERE area IN (SELECT MAX(area) \n FROM world \n GROUP BY continent);\n\nDemo 1\nconsidering the current data, some issues would raise while some other new records added such as in the demo below. Rather than the above prefer this one :\nSELECT w1.continent, name, w1.area \n FROM world AS w1\n JOIN (SELECT continent, MAX(area) AS area\n FROM world \n GROUP BY continent) AS w2\n ON w1.continent = w2.continent\n AND w1.area = w2.area\n\nDemo 2\n",
"select A.continent, W.name, A.area\nfrom\n(select continent, max(area) as area from world group by continent)A, world W\nwhere\nA.continent = W.continent\nand\nA.area = W.area\n\n",
"select continent, name, area\nfrom\n(select continent, name, area, rank() over(partition by continent order by area desc) as r1\nfrom world) a\nwhere a.r1 = 1\n\n",
"Please try this query,\nSELECT a.continent, name, a.area\nFROM\n (\n SELECT continent, max(area) as area\n FROM world \n GROUP BY continent\n ORDER BY area desc\n ) a \n join world b on a.area = b.area\n\n",
"I found this query useful.\nSELECT continent,name,area\nFROM world x\nWHERE area >= (SELECT max(area)FROM world y WHERE y.continent = x.continent and area>=0)\n\nWe are fetching the country details from world table where the area is greater than or equal to the area of all countries where the continent is the same .\n",
"select continent, name, area \nfrom (select *, rank() over(partition by continent order by area desc) r from world) s \nwhere r=1\n\n",
"SELECT QWE.CONTINENT, WORLD.NAME, QWE.AREA \nFROM (SELECT CONTINENT, MAX(AREA) AS AREA FROM WORLD GROUP BY CONTINENT) QWE \nJOIN WORLD ON QWE.AREA=WORLD.AREA\n\nfor an interesting solution\n",
"By using analytical function rank we can do this :\nHere is :\nselect continent,name,area\nfrom\n(\nselect *,rank() over(partition by continent order by area desc) as rn\nfrom world\n) a\nwhere rn=1\norder by area desc\n"
] |
[
6,
1,
0,
0,
0,
0,
0,
0
] |
[
"//select name, continent,population from world x where 25000000>= all (select population from world y where y.continent=x.continent)//\nI tried this way. It is giving the answer though not a standard procedure may be!\n",
"select continent, name, area \nfrom world x \nwhere area = All (select MAX(area) from world where x.continent = continent)\n\n"
] |
[
-1,
-3
] |
[
"greatest_n_per_group",
"mysql",
"sql"
] |
stackoverflow_0050517185_greatest_n_per_group_mysql_sql.txt
|
Q:
Network failure during git fetch command
I'm trying to investigate what happens if the network would fail during a git fetch command.
I can't find any documentation that really goes into detail of the fetch command and digging into the git C source code seems a bit overwhelming. Where can I find some good detailed description what fetch really does?
I'm investigating the possibility of using git as a backup solution for binary files. If the network goes down in the middle of a fetch, will git clean up and remove the downloaded data objects? Or will they just be left in the .git folder?
A:
There are two "kinds" of transports used for both fetch and push:
the "dumb" one sends entire objects;
the "smart" one sends a thin pack.
Smart transports are generally far more efficient, so most Git transfers use the smart method.
When using the smart method, git fetch will bring over a single thin pack, and then "fatten" the thin pack to become a normal pack. This pack is then just like any other pack, and resides in .git/objects/pack/ as usual. If the connection dies before the pack is fully received, the received thin pack must be (and is) deleted and nothing remains: the next git fetch starts over, generating a new and quite possibly entirely different thin pack (it may be compressed against different base objects).
When using a dumb transport, however, git fetch could opt to store each object as soon as it is complete. Whether it does so, I have no idea.
Note that git push in particular uses what Git calls a quarantine area for new incoming objects. Objects are placed "in quarantine" until the pre-receive and update hooks have run, and are migrated to the repository's object database only if they're wanted. (This particular optimization came from the GitHub folks, who didn't want to keep multi-gigabyte objects that got rejected as being too large. GitHub already keeps every object forever, so this was obviously awful for them before the idea of quarantining.) Hence even with a dumb transport, a push that fails partway through would have to start over.
The fetch operation has no reason to bother with quarantine and presumably doesn't do so.
|
Network failure during git fetch command
|
I'm trying to investigate what happens if the network would fail during a git fetch command.
I can't find any documentation that really goes into detail of the fetch command and digging into the git C source code seems a bit overwhelming. Where can I find some good detailed description what fetch really does?
I'm investigating the possibility of using git as a backup solution for binary files. If the network goes down in the middle of a fetch, will git clean up and remove the downloaded data objects? Or will they just be left in the .git folder?
|
[
"There are two \"kinds\" of transports used for both fetch and push:\n\nthe \"dumb\" one sends entire objects;\nthe \"smart\" one sends a thin pack.\n\nSmart transports are generally far more efficient, so most Git transfers use the smart method.\nWhen using the smart method, git fetch will bring over a single thin pack, and then \"fatten\" the thin pack to become a normal pack. This pack is then just like any other pack, and resides in .git/objects/pack/ as usual. If the connection dies before the pack is fully received, the received thin pack must be (and is) deleted and nothing remains: the next git fetch starts over, generating a new and quite possibly entirely different thin pack (it may be compressed against different base objects).\nWhen using a dumb transport, however, git fetch could opt to store each object as soon as it is complete. Whether it does so, I have no idea.\nNote that git push in particular uses what Git calls a quarantine area for new incoming objects. Objects are placed \"in quarantine\" until the pre-receive and update hooks have run, and are migrated to the repository's object database only if they're wanted. (This particular optimization came from the GitHub folks, who didn't want to keep multi-gigabyte objects that got rejected as being too large. GitHub already keep every object forever, so this was obviously awful for them, before the idea of quarantining.) Hence even with a dumb transport, a push that fails partway through would have to start over.\nThe fetch operation has no reason to bother with quarantine and presumably doesn't do so.\n"
] |
[
0
] |
[] |
[] |
[
"git",
"git_fetch"
] |
stackoverflow_0074653225_git_git_fetch.txt
|
Q:
Micrometer TimedAspect doesn't intercept calls to methods annotated with @Timed
I am trying to use Micrometer to record execution time in my Java application. This is related to my other question about used @Timed annotation.
I have a class CountedObject that has the following 2 methods:
@Measured
@Timed(value = "timer1")
public void measuredFunction() {
try {
int sleepTime = new Random().nextInt(3) + 1;
Thread.sleep(sleepTime * 1000L);
} catch (InterruptedException e) {}
}
@Timed(value = "timer2")
public void timedFunction() {
try {
int sleepTime = new Random().nextInt(3) + 1;
Thread.sleep(sleepTime * 1000L);
} catch (InterruptedException e) {}
}
I have defined a custom annotation @Measured
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Measured {
}
And a MeasuredAspect to intercept calls to methods annotated with my @Measured annotation:
@Aspect
public class MeasuredAspect {
@Around("execution(* *(..)) && @annotation(Measured)")
public Object around(ProceedingJoinPoint pjp) throws Throwable {
return AppMetrics.getInstance().handle(pjp);
}
}
In my AppMetrics class I initialize an instance of micrometer's TimedAspect and in the handle(ProceedingJoinPoint pjp) method pass the ProceedingJoinPoint pjp to the TimedAspect instance.
public class AppMetrics {
private static final AppMetrics instance = new AppMetrics();
private MeterRegistry registry;
private TimedAspect timedAspect;
public static AppMetrics getInstance() {
return instance;
}
private AppMetrics() {
this.registry = new SimpleMeterRegistry();
this.timedAspect = new TimedAspect(registry);
}
public Object handle(ProceedingJoinPoint pjp) throws Throwable {
return timedAspect.timedMethod(pjp);
}
}
In my application main, I create an object of CountedObject and invoke measuredFunction() and timedFunction() then I check my registry.getMeters(); only timer1 used by the measuredFunction() [which is annotated by both @Measured and @Timed] is found, while the timer2 that should be used by timedFunction() [annotated only by @Timed] doesn't exist.
I am using eclipse with AspectJ Development Tools Plugin and my project is a Gradle project with AspectJ capability. I am using id "io.freefair.aspectj" version "5.1.1" plugin in my Gradle plugins. This is a basic java application not a Spring app.
What configurations needs to be done or what code changes are required so that micrometer TimedAspect can intercept my method calls directly [i.e timedFunction() should be timed and timer2 should be found in the registry] without the need of my custom annotation?
A:
I created an example project for you:
https://github.com/kriegaex/SO_AJ_MicrometerTimed_67803726
Quoting the read-me (sorry, but answers only containing links are frowned upon on StackOverflow):
In https://github.com/micrometer-metrics/micrometer/issues/1149 and on StackOverflow, an FAQ about Micrometer's @Timed annotation is,
why it works with Spring AOP, but not when using Micrometer as an aspect library for native AspectJ in the context of compile-time weaving (CTW),
e.g. with AspectJ Maven Plugin. It can be made to work with load-time weaving (LTW) when providing an aop.xml pointing to TimedAspect,
but in a CTW the aspect never kicks in.
The reason is that the aspect has been compiled with Javac, not with the AspectJ compiler (AJC), which is necessary to "finish" the Java class,
i.e. to enhance its byte code in order to be a full AspectJ aspect. The LTW agent does that on the fly during class-loading, but in a CTW context
you need to explicitly tell AJC to do post-compile weaving (a.k.a. binary weaving) on the Micrometer library, producing newly woven class files.
This is done by putting Micrometer on AJC's inpath in order to make sure that its class files are being transformed and written to the target
directory. The inpath in AspectJ Maven is configured via <weaveDependencies>. There are at least two ways to do this:
You can either create your own woven version of the library in a separate Maven module and then use that module instead of Micrometer.
In that case, you need to exclude the original Micrometer library in the consuming module, in order to make sure that the unwoven
class files are not on the classpath anymore and accidentally used.
The way shown here in this example project is a single-module approach, building an executable uber JAR with Maven Shade. The Micrometer class
files are not a re-usable library like in the first approach, but it is nice for demonstration purposes, because we can just run the sample
application and check its output:
$ mvn clean package
...
[INFO] --- aspectj-maven-plugin:1.12.6:compile (default) @ SO_AJ_MicrometerTimed_67803726 ---
[INFO] Showing AJC message detail for messages of types: [error, warning, fail]
[INFO] Join point 'method-execution(void de.scrum_master.app.Application.doSomething())' in Type 'de.scrum_master.app.Application' (Application.java:23) advised by around advice from 'io.micrometer.core.aop.TimedAspect' (micrometer-core-1.7.0.jar!TimedAspect.class(from TimedAspect.java))
...
[INFO] --- maven-shade-plugin:3.2.4:shade (default) @ SO_AJ_MicrometerTimed_67803726 ---
[INFO] Including org.hdrhistogram:HdrHistogram:jar:2.1.12 in the shaded jar.
[INFO] Including org.latencyutils:LatencyUtils:jar:2.0.3 in the shaded jar.
[INFO] Including org.aspectj:aspectjrt:jar:1.9.6 in the shaded jar.
[INFO] Excluding io.micrometer:micrometer-core:jar:1.7.0 from the shaded jar.
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing C:\Users\me\java-src\SO_AJ_MicrometerTimed_67803726\target\SO_AJ_MicrometerTimed_67803726-1.0-SNAPSHOT.jar with C:\Users\me\java-src\SO_AJ_MicrometerTimed_67803726\target\SO_AJ_MicrometerTimed_67803726-1.0-SNAPSHOT-shaded.jar
[INFO] Dependency-reduced POM written at: C:\Users\me\java-src\SO_AJ_MicrometerTimed_67803726\target\dependency-reduced-pom.xml
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
$ java -jar target/SO_AJ_MicrometerTimed_67803726-1.0-SNAPSHOT.jar
Juni 05, 2021 1:12:27 PM io.micrometer.core.instrument.push.PushMeterRegistry start
INFO: publishing metrics for LoggingMeterRegistry every 1m
Juni 05, 2021 1:13:00 PM io.micrometer.core.instrument.logging.LoggingMeterRegistry lambda$publish$5
INFO: method.timed{class=de.scrum_master.app.Application,exception=none,method=doSomething} throughput=0.166667/s mean=0.11842469s max=0.2146482s
Please specifically note those log lines (line breaks inserted for better readability):
Join point 'method-execution(void de.scrum_master.app.Application.doSomething())'
in Type 'de.scrum_master.app.Application' (Application.java:23)
advised by around advice from 'io.micrometer.core.aop.TimedAspect'
(micrometer-core-1.7.0.jar!TimedAspect.class(from TimedAspect.java))
The above is proof that the @Timed annotation actually causes Micrometer's TimedAspect to be woven into our application code. And here are
the measurements created by the aspect for the sample application:
method.timed
{class=de.scrum_master.app.Application,exception=none,method=doSomething}
throughput=0.166667/s mean=0.11842469s max=0.2146482s
A:
I'm not sure what you expect from this:
public Object handle(ProceedingJoinPoint pjp) throws Throwable {
return timedAspect.timedMethod(pjp);
}
If I understand this correctly, it does nothing.
There are guides that you can follow to set up AspectJ properly for your project. Once that is done, TimedAspect should work; you don't need MeasuredAspect or @Measured, just a correct AspectJ setup.
A:
Inspired by @kriegaex's Maven solution https://github.com/kriegaex/SO_AJ_MicrometerTimed_67803726, this is what I came up with for Gradle.
IMPORTANT once you have produced your jar, it replaces micrometer-core, so remember to exclude the original micrometer-core
from your dependencies. If you don't, it will be the luck of the draw
which TimedAspect and CountedAspect classes are chosen by your
runtime.
The goal is to produce a replacement jar for io.micrometer:micrometer-core. This involves compiling with ajc the original micrometer-core jar together with its transitive dependencies. The original micrometer-core contains only two @Aspects, TimedAspect and CountedAspect, so only those classes will be changed by ajc.
build.gradle
plugins {
id 'java-library'
id 'io.freefair.aspectj.post-compile-weaving' version '6.6'
}
dependencies {
implementation 'org.aspectj:aspectjrt:1.9.9.1'
inpath 'io.micrometer:micrometer-core:1.10.2'
}
jar {
exclude 'dummy'
}
sourcesJar {
exclude 'dummy'
}
// Since I'm only compiling micrometer's library code, I don't need linting
compileJava.ajc.options.compilerArgs += "-Xlint:ignore"
Gradle won't run compilation without at least one Java class to compile. To work around this, I created the following class. Yes, it does get compiled, but it is not included in the jar because it is filtered out by the jar's exclude path.
dummy.Dummy.java
package dummy;
public class Dummy {
public static void main(String[] args) {}
}
|
Micrometer TimedAspect doesn't intercept calls to methods annotated with @Timed
|
I am trying to use Micrometer to record execution time in my Java application. This is related to my other question about used @Timed annotation.
I have a class CountedObject that has the following 2 methods:
@Measured
@Timed(value = "timer1")
public void measuredFunction() {
try {
int sleepTime = new Random().nextInt(3) + 1;
Thread.sleep(sleepTime * 1000L);
} catch (InterruptedException e) {}
}
@Timed(value = "timer2")
public void timedFunction() {
try {
int sleepTime = new Random().nextInt(3) + 1;
Thread.sleep(sleepTime * 1000L);
} catch (InterruptedException e) {}
}
I have defined a custom annotation @Measured
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Measured {
}
And a MeasuredAspect to intercept calls to methods annotated with my @Measured annotation:
@Aspect
public class MeasuredAspect {
@Around("execution(* *(..)) && @annotation(Measured)")
public Object around(ProceedingJoinPoint pjp) throws Throwable {
return AppMetrics.getInstance().handle(pjp);
}
}
In my AppMetrics class I initialize an instance of micrometer's TimedAspect and in the handle(ProceedingJoinPoint pjp) method pass the ProceedingJoinPoint pjp to the TimedAspect instance.
public class AppMetrics {
private static final AppMetrics instance = new AppMetrics();
private MeterRegistry registry;
private TimedAspect timedAspect;
public static AppMetrics getInstance() {
return instance;
}
private AppMetrics() {
this.registry = new SimpleMeterRegistry();
this.timedAspect = new TimedAspect(registry);
}
public Object handle(ProceedingJoinPoint pjp) throws Throwable {
return timedAspect.timedMethod(pjp);
}
}
In my application main, I create an object of CountedObject and invoke measuredFunction() and timedFunction() then I check my registry.getMeters(); only timer1 used by the measuredFunction() [which is annotated by both @Measured and @Timed] is found, while the timer2 that should be used by timedFunction() [annotated only by @Timed] doesn't exist.
I am using eclipse with AspectJ Development Tools Plugin and my project is a Gradle project with AspectJ capability. I am using id "io.freefair.aspectj" version "5.1.1" plugin in my Gradle plugins. This is a basic java application not a Spring app.
What configurations needs to be done or what code changes are required so that micrometer TimedAspect can intercept my method calls directly [i.e timedFunction() should be timed and timer2 should be found in the registry] without the need of my custom annotation?
|
[
"I created an example project for you:\nhttps://github.com/kriegaex/SO_AJ_MicrometerTimed_67803726\nQuoting the read-me (sorry, but answers only containing links are frowned upon on StackOverflow):\n\nIn https://github.com/micrometer-metrics/micrometer/issues/1149 and on StackOverflow, an FAQ about Micrometer's @Timed annotation is,\nwhy it works with Spring AOP, but not when using Micrometer as an aspect library for native AspectJ in the context of compile-time weaving (CTW),\ne.g. with AspectJ Maven Plugin. It can be made to work with load-time weaving (LTW) when providing an aop.xml pointing to TimedAspect,\nbut in a CTW the aspect never kicks in.\nThe reason is that the aspect has been compiled with Javac, not with the AspectJ compiler (AJC), which is necessary to \"finish\" the Java class,\ni.e. to enhance its byte code in order to be a full AspectJ aspect. The LTW agent does that on the fly during class-loading, but in a CTW context\nyou need to explicitly tell AJC to do post-compile weaving (a.k.a. binary weaving) on the Micrometer library, producing newly woven class files.\nThis is done by putting Micrometer on AJC's inpath in order to make sure that its class files are being transformed and written to the target\ndirectory. The inpath in AspectJ Maven is configured via <weaveDependencies>. There are at least two ways to do this:\n\nYou can either create your own woven version of the library in a separate Maven module and then use that module instead of Micrometer.\nIn that case, you need to exclude the original Micrometer library in the consuming module, in order to make sure that the unwoven\nclass files are not on the classpath anymore and accidentally used.\n\nThe way shown here in this example project is a single-module approach, building an executable uber JAR with Maven Shade. 
The Micrometer class\nfiles are not a re-usable library like in the first approach, but it is nice for demonstration purposes, because we can just run the sample\napplication and check its output:\n\n\n$ mvn clean package\n\n...\n[INFO] --- aspectj-maven-plugin:1.12.6:compile (default) @ SO_AJ_MicrometerTimed_67803726 ---\n[INFO] Showing AJC message detail for messages of types: [error, warning, fail]\n[INFO] Join point 'method-execution(void de.scrum_master.app.Application.doSomething())' in Type 'de.scrum_master.app.Application' (Application.java:23) advised by around advice from 'io.micrometer.core.aop.TimedAspect' (micrometer-core-1.7.0.jar!TimedAspect.class(from TimedAspect.java))\n...\n[INFO] --- maven-shade-plugin:3.2.4:shade (default) @ SO_AJ_MicrometerTimed_67803726 ---\n[INFO] Including org.hdrhistogram:HdrHistogram:jar:2.1.12 in the shaded jar.\n[INFO] Including org.latencyutils:LatencyUtils:jar:2.0.3 in the shaded jar.\n[INFO] Including org.aspectj:aspectjrt:jar:1.9.6 in the shaded jar.\n[INFO] Excluding io.micrometer:micrometer-core:jar:1.7.0 from the shaded jar.\n[INFO] Replacing original artifact with shaded artifact.\n[INFO] Replacing C:\\Users\\me\\java-src\\SO_AJ_MicrometerTimed_67803726\\target\\SO_AJ_MicrometerTimed_67803726-1.0-SNAPSHOT.jar with C:\\Users\\me\\java-src\\SO_AJ_MicrometerTimed_67803726\\target\\SO_AJ_MicrometerTimed_67803726-1.0-SNAPSHOT-shaded.jar\n[INFO] Dependency-reduced POM written at: C:\\Users\\me\\java-src\\SO_AJ_MicrometerTimed_67803726\\target\\dependency-reduced-pom.xml\n[INFO] ------------------------------------------------------------------------\n[INFO] BUILD SUCCESS\n[INFO] ------------------------------------------------------------------------\n\n$ java -jar target/SO_AJ_MicrometerTimed_67803726-1.0-SNAPSHOT.jar\n\nJuni 05, 2021 1:12:27 PM io.micrometer.core.instrument.push.PushMeterRegistry start\nINFO: publishing metrics for LoggingMeterRegistry every 1m\nJuni 05, 2021 1:13:00 PM io.micrometer.core.instrument.logging.LoggingMeterRegistry lambda$publish$5\nINFO: method.timed{class=de.scrum_master.app.Application,exception=none,method=doSomething} throughput=0.166667/s mean=0.11842469s max=0.2146482s\n\nPlease specifically note those log lines (line breaks inserted for better readability):\nJoin point 'method-execution(void de.scrum_master.app.Application.doSomething())'\n in Type 'de.scrum_master.app.Application' (Application.java:23)\n advised by around advice from 'io.micrometer.core.aop.TimedAspect'\n (micrometer-core-1.7.0.jar!TimedAspect.class(from TimedAspect.java))\n\nThe above is proof that the @Timed annotation actually causes Micrometer's TimedAspect to be woven into our application code. And here are\nthe measurements created by the aspect for the sample application:\nmethod.timed\n {class=de.scrum_master.app.Application,exception=none,method=doSomething}\n throughput=0.166667/s mean=0.11842469s max=0.2146482s\n\n",
"I'm not sure what do you expect from this:\npublic Object handle(ProceedingJoinPoint pjp) throws Throwable {\n return timedAspect.timedMethod(pjp);\n}\n\nIf I understand this correctly, it does nothing.\nThere are guides that you can follow to set-up AspectJ properly for your project. After it is done TimedAspect should work, you don't need MeasuredAspect or @Measured just set up AspectJ.\n",
"Inspired by @kriegaex's Maven solution https://github.com/kriegaex/SO_AJ_MicrometerTimed_67803726, this is what I came up with for Gradle.\n\nIMPORTANT once you have produced your jar, it replaces micrometer-core, so remember to exclude the original micrometer-core\nfrom your dependencies. If you don't, it will be the luck of the draw\nwhich TimedAspect and CountedAspect classes are chosen by your\nruntime.\n\nThe goal is to produce a replacement jar for io.micrometer:micrometer-core. This involves compiling with ajc the original micrometer-core jar together with its transitive dependencies. The original micrometer-core contains only two @Aspects, TimedAspect and CountedAspect, so only those classes will changed by ajc.\nbuild.gradle\nplugins {\n id 'java-library'\n id 'io.freefair.aspectj.post-compile-weaving' version '6.6'\n}\n\ndependencies {\n implementation 'org.aspectj:aspectjrt:1.9.9.1'\n inpath 'io.micrometer:micrometer-core:1.10.2'\n}\n\njar {\n exclude 'dummy'\n}\n\nsourcesJar {\n exclude 'dummy'\n}\n\n// Since I'm only compiling micrometer's library code, I don't need linting\ncompileJava.ajc.options.compilerArgs += \"-Xlint:ignore\"\n\nGradle won't allow me to compile without at least one java class to compile. To work around this, I created the following class. Yes, this does get compiled, but does not get included in the jar because it is filtered out by the jar's exclude path.\ndummy.Dummy.java\npackage dummy;\n\npublic class Dummy {\n\n public static void main(String[] args) {}\n}\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"aop",
"aspectj",
"java",
"micrometer"
] |
stackoverflow_0067803726_aop_aspectj_java_micrometer.txt
|
Q:
Eclipse m2e fail where maven build succeeds - due to plugin execution not covered by lifecycle?
I have a project where maven build from eclipse m2e fails, but mvn clean install from the command line succeeds.
It's a multi-module project (parent and children) which defines several custom executions.
I think the problem may be the result of several plugins showing errors of type "Plugin execution not covered by lifecycle configuration".
Furthermore upon import of the project a dialog comes up called "Setup Maven plugin connectors" and shows the goals with the custom executions as having no market place entries to handle them.
I have read
How to solve "Plugin execution not covered by lifecycle configuration" for Spring Data Maven Builds
and used "ignore" on the errors in eclipse maven preferences, which makes the errors go away, but the project is apparently not built correctly. Is there a more appropriate solution?
Here are shortened poms showing an example of an uncovered goal. The parent pom defines a custom compile goal execution called compile_with_aspectj
<project xmlns=...xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>groupid</groupId>
<artifactId>parent-module</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>groupid</groupId>
<artifactId>child-module-1</artifactId>
<version>${project.version}</version>
<type>pom</type>
</dependency>
</dependencyManagement>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>com.nickwongdev</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<configuration>
<complianceLevel>11</complianceLevel>
<includes>
<include>**/*.java</include>
<include>**/*.aj</include>
</includes>
<showWeaveInfo>true</showWeaveInfo>
<forceAjcCompile>true</forceAjcCompile>
<Xlint>ignore</Xlint>
<sources/>
<weaveDirectories>
<weaveDirectory>${project.build.directory}/classes</weaveDirectory>
</weaveDirectories>
</configuration>
<executions>
<execution>
<id>compile_with_aspectj</id>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
which then produces the following error in eclipse m2e
Plugin execution not covered
by lifecycle configuration:
com.nickwongdev:aspectj-maven-plugin:1.12.6:compile (execution:
compile_with_aspectj, phase:
compile) pom.xml /child-module-1 line 7 Maven Project Build
Lifecycle Mapping Problem
where the child pom looks something like
<project xmlns=..../xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>child-module-1</artifactId>
<name>${project.groupId}:${project.artifactId}</name>
<description> </description>
<parent>
<groupId>groupid</groupId>
<artifactId>parent-module</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../parent-module</relativePath>
</parent>
<dependencies>
...
</dependencies>
<build>
<plugins>
<plugin>
<groupId>com.nickwongdev</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Does anyone know if m2e has a problem building projects with custom execution steps or with multi-module projects?
A:
Providing lifecycle metadata inside the execution node resolved a similar error in Eclipse in my case.
<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <?m2e execute?>
      <id>generate-sources</id>
    </execution>
  </executions>
</plugin>
Refer to the m2e documentation for other lifecycle mapping options.
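For completeness, the older pluginManagement-based mapping configures the same thing in one central place instead of per execution. A sketch targeting the asker's plugin (the version range and the execute action here are illustrative, not from the original answer):
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.eclipse.m2e</groupId>
      <artifactId>lifecycle-mapping</artifactId>
      <version>1.0.0</version>
      <configuration>
        <lifecycleMappingMetadata>
          <pluginExecutions>
            <pluginExecution>
              <pluginExecutionFilter>
                <groupId>com.nickwongdev</groupId>
                <artifactId>aspectj-maven-plugin</artifactId>
                <versionRange>[1.0,)</versionRange>
                <goals>
                  <goal>compile</goal>
                </goals>
              </pluginExecutionFilter>
              <action>
                <!-- Tell m2e to run this goal as part of the Eclipse build -->
                <execute>
                  <runOnIncremental>false</runOnIncremental>
                </execute>
              </action>
            </pluginExecution>
          </pluginExecutions>
        </lifecycleMappingMetadata>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>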
|
Eclipse m2e fails where Maven build succeeds - due to plugin execution not covered by lifecycle?
|
I have a project where the Maven build from Eclipse m2e fails, but mvn clean install from the command line succeeds.
It's a multi-module project (parent and children) which defines several custom executions.
I think the problem may be the result of several plugins showing errors of type "Plugin execution not covered by lifecycle configuration".
Furthermore, upon import of the project a dialog comes up called "Setup Maven plugin connectors" and shows the goals with the custom executions as having no marketplace entries to handle them.
I have read
How to solve "Plugin execution not covered by lifecycle configuration" for Spring Data Maven Builds
and used "ignore" on the errors in eclipse maven preferences, which makes the errors go away, but the project is apparently not built correctly. Is there a more appropriate solution?
Here are shortened poms showing an example of an uncovered goal. The parent pom defines a custom compile goal execution called compile_with_aspectj
<project xmlns=...xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>groupid</groupId>
<artifactId>parent-module</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>groupid</groupId>
<artifactId>child-module-1</artifactId>
<version>${project.version}</version>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>com.nickwongdev</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<configuration>
<complianceLevel>11</complianceLevel>
<includes>
<include>**/*.java</include>
<include>**/*.aj</include>
</includes>
<showWeaveInfo>true</showWeaveInfo>
<forceAjcCompile>true</forceAjcCompile>
<Xlint>ignore</Xlint>
<sources/>
<weaveDirectories>
<weaveDirectory>${project.build.directory}/classes</weaveDirectory>
</weaveDirectories>
</configuration>
<executions>
<execution>
<id>compile_with_aspectj</id>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
which then produces the following error in eclipse m2e
Plugin execution not covered
by lifecycle configuration:
com.nickwongdev:aspectj-maven-plugin:1.12.6:compile (execution:
compile_with_aspectj, phase:
compile) pom.xml /child-module-1 line 7 Maven Project Build
Lifecycle Mapping Problem
where the child pom looks something like
<project xmlns=..../xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>child-module-1</artifactId>
<name>${project.groupId}:${project.artifactId}</name>
<description> </description>
<parent>
<groupId>groupid</groupId>
<artifactId>parent-module</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../parent-module</relativePath>
</parent>
<dependencies>
...
</dependencies>
<build>
<plugins>
<plugin>
<groupId>com.nickwongdev</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Does anyone know if m2e has a problem building projects with custom execution steps or with multi-module projects?
|
[
"Providing lifecycle metadata inside the execution node resolved similar error in eclipse in my case.\n<plugin>\n<artifactId>maven-antrun-plugin</artifactId>\n<version>1.8</version>\n<executions>\n <execution>\n <?m2e execute?>\n <id>generate-sources</id>\n </execution>\n</executions>\n\nRefer this for other lifecycle mapping options.\n"
] |
[
0
] |
[] |
[] |
[
"eclipse",
"java",
"m2eclipse",
"maven",
"maven_plugin"
] |
stackoverflow_0070974117_eclipse_java_m2eclipse_maven_maven_plugin.txt
|
Q:
CSP for angular
I am implementing CSP for an Angular + ASP project. Since we have to run dynamic js files, I am trying to implement script-src with "strict-dynamic".
So far I have implemented it with a static nonce (which obviously is not good practice):
Added script-src 'strict-dynamic' 'nonce-RandomNumber' to my csp policy inside startup.cs file.
Published the project by dotnet publish -c Release -o publish
Edited my index file in publish/ClientApp/index.html by adding nonce=RandomNumber to all the bundled script files
This is working. My questions are:
How can I automate this process and publish my project?
How can I have an automated process which performs similarly but with dynamic nonces? Or an automated process which, instead of using nonces, reads the bundle hashes from the compiled Angular project and updates my policies inside the startup.cs file?
A:
I used an Angular custom builder to add nonce=RandomNumber to every script tag in the index.html file.
Then, I added a middleware which generates a dynamic nonce and sets the "Content-Security-Policy" header. What I need now is to replace RandomNumber with the generated nonce when serving index.html.
|
CSP for angular
|
I am implementing CSP for an Angular + ASP project. Since we have to run dynamic js files, I am trying to implement script-src with "strict-dynamic".
So far I have implemented it with a static nonce (which obviously is not good practice):
Added script-src 'strict-dynamic' 'nonce-RandomNumber' to my csp policy inside startup.cs file.
Published the project by dotnet publish -c Release -o publish
Edited my index file in publish/ClientApp/index.html by adding nonce=RandomNumber to all the bundled script files
This is working. My questions are:
How can I automate this process and publish my project?
How can I have an automated process which performs similarly but with dynamic nonces? Or an automated process which, instead of using nonces, reads the bundle hashes from the compiled Angular project and updates my policies inside the startup.cs file?
|
[
"I used Angular custom builder to add <nonce=RandomNumber> to every scripts in index.html file.\nThen, I added a middleware which generates a dynamic nonce and sets the \"Content-Security-Policy\". What I need now is to replace RandomNumber with the generated nonce when serving index.html.\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"asp.net",
"content_security_policy"
] |
stackoverflow_0074617409_angular_asp.net_content_security_policy.txt
|
Q:
Error while working on the site in Django
This is a continuation of the previous question. When I continued working on the site and tried to test it with "python manage.py runserver" in the C:\mysite\site\miniproject directory, the following error popped up:
C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models.
new_class._meta.apps.register_model(new_class._meta.app_label, new_class)
C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models.
new_class._meta.apps.register_model(new_class._meta.app_label, new_class)
Watching for file changes with StatReloader
Performing system checks...
Exception in thread django-main-thread:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 17, in include
urlconf_module, app_name = arg
ValueError: too many values to unpack (expected 2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Program Files\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Python36\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\django\core\management\commands\runserver.py", line 118, in inner_run
self.check(display_num_errors=True)
File "C:\Program Files\Python36\lib\site-packages\django\core\management\base.py", line 423, in check
databases=databases,
File "C:\Program Files\Python36\lib\site-packages\django\core\checks\registry.py", line 76, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 13, in check_url_config
return check_resolver(resolver)
File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 23, in check_resolver
return check_method()
File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 416, in check
for pattern in self.url_patterns:
File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 602, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 595, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Program Files\Python36\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "C:\mysite\site\miniproject\miniproject\urls.py", line 20, in <module>
url(r'^admin/', include(admin.site.urls)),
File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 27, in include
'provide the namespace argument to include() instead.' % len(arg)
django.core.exceptions.ImproperlyConfigured: Passing a 3-tuple to include() is not supported. Pass a 2-tuple containing the list of patterns and app_name, and provide the namespace argument to include() instead.
Here is a link to the chapter I was working through: https://pocoz.gitbooks.io/django-v-primerah/content/sozdanie-shablonov-dlia-view.html. Most likely I made a mistake somewhere. Next, I will show you the contents of the files:
base.html:
{% load staticfiles %}
<!DOCTYPE html>
<html>
<head>
<title>{% block title %}{% endblock %}</title>
<link href="{% static "css/blog.css" %}" rel="stylesheet">
</head>
<body>
<div id="content">
{% block content %}
{%endblock%}
</div>
<div id="sidebar">
<h2>My blog</h2>
<p>This is my blog.</p>
</div>
</body>
</html>
list.html:
{% extends "blog/base.html" %}
{% block title %}My Blog{% endblock %}
{% block content %}
<h1>My Blog</h1>
{% for post in posts %}
<h2>
<a href="{{ post.get_absolute_url }}">{{ post.title }}</a>
</h2>
<p class="date">
Published {{ post.publish }} by {{ post.author }}
</p>
{{ post.body|truncatewords:30|linebreaks }}
{% endfor %}
{%endblock%}
detail.html:
{% extends "blog/base.html" %}
{% block title %}{{ post.title }}{% endblock %}
{% block content %}
<h1>{{post.title}}</h1>
<p class="date">
Published {{ post.publish }} by {{ post.author }}
</p>
{{ post.body|linebreaks}}
{%endblock%}
C:\mysite\site\miniproject\blog\views.py:
from django.shortcuts import render, get_object_or_404
from .models import Post
def post_list(request):
posts = Post.published.all()
return render(request, 'blog/post/list.html', {'posts': posts})
def post_detail(request, year, month, day, post):
post = get_object_or_404(Post, slug=post,
status='published',
publish__year=year,
publish__month=month,
publish__day=day)
return render(request,'blog/post/detail.html', {'post': post})
# Create your views here.
C:\mysite\site\miniproject\blog\urls.py:
from django.conf.urls import url
from . import views
urlpatterns = [
# post views
url(r'^$', views.post_list, name='post_list'),
url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/'\
r'(?P<post>[-\w]+)/$',
views.post_detail,
name='post_detail'),
]
C:\mysite\site\miniproject\miniproject\urls.py:
"""miniproject URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
url(r'^blog/', include('blog.urls',
namespace='blog',
app_name='blog')),
]
C:\mysite\site\miniproject\blog\models.py:
from django.db import models
from django.utils import timezone
from django.contrib.auth.models import User
from django.shortcuts import reverse
class Post(models.Model):
def get_absolute_url(self):
return reverse('blog:post_detail',
args=[self.publish.year,
self.publish.strftime('%m'),
self.publish.strftime('%d'),
self.slug])
class Post(models.Model):
STATUS_CHOICES = (
('draft', 'Draft'),
('published', 'Published'),
)
title = models.CharField(max_length=250)
slug = models.SlugField(max_length=250, unique_for_date='publish')
author = models.ForeignKey(User, on_delete=models.CASCADE, related_name='blog_posts')
body = models.TextField()
publish = models.DateTimeField(default=timezone.now)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='draft')
class Meta:
ordering = ('-publish',)
def __str__(self):
return self.title
# Create your models here.
I updated the Python libraries, carefully checked everything, and read the Django documentation, but nothing helped. Maybe I inserted the Python code incorrectly.
A:
The error lies in your urls file
include() does not accept app_name as a keyword argument; it has to be passed as part of a 2-tuple
url(r'^blog/', include('blog.urls',
namespace='blog',
app_name='blog')),
This should fix the issue
url(r'^blog/', include('blog.urls', "blog", namespace='blog'),
Here is the implementation behind the include() method:
def include(arg, namespace=None):
app_name = None
if isinstance(arg, tuple):
# Callable returning a namespace hint.
try:
urlconf_module, app_name = arg
except ValueError:
if namespace:
raise ImproperlyConfigured(
"Cannot override the namespace for a dynamic module that "
"provides a namespace."
)
raise ImproperlyConfigured(
"Passing a %d-tuple to include() is not supported. Pass a "
"2-tuple containing the list of patterns and app_name, and "
"provide the namespace argument to include() instead." % len(arg)
)
else:
# No namespace hint - use manually provided namespace.
urlconf_module = arg
...
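For what it's worth, on Django 2 and later the same URLconf is usually written with path(). Note that the traceback above actually fails on the admin line first: admin.site.urls is itself a 3-tuple and must not be wrapped in include(). A sketch, assuming blog/urls.py sets app_name = 'blog':
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('admin/', admin.site.urls),  # pass the admin URLconf directly, no include()
    path('blog/', include('blog.urls', namespace='blog')),
]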
A:
Change the import line to from django.shortcuts import reverse.
|
Error while working on the site in Django
|
This is a continuation of the previous question. When I continued working on the site and tried to test it with "python manage.py runserver" in the C:\mysite\site\miniproject directory, the following error popped up:
C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models.
new_class._meta.apps.register_model(new_class._meta.app_label, new_class)
C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models.
new_class._meta.apps.register_model(new_class._meta.app_label, new_class)
Watching for file changes with StatReloader
Performing system checks...
Exception in thread django-main-thread:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 17, in include
urlconf_module, app_name = arg
ValueError: too many values to unpack (expected 2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Program Files\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Python36\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\django\core\management\commands\runserver.py", line 118, in inner_run
self.check(display_num_errors=True)
File "C:\Program Files\Python36\lib\site-packages\django\core\management\base.py", line 423, in check
databases=databases,
File "C:\Program Files\Python36\lib\site-packages\django\core\checks\registry.py", line 76, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 13, in check_url_config
return check_resolver(resolver)
File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 23, in check_resolver
return check_method()
File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 416, in check
for pattern in self.url_patterns:
File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 602, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 595, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Program Files\Python36\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "C:\mysite\site\miniproject\miniproject\urls.py", line 20, in <module>
url(r'^admin/', include(admin.site.urls)),
File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 27, in include
'provide the namespace argument to include() instead.' % len(arg)
django.core.exceptions.ImproperlyConfigured: Passing a 3-tuple to include() is not supported. Pass a 2-tuple containing the list of patterns and app_name, and provide the namespace argument to include() instead.
Here is a link to the chapter I was working through: https://pocoz.gitbooks.io/django-v-primerah/content/sozdanie-shablonov-dlia-view.html. Most likely I made a mistake somewhere. Next, I will show you the contents of the files:
base.html:
{% load staticfiles %}
<!DOCTYPE html>
<html>
<head>
<title>{% block title %}{% endblock %}</title>
<link href="{% static "css/blog.css" %}" rel="stylesheet">
</head>
<body>
<div id="content">
{% block content %}
{%endblock%}
</div>
<div id="sidebar">
<h2>My blog</h2>
<p>This is my blog.</p>
</div>
</body>
</html>
list.html:
{% extends "blog/base.html" %}
{% block title %}My Blog{% endblock %}
{% block content %}
<h1>My Blog</h1>
{% for post in posts %}
<h2>
<a href="{{ post.get_absolute_url }}">{{ post.title }}</a>
</h2>
<p class="date">
Published {{ post.publish }} by {{ post.author }}
</p>
{{ post.body|truncatewords:30|linebreaks }}
{% endfor %}
{%endblock%}
detail.html:
{% extends "blog/base.html" %}
{% block title %}{{ post.title }}{% endblock %}
{% block content %}
<h1>{{post.title}}</h1>
<p class="date">
Published {{ post.publish }} by {{ post.author }}
</p>
{{ post.body|linebreaks}}
{%endblock%}
C:\mysite\site\miniproject\blog\views.py:
from django.shortcuts import render, get_object_or_404
from .models import Post
def post_list(request):
posts = Post.published.all()
return render(request, 'blog/post/list.html', {'posts': posts})
def post_detail(request, year, month, day, post):
post = get_object_or_404(Post, slug=post,
status='published',
publish__year=year,
publish__month=month,
publish__day=day)
return render(request,'blog/post/detail.html', {'post': post})
# Create your views here.
C:\mysite\site\miniproject\blog\urls.py:
from django.conf.urls import url
from . import views
urlpatterns = [
# post views
url(r'^$', views.post_list, name='post_list'),
url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/'\
r'(?P<post>[-\w]+)/$',
views.post_detail,
name='post_detail'),
]
C:\mysite\site\miniproject\miniproject\urls.py:
"""miniproject URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
url(r'^blog/', include('blog.urls',
namespace='blog',
app_name='blog')),
]
C:\mysite\site\miniproject\blog\models.py:
from django.db import models
from django.utils import timezone
from django.contrib.auth.models import User
from django.shortcuts import reverse
class Post(models.Model):
def get_absolute_url(self):
return reverse('blog:post_detail',
args=[self.publish.year,
self.publish.strftime('%m'),
self.publish.strftime('%d'),
self.slug])
class Post(models.Model):
STATUS_CHOICES = (
('draft', 'Draft'),
('published', 'Published'),
)
title = models.CharField(max_length=250)
slug = models.SlugField(max_length=250, unique_for_date='publish')
author = models.ForeignKey(User, on_delete=models.CASCADE, related_name='blog_posts')
body = models.TextField()
publish = models.DateTimeField(default=timezone.now)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='draft')
class Meta:
ordering = ('-publish',)
def __str__(self):
return self.title
# Create your models here.
I updated the Python libraries, carefully checked everything, and read the Django documentation, but nothing helped. Maybe I inserted the Python code incorrectly.
|
[
"The error lies in your urls file\napp_name is passed as an arg instead of a kwarg\n url(r'^blog/', include('blog.urls',\n namespace='blog',\n app_name='blog')), \n\nThis should fix the issue\n url(r'^blog/', include('blog.urls', \"blog\", namespace='blog'),\n\nhere is the implemtation behind include method\ndef include(arg, namespace=None):\n app_name = None\n if isinstance(arg, tuple):\n # Callable returning a namespace hint.\n try:\n urlconf_module, app_name = arg\n except ValueError:\n if namespace:\n raise ImproperlyConfigured(\n \"Cannot override the namespace for a dynamic module that \"\n \"provides a namespace.\"\n )\n raise ImproperlyConfigured(\n \"Passing a %d-tuple to include() is not supported. Pass a \"\n \"2-tuple containing the list of patterns and app_name, and \"\n \"provide the namespace argument to include() instead.\" % len(arg)\n )\n else:\n # No namespace hint - use manually provided namespace.\n urlconf_module = arg\n ...\n\n",
"Change the import line to from django.shortcuts import reverse.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"django_templates",
"python",
"python_3.x",
"web"
] |
stackoverflow_0074645823_django_django_templates_python_python_3.x_web.txt
|
Q:
React Native with expo-av - recording audio stops on Android 12 after picking up a phone call
I am developing a recording app in React Native. For that, I use expo-av. I've noticed recently that on Android 12, when a user picks up a call, the app keeps recording, but when listening to it later there is silence not only while the user was on the phone, but also from the moment he hung up until the end of the recording. On older versions of Android, there is silence while the user was on a call, but it starts capturing audio again when he hangs up. Any idea how to fix this so that it keeps capturing audio after the user hangs up?
I am on expo 45, btw.
A:
In the new Android versions (>= 12), permissions have changed. Please look here.
A:
I am not 100% sure, but I've changed TARGET_SDK to version 31 (Android 12) and it seems to be working now.
|
React Native with expo-av - recording audio stops on Android 12 after picking up a phone call
|
I am developing a recording app in React Native. For that, I use expo-av. I've noticed recently that on Android 12, when a user picks up a call, the app keeps recording, but when listening to it later there is silence not only while the user was on the phone, but also from the moment he hung up until the end of the recording. On older versions of Android, there is silence while the user was on a call, but it starts capturing audio again when he hangs up. Any idea how to fix this so that it keeps capturing audio after the user hangs up?
I am on expo 45, btw.
|
[
"In new android version (>=12) permissions are changed. Please look at the here\n",
"I am not 100% sure, but I've changed TARGET_SDK to version 31 (Android 12) and it seems to be working now.\n"
] |
[
0,
0
] |
[] |
[] |
[
"android",
"expo",
"expo_av",
"react_native"
] |
stackoverflow_0074561024_android_expo_expo_av_react_native.txt
|
Q:
How do I make the first letter of a string uppercase in JavaScript?
How do I make the first letter of a string uppercase, but not change the case of any of the other letters?
For example:
"this is a test" → "This is a test"
"the Eiffel Tower" → "The Eiffel Tower"
"/index.html" → "/index.html"
A:
The basic solution is:
function capitalizeFirstLetter(string) {
return string.charAt(0).toUpperCase() + string.slice(1);
}
console.log(capitalizeFirstLetter('foo')); // Foo
Some other answers modify String.prototype (this answer used to as well), but I would advise against this now due to maintainability (hard to find out where the function is being added to the prototype and could cause conflicts if other code uses the same name / a browser adds a native function with that same name in future).
...and then, there is so much more to this question when you consider internationalisation, as this astonishingly good answer (buried below) shows.
If you want to work with Unicode code points instead of code units (for example to handle Unicode characters outside of the Basic Multilingual Plane) you can leverage the fact that String#[@iterator] works with code points, and you can use toLocaleUpperCase to get locale-correct uppercasing:
const capitalizeFirstLetter = ([ first, ...rest ], locale = navigator.language) =>
first === undefined ? '' : first.toLocaleUpperCase(locale) + rest.join('')
console.log(
capitalizeFirstLetter(''), // [empty string]
capitalizeFirstLetter('foo'), // Foo
capitalizeFirstLetter(""), // "" (correct!)
capitalizeFirstLetter("italya", 'tr') // İtalya" (correct in Turkish Latin!)
)
For even more internationalization options, please see the original answer below.
A:
Here's a more object-oriented approach:
Object.defineProperty(String.prototype, 'capitalize', {
value: function() {
return this.charAt(0).toUpperCase() + this.slice(1);
},
enumerable: false
});
You'd call the function, like this:
"hello, world!".capitalize();
With the expected output being:
"Hello, world!"
A:
In CSS:
p::first-letter {
text-transform:capitalize;
}
A:
Here is a shortened version of the popular answer that gets the first letter by treating the string as an array:
function capitalize(s)
{
return s[0].toUpperCase() + s.slice(1);
}
Update
According to the comments below this doesn't work in IE 7 or below.
Update 2:
To avoid undefined for empty strings (see @njzk2's comment below), you can check for an empty string:
function capitalize(s)
{
return s && s[0].toUpperCase() + s.slice(1);
}
ES version
const capitalize = s => s && s[0].toUpperCase() + s.slice(1)
// to always return type string even when s may be falsy other than empty-string
const capitalize = s => (s && s[0].toUpperCase() + s.slice(1)) || ""
A:
If you're interested in the performance of a few different methods posted:
Here are the fastest methods based on this jsperf test (ordered from fastest to slowest).
As you can see, the first two methods are essentially comparable in terms of performance, whereas altering the String.prototype is by far the slowest in terms of performance.
// 10,889,187 operations/sec
function capitalizeFirstLetter(string) {
return string[0].toUpperCase() + string.slice(1);
}
// 10,875,535 operations/sec
function capitalizeFirstLetter(string) {
return string.charAt(0).toUpperCase() + string.slice(1);
}
// 4,632,536 operations/sec
function capitalizeFirstLetter(string) {
return string.replace(/^./, string[0].toUpperCase());
}
// 1,977,828 operations/sec
String.prototype.capitalizeFirstLetter = function() {
return this.charAt(0).toUpperCase() + this.slice(1);
}
A:
I didn’t see any mention in the existing answers of issues related to astral plane code points or internationalization. “Uppercase” doesn’t mean the same thing in every language using a given script.
There is now one answer addressing astral plane code points, but it’s a bit buried (like this one will be, I guess!)
Overview of the hidden problem and various approaches to it
Most of the proposed functions look like this:
function capitalizeFirstLetter(str) {
return str[0].toUpperCase() + str.slice(1);
}
However, some cased characters fall outside the BMP (basic multilingual plane, code points U+0 to U+FFFF). For example take this Deseret text:
capitalizeFirstLetter(""); // ""
The first character here fails to capitalize because the array-indexed properties of strings don’t access “characters” or code points*. They access UTF-16 code units. This is true also when slicing — the index values point at code units.
It happens to be that UTF-16 code units are 1:1 with USV code points within two ranges, U+0 to U+D7FF and U+E000 to U+FFFF inclusive. Most cased characters fall into those two ranges, but not all of them.
From ES2015 on, dealing with this became a bit easier. String.prototype[@@iterator] yields strings corresponding to code points**. So for example, we can do this:
function capitalizeFirstLetter([ first='', ...rest ]) {
return [ first.toUpperCase(), ...rest ].join('');
}
capitalizeFirstLetter("") // ""
For longer strings, this is probably not terribly efficient*** — we don’t really need to iterate the remainder. We could use String.prototype.codePointAt to get at that first (possible) letter, but we’d still need to determine where the slice should begin. One way to avoid iterating the remainder would be to test whether the first codepoint is outside the BMP; if it isn’t, the slice begins at 1, and if it is, the slice begins at 2.
function capitalizeFirstLetter(str) {
if (!str) return '';
const firstCP = str.codePointAt(0);
const index = firstCP > 0xFFFF ? 2 : 1;
return String.fromCodePoint(firstCP).toUpperCase() + str.slice(index);
}
capitalizeFirstLetter("") // ""
You could use bitwise math instead of > 0xFFFF there, but it’s probably easier to understand this way and either would achieve the same thing.
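For the curious, a minimal sketch of the bitwise variant (the function name is mine): code points never exceed 0x10FFFF, so shifting away the low 16 bits leaves zero exactly for BMP characters.
function capitalizeFirstLetterBitwise(str) {
  if (!str) return '';
  const firstCP = str.codePointAt(0);
  // Non-zero high bits mean the code point lies outside the BMP,
  // so it occupies two UTF-16 code units.
  const index = (firstCP >>> 16) ? 2 : 1;
  return String.fromCodePoint(firstCP).toUpperCase() + str.slice(index);
}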
We can also make this work in ES5 and below by taking that logic a bit further if necessary. There are no intrinsic methods in ES5 for working with codepoints, so we have to manually test whether the first code unit is a surrogate****:
function capitalizeFirstLetter(str) {
if (!str) return '';
var firstCodeUnit = str[0];
if (firstCodeUnit < '\uD800' || firstCodeUnit > '\uDFFF') {
return str[0].toUpperCase() + str.slice(1);
}
return str.slice(0, 2).toUpperCase() + str.slice(2);
}
capitalizeFirstLetter("") // ""
Deeper into internationalization (whose capitalization?)
At the start I also mentioned internationalization considerations. Some of these are very difficult to account for because they require knowledge not only of what language is being used, but also may require specific knowledge of the words in the language. For example, the Irish digraph "mb" capitalizes as "mB" at the start of a word. Another example, the German eszett, never begins a word (afaik), but still helps illustrate the problem. The lowercase eszett (“ß”) capitalizes to “SS,” but “SS” could lowercase to either “ß” or “ss” — you require out-of-band knowledge of the German language to know which is correct!
The most famous example of these kinds of issues, probably, is Turkish. In Turkish Latin, the capital form of i is İ, while the lowercase form of I is ı — they’re two different letters. Fortunately we do have a way to account for this:
function capitalizeFirstLetter([ first='', ...rest ], locale) {
return [ first.toLocaleUpperCase(locale), ...rest ].join('');
}
capitalizeFirstLetter("italy", "en") // "Italy"
capitalizeFirstLetter("italya", "tr") // "İtalya"
In a browser, the user’s most-preferred language tag is indicated by navigator.language, a list in order of preference is found at navigator.languages, and a given DOM element’s language can be obtained (usually) with Object(element.closest('[lang]')).lang || YOUR_DEFAULT_HERE in multilanguage documents.
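As a small illustration of that element-language lookup (the helper name is mine):
// Locale for a DOM element: nearest ancestor with a lang attribute,
// falling back to the user's preferred language.
const localeFor = (element, fallback = navigator.language) =>
  Object(element.closest('[lang]')).lang || fallback;

localeFor(document.querySelector('p')); // e.g. "nl" inside <div lang="nl">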
In agents which support Unicode property character classes in RegExp, which were introduced in ES2018, we can clean stuff up further by directly expressing what characters we’re interested in:
function capitalizeFirstLetter(str, locale=navigator.language) {
return str.replace(/^\p{CWU}/u, char => char.toLocaleUpperCase(locale));
}
This could be tweaked a bit to also handle capitalizing multiple words in a string with fairly good accuracy for at least some languages, though outlying cases will be hard to avoid completely if doing so no matter what the primary language is.
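A sketch of that multi-word tweak (the helper name and the capture-group approach are mine): it uppercases the first CWU character of each whitespace-separated word.
function capitalizeWords(str, locale = 'en') {
  return str.replace(
    /(^|\s)(\p{CWU})/gu,
    (match, boundary, char) => boundary + char.toLocaleUpperCase(locale)
  );
}

capitalizeWords('the eiffel tower');      // "The Eiffel Tower"
capitalizeWords('italya istanbul', 'tr'); // "İtalya İstanbul"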
The CWU or Changes_When_Uppercased character property matches all code points which change when uppercased in the generic case where specific locale data is absent. There are other interesting case-related Unicode character properties that you may wish to play around with. It’s a cool zone to explore but we’d go on all day if we enumerated em all here. Here’s something to get your curiosity going if you’re unfamiliar, though: \p{Lower} is a larger group than \p{LowercaseLetter} (aka \p{Ll}) — conveniently illustrated by the default character set comparison in this tool provided by Unicode. (NB: not everything you can reference there is also available in ES regular expressions, but most of the stuff you’re likely to want is).
Alternatives to case-mapping in JS (Firefox & CSS love the Dutch!)
If digraphs with unique locale/language/orthography capitalization rules happen to have a single-codepoint “composed” representation in Unicode, these might be used to make one’s capitalization expectations explicit even in the absence of locale data. For example, we could prefer the composed i-j digraph, ij / U+0133, associated with Dutch, to ensure a case-mapping to uppercase IJ / U+0132:
capitalizeFirstLetter('ijsselmeer'); // "IJsselmeer"
On the other hand, precomposed digraphs and similar are sometimes deprecated (like that one, it seems!) and may be undesirable in interchanged text regardless due to the potential copypaste nuisance if that’s not the normal way folks type the sequence in practice. Unfortunately, in the absence of the precomposition “hint,” an explicit locale won’t help here (at least as far as I know). If we spell ijsselmeer with an ordinary i + j, capitalizeFirstLetter will produce the wrong result even if we explicitly indicate nl as the locale:
capitalizeFirstLetter('ijsselmeer', 'nl'); // "Ijsselmeer" :(
(I’m not entirely sure whether there are some such cases where the behavior comes down to ICU data availability — perhaps someone else could say.)
If the point of the transformation is to display textual content in a web browser, though, you have an entirely different option available that will likely be your best bet: leveraging features of the web platform’s other core languages, HTML and CSS. Armed with HTML’s lang=... and CSS’s text-transform:..., you’ve got a (pseudo-)declarative solution that leaves extra room for the user agent to be “smart.” A JS API needs to have predictable outcomes across all browsers (generally) and isn’t free to experiment with heuristics. The user-agent itself is obligated only to its user, though, and heuristic solutions are fair game when the output is for a human being. If we tell it “this text is Dutch, but please display it capitalized,” the particular outcome might now vary between browsers, but it’s likely going to be the best each of them could do. Let’s see:
<!DOCTYPE html>
<dl>
<dt>Untransformed
<dd>ijsselmeer
<dt>Capitalized with CSS and <code>lang=en</code>
<dd lang="en" style="text-transform: capitalize">ijsselmeer
<dt>Capitalized with CSS and <code>lang=nl</code>
<dd lang="nl" style="text-transform: capitalize">ijsselmeer
In Chromium at the time of writing, both the English and Dutch lines come out as Ijsselmeer — so it does no better than JS. But try it in current Firefox! The element that we told the browser contains Dutch will be correctly rendered as IJsselmeer there.
This solution is purpose-specific (it’s not gonna help you in Node, anyway) but it was silly of me not to draw attention to it previously given some folks might not realize they’re googling the wrong question. Thanks @paul23 for clarifying more about the nature of the IJ digraph in practice and prompting further investigation!
As of January 2021, all major engines have implemented the Unicode property character class feature, but depending on your target support range you may not be able to use it safely yet. The last browser to introduce support was Firefox (78; June 30, 2020). You can check for support of this feature with the Kangax compat table. Babel can be used to compile RegExp literals with property references to equivalent patterns without them, but be aware that the resulting code can sometimes be enormous. You probably would not want to do this unless you’re certain the tradeoff is justified for your use case.
In all likelihood, people asking this question will not be concerned with Deseret capitalization or internationalization. But it’s good to be aware of these issues because there’s a good chance you’ll encounter them eventually even if they aren’t concerns presently. They’re not “edge” cases, or rather, they’re not by-definition edge cases — there’s a whole country where most people speak Turkish, anyway, and conflating code units with codepoints is a fairly common source of bugs (especially with regard to emoji). Both strings and language are pretty complicated!
* The code units of UTF-16 / UCS2 are also Unicode code points in the sense that e.g. U+D800 is technically a code point, but that’s not what it “means” here ... sort of ... though it gets pretty fuzzy. What the surrogates definitely are not, though, is USVs (Unicode scalar values).
** Though if a surrogate code unit is “orphaned” — i.e., not part of a logical pair — you could still get surrogates here, too.
*** maybe. I haven’t tested it. Unless you have determined capitalization is a meaningful bottleneck, I probably wouldn’t sweat it — choose whatever you believe is most clear and readable.
**** such a function might wish to test both the first and second code units instead of just the first, since it’s possible that the first unit is an orphaned surrogate. For example the input "\uD800x" would capitalize the X as-is, which may or may not be expected.
A:
For another case, I needed it to capitalize the first letter and lowercase the rest. The following cases made me change this function:
//es5
function capitalize(string) {
return string.charAt(0).toUpperCase() + string.slice(1).toLowerCase();
}
capitalize("alfredo") // => "Alfredo"
capitalize("Alejandro")// => "Alejandro
capitalize("ALBERTO") // => "Alberto"
capitalize("ArMaNdO") // => "Armando"
// es6 using destructuring
const capitalize = ([first,...rest]) => first.toUpperCase() + rest.join('').toLowerCase();
A:
This is the 2018 ECMAScript 6+ Solution:
const str = 'the Eiffel Tower';
const newStr = `${str[0].toUpperCase()}${str.slice(1)}`;
console.log('Original String:', str); // the Eiffel Tower
console.log('New String:', newStr); // The Eiffel Tower
A:
If you're already (or considering) using Lodash, the solution is easy:
_.upperFirst('fred');
// => 'Fred'
_.upperFirst('FRED');
// => 'FRED'
_.capitalize('fred') //=> 'Fred'
See their documentation: https://lodash.com/docs#capitalize
_.camelCase('Foo Bar'); //=> 'fooBar'
https://lodash.com/docs/4.15.0#camelCase
_.lowerFirst('Fred');
// => 'fred'
_.lowerFirst('FRED');
// => 'fRED'
_.snakeCase('Foo Bar');
// => 'foo_bar'
Vanilla JavaScript for first upper case:
function upperCaseFirst(str){
return str.charAt(0).toUpperCase() + str.substring(1);
}
A:
There is a very simple way to implement it using replace. For ECMAScript 6:
'foo'.replace(/^./, str => str.toUpperCase())
Result:
'Foo'
A:
Capitalize the first letter of all words in a string:
function ucFirstAllWords( str )
{
var pieces = str.split(" ");
for ( var i = 0; i < pieces.length; i++ )
{
var j = pieces[i].charAt(0).toUpperCase();
pieces[i] = j + pieces[i].substr(1);
}
return pieces.join(" ");
}
A:
CSS only
If the transformation is needed only for displaying on a web page:
p::first-letter {
text-transform: uppercase;
}
Despite being called "::first-letter", it applies to the first character, i.e. in case of string %a, this selector would apply to % and as such a would not be capitalized.
In IE9+ or IE5.5+ it's supported in legacy notation with only one colon (:first-letter).
ES2015 one-liner
const capitalizeFirstChar = str => str.charAt(0).toUpperCase() + str.substring(1);
Remarks
In the benchmark I performed, there was no significant difference between string.charAt(0) and string[0]. Note however, that string[0] would be undefined for an empty string, so the function would have to be rewritten to use "string && string[0]", which is way too verbose, compared to the alternative.
string.substring(1) is faster than string.slice(1).
Benchmark between substring() and slice()
The difference is rather minuscule nowadays (run the test yourself):
21,580,613.15 ops/s ±1.6% for substring(),
21,096,394.34 ops/s ±1.8% (2.24% slower) for slice().
A:
It's always better to handle these kinds of things with CSS first. In general, if you can solve something using CSS, go for that first, then try JavaScript to solve your problems. So in this case, try using :first-letter in CSS and apply text-transform: capitalize;
So try creating a class for that, so you can use it globally, for example: .first-letter-uppercase and add something like below in your CSS:
.first-letter-uppercase:first-letter {
text-transform:capitalize;
}
Also, the alternative option is JavaScript, and the best approach is going to be something like this:
function capitalizeTxt(txt) {
return txt.charAt(0).toUpperCase() + txt.slice(1); // or, if you want to lowercase the rest: txt.slice(1).toLowerCase()
}
and call it like:
capitalizeTxt('this is a test'); // return 'This is a test'
capitalizeTxt('the Eiffel Tower'); // return 'The Eiffel Tower'
capitalizeTxt('/index.html'); // return '/index.html'
capitalizeTxt('alireza'); // return 'Alireza'
capitalizeTxt('dezfoolian'); // return 'Dezfoolian'
If you want to reuse it over and over, it's better to attach it to the native JavaScript String, like below:
String.prototype.capitalizeTxt = String.prototype.capitalizeTxt || function() {
return this.charAt(0).toUpperCase() + this.slice(1);
}
and call it as below:
'this is a test'.capitalizeTxt(); // return 'This is a test'
'the Eiffel Tower'.capitalizeTxt(); // return 'The Eiffel Tower'
'/index.html'.capitalizeTxt(); // return '/index.html'
'alireza'.capitalizeTxt(); // return 'Alireza'
A:
String.prototype.capitalize = function(allWords) {
return (allWords) ? // If all words
this.split(' ').map(word => word.capitalize()).join(' ') : // Break down the phrase to words and then recursive
// calls until capitalizing all words
this.charAt(0).toUpperCase() + this.slice(1); // If allWords is undefined, capitalize only the first word,
// meaning the first character of the whole string
}
And then:
"capitalize just the first word".capitalize(); ==> "Capitalize just the first word"
"capitalize all words".capitalize(true); ==> "Capitalize All Words"
Update November 2016 (ES6), just for fun:
const capitalize = (string = '') => [...string].map( // Convert to array with each item is a char of
// string by using spread operator (...)
(char, index) => index ? char : char.toUpperCase() // Index true means not equal 0, so (!index) is
// the first character which is capitalized by
// the `toUpperCase()` method
).join('') // Return back to string
then capitalize("hello") // Hello
A:
The 3 SHORTEST solutions; 1 and 2 handle cases where the string s is "", null, or undefined:
s&&s[0].toUpperCase()+s.slice(1) // 32 char
s&&s.replace(/./,s[0].toUpperCase()) // 36 char - using regexp
'foo'.replace(/./,x=>x.toUpperCase()) // 31 char - direct on string, ES6
let s='foo bar';
console.log( s&&s[0].toUpperCase()+s.slice(1) );
console.log( s&&s.replace(/./,s[0].toUpperCase()) );
console.log( 'foo bar'.replace(/./,x=>x.toUpperCase()) );
A:
We could get the first character with one of my favorite regular expressions, which looks like a cute smiley: /^./
String.prototype.capitalize = function () {
return this.replace(/^./, function (match) {
return match.toUpperCase();
});
};
And for all coffee-junkies:
String::capitalize = ->
@replace /^./, (match) ->
match.toUpperCase()
...and for all guys who think that there's a better way of doing this, without extending native prototypes:
var capitalize = function (input) {
return input.replace(/^./, function (match) {
return match.toUpperCase();
});
};
A:
Here is a function called ucfirst() (short for "upper case first letter"):
function ucfirst(str) {
var firstLetter = str.substr(0, 1);
return firstLetter.toUpperCase() + str.substr(1);
}
You can capitalise a string by calling ucfirst("some string") -- for example,
ucfirst("this is a test") --> "This is a test"
It works by splitting the string into two pieces. On the first line it pulls out firstLetter and then on the second line it capitalises firstLetter by calling firstLetter.toUpperCase() and joins it with the rest of the string, which is found by calling str.substr(1).
You might think this would fail for an empty string, and indeed in a language like C you would have to cater for this. However in JavaScript, when you take a substring of an empty string, you just get an empty string back.
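A quick demonstration of that empty-string behavior:
"".substr(0, 1); // "" (no exception)
"".substr(1);    // ""
ucfirst("");     // "" (so the function degrades gracefully)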
A:
Use:
var str = "ruby java";
console.log(str.charAt(0).toUpperCase() + str.substring(1));
It will output "Ruby java" to the console.
A:
If you use Underscore.js or Lodash, the underscore.string library provides string extensions, including capitalize:
_.capitalize(string) Converts first letter of the string to
uppercase.
Example:
_.capitalize("foo bar") == "Foo bar"
A:
If you're ok with capitalizing the first letter of every word, and your usecase is in HTML, you can use the following CSS:
<style type="text/css">
p.capitalize {text-transform:capitalize;}
</style>
<p class="capitalize">This is some text.</p>
This is from CSS text-transform Property (at W3Schools).
A:
var capitalized = yourstring[0].toUpperCase() + yourstring.substr(1);
A:
If you are wanting to reformat all-caps text, you might want to modify the other examples as such:
function capitalize (text) {
return text.charAt(0).toUpperCase() + text.slice(1).toLowerCase();
}
This will ensure that the following text is changed:
TEST => Test
This Is A TeST => This is a test
A:
String.prototype.capitalize = function(){
return this.replace(/(^|\s)([a-z])/g,
function(m, p1, p2) {
return p1 + p2.toUpperCase();
});
};
Usage:
capitalizedString = someString.capitalize();
This is a text string => This Is A Text String
A:
function capitalize(s) {
// returns the first letter capitalized + the string from index 1 and out aka. the rest of the string
return s[0].toUpperCase() + s.substr(1);
}
// examples
capitalize('this is a test');
=> 'This is a test'
capitalize('the Eiffel Tower');
=> 'The Eiffel Tower'
capitalize('/index.html');
=> '/index.html'
A:
yourString.replace(/\w/, c => c.toUpperCase())
I found this arrow function easiest. Replace matches the first letter character (\w) of your string and converts it to uppercase. Nothing fancier is necessary.
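One behavioral note, since the question lists "/index.html" as a string that should stay unchanged: \w skips over non-word characters, so this variant capitalizes the first letter wherever it occurs, unlike the /^./ answers:
'/index.html'.replace(/\w/, c => c.toUpperCase()); // "/Index.html"
'/index.html'.replace(/^./, c => c.toUpperCase()); // "/index.html" ("/" has no uppercase form)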
A:
var str = "test string";
str = str.substring(0,1).toUpperCase() + str.substring(1);
A:
81 different answers for this question, some off-topic, and yet none of them raise the important issue that none of the solutions listed will work with Asian characters, emojis, and other high Unicode-point-value characters in many browsers. Here is a solution that will:
const consistantCapitalizeFirstLetter = "\uD852\uDF62".length === 1 ?
function(S) {
"use-strict"; // Hooray! The browser uses UTF-32!
return S.charAt(0).toUpperCase() + S.substring(1);
} : function(S) {
"use-strict";
// The browser is using UCS2 to store UTF-16
var code = S.charCodeAt(0)|0;
return (
code >= 0xD800 && code <= 0xDBFF ? // Detect surrogate pair
S.slice(0,2).toUpperCase() + S.substring(2) :
S.charAt(0).toUpperCase() + S.substring(1)
);
};
const prettyCapitalizeFirstLetter = "\uD852\uDF62".length === 1 ?
function(S) {
"use-strict"; // Hooray! The browser uses UTF-32!
return S.charAt(0).toLocaleUpperCase() + S.substring(1);
} : function(S) {
"use-strict";
// The browser is using UCS2 to store UTF-16
var code = S.charCodeAt(0)|0;
return (
code >= 0xD800 && code <= 0xDBFF ? // Detect surrogate pair
S.slice(0,2).toLocaleUpperCase() + S.substring(2) :
S.charAt(0).toLocaleUpperCase() + S.substring(1)
);
};
Do note that the above solution tries to account for UTF-32. However, the specification officially states that browsers are required to do everything in UTF-16 mapped into UCS2. Nevertheless, if we all come together, do our part, and start preparing for UTF-32, then there is a chance that the TC39 may allow browsers to start using UTF-32 (like how Python uses 24-bits for each character of the string). This must seem silly to an English speaker: no one who uses only Latin-1 has ever had to deal with Mojibake because Latin-1 is supported by all character encodings. But, users in other countries (such as China, Japan, Indonesia, etc.) are not so fortunate. They constantly struggle with encoding problems not just from the webpage, but also from the JavaScript: many Chinese/Japanese characters are treated as two letters by JavaScript and thus may be broken apart in the middle, resulting in � and � (two question-marks that make no sense to the end user). If we could start getting ready for UTF-32, then the TC39 might just allow browsers to do what Python did many years ago, which made Python very popular for working with high Unicode characters: using UTF-32.
consistantCapitalizeFirstLetter works correctly in Internet Explorer 3+ (when the const is changed to var). prettyCapitalizeFirstLetter requires Internet Explorer 5.5+ (see the top of page 250 of this document). However, these facts are more of a joke because it is very likely that the rest of the code on your webpage will not even work in Internet Explorer 8 - because of all the DOM and JScript bugs and lack of features in these older browsers. Further, no one uses Internet Explorer 3 or Internet Explorer 5.5 anymore.
A:
Check out this solution:
var stringVal = 'master';
stringVal.replace(/^./, stringVal[0].toUpperCase()); // Returns Master
A:
Only because this is really a one-liner I will include this answer. It's an ES6-based interpolated string one-liner.
let setStringName = 'the Eiffel Tower';
setStringName = `${setStringName[0].toUpperCase()}${setStringName.substring(1)}`;
A:
with arrow function
let fLCapital = s => s.replace(/./, c => c.toUpperCase())
fLCapital('this is a test') // "This is a test"
with arrow function, another solution
let fLCapital = s => s = s.charAt(0).toUpperCase() + s.slice(1);
fLCapital('this is a test') // "This is a test"
with array and map()
let namesCapital = names => names.map(name => name.replace(/./, c => c.toUpperCase()))
namesCapital(['james', 'robert', 'mary']) // ["James", "Robert", "Mary"]
A:
The ucfirst function works if you do it like this.
function ucfirst(str) {
var firstLetter = str.slice(0,1);
return firstLetter.toUpperCase() + str.substring(1);
}
Thanks J-P for the clarification.
A:
yourString.replace(/^[a-z]/, function(m){ return m.toUpperCase() });
(You may encapsulate it in a function or even add it to the String prototype if you use it frequently.)
A:
Here's my version. I think it's easy to understand and elegant too.
var str = "foo bar baz";
// Capitalize
str.split(' ')
.map(w => w[0].toUpperCase() + w.substr(1).toLowerCase())
.join(' ')
// Returns "Foo Bar Baz"
// Capitalize the first letter
str.charAt(0).toUpperCase() + str.slice(1)
// Returns "Foo bar baz"
A:
You can do it in one line like this
string[0].toUpperCase() + string.substring(1)
A:
A functional approach
const capitalize = ([s, ...tring]) =>
[s.toUpperCase(), ...tring]
.join('');
Then you could
const titleCase = str =>
str
.split(' ')
.map(capitalize)
.join(' ')
A:
The first character of every string is capitalized.
function capitalize(word){
return word[0].toUpperCase() + word.slice(1).toLowerCase();
}
console.log(capitalize("john")); //John
console.log(capitalize("BRAVO")); //Bravo
console.log(capitalize("BLAne")); //Blane
A:
CoffeeScript
ucfirst = (str) -> str.charAt(0).toUpperCase() + str.slice(1)
As a String prototype method:
String::capitalize = -> @charAt(0).toUpperCase() + @slice(1)
A:
In CoffeeScript, add to the prototype for a string:
String::capitalize = ->
@substr(0, 1).toUpperCase() + @substr(1)
Usage would be:
"woobie".capitalize()
Which yields:
"Woobie"
A:
function capitalize(string) {
return string.replace(/^./, Function.call.bind("".toUpperCase));
}
A:
Posting an edit of @salim's answer to include locale letter transformation.
var str = "test string";
str = str.substring(0,1).toLocaleUpperCase() + str.substring(1);
A:
There are already so many good answers, but you can also use a simple CSS transform:
text-transform: capitalize;
div.c {
text-transform: capitalize;
}
<h2>text-transform: capitalize:</h2>
<div class="c">Lorem ipsum dolor sit amet, consectetur adipiscing elit.</div>
A:
// Uppercase first letter
function ucfirst(field) {
field.value = field.value.substr(0, 1).toUpperCase() + field.value.substr(1);
}
Usage:
<input type="text" onKeyup="ucfirst(this)" />
A:
Using the JS replace string method & a regular expression w/ a word boundary seems simple.
Capitalize the first words' first character: "the eiffel tower" --> "The eiffel tower"
str.replace(/\b\w/, v => v.toUpperCase())
Capitalize all words' first character: "the eiffel tower" --> "The Eiffel Tower"
str.replace(/\b\w/g, v => v.toUpperCase())
A:
One possible solution:
function ConvertFirstCharacterToUpperCase(text) {
return text.substr(0, 1).toUpperCase() + text.substr(1);
}
Use this:
alert(ConvertFirstCharacterToUpperCase("this is string"));
Here is a working JS Fiddle.
A:
This solution might be new and probably the simplest.
function firstUpperCase(input)
{
return input[0].toUpperCase() + input.substr(1);
}
console.log(firstUpperCase("capitalize first letter"));
A:
/*
* As terse as possible, assuming you're using ES version 6+
*/
var upLetter1=s=>s.replace(/./,m=>m.toUpperCase());
console.log(upLetter1("the quick brown fox jumped over the lazy dog."));
//\\ The quick brown fox jumped over the lazy dog. //\\
A:
Using an arrow function:
const capitalize = string => string[0].toUpperCase() + string.slice(1)
A:
Capitalize and Uncapitalize first Char of a String.
Functions to include:
/** First Character uppercase */
function capitalize(str) {
return str.charAt(0).toUpperCase() + str.slice(1);
}
/** First Character lowercase */
function uncapitalize(str) {
return str.charAt(0).toLowerCase() + str.slice(1);
}
Example 1 "First Character uppercase":
alert(capitalize("hello world"));
Result: Hello world
Example 2 "First Character lowercase":
alert(uncapitalize("Hello World, today is sunny"));
Result: hello World, today is sunny
A:
Here is my attempt to make a universal function that can capitalize only the first letter, or the first letter of each word, including words separated by a dash (like some first names in French).
By default, the function capitalizes only the first letter and leaves the rest untouched.
Parameters:
lc: true to force lower-casing the rest of the word(s)
all: true to capitalize each word
if( typeof String.prototype.capitalize !== "function" ) {
String.prototype.capitalize = function( lc, all ) {
if( all ) {
return this.split( " " )
.map( currentValue => currentValue.capitalize( lc ), this )
.join( " " )
.split( "-" )
.map( currentValue => currentValue.capitalize( false ), this )
.join( "-" );
} else {
return lc
? this.charAt( 0 ).toUpperCase() + this.slice( 1 ).toLowerCase()
: this.charAt( 0 ).toUpperCase() + this.slice( 1 );
}
}
}
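Illustrative calls, assuming the prototype above has been installed:
console.log('jean-claude VAN damme'.capitalize(true, true)); // "Jean-Claude Van Damme"
console.log('hello world'.capitalize());                     // "Hello world"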
A:
Or you could use Sugar.js capitalize()
Example:
'hello'.capitalize() -> 'Hello'
'hello kitty'.capitalize() -> 'Hello kitty'
'hello kitty'.capitalize(true) -> 'Hello Kitty'
A:
Using prototypes
String.prototype.capitalize = function () {
return this.charAt(0).toUpperCase() + this.slice(1).toLowerCase();
}
or Using functions
function capitalize(str) {
return str.charAt(0).toUpperCase() + str.slice(1).toLowerCase();
}
A:
a.slice(0,1).toUpperCase()+a.slice(1)
let a = 'hello',
fix = a.slice(0,1).toUpperCase()+a.slice(1)
console.log(fix)
A:
There are multiple ways of doing this; try some of the approaches below.
var lower = 'the Eiffel Tower';
var upper = lower.charAt(0).toUpperCase() + lower.substr(1);
And if you are comfortable with regular expressions, you can do things this way:
var upper = lower.replace(/^\w/, function (chr) {
return chr.toUpperCase();
});
And you can even take it one step further by using more modern syntax:
const upper = lower.replace(/^\w/, c => c.toUpperCase());
This also takes care of the negative scenarios mentioned in the question: words starting with special characters like !@#$%^&*()}{{[];':",.<>/? are left unchanged, because /^\w/ does not match them.
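For example:
console.log('/index.html'.replace(/^\w/, c => c.toUpperCase())); // "/index.html"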
A:
Unicode and Locale Aware
Using current language features:
function capitalize([firstLetter, ...rest]) {
return [firstLetter.toLocaleUpperCase(), ...rest].join('');
}
console.log(capitalize('foo bar'));
console.log(capitalize('ѷҥӕ'))
console.log(capitalize('❄⭐'));
// Title Case
console.log(
'Title Case:',
'foo bar'
.split(/\s+/)
.map(capitalize)
.join(' '),
);
We accept a destructured string as the only parameter [firstLetter, ...rest], assigning the first character to the variable firstLetter and get an array for the rest of the characters (...rest) bound to the rest variable. E.g. for the string lorem ipsum this should look like:
capitalize('lorem ipsum');
// firstLetter = 'l'
// rest = ['o', 'r', 'e', 'm', ' ', 'i', 'p', 's', 'u', 'm'];
Now all we need to do is prepend an uppercased version of the first letter firstLetter.toLocaleUpperCase() to the rest array, using the spread operator, and join the resulting array into a string using .join('').
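As a sketch of why toLocaleUpperCase matters (the exact result depends on the locale data available in the runtime): in Turkish, the upper-case form of i is İ, which a plain toUpperCase call would not produce:
const capitalizeTr = ([first, ...rest]) => [first.toLocaleUpperCase('tr'), ...rest].join('');

console.log(capitalizeTr('istanbul')); // "İstanbul" in runtimes with Turkish locale data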
A:
Elegant
const capitalize = ([firstChar, ...rest]) => `${firstChar.toUpperCase()}${rest.join('')}`;
A:
If you go with one of the regex answers, remember they will only work with ASCII characters. All your unicode letters will not be uppercased. The XRegExp library and its unicode plugins solve this problem if you want to stick with regexps. So something like this would work:
String.prototype.capitalize = function () {
return this.replace(XRegExp("^\\p{L}"), function ($0) { return $0.toUpperCase(); })
}
Considering that it still doesn't cover all possibilities (combined characters, see http://www.regular-expressions.info/unicode.html) it seems easier to just use the .charAt(0).toUpperCase() approach.
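A quick illustration of the limitation:
console.log('éclair'.replace(/^[a-z]/, c => c.toUpperCase()));     // "éclair" ([a-z] does not match "é")
console.log('éclair'.charAt(0).toUpperCase() + 'éclair'.slice(1)); // "Éclair"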
A:
This code will also handle extra spaces at the start & end of the string.
let val = ' this is test ';
val = val.trim();
val = val.charAt(0).toUpperCase() + val.slice(1);
console.log("Value => ", val);
A:
You can use a regex approach:
str.replace(/(^|\s)\S/g, letter => letter.toUpperCase());
A:
Okay, so I am new to JavaScript. I wasn't able to get the above to work for me, so I started putting it together myself. Here's my idea (about the same, but with different, working syntax; note that this is actually Java/JSP code rather than JavaScript):
String name = request.getParameter("name");
name = name.toUpperCase().charAt(0) + name.substring(1);
out.println(name);
Here I get the variable from a form (it also works manually):
String name = "i am a Smartypants...";
name = name.toUpperCase().charAt(0) + name.substring(1);
out.println(name);
Output: "I am a Smartypants...";
A:
var capitalizeMe = "string not starting with capital"
Capitalize with substr
var capitalized = capitalizeMe.substr(0, 1).toUpperCase() + capitalizeMe.substr(1);
A:
For capitalizing the first letter of each word and making the rest of each word lower case:
function capitalize(str) {
var splittedEnter = str.split(" ");
var capitalized;
for (var i = 0 ; i < splittedEnter.length ; i++){
capitalized = splittedEnter[i].charAt(0).toUpperCase();
splittedEnter[i] = capitalized + splittedEnter[i].substr(1).toLowerCase();
}
return splittedEnter.join(" ");
}
capitalize("tHiS wiLL be alL CapiTaLiZED.");
The result will be:
This Will Be All Capitalized.
A:
I would just use a regular expression:
myString = ' the quick green alligator...';
myString.trim().replace(/^\w/, (c) => c.toUpperCase());
A:
function capitalizeEachWord(str) {
return str.replace(/\w\S*/g, function(txt) {
return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();
});
}
document.write(capitalizeEachWord('foo BAR God bAD'));
A:
A small improvement - every word in titlecase.
String.prototype.toTitleCase = function(){
return this.replace(/\b(\w+)/g, function(m,p){ return p[0].toUpperCase() + p.substr(1).toLowerCase() });
}
var s = 'heLLo, wOrLD!';
console.log(s.toTitleCase()); // Hello, World!
A:
The simplest solution is:
let yourSentence = 'it needs first letter upper case';
yourSentence.charAt(0).toUpperCase() + yourSentence.substr(1);
or:
yourSentence.charAt(0).toUpperCase() + yourSentence.slice(1);
or:
yourSentence.substr(0, 1).toUpperCase() + yourSentence.substr(1);
A:
1. We'll be using CSS to achieve this. It can also be set from an external CSS.
<span style="text-transform: capitalize">The first letter of each word becomes an upper case</span>
2. Using vanilla JavaScript, we could do:
let string = "test case"
string = string[0].toUpperCase() + string.substring(1)
//return "Test case"
Explanation:
string[0].toUpperCase(): converts the first letter in the string to upper case
string.substring(1): deletes the first letter in the string and returns the remaining characters
text-transform="capitalize": make the first letter of each word in this tag upper case. If you use 'uppercase' as the value of text-transform, every letter in the tag will be a capital letter
A:
This code might work good in some cases:
function capitalizeFirstLetter(string) {
return string.charAt(0).toUpperCase() + string.slice(1);
}
console.log(capitalizeFirstLetter('foo')); // Foo
// But if we had like this it won't work well
console.log(capitalizeFirstLetter('fOo')); // FOo
But if you really want to make sure, that there is only the first letter capitalized and the rest is built out of lowercase letters, you could adjust the code like this:
function capitalizeFirstLetter(string) {
return string.charAt(0).toUpperCase() + string.slice(1).toLowerCase();
}
console.log(capitalizeFirstLetter('fOo')); // Foo
A:
The function takes two arguments:
start - the start index;
length - the length of substring to capitalise
String.prototype.subUpper = function () {
var result = this.toString();
var start = 0;
var length = 1;
if (arguments.length > 0) {
start = arguments[0];
if (start < this.length) {
if (arguments.length > 1) {
length = arguments[1];
}
if (start + length > this.length) {
length = this.length - start;
}
var startRest = start + length;
var prefix = start > 0 ? this.substr(0, start) : ''; // String.empty does not exist in JavaScript
var sub = this.substr(start, length);
var suffix = this.substr(startRest, this.length - startRest);
result = prefix + sub.toUpperCase() + suffix;
}
}
return result;
};
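Illustrative usage, assuming the prototype above:
console.log('hello world'.subUpper());     // "Hello world"
console.log('hello world'.subUpper(6));    // "hello World"
console.log('hello world'.subUpper(6, 5)); // "hello WORLD"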
A:
I have been trying to do the same (that is, capitalize the first letter in a string while it is being typed) using jQuery. I searched all through the web for the answer but couldn't find it. However, I was able to get a workaround using the on() function in jQuery, like so:
$("#FirstNameField").on("keydown",function(e){
var str = $("#FirstNameField").val();
if(str.substring()===str.substring(0,1)){
$("#FirstNameField").val(str.substring(0,1).toUpperCase());
}
});
This function actually capitalizes the first letter while the data entrant is typing continuously.
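A hedged alternative sketch: the input event fires after the field's value has been updated, so the first character is available immediately (same illustrative selector as above):
$("#FirstNameField").on("input", function () {
    var v = $(this).val();
    if (v.length > 0) {
        $(this).val(v.charAt(0).toUpperCase() + v.slice(1));
    }
});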
A:
I use something along these lines in my development environment, especially when working with APIs like HTTP:
Suppose you have an HTTP header in which you'd like to capitalize every initial letter in its name and add the hyphen between its constituent words. You may achieve something like that using this basic and simple routine:
'access control allow origin'
.replace(/\b\w/g, function (match) {
return match.toUpperCase();
})
.split(' ')
.join('-');
// Output: 'Access-Control-Allow-Origin'
It may not be the most elegant and attractive function definition out there, but it certainly gets the job done.
A:
Like this:
function capitalize(string,a) {
var tempstr = string.toLowerCase();
if (a == false || a == undefined)
return tempstr.replace(tempstr[0], tempstr[0].toUpperCase());
else {
return tempstr.split(" ").map(function (i) { return i[0].toUpperCase() + i.substring(1) }).join(" ");
}
}
capitalize('stack overflow yeah!', true); // Stack Overflow Yeah!
capitalize('stack stack stack stack overflow yeah!'); // Stack stack stack stack overflow yeah!
https://jsfiddle.net/dgmLgv7b/
A:
A one-liner:
'string'.replace(/(^[a-z])/,function (p) { return p.toUpperCase(); } )
A:
Firstly, I just wanted to clear up what capitalize means in this context.
"This String Is Capitalized" Reliable source
You can see from the example provided this is not what the OP is looking for. What it should say is "How do I make the first letter of a string uppercase" (Not capitalize string)
function ucfirst (str) {
return typeof str != "undefined" ? (str += '', str[0].toUpperCase() + str.substr(1)) : '';
}
Explained
typeof str != "undefined" // Is str set
? // true
str += '' // Turns the string variable into a string
str[0].toUpperCase() // Get the first character and make it upper case
+ // Add
str.substr(1) // String starting from the index 1 (starts at 0)
: // false
''; // Returns an empty string
This will work with any argument or no argument at all.
undefined === ""
"" === ""
"my string" === "My string"
null === "Null"
false === "False"
0 === "0"
true === "True"
[] === ""
[true,0,"",false] === "True,0,,false"
A:
One liner ("inputString can be set to any string"):
inputString.replace(/.{1}/, inputString.charAt(0).toUpperCase())
A:
This one is simple
const upper = lower.replace(/^\w/, c => c.toUpperCase());
A:
Capitalize First Word: Shortest
text.replace(/(^.)/, m => m.toUpperCase())
Capitalize Each Word: Shortest
text.replace(/(^\w|\s\w)/g, m => m.toUpperCase());
If you want to make sure the rest is in lowercase:
text.replace(/(^\w|\s\w)(\S*)/g, (_,m1,m2) => m1.toUpperCase()+m2.toLowerCase())
A:
Any type of string can be converted --
YoUrStRiNg → Yourstring
var str = 'yOuRsTrING'.toLowerCase(); // Output: yourstring
str.charAt(0).toUpperCase() + str.slice(1); // Output: Y + ourstring = Yourstring
A:
You can do str.replace(str[0], str[0].toUpperCase()).
Check this example:
let str = "hello, WORLD!"
let newStr = str.replace(str[0], str[0].toUpperCase())
console.log("str: ", str)
console.log("newStr: ", newStr)
A:
Just install and load Lodash:
import { capitalize } from "lodash";
capitalize('test') // Test
Note that _.capitalize also converts the rest of the string to lower case; use _.upperFirst if you want the rest of the string left untouched.
A:
The currently top-voted answer is right, but it doesn't trim or check the length of the string before capitalising the first character.
String.prototype.ucfirst = function(notrim) {
var s = notrim ? this : this.replace(/(?:(?:^|\n)\s+|\s+(?:$|\n))/g,'').replace(/\s+/g,' ');
return s.length > 0 ? s.charAt(0).toUpperCase() + s.slice(1) : s;
}
Set the notrim argument to prevent trimming the string first:
'pizza'.ucfirst() => 'Pizza'
' pizza'.ucfirst() => 'Pizza'
' pizza'.ucfirst(true) => ' pizza'
A:
This does the same action:
var newStr = string.slice(0,1).toUpperCase() + string.slice(1);
A:
I know this is an old question with a lot of answers but here's my quick snippet.
const capitalize = (str) => str?.split('').map( (e, i) => i === 0 ? e.toUpperCase() : e ).join('')
A:
Solution for Cannot read property 'charAt' of undefined
const capitalize = (string) => {
return string ? string.charAt(0).toUpperCase() + string.slice(1) : "";
}
console.log(capitalize("i am a programmer")); // I am a programmer
A:
This is what I use religiously:
function capitalizeMe(str, force){
str = force ? str.toLowerCase() : str;
return str.replace(/(\b)([a-zA-Z])/g,
function(firstLetter){
return firstLetter.toUpperCase();
});
}
var firstName = capitalizeMe($firstName.val());
A:
If there's Lodash in your project, use upperFirst.
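For example:
import { upperFirst } from 'lodash';

upperFirst('fred'); // => 'Fred'
upperFirst('FRED'); // => 'FRED' (the rest of the string is left untouched)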
A:
function cap(input) {
return input.replace(/[\.\r\n\t\:\;\?\!]\W*(\w)/g, function(match, capture) {
// For other sentences in the text
return match.toUpperCase();
}).replace(/^\W*\w/, function(match, capture) {
// For the first sentence in the text
return match.toUpperCase();
});
}
var a = "hi, dear user. it is a simple test. see you later!\r\nbye";
console.log(cap(a));
// Output: Hi, dear user. It is a simple test. See you later!
// Bye
A:
Another way using RamdaJs, the functional programming way:
firstCapital(str){
const fn = p => R.toUpper(R.head(p)) + R.tail(p);
return fn(str);
}
With multiple words in a string:
firstCapitalAllWords(str){
const fn = p => R.toUpper(R.head(p)) + R.tail(p);
return R.map(fn,R.split(' ', str)).join(' ');
}
A:
Just because you can, doesn't mean you should, however. It requires ECMAScript 6 as the code uses array destructuring.
const capitalizeFirstLetter = s => {
const type = typeof s;
if (type !== "string") {
throw new Error(`Expected string, instead received ${type}`);
}
const [firstChar, ...remainingChars] = s;
return [firstChar.toUpperCase(), ...remainingChars].join("");
};
A:
Here is a nicer and cleaner version:
var str = 'this is a test';
str = str.replace(new RegExp('^' + str[0]), str[0].toUpperCase());
Results:
this is a test --> This is a test
A:
I prefer use a solution oriented to a functional way (mapping array for example):
Array.from(str).map((letter, i) => i === 0 ? letter.toUpperCase() : letter ).join('');
A:
The method takes a value, splits it into an array of characters, and replaces the first character with its upper-case version.
const firstLetterToUpperCase = value => {
return value.replace(
value.split("")["0"], // Split stirng and get the first letter
value
.split("")
["0"].toString()
.toUpperCase() // Split string and get the first letter to replace it with an uppercase value
);
};
A:
If you need to have all words starting with a capital letter, you can use the following function:
const capitalLetters = (s) => {
return s.trim().split(" ").map(i => i[0].toUpperCase() + i.substr(1)).reduce((ac, i) => `${ac} ${i}`);
}
Example:
console.log(`result: ${capitalLetters("this is a test")}`)
// Result: "This Is A Test"
A:
Capitalizing the first letter with validation
function capitalizeFirstLetter(str) {
return (str && typeof str === 'string') ? (str.charAt(0).toUpperCase() + str.slice(1)) : "";
}
Testing
console.log(capitalizeFirstLetter(0)); // Output: ""
console.log(capitalizeFirstLetter(null)); // Output: ""
console.log(capitalizeFirstLetter("test")); // Output: "Test"
console.log(capitalizeFirstLetter({})); // Output: ""
A:
You can do it like this:
let text = "lower case";
text = text.charAt(0).toUpperCase() + text.substring(1, text.length);
A:
EDIT:
I like this one:
yourString.replace(/(^[a-z])/i, (str, firstLetter) => firstLetter.toUpperCase())
A:
Use this Node.js module, the http://stringjs.com/ package, to capitalize your string:
var S = require('string');
S('jon').capitalize().s; //'Jon'
S('JP').capitalize().s; //'Jp'
A:
This one will tolerate possible leading whitespace and will not miss the first letter in a string. Therefore, it might improve on the already good solutions available in the thread.
str = " the Eiffel Tower";
str.replace(/\w/, str.match(/\w/)[0].toUpperCase());
>> " The Eiffel Tower";
But it will throw an error if executed against a blank string, because str.match(/\w/) returns null.
To avoid this error, and to skip unnecessary processing of a blank string or a number, a ternary guard can be used. The test +str != +str is true only for non-numeric strings, exploiting the fact that NaN is the only value not equal to itself:
+str != +str ? str.replace(/\w/, str.match(/\w/)[0].toUpperCase()) : str;
A:
Try this code:
alert("hello".substr(0, 1).toUpperCase() + "hello".substr(1));
It is taking the first character in "hello", capitalizing it and adding the rest of it on.
A:
var a = "this is a test"
console.log(a.replace(/^[a-z]/g, txt => txt.toUpperCase()));
A:
You can do something like this:
mode = "string";
string = mode.charAt(0).toUpperCase() + mode.substr(1,mode.length).toLowerCase();
console.log(string);
This will print
String
|
How do I make the first letter of a string uppercase in JavaScript?
|
How do I make the first letter of a string uppercase, but not change the case of any of the other letters?
For example:
"this is a test" → "This is a test"
"the Eiffel Tower" → "The Eiffel Tower"
"/index.html" → "/index.html"
|
[
"The basic solution is:\n\n\nfunction capitalizeFirstLetter(string) {\n return string.charAt(0).toUpperCase() + string.slice(1);\n}\n\nconsole.log(capitalizeFirstLetter('foo')); // Foo\n\n\n\nSome other answers modify String.prototype (this answer used to as well), but I would advise against this now due to maintainability (hard to find out where the function is being added to the prototype and could cause conflicts if other code uses the same name / a browser adds a native function with that same name in future).\n...and then, there is so much more to this question when you consider internationalisation, as this astonishingly good answer (buried below) shows.\nIf you want to work with Unicode code points instead of code units (for example to handle Unicode characters outside of the Basic Multilingual Plane) you can leverage the fact that String#[@iterator] works with code points, and you can use toLocaleUpperCase to get locale-correct uppercasing:\n\n\nconst capitalizeFirstLetter = ([ first, ...rest ], locale = navigator.language) =>\n first === undefined ? '' : first.toLocaleUpperCase(locale) + rest.join('')\n\nconsole.log(\n capitalizeFirstLetter(''), // [empty string]\n capitalizeFirstLetter('foo'), // Foo\n capitalizeFirstLetter(\"\"), // \"\" (correct!)\n capitalizeFirstLetter(\"italya\", 'tr') // İtalya\" (correct in Turkish Latin!)\n)\n\n\n\nFor even more internationalization options, please see the original answer below.\n",
"Here's a more object-oriented approach:\nObject.defineProperty(String.prototype, 'capitalize', {\n value: function() {\n return this.charAt(0).toUpperCase() + this.slice(1);\n },\n enumerable: false\n});\n\nYou'd call the function, like this:\n\"hello, world!\".capitalize();\n\nWith the expected output being:\n\"Hello, world!\"\n\n",
"In CSS:\np::first-letter {\n text-transform:capitalize;\n}\n\n",
"Here is a shortened version of the popular answer that gets the first letter by treating the string as an array:\nfunction capitalize(s)\n{\n return s[0].toUpperCase() + s.slice(1);\n}\n\nUpdate\nAccording to the comments below this doesn't work in IE 7 or below.\nUpdate 2:\nTo avoid undefined for empty strings (see @njzk2's comment below), you can check for an empty string:\nfunction capitalize(s)\n{\n return s && s[0].toUpperCase() + s.slice(1);\n}\n\nES version\nconst capitalize = s => s && s[0].toUpperCase() + s.slice(1)\n\n// to always return type string event when s may be falsy other than empty-string\nconst capitalize = s => (s && s[0].toUpperCase() + s.slice(1)) || \"\"\n\n",
"If you're interested in the performance of a few different methods posted:\nHere are the fastest methods based on this jsperf test (ordered from fastest to slowest).\nAs you can see, the first two methods are essentially comparable in terms of performance, whereas altering the String.prototype is by far the slowest in terms of performance.\n// 10,889,187 operations/sec\nfunction capitalizeFirstLetter(string) {\n return string[0].toUpperCase() + string.slice(1);\n}\n\n// 10,875,535 operations/sec\nfunction capitalizeFirstLetter(string) {\n return string.charAt(0).toUpperCase() + string.slice(1);\n}\n\n// 4,632,536 operations/sec\nfunction capitalizeFirstLetter(string) {\n return string.replace(/^./, string[0].toUpperCase());\n}\n\n// 1,977,828 operations/sec\nString.prototype.capitalizeFirstLetter = function() {\n return this.charAt(0).toUpperCase() + this.slice(1);\n}\n\n\n",
"I didn’t see any mention in the existing answers of issues related to astral plane code points or internationalization. “Uppercase” doesn’t mean the same thing in every language using a given script.\nInitially I didn’t see any answers addressing issues related to astral plane code points. There is one, but it’s a bit buried (like this one will be, I guess!)\nOverview of the hidden problem and various approaches to it\nMost of the proposed functions look like this:\nfunction capitalizeFirstLetter(str) {\n return str[0].toUpperCase() + str.slice(1);\n}\n\nHowever, some cased characters fall outside the BMP (basic multilingual plane, code points U+0 to U+FFFF). For example take this Deseret text:\ncapitalizeFirstLetter(\"\"); // \"\"\n\nThe first character here fails to capitalize because the array-indexed properties of strings don’t access “characters” or code points*. They access UTF-16 code units. This is true also when slicing — the index values point at code units.\nIt happens to be that UTF-16 code units are 1:1 with USV code points within two ranges, U+0 to U+D7FF and U+E000 to U+FFFF inclusive. Most cased characters fall into those two ranges, but not all of them.\nFrom ES2015 on, dealing with this became a bit easier. String.prototype[@@iterator] yields strings corresponding to code points**. So for example, we can do this:\nfunction capitalizeFirstLetter([ first='', ...rest ]) {\n return [ first.toUpperCase(), ...rest ].join('');\n}\n\ncapitalizeFirstLetter(\"\") // \"\"\n\nFor longer strings, this is probably not terribly efficient*** — we don’t really need to iterate the remainder. We could use String.prototype.codePointAt to get at that first (possible) letter, but we’d still need to determine where the slice should begin. One way to avoid iterating the remainder would be to test whether the first codepoint is outside the BMP; if it isn’t, the slice begins at 1, and if it is, the slice begins at 2.\nfunction capitalizeFirstLetter(str) {\n if (!str) return '';\n\n const firstCP = str.codePointAt(0);\n const index = firstCP > 0xFFFF ? 2 : 1;\n\n return String.fromCodePoint(firstCP).toUpperCase() + str.slice(index);\n}\n\ncapitalizeFirstLetter(\"\") // \"\"\n\nYou could use bitwise math instead of > 0xFFFF there, but it’s probably easier to understand this way and either would achieve the same thing.\nWe can also make this work in ES5 and below by taking that logic a bit further if necessary. There are no intrinsic methods in ES5 for working with codepoints, so we have to manually test whether the first code unit is a surrogate****:\nfunction capitalizeFirstLetter(str) {\n if (!str) return '';\n\n var firstCodeUnit = str[0];\n\n if (firstCodeUnit < '\\uD800' || firstCodeUnit > '\\uDFFF') {\n return str[0].toUpperCase() + str.slice(1);\n }\n\n return str.slice(0, 2).toUpperCase() + str.slice(2);\n}\n\ncapitalizeFirstLetter(\"\") // \"\"\n\nDeeper into internationalization (whose capitalization?)\nAt the start I also mentioned internationalization considerations. Some of these are very difficult to account for because they require knowledge not only of what language is being used, but also may require specific knowledge of the words in the language. For example, the Irish digraph \"mb\" capitalizes as \"mB\" at the start of a word. Another example, the German eszett, never begins a word (afaik), but still helps illustrate the problem. 
The lowercase eszett (“ß”) capitalizes to “SS,” but “SS” could lowercase to either “ß” or “ss” — you require out-of-band knowledge of the German language to know which is correct!\nThe most famous example of these kinds of issues, probably, is Turkish. In Turkish Latin, the capital form of i is İ, while the lowercase form of I is ı — they’re two different letters. Fortunately we do have a way to account for this:\nfunction capitalizeFirstLetter([ first='', ...rest ], locale) {\n return [ first.toLocaleUpperCase(locale), ...rest ].join('');\n}\n\ncapitalizeFirstLetter(\"italy\", \"en\") // \"Italy\"\ncapitalizeFirstLetter(\"italya\", \"tr\") // \"İtalya\"\n\nIn a browser, the user’s most-preferred language tag is indicated by navigator.language, a list in order of preference is found at navigator.languages, and a given DOM element’s language can be obtained (usually) with Object(element.closest('[lang]')).lang || YOUR_DEFAULT_HERE in multilanguage documents.\nIn agents which support Unicode property character classes in RegExp, which were introduced in ES2018, we can clean stuff up further by directly expressing what characters we’re interested in:\nfunction capitalizeFirstLetter(str, locale=navigator.language) {\n return str.replace(/^\\p{CWU}/u, char => char.toLocaleUpperCase(locale));\n}\n\nThis could be tweaked a bit to also handle capitalizing multiple words in a string with fairly good accuracy for at least some languages, though outlying cases will be hard to avoid completely if doing so no matter what the primary language is.\nThe CWU or Changes_When_Uppercased character property matches all code points which change when uppercased in the generic case where specific locale data is absent. There are other interesting case-related Unicode character properties that you may wish to play around with. It’s a cool zone to explore but we’d go on all day if we enumerated em all here. Here’s something to get your curiosity going if you’re unfamiliar, though: \\p{Lower} is a larger group than \\p{LowercaseLetter} (aka \\p{Ll}) — conveniently illustrated by the default character set comparison in this tool provided by Unicode. (NB: not everything you can reference there is also available in ES regular expressions, but most of the stuff you’re likely to want is).\nAlternatives to case-mapping in JS (Firefox & CSS love the Dutch!)\nIf digraphs with unique locale/language/orthography capitalization rules happen to have a single-codepoint “composed” representation in Unicode, these might be used to make one’s capitalization expectations explicit even in the absence of locale data. For example, we could prefer the composed i-j digraph, ij / U+133, associated with Dutch, to ensure a case-mapping to uppercase IJ / U+132:\ncapitalizeFirstLetter('ijsselmeer'); // \"IJsselmeer\"\n\nOn the other hand, precomposed digraphs and similar are sometimes deprecated (like that one, it seems!) and may be undesirable in interchanged text regardless due to the potential copypaste nuisance if that’s not the normal way folks type the sequence in practice. Unfortunately, in the absence of the precomposition “hint,” an explicit locale won’t help here (at least as far as I know). 
If we spell ijsselmeer with an ordinary i + j, capitalizeFirstLetter will produce the wrong result even if we explicitly indicate nl as the locale:\ncapitalizeFirstLetter('ijsselmeer', 'nl'); // \"Ijsselmeer\" :(\n\n(I’m not entirely sure whether there are some such cases where the behavior comes down to ICU data availability — perhaps someone else could say.)\nIf the point of the transformation is to display textual content in a web browser, though, you have an entirely different option available that will likely be your best bet: leveraging features of the web platform’s other core languages, HTML and CSS. Armed with HTML’s lang=... and CSS’s text-transform:..., you’ve got a (pseudo-)declarative solution that leaves extra room for the user agent to be “smart.” A JS API needs to have predictable outcomes across all browsers (generally) and isn’t free to experiment with heuristics. The user-agent itself is obligated only to its user, though, and heuristic solutions are fair game when the output is for a human being. If we tell it “this text is Dutch, but please display it capitalized,” the particular outcome might now vary between browsers, but it’s likely going to be the best each of them could do. Let’s see:\n\n\n<!DOCTYPE html>\n<dl>\n<dt>Untransformed\n<dd>ijsselmeer\n<dt>Capitalized with CSS and <code>lang=en</code>\n<dd lang=\"en\" style=\"text-transform: capitalize\">ijsselmeer\n<dt>Capitalized with CSS and <code>lang=nl</code>\n<dd lang=\"nl\" style=\"text-transform: capitalize\">ijsselmeer\n\n\n\nIn Chromium at the time of writing, both the English and Dutch lines come out as Ijsselmeer — so it does no better than JS. But try it in current Firefox! The element that we told the browser contains Dutch will be correctly rendered as IJsselmeer there.\nThis solution is purpose-specific (it’s not gonna help you in Node, anyway) but it was silly of me not to draw attention to it previously given some folks might not realize they’re googling the wrong question. Thanks @paul23 for clarifying more about the nature of the IJ digraph in practice and prompting further investigation!\n\nAs of January 2021, all major engines have implemented the Unicode property character class feature, but depending on your target support range you may not be able to use it safely yet. The last browser to introduce support was Firefox (78; June 30, 2020). You can check for support of this feature with the Kangax compat table. Babel can be used to compile RegExp literals with property references to equivalent patterns without them, but be aware that the resulting code can sometimes be enormous. You probably would not want to do this unless you’re certain the tradeoff is justified for your use case.\n\nIn all likelihood, people asking this question will not be concerned with Deseret capitalization or internationalization. But it’s good to be aware of these issues because there’s a good chance you’ll encounter them eventually even if they aren’t concerns presently. They’re not “edge” cases, or rather, they’re not by-definition edge cases — there’s a whole country where most people speak Turkish, anyway, and conflating code units with codepoints is a fairly common source of bugs (especially with regard to emoji). Both strings and language are pretty complicated!\n\n* The code units of UTF-16 / UCS2 are also Unicode code points in the sense that e.g. U+D800 is technically a code point, but that’s not what it “means” here ... sort of ... though it gets pretty fuzzy. 
What the surrogates definitely are not, though, is USVs (Unicode scalar values).\n** Though if a surrogate code unit is “orphaned” — i.e., not part of a logical pair — you could still get surrogates here, too.\n*** maybe. I haven’t tested it. Unless you have determined capitalization is a meaningful bottleneck, I probably wouldn’t sweat it — choose whatever you believe is most clear and readable.\n**** such a function might wish to test both the first and second code units instead of just the first, since it’s possible that the first unit is an orphaned surrogate. For example the input \"\\uD800x\" would capitalize the X as-is, which may or may not be expected.\n",
"For another case I need it to capitalize the first letter and lowercase the rest. The following cases made me change this function:\n//es5\nfunction capitalize(string) {\n return string.charAt(0).toUpperCase() + string.slice(1).toLowerCase();\n}\ncapitalize(\"alfredo\") // => \"Alfredo\"\ncapitalize(\"Alejandro\")// => \"Alejandro\ncapitalize(\"ALBERTO\") // => \"Alberto\"\ncapitalize(\"ArMaNdO\") // => \"Armando\"\n\n// es6 using destructuring \nconst capitalize = ([first,...rest]) => first.toUpperCase() + rest.join('').toLowerCase();\n\n",
"This is the 2018 ECMAScript 6+ Solution:\n\n\nconst str = 'the Eiffel Tower';\r\nconst newStr = `${str[0].toUpperCase()}${str.slice(1)}`;\r\nconsole.log('Original String:', str); // the Eiffel Tower\r\nconsole.log('New String:', newStr); // The Eiffel Tower\n\n\n\n",
"If you're already (or considering) using Lodash, the solution is easy:\n_.upperFirst('fred');\n// => 'Fred'\n\n_.upperFirst('FRED');\n// => 'FRED'\n\n_.capitalize('fred') //=> 'Fred'\n\nSee their documentation: https://lodash.com/docs#capitalize\n_.camelCase('Foo Bar'); //=> 'fooBar'\nhttps://lodash.com/docs/4.15.0#camelCase\n_.lowerFirst('Fred');\n// => 'fred'\n\n_.lowerFirst('FRED');\n// => 'fRED'\n\n_.snakeCase('Foo Bar');\n// => 'foo_bar'\n\nVanilla JavaScript for first upper case:\nfunction upperCaseFirst(str){\n return str.charAt(0).toUpperCase() + str.substring(1);\n}\n\n",
"There is a very simple way to implement it by replace. For ECMAScript 6:\n'foo'.replace(/^./, str => str.toUpperCase())\n\nResult:\n'Foo'\n\n",
"Capitalize the first letter of all words in a string: \nfunction ucFirstAllWords( str )\n{\n var pieces = str.split(\" \");\n for ( var i = 0; i < pieces.length; i++ )\n {\n var j = pieces[i].charAt(0).toUpperCase();\n pieces[i] = j + pieces[i].substr(1);\n }\n return pieces.join(\" \");\n}\n\n",
"CSS only\nIf the transformation is needed only for displaying on a web page:\np::first-letter {\n text-transform: uppercase;\n}\n\n\nDespite being called \"::first-letter\", it applies to the first character, i.e. in case of string %a, this selector would apply to % and as such a would not be capitalized.\nIn IE9+ or IE5.5+ it's supported in legacy notation with only one colon (:first-letter).\n\nES2015 one-liner\nconst capitalizeFirstChar = str => str.charAt(0).toUpperCase() + str.substring(1);\n\nRemarks\n\nIn the benchmark I performed, there was no significant difference between string.charAt(0) and string[0]. Note however, that string[0] would be undefined for an empty string, so the function would have to be rewritten to use \"string && string[0]\", which is way too verbose, compared to the alternative.\nstring.substring(1) is faster than string.slice(1).\n\nBenchmark between substring() and slice()\nThe difference is rather minuscule nowadays (run the test yourself):\n\n21,580,613.15 ops/s ±1.6% for substring(),\n21,096,394.34 ops/s ±1.8% (2.24% slower) for slice().\n\n\n",
"It's always better to handle these kinds of stuff using CSS first, in general, if you can solve something using CSS, go for that first, then try JavaScript to solve your problems, so in this case try using :first-letter in CSS and apply text-transform:capitalize;\nSo try creating a class for that, so you can use it globally, for example: .first-letter-uppercase and add something like below in your CSS:\n.first-letter-uppercase:first-letter {\n text-transform:capitalize;\n}\n\nAlso the alternative option is JavaScript, so the best gonna be something like this:\nfunction capitalizeTxt(txt) {\n return txt.charAt(0).toUpperCase() + txt.slice(1); //or if you want lowercase the rest txt.slice(1).toLowerCase();\n}\n\nand call it like:\ncapitalizeTxt('this is a test'); // return 'This is a test'\ncapitalizeTxt('the Eiffel Tower'); // return 'The Eiffel Tower'\ncapitalizeTxt('/index.html'); // return '/index.html'\ncapitalizeTxt('alireza'); // return 'Alireza'\ncapitalizeTxt('dezfoolian'); // return 'Dezfoolian'\n\nIf you want to reuse it over and over, it's better attach it to javascript native String, so something like below:\nString.prototype.capitalizeTxt = String.prototype.capitalizeTxt || function() {\n return this.charAt(0).toUpperCase() + this.slice(1);\n}\n\nand call it as below:\n'this is a test'.capitalizeTxt(); // return 'This is a test'\n'the Eiffel Tower'.capitalizeTxt(); // return 'The Eiffel Tower'\n'/index.html'.capitalizeTxt(); // return '/index.html'\n'alireza'.capitalizeTxt(); // return 'Alireza'\n\n",
"String.prototype.capitalize = function(allWords) {\n return (allWords) ? // If all words\n this.split(' ').map(word => word.capitalize()).join(' ') : // Break down the phrase to words and then recursive\n // calls until capitalizing all words\n this.charAt(0).toUpperCase() + this.slice(1); // If allWords is undefined, capitalize only the first word,\n // meaning the first character of the whole string\n}\n\nAnd then:\n \"capitalize just the first word\".capitalize(); ==> \"Capitalize just the first word\"\n \"capitalize all words\".capitalize(true); ==> \"Capitalize All Words\"\n\nUpdate November 2016 (ES6), just for fun:\nconst capitalize = (string = '') => [...string].map( // Convert to array with each item is a char of\n // string by using spread operator (...)\n (char, index) => index ? char : char.toUpperCase() // Index true means not equal 0, so (!index) is\n // the first character which is capitalized by\n // the `toUpperCase()` method\n ).join('') // Return back to string\n\nthen capitalize(\"hello\") // Hello\n",
"SHORTEST 3 solutions, 1 and 2 handle cases when s string is \"\", null and undefined:\n s&&s[0].toUpperCase()+s.slice(1) // 32 char\n\n s&&s.replace(/./,s[0].toUpperCase()) // 36 char - using regexp\n\n'foo'.replace(/./,x=>x.toUpperCase()) // 31 char - direct on string, ES6\n\n\n\nlet s='foo bar';\r\n\r\nconsole.log( s&&s[0].toUpperCase()+s.slice(1) );\r\n\r\nconsole.log( s&&s.replace(/./,s[0].toUpperCase()) );\r\n\r\nconsole.log( 'foo bar'.replace(/./,x=>x.toUpperCase()) );\n\n\n\n",
"We could get the first character with one of my favorite RegExp, looks like a cute smiley: /^./\nString.prototype.capitalize = function () {\n return this.replace(/^./, function (match) {\n return match.toUpperCase();\n });\n};\n\nAnd for all coffee-junkies:\nString::capitalize = ->\n @replace /^./, (match) ->\n match.toUpperCase()\n\n...and for all guys who think that there's a better way of doing this, without extending native prototypes:\nvar capitalize = function (input) {\n return input.replace(/^./, function (match) {\n return match.toUpperCase();\n });\n};\n\n",
"Here is a function called ucfirst()(short for \"upper case first letter\"):\nfunction ucfirst(str) {\n var firstLetter = str.substr(0, 1);\n return firstLetter.toUpperCase() + str.substr(1);\n}\n\nYou can capitalise a string by calling ucfirst(\"some string\") -- for example,\nucfirst(\"this is a test\") --> \"This is a test\"\n\nIt works by splitting the string into two pieces. On the first line it pulls out firstLetter and then on the second line it capitalises firstLetter by calling firstLetter.toUpperCase() and joins it with the rest of the string, which is found by calling str.substr(1).\nYou might think this would fail for an empty string, and indeed in a language like C you would have to cater for this. However in JavaScript, when you take a substring of an empty string, you just get an empty string back.\n",
"Use:\n\n\nvar str = \"ruby java\";\r\n\r\nconsole.log(str.charAt(0).toUpperCase() + str.substring(1));\n\n\n\nIt will output \"Ruby java\" to the console.\n",
"If you use Underscore.js or Lodash, the underscore.string library provides string extensions, including capitalize:\n\n_.capitalize(string) Converts first letter of the string to\nuppercase.\n\nExample:\n_.capitalize(\"foo bar\") == \"Foo bar\"\n\n",
"If you're ok with capitalizing the first letter of every word, and your usecase is in HTML, you can use the following CSS:\n<style type=\"text/css\">\n p.capitalize {text-transform:capitalize;}\n</style>\n<p class=\"capitalize\">This is some text.</p>\n\nThis is from CSS text-transform Property (at W3Schools).\n",
"var capitalized = yourstring[0].toUpperCase() + yourstring.substr(1);\n\n",
"If you are wanting to reformat all-caps text, you might want to modify the other examples as such: \nfunction capitalize (text) {\n return text.charAt(0).toUpperCase() + text.slice(1).toLowerCase();\n}\n\nThis will ensure that the following text is changed:\nTEST => Test\nThis Is A TeST => This is a test\n\n",
"String.prototype.capitalize = function(){\n return this.replace(/(^|\\s)([a-z])/g, \n function(m, p1, p2) {\n return p1 + p2.toUpperCase();\n });\n};\n\nUsage:\ncapitalizedString = someString.capitalize();\n\nThis is a text string => This Is A Text String\n",
"function capitalize(s) {\n // returns the first letter capitalized + the string from index 1 and out aka. the rest of the string\n return s[0].toUpperCase() + s.substr(1);\n}\n\n\n// examples\ncapitalize('this is a test');\n=> 'This is a test'\n\ncapitalize('the Eiffel Tower');\n=> 'The Eiffel Tower'\n\ncapitalize('/index.html');\n=> '/index.html'\n\n",
"yourString.replace(/\\w/, c => c.toUpperCase())\n\nI found this arrow function easiest. Replace matches the first letter character (\\w) of your string and converts it to uppercase. Nothing fancier is necessary.\n",
"var str = \"test string\";\nstr = str.substring(0,1).toUpperCase() + str.substring(1);\n\n",
" \n57 81 different answers for this question, some off-topic, and yet none of them raise the important issue that none of the solutions listed will work with Asian characters, emoji's, and other high Unicode-point-value characters in many browsers. Here is a solution that will:\nconst consistantCapitalizeFirstLetter = \"\\uD852\\uDF62\".length === 1 ?\n function(S) {\n \"use-strict\"; // Hooray! The browser uses UTF-32!\n return S.charAt(0).toUpperCase() + S.substring(1);\n } : function(S) {\n \"use-strict\";\n // The browser is using UCS16 to store UTF-16\n var code = S.charCodeAt(0)|0;\n return (\n code >= 0xD800 && code <= 0xDBFF ? // Detect surrogate pair\n S.slice(0,2).toUpperCase() + S.substring(2) :\n S.charAt(0).toUpperCase() + S.substring(1)\n );\n };\nconst prettyCapitalizeFirstLetter = \"\\uD852\\uDF62\".length === 1 ?\n function(S) {\n \"use-strict\"; // Hooray! The browser uses UTF-32!\n return S.charAt(0).toLocaleUpperCase() + S.substring(1);\n } : function(S) {\n \"use-strict\";\n // The browser is using UCS16 to store UTF-16\n var code = S.charCodeAt(0)|0;\n return (\n code >= 0xD800 && code <= 0xDBFF ? // Detect surrogate pair\n S.slice(0,2).toLocaleUpperCase() + S.substring(2) :\n S.charAt(0).toLocaleUpperCase() + S.substring(1)\n );\n };\n\nDo note that the above solution tries to account for UTF-32. However, the specification officially states that browsers are required to do everything in UTF-16 mapped into UCS2. Nevertheless, if we all come together, do our part, and start preparing for UTF32, then there is a chance that the TC39 may allow browsers to start using UTF-32 (like how Python uses 24-bits for each character of the string). This must seem silly to an English speaker: no one who uses only latin-1 has ever had to deal with Mojibake because Latin-I is supported by all character encodings. But, users in other countries (such as China, Japan, Indonesia, etc.) are not so fortunate. They constantly struggle with encoding problems not just from the webpage, but also from the JavaScript: many Chinese/Japanese characters are treated as two letters by JavaScript and thus may be broken apart in the middle, resulting in � and � (two question-marks that make no sense to the end user). If we could start getting ready for UTF-32, then the TC39 might just allow browsers do what Python did many years ago which had made Python very popular for working with high Unicode characters: using UTF-32.\nconsistantCapitalizeFirstLetter works correctly in Internet Explorer 3+ (when the const is changed to var). prettyCapitalizeFirstLetter requires Internet Explorer 5.5+ (see the top of page 250 of this document). However, these fact are more of just jokes because it is very likely that the rest of the code on your webpage will not even work in Internet Explorer 8 - because of all the DOM and JScript bugs and lack of features in these older browsers. Further, no one uses Internet Explorer 3 or Internet Explorer 5.5 any more.\n",
"Check out this solution:\nvar stringVal = 'master';\nstringVal.replace(/^./, stringVal[0].toUpperCase()); // Returns Master\n\n",
"Only because this is really a one-liner I will include this answer. It's an ES6-based interpolated string one-liner.\nlet setStringName = 'the Eiffel Tower';\nsetStringName = `${setStringName[0].toUpperCase()}${setStringName.substring(1)}`;\n\n",
"with arrow function\nlet fLCapital = s => s.replace(/./, c => c.toUpperCase())\nfLCapital('this is a test') // \"This is a test\"\n\nwith arrow function, another solution\nlet fLCapital = s => s = s.charAt(0).toUpperCase() + s.slice(1);\nfLCapital('this is a test') // \"This is a test\"\n\nwith array and map()\nlet namesCapital = names => names.map(name => name.replace(/./, c => c.toUpperCase()))\nnamesCapital(['james', 'robert', 'mary']) // [\"James\", \"Robert\", \"Mary\"]\n\n",
"The ucfirst function works if you do it like this.\nfunction ucfirst(str) {\n var firstLetter = str.slice(0,1);\n return firstLetter.toUpperCase() + str.substring(1);\n}\n\nThanks J-P for the aclaration.\n",
"yourString.replace(/^[a-z]/, function(m){ return m.toUpperCase() });\n\n(You may encapsulate it in a function or even add it to the String prototype if you use it frequently.)\n",
"Here's my version. I think it's easy to understand and elegant too.\nvar str = \"foo bar baz\";\n\n// Capitalize\nstr.split(' ')\n .map(w => w[0].toUpperCase() + w.substr(1).toLowerCase())\n .join(' ')\n// Returns \"Foo Bar Baz\"\n\n// Capitalize the first letter\nstr.charAt(0).toUpperCase() + str.slice(1)\n// Returns \"Foo bar baz\"\n\n",
"You can do it in one line like this\nstring[0].toUpperCase() + string.substring(1)\n\n",
"A functional approach\nconst capitalize = ([s, ...tring]) =>\n [s.toUpperCase(), ...tring]\n .join('');\n\nThen you could\nconst titleCase = str => \n str\n .split(' ')\n .map(capitalize)\n .join(' ')\n\n",
"The first character of every string is capitalized.\n\n\nfunction capitalize(word){\n return word[0].toUpperCase() + word.slice(1).toLowerCase();\n}\n\nconsole.log(capitalize(\"john\")); //John\nconsole.log(capitalize(\"BRAVO\")); //Bravo\nconsole.log(capitalize(\"BLAne\")); //Blane\n\n\n\n",
"CoffeeScript\nucfirst = (str) -> str.charAt(0).toUpperCase() + str.slice(1)\n\nAs a String prototype method:\nString::capitalize = -> @charAt(0).toUpperCase() + @slice(1)\n\n",
"In CoffeeScript, add to the prototype for a string:\nString::capitalize = ->\n @substr(0, 1).toUpperCase() + @substr(1)\n\nUsage would be:\n\"woobie\".capitalize()\n\nWhich yields:\n\"Woobie\"\n\n",
"function capitalize(string) {\n return string.replace(/^./, Function.call.bind(\"\".toUpperCase));\n}\n\n",
"Posting an edit of @salim's answer to include locale letter transformation.\nvar str = \"test string\";\nstr = str.substring(0,1).toLocaleUpperCase() + str.substring(1);\n\n",
"There are already so many good answers, but you can also use a simple CSS transform:\ntext-transform: capitalize;\n\n\n\ndiv.c {\n text-transform: capitalize;\n}\n<h2>text-transform: capitalize:</h2>\n<div class=\"c\">Lorem ipsum dolor sit amet, consectetur adipiscing elit.</div>\n\n\n\n",
"// Uppercase first letter\nfunction ucfirst(field) {\n field.value = field.value.substr(0, 1).toUpperCase() + field.value.substr(1);\n}\n\nUsage:\n<input type=\"text\" onKeyup=\"ucfirst(this)\" />\n\n",
"Using the JS replace string method & a regular expression w/ a word boundary seems simple.\nCapitalize the first words' first character: \"the eiffel tower\" --> \"The eiffel tower\"\nstr.replace(/\\b\\w/, v => v.toUpperCase())\n\nCapitalize all words' first character: \"the eiffel tower\" --> \"The Eiffel Tower\"\nstr.replace(/\\b\\w/g, v => v.toUpperCase())\n\n",
"One possible solution:\nfunction ConvertFirstCharacterToUpperCase(text) {\n return text.substr(0, 1).toUpperCase() + text.substr(1); \n}\n\nUse this:\n alert(ConvertFirstCharacterToUpperCase(\"this is string\"));\n\nHere is working JS Fiddle\n",
"This solution might be new and probably the simplest.\n\n\nfunction firstUpperCase(input)\n{\n return input[0].toUpperCase() + input.substr(1);\n}\n\nconsole.log(firstUpperCase(\"capitalize first letter\"));\n\n\n\n",
"/*\n * As terse as possible, assuming you're using ES version 6+\n */\nvar upLetter1=s=>s.replace(/./,m=>m.toUpperCase());\n\nconsole.log(upLetter1(\"the quick brown fox jumped over the lazy dog.\"));\n//\\\\ The quick brown fox jumped over the lazy dog. //\\\\\n\n",
"Using an arrow function:\nconst capitalize = string => string[0].toUpperCase() + string.slice(1)\n\n",
"Capitalize and Uncapitalize first Char of a String.\nFunctions to include:\n/** First Character uppercase */\nfunction capitalize(str) {\n return str.charAt(0).toUpperCase() + str.slice(1);\n}\n\n/** First Character lowercase */\nfunction uncapitalize(str) {\n return str.charAt(0).toLowerCase() + str.slice(1);\n}\n\nExample1 \"First Character uppercase\":\nalert(capitalize(\"hello world\"));\n\nResult: Hello world\nExample 2 \"First Character lowercase\":\nalert(uncapitalize(\"Hello World, today is sunny\"));\n\nResult: hello World, today is sunny\n",
"Here is my attempt to make a universal function that can capitalize only the first letter, or the first letter of each word, including words separated by a dash (like some first names in French).\nBy default, the function capitalizes only the first letter and leave the rest untouched.\nParameters:\n\nlc: true to force lower-casing the rest of the word(s)\nall: true to capitalize each word\n\n \nif( typeof String.prototype.capitalize !== \"function\" ) {\n String.prototype.capitalize = function( lc, all ) {\n if( all ) {\n return this.split( \" \" )\n .map( currentValue => currentValue.capitalize( lc ), this )\n .join( \" \" )\n .split( \"-\" )\n .map( currentValue => currentValue.capitalize( false ), this )\n .join( \"-\" );\n } else {\n return lc\n ? this.charAt( 0 ).toUpperCase() + this.slice( 1 ).toLowerCase()\n : this.charAt( 0 ).toUpperCase() + this.slice( 1 );\n }\n }\n}\n\n",
"Or you could use Sugar.js capitalize()\nExample:\n'hello'.capitalize() -> 'Hello'\n'hello kitty'.capitalize() -> 'Hello kitty'\n'hello kitty'.capitalize(true) -> 'Hello Kitty'\n\n",
"Using prototypes\nString.prototype.capitalize = function () {\n return this.charAt(0) + this.slice(1).toLowerCase();\n }\n\nor Using functions\nfunction capitalize(str) {\nreturn str.charAt(0).toUpperCase() + str.slice(1).toLowerCase();\n}\n\n",
"a.slice(0,1).toUpperCase()+a.slice(1)\n\n\nlet a = 'hello',\r\n fix = a.slice(0,1).toUpperCase()+a.slice(1)\r\n \r\nconsole.log(fix)\n\n\n\n",
"There are multiple ways of doing this try some below\nvar lower = 'the Eiffel Tower';\nvar upper = lower.charAt(0).toUpperCase() + lower.substr(1);\n\nAnd if you are comfortable with regular expressions, you do things this way:\nvar upper = lower.replace(/^\\w/, function (chr) {\n return chr.toUpperCase();\n});\n\nAnd you can even take it one step further by using more modern syntax:\nconst upper = lower.replace(/^\\w/, c => c.toUpperCase());\n\nAlso this will take care of negative scenarios as mentioned in example like words starting with special characters like !@#$%^&*()}{{[];':\",.<>/? .\n",
"Unicode and Locale Aware\nUsing current language features:\n\n\nfunction capitalize([firstLetter, ...rest]) {\r\n return [firstLetter.toLocaleUpperCase(), ...rest].join('');\r\n}\r\n\r\nconsole.log(capitalize('foo bar'));\r\nconsole.log(capitalize('ѷҥӕ'))\r\nconsole.log(capitalize('❄⭐'));\r\n\r\n// Title Case\r\nconsole.log(\r\n 'Title Case:',\r\n 'foo bar'\r\n .split(/\\s+/)\r\n .map(capitalize)\r\n .join(' '),\r\n);\n\n\n\nWe accept a destructured string as the only parameter [firstLetter, ...rest], assigning the first character to the variable firstLetter and get an array for the rest of the characters (...rest) bound to the rest variable. E.g. for the string lorem ipsum this should look like:\ncapitalize('lorem ipsum');\n// firstLetter = 'l'\n// rest = ['o', 'r', 'e', 'm', ' ', 'i', 'p', 's', 'u', 'm'];\n\nNow all we need to do is prepend an uppercased version of the first letter firstLetter.toLocaleUpperCase() to the rest array—using the spread operator—and join the resulting array into a string using .join('')\n",
"Elegant\nconst capitalize = ([firstChar, ...rest]) => `${firstChar.toUpperCase()}${rest.join('')}`;\n\n",
"If you go with one of the regex answers, remember they will only work with ASCII characters. All your unicode letters will not be uppercased. The XRegExp library and its unicode plugins solve this problem if you want to stick with regexps. So something like this would work:\nString.prototype.capitalize = function () {\n return this.replace(XRegExp(\"^\\\\p{L}\"), function ($0) { return $0.toUpperCase(); })\n}\n\nConsidering that it still doesn't cover all possibilities (combined characters, see http://www.regular-expressions.info/unicode.html) it seems easier to just use the .charAt(0).toUpperCase() approach.\n",
"This code will also handle extra spaces at the start & end of the string.\n\n\nlet val = ' this is test ';\nval = val.trim();\nval = val.charAt(0).toUpperCase() + val.slice(1);\nconsole.log(\"Value => \", val);\n\n\n\n",
"You can use regex approach :\nstr.replace(/(^|\\s)\\S/g, letter => letter.toUpperCase());\n\n",
"Okay, so I am new to JavaScript. I wasn't able to get the above to work for me. So I started putting it together myself. Here's my idea (about the same, different and working syntax): \nString name = request.getParameter(\"name\");\nname = name.toUpperCase().charAt(0) + name.substring(1);\nout.println(name);\n\nHere I get the variable from a form (it also works manually):\nString name = \"i am a Smartypants...\";\nname = name.toUpperCase().charAt(0) + name.substring(1);\nout.println(name);\n\nOutput: \"I am a Smartypants...\";\n",
"var capitalizeMe = \"string not starting with capital\"\n\nCapitalize with substr\nvar capitalized = capitalizeMe.substr(0, 1).toUpperCase() + capitalizeMe.substr(1);\n\n",
"For just capitalizing the first letter and make the rest of the string lower case:\nfunction capitalize(str) {\n var splittedEnter = str.split(\" \");\n var capitalized;\n var capitalizedResult;\n for (var i = 0 ; i < splittedEnter.length ; i++){\n capitalized = splittedEnter[i].charAt(0).toUpperCase();\n splittedEnter[i] = capitalized + splittedEnter[i].substr(1).toLowerCase();\n }\n return splittedEnter.join(\" \");\n}\n\ncapitalize(\"tHiS wiLL be alL CapiTaLiZED.\");\n\nThe result will be:\n\nThis Will Be All Capitalized.\n\n",
"I would just use a regular expression:\nmyString = ' the quick green alligator...';\nmyString.trim().replace(/^\\w/, (c) => c.toUpperCase());\n\n",
"\n\nfunction capitalizeEachWord(str) {\r\n return str.replace(/\\w\\S*/g, function(txt) {\r\n return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();\r\n });\r\n}\r\n\r\ndocument.write(capitalizeEachWord('foo BAR God bAD'));\n\n\n\n",
"A small improvement - every word in titlecase.\nString.prototype.toTitleCase = function(){\n return this.replace(/\\b(\\w+)/g, function(m,p){ return p[0].toUpperCase() + p.substr(1).toLowerCase() });\n}\n\nvar s = 'heLLo, wOrLD!';\nconsole.log(s.toTitleCase()); // Hello, World!\n\n",
"The simplest solution is:\nlet yourSentence = 'it needs first letter upper case';\n\nyourSentence.charAt(0).toUpperCase() + yourSentence.substr(1);\n\nor:\nyourSentence.charAt(0).toUpperCase() + yourSentence.slice(1);\n\nor:\nyourSentence.substr(0, 1).toUpperCase() + yourSentence.substr(1);\n\n",
"1. We'll be using CSS to achieve this. It can also be set from an external CSS.\n<span text-transform=\"capitalize \">The first letter of each word becomes an upper case</span>\n\n2. Using vanilla JavaScript, we could do:\nlet string = \"test case\"\n\nstring = string[0].toUpperCase() + string.substring(1)\n//return \"Test case\"\n\nExplanation</b/>:\nstring[0].toUpperCase(): converts the first letter in the string to upper case\nstring.substring(1): deletes the first letter in the string and returns the remaining characters\ntext-transform=\"capitalize\": make the first letter of each word in this tag upper case. If you use 'uppercase' as the value of text-transform, every letter in the tag will be a capital letter\n",
"This code might work good in some cases:\n\n\nfunction capitalizeFirstLetter(string) {\n return string.charAt(0).toUpperCase() + string.slice(1);\n}\n\nconsole.log(capitalizeFirstLetter('foo')); // Foo\n// But if we had like this it won't work well\nconsole.log(capitalizeFirstLetter('fOo')); // FOo\n\n\n\nBut if you really want to make sure, that there is only the first letter capitalized and the rest is built out of lowercase letters, you could adjust the code like this:\n\n\nfunction capitalizeFirstLetter(string) {\n return string.charAt(0).toUpperCase() + string.slice(1).toLowerCase();\n}\n \nconsole.log(capitalizeFirstLetter('fOo')); // Foo\n\n\n\n",
"The function takes two arguments:\nstart - the start index; \nlength - the length of substring to capitalise\nString.prototype.subUpper = function () {\n var result = this.toString();\n var start = 0;\n var length = 1;\n if (arguments.length > 0) {\n start = arguments[0];\n if (start < this.length) {\n if (arguments.length > 1) {\n length = arguments[1];\n }\n if (start + length > this.length) {\n length = this.length - start;\n }\n var startRest = start + length;\n var prefix = start > 0 ? this.substr(0, start) : String.empty;\n var sub = this.substr(start, length);\n var suffix = this.substr(startRest, this.length - startRest);\n result = prefix + sub.toUpperCase() + suffix;\n }\n }\n return result;\n};\n\n",
"I have been trying to do same (that is; capitalize the first letter in a string while it is being typed) using jQuery. I searched all through the web for the answer but couldn't find it. However I was able to get a work around using the on() function in jQuery like so:\n$(\"#FirstNameField\").on(\"keydown\",function(e){\n var str = $(\"#FirstNameField\").val();\n if(str.substring()===str.substring(0,1)){\n $(\"#FirstNameField\").val(str.substring(0,1).toUpperCase());\n } \n});\n\nThis function actually capitalizes the first letter while the data entrant is typing continuously.\n",
"I use something along these lines in my development environment, especially when working with APIs like HTTP:\nSuppose you have an HTTP header in which you'd like to capitalize every initial letter in its name and add the hyphen between its constituent words. You may achieve something like that using this basic and simple routine:\n'access control allow origin'\n .replace(/\\b\\w/g, function (match) {\n return match.toUpperCase();\n })\n .split(' ')\n .join('-');\n\n// Output: 'Access-Control-Allow-Origin'\n\nIt is not maybe the most elegant and attractive function definition out there, but it certainly gets the job done.\n",
"Like it:\nfunction capitalize(string,a) {\n var tempstr = string.toLowerCase();\n if (a == false || a == undefined)\n return tempstr.replace(tempstr[0], tempstr[0].toUpperCase());\n else {\n return tempstr.split(\" \").map(function (i) { return i[0].toUpperCase() + i.substring(1) }).join(\" \");\n }\n}\n\n\ncapitalize('stack overflow yeah!',true)); //Stack Overflow Yeah!\n\ncapitalize('stack stack stack stack overflow yeah!'));//Stack overflow yeah!\n\nhttps://jsfiddle.net/dgmLgv7b/\n",
"A one-liner:\n\n\n'string'.replace(/(^[a-z])/,function (p) { return p.toUpperCase(); } )\n\n\n\n",
"Firstly, I just wanted to clear up what capitalize means in this context.\n\"This String Is Capitalized\" Reliable source\nYou can see from the example provided this is not what the OP is looking for. What it should say is \"How do I make the first letter of a string uppercase\" (Not capitalize string)\nfunction ucfirst (str) {\n return typeof str != \"undefined\" ? (str += '', str[0].toUpperCase() + str.substr(1)) : '';\n}\n\nExplained\ntypeof str != \"undefined\" // Is str set\n? // true\nstr += '' // Turns the string variable into a string\nstr[0].toUpperCase() // Get the first character and make it upper case\n+ // Add\nstr.substr(1) // String starting from the index 1 (starts at 0)\n: // false\n''; // Returns an empty string\n\nThis will work with any argument or no argument at all.\nundefined === \"\"\n\"\" === \"\"\n\"my string\" === \"My string\"\nnull === \"Null\"\nundefined === \"\";\nfalse === \"False\"\n0 === \"0\"\ntrue === \"True\"\n[] === \"\"\n[true,0,\"\",false] === \"True,0,,false\"\n\n",
"One liner (\"inputString can be set to any string\"):\ninputString.replace(/.{1}/, inputString.charAt(0).toUpperCase())\n\n",
"This one is simple\nconst upper = lower.replace(/^\\w/, c => c.toUpperCase());\n\n",
"Capitalize First Word: Shortest\ntext.replace(/(^.)/, m => m.toUpperCase())\n\n\nCapitalize Each Word: Shortest\ntext.replace(/(^\\w|\\s\\w)/g, m => m.toUpperCase());\n\nIf you want to make sure the rest is in lowercase:\ntext.replace(/(^\\w|\\s\\w)(\\S*)/g, (_,m1,m2) => m1.toUpperCase()+m2.toLowerCase())\n\n",
"Any type of string can convert --\nYoUrStRiNg → Yourstring\nvar str = yOuRsTrING.toLowerCase(); // Output: yourstring\nstr.charAt(0).toUpperCase() + str.slice(1); // Output: Y + ourstring = Yourstring\n\n",
"You can do str.replace(str[0], str[0].toUpperCase()).\nCheck this example:\n\n\nlet str = \"hello, WORLD!\"\nlet newStr = str.replace(str[0], str[0].toUpperCase())\n\nconsole.log(\"str: \", str)\nconsole.log(\"newStr: \", newStr)\n\n\n\n",
"Just install and load Lodash:\nimport { capitalize } from \"lodash\";\n\ncapitalize('test') // Test\n\n",
"The currently voted answer is right, but it doesn't trim or check the length of the string before capitalising the first character.\nString.prototype.ucfirst = function(notrim) {\n s = notrim ? this : this.replace(/(?:(?:^|\\n)\\s+|\\s+(?:$|\\n))/g,'').replace(/\\s+/g,' ');\n return s.length > 0 ? s.charAt(0).toUpperCase() + s.slice(1) : s;\n}\n\nSet the notrim argument to prevent trimming the string first:\n'pizza'.ucfirst() => 'Pizza'\n' pizza'.ucfirst() => 'Pizza'\n' pizza'.ucfirst(true) => ' pizza'\n\n",
"This does the same action:\nvar newStr = string.slice(0,1).toUpperCase() + string.slice(1);\n\n",
"I know this is an old question with a lot of answers but here's my quick snippet.\nconst capitalize = (str) => str?.split('').map( (e, i) => i === 0 ? e.toUpperCase() : e ).join('')\n\n",
"\nSolution for Cannot read property 'charAt' of undefined\n\nconst capitalize = (string) => {\n return string ? string.charAt(0).toUpperCase() + string.slice(1) : \"\";\n }\n\nconsole.log(capitalize(\"i am a programmer\")); // I am a programmer\n\n",
"This is what I use religiously:\nfunction capitalizeMe(str, force){\n str = force ? str.toLowerCase() : str;\n return str.replace(/(\\b)([a-zA-Z])/g,\n function(firstLetter){\n return firstLetter.toUpperCase();\n });\n}\n\n\nvar firstName = capitalizeMe($firstName.val());\n\n",
"If there's Lodash in your project, use upperFirst.\n",
"function cap(input) {\n return input.replace(/[\\.\\r\\n\\t\\:\\;\\?\\!]\\W*(\\w)/g, function(match, capture) {\n // For other sentences in the text\n return match.toUpperCase();\n }).replace(/^\\W*\\w/, function(match, capture) {\n // For the first sentence in the text\n return match.toUpperCase();\n });;\n}\n\nvar a = \"hi, dear user. it is a simple test. see you later!\\r\\nbye\";\nconsole.log(cap(a));\n// Output: Hi, dear user. It is a simple test. See you later!\n// Bye\n\n",
"Another way using RamdaJs, the functional programming way:\nfirstCapital(str){\n const fn = p => R.toUpper(R.head(p)) + R.tail(p);\n return fn(str);\n}\n\nWith multiple words in a string:\nfirstCapitalAllWords(str){\n const fn = p => R.toUpper(R.head(p)) + R.tail(p);\n return R.map(fn,R.split(' ', str)).join(' ');\n}\n\n",
"Just because you can, doesn't mean you should, however. It requires ECMAScript 6 as the code uses array destructuring.\nconst capitalizeFirstLetter = s => {\n const type = typeof s;\n if (type !== \"string\") {\n throw new Error(`Expected string, instead received ${type}`);\n }\n\n const [firstChar, ...remainingChars] = s;\n\n return [firstChar.toUpperCase(), ...remainingChars].join(\"\");\n};\n\n",
"Here is the nice and cleaner version;\nvar str = '';\nreturn str.replace(new RegExp('^'+str[0]+''), str[0].toUpperCase());\n\nResults:\nthis is a test --> This is a test\n",
"I prefer use a solution oriented to a functional way (mapping array for example): \nArray.from(str).map((letter, i) => i === 0 ? letter.toUpperCase() : letter ).join('');\n\n",
"The method will take a value and then split it to have an array of string.\nconst firstLetterToUpperCase = value => {\n return value.replace(\n value.split(\"\")[\"0\"], // Split stirng and get the first letter \n value\n .split(\"\")\n [\"0\"].toString()\n .toUpperCase() // Split string and get the first letter to replace it with an uppercase value\n );\n};\n\n",
"If you need to have all words starting with a capital letter, you can use the following function:\nconst capitalLetters = (s) => {\n return s.trim().split(\" \").map(i => i[0].toUpperCase() + i.substr(1)).reduce((ac, i) => `${ac} ${i}`);\n}\n\nExample:\nconsole.log(`result: ${capitalLetters(\"this is a test\")}`)\n// Result: \"This Is A Test\"\n\n",
"Capitalizing the first letter with validation\nfunction capitalizeFirstLetter(str) {\n return (str && typeof str === 'string') ? (str.charAt(0).toUpperCase() + str.slice(1)) : \"\";\n}\n\nTesting\nconsole.log(capitalizeFirstLetter(0)); // Output: \"\"\nconsole.log(capitalizeFirstLetter(null)); // Output: \"\"\nconsole.log(capitalizeFirstLetter(\"test\")); // Output: \"Test\"\nconsole.log(capitalizeFirstLetter({})); // Output: \"\"\n\n",
"You should do like that:\nlet text = \"lower case\";\ntext = text.charAt(0).toUpperCase() + text.substring(1, text.length);\n\n",
"EDIT :\nI like this one :\nyourString.replace(/(^[a-z])/i, (str, firstLetter) => firstLetter.toUpperCase())\n\n",
"Use this module of Node.js, the http://stringjs.com/ package, to capitalize your string:\nvar S = require('string');\nS('jon').capitalize().s; //'Jon'\nS('JP').capitalize().s; //'Jp'\n\n",
"This one will tolerate possible leading whitespaces and will not miss the target of the first letter in a string. Therefore, it might improve already good solutions available on the thread. \nstr = \" the Eifel Tower\";\nstr.replace(/\\w/, str.match(/\\w/)[0].toUpperCase());\n>> \" The Eifel Tower\";\n\n!But, will cause a 'soft' error if executed against a blank string.\nTo avoid this possible error or unnecessary processing of a blank string or a number, a ternary conditional guarding can be used:\n+str!=+str ? str.replace(/\\w/, str.match(/\\w/)[0].toUpperCase()) : str;\n\n",
"Try this code:\nalert(\"hello\".substr(0, 1).toUpperCase() + \"hello\".substr(1));\n\nIt is taking the first character in \"hello\", capitalizing it and adding the rest of it on.\n",
"var a = \"this is a test\"\nconsole.log(a.replace(/^[a-z]/g, txt => txt.toUpperCase()));\n\n",
"You can do something like this:\nmode = \"string\";\nstring = mode.charAt(0).toUpperCase() + mode.substr(1,mode.length).toLowerCase();\nconsole.log(string);\n\nThis will print\nString\n"
] |
[
7402,
1597,
930,
460,
274,
234,
170,
96,
92,
75,
68,
65,
59,
58,
54,
53,
52,
52,
49,
42,
40,
39,
35,
35,
33,
30,
26,
23,
22,
22,
20,
20,
20,
20,
16,
15,
13,
13,
13,
13,
13,
12,
12,
11,
11,
11,
11,
11,
10,
10,
10,
10,
10,
10,
10,
9,
9,
9,
8,
8,
8,
8,
6,
6,
6,
6,
6,
5,
5,
5,
5,
5,
5,
5,
5,
5,
5,
5,
5,
4,
4,
4,
4,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3,
2,
2,
2,
2,
2
] |
[
"Please use lodash\nimport { capitalize } from 'lodash';\n/** call it */\ncapitalize('word') //Word\n\n",
"Easy peasy:\n// OK, agreed so here is the edited version. I can't go simple beyond this.\n\n\nfunction FirstUpperCase(inputString){\n return inputString.replace(inputString[0],inputString[0].toUpperCase());\n};\n\n\n\nInput: hello student\nOutput: Hello student\n",
"A simple way to do it is:\n\n\nlet str = \"i want to be capitalized.\" // creates the string\n\nlet splittedStr = str.split(\" \") // returns an array\nlet array = [] // creates a array that will be used as output\nlet finalStr = \"\" // the output string\nsplittedStr.forEach(e => array.push(e[0].toUpperCase() + e.slice(1, e.length)))\n\nfinalStr = array.join(\" \") // divide the array elements and join them separated by \" \"s\n\nconsole.log(finalStr) // I Want To Be Capitalized.\n\n\n\nand if you wish to add it to String.prototype:\n\n\nString.prototype.toCapital = function() {\n let str = this\n let splittedStr = str.split(\" \") // returns an array\n let array = [] // creates a array that will be used as output\n let finalStr = \"\" // the output string\n splittedStr.forEach(function(e) { array.push(e[0].toUpperCase() + e.slice(1, e.length))}) \n\n finalStr = array.join(\" \") // divide the array elements and join them separated by \" \"s\n \n return finalStr\n}\n\nconsole.log(\"added to string.prototype!\".toCapital())\n\n\n\n",
"var nameP = prompt(\"please enter your name\");\nvar nameQ = nameP.slice(0,1);\nvar nameR = nameP.slice(1,100);\nnameQ = nameQ.toUpperCase();\nnameP = nameQ + nameR;\nconsole.log(\"Hello! \" + nameP);\n\nOutput:\nHello! Alex\n\n",
"If I may alter the code a little. I found that if I run an all caps string through this function, nothing happens. So... here is my tid bit. Force the string to lower case first. \nString.prototype.capitalize = function(){\n return this.toLowerCase().replace( /(^|\\s)([a-z])/g , function(m, p1, p2) {\n return p1 + p2.toUpperCase();\n });\n}\n\n"
] |
[
-1,
-2,
-2,
-3,
-8
] |
[
"capitalize",
"javascript",
"letter",
"string"
] |
stackoverflow_0001026069_capitalize_javascript_letter_string.txt
|
Q:
TypeError: JwtStrategy requires a secret or key
I've read the million threads with the same issue but I couldn't solve it :(
This is an old project I made and I need to access it again, but I'm getting the following error when running npm start:
> [email protected] start
> npm run tsc && npm run serve
> [email protected] tsc
> tsc
> [email protected] serve
> ts-node src/server.ts
{"level":"debug","message":"Logging initialized at debug level"}
C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\passport-jwt\lib\strategy.js:45
throw new TypeError('JwtStrategy requires a secret or key');
^
TypeError: JwtStrategy requires a secret or key
at new JwtStrategy (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\passport-jwt\lib\strategy.js:45:15)
at Object.<anonymous> (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\src\config\passport.ts:6:28)
at Module._compile (node:internal/modules/cjs/loader:1149:14)
at Module.m._compile (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:858:23)
at Module._extensions..js (node:internal/modules/cjs/loader:1203:10)
at Object.require.extensions.<computed> [as .ts] (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:861:12)
at Module.load (node:internal/modules/cjs/loader:1027:32)
at Function.Module._load (node:internal/modules/cjs/loader:868:12)
at Module.require (node:internal/modules/cjs/loader:1051:19)
at require (node:internal/modules/cjs/helpers:103:18)
at Object.<anonymous> (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\src\app.ts:16:1)
at Module._compile (node:internal/modules/cjs/loader:1149:14)
at Module.m._compile (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:858:23)
at Module._extensions..js (node:internal/modules/cjs/loader:1203:10)
at Object.require.extensions.<computed> [as .ts] (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:861:12)
at Module.load (node:internal/modules/cjs/loader:1027:32)
This is src/config/passport.ts:
import passport from 'passport'
import { Strategy as JwtStrategy, ExtractJwt } from 'passport-jwt'
import userServices from '../services/userServices'
import { JWT_SECRET } from '../util/secret'
export const jwtStrategy = new JwtStrategy(
{
secretOrKey: JWT_SECRET,
jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
},
async (payload: any, done: any) => {
const userEmail = payload.email
const foundUser = await userServices.findUserByEmail(userEmail)
done(null, foundUser)
}
)
I don't have a .env file (which I don't need, right?)
When running "npm i" I'm getting a bunch of errors as well, but I could --force it:
npm WARN ERESOLVE overriding peer dependency
npm WARN While resolving: [email protected]
npm WARN Found: [email protected]
npm WARN node_modules/ts-node
npm WARN dev ts-node@"^8.6.2" from the root project
npm WARN
npm WARN Could not resolve dependency:
npm WARN peerOptional ts-node@">=9.0.0" from [email protected]
npm WARN node_modules/jest-config
npm WARN jest-config@"^27.5.1" from @jest/[email protected]
npm WARN node_modules/@jest/core
npm WARN 1 more (jest-cli)
npm WARN
npm WARN Conflicting peer dependency: [email protected]
npm WARN node_modules/ts-node
npm WARN peerOptional ts-node@">=9.0.0" from [email protected]
npm WARN node_modules/jest-config
npm WARN jest-config@"^27.5.1" from @jest/[email protected]
npm WARN node_modules/@jest/core
npm WARN 1 more (jest-cli)
npm ERR! code ERESOLVE
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/jest
npm ERR! dev jest@"^27.5.1" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer jest@">=26 <27" from [email protected]
npm ERR! node_modules/ts-jest
npm ERR! dev ts-jest@"^26.5.1" from the root project
npm ERR!
npm ERR! Conflicting peer dependency: [email protected]
npm ERR! node_modules/jest
npm ERR! peer jest@">=26 <27" from [email protected]
npm ERR! node_modules/ts-jest
npm ERR! dev ts-jest@"^26.5.1" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
You can see the whole repo here.
Thanks!
A:
I'm not going to attempt to fix your conflicts, but npm i --force will allow everything to be installed.
You will also need to place a .env file in the root and ensure it has a JWT_SECRET secret value.
There are also a lot of other .env values required. Your .env should look like this (or production):
JWT_SECRET = 'jhagdhjwf'
MONGODB_URI = 'mongodb://localhost:27017/local'
PORT = 3000
CLOUD_NAME = 'Your cloud name'
CLOUDINARY_API_KEY = '123123123123'
CLOUDINARY_API_SECRET = 'abc123abc123abc123abc123'
NODE_ENV = 'production'
I don't know why you thought an .env was not required. I thought you said this was your project?
As you will also know, you have to have your Cloudinary API config and also a running MongoDB server on which you store your data.
E.g., install Docker and run this command to get a quick local instance running:
docker run -d -p 27017:27017 --name test-mongo mongo:latest
I did a pull of the code and everything works locally. Add this .env and it will work:
> [email protected] start
> npm run tsc && npm run serve
> [email protected] tsc
> tsc
> [email protected] serve
> ts-node src/server.ts
{"level":"debug","message":"Logging initialized at debug level"}
(node:78699) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
App running on port 3000
Opening http://localhost:3000/api/v1/users will give you the expected [] response. I assume you have a front end somewhere.
Everything should be working as expected.
Extra: Your import { JWT_SECRET } from '../util/secret' is not the right file name. In your repo it is https://github.com/AlanFPS/mern-finance-server/blob/master/src/util/seccret.ts. Your import is looking for secret but your file is called seccret (two c's).
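For reference, here is a minimal sketch of what that secrets module typically looks like, assuming the project loads its .env with the dotenv package (the wiring shown is an assumption about the repo, not taken from it; only the JWT_SECRET name matches the import above):
// src/util/secret.ts (illustrative sketch, assuming dotenv is used)
import dotenv from 'dotenv'

dotenv.config() // loads .env from the project root into process.env

export const JWT_SECRET = process.env.JWT_SECRET as string

// Failing fast here gives a clearer message than the
// "JwtStrategy requires a secret or key" thrown later by passport-jwt
if (!JWT_SECRET) {
  throw new Error('JWT_SECRET is missing; add it to your .env file')
}

With a module like this, a missing .env leaves process.env.JWT_SECRET undefined, which is exactly what makes new JwtStrategy(...) throw the error above.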
|
TypeError: JwtStrategy requires a secret or key
|
I've read the million threads with the same issue but I couldn't solve it :(
This is an old project I made and I need to access it again, but I'm getting the following error when running npm start:
> [email protected] start
> npm run tsc && npm run serve
> [email protected] tsc
> tsc
> [email protected] serve
> ts-node src/server.ts
{"level":"debug","message":"Logging initialized at debug level"}
C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\passport-jwt\lib\strategy.js:45
throw new TypeError('JwtStrategy requires a secret or key');
^
TypeError: JwtStrategy requires a secret or key
at new JwtStrategy (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\passport-jwt\lib\strategy.js:45:15)
at Object.<anonymous> (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\src\config\passport.ts:6:28)
at Module._compile (node:internal/modules/cjs/loader:1149:14)
at Module.m._compile (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:858:23)
at Module._extensions..js (node:internal/modules/cjs/loader:1203:10)
at Object.require.extensions.<computed> [as .ts] (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:861:12)
at Module.load (node:internal/modules/cjs/loader:1027:32)
at Function.Module._load (node:internal/modules/cjs/loader:868:12)
at Module.require (node:internal/modules/cjs/loader:1051:19)
at require (node:internal/modules/cjs/helpers:103:18)
at Object.<anonymous> (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\src\app.ts:16:1)
at Module._compile (node:internal/modules/cjs/loader:1149:14)
at Module.m._compile (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:858:23)
at Module._extensions..js (node:internal/modules/cjs/loader:1203:10)
at Object.require.extensions.<computed> [as .ts] (C:\Users\alan_\Desktop\cdng\mern-finance-server-master\node_modules\ts-node\src\index.ts:861:12)
at Module.load (node:internal/modules/cjs/loader:1027:32)
This is src/config/passport.ts:
import passport from 'passport'
import { Strategy as JwtStrategy, ExtractJwt } from 'passport-jwt'
import userServices from '../services/userServices'
import { JWT_SECRET } from '../util/secret'
export const jwtStrategy = new JwtStrategy(
{
secretOrKey: JWT_SECRET,
jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
},
async (payload: any, done: any) => {
const userEmail = payload.email
const foundUser = await userServices.findUserByEmail(userEmail)
done(null, foundUser)
}
)
I don't have a .env file (which I don't need, right?)
When running "npm i" I'm getting a bunch of errors as well, but I could --force it:
npm WARN ERESOLVE overriding peer dependency
npm WARN While resolving: [email protected]
npm WARN Found: [email protected]
npm WARN node_modules/ts-node
npm WARN dev ts-node@"^8.6.2" from the root project
npm WARN
npm WARN Could not resolve dependency:
npm WARN peerOptional ts-node@">=9.0.0" from [email protected]
npm WARN node_modules/jest-config
npm WARN jest-config@"^27.5.1" from @jest/[email protected]
npm WARN node_modules/@jest/core
npm WARN 1 more (jest-cli)
npm WARN
npm WARN Conflicting peer dependency: [email protected]
npm WARN node_modules/ts-node
npm WARN peerOptional ts-node@">=9.0.0" from [email protected]
npm WARN node_modules/jest-config
npm WARN jest-config@"^27.5.1" from @jest/[email protected]
npm WARN node_modules/@jest/core
npm WARN 1 more (jest-cli)
npm ERR! code ERESOLVE
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/jest
npm ERR! dev jest@"^27.5.1" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer jest@">=26 <27" from [email protected]
npm ERR! node_modules/ts-jest
npm ERR! dev ts-jest@"^26.5.1" from the root project
npm ERR!
npm ERR! Conflicting peer dependency: [email protected]
npm ERR! node_modules/jest
npm ERR! peer jest@">=26 <27" from [email protected]
npm ERR! node_modules/ts-jest
npm ERR! dev ts-jest@"^26.5.1" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
You can see the whole repo here.
Thanks!
|
[
"I'm not going to attempt to fix your conflicts, but npm i -force will allow everything to be installed.\nYou will also need to place a .env file in the root and ensure it has a JWT_SECRET secret value.\nThere are also a lot of other .env values required. Your .env should look like this (or production):\nJWT_SECRET = 'jhagdhjwf'\nMONGODB_URI = 'mongodb://localhost:27017/local'\nPORT = 3000\n\nCLOUD_NAME = 'Your cloud name'\nCLOUDINARY_API_KEY = '123123123123'\nCLOUDINARY_API_SECRET = 'abc123abc123abc123abc123'\n\nNODE_ENV = 'production'\n\nI don't know why you didn't think an .env was not required. I thought you said this was your project?\nAs you will also know, you have to have cloudinary API config and also a running mongodb server, on which you store your data.\nEg, install docker and run the command to get a quick running local version:\ndocker run -d -p 27017:27017 --name test-mongo mongo:latest\nI did a pull of the code and everything works locally. Add this .env and it will work:\n\n> [email protected] start\n> npm run tsc && npm run serve\n\n\n> [email protected] tsc\n> tsc\n\n\n> [email protected] serve\n> ts-node src/server.ts\n\n{\"level\":\"debug\",\"message\":\"Logging initialized at debug level\"}\n(node:78699) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.\n(Use `node --trace-deprecation ...` to show where the warning was created)\nApp running on port 3000\n\nOpening http://localhost:3000/api/v1/users will give you the expected [] response. I assume you have a front end somewhere.\nEverything should be working as expected.\nExtra: Your import { JWT_SECRET } from '../util/secret' is not the right file name. In your repo it is https://github.com/AlanFPS/mern-finance-server/blob/master/src/util/seccret.ts. Your import is looking for secret but your file is called seccret (two c's).\n"
] |
[
0
] |
[] |
[] |
[
"express",
"jwt",
"mern",
"mongodb",
"stack"
] |
stackoverflow_0074659948_express_jwt_mern_mongodb_stack.txt
|
Q:
Getting the center point of a quadratic Bezier Line with SKPath (Skia Sharp)
I draw Bezier Lines with the QuadTo-Method on a SKPath. My requirement is to get a point on the rendered Bezier Line which is more or less in the middle of the line. I use this point to show a label on the line and to provide a hit point to interact with the line.
Currently, I shoehorned in a simple algorithm which tries to resolve the points of the line via GetFillPath and then tries to detect the point nearest to the midpoint of an imaginary straight line between the Bezier’s start and end.
This works; however, it feels extremely brutish. Is there a more sophisticated way to fulfill my requirement?
A:
The quadratic Bezier has a pretty simple parametric formula to calculate points lying on it, you can find it in many places, e.g. https://en.wikipedia.org/wiki/B%C3%A9zier_curve
As per the description of your problem, it is important to notice that you don't specifically want to calculate the point B(0.5), as the lengths of the curves (P0, B(0.5)) and (B(0.5), P2) can be different for non-symmetric Beziers.
What I would do is:
Flatten the curve to a polyline with an arbitrary precision p (selecting the t params so that for every 3 consecutive values tk < tk+1 < tk+2, the distance between the line (Pk, Pk+2) and the point B(tk+1) is < p)
Find k such that: L(P0, ..., B(tk)) <= L(P0, ..., P2)/2 < L(P0, ..., B(tk+1)), where L is the length of the polyline.
Now, with k found, we know our middle point tmid fulfills tk <= tmid < tk+1. This can again be computed to arbitrary precision.
For any higher-degree Beziers you can use the de Casteljau algorithm to calculate points for any given 0 <= t <= 1.
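To make the recipe concrete, here is a minimal C# sketch (plain tuples instead of SkiaSharp types, and uniform sampling as a crude stand-in for the adaptive flattening above; QuadBezier and ArcLengthMidpoint are illustrative names, not SkiaSharp API):
using System;

static class BezierSketch
{
    // Evaluate a quadratic Bezier at parameter t via de Casteljau
    static (double X, double Y) QuadBezier(
        (double X, double Y) p0, (double X, double Y) p1, (double X, double Y) p2, double t)
    {
        double ax = p0.X + (p1.X - p0.X) * t, ay = p0.Y + (p1.Y - p0.Y) * t;
        double bx = p1.X + (p2.X - p1.X) * t, by = p1.Y + (p2.Y - p1.Y) * t;
        return (ax + (bx - ax) * t, ay + (by - ay) * t);
    }

    // Approximate the point halfway along the curve's arc length
    public static (double X, double Y) ArcLengthMidpoint(
        (double X, double Y) p0, (double X, double Y) p1, (double X, double Y) p2,
        int samples = 64)
    {
        var pts = new (double X, double Y)[samples + 1];
        var cum = new double[samples + 1]; // cumulative polyline length
        pts[0] = p0;
        for (int i = 1; i <= samples; i++)
        {
            pts[i] = QuadBezier(p0, p1, p2, (double)i / samples);
            double dx = pts[i].X - pts[i - 1].X, dy = pts[i].Y - pts[i - 1].Y;
            cum[i] = cum[i - 1] + Math.Sqrt(dx * dx + dy * dy);
        }
        double half = cum[samples] / 2;
        int k = Array.FindIndex(cum, c => c >= half); // first sample past half length
        return pts[k];
    }
}

For label placement this sampled point is usually close enough; if you need the tk <= tmid < tk+1 refinement described above, bisect between the samples on either side of k.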
|
Getting the center point of a quadratic Bezier Line with SKPath (Skia Sharp)
|
I draw Bezier Lines with the QuadTo-Method on a SKPath. My requirement is to get a point on the rendered Bezier Line which is more or less in the middle of the line. I use this point to show a label on the line and to provide a hit point to interact with the line.
Currently, I shoehorned in a simple algorithm which tries to resolve the points of the line via GetFillPath and then tries to detect the point nearest to the midpoint of an imaginary straight line between the Bezier’s start and end.
This works; however, it feels extremely brutish. Is there a more sophisticated way to fulfill my requirement?
|
[
"The quadratic Bezier has a pretty simple parametric formula to calculate points lying on it, you can find it in many places, e.g. https://en.wikipedia.org/wiki/B%C3%A9zier_curve\n\nAs per description of your problem it is important to notice, that you don't specifally want to calculate the point B(0.5), as the length of curves (P0, B(0.5)) and (B(0.5), P2) can be different for non-symmetric Beziers.\nWhat I would do is:\n\nFlatten the curve to a polyline with an arbitrary precision p (selecting all the t params so that for every 3 consecutive t such that: tk < tk+1 < tk+2 distance between line (Pk, Pk+2) and point B(tk+1) < p)\nFind k such that: L(P0, ..., B(tk)) <= L(P0,..., P2)/2 < L(B(tk+1), ..., P2), where L is the length of polyline.\nNow with found k we know our middle point tmid fulfills tk <= tmid < tk+1. This can again be computed with an arbitrary precision.\n\nFor any higher degree Beziers you can use de Casteljau algorithm to calculate points for any given 0 <= t <= 1.\n"
] |
[
0
] |
[] |
[] |
[
"skia",
"skiasharp",
"xamarin.forms"
] |
stackoverflow_0071986943_skia_skiasharp_xamarin.forms.txt
|
Q:
java.util.concurrent.ExecutionException: org.eclipse.lsp4j.jsonrpc.JsonRpcException: java.io.IOException: The pipe is being closed
Getting the following exception in my Eclipse '.metadata.log' file that is resulting in almost 80% CPU usage. Does anybody know what this means? Or how it needs to be fixed? This started after the STS plugin was installed on Eclipse.
`
java.util.concurrent.ExecutionException: org.eclipse.lsp4j.jsonrpc.JsonRpcException: java.io.IOException: The pipe is being closed
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2096)
at org.eclipse.lsp4e.LanguageServerWrapper.lambda$13(LanguageServerWrapper.java:497)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1796)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: org.eclipse.lsp4j.jsonrpc.JsonRpcException: java.io.IOException: The pipe is being closed
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageConsumer.consume(StreamMessageConsumer.java:72)
at org.eclipse.lsp4e.LanguageServerWrapper.lambda$3(LanguageServerWrapper.java:265)
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.request(RemoteEndpoint.java:161)
at org.eclipse.lsp4j.jsonrpc.services.EndpointProxy.invoke(EndpointProxy.java:91)
at jdk.proxy11/jdk.proxy11.$Proxy35.shutdown(Unknown Source)
at org.eclipse.lsp4e.LanguageServerWrapper.lambda$13(LanguageServerWrapper.java:495)
... 7 more
`
This absolutely is killing my productivity, and I have a very well-configured machine here, a Lenovo T495 with full SSD storage and 32 GB RAM. It just does not make sense. I checked the '.metadata.log' file and just keep seeing this exception.
A:
I think I found the answer. I'm new to Spring, but this is how I resolved it.
Select Spring boot in the setting below.
Also FYI, when I enabled logs in the setting below, I discovered that each of my project source files is being submitted as text to some process, and this is done for each and every file. So the entire project text is being submitted (or something), the 'post' fails, the plugin proceeds to the next file, and these failures happen for each source file. I don't know if the plugin is trying to submit in an 'infinite loop' or something, but that seems to be why the CPU usage is spiking and rendering the machine useless.
Enabling 'Spring language server' actually fixed this. I see that there are very momentary spikes of CPU to like 7-10% and nothing more than that after this change.
And then I discovered that Language Server consoles are being created momentarily and then terminated, which explains those little spikes of 7-10% CPU usage. I really was suffering from this for a month (might sound dumb :|) where my actual tasks were getting delayed, and then this finally fixed it.
I felt that this could have been dealt with much more sanely; I'm not sure if I'm just not educated enough to use this plugin, given that I'm a novice in Spring.
Hope this helps someone!
EDIT:
I think the best way is to just turn this off. After this change, Eclipse still works the same for me, functionally. I hope this plugin works well, and someday I will see its benefits.
|
java.util.concurrent.ExecutionException: org.eclipse.lsp4j.jsonrpc.JsonRpcException: java.io.IOException: The pipe is being closed
|
Getting the following exception in my Eclipse '.metadata.log' file that is resulting in almost 80% CPU usage. Does anybody know what this means? Or how it needs to be fixed? This started after the STS plugin was installed on Eclipse.
`
java.util.concurrent.ExecutionException: org.eclipse.lsp4j.jsonrpc.JsonRpcException: java.io.IOException: The pipe is being closed
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2096)
at org.eclipse.lsp4e.LanguageServerWrapper.lambda$13(LanguageServerWrapper.java:497)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1796)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: org.eclipse.lsp4j.jsonrpc.JsonRpcException: java.io.IOException: The pipe is being closed
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageConsumer.consume(StreamMessageConsumer.java:72)
at org.eclipse.lsp4e.LanguageServerWrapper.lambda$3(LanguageServerWrapper.java:265)
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.request(RemoteEndpoint.java:161)
at org.eclipse.lsp4j.jsonrpc.services.EndpointProxy.invoke(EndpointProxy.java:91)
at jdk.proxy11/jdk.proxy11.$Proxy35.shutdown(Unknown Source)
at org.eclipse.lsp4e.LanguageServerWrapper.lambda$13(LanguageServerWrapper.java:495)
... 7 more
`
This absolutely is killing my productivity, and I have a very well-configured machine here, a Lenovo T495 with full SSD storage and 32 GB RAM. It just does not make sense. I checked the '.metadata.log' file and just keep seeing this exception.
|
[
"I think I found the answer. I'm new to Spring, but this is how I resolved it.\nSelect Spring boot in the setting below.\n\nAlso FYI, when I enabled logs in the setting below, I discovered that each of my project source file is being submitted as text to some process, and this is done for each and every file. So, there is entire project text that is being submitted (or something), and 'post' fails, and the plugin proceeds to next file, and these failues happen for each source file. I dont know if the plugin is trying to submit in an 'infinite loop' or something, but that seems to be the case why the CPU usage is spiking and rendering the machine useless.\nEnabling 'Spring language server' actually fixed this. I see that there are very momentary spikes of CPU to like 7-10% and nothing more than that after this change.\n\nAnd then I discover this that, Language server consoles are being created momentarily and then terminated, which explains those little spikes of 7-10% of CPU usage. I really was suffering from this for a month (might sound dumb :|) where my actual tasks were getting delayed, and then this finally fixed it.\n\nI felt that this could have been much more sanely dealt with, not sure if I'm not educated enough to use this plugin given I'm novice in Spring here.\nHope this helps someone!\nEDIT:\nI think the best way is to just turn this off. After this change, Eclipse still works the same for me, functionally. Hope this plugin works good, and someday will see its benefits.\n\n"
] |
[
0
] |
[] |
[] |
[
"eclipse",
"language_server_protocol",
"sts"
] |
stackoverflow_0074660749_eclipse_language_server_protocol_sts.txt
|
Q:
Java regex string pattern problem in understanding statement
I'm currently new to Java programming, so I'm taking on a few questions on HackerRank. There, I encountered a question asking me to validate an IP address.
Write a class called MyRegex which will contain a string pattern. You need to write a regular expression and assign it to the pattern such that it can be used to validate an IP address. Use the following definition of an IP address:
IP address is a string in the form "A.B.C.D", where the value of A, B, C, and D may range from 0 to 255. Leading zeros are allowed. The length of A, B, C, or D can't be greater than 3.Some valid IP address:
000.12.12.034
121.234.12.12
23.45.12.56
Some invalid IP address:
000.12.234.23.23
666.666.23.23
.213.123.23.32
23.45.22.32.
I.Am.not.an.ip
In this problem you will be provided strings containing any combination of ASCII characters. You have to write a regular expression to find the valid IPs.
Just write the MyRegex class which contains a String pattern . The string should contain the correct regular expression.
(MyRegex class MUST NOT be public)
Sample Input
000.12.12.034
121.234.12.12
23.45.12.56
00.12.123.123123.123
122.23
Hello.IP
Sample Output
true
true
true
false
false
false
I just got an answer from the internet which is
class myRegex {
public String pattern="([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])."
+ "([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])."
+ "([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])."
+ "([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])";
}
I mainly don't understand the public pattern: why is it so long, and what do the things in there have to do with the class? I hope someone can help me pull through this. Thanks so much!
A:
Try this regex
/(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)/
The pattern for a number between 0 and 255 (leading zeros allowed) is
(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
for each octet, it checks:
if the number starts with 25x, where x is a number between 0 and 5.
if the number starts with 2yx, where y is a number between 0 and 4 and x is a number between 0 and 9.
if the number looks like 0xx, 1xx, or a plain one- or two-digit number, where each x is between 0 and 9 (the leading [01] and the final digit are both optional).
just use \. for the dots and repeat the pattern 4 times.
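Putting that together in the shape the exercise expects, a short sketch of how the class could be used (the Main class here is hypothetical, added only to demonstrate; String.matches() anchors the pattern to the whole input, which is what rejects strings like "23.45.22.32."):
class MyRegex {
    public String pattern = "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\."
                          + "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\."
                          + "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\."
                          + "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)";
}

public class Main {
    public static void main(String[] args) {
        MyRegex r = new MyRegex();
        System.out.println("000.12.12.034".matches(r.pattern)); // true
        System.out.println("666.666.23.23".matches(r.pattern)); // false
        System.out.println("23.45.22.32.".matches(r.pattern));  // false
    }
}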
|
Java regex string pattern problem in understanding statement
|
I'm currently new to Java programming, so I'm taking on a few questions on HackerRank. There, I encountered a question asking me to validate an IP address.
Write a class called MyRegex which will contain a string pattern. You need to write a regular expression and assign it to the pattern such that it can be used to validate an IP address. Use the following definition of an IP address:
IP address is a string in the form "A.B.C.D", where the value of A, B, C, and D may range from 0 to 255. Leading zeros are allowed. The length of A, B, C, or D can't be greater than 3.Some valid IP address:
000.12.12.034
121.234.12.12
23.45.12.56
Some invalid IP address:
000.12.234.23.23
666.666.23.23
.213.123.23.32
23.45.22.32.
I.Am.not.an.ip
In this problem you will be provided strings containing any combination of ASCII characters. You have to write a regular expression to find the valid IPs.
Just write the MyRegex class which contains a String pattern . The string should contain the correct regular expression.
(MyRegex class MUST NOT be public)
Sample Input
000.12.12.034
121.234.12.12
23.45.12.56
00.12.123.123123.123
122.23
Hello.IP
Sample Output
true
true
true
false
false
false
I just got an answer from the internet which is
class myRegex {
public String pattern="([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])."
+ "([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])."
+ "([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])."
+ "([1][\\d][\\d]|[0][0][0]|([0][0]|)[\\d]|([0]|)[\\d][\\d]|[2][0-4][\\d]|[2][5][0-5])";
}
I mainly don't understand the public pattern: why is it so long, and what do the things in there have to do with the class? I hope someone can help me pull through this. Thanks so much!
|
[
"Try this regex\n /(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)/\n\nThe pattern for a number between 001 and 255 is\n (25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\n\nfor each octet, it checks:\n\nif the number starts with 25x, where x is a number between 0 and 5.\n\nif the number starts with 2yx, where y is a number between 0 and 5 and x is a number between 0 and 9.\n\nif the number starts like 0xx or 1xx, and x are numbers between 0 and 9.\n\n\njust use \\. for the dots and repeat the pattern 4 times.\n"
] |
[
1
] |
[] |
[] |
[
"java"
] |
stackoverflow_0074660761_java.txt
|
Q:
Reading integers from a file in C using a while-loop and getw(f) != EOF
I'm pretty new to C, or well, very new to C. I'm trying to write integers to a file using putw(), and then I try to read them using getw(); I read them in a while loop until EOF. But the loop dies prematurely, and it seems to do so when getw() gets the integer 26 from the file. I'm at a complete loss.
Basically, I want to printf the integers that I previously saved to the file using putw(), printing a new line every 7th iteration. It works all the way until getw() encounters the integer 26, which kills the loop even though it isn't EOF. No matter how many integers I have in the file, it works only until getw() encounters 26. I've tried using fscanf but didn't get that to work either. Please help a beginner.
void readfile() {
FILE *f;
f = fopen("INTEGERS.DAT", "r");
int num, xar=1;
if (f==NULL){
printf("NO file detected.\n");
exit(0);
} else {
while((num = getw(f)) != EOF) {
printf("%d ", num);
if ( xar % 7 == 0) {
printf("\n");
}
xar++;
}
}
fclose(f);
}
Thanks in advance.
A:
You didn't indicate the format of your data file, but noting that you are opening the file with an "r" parameter, that would indicate that the data in the file is in a text format and not a binary format. That, incidentally, is almost certainly why 26 kills your loop: on Windows, a stream opened in text mode treats byte 26 (Ctrl-Z) as an end-of-file marker, and getw()'s EOF return value (-1) is also indistinguishable from a legitimately stored integer, so reading putw() output reliably would require opening with "rb" and checking feof() instead. So using that information and a bit of artistic license, I created a code snippet to build some text data with an integer value per line/record in a file, and then read the data in that file utilizing a tweaked version of your readfile function.
#include <stdio.h>
#include <stdlib.h>
void save_int(void)
{
int entry = 999;
FILE *fp;
fp = fopen("INTEGERS.DAT", "w");
if (fp != NULL)
{
while (1)
{
printf("Enter an integer or enter '0' to quit data entry: ");
scanf("%d", &entry);
if (entry == 0)
{
break;
}
fprintf(fp, "%d\n", entry);
}
}
fclose(fp);
return;
}
void readfile()
{
FILE *fp;
fp = fopen("INTEGERS.DAT", "r");
char number[16];
int value;
if (fp==NULL)
{
printf("NO file detected.\n");
exit(0);
}
else
{
while(1)
{
value = fscanf(fp, "%s", number);
if (value < 0)
{
break;
}
printf("%d ", atoi(number));
}
}
printf("\n");
fclose(fp);
}
int main()
{
save_int();
readfile();
return 0;
}
Some items to point out.
Each integer value is being written with a newline character to the text file, so that would be a caveat if your file actually is in a different format such as storing integers on the same line with some type of delimiter between the integer values.
In reading in the integer data from the created text file, fscanf is used for this task - you might get suggestions and other answers utilizing other functions such as fgets. There are pros and cons, so often it comes down to what is most familiar and comfortable to you.
Since the values were stored as string values, they are read in to a string and then converted to an integer utilizing the standard atoi function. Again, this is just a simple way to do this that I am familiar with. By all means, view any alternative answers you might get and/or comments added later to this answer.
With that, following is some sample output at the terminal.
@Dev:~/C_Programs/Console/Integers/bin/Release$ ./Integers
Enter an integer or enter '0' to quit data entry: 14
Enter an integer or enter '0' to quit data entry: 566
Enter an integer or enter '0' to quit data entry: 65335
Enter an integer or enter '0' to quit data entry: 122
Enter an integer or enter '0' to quit data entry: 18
Enter an integer or enter '0' to quit data entry: 0
14 566 65335 122 18
@Dev:~/C_Programs/Console/Integers/bin/Release$ cat INTEGERS.DAT
14
566
65335
122
18
Go ahead and test this out to see if it meets the spirit of your project.
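As an aside, if you want to try the fgets route mentioned in the bullets above, a minimal sketch of the same read loop (assuming the one-integer-per-line format written by save_int) could look like this:
#include <stdio.h>
#include <stdlib.h>

void readfile_fgets(void)
{
    FILE *fp = fopen("INTEGERS.DAT", "r");
    char line[32];

    if (fp == NULL)
    {
        printf("NO file detected.\n");
        exit(0);
    }

    /* fgets returns NULL at end of file, so no in-band value
       such as 26 or -1 can be mistaken for the end of the data */
    while (fgets(line, sizeof line, fp) != NULL)
    {
        printf("%ld ", strtol(line, NULL, 10));
    }
    printf("\n");
    fclose(fp);
}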
|
Reading integers from a file in C using a while-loop and getw(f) != EOF
|
I'm pretty new to C, or well, very new to C. I'm trying to write integers to a file using putw(), and then I try to read them using getw(); I read them in a while loop until EOF. But the loop dies prematurely, and it seems to do so when getw() gets the integer 26 from the file. I'm at a complete loss.
Basically, I want to printf the integers that I previously saved to the file using putw(), printing a new line every 7th iteration. It works all the way until getw() encounters the integer 26, which kills the loop even though it isn't EOF. No matter how many integers I have in the file, it works only until getw() encounters 26. I've tried using fscanf but didn't get that to work either. Please help a beginner.
void readfile() {
FILE *f;
f = fopen("INTEGERS.DAT", "r");
int num, xar=1;
if (f==NULL){
printf("NO file detected.\n");
exit(0);
} else {
while((num = getw(f)) != EOF) {
printf("%d ", num);
if ( xar % 7 == 0) {
printf("\n");
}
xar++;
}
}
fclose(f);
}
Thanks in advance.
|
[
"You didn't indicate the format of your data file, but noting that you are opening the file with an \"r\" parameter, that would indicate that the data in the file is in a text format and not a binary format. So using that information and a bit of artistic license, I created a code snippet to build some text data with an integer value per line/record in a file, and then read the data in that file utilizing a tweaked version of your readfile function.\n#include <stdio.h>\n#include <stdlib.h>\n\nvoid save_int(void)\n{\n int entry = 999;\n FILE *fp;\n fp = fopen(\"INTEGERS.DAT\", \"w\");\n\n if (fp != NULL)\n {\n while (1)\n {\n printf(\"Enter an integer or enter '0' to quit data entry: \");\n scanf(\"%d\", &entry);\n\n if (entry == 0)\n {\n break;\n }\n\n fprintf(fp, \"%d\\n\", entry);\n }\n }\n fclose(fp);\n return;\n}\n\nvoid readfile()\n{\n FILE *fp;\n fp = fopen(\"INTEGERS.DAT\", \"r\");\n char number[16];\n int value;\n if (fp==NULL)\n {\n printf(\"NO file detected.\\n\");\n exit(0);\n }\n else\n {\n while(1)\n {\n value = fscanf(fp, \"%s\", number);\n if (value < 0)\n {\n break;\n }\n printf(\"%d \", atoi(number));\n }\n }\n printf(\"\\n\");\n fclose(fp);\n}\n\nint main()\n{\n save_int();\n readfile();\n return 0;\n}\n\nSome items to point out.\n\nEach integer value is being written with a newline character to the text file, so that would be a caveat if your file actually is in a different format such as storing integers on the same line with some type of delimiter between the integer values.\nIn reading in the integer data from the created text file, fscanf is used for this task - you might get suggestions and other answers utilizing other functions such as fgets. There are pros and cons, so often it comes down to what is most familiar and comfortable to you.\nSince the values were stored as string values, they are read in to a string and then converted to an integer utilizing the standard atoi function. Again, this is just a simple way to do this that I am familiar with. By all means, view any alternative answers you might get and/or comments added later to this answer.\n\nWith that, following is some sample output at the terminal.\n@Dev:~/C_Programs/Console/Integers/bin/Release$ ./Integers \nEnter an integer or enter '0' to quit data entry: 14\nEnter an integer or enter '0' to quit data entry: 566\nEnter an integer or enter '0' to quit data entry: 65335\nEnter an integer or enter '0' to quit data entry: 122\nEnter an integer or enter '0' to quit data entry: 18\nEnter an integer or enter '0' to quit data entry: 0\n14 566 65335 122 18 \n\n@Dev:~/C_Programs/Console/Integers/bin/Release$ cat INTEGERS.DAT \n14\n566\n65335\n122\n18\n\nGo ahead and test this out to see if it meets the spirit of your project.\n"
] |
[
1
] |
[] |
[] |
[
"c",
"eof",
"file",
"integer",
"while_loop"
] |
stackoverflow_0074660353_c_eof_file_integer_while_loop.txt
|
Q:
Why is time passed being incorrect calculated across processes?
I am writing a program that sends a signal in one process and receives it in a thread in another. I have the entire program written with signals being caught and handled, as well as any synchronization issues. The problem is, I am trying to log the time the signal was sent and the time the signal was received. Though the values across the process vary strangely.
Here is how I did it.
I have a header file header.h which includes a shared global extern struct timespec begin, end;. The reason I made these shared was that I would need the beginning time to calculate the time elapsed since the program began.
Here is how I calculate the time elapsed.
I am using the POSIX clock_gettime().
I start the program and begin the timer, then when a signal is sent I run:
clock_gettime(CLOCK_REALTIME, &end);
long seconds = end.tv_sec - begin.tv_sec;
long nanoseconds = end.tv_nsec - begin.tv_nsec;
double elapsed = seconds + nanoseconds * 1e-9;
This all occurs in the main program.
The second process is another program which is exec() in a child process and that is where the signal catch occurs.
When I catch the signal, I store some data about it in a struct and store it in a buffer for another thread to read and log from.
typedef struct
{
int sig;
double time;
long int tid;
} data;
Here's what I do in one of the threads:
data d;
d.sig = 2;
d.tid = pthread_self();
clock_gettime(CLOCK_REALTIME, &end);
long seconds = end.tv_sec - begin.tv_sec;
long nanoseconds = end.tv_nsec - begin.tv_nsec;
double elapsed = seconds + nanoseconds * 1e-9;
d.time = elapsed;
put(d);
The problem is my outputs are vastly different. In my sentlog.txt the time is represented correctly, with enough precision to see a difference.
SIGUSR2 sent at 1.000286 seconds
SIGUSR2 sent at 1.082671 seconds
SIGUSR2 sent at 1.155440 seconds
SIGUSR1 sent at 1.250770 seconds
SIGUSR1 sent at 1.314637 seconds
SIGUSR2 sent at 1.398995 seconds
SIGUSR1 sent at 1.460559 seconds
SIGUSR2 sent at 1.498223 seconds
SIGUSR2 sent at 1.577555 seconds
SIGUSR1 sent at 1.618036 seconds
SIGUSR2 sent at 1.684488 seconds
SIGUSR2 sent at 1.743165 seconds
SIGUSR2 sent at 1.780100 seconds
SIGUSR2 sent at 1.871603 seconds
SIGUSR1 sent at 1.901293 seconds
SIGUSR2 sent at 1.944139 seconds
SIGUSR1 sent at 1.984142 seconds
SIGUSR1 sent at 2.040130 seconds
While the receivelog.txt is not.
Here is how I log to the file and stdout
if (d.sig == 1)
{
printf("SIGUSR1 received by thread %ld at time %f\n", d.tid, d.time);
fflush(stdout);
fprintf(fpRecieve, "Thread %ld received SIGUSR1 at %f seconds\n", d.tid, d.time);
fflush(fpRecieve);
}
else if (d.sig == 2)
{
printf("SIGUSR2 received by thread %ld at time %f\n", d.tid, d.time);
fflush(stdout);
fprintf(fpRecieve, "Thread %ld received SIGUSR2 at %f seconds\n", d.tid, d.time);
fflush(fpRecieve);
}
Thread 139995363964672 received SIGUSR2 at 1670008328.531628 seconds
Thread 139995363964672 received SIGUSR2 at 1670008328.613999 seconds
Thread 139995363964672 received SIGUSR2 at 1670008328.686767 seconds
Thread 139995372357376 received SIGUSR1 at 1670008328.782099 seconds
Thread 139995372357376 received SIGUSR1 at 1670008328.845975 seconds
Thread 139995363964672 received SIGUSR2 at 1670008328.930328 seconds
Thread 139995372357376 received SIGUSR1 at 1670008328.991889 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.029554 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.108883 seconds
Thread 139995372357376 received SIGUSR1 at 1670008329.149364 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.215814 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.274493 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.311425 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.402932 seconds
Thread 139995372357376 received SIGUSR1 at 1670008329.432621 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.475466 seconds
Why can I not simply use the same operation as before?
A:
There are several issues with the way you are measuring the elapsed time in your code.
you are using the CLOCK_REALTIME clock to measure the elapsed time, but this clock is not guaranteed to have a constant rate or be monotonic: it may even go backwards if the system time is adjusted, so for interval measurements you should use CLOCK_MONOTONIC. Note, though, that clock adjustments alone cannot produce values like 1670008328.5; that is a raw Unix epoch timestamp, which strongly suggests begin is still zero in the receiving process. An extern global is not shared between processes: after exec() the new program has its own zero-initialized copy of begin, so end - begin degenerates to the absolute wall-clock time, which is exactly what your receivelog.txt shows.
you are subtracting the tv_sec and tv_nsec fields of the struct timespec values directly. When end.tv_nsec is smaller than begin.tv_nsec the nanosecond difference is negative; the floating-point expression seconds + nanoseconds * 1e-9 still comes out right, but if you keep the result in integer fields you must normalize it by borrowing a second (add 1000000000 to the nanoseconds and subtract 1 from the seconds).
you are not properly accounting for the fact that the end time may appear earlier than the begin time. With CLOCK_REALTIME this can happen if the system time is adjusted between the two calls, giving a negative elapsed time. You should check for this condition and handle it properly.
Here is how you can fix these issues and properly measure the elapsed time in your code:
#include <time.h>
#include <stdio.h>
// Use the CLOCK_MONOTONIC clock instead of CLOCK_REALTIME
#define CLOCK CLOCK_MONOTONIC
struct timespec begin, end;
int main()
{
// Start the timer
clock_gettime(CLOCK, &begin);
// Do some work
// Measure the elapsed time
clock_gettime(CLOCK, &end);
// Compute the elapsed time in seconds
double elapsed = end.tv_sec - begin.tv_sec +
(end.tv_nsec - begin.tv_nsec) * 1e-9;
// Check for negative elapsed time and handle it properly
if (elapsed < 0)
elapsed = 0;
// Print the elapsed time
printf("Elapsed time: %f seconds\n", elapsed);
return 0;
}
This code uses the CLOCK_MONOTONIC clock, which is guaranteed to be monotonic and unaffected by system time adjustments. It computes the elapsed time in seconds and checks for and handles a negative result properly. Together with making sure begin is actually initialized in every process that computes an elapsed time, this should give you accurate and correct elapsed time measurements.
A:
I have a header file header.h which includes a shared global extern struct timespec begin, end;. The reason I made these shared was that I would need the beginning time to calculate the time elapsed since the program began.
The end does not need to be global (and should not be). Only begin needs to be global.
When end is global, multiple threads can access it at the same time. This is a race condition and is UB (undefined behavior).
Make end a function scoped variable.
You're not showing the code for put or the [ring?] queue definition.
Access to it should be with a mutex or stdatomic.h primitives.
Although a bit trickier to implement, I usually prefer the atomic functions.
Also, I agree that the code should be using CLOCK_MONOTONIC.
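A minimal sketch of both suggestions combined, assuming the OP's data struct and put() queue insert; the handler name and the SIGUSR mapping are placeholders:
#include <pthread.h>
#include <signal.h>
#include <time.h>

extern struct timespec begin;      /* only begin stays global */

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;

void record_signal(int sig)
{
    struct timespec end;           /* function scoped: no race on end */
    data d;

    clock_gettime(CLOCK_MONOTONIC, &end);
    d.sig  = (sig == SIGUSR1) ? 1 : 2;
    d.tid  = (long) pthread_self();
    d.time = (end.tv_sec - begin.tv_sec)
           + (end.tv_nsec - begin.tv_nsec) * 1e-9;

    pthread_mutex_lock(&q_lock);   /* serialize access to the shared queue */
    put(d);
    pthread_mutex_unlock(&q_lock);
}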
|
Why is time passed being incorrectly calculated across processes?
|
I am writing a program that sends a signal in one process and receives it in a thread in another. I have the entire program written with signals being caught and handled, as well as any synchronization issues. The problem is, I am trying to log the time the signal was sent and the time the signal was received. Though the values across the process vary strangely.
Here is how I did it.
I have a header file header.h which includes a shared global extern struct timespec begin, end;. The reason I made these shared was that I would need the beginning time to calculate the time elapsed since the program began.
Here is how I calculate the time elapsed.
I am using the POSIX clock_gettime().
I start the program and begin the timer, then when a signal is sent I run:
clock_gettime(CLOCK_REALTIME, &end);
long seconds = end.tv_sec - begin.tv_sec;
long nanoseconds = end.tv_nsec - begin.tv_nsec;
double elapsed = seconds + nanoseconds * 1e-9;
This all occurs in the main program.
The second process is another program which is exec() in a child process and that is where the signal catch occurs.
When I catch the signal, I store some data about it in a struct and store it in a buffer for another thread to read and log from.
typedef struct
{
int sig;
double time;
long int tid;
} data;
Here's what I do in one of the threads:
data d;
d.sig = 2;
d.tid = pthread_self();
clock_gettime(CLOCK_REALTIME, &end);
long seconds = end.tv_sec - begin.tv_sec;
long nanoseconds = end.tv_nsec - begin.tv_nsec;
double elapsed = seconds + nanoseconds * 1e-9;
d.time = elapsed;
put(d);
The problem is my outputs are vastly different. In my sentlog.txt the time is represented correctly, with enough precision to see a difference.
SIGUSR2 sent at 1.000286 seconds
SIGUSR2 sent at 1.082671 seconds
SIGUSR2 sent at 1.155440 seconds
SIGUSR1 sent at 1.250770 seconds
SIGUSR1 sent at 1.314637 seconds
SIGUSR2 sent at 1.398995 seconds
SIGUSR1 sent at 1.460559 seconds
SIGUSR2 sent at 1.498223 seconds
SIGUSR2 sent at 1.577555 seconds
SIGUSR1 sent at 1.618036 seconds
SIGUSR2 sent at 1.684488 seconds
SIGUSR2 sent at 1.743165 seconds
SIGUSR2 sent at 1.780100 seconds
SIGUSR2 sent at 1.871603 seconds
SIGUSR1 sent at 1.901293 seconds
SIGUSR2 sent at 1.944139 seconds
SIGUSR1 sent at 1.984142 seconds
SIGUSR1 sent at 2.040130 seconds
While the receivelog.txt is not.
Here is how I log to the file and stdout
if (d.sig == 1)
{
printf("SIGUSR1 received by thread %ld at time %f\n", d.tid, d.time);
fflush(stdout);
fprintf(fpRecieve, "Thread %ld received SIGUSR1 at %f seconds\n", d.tid, d.time);
fflush(fpRecieve);
}
else if (d.sig == 2)
{
printf("SIGUSR2 received by thread %ld at time %f\n", d.tid, d.time);
fflush(stdout);
fprintf(fpRecieve, "Thread %ld received SIGUSR2 at %f seconds\n", d.tid, d.time);
fflush(fpRecieve);
}
Thread 139995363964672 received SIGUSR2 at 1670008328.531628 seconds
Thread 139995363964672 received SIGUSR2 at 1670008328.613999 seconds
Thread 139995363964672 received SIGUSR2 at 1670008328.686767 seconds
Thread 139995372357376 received SIGUSR1 at 1670008328.782099 seconds
Thread 139995372357376 received SIGUSR1 at 1670008328.845975 seconds
Thread 139995363964672 received SIGUSR2 at 1670008328.930328 seconds
Thread 139995372357376 received SIGUSR1 at 1670008328.991889 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.029554 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.108883 seconds
Thread 139995372357376 received SIGUSR1 at 1670008329.149364 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.215814 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.274493 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.311425 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.402932 seconds
Thread 139995372357376 received SIGUSR1 at 1670008329.432621 seconds
Thread 139995363964672 received SIGUSR2 at 1670008329.475466 seconds
Why can I not simply use the same operation as before?
|
[
"There are several issues with the way you are measuring the elapsed time in your code.\n\nyou are using the CLOCK_REALTIME clock to measure the elapsed time, but this clock is not guaranteed to have a constant rate or be monotonic. This means that the value returned by clock_gettime() using this clock may not accurately reflect the actual elapsed time, and it may even go backwards if the system time is adjusted. This is likely the cause of the large and incorrect values you are seeing in your receivelog.txt file.\n\nyou are subtracting the tv_sec and tv_nsec fields of the struct timespec values directly to calculate the elapsed time. This is not always correct, because the tv_nsec field may be greater than or equal to one billion (1000000000), in which case you will not get the correct elapsed time. Instead, you should use the timespec_get() function to calculate the elapsed time in seconds, which will handle this automatically for you.\n\nyou are not properly accounting for the fact that the end time may be earlier than the begin time. This can happen if the system time is adjusted, or if the end time is measured on a different CPU than the begin time. In this case, you will get a negative elapsed time, which is not correct. You should check for this condition and handle it properly.\n\n\nHere is how you can fix these issues and properly measure the elapsed time in your code:\n#include <time.h>\n#include <stdio.h>\n\n// Use the CLOCK_MONOTONIC clock instead of CLOCK_REALTIME\n#define CLOCK CLOCK_MONOTONIC\n\nstruct timespec begin, end;\n\nint main()\n{\n // Start the timer\n clock_gettime(CLOCK, &begin);\n\n // Do some work\n\n // Measure the elapsed time\n clock_gettime(CLOCK, &end);\n\n // Use timespec_get() to calculate the elapsed time in seconds\n double elapsed = end.tv_sec - begin.tv_sec +\n (end.tv_nsec - begin.tv_nsec) * 1e-9;\n\n // Check for negative elapsed time and handle it properly\n if (elapsed < 0)\n elapsed = 0;\n\n // Print the elapsed time\n printf(\"Elapsed time: %f seconds\\n\", elapsed);\n\n return 0;\n}\n\nThis code uses the CLOCK_MONOTONIC clock, which is guaranteed to be monotonic and have a constant rate. It also uses the timespec_get() function to calculate the elapsed time in seconds, and it checks for and handles negative elapsed time properly. This should give you accurate and correct elapsed time measurements.\n",
"\nI have a header file header.h which includes a shared global extern struct timespec begin, end;. The reason I made these shared was that I would need the beginning time to calculate the time elapsed since the program began.\n\nThe end does not need to be global (and should not be). Only begin needs to be global.\nWhen end is global, multiple threads can access it at the same time. This is a race condition and is UB (undefined behavior).\nMake end a function scoped variable.\n\nYou're not showing the code for put or the [ring?] queue definition.\nAccess to it should be with a mutex or stdatomic.h primitives.\nAlthough a bit trickier to implement, I usually prefer the atomic functions.\n\nAlso, I agree that the code should be using CLOCK_MONOTONIC.\n"
] |
[
1,
1
] |
[] |
[] |
[
"c",
"extern",
"signals",
"time"
] |
stackoverflow_0074660663_c_extern_signals_time.txt
|
Q:
Running Vagrant Up error - 'Never run this as root (or with sudo).'
Windows - Vagrant and VM Virtual Box
If I use Vagrant Up command, I get an error.
"Never run this as root (or with sudo)."
Vagrant file opens with this...
# prevent accidental sudo / root
if Process::uid == 0
puts "Never run this as root (or with sudo)."
exit 1
end
How do I run Vagrant Up and not get this error?
A:
If you ran sudo su, type exit to drop out of the root shell before running vagrant up.
Or, if you always work as root, create another (unprivileged) user on your Unix host and run Vagrant as that user.
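For example, a sketch of getting back to (or creating) an unprivileged user before running Vagrant; the user name vagrantuser and the adduser command (Debian/Ubuntu syntax) are assumptions:
# leave the root shell first
exit

# or create and switch to an unprivileged user
sudo adduser vagrantuser
su - vagrantuser
vagrant up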
|
Running Vagrant Up error - 'Never run this as root (or with sudo).'
|
Windows - Vagrant and VM Virtual Box
If I use Vagrant Up command, I get an error.
"Never run this as root (or with sudo)."
Vagrant file opens with this...
# prevent accidental sudo / root
if Process::uid == 0
puts "Never run this as root (or with sudo)."
exit 1
end
How do I run Vagrant Up and not get this error?
|
[
"If you sudo su then type exit to get out of sudo.\nOr if you always use root - create another user on your unix host.\n"
] |
[
0
] |
[] |
[] |
[
"vagrant",
"vagrantfile"
] |
stackoverflow_0074660852_vagrant_vagrantfile.txt
|
Q:
VS Code: What is the difference between push and publish
On the Git tab in Visual Studio Code there is a context menu with these items:
Sync
Pull
Pull (release)
Push
==================
Publish
==================
...
What does the publish button do?
A:
After checking the source code of Visual Studio Code.
Push
Push the current branch to the default remote upstream
public run(context?: any):Promise {
return this.gitService.push() // ... removed for brevity
}
Active when:
There is UPSTREAM and recent push/pulls (ahead)
if (!HEAD || !HEAD.name || !HEAD.upstream) {
return false;
}
if (!HEAD.ahead) { // no commits to pull or push
return false;
}
Publish
Allows you to choose which remote you want to push to.
public run(context?: any):Promise {
const model = this.gitService.getModel();
const remotes = model.getRemotes();
const branchName = model.getHEAD().name;
let promise: TPromise<string>;
if (remotes.length === 1) {
const remoteName = remotes[0].name;
promise = TPromise.as(result ? remoteName : null);
} else {
// open the option picker
promise = this.quickOpenService.pick(picks, { placeHolder })
.then(pick => pick && pick.label);
}
return promise
.then(remote => remote && this.gitService.push(remote, branchName, { setUpstream: true }))
}
Active when
There is NO UPSTREAM and, of course, remote branches are set.
if (model.getRemotes().length === 0) {
return false;
}
if (!HEAD || !HEAD.name || HEAD.upstream) {
return false;
}
A:
From the docs:
If there is no upstream branch configured and the Git repository has remotes set up, the Publish action is enabled. This will let you publish the current branch to a remote.
So I'd expect that if you have an upstream branch configured, you would be able to Push (i.e. push directly to the configured upstream branch) and if you have no upstream branch configured you are only allowed to Publish (i.e. select a remote and branch to push at).
A:
Publish will push the branch to the remote AND set up the local branch to track the remote branch.
Push just pushes and doesn't set upstream tracking information (ie: branch.<name>.remote and branch.<name>.merge configuration).
When there is no upstream branch, and push.default = simple (the git default), Push will raise a dialog to suggest a publish.
A:
Push = git push ...
Publish = git push -u ...
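A quick illustration of the difference; the branch name feature and the remote origin are placeholders:
# Push: upstream already configured, a plain push works
git push

# Publish: first push that also records the upstream
git push -u origin feature
# after this, a plain git push / git pull knows where to go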
|
VS Code: What is the difference between push and publish
|
On the Git tab in Visual Studio Code there is a context menu with these items:
Sync
Pull
Pull (release)
Push
==================
Publish
==================
...
What does the publish button do?
|
[
"After checking the source code of Visual Studio Code.\nPush\nPush the current branch to the default remote upstream\npublic run(context?: any):Promise {\n return this.gitService.push() // ... removed for brevity \n}\n\nActive when:\nThere is UPSTREAM and recent push/pulls (ahead)\nif (!HEAD || !HEAD.name || !HEAD.upstream) {\n return false;\n}\n\nif (!HEAD.ahead) { // no commits to pull or push\n return false;\n}\n\nPublish\nAllows you to choose which remote you want to push to.\npublic run(context?: any):Promise {\n const model = this.gitService.getModel();\n const remotes = model.getRemotes();\n const branchName = model.getHEAD().name;\n let promise: TPromise<string>;\n\n if (remotes.length === 1) {\n const remoteName = remotes[0].name;\n promise = TPromise.as(result ? remoteName : null);\n } else {\n // open the option picker \n promise = this.quickOpenService.pick(picks, { placeHolder })\n .then(pick => pick && pick.label);\n }\n\n return promise\n .then(remote => remote && this.gitService.push(remote, branchName, { setUpstream: true })) \n}\n\nActive when\nThere is NO UPSTREAM and off course remote branches are set.\nif (model.getRemotes().length === 0) {\n return false;\n}\n\nif (!HEAD || !HEAD.name || HEAD.upstream) {\n return false;\n}\n\n",
"From the docs:\n\nIf there is no upstream branch configured and the Git repository has remotes set up, the Publish action is enabled. This will let you publish the current branch to a remote.\n\nSo I'd expect that if you have an upstream branch configured, you would be able to Push (i.e. push directly to the configured upstream branch) and if you have no upstream branch configured you are only allowed to Publish (i.e. select a remote and branch to push at).\n",
"Publish will push the branch to the remote AND set up the local branch to track the remote branch.\nPush just pushes and doesn't set upstream tracking information (ie: branch.<name>.remote and branch.<name>.merge configuration).\nWhen there is no upstream branch, and push.default = simple (the git default), Push will raise a dialog to suggest a publish.\n",
"Push = git push ...\nPublish = git push -u ...\n"
] |
[
21,
12,
7,
0
] |
[] |
[] |
[
"git",
"visual_studio_code"
] |
stackoverflow_0037075486_git_visual_studio_code.txt
|
Q:
JavaScript Lambda CloudWatch logs indicate its handler is not loading functions defined in other sources
AWS Lambda (JavaScript/TypeScript) here. I have the following Lambda handler:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { User, allUsers } from './users';
import { Commentary } from './domain';
import { dynamoDbClient } from './store';
import { PutCommand } from "@aws-sdk/client-dynamodb";
export const lambdaHandler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
try {
let status: number = 200;
let commentary: Commentary = JSON.parse(event.body);
console.log("deserialized this into commentary");
console.log("and the deserialized commentary has content of: " + commentary.getContent());
provideCommentary(event.body);
responseBody = "\"message\": \"received commentary -- check dynamoDb!\"";
return {
statusCode: status,
body: responseBody
};
} catch (err) {
console.log(err);
return {
statusCode: 500,
body: JSON.stringify({
message: err.stack,
}),
};
}
};
const provideCommentary = async (commentary: Commentary) => {
// Set the parameters.
const params = {
TableName: "commentary-dev",
Item: {
id: commentary.getId(),
content: commentary.getContent(),
createdAt : commentary.getCreatedAt(),
providerId: commentary.getProviderId(),
receiverId: commentary.getReceiverId()
},
};
try {
const data = await ddbDocClient.send(new PutCommand(params));
console.log("Success - item added or updated", data);
} catch (err) {
console.log("Error", err.stack);
throw err;
}
};
Where the Commentary class in domain.ts is:
class Commentary {
private id: number;
private content: string;
private createdAt: Date;
private providerId: number;
private receiverId: number;
constructor(id: number, content: string, createdAt: Date, providerId: number, receiverId: number) {
this.id = id;
this.content = content;
this.createdAt = createdAt;
this.providerId = providerId;
this.receiverId = receiverId;
}
public getId(): number {
return this.id;
}
public getContent(): string {
return this.content;
}
public getProviderId(): number {
return this.providerId;
}
public getReceiverId(): number {
return this.receiverId;
}
}
export { Commentary };
When I invoke my Lambda, via command-line curl:
curl --request POST 'https://<mylambda>/commentary' \
--header 'Content-Type: application/json' -d '{"id":123,"content":"test commentary","createdAt":"2022-12-02T08:45:26.261-05:00","providerId":456,"receiverId":789}'
...I see the following error in my terminal:
{"message":"TypeError: r.getContent is not a function\n
Any idea why event.body is having trouble being deserialized into a Commentary?
A:
You haven't properly constructed a Commentary instance. Instead, you have just set the commentary variable's value to the parsed body, a plain-old object.
const { id, content, createdAt, providerId, receiverId } = JSON.parse(event.body);
const commentary = new Commentary(id, content, createdAt, providerId, receiverId);
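Putting that into the handler might look like the sketch below. Two assumptions beyond the answer above: createdAt arrives as an ISO string, so it is revived with new Date(), and provideCommentary receives the constructed instance rather than the raw event.body:
const { id, content, createdAt, providerId, receiverId } = JSON.parse(event.body);

const commentary = new Commentary(
  id,
  content,
  new Date(createdAt), // JSON has no Date type; revive the ISO string
  providerId,
  receiverId
);

console.log("content: " + commentary.getContent()); // now a real method

await provideCommentary(commentary); // pass the instance, not event.body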
|
JavaScript Lambda CloudWatch logs indicate its handler is not loading functions defined in other sources
|
AWS Lambda (JavaScript/TypeScript) here. I have the following Lambda handler:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { User, allUsers } from './users';
import { Commentary } from './domain';
import { dynamoDbClient } from './store';
import { PutCommand } from "@aws-sdk/client-dynamodb";
export const lambdaHandler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
try {
let status: number = 200;
let commentary: Commentary = JSON.parse(event.body);
console.log("deserialized this into commentary");
console.log("and the deserialized commentary has content of: " + commentary.getContent());
provideCommentary(event.body);
responseBody = "\"message\": \"received commentary -- check dynamoDb!\"";
return {
statusCode: status,
body: responseBody
};
} catch (err) {
console.log(err);
return {
statusCode: 500,
body: JSON.stringify({
message: err.stack,
}),
};
}
};
const provideCommentary = async (commentary: Commentary) => {
// Set the parameters.
const params = {
TableName: "commentary-dev",
Item: {
id: commentary.getId(),
content: commentary.getContent(),
createdAt : commentary.getCreatedAt(),
providerId: commentary.getProviderId(),
receiverId: commentary.getReceiverId()
},
};
try {
const data = await ddbDocClient.send(new PutCommand(params));
console.log("Success - item added or updated", data);
} catch (err) {
console.log("Error", err.stack);
throw err;
}
};
Where the Commentary class in domain.ts is:
class Commentary {
private id: number;
private content: string;
private createdAt: Date;
private providerId: number;
private receiverId: number;
constructor(id: number, content: string, createdAt: Date, providerId: number, receiverId: number) {
this.id = id;
this.content = content;
this.createdAt = createdAt;
this.providerId = providerId;
this.receiverId = receiverId;
}
public getId(): number {
return this.id;
}
public getContent(): string {
return this.content;
}
public getProviderId(): number {
return this.providerId;
}
public getReceiverId(): number {
return this.receiverId;
}
}
export { Commentary };
When I invoke my Lambda, via command-line curl:
curl --request POST 'https://<mylambda>/commentary' \
--header 'Content-Type: application/json' -d '{"id":123,"content":"test commentary","createdAt":"2022-12-02T08:45:26.261-05:00","providerId":456,"receiverId":789}'
...I see the following error in my terminal:
{"message":"TypeError: r.getContent is not a function\n
Any idea why event.body is having trouble being deserialized into a Commentary?
|
[
"You haven't properly constructed a Commentary instance. Instead, you have just set the commentary variable's value to the parsed body, a plain-old object.\nconst { id, content, createdAt, providerId, receiverId } = JSON.parse(event.body);\n\nconst commentary = new Commentary(id, content, createdAt, providerId, receiverId);\n\n"
] |
[
0
] |
[] |
[] |
[
"amazon_web_services",
"aws_lambda",
"javascript",
"typescript"
] |
stackoverflow_0074657054_amazon_web_services_aws_lambda_javascript_typescript.txt
|
Q:
How to not render an attachment
I am using botframework-webchat in a react app which is connected to a skillbot from which I send custom card attachments and render custom components.
I want to build a component that executes some code but does not render any visual box on screen.
const attachmentMiddleware = (properties) => () => next => card => {
return (
switch(card.attachment.contentType) {
case 'application/vnd.microsoft.card.adaptive.addUserDetails':
return false;
case 'application/vnd.microsoft.card.adaptive.locationpicker':
return <LocationPicker/>
default: return next(card);
}
)
}
My expectation is that when I return false the component will not render. Well, the component does not render, but the outer speech box still renders as an empty box.
Bad
How can I implement this so that the outer speech box does not render at all as in the picture below when I return false from the attachmentMiddleware?
Good
A:
The better place to do this is through Web Chat's store.
This is because the store receives the activities first and any middleware acts second. So, because the activity has the attachment, the attachmentMiddleware will necessarily render something even if you pass in false or null.
Therefore, if you want to render nothing at all for an adaptive card, you should filter for it in the store, returning false or null at that point. Then the middleware won't even register its presence and nothing will be rendered.
const store = window.WebChat.createStore( {}, ( { dispatch } ) => next => async action => {
if ( action.type === 'DIRECT_LINE/INCOMING_ACTIVITY' ) {
const { activity } = action.payload;
console.log('INCOMING ACTIVITY ', activity);
if (activity.attachments && activity.attachments[0].contentType === 'application/vnd.microsoft.card.adaptive') {
return false;
}
}
return next(action);
} );
[ ... ]
window.WebChat.renderWebChat( {
directLine: directLine,
store: store
}, document.getElementById('webchat') );
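If you only want to suppress the custom addUserDetails card while letting ordinary adaptive cards through, the contentType check can be narrowed; a sketch based on the store code above:
const store = window.WebChat.createStore( {}, () => next => action => {
    if ( action.type === 'DIRECT_LINE/INCOMING_ACTIVITY' ) {
        const attachments = action.payload.activity.attachments || [];
        // Drop only the custom "invisible" card type
        if ( attachments.some( a => a.contentType === 'application/vnd.microsoft.card.adaptive.addUserDetails' ) ) {
            return false;
        }
    }
    return next( action );
} );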
|
How to not render an attachment
|
I am using botframework-webchat in a react app which is connected to a skillbot from which I send custom card attachments and render custom components.
I want to build a component that executes some code but does not render any visual box on screen.
const attachmentMiddleware = (properties) => () => next => card => {
return (
switch(card.attachment.contentType) {
case 'application/vnd.microsoft.card.adaptive.addUserDetails':
return false;
case 'application/vnd.microsoft.card.adaptive.locationpicker':
return <LocationPicker/>
default: return next(card);
}
)
}
My expectation is that when I return false the component will not render. Well, the component does not render, but the outer speech box still renders as an empty box.
Bad
How can I implement this so that the outer speech box does not render at all as in the picture below when I return false from the attachmentMiddleware?
Good
|
[
"The better place to do this is just thru Web Chat's store.\nThis is because the store receives the activities first and then any middleware acts second. So, because the activity has the attachment, the attachmentMiddleware will necessarily render something even if you pass in false or null.\nTherefore, if you want to not render anything at all for an adaptive card, you should filter for it in the store returning false or null at that point. Then, the middleware won't even register its presense and nothing will be rendered.\nconst store = window.WebChat.createStore( {}, ( { dispatch } ) => next => async action => {\n if ( action.type === 'DIRECT_LINE/INCOMING_ACTIVITY' ) {\n const { activity } = action.payload;\n console.log('INCOMING ACTIVITY ', activity);\n if (activity.attachments && activity.attachments[0].contentType === 'application/vnd.microsoft.card.adaptive') {\n return false;\n }\n }\n return next(action);\n} );\n\n[ ... ]\n\nwindow.WebChat.renderWebChat( {\n directLine: directLine,\n store: store\n}, document.getElementById('webchat') );\n\n"
] |
[
0
] |
[] |
[] |
[
"botframework",
"reactjs",
"web_chat"
] |
stackoverflow_0074029883_botframework_reactjs_web_chat.txt
|
Q:
Neo4j Python Driver Using Unwind with a list of dictionaries
I'm trying to batch merge to create multiple nodes. Using the below code,
def test_batches(tx,user_batch):
result= tx.run(f"Unwind {user_batch} as user\
MERGE (n:User {{id: user.id, name: user.name, username: user.username }})")
However I am getting this error.
Note I'm passing in a list of dictionaries.
CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input '[': expected "+" or "-" (line 1, column 8 (offset: 7))
"Unwind [{'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'}, {'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'}, {'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'}, {'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'}, {'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}] as user MERGE (n:User {id: user.id, name: user.name, username: user.username })"
^}
I have no idea why this is happening any help is greatly appreciated.
A:
Below is working code using UNWIND with a list of dictionaries. Note that it is recommended to pass the values as a query parameter rather than interpolating them into the query string.
from neo4j import GraphDatabase
uri = "neo4j://localhost:7687"
driver = GraphDatabase.driver(uri, auth=("neo4j", "awesomepassword"))
def test_batches(tx, user_batch):
tx.run("UNWIND $user_batch as user \
MERGE (n:User {id: user.id, name: user.name, username: user.username})", user_batch=user_batch)
with driver.session() as session:
user_batch = [
{'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'},
{'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'},
{'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'},
{'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'},
{'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}]
session.write_transaction(test_batches, user_batch)
driver.close()
sample result:
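Note that in version 5.x of the Python driver, write_transaction() is deprecated in favour of execute_write(); a sketch assuming neo4j >= 5:
with driver.session() as session:
    # same transaction function, new method name in driver 5.x
    session.execute_write(test_batches, user_batch)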
A:
You may need to adjust the syntax of the Cypher query to conform to the Cypher language specification. In addition, the MERGE clause can use the ON CREATE and ON MATCH syntax to specify the actions that should be taken depending on whether the node already exists.
Here is an example of how the query can be rewritten to use ON CREATE and ON MATCH. Note that it merges on id alone (so ON MATCH is meaningful) and passes the batch as a parameter, since interpolating the Python list with an f-string is what caused the original syntax error:
def test_batches(tx, user_batch):
    result = tx.run("UNWIND $user_batch AS user "
                    "MERGE (n:User {id: user.id}) "
                    "ON CREATE SET n = user "
                    "ON MATCH SET n += user", user_batch=user_batch)
|
Neo4j Python Driver Using Unwind with a list of dictionaries
|
I'm trying to batch merge to create multiple nodes. Using the below code,
def test_batches(tx,user_batch):
result= tx.run(f"Unwind {user_batch} as user\
MERGE (n:User {{id: user.id, name: user.name, username: user.username }})")
However I am getting this error.
Note I'm passing in a list of dictionaries.
CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input '[': expected "+" or "-" (line 1, column 8 (offset: 7))
"Unwind [{'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'}, {'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'}, {'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'}, {'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'}, {'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}] as user MERGE (n:User {id: user.id, name: user.name, username: user.username })"
^}
I have no idea why this is happening any help is greatly appreciated.
|
[
"Below is a working code on using UNWIND for a list of dictionaries. Please note that is it recommended to pass the value as a parameter rather than working on the value string in query.\nfrom neo4j import GraphDatabase\n\nuri = \"neo4j://localhost:7687\"\ndriver = GraphDatabase.driver(uri, auth=(\"neo4j\", \"awesomepassword\"))\n\ndef test_batches(tx, user_batch):\n tx.run(\"UNWIND $user_batch as user \\\n MERGE (n:User {id: user.id, name: user.name, username: user.username})\", user_batch=user_batch)\n \nwith driver.session() as session:\n user_batch = [\n {'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'}, \n {'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'}, \n {'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'}, \n {'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'}, \n {'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}]\n session.write_transaction(test_batches, user_batch) \n\ndriver.close()\n\nsample result:\n\n",
"You may need to adjust the syntax of the Cypher query to conform to the Neo4j Cypher query language specification. For example, the MERGE clause should use the ON CREATE and ON MATCH syntax to specify the actions that should be taken if the node already exists or not.\nHere is an example of how the Cypher query can be rewritten to use the ON CREATE and ON MATCH syntax:\ndef test_batches(tx,user_batch):\n result = tx.run(f\"UNWIND {user_batch} as user\n MERGE (n:User {{id: user.id, name: user.name, username: user.username }})\n ON CREATE SET n = user\n ON MATCH SET n += user\")\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"neo4j",
"neo4j_python_driver",
"python"
] |
stackoverflow_0074659436_neo4j_neo4j_python_driver_python.txt
|
Q:
Using dplyr::case_when to conditionally change the value of a factored variable
I have a dataset that requires extensive data cleaning. Some of my variables are already factors. Some of the values of the factored variable I know to be incorrect; however, the levels of the factor are valid.
Yes, I could have converted the factored variable back to character and then re-factored when done with data cleaning --- but then I wouldn't have learned something.
library(dplyr)
## Create minimal reproducible example
min_re <- tibble(i = seq(1:10), my_letters = factor(substring("statistics", 1:10, 1:10), levels = letters))
# A tibble: 10 x 2
i my_letters
<int> <fct>
1 1 s
2 2 t
3 3 a
4 4 t
5 5 i
6 6 s
7 7 t
8 8 i
9 9 c
10 10 s
The first s in statistics is the wrong value. I want to replace the first s with an x, i.e., xtatistics
My first attempt:
min_re2 <- min_re %>%
mutate(
my_letters = case_when(
my_letters == "s" & i == 1 ~ "x",
TRUE ~ my_letters
)
)
Resulting error:
Error in `mutate()`:
! Problem while computing `my_letters = case_when(my_letters == "s" & i == 1 ~ "x", TRUE
~ my_letters)`.
Caused by error in `` names(message) <- `*vtmp*` ``:
! 'names' attribute [1] must be the same length as the vector [0]
Run `rlang::last_error()` to see where the error occurred.
Yet, this works:
min_re$my_letters[which(min_re$my_letters == "s" & min_re == 1)] <- "x"
min_re
# A tibble: 10 x 2
i my_letters
<int> <fct>
1 1 x
2 2 t
3 3 a
4 4 t
5 5 i
6 6 s
7 7 t
8 8 i
9 9 c
10 10 s
Why does the base R method work when changing a value of a factored variable but not dplyr::case_when? Is there a coercion that the base R method performs that dplyr::case_when is unwilling/unable to perform (e.g., character to factor)?
Is there a more elegant dplyr-ish way of changing values of already factored variables? Think data cleaning not necessarily re-leveling. There are some observations where s should remain s.
If new levels were introduced, how would this affect case_when? Do forcats and case_when play nicely together?
A:
Partial answer:
(Yes, much easier to switch back to character, finish data cleaning, then refactor.)
In case_when, present the changed value (the right-hand side) as a factor with all the requisite levels.
library(dplyr)
min_re <- tibble(i = seq(1:10), my_letters = factor(substring("statistics", 1:10, 1:10), levels = letters))
min_re2 <- min_re %>%
mutate(
my_letters = case_when(
my_letters == "s" & i == 1 ~ factor("x", levels = letters),
TRUE ~ my_letters
)
)
> min_re2
# A tibble: 10 x 2
i my_letters
<int> <fct>
1 1 x
2 2 t
3 3 a
4 4 t
5 5 i
6 6 s
7 7 t
8 8 i
9 9 c
10 10 s
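A shorter alternative that keeps the factor intact is replace(), since assigning a character value into a factor works whenever the value is already one of the levels; a sketch using the same min_re:
library(dplyr)

min_re2 <- min_re %>%
  mutate(my_letters = replace(my_letters, my_letters == "s" & i == 1, "x"))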
|
Using dplyr::case_when to conditionally change the value of a factored variable
|
I have a dataset that requires extensive data cleaning. Some of my variables are already factors. Some of the values of the factored variable I know to be incorrect; however, the levels of the factor are valid.
Yes, I could have converted the factored variable back to character and then re-factored when done with data cleaning --- but then I wouldn't have learned something.
library(dplyr)
## Create minimal reproducible example
min_re <- tibble(i = seq(1:10), my_letters = factor(substring("statistics", 1:10, 1:10), levels = letters))
# A tibble: 10 x 2
i my_letters
<int> <fct>
1 1 s
2 2 t
3 3 a
4 4 t
5 5 i
6 6 s
7 7 t
8 8 i
9 9 c
10 10 s
The first s in statistics is the wrong value. I want to replace the first s with an x, i.e., xtatistics
My first attempt:
min_re2 <- min_re %>%
mutate(
my_letters = case_when(
my_letters == "s" & i == 1 ~ "x",
TRUE ~ my_letters
)
)
Resulting error:
Error in `mutate()`:
! Problem while computing `my_letters = case_when(my_letters == "s" & i == 1 ~ "x", TRUE
~ my_letters)`.
Caused by error in `` names(message) <- `*vtmp*` ``:
! 'names' attribute [1] must be the same length as the vector [0]
Run `rlang::last_error()` to see where the error occurred.
Yet, this works:
min_re$my_letters[which(min_re$my_letters == "s" & min_re == 1)] <- "x"
min_re
# A tibble: 10 x 2
i my_letters
<int> <fct>
1 1 x
2 2 t
3 3 a
4 4 t
5 5 i
6 6 s
7 7 t
8 8 i
9 9 c
10 10 s
Why does the base R method work when changing a value of a factored variable but not dplyr::case_when? Is there a coercion that the base R method performs that dplyr::case_when is unwilling/unable to perform (e.g., character to factor)?
Is there a more elegant dplyr-ish way of changing values of already factored variables? Think data cleaning not necessarily re-leveling. There are some observations where s should remain s.
If new levels were introduced, how would this affect case_when? Do forcats and case_when play nicely together?
|
[
"Partial answer:\n(Yes, much easier to switch back to character, finish data cleaning, then refactor.)\nIn case_when present the changed value (right-hand side) as a factor with all the requisite levels.\nlibrary(dplyr)\n\nmin_re <- tibble(i = seq(1:10), my_letters = factor(substring(\"statistics\", 1:10, 1:10), levels = letters))\n\nmin_re2 <- min_re %>%\n mutate(\n my_letters = case_when(\n my_letters == \"s\" & i == 1 ~ factor(\"x\", levels = letters),\n TRUE ~ my_letters\n )\n )\n\n\n> min_re2\n# A tibble: 10 x 2\n i my_letters\n <int> <fct> \n 1 1 x \n 2 2 t \n 3 3 a \n 4 4 t \n 5 5 i \n 6 6 s \n 7 7 t \n 8 8 i \n 9 9 c \n10 10 s \n\n"
] |
[
0
] |
[] |
[] |
[
"dplyr",
"r"
] |
stackoverflow_0074660882_dplyr_r.txt
|
Q:
How to code an alert message to prompt after directing to URL header location (PHP)
The alert echo does not appear after the redirection in my else statement; may I request some assistance in getting this to work the way I want?
if (isset($_SESSION["username"]))
{
header("Location:http://localhost/swapproject/index.php");
debug();
}
else {
header("Location:http://localhost/swapproject/loginform.php");
echo '<script>alert("please enter valid login info")</script>';
debug();
die();
}
I tried the echo 'alert("please enter valid login info")'; before and after the header in the else statement. I want the alert prompt to display after it redirects the fella back to the loginform screen as he entered his details incorrectly.
A:
When you return a response that tells the browser to redirect the user:
header("Location:http://localhost/swapproject/loginform.php");
Then there's no reason to return any page content. While some browsers may behave differently, by and large they're just going to ignore the content. Because there's no reason to display it when one is just immediately redirecting the user to another page.
Instead, put the alert on your loginform.php page. If that page needs to only conditionally show that alert, you can wrap it in a condition:
if (isset($_GET['alert'])) {
echo '<script>alert("please enter valid login info")</script>';
}
And pass a query string parameter on the redirect to trigger it:
header("Location:http://localhost/swapproject/loginform.php?alert=true");
|
How to code an alert message to prompt after directing to URL header location (PHP)
|
The alert echo does not appear after the redirection in my else statement; may I request some assistance in getting this to work the way I want?
if (isset($_SESSION["username"]))
{
header("Location:http://localhost/swapproject/index.php");
debug();
}
else {
header("Location:http://localhost/swapproject/loginform.php");
echo '<script>alert("please enter valid login info")</script>';
debug();
die();
}
I tried the echo 'alert("please enter valid login info")'; before and after the header in the else statement. I want the alert prompt to display after it redirects the fella back to the loginform screen as he entered his details incorrectly.
|
[
"When you return a response that tells the browser to redirect the user:\nheader(\"Location:http://localhost/swapproject/loginform.php\");\n\nThen there's no reason to return any page content. While some browsers may behave differently, by and large they're just going to ignore the content. Because there's no reason to display it when one is just immediately redirecting the user to another page.\nInstead, put the alert on your loginform.php page. If that page needs to only conditionally show that alert, you can wrap it in a condition:\nif (isset($_GET['alert'])) {\n echo '<script>alert(\"please enter valid login info\")</script>';\n}\n\nAnd pass a query string parameter on the redirect to trigger it:\nheader(\"Location:http://localhost/swapproject/loginform.php?alert=true\");\n\n"
] |
[
1
] |
[] |
[] |
[
"php"
] |
stackoverflow_0074660796_php.txt
|
Q:
How to get reference table fields with django model query
When trying to fetch a foreign-key-related table using a Django model query, I am unable to get the referenced table's details.
I have two models TblVersion and TblProject defined below
class TblVersion(models.Model):
version_id = models.AutoField(primary_key=True)
project = models.ForeignKey(TblProject, models.DO_NOTHING)
version_major = models.PositiveSmallIntegerField()
version_minor = models.PositiveSmallIntegerField()
class Meta:
managed = False
db_table = 'tbl_version'
class TblProject(models.Model):
project_id = models.AutoField(primary_key=True)
project_name = models.CharField(max_length=32)
class Meta:
managed = False
db_table = 'tbl_project'
My current code implementation:
result= TblVersion.objects.all().select_related()
data = serializers.serialize('json', result)
print(data)
Code Result:
`[{"model": "CCM_API.tblversion", "pk": 1, "fields": {"project": 1, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 2, "fields": {"project": 2, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 3, "fields": {"project": 2, "version_major": 1000, "version_minor": 2}}]`
The code output lacks the foreign key fields (Project Name). I want a list of version numbers with their respective projects like this.
| Version Id | Major Version | Minor Version | Project Id | Project Name |
| ---------- | ------------- | ------------- | ---------- | ------------ |
| 1          | 1000          | 1             | 1          | PROJ_1       |
| 2          | 1000          | 1             | 2          | PROJ_2       |
| 3          | 1000          | 2             | 1          | PROJ_1       |
A:
The select_related method accepts the names of the fields that relate to other models. Note that the foreign key on TblVersion is called project, not product:
result = TblVersion.objects.all().select_related("project")
Update
To get those related fields into the serialized output, you can list them explicitly with values():
result = TblVersion.objects.select_related("project").values("version_id", "version_major", "version_minor", "project__project_id", "project__project_name")
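Since serializers.serialize() only emits a model's own fields, one way to get the joined columns out as JSON is to dump the values() queryset directly; a sketch using the field names from the models above:
import json

result = (TblVersion.objects
          .select_related("project")
          .values("version_id", "version_major", "version_minor",
                  "project__project_id", "project__project_name"))

data = json.dumps(list(result))
print(data)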
|
How to get reference table fields with django model query
|
When trying to fetch a foreign-key-related table using a Django model query, I am unable to get the referenced table's details.
I have two models TblVersion and TblProject defined below
class TblVersion(models.Model):
version_id = models.AutoField(primary_key=True)
project = models.ForeignKey(TblProject, models.DO_NOTHING)
version_major = models.PositiveSmallIntegerField()
version_minor = models.PositiveSmallIntegerField()
class Meta:
managed = False
db_table = 'tbl_version'
class TblProject(models.Model):
project_id = models.AutoField(primary_key=True)
project_name = models.CharField(max_length=32)
class Meta:
managed = False
db_table = 'tbl_project'
My current code implementation:
result= TblVersion.objects.all().select_related()
data = serializers.serialize('json', result)
print(data)
Code Result:
`[{"model": "CCM_API.tblversion", "pk": 1, "fields": {"project": 1, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 2, "fields": {"project": 2, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 3, "fields": {"project": 2, "version_major": 1000, "version_minor": 2}}]`
The code output lacks the foreign key fields (Project Name). I want a list of version numbers with their respective projects like this.
| Version Id | Major Version | Minor Version | Project Id | Project Name |
| ---------- | ------------- | ------------- | ---------- | ------------ |
| 1          | 1000          | 1             | 1          | PROJ_1       |
| 2          | 1000          | 1             | 2          | PROJ_2       |
| 3          | 1000          | 2             | 1          | PROJ_1       |
|
[
"select_related method accepts an arg of fields that relates to an other model\nresult= TblVersion.objects.all().select_related(\"product\")\nUpdate\nTo add those related field to be serializable u can list the values as\nresult = TblVersion.objects.all().select_related(\"product\").values(\"id\", \"version_id\", ..., \"product__id\", \"product__name\")\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_models",
"django_orm",
"django_views",
"python"
] |
stackoverflow_0074660813_django_django_models_django_orm_django_views_python.txt
|
Q:
Mock class in Python with decorator patch
I would like to patch a class in Python in unit testing. The main code is this (mymath.py):
class MyMath:
def my_add(self, a, b):
return a + b
def add_three_and_two():
my_math = MyMath()
return my_math.my_add(3, 2)
The test class is this:
import unittest
from unittest.mock import patch
import mymath
class TestMyMath(unittest.TestCase):
@patch('mymath.MyMath')
def test_add_three_and_two(self, mymath_mock):
mymath_mock.my_add.return_value = 5
result = mymath.add_three_and_two()
mymath_mock.my_add.assert_called_once_with(3, 2)
self.assertEqual(5, result)
unittest.main()
I am getting the following error:
AssertionError: Expected 'my_add' to be called once. Called 0 times.
The last assert would also fail:
AssertionError: 5 != <MagicMock name='MyMath().my_add()' id='3006283127328'>
I would expect that the above test passes. What I did wrong?
UPDATE:
Restrictions:
I would not change the tested part if possible. (I am curious if it is even possible, and this is the point of the question.)
If not possible, then I want the least amount of change in the part being tested. Especially I want to keep the my_add() function non-static.
A:
Your code is almost there, some small changes and you'll be okay:
my_add should be a class method since self does not really play a role here.
If my_add is an instance method, then it will be harder to trace the calls, since your test will track the instance's mock, not the class's.
Since you are patching, not stubbing, you should use the "real thing", except when mocking the return value.
Here's what that looks like in your code:
class MyMath:
@classmethod
def my_add(cls, a, b):
return a + b
def add_three_and_two():
return MyMath.my_add(3, 2)
Now, the test:
import unittest
from unittest.mock import patch, MagicMock
import mymath
class TestMyMath(unittest.TestCase):
@patch('mymath.MyMath')
def test_add_three_and_two(self, mymath_mock):
# Mock what `mymath` would return
mymath_mock.my_add.return_value = 5
# We are patching, not stubbing, so use the real thing
result = mymath.add_three_and_two()
mymath.MyMath.my_add.assert_called_once_with(3, 2)
self.assertEqual(5, result)
unittest.main()
This should now work.
A:
Instead of patching the entire class, just patch the function.
class TestMyMath(unittest.TestCase):
@patch.object(mymath.MyMath, 'my_add')
def test_add_three_and_two(self, m):
m.return_value = 5
result = mymath.add_three_and_two()
m.assert_called_once_with(3, 2)
self.assertEqual(5, result)
I think the original problem is that MyMath() returns a fresh child mock (mymath_mock.return_value), so the instance's my_add is a different mock from mymath_mock.my_add; you configured one Mock's return_value attribute, but then checked whether a different Mock was called. At the very least, using patch.object ensures you are disturbing your original code as little as possible.
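If you do want to keep patching the whole class (and keep my_add an instance method, per the update), configure the mock that MyMath() actually returns, i.e. mymath_mock.return_value; a sketch:
class TestMyMath(unittest.TestCase):
    @patch('mymath.MyMath')
    def test_add_three_and_two(self, mymath_mock):
        instance = mymath_mock.return_value      # what MyMath() returns
        instance.my_add.return_value = 5

        result = mymath.add_three_and_two()

        instance.my_add.assert_called_once_with(3, 2)
        self.assertEqual(5, result)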
|
Mock class in Python with decorator patch
|
I would like to patch a class in Python in unit testing. The main code is this (mymath.py):
class MyMath:
def my_add(self, a, b):
return a + b
def add_three_and_two():
my_math = MyMath()
return my_math.my_add(3, 2)
The test class is this:
import unittest
from unittest.mock import patch
import mymath
class TestMyMath(unittest.TestCase):
@patch('mymath.MyMath')
def test_add_three_and_two(self, mymath_mock):
mymath_mock.my_add.return_value = 5
result = mymath.add_three_and_two()
mymath_mock.my_add.assert_called_once_with(3, 2)
self.assertEqual(5, result)
unittest.main()
I am getting the following error:
AssertionError: Expected 'my_add' to be called once. Called 0 times.
The last assert would also fail:
AssertionError: 5 != <MagicMock name='MyMath().my_add()' id='3006283127328'>
I would expect that the above test passes. What I did wrong?
UPDATE:
Restrictions:
I would not change the tested part if possible. (I am curious if it is even possible, and this is the point of the question.)
If not possible, then I want the least amount of change in the part being tested. Especially I want to keep the my_add() function non-static.
|
[
"Your code is almost there, some small changes and you'll be okay:\n\nmy_add should be a class method since self does not really play a role here.\nIf my_add is an instance method, then it will be harder to trace the calls, since your test will track the instance signature, not the class sig\nSince you are are patching, not stubbing, you should use the \"real thing\", except when mocking the return value.\n\nHere's what that looks like in your code:\nclass MyMath:\n\n @classmethod\n def my_add(cls, a, b):\n return a + b\n\ndef add_three_and_two():\n return MyMath.my_add(3, 2)\n\n\nNow, the test:\nimport unittest\nfrom unittest.mock import patch, MagicMock\nimport mymath\n\n\nclass TestMyMath(unittest.TestCase):\n\n @patch('mymath.MyMath')\n def test_add_three_and_two(self, mymath_mock):\n\n # Mock what `mymath` would return \n mymath_mock.my_add.return_value = 5\n\n # We are patching, not stubbing, so use the real thing\n result = mymath.add_three_and_two()\n mymath.MyMath.my_add.assert_called_once_with(3, 2)\n self.assertEqual(5, result)\n\n\nunittest.main()\n\nThis should now work.\n",
"Instead of patching the entire class, just patch the function.\nclass TestMyMath(unittest.TestCase):\n @patch.object(mymath.MyMath, 'my_add')\n def test_add_three_and_two(self, m):\n m.return_value = 5\n\n result = mymath.add_three_and_two()\n\n m.assert_called_once_with(3, 2)\n self.assertEqual(5, result)\n\nI think the original problems is that my_math.my_add produces a new mock object every time it is used; you configured one Mock's return_value attribute, but then checked if another Mock instance was called. At the very least, using patch.object ensures you are disturbing your original code as little as possible.\n"
] |
[
2,
0
] |
[] |
[] |
[
"python",
"python_unittest",
"python_unittest.mock"
] |
stackoverflow_0074525368_python_python_unittest_python_unittest.mock.txt
|
Q:
Find groups of connected nodes in Cypher
I have nodes with label A. Some of them are connected with the relationship TEST (see Figure A).
I want to MATCH the groups of connected nodes, create a new node B for each group and create a relationship from each member of the group to the new node B (see Figure B). I know that the groups are small, never more than 3 steps of TEST relationships.
How can I MATCH the A nodes and return connected groups? Is there a graph algorithm implemented in APOC?
A:
I found the answer, maybe it's still helpful for someone:
There are several algorithms for community detection in the graph algorithms package (https://neo4j.com/docs/graph-algorithms/current/). In this case, we look for connected components: https://neo4j.com/docs/graph-algorithms/current/algorithms/connected-components/
The algorithm can find connected components and store an ID for the component on the nodes:
CALL algo.unionFind('A', 'TEST', {write:true, partitionProperty:"partition"})
YIELD nodes, setCount, loadMillis, computeMillis, writeMillis;
With this new property it's simple to MATCH all nodes belonging to a particular group:
MATCH (a:A)
WITH a.partition AS p, a
RETURN p, count(a)
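To then materialize the B nodes the question asks for (one per component, linked from every member), a sketch building on the partition property; the label B and the relationship type IN_GROUP are assumptions:
MATCH (a:A)
WITH a.partition AS p, collect(a) AS members
CREATE (b:B {partition: p})
WITH b, members
UNWIND members AS m
CREATE (m)-[:IN_GROUP]->(b)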
A:
algo.unionFind seems to be deprecated. The replacement is gds.wcc.write, which runs on a projected graph, so project the nodes and relationships first (GDS 2.x syntax; in 1.x the projection call was gds.graph.create):
CALL gds.graph.project('myGraph', 'A', 'TEST');
CALL gds.wcc.write('myGraph', { writeProperty: 'partition' });
Here is the syntax from the docs
CALL gds.wcc.write(
graphName: String,
configuration: Map
)
YIELD
componentCount: Integer,
nodePropertiesWritten: Integer,
preProcessingMillis: Integer,
computeMillis: Integer,
writeMillis: Integer,
postProcessingMillis: Integer,
componentDistribution: Map,
configuration: Map
See https://neo4j.com/docs/graph-data-science/current/algorithms/wcc/ for details
|
Find groups of connected nodes in Cypher
|
I have nodes with label A. Some of them are connected with the relationship TEST (see Figure A).
I want to MATCH the groups of connected nodes, create a new node B for each group and create a relationship from each member of the group to the new node B (see Figure B). I know that the groups are small, never more than 3 steps of TEST relationships.
How can I MATCH the A nodes and return connected groups? Is there a graph algorithm implemented in APOC?
|
[
"I found the answer, maybe it's still helpful for someone:\nThere are several algorithms for community detection in the graph algorithm package ()https://neo4j.com/docs/graph-algorithms/current/. In this case, we look for connected components: https://neo4j.com/docs/graph-algorithms/current/algorithms/connected-components/\nThe algorithm can find connected components and store an ID for the component on the nodes:\nCALL algo.unionFind('A', 'TEST', {write:true, partitionProperty:\"partition\"})\nYIELD nodes, setCount, loadMillis, computeMillis, writeMillis;\n\nWith this new property it's simple to MATCH all nodes belonging to a particular group:\nMATCH (a:A)\nWITH a.partition AS p, a\nRETURN p, count(a)\n\n",
"algo.unionFind seems to be deprecated. The replacement is:\nCALL gds.wcc.write('A', {nodeLabels: 'TEST', writeProperty: 'partition' });\n\nHere is the syntax from the docs\nCALL gds.wcc.write(\n graphName: String,\n configuration: Map\n)\nYIELD\n componentCount: Integer,\n nodePropertiesWritten: Integer,\n preProcessingMillis: Integer,\n computeMillis: Integer,\n writeMillis: Integer,\n postProcessingMillis: Integer,\n componentDistribution: Map,\n configuration: Map\n\nSee https://neo4j.com/docs/graph-data-science/current/algorithms/wcc/ for details\n"
] |
[
2,
0
] |
[] |
[] |
[
"cypher",
"neo4j"
] |
stackoverflow_0055042489_cypher_neo4j.txt
|
Q:
I want to convert String value from API to Custom Codable Model
I have a response from API like this:
{
"data": {
"items": [
{
"jsonBody": "{\n \"documentInfo\": {\n \"docId\": \"AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA\",\n \"docCreateTimestamp\": 56565687867,\n \"docUpdateTimestamp\": 56465755766,\n \"docSynchTimestamp\": 56565687867,\n \"documentType\": \"document-monthly-instagram\",\n \"documentName\": \"Monthly Instagram Posts\",\n \"docDescription\": \"Monthly Instagram Posts\"\n },\n \"documentData\": {\n \"header\": {\n \"configuration\": {\n \"autoGenerate\": [\n {\n \"id\": \"headerDocNumberId\",\n \"type\": \"8code\"\n },\n {\n \"id\": \"headerVersionNumberId\",\n \"type\": \"autonum\"\n },\n {\n \"id\": \"headerDocumentCreationDateId\",\n \"type\": \"date\"\n }\n ],\n \"autoFill\": [],\n \"tileTitle\": \"headerDocumentTileNameId\",\n \"tileView\": [\n \"headerDocumentTileNameId\",\n \"headerDetailsId\",\n \"headerDocumentCreationDateId\"\n ],\n \"createView\": [\n \"headerMonthNameId\",\n \"headerNameId\",\n \"headerDetailsId\"\n ],\n \"editView\": [\n \"headerMonthNameId\",\n \"headerNameId\",\n \"headerDetailsId\"\n ]\n },\n \"data\": [\n {\n \"id\": \"headerDocNumberId\",\n \"name\": \"Document Number\",\n \"attr\": \"\",\n \"placeholder\": \"Enter An Unique Value\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": false,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerVersionNumberId\",\n \"name\": \"Document Version\",\n \"attr\": \"\",\n \"placeholder\": \"Enter A Number\",\n \"value\": 1,\n \"type\": \"int\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": true,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerDocumentCreationDateId\",\n \"name\": \"Time & Date\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Date\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": true,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerNameId\",\n \"name\": \"Name\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Name\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": true,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerDocumentTileNameId\",\n \"name\": \"Document Type\",\n \"attr\": \"\",\n \"placeholder\": \"\",\n \"value\": \"Monthly Log Book\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"readonly\": true,\n \"requred\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerMonthNameId\",\n \"name\": \"Month\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Post Month\",\n \"value\": \"\",\n \"type\": \"list\",\n \"extType\": \"monthTypeListId\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerDetailsId\",\n \"name\": \"Details\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Details\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\"\n }\n ]\n },\n \"content\": {\n \"groups\": [\n {\n \"name\": \"Week 1\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n \"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n },\n {\n \"name\": \"Week 2\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n \"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n },\n {\n \"name\": \"Week 3\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n \"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n },\n {\n \"name\": \"Week 4\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n 
\"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n }\n ]\n },\n \"summary\": \"\"\n },\n \"documentStructure\": {\n \"sectionTypes\": [\n {\n \"id\": \"typePostId\",\n \"name\": \"INSTAGRAM POST\",\n \"order\": 0,\n \"rowType\": {\n \"extendable\": \"static\",\n \"allowedTypeIds\": [\n \"rowDateId\",\n \"rowNameOfEventId\",\n \"rowDescriptionId\",\n \"rowHashTagsId\",\n \"rowLocationId\",\n \"rowPeopleLinksId\",\n \"rowLogosId\",\n \"rowPicturesVideoId\",\n \"rowPromoteId\",\n \"rowDestinationId\",\n \"rowAudienceId\",\n \"rowBudgetId\"\n ]\n },\n \"rows\": [\n ],\n \"collapsed\": false\n }\n ],\n \"rowTypes\": [\n {\n \"id\": \"rowDateId\",\n \"name\": \"Date\",\n \"attr\": \"Select date\",\n \"placeholder\": \"Enter Trip Date\",\n \"value\": \"\",\n \"type\": \"date\",\n \"extType\": \"\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 2\n },\n {\n \"id\": \"rowNameOfEventId\",\n \"name\": \"Name of Event\",\n \"attr\": \"Enter name of event\",\n \"placeholder\": \"Enter name of event\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 3\n },\n {\n \"id\": \"rowDescriptionId\",\n \"name\": \"Decription of Event\",\n \"attr\": \"Enter description of event\",\n \"placeholder\": \"Enter description\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 4\n },\n {\n \"id\": \"rowHashTagsId\",\n \"name\": \"Hash Tags\",\n \"attr\": \"Enter hash tags\",\n \"placeholder\": \"Enter hash tags\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 5\n },\n {\n \"id\": \"rowLocationId\",\n \"name\": \"Location\",\n \"attr\": \"Select location\",\n \"placeholder\": \"Enter location\",\n \"value\": {\n \"description\": \"\",\n \"location\": {}\n },\n \"type\": \"address\",\n \"extType\": \"300\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 6\n },\n {\n \"id\": \"rowPeopleLinksId\",\n \"name\": \"People links\",\n \"attr\": \"Enter people links\",\n \"placeholder\": \"Enter people links\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 7\n },\n {\n \"id\": \"rowLogosId\",\n \"name\": \"Logos\",\n \"attr\": \"Logos\",\n \"placeholder\": \"\",\n \"value\": [],\n \"type\": \"photo\",\n \"extType\": \"3\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 8\n },\n {\n \"id\": \"rowPicturesVideoId\",\n \"name\": \"Pictures/Video Records\",\n \"attr\": \"Pictures/Video\",\n \"placeholder\": \"\",\n \"value\": [],\n \"type\": \"video\",\n \"extType\": \"5\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 9\n },\n {\n \"id\": \"rowPromoteId\",\n \"name\": \"Promote\",\n \"attr\": \"\",\n \"placeholder\": \"\",\n \"value\": false,\n \"type\": \"bool\",\n \"extType\": \"\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 10\n },\n {\n \"id\": \"rowDestinationId\",\n \"name\": \"Destination\",\n \"attr\": \"Please select destination\",\n \"placeholder\": \"Select destination\",\n \"value\": \"iDestinationProfileId\",\n \"type\": \"list\",\n \"extType\": \"DestinationTypeListId\",\n \"readonly\": false,\n \"requred\": 
false,\n \"keyboardType\": \"\",\n \"col\": 11\n },\n {\n \"id\": \"rowAudienceId\",\n \"name\": \"Audience\",\n \"attr\": \"Please select audience to promote\",\n \"placeholder\": \"Select audience\",\n \"value\": \"iAudienceAutomaticId\",\n \"type\": \"list\",\n \"extType\": \"AudienceTypeListId\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 12\n },\n {\n \"id\": \"rowBudgetId\",\n \"name\": \"Budget&Duration\",\n \"attr\": \"Please choose Budget or Duration\",\n \"placeholder\": \"Select Budget or Duration\",\n \"value\": \"iBudgetBudgetId\",\n \"type\": \"list\",\n \"extType\": \"BudgetTypeListId\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 13\n }\n ],\n \"listTypes\": [\n {\n \"id\": \"DestinationTypeListId\",\n \"items\": [\n {\n \"id\": \"iDestinationProfileId\",\n \"value\": \"Your Profile\",\n \"selected\": true\n },\n {\n \"id\": \"iDestinationWebsiteId\",\n \"value\": \"Your Website\",\n \"selected\": false\n },\n {\n \"id\": \"iDestinationDirectId\",\n \"value\": \"Your Direct Messages\",\n \"selected\": false\n }\n ]\n },\n {\n \"id\": \"AudienceTypeListId\",\n \"items\": [\n {\n \"id\": \"iAudienceAutomaticId\",\n \"value\": \"Automatic\",\n \"selected\": true\n },\n {\n \"id\": \"iAudienceKUId\",\n \"value\": \"Kite Union Clients\",\n \"selected\": false\n },\n {\n \"id\": \"iAudienceCreateId\",\n \"value\": \"Create Your Own\",\n \"selected\": false\n }\n ]\n },\n {\n \"id\": \"BudgetTypeListId\",\n \"items\": [\n {\n \"id\": \"iBudgetBudgetId\",\n \"value\": \"Budget\",\n \"selected\": true\n },\n {\n \"id\": \"iBudgetDurationId\",\n \"value\": \"Duration\",\n \"selected\": false\n }\n ]\n },\n {\n \"id\": \"monthTypeListId\",\n \"items\": [\n {\n \"id\": \"iMonthTypeJanuaryId\",\n \"value\": \"January\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeFebruaryId\",\n \"value\": \"February\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeMarchId\",\n \"value\": \"March\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeAprilId\",\n \"value\": \"April\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeMayId\",\n \"value\": \"May\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeJuneId\",\n \"value\": \"June\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeJulyId\",\n \"value\": \"July\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeAugustId\",\n \"value\": \"August\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeSeptemberId\",\n \"value\": \"September\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeOctoberId\",\n \"value\": \"October\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeNovemberId\",\n \"value\": \"November\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeDecemberId\",\n \"value\": \"December\",\n \"selected\": false\n }\n ]\n }\n ]\n }\n}"
}
]
},
"code": 0,
"message": ""
}
I want to convert the value of "jsonBody", which is a string, into a Codable model.
I have tried to convert it into [String: Array], but when I assign my model to the result it gives me this error:
typeMismatch(Swift.Dictionary<Swift.String, Any>, Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "jsonBody", intValue: nil)], debugDescription: "Expected to decode Dictionary<String, Any> but found a string/data instead.", underlyingError: nil))
A:
Nothing really complicated about that; the item has to use a secondary JSONDecoder:
struct Item: Decodable {
let body: Body
init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
let jsonBody = try container.decode(String.self, forKey: .jsonBody)
let jsonBodyData = jsonBody.data(using: .utf8)!
let decoder = JSONDecoder()
body = try decoder.decode(Body.self, from: jsonBodyData)
}
private enum CodingKeys: String, CodingKey {
case jsonBody
}
}
struct Body: Decodable {
let documentInfo: ...
}
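To tie it together, here is a minimal, hedged sketch of the wrapper types around Item, inferred from the JSON shape shown above; the names Response and DataContainer are assumptions, not part of the original code:
struct Response: Decodable {
    let data: DataContainer
    let code: Int
    let message: String
}
struct DataContainer: Decodable {
    let items: [Item] // each Item re-decodes its jsonBody string, as above
}
// Usage (rawData is the raw API payload):
// let response = try JSONDecoder().decode(Response.self, from: rawData)
// let firstBody = response.data.items.first?.body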
A:
This was a tough one. Hope this helps someone else.
I am parsing data from a JSON file. Here is the code to parse the JSON:
guard let url = Bundle.main.url(forResource: "SYNC_DATA", withExtension: "json") else {
fatalError("Failed to locate SYNC_DATA in bundle")
}
if let data = try? Data(contentsOf: url) {
let decoder = JSONDecoder()
if let syncData = try? decoder.decode(SyncResponse.self, from: data) {
print("Successfully decoded sync data!!!")
print("Sync Data = \(syncData)")
}
} else {
fatalError("Failed to load SYNC_DATA from bundle")
}
Here is the Codable:
struct SyncResponse: Codable {
//MARK: items currently in the sync download
var serverTimestamp: String
var status: Bool
//MARK: init method used to decode the data for this object
init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
self.serverTimestamp = try container.decode(String.self, forKey: .serverTimestamp)
self.status = try container.decode(Bool.self, forKey: .status)
....(other data to decode)
}
enum CodingKeys: String, CodingKey {
case serverTimestamp = "serverTimestamp"
case status = "status"
....(other data)
}
}
|
I want to convert String value from API to Custom Codable Model
|
I have a response from API like this:
{
"data": {
"items": [
{
"jsonBody": "{\n \"documentInfo\": {\n \"docId\": \"AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA\",\n \"docCreateTimestamp\": 56565687867,\n \"docUpdateTimestamp\": 56465755766,\n \"docSynchTimestamp\": 56565687867,\n \"documentType\": \"document-monthly-instagram\",\n \"documentName\": \"Monthly Instagram Posts\",\n \"docDescription\": \"Monthly Instagram Posts\"\n },\n \"documentData\": {\n \"header\": {\n \"configuration\": {\n \"autoGenerate\": [\n {\n \"id\": \"headerDocNumberId\",\n \"type\": \"8code\"\n },\n {\n \"id\": \"headerVersionNumberId\",\n \"type\": \"autonum\"\n },\n {\n \"id\": \"headerDocumentCreationDateId\",\n \"type\": \"date\"\n }\n ],\n \"autoFill\": [],\n \"tileTitle\": \"headerDocumentTileNameId\",\n \"tileView\": [\n \"headerDocumentTileNameId\",\n \"headerDetailsId\",\n \"headerDocumentCreationDateId\"\n ],\n \"createView\": [\n \"headerMonthNameId\",\n \"headerNameId\",\n \"headerDetailsId\"\n ],\n \"editView\": [\n \"headerMonthNameId\",\n \"headerNameId\",\n \"headerDetailsId\"\n ]\n },\n \"data\": [\n {\n \"id\": \"headerDocNumberId\",\n \"name\": \"Document Number\",\n \"attr\": \"\",\n \"placeholder\": \"Enter An Unique Value\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": false,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerVersionNumberId\",\n \"name\": \"Document Version\",\n \"attr\": \"\",\n \"placeholder\": \"Enter A Number\",\n \"value\": 1,\n \"type\": \"int\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": true,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerDocumentCreationDateId\",\n \"name\": \"Time & Date\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Date\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": true,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerNameId\",\n \"name\": \"Name\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Name\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"tag\": \"\",\n \"readonly\": true,\n \"required\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerDocumentTileNameId\",\n \"name\": \"Document Type\",\n \"attr\": \"\",\n \"placeholder\": \"\",\n \"value\": \"Monthly Log Book\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"readonly\": true,\n \"requred\": true,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerMonthNameId\",\n \"name\": \"Month\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Post Month\",\n \"value\": \"\",\n \"type\": \"list\",\n \"extType\": \"monthTypeListId\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\"\n },\n {\n \"id\": \"headerDetailsId\",\n \"name\": \"Details\",\n \"attr\": \"\",\n \"placeholder\": \"Enter Details\",\n \"value\": \"\",\n \"type\": \"text\",\n \"extType\": \"\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\"\n }\n ]\n },\n \"content\": {\n \"groups\": [\n {\n \"name\": \"Week 1\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n \"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n },\n {\n \"name\": \"Week 2\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n \"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n },\n {\n \"name\": \"Week 3\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n \"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n },\n {\n \"name\": \"Week 4\",\n \"sectionType\": {\n \"extendable\": \"dynamic\",\n 
\"allowedTypeIds\": [\n \"typePostId\"\n ]\n },\n \"sections\": [\n ]\n }\n ]\n },\n \"summary\": \"\"\n },\n \"documentStructure\": {\n \"sectionTypes\": [\n {\n \"id\": \"typePostId\",\n \"name\": \"INSTAGRAM POST\",\n \"order\": 0,\n \"rowType\": {\n \"extendable\": \"static\",\n \"allowedTypeIds\": [\n \"rowDateId\",\n \"rowNameOfEventId\",\n \"rowDescriptionId\",\n \"rowHashTagsId\",\n \"rowLocationId\",\n \"rowPeopleLinksId\",\n \"rowLogosId\",\n \"rowPicturesVideoId\",\n \"rowPromoteId\",\n \"rowDestinationId\",\n \"rowAudienceId\",\n \"rowBudgetId\"\n ]\n },\n \"rows\": [\n ],\n \"collapsed\": false\n }\n ],\n \"rowTypes\": [\n {\n \"id\": \"rowDateId\",\n \"name\": \"Date\",\n \"attr\": \"Select date\",\n \"placeholder\": \"Enter Trip Date\",\n \"value\": \"\",\n \"type\": \"date\",\n \"extType\": \"\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 2\n },\n {\n \"id\": \"rowNameOfEventId\",\n \"name\": \"Name of Event\",\n \"attr\": \"Enter name of event\",\n \"placeholder\": \"Enter name of event\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 3\n },\n {\n \"id\": \"rowDescriptionId\",\n \"name\": \"Decription of Event\",\n \"attr\": \"Enter description of event\",\n \"placeholder\": \"Enter description\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 4\n },\n {\n \"id\": \"rowHashTagsId\",\n \"name\": \"Hash Tags\",\n \"attr\": \"Enter hash tags\",\n \"placeholder\": \"Enter hash tags\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 5\n },\n {\n \"id\": \"rowLocationId\",\n \"name\": \"Location\",\n \"attr\": \"Select location\",\n \"placeholder\": \"Enter location\",\n \"value\": {\n \"description\": \"\",\n \"location\": {}\n },\n \"type\": \"address\",\n \"extType\": \"300\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 6\n },\n {\n \"id\": \"rowPeopleLinksId\",\n \"name\": \"People links\",\n \"attr\": \"Enter people links\",\n \"placeholder\": \"Enter people links\",\n \"value\": \"\",\n \"type\": \"textArea\",\n \"extType\": \"512\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 7\n },\n {\n \"id\": \"rowLogosId\",\n \"name\": \"Logos\",\n \"attr\": \"Logos\",\n \"placeholder\": \"\",\n \"value\": [],\n \"type\": \"photo\",\n \"extType\": \"3\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 8\n },\n {\n \"id\": \"rowPicturesVideoId\",\n \"name\": \"Pictures/Video Records\",\n \"attr\": \"Pictures/Video\",\n \"placeholder\": \"\",\n \"value\": [],\n \"type\": \"video\",\n \"extType\": \"5\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 9\n },\n {\n \"id\": \"rowPromoteId\",\n \"name\": \"Promote\",\n \"attr\": \"\",\n \"placeholder\": \"\",\n \"value\": false,\n \"type\": \"bool\",\n \"extType\": \"\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 10\n },\n {\n \"id\": \"rowDestinationId\",\n \"name\": \"Destination\",\n \"attr\": \"Please select destination\",\n \"placeholder\": \"Select destination\",\n \"value\": \"iDestinationProfileId\",\n \"type\": \"list\",\n \"extType\": \"DestinationTypeListId\",\n \"readonly\": false,\n \"requred\": 
false,\n \"keyboardType\": \"\",\n \"col\": 11\n },\n {\n \"id\": \"rowAudienceId\",\n \"name\": \"Audience\",\n \"attr\": \"Please select audience to promote\",\n \"placeholder\": \"Select audience\",\n \"value\": \"iAudienceAutomaticId\",\n \"type\": \"list\",\n \"extType\": \"AudienceTypeListId\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 12\n },\n {\n \"id\": \"rowBudgetId\",\n \"name\": \"Budget&Duration\",\n \"attr\": \"Please choose Budget or Duration\",\n \"placeholder\": \"Select Budget or Duration\",\n \"value\": \"iBudgetBudgetId\",\n \"type\": \"list\",\n \"extType\": \"BudgetTypeListId\",\n \"readonly\": false,\n \"requred\": false,\n \"keyboardType\": \"\",\n \"col\": 13\n }\n ],\n \"listTypes\": [\n {\n \"id\": \"DestinationTypeListId\",\n \"items\": [\n {\n \"id\": \"iDestinationProfileId\",\n \"value\": \"Your Profile\",\n \"selected\": true\n },\n {\n \"id\": \"iDestinationWebsiteId\",\n \"value\": \"Your Website\",\n \"selected\": false\n },\n {\n \"id\": \"iDestinationDirectId\",\n \"value\": \"Your Direct Messages\",\n \"selected\": false\n }\n ]\n },\n {\n \"id\": \"AudienceTypeListId\",\n \"items\": [\n {\n \"id\": \"iAudienceAutomaticId\",\n \"value\": \"Automatic\",\n \"selected\": true\n },\n {\n \"id\": \"iAudienceKUId\",\n \"value\": \"Kite Union Clients\",\n \"selected\": false\n },\n {\n \"id\": \"iAudienceCreateId\",\n \"value\": \"Create Your Own\",\n \"selected\": false\n }\n ]\n },\n {\n \"id\": \"BudgetTypeListId\",\n \"items\": [\n {\n \"id\": \"iBudgetBudgetId\",\n \"value\": \"Budget\",\n \"selected\": true\n },\n {\n \"id\": \"iBudgetDurationId\",\n \"value\": \"Duration\",\n \"selected\": false\n }\n ]\n },\n {\n \"id\": \"monthTypeListId\",\n \"items\": [\n {\n \"id\": \"iMonthTypeJanuaryId\",\n \"value\": \"January\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeFebruaryId\",\n \"value\": \"February\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeMarchId\",\n \"value\": \"March\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeAprilId\",\n \"value\": \"April\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeMayId\",\n \"value\": \"May\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeJuneId\",\n \"value\": \"June\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeJulyId\",\n \"value\": \"July\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeAugustId\",\n \"value\": \"August\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeSeptemberId\",\n \"value\": \"September\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeOctoberId\",\n \"value\": \"October\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeNovemberId\",\n \"value\": \"November\",\n \"selected\": false\n },\n {\n \"id\": \"iMonthTypeDecemberId\",\n \"value\": \"December\",\n \"selected\": false\n }\n ]\n }\n ]\n }\n}"
}
]
},
"code": 0,
"message": ""
}
I want to convert the value of "jsonBody", which is a string, into a Codable model.
I have tried to convert it into [String: Array], but when I assign my model to the result it gives me this error:
typeMismatch(Swift.Dictionary<Swift.String, Any>, Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "jsonBody", intValue: nil)], debugDescription: "Expected to decode Dictionary<String, Any> but found a string/data instead.", underlyingError: nil))
|
[
"Nothing really complicated about that, the item has to use a secondary JsonDecoder:\nstruct Item: Decodable {\n let body: Body\n\n init(from decoder: Decoder) throws {\n let container = try decoder.container(keyedBy: CodingKeys.self)\n let jsonBody = try container.decode(String.self, forKey: .jsonBody)\n let jsonBodyData = jsonBody.data(using: .utf8)!\n\n let decoder = JSONDecoder()\n body = try decoder.decode(Body.self, from: jsonBodyData)\n }\n\n private enum CodingKeys: String, CodingKey {\n case jsonBody\n }\n}\n\nstruct Body: Decodable {\n let documentInfo: ...\n}\n\n",
"This was a tough one. Hope this helps someone else.\nI am parsing data from a JSON file. Here is the code to get the parse the JSON:\nguard let url = Bundle.main.url(forResource: \"SYNC_DATA\", withExtension: \"json\") else {\n fatalError(\"Failed to locate SYNC_DATA in bundle\")\n }\n if let data = try? Data(contentsOf: url) {\n let decoder = JSONDecoder()\n if let syncData = try? decoder.decode(SyncResponse.self, from: data) {\n print(\"Successfully decoded sync data!!!\")\n print(\"Sync Data = \\(syncData)\")\n }\n } else {\n fatalError(\"Failed to load SYNC_DATA from bundle\")\n }\n\nHere is the Codable:\nstruct SyncResponse: Codable {\n//MARK: items currently in the sync download\nvar serverTimestamp: String\nvar status: Bool\n\n//MARK: init method used to decode the data for this object\ninit(from decoder: Decoder) throws {\n let container = try decoder.container(keyedBy: CodingKeys.self)\n self.serverTimestamp = try container.decode(String.self, forKey: .serverTimestamp)\n self.status = try container.decode(Bool.self, forKey: .status)\n ....(other data to decode)\n}\n\nenum CodingKeys: String, CodingKey {\n case serverTimestamp = \"serverTimestamp\"\n case status = \"status\"\n ....(other data)\n}\n}\n\n"
] |
[
4,
0
] |
[] |
[] |
[
"codable",
"swift",
"xcode"
] |
stackoverflow_0063945091_codable_swift_xcode.txt
|
Q:
Undefined attribute with telegram API
My apologies. This is the first bit of code I've written. Spent all day trying to find a way to sort it and this is my last resort.
When someone forwards a message from a user with a username I've defined, the bot sends a message back saying it's a real user. When someone forwards a message from a user with no username, the bot sends a message back saying it's probably a scammer... but when someone forwards a message from someone with no username who has disallowed message forwarding in their Telegram privacy settings, I get an attribute error.
AttributeError: 'NoneType' object has no attribute 'username'
here is my code
#!/usr/bin/python3
from telegram.ext.updater import Updater
from telegram.update import Update
from telegram.ext.callbackcontext import CallbackContext
from telegram.ext.commandhandler import CommandHandler
from telegram.ext.messagehandler import MessageHandler
from telegram.ext.filters import Filters
updater = Updater("My Token", use_context=True)
def start(update: Update, context: CallbackContext):
update.message.reply_text(
"Hi. I'm a bot that sniffs out this softwares scammers so you don't get exploited. Simply forward a message from the scammer to me "
"and I'll tell you if it's the real support or not.")
def usercheck(update: Update, context: CallbackContext):
if update.effective_message.forward_from.username == "admin1":
update.effective_message.reply_text("✅This message was sent from the real admin1, an authorised reseller from this group")
elif update.effective_message.forward_from.username == "dev1":
update.effective_message.reply_text("✅This message was sent from the real dev1, an authorised developer of group")
elif update.effective_message.forward_from.username == None:
update.effective_message.reply_text(" If this user says they're from Gunbot support - they are trying to scam you. "
"Please report them and stop talking to them immediately. ") #works but only if the user has no privacy settings
else:
update.effective_message.reply_text(" If this user says they're from My softwares support - they are trying to scam you. "
"Please report them and stop talking to them immediately. ") #would have thought it works if the user hides their privacy settings but it doesn't
updater.dispatcher.add_handler(CommandHandler("start", start))
updater.dispatcher.add_handler(MessageHandler(Filters.forwarded, usercheck))
updater.start_polling()
updater.idle()
A:
From the telegram docs, forward_from is optional, so it could be None. You should verify that update.effective_message.forward_from is not None before accessing effective_message.forward_from.username.
def usercheck(update: Update, context: CallbackContext):
if update.effective_message.forward_from is not None:
if update.effective_message.forward_from.username == "admin1":
# ...
else:
# do something else
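For completeness, a fuller sketch of the same guard in the question's style (assuming the same python-telegram-bot version as the question; the reply texts are placeholders):
def usercheck(update: Update, context: CallbackContext):
    user = update.effective_message.forward_from
    if user is None:
        # forward_from is absent when the sender hides their account via privacy settings
        update.effective_message.reply_text("This user hides their account when forwarding - treat them as a scammer.")
    elif user.username == "admin1":
        update.effective_message.reply_text("✅This message was sent from the real admin1")
    elif user.username is None:
        update.effective_message.reply_text("This user has no public username - probably a scammer.")
    else:
        update.effective_message.reply_text("Unknown user - probably a scammer.")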
|
Undefined attribute with telegram API
|
My apologies. This is the first bit of code I've written. Spent all day trying to find a way to sort it and this is my last resort.
When someone forwards a message from a user with a username I've defined, the bot sends a message back saying it's a real user. When someone forwards a message from a user with no username, the bot sends a message back saying it's probably a scammer... but when someone forwards a message from someone with no username who has disallowed message forwarding in their Telegram privacy settings, I get an attribute error.
AttributeError: 'NoneType' object has no attribute 'username'
here is my code
#!/usr/bin/python3
from telegram.ext.updater import Updater
from telegram.update import Update
from telegram.ext.callbackcontext import CallbackContext
from telegram.ext.commandhandler import CommandHandler
from telegram.ext.messagehandler import MessageHandler
from telegram.ext.filters import Filters
updater = Updater("My Token", use_context=True)
def start(update: Update, context: CallbackContext):
update.message.reply_text(
"Hi. I'm a bot that sniffs out this softwares scammers so you don't get exploited. Simply forward a message from the scammer to me "
"and I'll tell you if it's the real support or not.")
def usercheck(update: Update, context: CallbackContext):
if update.effective_message.forward_from.username == "admin1":
update.effective_message.reply_text("✅This message was sent from the real admin1, an authorised reseller from this group")
elif update.effective_message.forward_from.username == "dev1":
update.effective_message.reply_text("✅This message was sent from the real dev1, an authorised developer of group")
elif update.effective_message.forward_from.username == None:
update.effective_message.reply_text(" If this user says they're from Gunbot support - they are trying to scam you. "
"Please report them and stop talking to them immediately. ") #works but only if the user has no privacy settings
else:
update.effective_message.reply_text(" If this user says they're from My softwares support - they are trying to scam you. "
"Please report them and stop talking to them immediately. ") #would have thought it works if the user hides their privacy settings but it doesn't
updater.dispatcher.add_handler(CommandHandler("start", start))
updater.dispatcher.add_handler(MessageHandler(Filters.forwarded, usercheck))
updater.start_polling()
updater.idle()
|
[
"From the telegram docs, forward_from is optional, so it could be None. You should verify that update.effective_message.forward_from is not None before accessing effective_message.forward_from.username.\ndef usercheck(update: Update, context: CallbackContext):\n if update.effective_message.forward_from is not None:\n if update.effective_message.forward_from.username == \"admin1\":\n # ...\n else:\n # do something else\n\n"
] |
[
0
] |
[] |
[] |
[
"python_3.x",
"telegram",
"telegram_api"
] |
stackoverflow_0074660842_python_3.x_telegram_telegram_api.txt
|
Q:
how to add character when type datetime input html js
I have a problem with input type = "date":
<input type = "date" or "datetime-local" />
When the user selects the date on the calendar it takes more time than typing. I had an idea like the following, but it seems too difficult for me:
when the user types the two day characters (dd), it will add "/" to the input; after the next two characters (month, mm) it will add another "/", and then the year, like this:
<input type = "number" value="11/08/1999" />
How can I do this?
A:
As another user has mentioned, the input element itself will add "/" to the input.
However, if you log the value of the date input, the default date format is this: yyyy-mm-dd
If you want to reformat the date to this format: dd/mm/yyyy
Then you can do something like the example below:
var dateInput = document.querySelector('#date')
var output = document.querySelector('#output');
var dateButton = document.querySelector('#dateButton');
function displayFormattedDate() {
console.log('Default date output is: ' + dateInput.value)
//convert output to date string
var date = new Date(dateInput.value)
//get day of month; the +1 offsets the one-day shift caused by parsing 'yyyy-mm-dd' as UTC (timezone-dependent)
var dd = date.getDate() + 1
//getMonth() is zero-based, so add 1
var mm = date.getMonth() + 1
//get full year
var yyyy = date.getFullYear()
//concat date variables to dd/mm/yyyy format
var fullDate = dd + '/' + mm + '/' + yyyy
//
output.innerHTML = 'Reformatted date output is: ' + fullDate
}
//apply displayFormattedDate function to button
dateButton.addEventListener('click', displayFormattedDate)
input{
-webkit-appearance: none;
}
<span>Enter Date: </span><input id="date" type="date" /><button id="dateButton">Add Date</button>
<br/><br/>
<p id="output"></p>
If input type="date" appears different in your browser, you can use input type="number" with a custom solution.
I've provided a small example in the snippet below but you will have to add extra validation to ensure date is entered properly.
//declare vars
var mm = document.querySelector('#month')
var dd = document.querySelector('#day')
var yyyy = document.querySelector('#year')
var output = document.querySelector('#output');
var dateButton = document.querySelector('#dateButton');
function displayFormattedDate() {
//combine values from inputs into full date format
var fullDate = mm.value + '/' + dd.value + '/' + yyyy.value
//some minimal validation to check date format
//use || so that any single malformed part triggers the message
if (mm.value.length !== 2 || dd.value.length !== 2 || yyyy.value.length !== 4) {
output.innerHTML = 'Please enter valid date.'
} else {
//display date if format is correct
output.innerHTML = fullDate
var date = new Date(fullDate)
console.log(date)
}
}
dateButton.addEventListener('click', displayFormattedDate)
input {
width: 15%;
}
<div>
<p>Enter Date: </p><input id="day" type="number" placeholder="day" /> / <input id="month" type="number" placeholder="month" /> / <input id="year" type="number" placeholder="year" /> <button id="dateButton">Add Date</button>
<br/><br/>
<p id="output"></p>
</div>
Or Option 3 is to just use input type="text" with replace and regex. The input below will not allow letters to be entered but will still require additional validation to ensure a proper date is being entered.
var dateInput = document.querySelector('#date')
function addSlashes(e) {
dateInput.value = e.target.value.replace(/^(\d\d\/\d\d)(\d+)$/g, '$1/$2').replace(/^(\d\d)(\d)$/g, '$1/$2').replace(/[^\d\/]/g, '')
}
dateInput.addEventListener('keydown', addSlashes)
<span>Enter Date: </span><input id="date" maxlength=10>
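One caveat with that last snippet: keydown fires before the typed character is inserted, so the value seen in the handler lags one keystroke behind. A small sketch using the input event instead (same regex chain, only the trigger differs) avoids this:
var dateInput = document.querySelector('#date')
function addSlashes(e) {
  // 'input' fires after the value has changed, so the new character is included
  dateInput.value = e.target.value.replace(/^(\d\d\/\d\d)(\d+)$/g, '$1/$2').replace(/^(\d\d)(\d)$/g, '$1/$2').replace(/[^\d\/]/g, '')
}
dateInput.addEventListener('input', addSlashes)
<span>Enter Date: </span><input id="date" maxlength=10>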
|
how to add character when type datetime input html js
|
I have a problem with input type = "date":
<input type = "date" or "datetime-local" />
When the user selects the date on the calendar it takes more time than typing. I had an idea like the following, but it seems too difficult for me:
when the user types the two day characters (dd), it will add "/" to the input; after the next two characters (month, mm) it will add another "/", and then the year, like this:
<input type = "number" value="11/08/1999" />
How can I do this?
|
[
"As another user has mentioned, the input element itself will add \"/\" to the input.\nHowever, if you log the value of the date input, the default date format is this: yyyy-mm-dd\nIf you want to reformat the date to this format: dd/mm/yyyy\nThen you can do something like the example below:\n\n\nvar dateInput = document.querySelector('#date')\nvar output = document.querySelector('#output');\nvar dateButton = document.querySelector('#dateButton');\n\nfunction displayFormattedDate() {\n console.log('Default date output is: ' + dateInput.value)\n \n //convert output to date string\n var date = new Date(dateInput.value)\n\n //get dd +1 for timezone offset\n var dd = date.getDate() + 1\n\n //get month +1 for timezone offset \n var mm = date.getMonth() + 1\n\n //get full year\n var yyyy = date.getFullYear()\n\n //concat date variables to dd/mm/yyyy format\n var fullDate = dd + '/' + mm + '/' + yyyy\n\n //\n output.innerHTML = 'Reformatted date output is: ' + fullDate\n}\n\n//apply displayFormattedDate function to button\ndateButton.addEventListener('click', displayFormattedDate)\ninput{\n -webkit-appearance: none;\n}\n<span>Enter Date: </span><input id=\"date\" type=\"date\" /><button id=\"dateButton\">Add Date</button>\n<br/><br/>\n<p id=\"output\"></p>\n\n\n\nIf input type=\"date\" appears different in your browser, you can use input type=\"number\" with a custom solution.\nI've provided a small example in the snippet below but you will have to add extra validation to ensure date is entered properly.\n\n\n//declare vars\nvar mm = document.querySelector('#month')\nvar dd = document.querySelector('#day')\nvar yyyy = document.querySelector('#year')\nvar output = document.querySelector('#output');\nvar dateButton = document.querySelector('#dateButton');\n\nfunction displayFormattedDate() {\n //combine values from inputs into full date format\n var fullDate = mm.value + '/' + dd.value + '/' + yyyy.value\n //some minimal validation to check date format\n if (mm.value.length !== 2 && dd.value.length !== 2 && yyyy.value.length !== 4) {\n output.innerHTML = 'Please enter valid date.'\n } else {\n //display date if format is correct\n output.innerHTML = fullDate\n var date = new Date(fullDate)\n console.log(date)\n }\n}\n\ndateButton.addEventListener('click', displayFormattedDate)\ninput {\n width: 15%;\n}\n<div>\n <p>Enter Date: </p><input id=\"day\" type=\"number\" placeholder=\"day\" /> / <input id=\"month\" type=\"number\" placeholder=\"month\" /> / <input id=\"year\" type=\"number\" placeholder=\"year\" /> <button id=\"dateButton\">Add Date</button>\n <br/><br/>\n <p id=\"output\"></p>\n</div>\n\n\n\nOr Option 3 is to just use input type=\"text\" with replace and regex. The input below will not allow letters to be entered but will still require additional validation to ensure a proper date is being entered.\n\n\nvar dateInput = document.querySelector('#date')\n\nfunction addSlashes(e) {\n dateInput.value = e.target.value.replace(/^(\\d\\d\\/\\d\\d)(\\d+)$/g, '$1/$2').replace(/^(\\d\\d)(\\d)$/g, '$1/$2').replace(/[^\\d\\/]/g, '')\n}\n\ndateInput.addEventListener('keydown', addSlashes)\n<span>Enter Date: </span><input id=\"date\" maxlength=10>\n\n\n\n"
] |
[
1
] |
[] |
[] |
[
"html",
"javascript"
] |
stackoverflow_0074658004_html_javascript.txt
|
Q:
How to fix GitLFS Authorization error via SSH?
After updating to GitLFS v2.9.1 and GitLab 12.5+ on our tool server, the git lfs commands on our staging and production servers have stopped working out of nowhere.
The strange thing is that it still works just fine on our development machines on Windows 10.
Does anyone have an idea how to fix this problem or how to narrow it down?
Thank you very much!
LIV server.test gitresources # sudo git reset --hard origin/ptr
Downloading x/remoteResources/icons/x/x.png (8.7 KB)
Error downloading object: x/remoteResources/icons/x/x.png (873dd61): Smudge error: Error downloading x/remoteResources/icons/x/x.png (873dd61a4de1b23d4a8a86cff92b718ea76d89548d9df1b62e8652c93bbfb3cb): batch response: Authentication required: Authorization error: https://gitlab.x.com/x/x/app.git/info/lfs/objects/batch
Check that you have proper access to the repository
Errors logged to /root/*/gitresources/.git/lfs/logs/20191203T134636.060635499.log
Use `git lfs logs last` to view the log.
error: external filter git-lfs smudge -- %f failed 2
error: external filter git-lfs smudge -- %f failed
16:57:09.952506 git.c:557 trace: exec: 'git-lfs' 'pull'
16:57:09.952585 run-command.c:347 trace: run_command: 'git-lfs' 'pull'
16:57:09.963827 trace git-lfs: exec: git 'version'
16:57:09.969638 trace git-lfs: exec: git 'config' '-l'
16:57:09.972510 trace git-lfs: exec: git '-c' 'filter.lfs.smudge=cat' '-c' 'filter.lfs.clean=cat' '-c' 'filter.lfs.process=' '-c' 'filter.lfs.required=false' 'rev-parse' 'HEAD' '--symbolic-full-name' 'HEAD'
16:57:09.975174 trace git-lfs: exec: git '-c' 'filter.lfs.smudge=cat' '-c' 'filter.lfs.clean=cat' '-c' 'filter.lfs.process=' '-c' 'filter.lfs.required=false' 'rev-parse' 'HEAD' '--symbolic-full-name' 'HEAD'
16:57:09.980910 trace git-lfs: tq: running as batched queue, batch size of 100
16:57:09.983528 trace git-lfs: filepathfilter: accepting ".gitattributes"
16:57:09.983571 trace git-lfs: filepathfilter: accepting "lfs-test.iml"
16:57:09.983587 trace git-lfs: filepathfilter: accepting "testimage.jpg"
16:57:09.985367 trace git-lfs: fetch testimage.jpg [4d1a66693a39c26b90f1f9d94a5194b0e21696448811a3d06c58af48135accf4]
16:57:09.985535 trace git-lfs: tq: sending batch of size 1
16:57:09.985793 trace git-lfs: run_command: ssh -- [email protected] git-lfs-authenticate x/lfs-test.git download
16:57:10.498126 trace git-lfs: api: batch 1 files
16:57:10.498765 trace git-lfs: HTTP: POST https://gitlab.x.com/x/lfs-test.git/info/lfs/objects/batch
> POST /x/lfs-test.git/info/lfs/objects/batch HTTP/1.1
> Host: gitlab.x.com
> Accept: application/vnd.git-lfs+json; charset=utf-8
> Authorization: Basic *
> Content-Length: 204
> Content-Type: application/vnd.git-lfs+json; charset=utf-8
> User-Agent: git-lfs/2.9.1 (GitHub; linux amd64; go 1.13.1; git 7b479cc8)
>
{"operation":"download","objects":[{"oid":"4d1a66693a39c26b90f1f9d94a5194b0e21696448811a3d06c58af48135accf4","size":282810}],"transfers":["lfs-standalone-file","basic"],"ref":{"name":"refs/heads/master"}}16:57:10.789562 trace git-lfs: HTTP: 401
< HTTP/1.1 401 Unauthorized
< Content-Length: 26
< Cache-Control: no-cache
< Content-Type: text/plain; charset=utf-8
< Date: Wed, 04 Dec 2019 15:57:10 GMT
< Referrer-Policy: strict-origin-when-cross-origin
< Server: nginx
< Set-Cookie: experimentation_subject_id=ImM2MDI4ZWNiLWU3ZGQtNDZlMy1iMmNhLTVmYTlhNWI4ZTNkMCI%3D--6842ebb28f88d6209a31d6eec652a825e85d6664; domain=.x.com; path=/; expires=Sun, 04 Dec 2039 15:57:10 -0000; secure; HttpOnly
< Www-Authenticate: Basic realm="GitLab"
< X-Content-Type-Options: nosniff
< X-Download-Options: noopen
< X-Frame-Options: DENY
< X-Permitted-Cross-Domain-Policies: none
< X-Request-Id: fOZLAsyMCw9
< X-Runtime: 0.184977
< X-Ua-Compatible: IE=edge
< X-Xss-Protection: 1; mode=block
<
16:57:10.789961 trace git-lfs: api error: Authentication required: Authorization error: https://gitlab.x.com/x/lfs-test.git/info/lfs/objects/batch
Check that you have proper access to the repository
batch response: Authentication required: Authorization error: https://gitlab.x.com/x/lfs-test.git/info/lfs/objects/batch
Check that you have proper access to the repository
error: failed to fetch some objects from 'https://gitlab.x.com/x/lfs-test.git/info/lfs'
I have replaced some parts of the URLs with an x since it is company property.
GitLFS v2.9.1
GitLab 12.5.3
OS: CentOS
A:
The linked bug report didn't help, but maybe someone else will search for this, so I'll leave here what the issue was for me: proxy environment variables. We have a GitLab server running in our intranet, and in my .bashrc I have the http_proxy etc. variables set to be able to connect to the internet. Unsetting these variables and re-trying the checkout with LFS did the trick.
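If you suspect the same root cause, a quick way to test is to clear the proxy variables for a single command (the variable names below are the conventional ones; adjust to whatever your .bashrc exports):
env -u http_proxy -u https_proxy -u HTTP_PROXY -u HTTPS_PROXY git lfs pull
Alternatively, adding the GitLab host to no_proxy keeps the proxy for internet traffic while bypassing it for the intranet server:
export no_proxy="$no_proxy,gitlab.x.com"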
|
How to fix GitLFS Authorization error via SSH?
|
After updating to GitLFS v2.9.1 and GitLab 12.5+ on our tool server, the git lfs commands on our staging and production servers have stopped working out of nowhere.
The strange thing is that it still works just fine on our development machines on Windows 10.
Does anyone have an idea how to fix this problem or how to narrow it down?
Thank you very much!
LIV server.test gitresources # sudo git reset --hard origin/ptr
Downloading x/remoteResources/icons/x/x.png (8.7 KB)
Error downloading object: x/remoteResources/icons/x/x.png (873dd61): Smudge error: Error downloading x/remoteResources/icons/x/x.png (873dd61a4de1b23d4a8a86cff92b718ea76d89548d9df1b62e8652c93bbfb3cb): batch response: Authentication required: Authorization error: https://gitlab.x.com/x/x/app.git/info/lfs/objects/batch
Check that you have proper access to the repository
Errors logged to /root/*/gitresources/.git/lfs/logs/20191203T134636.060635499.log
Use `git lfs logs last` to view the log.
error: external filter git-lfs smudge -- %f failed 2
error: external filter git-lfs smudge -- %f failed
16:57:09.952506 git.c:557 trace: exec: 'git-lfs' 'pull'
16:57:09.952585 run-command.c:347 trace: run_command: 'git-lfs' 'pull'
16:57:09.963827 trace git-lfs: exec: git 'version'
16:57:09.969638 trace git-lfs: exec: git 'config' '-l'
16:57:09.972510 trace git-lfs: exec: git '-c' 'filter.lfs.smudge=cat' '-c' 'filter.lfs.clean=cat' '-c' 'filter.lfs.process=' '-c' 'filter.lfs.required=false' 'rev-parse' 'HEAD' '--symbolic-full-name' 'HEAD'
16:57:09.975174 trace git-lfs: exec: git '-c' 'filter.lfs.smudge=cat' '-c' 'filter.lfs.clean=cat' '-c' 'filter.lfs.process=' '-c' 'filter.lfs.required=false' 'rev-parse' 'HEAD' '--symbolic-full-name' 'HEAD'
16:57:09.980910 trace git-lfs: tq: running as batched queue, batch size of 100
16:57:09.983528 trace git-lfs: filepathfilter: accepting ".gitattributes"
16:57:09.983571 trace git-lfs: filepathfilter: accepting "lfs-test.iml"
16:57:09.983587 trace git-lfs: filepathfilter: accepting "testimage.jpg"
16:57:09.985367 trace git-lfs: fetch testimage.jpg [4d1a66693a39c26b90f1f9d94a5194b0e21696448811a3d06c58af48135accf4]
16:57:09.985535 trace git-lfs: tq: sending batch of size 1
16:57:09.985793 trace git-lfs: run_command: ssh -- [email protected] git-lfs-authenticate x/lfs-test.git download
16:57:10.498126 trace git-lfs: api: batch 1 files
16:57:10.498765 trace git-lfs: HTTP: POST https://gitlab.x.com/x/lfs-test.git/info/lfs/objects/batch
> POST /x/lfs-test.git/info/lfs/objects/batch HTTP/1.1
> Host: gitlab.x.com
> Accept: application/vnd.git-lfs+json; charset=utf-8
> Authorization: Basic *
> Content-Length: 204
> Content-Type: application/vnd.git-lfs+json; charset=utf-8
> User-Agent: git-lfs/2.9.1 (GitHub; linux amd64; go 1.13.1; git 7b479cc8)
>
{"operation":"download","objects":[{"oid":"4d1a66693a39c26b90f1f9d94a5194b0e21696448811a3d06c58af48135accf4","size":282810}],"transfers":["lfs-standalone-file","basic"],"ref":{"name":"refs/heads/master"}}16:57:10.789562 trace git-lfs: HTTP: 401
< HTTP/1.1 401 Unauthorized
< Content-Length: 26
< Cache-Control: no-cache
< Content-Type: text/plain; charset=utf-8
< Date: Wed, 04 Dec 2019 15:57:10 GMT
< Referrer-Policy: strict-origin-when-cross-origin
< Server: nginx
< Set-Cookie: experimentation_subject_id=ImM2MDI4ZWNiLWU3ZGQtNDZlMy1iMmNhLTVmYTlhNWI4ZTNkMCI%3D--6842ebb28f88d6209a31d6eec652a825e85d6664; domain=.x.com; path=/; expires=Sun, 04 Dec 2039 15:57:10 -0000; secure; HttpOnly
< Www-Authenticate: Basic realm="GitLab"
< X-Content-Type-Options: nosniff
< X-Download-Options: noopen
< X-Frame-Options: DENY
< X-Permitted-Cross-Domain-Policies: none
< X-Request-Id: fOZLAsyMCw9
< X-Runtime: 0.184977
< X-Ua-Compatible: IE=edge
< X-Xss-Protection: 1; mode=block
<
16:57:10.789961 trace git-lfs: api error: Authentication required: Authorization error: https://gitlab.x.com/x/lfs-test.git/info/lfs/objects/batch
Check that you have proper access to the repository
batch response: Authentication required: Authorization error: https://gitlab.x.com/x/lfs-test.git/info/lfs/objects/batch
Check that you have proper access to the repository
error: failed to fetch some objects from 'https://gitlab.x.com/x/lfs-test.git/info/lfs'
I have replaced some parts of the URLs with an x since it is company property.
GitLFS v2.9.1
GitLab 12.5.3
OS: CentOS
|
[
"The linked bug report didn't help, but maybe someone else searches for this so I may leave here what was the issue for me: proxy environment variables. We have a gitlab server running in our intranet, and in my .bashrc I have the http_proxy etc. variables set to be able to connect to the internet. Unsetting these variables, and re-trying the checkout with LFS did the trick.\n"
] |
[
0
] |
[] |
[] |
[
"centos",
"git",
"git_lfs",
"gitlab",
"ssh"
] |
stackoverflow_0059177945_centos_git_git_lfs_gitlab_ssh.txt
|
Q:
Can I get the data and formula for this online calculator?
Google has an estimator for AdSense revenue based on region and industry. It uses a set parameter of 50,000 pageviews per month and outputs an estimate of annual revenue.
In the page code, line 382 is:
<div class="results-numbers" ng-repeat="character in calculatorCtrl.getRevenueValue() track by $index">
I presume the method getRevenueValue() does the calculation by a formula like:
Annual revenue = CPM * (pageviews / month) / 1,000 views * 12 months
where CPM = "cost per mille" = the amount paid per thousand views, and I presume there is a lookup table of CPM values for given region and industry.
Is there a way to confirm that the formula is the one I've assumed above, and is there a way to find the lookup table used for CPM?
A:
I found the answer when I took another look at the page about a week later. It turns out that the lookup table is right there in the page code on line 49:
var CALCULATOR_DATA = [
{"category": "Books \u0026 Literature", "multiplier": "2.16", "region": "Asia and Pacific Countries"},
{"category": "Internet \u0026 Telecom", "multiplier": "11.23", "region": "Asia and Pacific Countries"},
{"category": "Business \u0026 Industrial", "multiplier": "7.59", "region": "South America"},
...
{"category": "Arts \u0026 Entertainment", "multiplier": "3.62", "region": "Europe, Middle East and Africa"}];
The entries appear to be listed in a random order. There are 100 of them: 25 categories and four regions, exactly matching the choices offered in the drop-down lists on the page. The values of multiplier do confirm the formula stated in the question with multiplier = CPM.
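Putting the table and the formula together, a minimal JavaScript sketch of what getRevenueValue() presumably computes; the function below is an illustration, not the page's actual code:
function annualRevenue(category, region, pageviewsPerMonth) {
  var entry = CALCULATOR_DATA.find(function (e) {
    return e.category === category && e.region === region;
  });
  // Annual revenue = CPM * (pageviews / 1,000 views) * 12 months
  return Number(entry.multiplier) * (pageviewsPerMonth / 1000) * 12;
}
// annualRevenue("Books & Literature", "Asia and Pacific Countries", 50000)
// => 2.16 * 50 * 12 = 1296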
|
Can I get the data and formula for this online calculator?
|
Google has an estimator for AdSense revenue based on region and industry. It uses a set parameter of 50,000 pageviews per month and outputs an estimate of annual revenue.
In the page code, line 382 is:
<div class="results-numbers" ng-repeat="character in calculatorCtrl.getRevenueValue() track by $index">
I presume the method getRevenueValue() does the calculation by a formula like:
Annual revenue = CPM * (pageviews / month) / 1,000 views * 12 months
where CPM = "cost per mille" = the amount paid per thousand views, and I presume there is a lookup table of CPM values for given region and industry.
Is there a way to confirm that the formula is the one I've assumed above, and is there a way to find the lookup table used for CPM?
|
[
"I found the answer when I took another look at the page about a week later. It turns out that the lookup table is right there in the page code on line 49:\nvar CALCULATOR_DATA = [\n{\"category\": \"Books \\u0026 Literature\", \"multiplier\": \"2.16\", \"region\": \"Asia and Pacific Countries\"}, \n{\"category\": \"Internet \\u0026 Telecom\", \"multiplier\": \"11.23\", \"region\": \"Asia and Pacific Countries\"}, \n{\"category\": \"Business \\u0026 Industrial\", \"multiplier\": \"7.59\", \"region\": \"South America\"}, \n...\n{\"category\": \"Arts \\u0026 Entertainment\", \"multiplier\": \"3.62\", \"region\": \"Europe, Middle East and Africa\"}];\n\nThe entries appear to be listed in a random order. There are 100 of them: 25 categories and four regions, exactly matching the choices offered in the drop-down lists on the page. The values of multiplier do confirm the formula stated in the question with multiplier = CPM.\n"
] |
[
0
] |
[] |
[] |
[
"dynamic",
"html"
] |
stackoverflow_0074591344_dynamic_html.txt
|
Q:
Use an if inside loop to replace the data between two dataframe
I have two files and want to transfer data from one to the other after doing a test.
File1:
ID, X1, X2, X3
2000, 1, 2, 3
2001, 3, 4, 5
1999, 2, 5, 6
2003, 3, 5, 4
File2:
ID, X1, X2, X3,
2000,
2001,
2002,
2003,
Result file will be like:
1999 "There is an error"
File2:
ID, X1, X2, X3
2000, 1, 2, 3
2001, 3, 4, 5
2002, Na, Na, Na
2003, 3, 5, 4
I tried to use a for loop with if; unfortunately, it doesn't work:
for(j in length(1: nrows(file1){
for(i in length(1: nrows(file2){
if( file1&ID[j]>= file2&ID[j+1]){
print(j, ' wrong value')
esle
file2[i,]<- file1[j,]
break
It would be very nice if I could get some ideas or code for how to get something similar to the result file.
I hope I can find the right code to solve this problem.
A:
No need to iterate using loops, you can simply use right_join from dplyr package
df1 %>%
right_join(df2, by="ID") %>%
arrange(ID)
ID X1 X2 X3
1 2000 1 2 3
2 2001 3 4 5
3 2002 NA NA NA
4 2003 3 5 4
Sample data
df1 <- structure(list(ID = c(2000L, 2001L, 1999L, 2003L), X1 = c(1L,
3L, 2L, 3L), X2 = c(2L, 4L, 5L, 5L), X3 = c(3L, 5L, 6L, 4L)), class = "data.frame", row.names = c(NA,
-4L))
df2 <- structure(list(ID = 2000:2003), class = "data.frame", row.names = c(NA,
-4L))
A:
Using data.table
library(data.table)
setDT(df2)[df1, names(df1)[-1] := mget(paste0("i.", names(df1)[-1])), on = .(ID)]
-output
> df2
ID X1 X2 X3
1: 2000 1 2 3
2: 2001 3 4 5
3: 2002 NA NA NA
4: 2003 3 5 4
A:
Here is a slightly different approach which does not give the exact expected output: Note that year 1999 is kept in the dataframe:
coalesce_by_column <- function(df) {
return(coalesce(df[1], df[2]))
}
bind_rows(df1, df2) %>%
group_by(ID) %>%
summarise_all(coalesce_by_column)
ID X1 X2 X3
<int> <int> <int> <int>
1 1999 2 5 6
2 2000 1 2 3
3 2001 3 4 5
4 2002 NA NA NA
5 2003 3 5 4
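For completeness, the same right join in base R without extra packages, using the df1/df2 objects from the first answer:
res <- merge(df2, df1, by = "ID", all.x = TRUE) # keep every row of df2
res
#     ID X1 X2 X3
# 1 2000  1  2  3
# 2 2001  3  4  5
# 3 2002 NA NA NA
# 4 2003  3  5  4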
|
Use an if inside loop to replace the data between two dataframe
|
I have two files and want to transfer data from one to the other after doing a test.
File1:
ID, X1, X2, X3
2000, 1, 2, 3
2001, 3, 4, 5
1999, 2, 5, 6
2003, 3, 5, 4
File2:
ID, X1, X2, X3,
2000,
2001,
2002,
2003,
Result file will be like:
1999 "There is an error"
File2:
ID, X1, X2, X3
2000, 1, 2, 3
2001, 3, 4, 5
2002, Na, Na, Na
2003, 3, 5, 4
I tried to use a for loop with if; unfortunately, it doesn't work:
for(j in length(1: nrows(file1){
for(i in length(1: nrows(file2){
if( file1&ID[j]>= file2&ID[j+1]){
print(j, ' wrong value')
esle
file2[i,]<- file1[j,]
break
It would be very nice if I could get some ideas or code for how to get something similar to the result file.
I hope I can find the right code to solve this problem.
|
[
"No need to iterate using loops, you can simply use right_join from dplyr package\ndf1 %>% \n right_join(df2, by=\"ID\") %>% \n arrange(ID)\n ID X1 X2 X3\n1 2000 1 2 3\n2 2001 3 4 5\n3 2002 NA NA NA\n4 2003 3 5 4\n\nSample data\ndf1 <- structure(list(ID = c(2000L, 2001L, 1999L, 2003L), X1 = c(1L, \n3L, 2L, 3L), X2 = c(2L, 4L, 5L, 5L), X3 = c(3L, 5L, 6L, 4L)), class = \"data.frame\", row.names = c(NA, \n-4L))\n\ndf2 <- structure(list(ID = 2000:2003), class = \"data.frame\", row.names = c(NA, \n-4L))\n\n",
"Using data.table\nlibrary(data.table)\nsetDT(df2)[df1, names(df1)[-1] := mget(paste0(\"i.\", names(df1)[-1])), on = .(ID)]\n\n-output\n> df2\n ID X1 X2 X3\n1: 2000 1 2 3\n2: 2001 3 4 5\n3: 2002 NA NA NA\n4: 2003 3 5 4\n\n",
"Here is a slightly different approach which does not give the exact expected output: Note that year 1999 is kept in the dataframe:\ncoalesce_by_column <- function(df) {\n return(coalesce(df[1], df[2]))\n}\n\nbind_rows(df1, df2) %>% \n group_by(ID) %>%\n summarise_all(coalesce_by_column)\n\n ID X1 X2 X3\n <int> <int> <int> <int>\n1 1999 2 5 6\n2 2000 1 2 3\n3 2001 3 4 5\n4 2002 NA NA NA\n5 2003 3 5 4\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074660360_r.txt
|
Q:
Create Venn Diagram from two DF
I'm trying to create a Venn diagram of two data frames, but have only been able to get incorrect results. An example of the data sets, which share the same structure:
Chemical      ChemID
Oxidopamine   D016627
Melatonin     D016627
I've only received incorrect results from the following:
VennDiagram::venn.diagram(
x = list(Lewy, Park),
category.names = c("ChemID, ChemID"),
filename ="venndiagramm.png",
output=TRUE)
Ideally, I would like to export an image of number of overlapping chemicals between the two sets.
A:
Welcome to SO! As far as I guess your data structure (two dataframes Lewy and Park, each with the column ChemID), try the following:
VennDiagram::venn.diagram(
x = list(Lewy$ChemID, Park$ChemID), # expects vectors, not dataframes
# category.names = c("ChemID, ChemID"), # see if these are rather to construct nice labels
filename ="venndiagramm.png",
output=TRUE)
You may increase the chance of a useful answer by providing minimal working data samples via dput(). Of course, you can use simulated data. Try to explain what exactly did not work.
See also ? venn.diagram
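For example, a minimal self-contained version with simulated data (the ChemID values below are made up; the list names should serve as the category labels):
library(VennDiagram)
Lewy <- data.frame(ChemID = c("D016627", "D008550", "D000077"))
Park <- data.frame(ChemID = c("D016627", "D000077", "D014635"))
venn.diagram(
  x = list(Lewy = Lewy$ChemID, Park = Park$ChemID),
  filename = "venndiagramm.png",
  output = TRUE)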
|
Create Venn Diagram from two DF
|
I'm trying to create a Venn diagram of two data frames, but have only been able to get incorrect results. An example of the data sets, which share the same structure:
Chemical      ChemID
Oxidopamine   D016627
Melatonin     D016627
I've only received incorrect results from the following:
VennDiagram::venn.diagram(
x = list(Lewy, Park),
category.names = c("ChemID, ChemID"),
filename ="venndiagramm.png",
output=TRUE)
Ideally, I would like to export an image of number of overlapping chemicals between the two sets.
|
[
"Welcome to SO! As far as I guess your data structure (two dataframes Lewy and Park, each with the column ChemID), try the following:\nVennDiagram::venn.diagram(\n x = list(Lewy$ChemID, Park$ChemID), # expects vectors, not dataframes\n # category.names = c(\"ChemID, ChemID\"), # see if these are rather to construct nice labels\n filename =\"venndiagramm.png\",\n output=TRUE) \n\nYou may increase the chance of a useful answer by providing minimal working data samples by dput(). Of course you can use simulated data. Try to explain what exactly did not work.\nSee also ? venn.diagram\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"r",
"venn_diagram"
] |
stackoverflow_0074660563_dataframe_r_venn_diagram.txt
|
Q:
I am wanting to return to a previous function in PowerShell, to rectify a user error if a condition is met?
The current script is as follows;
$HN = hostname
$DN = Get-ADComputer -identity $HN -Properties DistinguishedName | select-object -ExpandProperty DistinguishedName
#*
$OU = 'OU=Workstations,DC=$domain,DC=$domain,DC=$domain'
[array]$A = Get-ADOrganizationalUnit -SearchBase $OU -SearchScope OneLevel -Filter * | Select-Object -ExpandProperty Name
[array]$DropDownArray = $A | Sort-Object
function Return-DropDown {
if ($DropDown.SelectedItem -eq $B){
$DropDown.SelectedItem = $DropDown.Items[0]
$Form.Close()
}
else{
$Form.Close()
}
}
function SelectGroup{
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
$Form = New-Object System.Windows.Forms.Form
$Form.width = 600
$Form.height = 200
$Form.Text = "DropDown"
$DropDown = new-object System.Windows.Forms.ComboBox
$DropDown.Location = new-object System.Drawing.Size(140,10)
$DropDown.Size = new-object System.Drawing.Size(300,80)
ForEach ($Item in $DropDownArray) {
[void] $DropDown.Items.Add($Item)
}
$Form.Controls.Add($DropDown)
$DropDownLabel = new-object System.Windows.Forms.Label
$DropDownLabel.Location = new-object System.Drawing.Size(10,10)
$DropDownLabel.size = new-object System.Drawing.Size(100,40)
$DropDownLabel.Text = "Select Group:"
$DropDown.Font = New-Object System.Drawing.Font("Calibri",15,[System.Drawing.FontStyle]::Bold)
$Button = new-object System.Windows.Forms.Button
$Button.Location = new-object System.Drawing.Size(140,50)
$Button.Size = new-object System.Drawing.Size(150,50)
$Button.Text = "Select an Item"
$Button.Font = New-Object System.Drawing.Font("Calibri",11,[System.Drawing.FontStyle]::Bold)
$Button.Add_Click({Return-DropDown})
$form.Controls.Add($Button)
$form.ControlBox = $false
$Button = new-object System.Windows.Forms.Button
$Button.Location = new-object System.Drawing.Size(290,50)
$Button.Size = new-object System.Drawing.Size(150,50)
$Button.Text = "Finish"
$Button.Font = New-Object System.Drawing.Font("Calibri",11,[System.Drawing.FontStyle]::Bold)
$Button.Add_Click({Move-ADObject -Identity "$DN" -TargetPath "$OU" | Return-DropDown})
$form.Controls.Add($Button)
$form.ControlBox = $false
$Form.Add_Shown({$Form.Activate()})
[void] $Form.ShowDialog()
$B = $dropdown.SelectedItem
return $B
}
$B = SelectGroup
I would like to develop this tool further and add an additional option to return to the beginning of the previous function (the point marked #* above);
$Button = new-object System.Windows.Forms.Button
$Button.Location = new-object System.Drawing.Size(290,50)
$Button.Size = new-object System.Drawing.Size(150,50)
$Button.Text = "Back"
$Button.Font = New-Object System.Drawing.Font("Calibri",11,[System.Drawing.FontStyle]::Bold)
$Button.Add_Click({Return to #* })
$form.Controls.Add($Button)
$form.ControlBox = $false
Not sure how to achieve this; hoping to find help on here.
I have looked at loops and breaks, but nothing seems to fit or be adaptable to achieve this.
A:
If you're looking for simple repetition of the form function, you could do something like this (unless your tool hides the PowerShell window).
Do {
# Move these lines from #*
$OU = 'OU=Workstations,DC=$domain,DC=$domain,DC=$domain'
    [array]$A = Get-ADOrganizationalUnit -SearchBase $OU -SearchScope OneLevel -Filter * | Select-Object -ExpandProperty Name
[array]$DropDownArray = $A | Sort-Object
$B = SelectGroup
#{... Do Work on $B, if desired ...}
$Stop = Read-Host -Prompt 'Do you want to stop?'
} Until ($Stop -match '(Y|y|Yes|YES|yes)')
Otherwise you'll need to alter your "return-dropdown" function to not close your form and implement your "back" button another way.
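As a side note, -match in PowerShell is case-insensitive by default, so a simpler pattern such as $Stop -match '^y' already covers all of the listed variants.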
|
I am wanting to return to a previous function in PowerShell, to rectify a user error if a condition is met?
|
The current script is as follows:
$HN = hostname
$DN = Get-ADComputer -identity $HN -Properties DistinguishedName | select-object -ExpandProperty DistinguishedName
#*
$OU = 'OU=Workstations,DC=$domain,DC=$domain,DC=$domain'
[array]$A = Get-ADOrganizationalUnit -SearchBase $OU -SearchScope OneLevel -Filter * | Select-Object -ExpandProperty Name
[array]$DropDownArray = $A | Sort-Object
function Return-DropDown {
if ($DropDown.SelectedItem -eq $B){
$DropDown.SelectedItem = $DropDown.Items[0]
$Form.Close()
}
else{
$Form.Close()
}
}
function SelectGroup{
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
$Form = New-Object System.Windows.Forms.Form
$Form.width = 600
$Form.height = 200
$Form.Text = "DropDown"
$DropDown = new-object System.Windows.Forms.ComboBox
$DropDown.Location = new-object System.Drawing.Size(140,10)
$DropDown.Size = new-object System.Drawing.Size(300,80)
ForEach ($Item in $DropDownArray) {
[void] $DropDown.Items.Add($Item)
}
$Form.Controls.Add($DropDown)
$DropDownLabel = new-object System.Windows.Forms.Label
$DropDownLabel.Location = new-object System.Drawing.Size(10,10)
$DropDownLabel.size = new-object System.Drawing.Size(100,40)
$DropDownLabel.Text = "Select Group:"
$DropDown.Font = New-Object System.Drawing.Font("Calibri",15,[System.Drawing.FontStyle]::Bold)
$Button = new-object System.Windows.Forms.Button
$Button.Location = new-object System.Drawing.Size(140,50)
$Button.Size = new-object System.Drawing.Size(150,50)
$Button.Text = "Select an Item"
$Button.Font = New-Object System.Drawing.Font("Calibri",11,[System.Drawing.FontStyle]::Bold)
$Button.Add_Click({Return-DropDown})
$form.Controls.Add($Button)
$form.ControlBox = $false
$Button = new-object System.Windows.Forms.Button
$Button.Location = new-object System.Drawing.Size(290,50)
$Button.Size = new-object System.Drawing.Size(150,50)
$Button.Text = "Finish"
$Button.Font = New-Object System.Drawing.Font("Calibri",11,[System.Drawing.FontStyle]::Bold)
$Button.Add_Click({Move-ADObject -Identity "$DN" -TargetPath "$OU" | Return-DropDown})
$form.Controls.Add($Button)
$form.ControlBox = $false
$Form.Add_Shown({$Form.Activate()})
[void] $Form.ShowDialog()
$B = $dropdown.SelectedItem
return $B
}
$B = SelectGroup
I would like to develop this tool further and add an additional option to return to the beginning of the previous function (the point marked #* above);
$Button = new-object System.Windows.Forms.Button
$Button.Location = new-object System.Drawing.Size(290,50)
$Button.Size = new-object System.Drawing.Size(150,50)
$Button.Text = "Back"
$Button.Font = New-Object System.Drawing.Font("Calibri",11,[System.Drawing.FontStyle]::Bold)
$Button.Add_Click({Return to #* })
$form.Controls.Add($Button)
$form.ControlBox = $false
Not sure how to achieve this; hoping to find help on here.
I have looked at loops and breaks, but nothing seems to fit or be adaptable to achieve this.
A:
If you're looking for simple repetition of the form function, you could do something like this (unless your tool hides the PowerShell window).
Do {
    # Move these lines from #*
    $OU = "OU=Workstations,DC=$domain,DC=$domain,DC=$domain"
    [array]$A = Get-ADOrganizationalUnit -SearchBase $OU -SearchScope OneLevel -Filter * | Select-Object -ExpandProperty Name
    [array]$DropDownArray = $A | Sort-Object

    $B = SelectGroup
    #{... Do Work on $B, if desired ...}
    $Stop = Read-Host -Prompt 'Do you want to stop?'
} Until ($Stop -match '(Y|y|Yes|YES|yes)')
Otherwise you'll need to alter your Return-DropDown function to not close your form and implement your "back" button another way.
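As a further sketch of that second route (my own illustration, not from the answer above): have the Back button set a script-scoped flag before closing the form, and loop in the caller while the flag is set. $script:GoBack is a name introduced here purely for illustration:
$Button.Add_Click({ $script:GoBack = $true; $Form.Close() })  # "Back" button handler

# Caller: rebuild and reshow the form until the user picks something
Do {
    $script:GoBack = $false
    $B = SelectGroup     # reshow the form whenever Back was clicked
} While ($script:GoBack)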
Q:
PostgreSQL not using covering index as expected
Given following schema with PostgreSQL 12.3 server
create table records
(
id serial primary key,
number varchar(20) not null,
owner_id integer not null,
state varchar(16) default 'open'::character varying not null,
created_at date,
updated_at date,
finished_at date
)
I am performing a query that paginates records by stored state and some timestamp attributes.
EXPLAIN (ANALYSE, BUFFERS)
SELECT "records".*
FROM "records"
WHERE "records"."trashed_at" IS NULL
AND "records"."owner_id" = 11
AND "records"."state" IN ('fresh', 'processing')
ORDER BY "records"."created_at" DESC, "records"."number" DESC
LIMIT 20 OFFSET 0;
=>
Limit (cost=1241.09..1447.85 rows=20 width=1326) (actual time=1266.202..26013.831 rows=6 loops=1)
Output: ....
Buffers: shared hit=84977 read=132675 dirtied=4
-> Index Scan using index_records_on_owner_id_and_created_at_and_number on public.records (cost=0.56..254588.42 rows=24627 width=1326) (actual time=116.749..26013.765 rows=126 loops=1)
Output: ......
Index Cond: (records.owner_id = 14759)
Filter: ((records.trashed_at IS NULL) AND ((records.state)::text = ANY ('{fresh,processing}'::text[])))
Rows Removed by Filter: 228669
Buffers: shared hit=84977 read=132675 dirtied=4
Planning Time: 0.682 ms
Execution Time: 26013.889 ms
(11 rows)
Execution time is slow due to buffers read. When they are all in cache, time is reduced to ~300ms.
From EXPLAIN we can see that index index_records_on_owner_id_and_created_at_and_number was used. It is defined like
create index index_records_on_owner_id_and_created_at_and_number
on records (owner_id asc, created_at desc, number desc);
Notice that the planner has really bad estimates (yes, VACUUM ANALYZE was performed before EXPLAIN).
I expected that creating the covering index index_records_optimize_sort_on_created_at_and_number_in below would help and would result in an Index Scan without the filter part. However, the planner sticks to the old plan and does not benefit from the new index.
create index index_records_optimize_sort_on_created_at_and_number_in
on records (owner_id asc, created_at desc, number desc)
include (state)
where (trashed_at IS NULL);
I believed this was a perfect candidate for a covering index, as all filter/sort attributes are included inside the index.
I can create another index which helps this particular query. But there is a caveat: from the UI, I allow selecting different states. So this index suits a single scenario, but there are multiple state combinations.
create index index_records_optimize_sort_on_created_at_with_where
on records (owner_id asc, created_at desc, number desc)
where (trashed_at IS NULL AND records.state IN ('fresh', 'processing'));
Am I missing something in the docs? Can a single index be modified so the planner will use it? I have spent many hours in the docs / Cybertec blog (thanks for it!), but cannot make any progress.
A:
Index only scans are not all that clever. You are, by the use of *, selecting the trashed_at column, but that column is not available in the index. Now, it could synthesize the value to return for trashed_at based on the WHERE clause restriction, but it is not clever enough to do that. So instead, it is just not willing to use an index-only scan, which defeats the purpose of the "covering" index (that is, it is not really a covering index). Either put trashed_at into the indexed columns list, or enumerate all the columns you need to return and don't put trashed_at into that list.
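A minimal sketch of both suggestions (the index name and the column list are mine, based on the question's schema):
-- Option 1: also store trashed_at in the index
create index records_covering_idx
    on records (owner_id asc, created_at desc, number desc)
    include (state, trashed_at)
    where (trashed_at IS NULL);

-- Option 2: enumerate indexed columns instead of SELECT *, so the
-- existing INCLUDE (state) index can satisfy an index-only scan
SELECT owner_id, created_at, number, state
FROM records
WHERE trashed_at IS NULL AND owner_id = 11
  AND state IN ('fresh', 'processing')
ORDER BY created_at DESC, number DESC
LIMIT 20;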
Q:
Getting syntax error in GLSL array constructor not supported
with this code
var mat_add = gpu.createKernel(function(A, B) {
var sum = [];
for (var i=0; i<3; i++) {
sum.push(A[this.thread.y][i] + B[i][this.thread.x]);
}
return sum;
}).dimensions([2, 2]);
I am getting this error:
An error occurred compiling the shaders: ERROR: 0:141: '' : array size must be greater than zero
ERROR: 0:141: '[]' : array constructor supported in GLSL ES 3.00 and above only
ERROR: 0:141: 'constructor' : array constructor needs one argument per array element
ERROR: 0:141: '=' : Invalid operation for arrays
ERROR: 0:141: '=' : cannot convert from 'const array[1] of float' to 'highp float'
ERROR: 0:145: 'sum' : undeclared identifier
ERROR: 0:145: '' : methods supported in GLSL ES 3.00 and above only
ERROR: 0:145: 'push' : invalid method
This actually works:
var mat_mult = gpu.createKernel(function(A, B) {
var sum = 0;
for (var i=0; i<3; i++) {
sum += A[this.thread.y][i] * B[i][this.thread.x];
}
return sum;
}).dimensions([2, 1]);
Does anyone know how to fix this syntax error?
Thanks
A:
I created Python code to convert lists into function getters: https://github.com/MhadhbiXissam/GLSL_LIST
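As a more direct sketch of the underlying fix (an assumption about intent, based on how gpu.js kernels compile to GLSL, not part of the linked repo): a kernel must return a single float per thread, so matrix addition should compute one output element per (x, y) thread instead of pushing into an array:
var mat_add = gpu.createKernel(function(A, B) {
    // each thread returns exactly one number, addressed by its thread coordinates
    return A[this.thread.y][this.thread.x] + B[this.thread.y][this.thread.x];
}).dimensions([2, 2]);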
Q:
Google Sheets Pivot data with multiple columns
I'm trying to pivot my set of data without using a pivot table
(https://i.stack.imgur.com/H0BLJ.png)
(https://i.stack.imgur.com/3HDtB.png)
A:
Added Solution to your sheet
formula in cell K13:
={ArrayFormula(IF(COUNTIFS({QUERY({{C2,B2};SORT({C3:C7,B3:B7},1,1,2,1)},"Select Col1")},{QUERY({{C2,B2};SORT({C3:C7,B3:B7},1,1,2,1)},"Select Col1")},ROW(A2:A7),"<="&ROW(A2:A7))>1,,{QUERY({{C2,B2};SORT({C3:C7,B3:B7},1,1,2,1)},"Select Col1")})),QUERY(SORT({{C2:C7},{B2:B7},{"Q4 2022";BYROW(filter(D3:I7,REGEXMATCH(to_TEXT(MONTH(D2:I2)),"^10$|^11$|^12$")),LAMBDA(ax,sum(ax)))},{"Q1 2023";BYROW(filter(D3:I7,REGEXMATCH(to_TEXT(MONTH(D2:I2)),"^1$|^2$|^3$")),LAMBDA(ax,sum(ax)))}},1,1,2,1),"Select Col2, Col3, Col4")}
Q:
Variational Autoencoder gives same output image for every input mnist image when using KL divergence
When not using KL divergence term, the VAE reconstructs mnist images almost perfectly but fails to generate new ones properly when provided with random noise.
When using KL divergence term, the VAE gives the same weird output both when reconstructing and generating images.
Here's the pytorch code for the loss function:
def loss_function(recon_x, x, mu, logvar):
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), size_average=True)
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (BCE + KLD)
recon_x is the reconstructed image, x is the original_image, mu is the mean vector while logvar is the vector containing the log of variance.
What is going wrong here? Thanks in advance :)
A:
A possible reason is the numerical unbalance between the two losses, with your BCE loss computed as an average over the batch (c.f. size_average=True) while the KLD one is summed.
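A minimal sketch of a rebalanced loss along these lines (an illustration, not the asker's final code): give both terms the same per-batch scale and expose a weight for the KLD term:
def loss_function(recon_x, x, mu, logvar, beta=1.0):
    # reconstruction error summed over pixels, then averaged over the batch
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum') / x.size(0)
    # KL divergence averaged over the batch as well, so the two scales match
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return BCE + beta * KLD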
A:
Multiplying KLD with 0.0001 did it. The generated images are a little distorted, but similarity issue is resolved.
A:
Yes, try out with different weight factor for the KLD loss term. Weighing down the KLD loss term resolves the same reconstruction output issue in the CelebA dataset (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html).
A:
There are many possible reasons for that. As benjaminplanche stated, you need to use .mean instead of .sum reduction. Also, the KLD term weight could be different for different architectures and data sets. So, try different weights and check the reconstruction loss and latent space to decide. There is a trade-off between reconstruction loss (output quality) and the KLD term, which forces the model to shape a Gaussian-like latent space.
To evaluate different aspects of VAEs I trained a Vanilla autoencoder and VAE with different KLD term weights.
Note that I used the MNIST hand-written digits dataset to train networks with input size 784 = 28*28 and a 30-dimensional latent space. Although for data samples in the range [0, 1] we normally use a Sigmoid activation function, I used a Tanh for experimental reasons.
Vanilla Autoencoder:
Autoencoder(
  (encoder): Encoder(
    (nn): Sequential(
      (0): Linear(in_features=784, out_features=30, bias=True)
    )
  )
  (decoder): Decoder(
    (nn): Sequential(
      (0): Linear(in_features=30, out_features=784, bias=True)
      (1): Tanh()
    )
  )
)
Afterward, I implemented the VAE model as shown in the following code blocks. I trained this model with different KLD weights from the set {0.5, 1, 5}.
class VAE(nn.Module):

    def __init__(self, dim_latent_representation=2):
        super(VAE, self).__init__()

        class Encoder(nn.Module):
            def __init__(self, output_size=2):
                super(Encoder, self).__init__()
                # needs your implementation
                self.nn = nn.Sequential(
                    nn.Linear(28 * 28, output_size),
                )

            def forward(self, x):
                # needs your implementation
                return self.nn(x)

        class Decoder(nn.Module):
            def __init__(self, input_size=2):
                super(Decoder, self).__init__()
                # needs your implementation
                self.nn = nn.Sequential(
                    nn.Linear(input_size, 28 * 28),
                    nn.Tanh(),
                )

            def forward(self, z):
                # needs your implementation
                return self.nn(z)

        self.dim_latent_representation = dim_latent_representation
        self.encoder = Encoder(output_size=dim_latent_representation)
        self.mu_layer = nn.Linear(self.dim_latent_representation, self.dim_latent_representation)
        self.logvar_layer = nn.Linear(self.dim_latent_representation, self.dim_latent_representation)
        self.decoder = Decoder(input_size=dim_latent_representation)

    # Implement this function for the VAE model
    def reparameterise(self, mu, logvar):
        if self.training:
            std = logvar.mul(0.5).exp_()
            eps = std.data.new(std.size()).normal_()
            return eps.mul(std).add_(mu)
        else:
            return mu

    def forward(self, x):
        # This function should be modified for the DAE and VAE
        x = self.encoder(x)
        mu, logvar = self.mu_layer(x), self.logvar_layer(x)
        z = self.reparameterise(mu, logvar)
        return self.decoder(z), mu, logvar
Vanilla Autoencoder
Training loss: 0.4089
Validation loss (reconstruction error): 0.4171
VAE Loss = MSE + 0.5 * KLD
Training loss: 0.6420
Validation loss (reconstruction error): 0.6060
VAE Loss = MSE + 1 * KLD
Training loss: 0.6821
Validation loss (reconstruction error): 0.6550
VAE Loss = MSE + 5 * KLD
Training loss: 0.7122
Validation loss (reconstruction error): 0.7154
Here you can see output results from the different models. I also visualized the 30-dimensional latent space in 2D using a sklearn.manifold.TSNE transformation.
We observe a low loss value for the vanilla autoencoder with 30D bottleneck size which results in high-quality reconstructed images. Although loss values increased in VAEs, the VAE arranged the latent space such that gaps between latent representations for different classes decreased. It means we can get better manipulated (mixed latents) output. Since VAE follows an isotropic multivariate normal distribution at the latent space, we can generate new unseen images by taking samples from the latent space with higher quality compared to the Vanilla autoencoder. However, the reconstruction quality was reduced (loss values increased) since the loss function is a weighted combination of MSE and KLD terms to be optimized where the KLD term forces the latent space to resemble a Gaussian distribution. As we increased the KLD weight, we achieved a more compact latent space closer to the prior distribution by sacrificing the reconstruction quality.
Q:
How to use AWS Sagemaker with newer version of Huggingface Estimator?
When trying to use Huggingface estimator on sagemaker, Run training on Amazon SageMaker e.g.
# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.17',
    pytorch_version='1.10',
    py_version='py38',
    hyperparameters=hyperparameters
)
When I tried to increase the version to transformers_version='4.24', it throws an error saying the maximum supported version is 4.17.
How to use AWS Sagemaker with newer version of Huggingface Estimator?
There's a note on using a newer version for inference at https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/9, but it looks like the way to use it for training with the HuggingFace estimator is kind of complicated (https://discuss.huggingface.co/t/huggingface-pytorch-versions-on-sagemaker/26315/5?u=alvations), and it's not confirmed that the complicated steps work.
A:
You can use the PyTorch estimator and, in your source directory, place a requirements.txt with Transformers added to it. This will ensure two things:
You can use a higher version of PyTorch (1.12, current) compared to the 1.10.2 in the HuggingFace estimator.
A new version of the HuggingFace Transformers library gets installed.
To achieve this you need to structure your source directory like this
scripts/
    train.py
    requirements.txt
and pass the source_dir attribute to the PyTorch estimator:
pt_estimator = PyTorch(
    entry_point="train.py",
    source_dir="scripts",
    role=sagemaker.get_execution_role(),
    framework_version="1.12",  # assumption: pick versions matching your requirements
    py_version="py38",
    instance_type="ml.p3.2xlarge", instance_count=1,
)
A:
@alvas,
Amazon SageMaker is a managed service, which means AWS builds and operates the tooling for you, saving you time. In your case, the tooling of interest is an integration of a new version of the HuggingFace Transformers library with SageMaker that has to be developed, tested and deployed to production. So, this integration is naturally expected to be one or a few versions behind the upstream library. But as a benefit, you always get a version of Transformers that is proven to be stable and compatible with SageMaker.
In your case, you want to try the latest version of Transformers in SageMaker, potentially sacrificing the stability and compatibility (v4.24 was released just less than a month ago). As you correctly mentioned, this workflow can be "kind of complicated" and "not confirmed that the complicated steps can work". @Arun Lokanatha suggested the easiest way to try the new version. Indeed, Transformers work with regular PyTorch estimator, but instead of high-level HuggingFace estimator API you now need to use the lower-level PyTorch estimator API. The above-mentioned requirements.txt will look like this:
transformers==4.24.0
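If your train.py imports more than Transformers, the same file can carry those pins too (hypothetical extras, not from the original answer; adjust to your script's imports):
# scripts/requirements.txt — hypothetical fuller example
transformers==4.24.0
datasets  # only if train.py uses it
evaluate  # only if train.py uses it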
As a drawback, you need to do a little bit more work yourself, e.g. to figure out the minimal version of the PyTorch/CUDA libraries required, etc. And you're responsible for testing, securing, and optimizing the integration as appropriate for production-grade use, potentially losing some benefits of utilising SageMaker at its full capability.
If you finally decide to use HuggingFace high-level estimator in production after my explanation, I recommend to take at least these actions:
See the current list of supported versions in the latest version of the SageMaker Python SDK directly in its source code (as of today it's v4.17.0).
Create or monitor an existing issue asking for a new version support in SageMaker Python SDK, e.g. #3456 for support for Transformers v4.24.0.
I hope this answer is helpful.
Ivan
A:
You can achieve this by
Step 1: Create a custom ECR image with the required HF version (https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi.html)
Step 2: Develop your train.py
Step 3: Pass train.py and the new ECR image URI to sagemaker.estimator.
(https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html)
A:
To use a newer version of the HuggingFace Estimator on Amazon SageMaker, you can use the transformers_version parameter in the HuggingFace() constructor to specify the version of the HuggingFace library that you want to use. However, the maximum supported version may be limited by the version of the PyTorch library that is installed on the SageMaker instances that you are using for training.
For example, if you try to use a newer version of the HuggingFace library than the one installed on the SageMaker instances, you may see an error similar to the following:
ImportError: Unable to import 'transformers'
To use a newer version of the HuggingFace library on SageMaker, you can do the following:
Use the pytorch_version and py_version parameters in the HuggingFace() constructor to specify the version of PyTorch that you want to use. This will ensure that the correct version of PyTorch is installed on the SageMaker instances that you are using for training.
Use the requirements.txt file in the source_dir parameter to specify any additional dependencies that are required by the newer version of the HuggingFace library. This will ensure that these dependencies are installed on the SageMaker instances along with the correct version of PyTorch.
Here is an example of how you can use these parameters to specify the version of the HuggingFace library and its dependencies on SageMaker:
# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.24',
    pytorch_version='1
Q:
AVL tree rotation occurs even when the tree is balanced
#include <stdio.h>
#include <stdlib.h>

typedef struct node *treenode;
struct node
{
    int data;
    int height;
    treenode left;
    treenode right;
};

int height(treenode t)
{
    if(t == NULL)
        return -1;
    else
        return t->height;
}

int max(int a, int b)
{
    return (a > b)? a : b;
}

treenode singlerotatewithleft(treenode t)
{
    treenode p;
    p = t->left;
    t->left = p->right;
    p->right = t;
    t->height = max(height(t->left), height(t->right));
    p->height = max(height(p->left), t->height);
    return p;
}

treenode singlerotatewithright(treenode t)
{
    treenode p;
    p = t->right;
    t->right = p->left;
    p->left = t;
    t->height = max(height(t->left), height(t->right));
    p->height = max(height(p->left), t->height);
    return p;
}

treenode doublerotatewithleft(treenode t)
{
    t->left = singlerotatewithright(t->left);
    return singlerotatewithleft(t);
}

treenode doublerotatewithright(treenode t)
{
    t->right = singlerotatewithleft(t->right);
    return singlerotatewithright(t);
}

treenode insert(treenode t, int x)
{
    if(t==NULL)
    {
        t = (struct node*)malloc(sizeof(struct node*));
        if(t == NULL)
        {
            printf("Out of space");
        }
        else
        {
            t->data = x;
            t->height = 0;
            t->left = t->right = NULL;
        }
    }
    if(x < t->data)
    {
        t->left = insert(t->left,x);
        if(height(t->left) - height(t->right) == 2)
        {
            if(x < t->left->data)
                t = singlerotatewithleft(t);
            else
                t = doublerotatewithleft(t);
        }
    }
    else if(x > t->data)
    {
        t->right = insert(t->right,x);
        if(height(t->right) - height(t->left) == 2)
        {
            if(x > t->right->data)
                t = singlerotatewithright(t);
            else
                t = doublerotatewithright(t);
        }
    }
    t->height = max(height(t->left), height(t->right)) + 1;
    return t;
}

void preorder(treenode t)
{
    if(t != NULL)
    {
        printf("%d ",t->data);
        preorder(t->left);
        preorder(t->right);
    }
}

void main()
{
    int choice;
    treenode root;
    root = NULL;
    do
    {
        printf("\n1.Insert\n2.Preorder traversal\n3.Exit");
        printf("\nEnter choice: ");
        scanf("%d",&choice);
        int x;
        switch(choice)
        {
            case 1:
                printf("\nEnter element to insert: ");
                scanf("%d",&x);
                root = insert(root,x);
                break;
            case 2:
                printf("\nThe preorder traversal is: \n");
                preorder(root);
                break;
        }
    }while(choice != 3);
}
Could you find the error in the code? The output goes wrong when inserting 4.
This code is supposed to insert integer elements into an AVL tree. When inserting 3 after 1 and 2, the preorder traversal shows the correct single-rotate-with-right operation performed. But after inserting 4, although the tree is balanced, the preorder traversal shows that another single-rotate-with-right operation was performed. How do I fix it?
A:
At least this problem:
Wrong allocation
The code allocated the size of a pointer, not of a struct.
// v why * ?
t = (struct node*)malloc(sizeof(struct node*));
Instead allocate to the referenced object. Cast not needed.
t = malloc(sizeof t[0]);
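A further sketch (my own observation, not part of the answer above): the rotation helpers also drop the + 1 when recomputing heights, which leaves stale heights behind and can trigger exactly this kind of spurious rotation on a balanced tree. For example:
treenode singlerotatewithright(treenode t)
{
    treenode p = t->right;
    t->right = p->left;
    p->left = t;
    /* a node's height is one more than its tallest child */
    t->height = max(height(t->left), height(t->right)) + 1;
    p->height = max(height(p->right), t->height) + 1;
    return p;
}
singlerotatewithleft needs the same + 1 on both of its height updates.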
Q:
Missing value where TRUE/FALSE needed when evaluating a while loop in R
So I am attempting to implement an algorithm that uses Newton's method for finding the root of some given function, and I am having what I imagine is decent success except at one small but very frustrating point.
When I evaluate my function for small initial values of x_0 (i.e. less than or close to 1, in [-1.1, 1.1]), my function seems to work fine, but when I attempt to use it for larger values (even something like 1.2 or higher) I get this error message:
Error in while (abs(x_1 - x_0) > 10^(-1 * k) && iter < max_iter) { :
missing value where TRUE/FALSE needed
Here is my code for this problem:
g_x <- function(x){
return(x/(1+x^2)^(1/2))
}
dg_x <- function(x){
# derivative of the given function of g
return(1/(x^2+1)^(3/2))
}
newton <- function(x_0,k){
# x_0 is the initial guess from which you want to begin zeroing in on the root
# k is the parameter of the tolerance = 10^(-k)
if(between(g_x(x_0),-10^(-1*k)/2,10^(-1*k)/2)){
return(c(x_0,0))
}
iter <- 1
x_1 <- x_0 - (g_x(x_0)/dg_x(x_0))
while(abs(x_1-x_0) > 10^(-1*k)){
x_0 <- x_1
x_1 <- x_0 - (g_x(x_0)/dg_x(x_0))
iter <- iter + 1
}
return(c((x_1+x_0)/2, iter))
}
I have no idea what I need to do to fix this. I have been going through this line by line in the console,
and the computer is capable of evaluating the expression abs(x_1-x_0) > 10^(-k) as TRUE for the first couple of iterations. I am not sure why it becomes a problem at all.
It is not like the computer reaches some super small/large value of x_1 that it generalizes as zero or infinity, because this problem presents itself on the very first iteration of the algorithm.
In the problem we are given that x_0 = 2 and g(x) = x/sqrt(x^2+1), so dg(x) = 1/(x^2+1)^(3/2);
therefore the next point to calculate is x_1 = -8,
so the computer is evaluating abs(-8-2) > 10^-k. Why does this break?
A:
Before doing a non-linear optimization problem, I will typically look at the function that I am seeking to solve with a graph. Since your function only depends on one variable it is straightforward to do this.
xs = seq(-10,10,length=10000)
plot(xs,g_x(xs))
yields
So the 'root' is at 0 (the function has a value of 0 at 0), but the derivative is "very spiky", which may mean that the Newton approach will repeatedly overshoot the solution.
I modified your function to put in an upper and lower bound for x, and it works provided that you start reasonably close to the solution:
newton <- function(x_0,k,lower=-10,upper=10){
# x_0 is the initial guess from which you want to begin zeroing in on the root
# k is the parameter of the tolerance = 10^(-k)
if(between(g_x(x_0),-10^(-1*k)/2,10^(-1*k)/2)){
return(c(x_0,0))
}
iter <- 1
x_1 <- x_0 - (g_x(x_0)/dg_x(x_0))
while( (abs(x_1-x_0) > 10^(-1*k)) & between(x_0,lower,upper)){
x_0 <- x_1
x_1 <- x_0 - (g_x(x_0)/dg_x(x_0))
iter <- iter + 1
}
return(c((x_1+x_0)/2, iter))
}
#x_0=10
#k=10
newton(.3,10)
newton(5,10)
Starting at 0.3 works (it solves for -3.812802e-15), and starting at 5 does not (it exits at 976500). Using a coarse grid search to choose a good starting value for your function would be a reasonable approach.
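A minimal sketch of that grid-search idea (variable names are mine): evaluate g_x on a coarse grid and start Newton from the grid point closest to a root:
xs <- seq(-2, 2, length.out = 41)
x_start <- xs[which.min(abs(g_x(xs)))]  # grid point where |g| is smallest
newton(x_start, 10)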
This shows what happens if you start at say 2:
x_0=2
iter <- 1
x_1 <- x_0 - (g_x(x_0)/dg_x(x_0))
for (inter in 1:10){
x_0 <- x_1
x_1 <- x_0 - (g_x(x_0)/dg_x(x_0))
iter <- iter + 1
print(x_1,iter)
}
returns:
[1] 512
[1] -1.34e+08
[1] 2.418e+24
[1] -1.4135e+73
[1] 2.82401e+219
[1] NaN
[1] NaN
[1] NaN
[1] NaN
[1] NaN
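Another minimal guard, which also explains the original error message (my own note; max_iter is taken from the asker's error text): once x_1 overflows to NaN, abs(x_1 - x_0) > 10^(-1*k) evaluates to NA, and while(NA) raises "missing value where TRUE/FALSE needed". Checking is.finite first short-circuits that:
while (is.finite(x_1) && abs(x_1 - x_0) > 10^(-1 * k) && iter < max_iter) {
    x_0 <- x_1
    x_1 <- x_0 - (g_x(x_0) / dg_x(x_0))
    iter <- iter + 1
}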
Q:
Add backslash in HTML code with jQuery or JS
I'm trying to replace this code:
<a href="test.html"></a>
To this one:
<a href=\"test.html\"></a>
This is what I've tried so far:
var source = $(this).val().replaceAll('"', '\"');
A:
var source = $(this).val().replace(/"/g, '\\"');
This will replace all instances of the double quote character with a backslash followed by a double quote.
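For illustration, with a hypothetical input string (the original replaceAll attempt fails because '\"' in a string literal is just a plain double quote, so nothing is inserted):
var source = '<a href="test.html"></a>'.replace(/"/g, '\\"');
console.log(source); // <a href=\"test.html\"></a>
In modern browsers, replaceAll('"', '\\"') works the same way.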
Q:
Not authorized to perform sts:AssumeRoleWithWebIdentity- 403
I have been trying to run an external-dns pod using the guide provided by the kubernetes-sigs group. I have followed every step of the guide, and I am getting the error below.
time="2021-02-27T13:27:20Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 87a3ca86-ceb0-47be-8f90-25d0c2de9f48"
I had created the AWS IAM policy using Terraform, and it was successfully created. Everything else was spun up via Terraform, except the IAM role for the service account, for which I used eksctl.
But then I got hold of this article, which says creating the AWS IAM policy using awscli would eliminate this error. So I deleted the policy created using Terraform and recreated it with awscli. Yet it throws the same error.
Below is my external-dns YAML file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  # If you're using Amazon EKS with IAM Roles for Service Accounts, specify the following annotation.
  # Otherwise, you may safely omit it.
  annotations:
    # Substitute your account ID and IAM service role name below.
    eks.amazonaws.com/role-arn: arn:aws:iam::268xxxxxxx:role/eksctl-ats-Eks1-addon-iamserviceaccoun-Role1-WMLL93xxxx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.7.6
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=xyz.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=Z0471542U7WSPZxxxx
      securityContext:
        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
I am scratching my head, as there is no proper solution to this error anywhere on the net. Hoping to find a solution to this issue in this forum.
The end result must show something like below and fill up records in the hosted zone.
time="2020-05-05T02:57:31Z" level=info msg="All records are already up to date"
A:
I also struggled with this error.
The problem was in the definition of the trust relationship.
You can see in some official AWS tutorials (like this one) the following setup:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:<my-namespace>:<my-service-account>"
        }
      }
    }
  ]
}
Option 1 for failure
My problem was that I passed a wrong value for my-service-account at the end of ${OIDC_PROVIDER}:sub in the Condition part.
Option 2 for failure
After the previous fix I still faced the same error. It was solved by following this AWS tutorial, which shows the output of using eksctl with the command below:
eksctl create iamserviceaccount \
--name my-serviceaccount \
--namespace <your-ns> \
--cluster <your-cluster-name> \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve
When you look at the output in the trust relationship tab in the AWS web console, you can see that an additional condition was added with the postfix :aud and the value sts.amazonaws.com:
So this needs to be added after the "${OIDC_PROVIDER}:sub" condition.
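For illustration, the resulting Condition block then carries both keys (placeholders as in the tutorial snippet above):
"Condition": {
  "StringEquals": {
    "${OIDC_PROVIDER}:sub": "system:serviceaccount:<my-namespace>:<my-service-account>",
    "${OIDC_PROVIDER}:aud": "sts.amazonaws.com"
  }
}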
A:
I was able to get help from the Kubernetes Slack (shout out to @Rob Del) and this is what we came up with. There's nothing wrong with the k8s rbac from the article, the issue is the way the IAM role is written. I am using Terraform v0.12.24, but I believe something similar to the following .tf should work for Terraform v0.14:
data "aws_caller_identity" "current" {}
resource "aws_iam_role" "external_dns_role" {
name = "external-dns"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": format(
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:%s",
replace(
"${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}",
"https://",
"oidc-provider/"
)
)
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
format(
"%s:sub",
trimprefix(
"${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}",
"https://"
)
) : "system:serviceaccount:default:external-dns"
}
}
}
]
})
}
The above .tf assumes you created your EKS cluster using Terraform and that you use the RBAC manifest from the external-dns tutorial.
A:
I have a few possibilities here.
Before anything else, does your cluster have an OIDC provider associated with it? IRSA won't work without it.
You can check that in the AWS console, or via the CLI with:
aws eks describe-cluster --name {name} --query "cluster.identity.oidc.issuer"
First
Delete the iamserviceaccount, recreate it, remove the ServiceAccount definition from your ExternalDNS manfiest (the entire first section) and re-apply it.
eksctl delete iamserviceaccount --name {name} --namespace {namespace} --cluster {cluster}
eksctl create iamserviceaccount --name {name} --namespace {namespace} --cluster {cluster} --attach-policy-arn {policy-arn} --approve --override-existing-serviceaccounts
kubectl apply -n {namespace} -f {your-externaldns-manifest.yaml}
It may be that there is some conflict going on, as you have overwritten what you created with eksctl create iamserviceaccount by also specifying a ServiceAccount in your ExternalDNS manifest.
Second
Upgrade your cluster to v1.19 (if it's not there already):
eksctl upgrade cluster --name {name} will show you what will be done;
eksctl upgrade cluster --name {name} --approve will do it
Third
Some documentation suggests that in addition to setting securityContext.fsGroup: 65534, you also need to set securityContext.runAsUser: 0.
A:
I've been struggling with a similar issue after following the setup suggested here
I ended up with the exception below in the deploy logs.
time="2021-05-10T06:40:17Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 3fda6c69-2a0a-4bc9-b478-521b5131af9b"
time="2021-05-10T06:41:20Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 7d3e07a2-c514-44fa-8e79-d49314d9adb6"
In my case, it was an issue with wrong Service account name mapped to the new role created.
Here is a step-by-step approach to get this done without many hiccups.
Create the IAM Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Create the IAM role and the service account for your EKS cluster.
eksctl create iamserviceaccount \
    --name external-dns-sa-eks \
    --namespace default \
    --cluster aecops-grpc-test \
    --attach-policy-arn arn:aws:iam::xxxxxxxx:policy/external-dns-policy-eks \
    --approve \
    --override-existing-serviceaccounts
Create a new hosted zone.
aws route53 create-hosted-zone --name "hosted.domain.com." --caller-reference "grpc-endpoint-external-dns-test-$(date +%s)"
Deploy ExternalDNS, after creating the Cluster role and Cluster role binding to the previously created service account.
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns-sa-eks
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
      # If you're using kiam or kube2iam, specify the following annotation.
      # Otherwise, you may safely omit it.
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::***********:role/eksctl-eks-cluster-name-addon-iamserviceacco-Role1-156KP94SN7D7
    spec:
      serviceAccountName: external-dns-sa-eks
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.7.6
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=hosted.domain.com. # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=my-hostedzone-identifier
      securityContext:
        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
Update Ingress resource with the domain name and reapply the manifest.
For ingress objects, ExternalDNS will create a DNS record based on the host specified for the ingress object.
- host: myapp.hosted.domain.com
Validate new records created.
BASH-3.2$ aws route53 list-resource-record-sets --output json
--hosted-zone-id "/hostedzone/Z065*********" --query "ResourceRecordSets[?Name == 'hosted.domain.com..']|[?Type == 'A']"
[
{
"Name": "myapp.hosted.domain.com..",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "ZCT6F*******",
"DNSName": "****************.elb.ap-southeast-2.amazonaws.com.",
"EvaluateTargetHealth": true
}
} ]
A:
In our case this issue occurred when using the Terraform module to create the EKS cluster, and eksctl to create the iamserviceaccount for the aws-load-balancer-controller. It all works fine the first go-round. But if you do a terraform destroy, you need to do some cleanup, like deleting the CloudFormation stack created by eksctl. Somehow things got crossed, and the service account annotation was referencing a role that was no longer valid. So check the annotation of the service account to ensure it's valid, and update it if necessary. Then, in my case, I deleted and redeployed the aws-load-balancer-controller.
%> kubectl describe serviceaccount aws-load-balancer-controller -n kube-system
Name: aws-load-balancer-controller
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=eksctl
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-JQL4R3JM7I1A
Image pull secrets: <none>
Mountable secrets: aws-load-balancer-controller-token-b8hw7
Tokens: aws-load-balancer-controller-token-b8hw7
Events: <none>
%>
%> kubectl annotate --overwrite serviceaccount aws-load-balancer-controller eks.amazonaws.com/role-arn='arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-17A92GGXZRY6O' -n kube-system
A:
In my case, I was able to attach the OIDC role with the Route53 permissions policy, and that resolved the error.
https://medium.com/swlh/amazon-eks-setup-external-dns-with-oidc-provider-and-kube2iam-f2487c77b2a1
and then used that with the external-dns service account instead of the cluster role.
annotations:
# # Substitute your account ID and IAM service role name below.
eks.amazonaws.com/role-arn: arn:aws:iam::<account>:role/external-dns-service-account-oidc-role
A:
For me the issue was that the trust relationship was (correctly) set up using one partition, whereas the ServiceAccount was annotated with a different partition, like so:
...
"Principal": {
"Federated": "arn:aws-us-gov:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
},
...
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::{{ .Values.aws.account }}:role/{{ .Values.aws.roleName }}
Notice arn:aws:iam vs arn:aws-us-gov:iam
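The fix is simply to make the annotation use the same partition as the trust relationship, e.g. (a sketch reusing the same hypothetical Helm values as above):
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws-us-gov:iam::{{ .Values.aws.account }}:role/{{ .Values.aws.roleName }}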
|
Not authorized to perform sts:AssumeRoleWithWebIdentity- 403
|
I have been trying to run an external-dns pod using the guide provided by k8s-sig group. I have followed every step of the guide, and getting the below error.
time="2021-02-27T13:27:20Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 87a3ca86-ceb0-47be-8f90-25d0c2de9f48"
I had created the AWS IAM policy using Terraform, and it was successfully created. Except for the IAM role for the service account, for which I had used eksctl, everything else was spun up via Terraform.
But then I got hold of this article, which says creating the AWS IAM policy using awscli would eliminate this error. So I deleted the policy created using Terraform and recreated it with awscli. Yet, it is throwing the same error.
Below is my external dns yaml file.
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
# If you're using Amazon EKS with IAM Roles for Service Accounts, specify the following annotation.
# Otherwise, you may safely omit it.
annotations:
# Substitute your account ID and IAM service role name below.
eks.amazonaws.com/role-arn: arn:aws:iam::268xxxxxxx:role/eksctl-ats-Eks1-addon-iamserviceaccoun-Role1-WMLL93xxxx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.7.6
args:
- --source=service
- --source=ingress
- --domain-filter=xyz.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=aws
- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
- --txt-owner-id=Z0471542U7WSPZxxxx
securityContext:
fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
I am scratching my head as there is no proper solution to this error anywhere on the net. Hoping to find a solution to this issue in this forum.
End result must show something like below and fill up records in hosted zone.
time="2020-05-05T02:57:31Z" level=info msg="All records are already up to date"
|
[
"I also struggled with this error.\nThe problem was in the definition of the trust relationship.\nYou can see in some offical aws tutorials (like this) the following setup:\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Federated\": \"arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}\"\n },\n \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n \"Condition\": {\n \"StringEquals\": {\n \"${OIDC_PROVIDER}:sub\": \"system:serviceaccount:<my-namespace>:<my-service-account>\"\n }\n }\n }\n ]\n}\n\nOption 1 for failure\nMy problem was that I passed the a wrong value for my-service-account at the end of ${OIDC_PROVIDER}:sub in the Condition part.\nOption 2 for failure\nAfter the previous fix - I still faced the same error - it was solved by following this aws tutorial which shows the output of using the eksctl with the command below:\neksctl create iamserviceaccount \\\n --name my-serviceaccount \\\n --namespace <your-ns> \\\n --cluster <your-cluster-name> \\\n --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \\\n --approve\n\nWhen you look at the output in the trust relationship tab in the AWS web console - you can see that an additional condition was added with the postfix of :aud and the value of sts.amazonaws.com:\n\nSo this need to be added after the \"${OIDC_PROVIDER}:sub\" condition.\n",
"I was able to get help from the Kubernetes Slack (shout out to @Rob Del) and this is what we came up with. There's nothing wrong with the k8s rbac from the article, the issue is the way the IAM role is written. I am using Terraform v0.12.24, but I believe something similar to the following .tf should work for Terraform v0.14:\ndata \"aws_caller_identity\" \"current\" {}\n\nresource \"aws_iam_role\" \"external_dns_role\" {\n name = \"external-dns\"\n\n assume_role_policy = jsonencode({\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Federated\": format(\n \"arn:aws:iam::${data.aws_caller_identity.current.account_id}:%s\", \n replace(\n \"${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}\", \n \"https://\", \n \"oidc-provider/\"\n )\n )\n },\n \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n \"Condition\": {\n \"StringEquals\": {\n format(\n \"%s:sub\", \n trimprefix(\n \"${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}\", \n \"https://\"\n )\n ) : \"system:serviceaccount:default:external-dns\"\n }\n }\n }\n ]\n })\n}\n\nThe above .tf assume you created your eks cluster using terraform and that you use the rbac manifest from the external-dns tutorial.\n",
"I have a few possibilities here.\nBefore anything else, does your cluster have an OIDC provider associated with it? IRSA won't work without it.\nYou can check that in the AWS console, or via the CLI with:\naws eks describe-cluster --name {name} --query \"cluster.identity.oidc.issuer\"\nFirst\nDelete the iamserviceaccount, recreate it, remove the ServiceAccount definition from your ExternalDNS manfiest (the entire first section) and re-apply it.\neksctl delete iamserviceaccount --name {name} --namespace {namespace} --cluster {cluster}\neksctl create iamserviceaccount --name {name} --namespace {namespace} --cluster \n{cluster} --attach-policy-arn {policy-arn} --approve --override-existing-serviceaccounts\nkubectl apply -n {namespace} -f {your-externaldns-manifest.yaml}\n\nIt may be that there is some conflict going on as you have overwritten what you created with eksctl createiamserviceaccount by also specifying a ServiceAccount in your ExternalDNS manfiest.\nSecond\nUpgrade your cluster to v1.19 (if it's not there already):\neksctl upgrade cluster --name {name} will show you what will be done;\neksctl upgrade cluster --name {name} --approve will do it\nThird\nSome documentation suggests that in addition to setting securityContext.fsGroup: 65534, you also need to set securityContext.runAsUser: 0.\n",
"I've been struggling with a similar issue after following the setup suggested here\nI ended up with the exception below in the deploy logs.\ntime=\"2021-05-10T06:40:17Z\" level=error msg=\"records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: 3fda6c69-2a0a-4bc9-b478-521b5131af9b\"\ntime=\"2021-05-10T06:41:20Z\" level=error msg=\"records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: 7d3e07a2-c514-44fa-8e79-d49314d9adb6\"\n\nIn my case, it was an issue with wrong Service account name mapped to the new role created.\nHere is a step by step approach to get this done without much hiccups.\n\nCreate the IAM Policy\n\n\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"route53:ChangeResourceRecordSets\"\n ],\n \"Resource\": [\n \"arn:aws:route53:::hostedzone/*\"\n ]\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"route53:ListHostedZones\",\n \"route53:ListResourceRecordSets\"\n ],\n \"Resource\": [\n \"*\"\n ]\n }\n ]\n }\n\n\n\nCreate the IAM role and the service account for your EKS cluster.\n\n\neksctl create iamserviceaccount \\\n --name external-dns-sa-eks \\\n --namespace default \\\n --cluster aecops-grpc-test \\\n --attach-policy-arn arn:aws:iam::xxxxxxxx:policy/external-dns-policy-eks \\\n --approve \n --override-existing-serviceaccounts\n\n\n\nCreated new hosted zone.\n\naws route53 create-hosted-zone --name \"hosted.domain.com.\" --caller-reference \"grpc-endpoint-external-dns-test-$(date +%s)\"\n\nDeploy ExternalDNS, after creating the Cluster role and Cluster role binding to the previously created service account.\n\n\n---\napiVersion: rbac.authorization.k8s.io/v1beta1\nkind: ClusterRole\nmetadata:\n name: external-dns\nrules:\n- apiGroups: [\"\"]\n resources: [\"services\",\"endpoints\",\"pods\"]\n verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n resources: [\"ingresses\"]\n verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\",\"watch\"]\n---\napiVersion: rbac.authorization.k8s.io/v1beta1\nkind: ClusterRoleBinding\nmetadata:\n name: external-dns-viewer\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: external-dns\nsubjects:\n- kind: ServiceAccount\n name: external-dns-sa-eks\n namespace: default\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: external-dns\nspec:\n strategy:\n type: Recreate\n selector:\n matchLabels:\n app: external-dns\n template:\n metadata:\n labels:\n app: external-dns\n # If you're using kiam or kube2iam, specify the following annotation.\n # Otherwise, you may safely omit it.\n annotations:\n iam.amazonaws.com/role: arn:aws:iam::***********:role/eksctl-eks-cluster-name-addon-iamserviceacco-Role1-156KP94SN7D7\n spec:\n serviceAccountName: external-dns-sa-eks\n containers:\n - name: external-dns\n image: k8s.gcr.io/external-dns/external-dns:v0.7.6\n args:\n - --source=service\n - --source=ingress\n - --domain-filter=hosted.domain.com. 
# will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n - --provider=aws\n - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)\n - --registry=txt\n - --txt-owner-id=my-hostedzone-identifier\n securityContext:\n fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files\n\n\n\nUpdate Ingress resource with the domain name and reapply the manifest.\n\nFor ingress objects, ExternalDNS will create a DNS record based on the host specified for the ingress object.\n\n- host: myapp.hosted.domain.com\n\n\nValidate new records created.\n\n\nBASH-3.2$ aws route53 list-resource-record-sets --output json\n--hosted-zone-id \"/hostedzone/Z065*********\" --query \"ResourceRecordSets[?Name == 'hosted.domain.com..']|[?Type == 'A']\"\n\n[\n {\n \"Name\": \"myapp.hosted.domain.com..\",\n \"Type\": \"A\",\n \"AliasTarget\": {\n \"HostedZoneId\": \"ZCT6F*******\",\n \"DNSName\": \"****************.elb.ap-southeast-2.amazonaws.com.\",\n \"EvaluateTargetHealth\": true\n }\n } ]\n\n\n",
"In our case this issue occurred when using the Terraform module to create the eks cluster, and eksctl to create the iamserviceaccount for the aws-load-balancer controller. It all works fine the first go-round. But if you do a terraform destroy, you need to do some cleanup, like delete the CloudFormation script created by eksctl. Somehow things got crossed, and the CloudTrail was passing along a resource role that was no longer valid. So check the annotation of the service account to ensure it's valid, and update it if necessary. Then in my case I deleted and redeployed the aws-load-balancer-controller\n%> kubectl describe serviceaccount aws-load-balancer-controller -n kube-system \nName: aws-load-balancer-controller\nNamespace: kube-system\nLabels: app.kubernetes.io/managed-by=eksctl\nAnnotations: eks.amazonaws.com/role-arn: arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-JQL4R3JM7I1A\nImage pull secrets: <none>\nMountable secrets: aws-load-balancer-controller-token-b8hw7\nTokens: aws-load-balancer-controller-token-b8hw7\nEvents: <none>\n%>\n\n%> kubectl annotate --overwrite serviceaccount aws-load-balancer-controller eks.amazonaws.com/role-arn='arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-17A92GGXZRY6O' -n kube-system\n\n",
"In my case, I was able to attach the oidc role with route53 permissions policy and that resolved the error.\nhttps://medium.com/swlh/amazon-eks-setup-external-dns-with-oidc-provider-and-kube2iam-f2487c77b2a1\nand then with the external-dns service account used that instead of the cluster role.\n annotations:\n # # Substitute your account ID and IAM service role name below.\n eks.amazonaws.com/role-arn: arn:aws:iam::<account>:role/external-dns-service-account-oidc-role\n\n",
"For me the issue was that the trust relationship was (correctly) setup using one partition whereas the ServiceAccount was annotated with a different partition, like so:\n...\n\"Principal\": {\n \"Federated\": \"arn:aws-us-gov:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}\"\n},\n...\n\nkind: ServiceAccount\nmetadata:\n annotations:\n eks.amazonaws.com/role-arn: arn:aws:iam::{{ .Values.aws.account }}:role/{{ .Values.aws.roleName }}\n\nNotice arn:aws:iam vs arn:aws-us-gov:iam\n"
] |
[
23,
2,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"amazon_web_services",
"aws_cli",
"eksctl",
"external_dns",
"kubernetes"
] |
stackoverflow_0066405794_amazon_web_services_aws_cli_eksctl_external_dns_kubernetes.txt
|
Q:
SAS help - median of values in an array
I am relatively new to SAS but have done a fair amount of programming over the years. I am at a loss on how to accomplish a task in SAS that I feel I would be able to do relatively easily in other platforms. I have an input table similar to this:
City         _1988  _1989  _1990  _1991  _1992  _1993  _1994  _1995  _1996  _1997  _1998  _1999  _2000
Columbus     438866 437148 16082  475843 224411 411569 658459 174208 592418 31664  312374 242830 342950
Fargo        11218  7402   35574  14765  64727  29492  104541 616    57864  73451  96251  78803  34743
Santa Fe     10608  31531  46163  28215  62608  52576  55674  43339  34896  77851  41304  31308  60306
Poughkeepsie 2184   15642  13505  9279   22796  6458   3279   4458   19672  17610  2672   11454  1072
Montpelier   1428   671    520    5453   5468   2117   2802   5847   3165   6204   1832   5357   5499
Waco         12527  695    44426  61651  83997  12811  50570  15022  86732  38541  45292  120719 17969
Nashville    359806 249811 422314 151319 466174 107335 315576 571273 195685 230626 194663 11060  545940
Billings     49694  37415  38602  79238  65260  18497  8976   81148  71326  108760 43740  48110  32106
Pensacola    4501   9682   19061  14731  4623   16106  13419  47607  9198   25003  39303  45146  24143
Trenton      40341  21210  4162   57773  16937  60495  21508  80819  27349  65088  65815  66308  38151
I would like to find the median of all the differences in values for each city.
The basic logic is I need to obtain the median of all the values in the array "difference" in the pseudo-code below.
for i = 1988 to 2000
for j = i+1 to 2000
difference(i,j) = value year_i - value year_j
end
end
I wish I could paste my sample code here, but I am basically at a point of writer's block where what I have produced is so far off that it is of no use. I don't necessarily need someone to write the entire code for me but am hoping somebody can send me down the right path. I feel like this shouldn't be that hard, but I am at a loss . . .
Thanks in advance!
A:
I'm not entirely following what you're trying to do here, but I think this will get you going.
First, I create the temp data with pairwise differences as you requested. Then I use PROC SUMMARY to calculate the median across city and year.
Feel free to ask.
/* sample data */
data have;
input City :$12. _1988 - _2000;
infile datalines dlm = '|';
datalines;
Columbus |438866|437148|16082 |475843|224411|411569|658459|174208|592418|31664 |312374|242830|342950
Fargo |11218 |7402 |35574 |14765 |64727 |29492 |104541|616 |57864 |73451 |96251 |78803 |34743
Santa Fe |10608 |31531 |46163 |28215 |62608 |52576 |55674 |43339 |34896 |77851 |41304 |31308 |60306
Poughkeepsie|2184 |15642 |13505 |9279 |22796 |6458 |3279 |4458 |19672 |17610 |2672 |11454 |1072
Montpelier |1428 |671 |520 |5453 |5468 |2117 |2802 |5847 |3165 |6204 |1832 |5357 |5499
Waco |12527 |695 |44426 |61651 |83997 |12811 |50570 |15022 |86732 |38541 |45292 |120719|17969
Nashville |359806|249811|422314|151319|466174|107335|315576|571273|195685|230626|194663|11060 |545940
Billings |49694 |37415 |38602 |79238 |65260 |18497 |8976 |81148 |71326 |108760|43740 |48110 |32106
Pensacola |4501 |9682 |19061 |14731 |4623 |16106 |13419 |47607 |9198 |25003 |39303 |45146 |24143
Trenton |40341 |21210 |4162 |57773 |16937 |60495 |21508 |80819 |27349 |65088 |65815 |66308 |38151
;
/* pairwise differences in long format */
data temp;
set have;
array y1(i) _1988 - _2000;
array y2(j) _1988 - _2000;
do over y1;
do over y2;
year1 = input(compress(vname(y1), , 'kd'), 8.);
val1 = y1;
year2 = input(compress(vname(y2), , 'kd'), 8.);
val2 = y2;
diff = val1 - val2;
if i ne j then output;
end;
end;
drop _:;
run;
/* calculate median */
proc summary data = temp nway;
class city year1;
var diff;
output out = want(drop = _:) median =;
run;
A:
You can use a 13-element array to hold the year values. And a 13x13 temporary array to hold the differences. Then median(of ARRAY(*)) to get the median.
* create the sample data;
data have;
input City $ _1988 _1989 _1990 _1991 _1992 _1993 _1994 _1995 _1996 _1997 _1998 _1999 _2000;
datalines;
Columbus 438866 437148 16082 475843 224411 411569 658459 174208 592418 31664 312374 242830 342950
Fargo 11218 7402 35574 14765 64727 29492 104541 616 57864 73451 96251 78803 34743
Santa-Fe 10608 31531 46163 28215 62608 52576 55674 43339 34896 77851 41304 31308 60306
Poughkeepsie 2184 15642 13505 9279 22796 6458 3279 4458 19672 17610 2672 11454 1072
Montpelier 1428 671 520 5453 5468 2117 2802 5847 3165 6204 1832 5357 5499
Waco 12527 695 44426 61651 83997 12811 50570 15022 86732 38541 45292 120719 17969
Nashville 359806 249811 422314 151319 466174 107335 315576 571273 195685 230626 194663 11060 545940
Billings 49694 37415 38602 79238 65260 18497 8976 81148 71326 108760 43740 48110 32106
Pensacola 4501 9682 19061 14731 4623 16106 13419 47607 9198 25003 39303 45146 24143
Trenton 40341 21210 4162 57773 16937 60495 21508 80819 27349 65088 65815 66308 38151
;
run;
data want;
set have;
* create an array to hold the year variables;
array years {1988:2000} _1988 - _2000;
* create a 2-dimensional array to hold the differences;
array differences {1988:2000,1988:2000} _temporary_;
do i = 1988 to 2000;
do j = i + 1 to 2000;
* calculate the differences as per pseudo-code in question;
differences(i,j) = years(i) - years(j);
end;
end;
* get median value;
median_diff = median(of differences(*));
run;
A:
Thanks everyone! What you have provided is exactly what I needed to get me going in the right direction (I simplified the problem for the purpose of posting in the forum, so I still need to make some adjustments . . . I am well on my way). Thanks again!
|
SAS help - median of values in an array
|
I am relatively new to SAS but have done a fair amount of programming over the years. I am at a loss on how to accomplish a task in SAS that I feel I would be able to do relatively easily in other platforms. I have an input table similar to this:
City         _1988  _1989  _1990  _1991  _1992  _1993  _1994  _1995  _1996  _1997  _1998  _1999  _2000
Columbus     438866 437148 16082  475843 224411 411569 658459 174208 592418 31664  312374 242830 342950
Fargo        11218  7402   35574  14765  64727  29492  104541 616    57864  73451  96251  78803  34743
Santa Fe     10608  31531  46163  28215  62608  52576  55674  43339  34896  77851  41304  31308  60306
Poughkeepsie 2184   15642  13505  9279   22796  6458   3279   4458   19672  17610  2672   11454  1072
Montpelier   1428   671    520    5453   5468   2117   2802   5847   3165   6204   1832   5357   5499
Waco         12527  695    44426  61651  83997  12811  50570  15022  86732  38541  45292  120719 17969
Nashville    359806 249811 422314 151319 466174 107335 315576 571273 195685 230626 194663 11060  545940
Billings     49694  37415  38602  79238  65260  18497  8976   81148  71326  108760 43740  48110  32106
Pensacola    4501   9682   19061  14731  4623   16106  13419  47607  9198   25003  39303  45146  24143
Trenton      40341  21210  4162   57773  16937  60495  21508  80819  27349  65088  65815  66308  38151
I would like to find the median of all the differences in values for each city.
The basic logic is I need to obtain the median of all the values in the array "difference" in the pseudo-code below.
for i = 1988 to 2000
for j = i+1 to 2000
difference(i,j) = value year_i - value year_j
end
end
I wish I could paste my sample code here, but I am basically at a point of writer's block where what I have produced is so far off that it is of no use. I don't necessarily need someone to write the entire code for me but am hoping somebody can send me down the right path. I feel like this shouldn't be that hard, but I am at a loss . . .
Thanks in advance!
|
[
"I'm not entirely following what you're trying to do here, but I think this will get you going.\nFirst, I create the temp data with pairwise differences as you request. Then I use the Proc Summary to calculate the median across city and year.\nFeel free to ask.\n/* sample data */\ndata have;\ninput City :$12. _1988 - _2000;\ninfile datalines dlm = '|';\ndatalines;\nColumbus |438866|437148|16082 |475843|224411|411569|658459|174208|592418|31664 |312374|242830|342950 \nFargo |11218 |7402 |35574 |14765 |64727 |29492 |104541|616 |57864 |73451 |96251 |78803 |34743 \nSanta Fe |10608 |31531 |46163 |28215 |62608 |52576 |55674 |43339 |34896 |77851 |41304 |31308 |60306 \nPoughkeepsie|2184 |15642 |13505 |9279 |22796 |6458 |3279 |4458 |19672 |17610 |2672 |11454 |1072 \nMontpelier |1428 |671 |520 |5453 |5468 |2117 |2802 |5847 |3165 |6204 |1832 |5357 |5499 \nWaco |12527 |695 |44426 |61651 |83997 |12811 |50570 |15022 |86732 |38541 |45292 |120719|17969 \nNashville |359806|249811|422314|151319|466174|107335|315576|571273|195685|230626|194663|11060 |545940 \nBillings |49694 |37415 |38602 |79238 |65260 |18497 |8976 |81148 |71326 |108760|43740 |48110 |32106 \nPensacola |4501 |9682 |19061 |14731 |4623 |16106 |13419 |47607 |9198 |25003 |39303 |45146 |24143 \nTrenton |40341 |21210 |4162 |57773 |16937 |60495 |21508 |80819 |27349 |65088 |65815 |66308 |38151 \n;\n\n/* pairwise differences in long format */\ndata temp;\n set have;\n array y1(i) _1988 - _2000;\n array y2(j) _1988 - _2000;\n\n do over y1;\n do over y2;\n year1 = input(compress(vname(y1), , 'kd'), 8.);\n val1 = y1;\n\n year2 = input(compress(vname(y2), , 'kd'), 8.);\n val2 = y2;\n\n diff = val1 - val2;\n\n if i ne j then output;\n end;\n end;\n\n drop _:;\nrun;\n\n/* calculate median */\nproc summary data = temp nway;\n class city year1;\n var diff;\n output out = want(drop = _:) median =;\nrun;\n\n",
"You can use a 13-element array to hold the year values. And a 13x13 temporary array to hold the differences. Then median(of ARRAY(*)) to get the median.\n* create the sample data;\ndata have;\n input City $ _1988 _1989 _1990 _1991 _1992 _1993 _1994 _1995 _1996 _1997 _1998 _1999 _2000;\n datalines;\nColumbus 438866 437148 16082 475843 224411 411569 658459 174208 592418 31664 312374 242830 342950\nFargo 11218 7402 35574 14765 64727 29492 104541 616 57864 73451 96251 78803 34743\nSanta-Fe 10608 31531 46163 28215 62608 52576 55674 43339 34896 77851 41304 31308 60306\nPoughkeepsie 2184 15642 13505 9279 22796 6458 3279 4458 19672 17610 2672 11454 1072\nMontpelier 1428 671 520 5453 5468 2117 2802 5847 3165 6204 1832 5357 5499\nWaco 12527 695 44426 61651 83997 12811 50570 15022 86732 38541 45292 120719 17969\nNashville 359806 249811 422314 151319 466174 107335 315576 571273 195685 230626 194663 11060 545940\nBillings 49694 37415 38602 79238 65260 18497 8976 81148 71326 108760 43740 48110 32106\nPensacola 4501 9682 19061 14731 4623 16106 13419 47607 9198 25003 39303 45146 24143\nTrenton 40341 21210 4162 57773 16937 60495 21508 80819 27349 65088 65815 66308 38151\n;\nrun;\n\ndata want;\n set have;\n * create an array to hold the year variables;\n array years {1988:2000} _1988 - _2000;\n * create a 2-dimensional array to hold the differences;\n array differences {1988:2000,1988:2000} _temporary_;\n\n do i = 1988 to 20000;\n do j = i + 1 to 2000;\n * calculate the differences as per pseudo-code in question;\n differences(i,j) = years(i) - years(j);\n end;\n end;\n * get median value;\n median_diff = median(of differences(*));\nrun;\n\n",
"Thanks everyone! What you have provided is exactly what I needed to get me going in the right direction (I simplified the problem for the purpose of posting in the forum, so I still need to make some adjustments . . . I am well on my way). Thanks again!\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"sas"
] |
stackoverflow_0074659934_sas.txt
|
Q:
How to get data from a screen in React-Native?
I have a Subscription screen that has a useEffect hook that updates an isSubscribed useState.
I am implementing this screen in the AppRoot as a subscription paywall. I simply need to get this useState variable isSubscribed and use it in the AppRoot to see the data. How can I get this data from the Subscription screen?
For example, in my AppRoot.jsx:
const subbed = *subScreen use State Variable*
return (
{ subbed ? (
<Stack.Navigator>
<Stack.Group>
<Stack.Screen
name="SubScreen"
component={SubScreen}
initialParams={{ initialroute: 'Home' }}
/>
</Stack.Group>
</Stack.Navigator>
) : (
<NavigationRoot />
)}
)
Subscreen.jsx has a variable such as this
const [subbed, setSubbed] = useState([]);
How am I able to use this useState in the AppRoot file? I cannot just simply export it?
I'm expecting to be able to get the useState variable subbed and use it to generate the proper screens.
Below is the full SubScreen, where ownedSubscriptions and setOwnedSubscriptions are being passed from the AppRoot. I need the SubScreen to be shown on launch if the user is not subscribed, but to be skipped on launch (while still being accessible) if ownedSubscriptions is not empty.
const SubScreen = ({ownedSubscriptions,setOwnedSubscriptions}) => {
const {
connected,
subscriptions,
getSubscriptions,
currentPurchase,
finishTransaction,
} = useIAP();
console.log('connected: ',connected);
//[ownedSubscriptions, setOwnedSubscriptions] = set useState(
const handleGetSubscriptions = () => {
getSubscriptions({skus: ['sku']}).then( ()=>{
console.log('Got Subscriptions')
}).catch( () => {
console.log('failed to get subscriptions')
})
}
const handleBuySubscription = async (
productId,
offerToken,
) => {
if (isPlay && !offerToken) {
console.warn(
`There are no subscription Offers for selected product (Only requiered for Google Play purchases): ${productId}`,
);
}
try {
await requestSubscription({
sku: productId,
...(offerToken && {
subscriptionOffers: [{sku: productId, offerToken}],
}),
});
} catch (error) {
if (error instanceof PurchaseError) {
errorLog({message: `[${error.code}]: ${error.message}`, error});
} else {
errorLog({message: 'handleBuySubscription', error});
}
}
};
useEffect(() => {
const checkCurrentPurchase = async () => {
try {
if (currentPurchase?.productId) {
await finishTransaction({
purchase: currentPurchase,
isConsumable: true,
});
setOwnedSubscriptions((prev) => [
...prev,
currentPurchase?.productId,
]);
}
} catch (error) {
if (error instanceof PurchaseError) {
errorLog({message: `[${error.code}]: ${error.message}`, error});
} else {
errorLog({message: 'handleBuyProduct', error});
}
}
};
checkCurrentPurchase();
}, [currentPurchase, finishTransaction]);
console.log(ownedSubscriptions + ' owned') //returns lxm_1999_1m_1w0
return (
<ScrollView contentContainerStyle={contentContainerStyle}>
<State connected={connected} storekit2={isIosStorekit2()} />
<Box>
<View style={styles.container}>
<Heading copy="Subscriptions" />
{subscriptions.map((subscription, index) => {
const owned = ownedSubscriptions.find((pId) => {
return isAmazon
? pId === constants.amazonBaseSku
: pId === subscription.productId;
});
return (
<Row
key={subscription.productId}
fields={[
{
label: 'Subscription Id',
value: subscription.productId,
},
{
label: 'type',
value:
'type' in subscription
? subscription.type
: subscription.productType,
},
]}
isLast={subscriptions.length - 1 === index}
>
{owned && <Text>Subscribed</Text>}
{!owned &&
isPlay &&
// On Google Play Billing V5 you might have multiple offers for a single sku
'subscriptionOfferDetails' in subscription &&
subscription?.subscriptionOfferDetails?.map((offer) => (
<Button
title={`Subscribe ${offer.pricingPhases.pricingPhaseList
.map((ppl) => ppl.billingPeriod)
.join(',')}`}
onPress={() => {
handleBuySubscription(
subscription.productId,
offer.offerToken,
);
}}
/>
))}
{!owned && (isIos || isAmazon) && (
<Button
title="Subscribe"
onPress={() => {
handleBuySubscription(subscription.productId);
}}
/>
)}
</Row>
);
})}
</View>
<Button
title="Get the subscriptions"
onPress={handleGetSubscriptions}
/>
</Box>
</ScrollView>
);
};
const styles = StyleSheet.create({
container: {
marginBottom: 20,
},
});
export default SubScreen;
A:
The simplest way, I believe, would be to define your state variable in your parent component (so AppRoot) and to pass it down to your screen as props, along with setSubbed (so 2 props).
So your AppRoot.jsx would be like :
const [subbed, setSubbed] = useState([]);
return (
{ subbed ? (
<Stack.Navigator>
<Stack.Group>
<Stack.Screen
name="SubScreen"
initialParams={{ initialroute: 'Home' }}>
{props => <SubScreen {...props} subbed={subbed} setSubbed={setSubbed} />}
</Stack.Screen>
</Stack.Group>
</Stack.Navigator>
) : (
<NavigationRoot />
)}
)
Now you can update it from your child component (using setSubbed) and use it from your parent component.
Your child component would be something like this:
function SubScreen({subbed, setSubbed}) {}
// or
const SubScreen = ({subbed, setSubbed}) => {}
If you need it from several other components, or even nested components, you might want to look into contexts.
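For completeness, a minimal sketch of the context approach (SubContext, SubProvider and useSub are hypothetical names, not part of the original code):
import React, { createContext, useContext, useState } from 'react';

// Hypothetical context holding the subscription state
const SubContext = createContext(null);

export const SubProvider = ({ children }) => {
  const [subbed, setSubbed] = useState([]);
  return (
    <SubContext.Provider value={{ subbed, setSubbed }}>
      {children}
    </SubContext.Provider>
  );
};

// Any component rendered below the provider can read or update the state
export const useSub = () => useContext(SubContext);

You would wrap your navigator in <SubProvider> once, then call useSub() from SubScreen (or anywhere else) instead of threading props down.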
|
How to get data from a screen in React-Native?
|
I have a Subscription screen that has a useEffect hook that updates an isSubscribed useState.
I am implementing this screen in the AppRoot as a subscription paywall. I simply need to get this useState variable isSubscribed and use it in the AppRoot to see the data. How can I get this data from the Subscription screen?
For example, in my AppRoot.jsx:
const subbed = *subScreen use State Variable*
return (
{ subbed ? (
<Stack.Navigator>
<Stack.Group>
<Stack.Screen
name="SubScreen"
component={SubScreen}
initialParams={{ initialroute: 'Home' }}
/>
</Stack.Group>
</Stack.Navigator>
) : (
<NavigationRoot />
)}
)
Subscreen.jsx has a variable such as this
const [subbed, setSubbed] = useState([]);
How am I able to use this useState in the AppRoot file? I cannot just simply export it?
I'm expecting to be able to get the useState variable subbed and use it to generate the proper screens.
Below is the full SubScreen, where ownedSubscriptions and setOwnedSubscriptions are being passed from the AppRoot. I need the SubScreen to be shown on launch if the user is not subscribed, but to be skipped on launch (while still being accessible) if ownedSubscriptions is not empty.
const SubScreen = ({ownedSubscriptions,setOwnedSubscriptions}) => {
const {
connected,
subscriptions,
getSubscriptions,
currentPurchase,
finishTransaction,
} = useIAP();
console.log('connected: ',connected);
//[ownedSubscriptions, setOwnedSubscriptions] = set useState(
const handleGetSubscriptions = () => {
getSubscriptions({skus: ['sku']}).then( ()=>{
console.log('Got Subscriptions')
}).catch( () => {
console.log('failed to get subscriptions')
})
}
const handleBuySubscription = async (
productId,
offerToken,
) => {
if (isPlay && !offerToken) {
console.warn(
`There are no subscription Offers for selected product (Only requiered for Google Play purchases): ${productId}`,
);
}
try {
await requestSubscription({
sku: productId,
...(offerToken && {
subscriptionOffers: [{sku: productId, offerToken}],
}),
});
} catch (error) {
if (error instanceof PurchaseError) {
errorLog({message: `[${error.code}]: ${error.message}`, error});
} else {
errorLog({message: 'handleBuySubscription', error});
}
}
};
useEffect(() => {
const checkCurrentPurchase = async () => {
try {
if (currentPurchase?.productId) {
await finishTransaction({
purchase: currentPurchase,
isConsumable: true,
});
setOwnedSubscriptions((prev) => [
...prev,
currentPurchase?.productId,
]);
}
} catch (error) {
if (error instanceof PurchaseError) {
errorLog({message: `[${error.code}]: ${error.message}`, error});
} else {
errorLog({message: 'handleBuyProduct', error});
}
}
};
checkCurrentPurchase();
}, [currentPurchase, finishTransaction]);
console.log(ownedSubscriptions + ' owned') //returns lxm_1999_1m_1w0
return (
<ScrollView contentContainerStyle={contentContainerStyle}>
<State connected={connected} storekit2={isIosStorekit2()} />
<Box>
<View style={styles.container}>
<Heading copy="Subscriptions" />
{subscriptions.map((subscription, index) => {
const owned = ownedSubscriptions.find((pId) => {
return isAmazon
? pId === constants.amazonBaseSku
: pId === subscription.productId;
});
return (
<Row
key={subscription.productId}
fields={[
{
label: 'Subscription Id',
value: subscription.productId,
},
{
label: 'type',
value:
'type' in subscription
? subscription.type
: subscription.productType,
},
]}
isLast={subscriptions.length - 1 === index}
>
{owned && <Text>Subscribed</Text>}
{!owned &&
isPlay &&
// On Google Play Billing V5 you might have multiple offers for a single sku
'subscriptionOfferDetails' in subscription &&
subscription?.subscriptionOfferDetails?.map((offer) => (
<Button
title={`Subscribe ${offer.pricingPhases.pricingPhaseList
.map((ppl) => ppl.billingPeriod)
.join(',')}`}
onPress={() => {
handleBuySubscription(
subscription.productId,
offer.offerToken,
);
}}
/>
))}
{!owned && (isIos || isAmazon) && (
<Button
title="Subscribe"
onPress={() => {
handleBuySubscription(subscription.productId);
}}
/>
)}
</Row>
);
})}
</View>
<Button
title="Get the subscriptions"
onPress={handleGetSubscriptions}
/>
</Box>
</ScrollView>
);
};
const styles = StyleSheet.create({
container: {
marginBottom: 20,
},
});
export default SubScreen;
|
[
"the simplest way I believe would be to define your state variable in your parent component (so apRoot) and to pass it down to your screen as props, along with it setSubbed (so 2 props).\nSo your AppRoot.jsx would be like :\nconst [subbed, setSubbed] = useState([]);\nreturn (\n\n{ subbed ? (\n <Stack.Navigator> \n <Stack.Group>\n <Stack.Screen\n name=\"SubScreen\"\n initialParams={{ initialroute: 'Home' }}>\n {props => <SubScreen {...props} subbed={subbed} setSubbed={setSubbed} />}\n </Stack.Screen>\n </Stack.Group>\n </Stack.Navigator> \n ) : (\n <NavigationRoot /> \n )}\n)\n\nNow you can update it from your child component (using setSubbed) and use it from your parent component.\nYou child component would be something like this:\nfunction SubScreen({subbed, setSubbed}) {}\n// or\nconst SubScreen = ({subbed, setSubbed}) => {}\n\n\nIf you need it from several other components, or even nested components, you might want to look into contexts.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"react_hooks",
"react_native"
] |
stackoverflow_0074660870_javascript_react_hooks_react_native.txt
|
Q:
Input from user to print out a certain instance variable in python
I have created a class with programs:
class Program:
def __init__(self,channel,start, end, name, viewers, percentage):
self.channel = channel
self.start = start
self.end = end
self.name = name
self.viewers = viewers
Channel 1, start:16.00 end:17.45 viewers: 100 name: Matinee:The kiss on the cross
Channel 1, start:17.45 end:17.50 viewers: 45 name: The stock market today
Channel 2, start:16.45 end:17.50 viewers: 30 name: News
Channel 4, start:17.25 end:17.50 viewers: 10 name: Home building
Channel 5, start:15.45 end:16.50 viewers: 28 name: Reality
I also have created a nested list with the programs:
[[1,16:00, 17,45, 100, 'Matinee: The kiss on the cross'],[1,17:45, 17,50, 45,'The stock market today'],[2,16:45, 17,50, 30,'News'], [4,17:25, 17,50, 10,'Home building'],[5,15:45, 16,50, 28,'Reality']
Now we want the user to be able to write the name of a program:
News
The result should be:
News 19.45-17.50 has 30 viewers
I thought about how you could incorporate a method to prevent the program from crashing if the input is invalid / not an instance variable.
I have tried this:
Check_input():
print('Enter the name of the desired program:')
while True: #Continue asking for valid input.
try:
name = input('>')
if name == #is an instance?
return name
else:
print('Enter a program that is included in the schedule:') #input out of range
except ValueError:
print('Write a word!') #Word or letter as input
print('Try again')
I wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?)
I also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the "memory"
Do you have any suggestions how to combat the problem?
All help is much appreciated!
A:
I wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?)
Well if all your programs have a unique name then the easiest approach would probably be to store them in a dictionary instead of a nested list like:
programs = {
"News": Program("2", "16:45", "17:50", "News", "30", "60"),
"Reality": <Initialize Program class object for this program>,
...
}
Then you could just use the get dictionary method (it allows you to return a specific value if the key does not exist) to see if the asked program exists:
name = input('>')
program = programs.get(name, None)
if program:
print(program)
else:
# raise an exception or handle however you prefer
And if your programs don't have a unique name, then you will have to iterate over the list, in which case I would probably return a list of all existing objects that have that name. A for loop would work just fine, but I would switch the nested list for a list of Program objects since you already have the class; a minimal sketch follows.
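A sketch of that lookup, assuming a list of Program objects and that names may repeat (find_programs is a hypothetical helper, not from the original post):
def find_programs(programs, name):
    """Return every Program object whose name matches; the list may be empty."""
    return [program for program in programs if program.name == name]

matches = find_programs(program_list, input('>'))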
I also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the "memory" Do you have any suggestions how to combat the problem.
I would say that the most elegant solution is to override the __str__ method of your Program class so that you can just call print(program) and write out the right output. For example:
class Program:
def __init__(self,channel,start, end, name, viewers, percentage):
self.channel = channel
self.start = start
self.end = end
self.name = name
self.viewers = viewers
def __str__(self):
return self.name + " " + self.start + "-" + self.end + " has " + self.viewers + " viewers"
should print out
News 19.45-17.50 has 30 viewers
when you call it like:
program = programs.get(name, None)
if program:
print(program)
|
Input from user to print out a certain instance variable in python
|
I have created a class with programs:
class Program:
def __init__(self,channel,start, end, name, viewers, percentage):
self.channel = channel
self.start = start
self.end = end
self.name = name
self.viewers = viewers
Channel 1, start:16.00 end:17.45 viewers: 100 name: Matinee:The kiss on the cross
Channel 1, start:17.45 end:17.50 viewers: 45 name: The stock market today
Channel 2, start:16.45 end:17.50 viewers: 30 name: News
Channel 4, start:17.25 end:17.50 viewers: 10 name: Home building
Channel 5, start:15.45 end:16.50 viewers: 28 name: Reality
I also have created a nested list with the programs:
[[1,16:00, 17,45, 100, 'Matinee: The kiss on the cross'],[1,17:45, 17,50, 45,'The stock market today'],[2,16:45, 17,50, 30,'News'], [4,17:25, 17,50, 10,'Home building'],[5,15:45, 16,50, 28,'Reality']
Now we want the user to be able to write the name of a program:
News
The result should be:
News 19.45-17.50 has 30 viewers
I thought about how you could incorporate a method to prevent the program from crashing if the input is invalid / not an instance variable.
I have tried this:
Check_input():
print('Enter the name of the desired program:')
while True: #Continue asking for valid input.
try:
name = input('>')
if name == #is an instance?
return name
else:
print('Enter a program that is included in the schedule:') #input out of range
except ValueError:
print('Write a word!') #Word or letter as input
print('Try again')
I wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?)
I also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the "memory"
Do you have any suggestions how to combat the problem?
All help is much appreciated!
|
[
"\nI wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?)\n\nWell if all your programs have a unique name then the easiest approach would probably be to store them in a dictionary instead of a nested list like:\nprograms = {\n \"News\": Program(\"2\", \"16:45\", \"17:50\", \"News\", \"30\", \"60\"),\n \"Reality\": <Initialize Program class object for this program>,\n ...\n}\n\nThen you could just use the get dictionary method (it allows you to return a specific value if the key does not exist) to see if the asked program exists:\nname = input('>') \nprogram = programs.get(name, None)\nif program:\n print(program)\nelse:\n # raise an exception or handle however you prefer\n\nAnd if your programs don't have a unique name then you will have to iterate over the list. In which case I would probably return a list of all existing objects that have that name. A for loop would work just fine, but I would switch the nested list with a list of Program objects since you already have the class.\n\nI also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the \"memory\" Do you have any suggestions how to combat the problem.\n\nI would say that the most elegant solution is to override the __str__ method of your Program class so that you can just call print(program) and write out the right output. For example:\nclass Program:\n def __init__(self,channel,start, end, name, viewers, percentage):\n self.channel = channel\n self.start = start\n self.end = end\n self.name = name\n self.viewers = viewers \n \n def __str__(self):\n return self.name + \" \" + self.start + \"-\" + self.end + \" has \" + self.viewers + \" viewers\"\n\nshould print out\n\nNews 19.45-17.50 has 30 viewers\n\nwhen you call it like:\nprogram = programs.get(name, None)\nif program:\n print(program)\n\n"
] |
[
1
] |
[] |
[] |
[
"class",
"input",
"list",
"python",
"try_except"
] |
stackoverflow_0074660715_class_input_list_python_try_except.txt
|
Q:
Get value of total coverage percentage from coverage.py using a regular expression (REGEX)
This question is basically on how to use regular expressions but I couldn't find any answer to it in a lot of very closely related questions.
I create coverage reports in a gitlab pipeline using coverage.py and py.test which look like the following piped into a file like coverage37.log:
-------------- generated xml file: /builds/utils/foo/report.xml --------------
---------- coverage: platform linux, python 3.7.11-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
foo/tests/bar1.py 52 0 100%
...
foo/tests/bar2.py 0 0 100%
-------------------------------------------------
TOTAL 431 5 99%
======================= 102 passed, 9 warnings in 4.35s ========================
Now I want to create a badge for the total coverage, i.e. here the 99% value, and only get the number (99) in order to assign it to a variable. This variable can then be used to create a flexible coverage badge using the anybadge package.
My naive approach would be something like:
COVERAGE_SCORE=$(sed -n 'what to put here' coverage37.log)
echo "Coverage is $COVERAGE_SCORE"
Note that I know that gitlab, github, etc. offer specific functionalities to create badges automatically. But I want to create it manually in order to have more control and create the badge per branch.
Any hints are welcome. Thanks in advance!
A:
It is easier to use awk here:
cov_score=$(awk '$1 == "TOTAL" {print $NF+0}' coverage37.log)
Here $1 == "TOTAL" matches a line with first word as TOTAL and print $NF+0 prints number part of last field.
A:
rather than get approximate values from non-machine-readable outputs you'd be best to use coverage's programmatic apis, either coverage xml or coverage json
here's an example using the json output (note I send it to /dev/stdout, by default it goes to coverage.json)
$ coverage json -o /dev/stdout | jq .totals.percent_covered
52.908756889161054
there's even more information there if you need it:
$ coverage json -o /dev/stdout | jq .totals
{
"covered_lines": 839,
"num_statements": 1401,
"percent_covered": 52.908756889161054,
"missing_lines": 562,
"excluded_lines": 12,
"num_branches": 232,
"num_partial_branches": 7,
"covered_branches": 25,
"missing_branches": 207
}
A:
Also if you dont want to store it in a file for some reason, you can do
cov_score=$(coverage report | awk '$1 == "TOTAL" {print $NF+0}')
|
Get value of total coverage percentage from coverage.py using a regular expression (REGEX)
|
This question is basically on how to use regular expressions but I couldn't find any answer to it in a lot of very closely related questions.
I create coverage reports in a gitlab pipeline using coverage.py and py.test which look like the following piped into a file like coverage37.log:
-------------- generated xml file: /builds/utils/foo/report.xml --------------
---------- coverage: platform linux, python 3.7.11-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
foo/tests/bar1.py 52 0 100%
...
foo/tests/bar2.py 0 0 100%
-------------------------------------------------
TOTAL 431 5 99%
======================= 102 passed, 9 warnings in 4.35s ========================
Now I want to create a badge for the total coverage, i.e. here the 99% value, and only get the number (99) in order to assign it to a variable. This variable can then be used to create a flexible coverage badge using the anybadge package.
My naive approach would be something like:
COVERAGE_SCORE=$(sed -n 'what to put here' coverage37.log)
echo "Coverage is $COVERAGE_SCORE"
Note that I know that gitlab, github, etc. offer specific functionalities to create badges automatically. But I want to create it manually in order to have more control and create the badge per branch.
Any hints are welcome. Thanks in advance!
|
[
"It is easier to use awk here:\ncov_score=$(awk '$1 == \"TOTAL\" {print $NF+0}' coverage37.log)\n\nHere $1 == \"TOTAL\" matches a line with first word as TOTAL and print $NF+0 prints number part of last field.\n",
"rather than get approximate values from non-machine-readable outputs you'd be best to use coverage's programmatic apis, either coverage xml or coverage json\nhere's an example using the json output (note I send it to /dev/stdout, by default it goes to coverage.json)\n$ coverage json -o /dev/stdout | jq .totals.percent_covered\n52.908756889161054\n\nthere's even more information there if you need it:\n$ coverage json -o /dev/stdout | jq .totals\n{\n \"covered_lines\": 839,\n \"num_statements\": 1401,\n \"percent_covered\": 52.908756889161054,\n \"missing_lines\": 562,\n \"excluded_lines\": 12,\n \"num_branches\": 232,\n \"num_partial_branches\": 7,\n \"covered_branches\": 25,\n \"missing_branches\": 207\n}\n\n",
"Also if you dont want to store it in a file for some reason, you can do\ncov_score=$(coverage report | awk '$1 == \"TOTAL\" {print $NF+0}')\n"
] |
[
4,
4,
0
] |
[] |
[] |
[
"coverage.py",
"grep",
"pytest",
"sed"
] |
stackoverflow_0068563978_coverage.py_grep_pytest_sed.txt
|
Q:
Cannot Open GitHub on Windows 11
I cannot open GitHub I already tried a lot of options like changing DNS and a bunch of other stuff still I cannot open GitHub on windows 11. Please help as this is a very new kind of problem for me.
When I edited DNS to 8.8.8.8 it showed me the second type of error.
A:
Look for the hosts file at C:\Windows\System32\drivers\etc. If it contains any entry related to GitHub, remove it and save the file in admin mode. Voila.
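After editing the hosts file, flushing the local DNS cache makes the change take effect immediately (standard Windows command):
ipconfig /flushdns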
|
Cannot Open GitHub on Windows 11
|
I cannot open GitHub I already tried a lot of options like changing DNS and a bunch of other stuff still I cannot open GitHub on windows 11. Please help as this is a very new kind of problem for me.
When I edited DNS to 8.8.8.8 it showed me the second type of error.
|
[
"Look for hosts file at C:\\Windows\\System32\\drivers\\etc. If it contains any entry related to github remove and save in admin mode. Voila.\n"
] |
[
0
] |
[] |
[] |
[
"github",
"github_for_windows",
"github_pages"
] |
stackoverflow_0074331395_github_github_for_windows_github_pages.txt
|
Q:
How to return a string (or break) from forEach loop when a condition is met
I'm using Angular 14. I checked this and several other articles but none was useful at all. I've a piece of code in a ternary operator. The code is like this:
var finalValue = this.recordData.myArray.length === 0
? 'empty record'
: this.recordData.myArray.forEach(
(item: any) => {
if (item.stockKeepingStatus == 'A') {
this.thing = 'found';
// break; // syntax error
return this.thing;
} else {
this.thing = 'all are inactive';
}
return this.thing;
}
)
console.log(finalValue);
The logic is simple. If myArray is empty, then just say 'empty record'. Otherwise we will iterate through the myArray array and check which item has stockKeepingStatus as active, i.e. 'A'. The moment we find our first 'A' we will just break the loop and return 'found'. If none of the stockKeepingStatus values is 'A', then we will just say 'all are inactive'. I'm getting finalValue as undefined. Please point out my mistake.
A:
You can use find
find method returns the first element in the provided array that satisfies the provided testing function. If no values satisfy the testing function, undefined is returned
const item = this.recordData.myArray.find((item: any) => item.stockKeepingStatus == 'A');
const finalValue = this.recordData.myArray.length === 0 ? 'empty record' : item ? 'found' : 'all are inactive';
console.log(finalValue);
A:
I would suggest you use the find method instead of forEach, to return the item with stockKeepingStatus 'A', or undefined otherwise.
This method works the way you described - it checks for the first entry that meets a condition, stops going through the loop once an element is found, then returns the found element or undefined otherwise.
const finalValue = this.recordData.myArray.length === 0
? 'empty record'
: (this.recordData.myArray.find(item => item.stockKeepingStatus === 'A') ?
'found' : 'all are inactive'
);
console.log(finalValue);
A:
forEach() returns undefined.
You need to use some(), which checks whether at least one element in the array passes the test implemented by the provided function (every() would only report 'found' if all items were active).
var finalValue = this.recordData.myArray.length === 0 ?
  'empty record' :
  (this.recordData.myArray.some((item: any) => item.stockKeepingStatus == 'A') ?
    'found' :
    'all are inactive');
console.log(finalValue);
A:
You cannot break out of a forEach callback; use a for...of loop instead, where the break keyword works, to return a string when a condition is met.
For example, let's say you want to iterate over an array of strings and return the first string that starts with the letter 'A'. You can use the following code to do this:
const strings = ["Alpha", "Bravo", "Charlie"];

let resultString = '';

for (const string of strings) {
  if (string.startsWith('A')) {
    resultString = string;
    break;
  }
}

console.log(resultString); // 'Alpha'
|
How to return a string (or break) from forEach loop when a condition is met
|
I'm using Angular 14. I checked this and several other articles but none was useful at all. I've a piece of code in a ternary operator. The code is like this:
var finalValue = this.recordData.myArray.length === 0
? 'empty record'
: this.recordData.myArray.forEach(
(item: any) => {
if (item.stockKeepingStatus == 'A') {
this.thing = 'found';
// break; // syntax error
return this.thing;
} else {
this.thing = 'all are inactive';
}
return this.thing;
}
)
console.log(finalValue);
The logic is simple. If myArray is empty, then just say 'empty record'. Otherwise we will iterate through the myArray array and check which item has stockKeepingStatus as active, i.e. 'A'. The moment we find our first 'A' we will just break the loop and return 'found'. If none of the stockKeepingStatus values was 'A' then we will just say 'all are inactive'. I'm getting finalValue undefined. Please point out my mistake.
|
[
"You can use find\nfind method returns the first element in the provided array that satisfies the provided testing function. If no values satisfy the testing function, undefined is returned\nconst item = this.recordData.myArray.find((item: any) => item.stockKeepingStatus == 'A');\n\nconst finalValue = this.recordData.myArray.length === 0 ? 'empty record' : item ? 'found' : 'all are inactive';\n\nconsole.log(finalValue);\n\n\n",
"I would suggest you use find method instead of forEach to return an item with stockKeepingStatus as 'A' or undefined otherwise.\nThis method works the way you described - checks for first entry that meets a condition and stop going through a loop if an element found, than returns a found element or return undefined otherwise.\nconst finalValue = this.recordData.myArray.length === 0\n ? 'empty record'\n : (this.recordData.myArray.find(item => item.stockKeepingStatus === 'A') ?\n 'found' : 'all are inactive'\n );\n\n\nconsole.log(finalValue);\n\n",
"forEach() returns undefined.\nYou need to use every(), which checks whether all elements in the array pass the test implemented by the provided function.\n\n\nvar finalValue = this.recordData.myArray.length === 0 ?\n 'empty record' :\n (this.recordData.myArray.every((item: any) => item.stockKeepingStatus == 'A') ?\n 'found' :\n 'all are inactive');\n\nconsole.log(finalValue);\n\n\n\n",
"You can use the break keyword within a forEach loop to return a string when a condition is met.\nFor example, let's say you want to iterate over an array of strings and return the first string that starts with the letter 'A'. You can use the following code to do this:\nconst strings = [\"Alpha\", \"Bravo\", \"Charlie\"];\n\nlet resultString = '';\n\nstrings.forEach(string => {\n if (string.startsWith('A')) {\n resultString = string;\n break;\n }\n});\n\nconsole.log(resultString); // 'Alpha'\n\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"angular"
] |
stackoverflow_0074660254_angular.txt
|
Q:
Unexpected key "NSLocationWhenInUseUsageDescription"
I would like someone to help me solve this problem.
When I run the command ionic cap build ios I get this error, and when I open the project in Xcode the Info.plist file seems not to be visible.
`
[fatal] Error: Unexpected key "NSLocationWhenInUseUsageDescription" while parsing <dict/>.
[capacitor] at invariant (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:53:11)
[capacitor] at parsePlistXML (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:121:9)
[capacitor] at parsePlistXML (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:101:23)
[capacitor] at Object.parse (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:71:15)
[capacitor] at logiOSPlist (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/@capacitor/cli/dist/cordova.js:273:43)`
I have tried to look for information on Google and in different forums, but I can't find a solution to this problem. If someone understands this error and can guide me, I would appreciate it.
info.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>en</string>
<key>CFBundleDisplayName</key>
<string>Mascota Anuncios</string>
<key>CFBundleExecutable</key>
<string>$(EXECUTABLE_NAME)</string>
<key>CFBundleIdentifier</key>
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>$(PRODUCT_NAME)</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>1.0</string>
<key>CFBundleVersion</key>
<string>1</string>
<key>LSRequiresIPhoneOS</key>
<true/>
<key>UILaunchStoryboardName</key>
<string>LaunchScreen</string>
<key>UIMainStoryboardFile</key>
<string>Main</string>
<key>UIRequiredDeviceCapabilities</key>
<array>
<string>armv7</string>
</array>
<key>UISupportedInterfaceOrientations</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UISupportedInterfaceOrientations~ipad</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
<string>UIInterfaceOrientationPortraitUpsideDown</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UIViewControllerBasedStatusBarAppearance</key>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Necesitamos acceder a la ubicacion para obtener una info precisa de su ubicacion actual</string>
<true/>
</dict>
</plist>
A:
It seems that this solved the error; only now these warnings appear:
</array>
<key>UIViewControllerBasedStatusBarAppearance</key>
<true/>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Necesitamos acceder a la ubicación para obtener una información precisa de su ubicación actual</string>
</dict>
[capacitor] [warn] Configuration required for cordova-plugin-x-socialsharing.
[capacitor] Add the following to Info.plist:
[capacitor] <key>NSPhotoLibraryAddUsageDescription</key>
[capacitor] <string>$PHOTO_LIBRARY_ADD_USAGE_DESCRIPTION</string>
[capacitor]
[capacitor] [warn] Configuration required for cordova-plugin-x-socialsharing.
[capacitor] Add the following to Info.plist:
[capacitor] <key>NSPhotoLibraryUsageDescription</key>
[capacitor] <string>$PHOTO_LIBRARY_USAGE_DESCRIPTION</string>
[capacitor]
[capacitor] [warn] Configuration required for ionic-plugin-deeplinks.
[capacitor] Add the following to Info.plist:
[capacitor] <key>CFBundleURLTypes</key>
[capacitor] <array>
[capacitor] <dict>
[capacitor] <key>CFBundleURLSchemes</key>
[capacitor] <array>
[capacitor] <string>$URL_SCHEME</string>
[capacitor] </array>
[capacitor] </dict>
[capacitor] </array>
[capacitor]
[capacitor] [warn] Configuration required for cordova-plugin-googleplus.
[capacitor] Add the following to Info.plist:
[capacitor] <key>CFBundleURLTypes</key>
[capacitor] <array>
[capacitor] <dict>
[capacitor] <key>CFBundleTypeRole</key>
[capacitor] <key>CFBundleURLName</key>
[capacitor] <key>CFBundleURLSchemes</key>
[capacitor] <string>Editor</string>
[capacitor] <string>REVERSED_CLIENT_ID</string>
[capacitor] <array>
[capacitor] <string>$REVERSED_CLIENT_ID</string>
[capacitor] </array>
[capacitor] </dict>
[capacitor] </array>
[capacitor]
[capacitor] [warn] Configuration required for onesignal-cordova-plugin.
[capacitor] Add the following to Info.plist:
[capacitor] <key>UIBackgroundModes</key>
[capacitor] <array>
[capacitor] <string>remote-notification</string>
[capacitor] </array>
[capacitor]
I have to add those lines to Info.plist as the messages say, by copy and paste. I don't know much about this Info.plist file: between which tags should I put them if they are to be copied?
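For reference, a minimal sketch of where such entries go (the key names come from the warnings above; the exact placement is the only assumption): each <key> and its value are pasted as direct children of the top-level <dict> element of Info.plist, alongside the existing keys, for example:
<dict>
    <!-- existing keys ... -->
    <key>NSPhotoLibraryAddUsageDescription</key>
    <string>$PHOTO_LIBRARY_ADD_USAGE_DESCRIPTION</string>
    <key>UIBackgroundModes</key>
    <array>
        <string>remote-notification</string>
    </array>
</dict>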
|
Unexpected key "NSLocationWhenInUseUsageDescription"
|
I would like someone to help me solve this problem.
When I run the command ionic cap build ios I get this error, and when I open the project in Xcode the Info.plist file seems not to be visible.
`
[fatal] Error: Unexpected key "NSLocationWhenInUseUsageDescription" while parsing <dict/>.
[capacitor] at invariant (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:53:11)
[capacitor] at parsePlistXML (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:121:9)
[capacitor] at parsePlistXML (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:101:23)
[capacitor] at Object.parse (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/plist/lib/parse.js:71:15)
[capacitor] at logiOSPlist (/Users/ivanpala/Desktop/mascotas-anuncios-main/node_modules/@capacitor/cli/dist/cordova.js:273:43)`
I have tried to look for information on Google and in different forums, but I can't find a solution to this problem. If someone understands this error and can guide me, I would appreciate it.
info.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>en</string>
<key>CFBundleDisplayName</key>
<string>Mascota Anuncios</string>
<key>CFBundleExecutable</key>
<string>$(EXECUTABLE_NAME)</string>
<key>CFBundleIdentifier</key>
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>$(PRODUCT_NAME)</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>1.0</string>
<key>CFBundleVersion</key>
<string>1</string>
<key>LSRequiresIPhoneOS</key>
<true/>
<key>UILaunchStoryboardName</key>
<string>LaunchScreen</string>
<key>UIMainStoryboardFile</key>
<string>Main</string>
<key>UIRequiredDeviceCapabilities</key>
<array>
<string>armv7</string>
</array>
<key>UISupportedInterfaceOrientations</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UISupportedInterfaceOrientations~ipad</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
<string>UIInterfaceOrientationPortraitUpsideDown</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UIViewControllerBasedStatusBarAppearance</key>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Necesitamos acceder a la ubicacion para obtener una info precisa de su ubicacion actual</string>
<true/>
</dict>
</plist>
|
[
"it seems that this solved the error, only now these warnings appear.\n</array> <key>UIViewControllerBasedStatusBarAppearance</key> <true/> <key>NSLocationWhenInUseUsageDescription</key> <string>Necesitamos acceder a la ubicación para obtener una información precisa de su ubicación actual</string> </dict>\n\n[capacitor] [warn] Configuration required for cordova-plugin-x-socialsharing.\n[capacitor] Add the following to Info.plist:\n[capacitor] <key>NSPhotoLibraryAddUsageDescription</key>\n[capacitor] <string>$PHOTO_LIBRARY_ADD_USAGE_DESCRIPTION</string>\n[capacitor] \n[capacitor] [warn] Configuration required for cordova-plugin-x-socialsharing.\n[capacitor] Add the following to Info.plist:\n[capacitor] <key>NSPhotoLibraryUsageDescription</key>\n[capacitor] <string>$PHOTO_LIBRARY_USAGE_DESCRIPTION</string>\n[capacitor] \n[capacitor] [warn] Configuration required for ionic-plugin-deeplinks.\n[capacitor] Add the following to Info.plist:\n[capacitor] <key>CFBundleURLTypes</key>\n[capacitor] <array>\n[capacitor] <dict>\n[capacitor] <key>CFBundleURLSchemes</key>\n[capacitor] <array>\n[capacitor] <string>$URL_SCHEME</string>\n[capacitor] </array>\n[capacitor] </dict>\n[capacitor] </array>\n[capacitor] \n[capacitor] [warn] Configuration required for cordova-plugin-googleplus.\n[capacitor] Add the following to Info.plist:\n[capacitor] <key>CFBundleURLTypes</key>\n[capacitor] <array>\n[capacitor] <dict>\n[capacitor] <key>CFBundleTypeRole</key>\n[capacitor] <key>CFBundleURLName</key>\n[capacitor] <key>CFBundleURLSchemes</key>\n[capacitor] <string>Editor</string>\n[capacitor] <string>REVERSED_CLIENT_ID</string>\n[capacitor] <array>\n[capacitor] <string>$REVERSED_CLIENT_ID</string>\n[capacitor] </array>\n[capacitor] </dict>\n[capacitor] </array>\n[capacitor] \n[capacitor] [warn] Configuration required for onesignal-cordova-plugin.\n[capacitor] Add the following to Info.plist:\n[capacitor] <key>UIBackgroundModes</key>\n[capacitor] <array>\n[capacitor] <string>remote-notification</string>\n[capacitor] </array>\n[capacitor] \n\n I have to add those lines in info.plist as the message says, copy and paste I don't know much about this info.plist file between which labels I should put if they are to be copied.\n\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"ionic6",
"ios",
"xcode"
] |
stackoverflow_0074658065_angular_ionic6_ios_xcode.txt
|
Q:
Returning value of an api request in react-native
I am currently trying to make API requests for an app, but I have an issue:
When I make the api request, the first one returns "undefined" but the second one (and all others) returns what I want.
Here is my code :
exports.makeRequest = function(infos, setInfos) {
fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.json())
.then((data) => {
setInfos(data)
})
.catch((error) => {
console.error('Error:', error);
})
return infos
}
And it is called this way in my app:
const [infos, setInfos] = useState([])
makeRequest(infos, setInfos);
I found out what happens: on the first request, it doesn't go into ".then((data) => {setInfos(data)})".
Thank you.
A:
It looks like the issue is that you are returning the infos state variable before it has been updated with the data from the API request. Since the API request is asynchronous, the makeRequest function will return the initial value of infos before the request has completed and setInfos has been called to update the state.
To fix this, you can return the promise chain from makeRequest and return the data inside the .then() callback, like this:
exports.makeRequest = function(infos, setInfos) {
  return fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then((response) => response.json())
    .then((data) => {
      setInfos(data);
      return data;
    })
    .catch((error) => {
      console.error('Error:', error);
    });
};
Because the function now returns a promise, callers can await makeRequest(...) (or chain .then()) to get the data after it has been fetched and setInfos has been called.
Another option is to use the async and await keywords to make the makeRequest function asynchronous and wait for the API request to complete before returning the updated value of infos, like this:
exports.makeRequest = async function(infos, setInfos) {
try {
const response = await fetch('https://jsonplaceholder.typicode.com/todos/1');
const data = await response.json();
setInfos(data);
return data;
} catch (error) {
console.error('Error:', error);
}
};
This will ensure that makeRequest returns the updated value of infos after the API request has completed.
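As a usage sketch (the component name and the './api' module path are hypothetical, not from the question), the async version can be called from useEffect so the state updates once the request resolves:
import React, { useEffect, useState } from 'react';
import { Text } from 'react-native';
import { makeRequest } from './api'; // hypothetical module path

export default function TodoScreen() {
  const [infos, setInfos] = useState([]);

  useEffect(() => {
    // Fire the request once on mount; setInfos updates state when it resolves.
    makeRequest([], setInfos);
  }, []);

  return <Text>{JSON.stringify(infos)}</Text>;
}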
|
Returning value of an api request in react-native
|
I am currently trying to make API requests for an app, but I have an issue:
When I make the api request, the first one returns "undefined" but the second one (and all others) returns what I want.
Here is my code :
exports.makeRequest = function(infos, setInfos) {
fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.json())
.then((data) => {
setInfos(data)
})
.catch((error) => {
console.error('Error:', error);
})
return infos
}
And it is called this way in my app:
const [infos, setInfos] = useState([])
makeRequest(infos, setInfos);
I found out what happens: on the first request, it doesn't go into ".then((data) => {setInfos(data)})".
Thank you.
|
[
"It looks like the issue is that you are returning the infos state variable before it has been updated with the data from the API request. Since the API request is asynchronous, the makeRequest function will return the initial value of infos before the request has completed and setInfos has been called to update the state.\nTo fix this, you can move the return statement inside the .then() callback of the API request, like this:\nexports.makeRequest = function(infos, setInfos) {\n fetch('https://jsonplaceholder.typicode.com/todos/1')\n .then((response) => response.json())\n .then((data) => {\n setInfos(data);\n return data;\n })\n .catch((error) => {\n console.error('Error:', error);\n });\n};\n\nThis will ensure that makeRequest returns the updated value of infos after it has been set by the API request.\nAnother option is to use the async and await keywords to make the makeRequest function asynchronous and wait for the API request to complete before returning the updated value of infos, like this:\nexports.makeRequest = async function(infos, setInfos) {\n try {\n const response = await fetch('https://jsonplaceholder.typicode.com/todos/1');\n const data = await response.json();\n setInfos(data);\n return data;\n } catch (error) {\n console.error('Error:', error);\n }\n};\n\nThis will ensure that makeRequest returns the updated value of infos after the API request has completed.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"react_native"
] |
stackoverflow_0074660936_javascript_react_native.txt
|
Q:
Laravel - Testing Auth0 User
I have a Laravel application with a standard User table that I'm implementing Auth0 login to. On login, a user record is created in the database with the given email.
I have a CustomUserRepository.php file:
<?php
namespace App\Repositories;
use App\Models\User;
use Illuminate\Contracts\Auth\Authenticatable;
class CustomUserRepository implements \Auth0\Laravel\Contract\Auth\User\Repository
{
public function fromSession(array $user): ?\Illuminate\Contracts\Auth\Authenticatable
{
return User::firstOrCreate(['email' => $user['email']]);
}
public function fromAccessToken(array $user): ?\Illuminate\Contracts\Auth\Authenticatable
{
// Similar to above. Used for stateless application types.
return null;
}
public function getUserByUserInfo(array $userinfo) : Authenticatable
{
$user = $this->upsertUser( $userinfo['profile'] );
return new Auth0User( $user->getAttributes(), $userinfo['accessToken'] );
}
protected function upsertUser($profile)
{
return User::firstOrCreate(
[
'sub' => $profile['sub']
],
[
'email' => $profile['email'] ?? '',
'name' => $profile['name'] ?? '',
]
);
}
}
And my auth.php file:
<?php
return [
'defaults' => [
'guard' => 'auth0',
'passwords' => 'users',
],
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
],
'auth0' => [
'driver' => 'auth0',
'provider' => 'auth0',
],
],
'providers' => [
'users' => [
'driver' => 'eloquent',
'model' => App\Models\User::class,
],
'auth0' => [
'driver' => 'auth0',
'repository' => App\Repositories\CustomUserRepository::class
],
],
The app works. I login with Auth0, the users are created, everything works entirely as expected, except for the testing.
$this->be(User::find(1));
$response = $this->get('/valid-url');
$response->assertStatus(200);
$response = $this->get('/another-valid-url');
$response->assertStatus(200);
In this case, PHPUnit seems to "forget" my login for the second get() request. The first one works fine, status 200, everything OK. With the second request (get or post) I always get a 302 back to the login page.
How do I resolve this?
A:
In your login function there should be the following, so that the session is regenerated after a successful login attempt:
if (auth()->attempt($formFields)) {
    $request->session()->regenerate();
}
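That said, since the default guard in auth.php is auth0, one thing worth trying in the test itself (an assumption on my part, not something the question confirms) is to authenticate against the session-based web guard explicitly so the login persists across requests:
// Pass the guard name explicitly so the session-based "web" guard
// is used instead of the default "auth0" guard.
$this->be(User::find(1), 'web');

$response = $this->get('/valid-url');
$response->assertStatus(200);

$response = $this->get('/another-valid-url');
$response->assertStatus(200);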
|
Laravel - Testing Auth0 User
|
I have a Laravel application with a standard User table that I'm implementing Auth0 login to. On login, a user record is created in the database with the given email.
I have a CustomUserRepository.php file:
<?php
namespace App\Repositories;
use App\Models\User;
use Illuminate\Contracts\Auth\Authenticatable;
class CustomUserRepository implements \Auth0\Laravel\Contract\Auth\User\Repository
{
public function fromSession(array $user): ?\Illuminate\Contracts\Auth\Authenticatable
{
return User::firstOrCreate(['email' => $user['email']]);
}
public function fromAccessToken(array $user): ?\Illuminate\Contracts\Auth\Authenticatable
{
// Similar to above. Used for stateless application types.
return null;
}
public function getUserByUserInfo(array $userinfo) : Authenticatable
{
$user = $this->upsertUser( $userinfo['profile'] );
return new Auth0User( $user->getAttributes(), $userinfo['accessToken'] );
}
protected function upsertUser($profile)
{
return User::firstOrCreate(
[
'sub' => $profile['sub']
],
[
'email' => $profile['email'] ?? '',
'name' => $profile['name'] ?? '',
]
);
}
}
And my auth.php file:
<?php
return [
'defaults' => [
'guard' => 'auth0',
'passwords' => 'users',
],
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
],
'auth0' => [
'driver' => 'auth0',
'provider' => 'auth0',
],
],
'providers' => [
'users' => [
'driver' => 'eloquent',
'model' => App\Models\User::class,
],
'auth0' => [
'driver' => 'auth0',
'repository' => App\Repositories\CustomUserRepository::class
],
],
The app works. I login with Auth0, the users are created, everything works entirely as expected, except for the testing.
$this->be(User::find(1));
$response = $this->get('/valid-url');
$response->assertStatus(200);
$response = $this->get('/another-valid-url');
$response->assertStatus(200);
In this case, PHPUnit seems to "forget" my login for the second get() request. The first one works fine, status 200, everything OK. With the second request (get or post) I always get a 302 back to the login page.
How do I resolve this?
|
[
"#on your login function there should be the following\n if(auth()->attempt($formFields)){\n $request->session()->regenerate();\n\n"
] |
[
0
] |
[] |
[] |
[
"auth0",
"laravel"
] |
stackoverflow_0074656730_auth0_laravel.txt
|
Q:
Can you avoid loading reusable components that another page already loaded them
So I have a page (let's call it Page1) that is loaded dynamically via React.lazy and uses several reusable components, and another page (Page2) that uses some of those reusable components, this page being loaded dynamically as well.
My question would be: is there a way to prevent Page2 from loading those reusable components again?
When I analyze the generated final bundle I see that both pages load the reusable components individually and I think ideally would be to just load them once and each new dynamically loaded page that uses those reusable components shouldn't fetch them again.
A:
Yes, there is a way to prevent Page2 from loading the reusable components again if they have already been loaded by Page1. This can be achieved by using the React.Suspense component to wrap the components that are being dynamically loaded by both Page1 and Page2.
The React.Suspense component allows you to specify a fallback component to render while the dynamic components are being loaded. If the dynamic components have already been loaded by another part of the application, the fallback component will not be rendered and the already-loaded components will be used instead.
Here is an example of how you can use the React.Suspense component to prevent Page2 from loading the reusable components again if they have already been loaded by Page1:
import React, { Suspense } from 'react';
import { BrowserRouter as Router, Switch, Route } from 'react-router-dom';
const Page1 = React.lazy(() => import('./Page1'));
const Page2 = React.lazy(() => import('./Page2'));
const FallbackComponent = () => <div>Loading...</div>;
const App = () => (
<Router>
<Switch>
<Route exact path="/page1">
<Suspense fallback={<FallbackComponent />}>
<Page1 />
</Suspense>
</Route>
<Route exact path="/page2">
<Suspense fallback={<FallbackComponent />}>
<Page2 />
</Suspense>
</Route>
</Switch>
</Router>
);
In this example, the FallbackComponent is used as the fallback component for the React.Suspense components that wrap Page1 and Page2. When Page1 is loaded, the reusable components will be loaded along with it, and when Page2 is loaded, it will not re-load the reusable components because they have already been loaded by Page1. The FallbackComponent will only be rendered while the dynamic components are being loaded, not when they are being re-used by another part of the application.
A:
Reusable components, like simple components, act as individual, distinct instances when used in different code files, so they will surely re-render again when used in other files.
|
Can you avoid loading reusable components that another page already loaded them
|
So I have a page (let's call it Page1) that is loaded dynamically via React.lazy and uses several reusable components, and another page (Page2) that uses some of those reusable components, this page being loaded dynamically as well.
My question would be: is there a way to prevent Page2 from loading those reusable components again?
When I analyze the generated final bundle I see that both pages load the reusable components individually and I think ideally would be to just load them once and each new dynamically loaded page that uses those reusable components shouldn't fetch them again.
|
[
"Yes, there is a way to prevent Page2 from loading the reusable components again if they have already been loaded by Page1. This can be achieved by using the React.Suspense component to wrap the components that are being dynamically loaded by both Page1 and Page2.\nThe React.Suspense component allows you to specify a fallback component to render while the dynamic components are being loaded. If the dynamic components have already been loaded by another part of the application, the fallback component will not be rendered and the already-loaded components will be used instead.\nHere is an example of how you can use the React.Suspense component to prevent Page2 from loading the reusable components again if they have already been loaded by Page1:\nimport React, { Suspense } from 'react';\nimport { BrowserRouter as Router, Switch, Route } from 'react-router-dom';\n\nconst Page1 = React.lazy(() => import('./Page1'));\nconst Page2 = React.lazy(() => import('./Page2'));\n\nconst FallbackComponent = () => <div>Loading...</div>;\n\nconst App = () => (\n <Router>\n <Switch>\n <Route exact path=\"/page1\">\n <Suspense fallback={<FallbackComponent />}>\n <Page1 />\n </Suspense>\n </Route>\n <Route exact path=\"/page2\">\n <Suspense fallback={<FallbackComponent />}>\n <Page2 />\n </Suspense>\n </Route>\n </Switch>\n </Router>\n);\n\nIn this example, the FallbackComponent is used as the fallback component for the React.Suspense components that wrap Page1 and Page2. When Page1 is loaded, the reusable components will be loaded along with it, and when Page2 is loaded, it will not re-load the reusable components because they have already been loaded by Page1. The FallbackComponent will only be rendered while the dynamic components are being loaded, not when they are being re-used by another part of the application.\n",
"Reusable components as like simple components act as individual distinctive when used in different code files. so they will surely re-render again when used in other files.\n"
] |
[
1,
0
] |
[] |
[] |
[
"react_router",
"react_router_dom",
"reactjs"
] |
stackoverflow_0074563231_react_router_react_router_dom_reactjs.txt
|
Q:
I need Update or Insert functionality in Kafka JDBC Sink Connector
Kafka JDBC Sink Connector
The Kafka JDBC sink connector provides 3 insert.mode options, but I need update-or-insert functionality together. Can anyone help with how to achieve this?
A:
upsert literally means both insert or update for existing keys that have already been inserted
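A minimal sketch of such a configuration (insert.mode, pk.mode and pk.fields are real settings of the Confluent JDBC sink; the topic name, connection URL and key columns here are placeholders):
{
  "name": "jdbc-sink-upsert",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "inventory",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "productid,storeid"
  }
}
With insert.mode=upsert the connector inserts rows for new keys and updates rows whose key already exists.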
A:
You can consider the following steps:
separating events into 2 topics on the source connector (a topic for inserts and a topic for updates)
processing these topics with independent sink connectors with different configurations.
|
I need Update or Insert functionality in Kafka JDBC Sink Connector
|
Kafka JDBC Sink Connector
The Kafka JDBC sink connector provides 3 insert.mode options, but I need update-or-insert functionality together. Can anyone help with how to achieve this?
|
[
"upsert literally means both insert or update for existing keys that have already been inserted\n",
"You can consider the following steps:\n\nseparation events among 2 topics on source connector (topic for inserts and topic for updates)\nprocesing thee topics with independent sink connectors with different configurations.\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"apache_kafka",
"apache_kafka_connect",
"jdbc"
] |
stackoverflow_0070421310_apache_kafka_apache_kafka_connect_jdbc.txt
|
Q:
how to fix java.lang.NoClassDefFoundError: org/springframework/boot/Bootstrapper?
I tried upgrading spring-boot from 2.5 to 2.6 and am facing the below issue; not sure what went wrong.
Below are some of the dependencies for which we specify a version; for all others we don't specify versions and they are taken from the parent.
<dependency>
<groupId>org.yaml</groupId>
<artifactId>snakeyaml</artifactId>
<version>1.31</version>
</dependency>
<dependencies>
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-dependencies</artifactId>
<version>2.3.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.12.89</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>bom</artifactId>
<version>2.16.67</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
20:27:56.606 [main] ERROR org.springframework.boot.SpringApplication - Application run failed
java.lang.NoClassDefFoundError: org/springframework/boot/Bootstrapper
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1013)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:862)
at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:760)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:681)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:639)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
I need a resolution for this error.
A:
Seems like this is a known issue for this version.
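A likely root cause, given this stack trace, is that spring-cloud-aws 2.3.x is built against Spring Boot 2.5, and org.springframework.boot.Bootstrapper is not available the same way on Boot 2.6. If that is the case here, bumping the BOM to a 2.4.x release of io.awspring.cloud (the line built for Boot 2.6) should fix it; a sketch of that change (the exact patch version is an assumption, check Maven Central for the latest):
<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-aws-dependencies</artifactId>
    <version>2.4.2</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>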
|
how to fix java.lang.NoClassDefFoundError: org/springframework/boot/Bootstrapper?
|
I tried upgrading spring-boot from 2.5 to 2.6 and am facing the below issue; not sure what went wrong.
Below are some of the dependencies for which we specify a version; for all others we don't specify versions and they are taken from the parent.
<dependency>
<groupId>org.yaml</groupId>
<artifactId>snakeyaml</artifactId>
<version>1.31</version>
</dependency>
<dependencies>
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-dependencies</artifactId>
<version>2.3.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.12.89</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>bom</artifactId>
<version>2.16.67</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
20:27:56.606 [main] ERROR org.springframework.boot.SpringApplication - Application run failed
java.lang.NoClassDefFoundError: org/springframework/boot/Bootstrapper
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1013)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:862)
at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:760)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:681)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:639)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
I need a resolution for this error.
|
[
"Seem like this is a known issue for this version\n"
] |
[
0
] |
[] |
[] |
[
"java",
"spring",
"spring_boot"
] |
stackoverflow_0074658355_java_spring_spring_boot.txt
|
Q:
Powershell Parallel Loop Not Recognizing List
I have a powershell script that is trying to get every AD group and their members. Since my real code is running a Get-ADUser on every user in every group, I am using parallel loops to save a good amount of time (side note: after testing I have found that using multiple Get-ADUser commands is typically faster than Get-ADGroupMember). However, I have noticed that I cannot view the members of a group when running a parallel loop. I have written some basic code for testing:
$Groups = Get-ADGroup -Filter * -Properties Created,Modified,Description,Members | select-object -first 50
# Loop A
$Groups | foreach-object {
$psitem.Members
}
# Loop B
$Groups | foreach-object -parallel {
$psitem.Members
}
For the test code above I can verify that $Groups does indeed have the Members property. The GetType() output is below:
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True False ADPropertyValueCollection System.Collections.CollectionBase
Loop A above prints every group member as expected; however, Loop B always returns nothing. Does anybody know why this may be? I would like to use the double parallel loops if possible, just to save a lot of time as this script will be running periodically.
My PS version is 7.2.7
A:
As stated in the comments, using $PSItem['Members'] does the trick. See GitHub Issue #14604 for details.
$Groups = Get-ADGroup -Filter * -Properties Created,Modified,Description,Members | select-object -first 50
$A = $Groups | foreach-object {
$psitem.Members
}
$B = $Groups | foreach-object -parallel {
$PSItem['Members']
}
echo "A: $($A.count) ----- B: $($B.count)"
|
Powershell Parallel Loop Not Recognizing List
|
I have a powershell script that is trying to get every AD group and their members. Since my real code is running a Get-ADUser on every user in every group, I am using parallel loops to save a good amount of time (side note: after testing I have found that using multiple Get-ADUser commands is typically faster than Get-ADGroupMember). However, I have noticed that I cannot view the members of a group when running a parallel loop. I have written some basic code for testing:
$Groups = Get-ADGroup -Filter * -Properties Created,Modified,Description,Members | select-object -first 50
# Loop A
$Groups | foreach-object {
$psitem.Members
}
# Loop B
$Groups | foreach-object -parallel {
$psitem.Members
}
For the test code above I can verify that $Groups does indeed have the Members property. The GetType() output is below:
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True False ADPropertyValueCollection System.Collections.CollectionBase
Loop A above prints every group member as expected; however, Loop B always returns nothing. Does anybody know why this may be? I would like to use the double parallel loops if possible, just to save a lot of time as this script will be running periodically.
My PS version is 7.2.7
|
[
"As stated in the comments, using $PSItem['Members'] does the trick. See GitHub Issue #14604 for details.\n$Groups = Get-ADGroup -Filter * -Properties Created,Modified,Description,Members | select-object -first 50\n$A = $Groups | foreach-object {\n $psitem.Members\n}\n$B = $Groups | foreach-object -parallel {\n $PSItem['Members']\n}\necho \"A: $($A.count) ----- B: $($B.count)\"\n\n"
] |
[
1
] |
[] |
[] |
[
"parallel.foreach",
"powershell"
] |
stackoverflow_0074657843_parallel.foreach_powershell.txt
|
Q:
I'm trying to install the hunspell package using pip, but it throws the following error:
Collecting hunspell
Using cached hunspell-0.5.5.tar.gz (34 kB)
Building wheels for collected packages: hunspell
Building wheel for hunspell (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\shikhar\AppData\Local\Temp\pip-wheel-5grngp_q'
cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd
Complete output (12 lines):
running bdist_wheel
running build
running build_ext
building 'hunspell' extension
creating build
creating build\temp.win-amd64-3.10
creating build\temp.win-amd64-3.10\Release
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC -IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
hunspell.cpp
hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2
ERROR: Failed building wheel for hunspell
Running setup.py clean for hunspell
Failed to build hunspell
Installing collected packages: hunspell
Running setup.py install for hunspell ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell'
cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd
Complete output (12 lines):
running install
running build
running build_ext
building 'hunspell' extension
creating build
creating build\temp.win-amd64-3.10
creating build\temp.win-amd64-3.10\Release
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC
-IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
hunspell.cpp
hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell' Check the logs for full command output.
A:
I tried an older version and it installed successfully.
pip install hunspell==0.3.4
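For context: the failing line in the log (fatal error C1083: Cannot open include file: 'hunspell.hxx') means the compiler could not find the native hunspell headers, so newer versions need a working MSVC toolchain plus the library sources. If pinning is acceptable, upgrading the build tooling first sometimes helps the pinned install go through cleanly (a minimal sketch; the commands assume the same Python 3.10 install as in the log):
python -m pip install --upgrade pip setuptools wheel
python -m pip install hunspell==0.3.4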
A:
Collecting cyhunspell
Using cached CyHunspell-1.3.4.tar.gz (2.7 MB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: cacheman>=2.0.6 in c:\users\abdul rehman\appdata\local\programs\python\python310\lib\site-packages (from cyhunspell) (2.0.8)
Requirement already satisfied: future>=0.16.0 in c:\users\abdul rehman\appdata\local\programs\python\python310\lib\site-packages (from cacheman>=2.0.6->cyhunspell) (0.18.2)
Requirement already satisfied: psutil>=2.1.0 in c:\users\abdul rehman\appdata\roaming\python\python310\site-packages (from cacheman>=2.0.6->cyhunspell) (5.9.4)
Requirement already satisfied: six>=1.10.0 in c:\users\abdul rehman\appdata\roaming\python\python310\site-packages (from cacheman>=2.0.6->cyhunspell) (1.16.0)
Building wheels for collected packages: cyhunspell
Building wheel for cyhunspell (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [40 lines of output]
C:\Users\Abdul Rehman\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\hunspell
copying hunspell\platform.py -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell_init_.py -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.pxd -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\thread.pxd -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.pyx -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\thread.hpp -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.cpp -> build\lib.win-amd64-cpython-310\hunspell
creating build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_AU.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_CA.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_GB.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_NZ.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_US.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_ZA.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\test.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_AU.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_CA.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_GB.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_NZ.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_US.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_ZA.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\test.dic -> build\lib.win-amd64-cpython-310\dictionaries
creating build\lib.win-amd64-cpython-310\libs
creating build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc11-x64.lib -> build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc11-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc14-x64.lib -> build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc14-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc
running build_ext
building 'hunspell.hunspell' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for cyhunspell
Running setup.py clean for cyhunspell
Failed to build cyhunspell
Installing collected packages: cyhunspell
Running setup.py install for cyhunspell ... error
error: subprocess-exited-with-error
× Running setup.py install for cyhunspell did not run successfully.
│ exit code: 1
╰─> [42 lines of output]
C:\Users\Abdul Rehman\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
running install
C:\Users\Abdul Rehman\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\hunspell
copying hunspell\platform.py -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell_init_.py -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.pxd -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\thread.pxd -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.pyx -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\thread.hpp -> build\lib.win-amd64-cpython-310\hunspell
copying hunspell\hunspell.cpp -> build\lib.win-amd64-cpython-310\hunspell
creating build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_AU.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_CA.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_GB.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_NZ.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_US.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_ZA.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\test.aff -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_AU.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_CA.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_GB.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_NZ.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_US.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\en_ZA.dic -> build\lib.win-amd64-cpython-310\dictionaries
copying dictionaries\test.dic -> build\lib.win-amd64-cpython-310\dictionaries
creating build\lib.win-amd64-cpython-310\libs
creating build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc11-x64.lib -> build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc11-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc14-x64.lib -> build\lib.win-amd64-cpython-310\libs\msvc
copying libs\msvc\libhunspell-msvc14-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc
running build_ext
building 'hunspell.hunspell' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> cyhunspell
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
|
I'm trying to install the hunspell package using pip, but it throws the following error:
|
Collecting hunspell
Using cached hunspell-0.5.5.tar.gz (34 kB)
Building wheels for collected packages: hunspell
Building wheel for hunspell (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\shikhar\AppData\Local\Temp\pip-wheel-5grngp_q'
cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd
Complete output (12 lines):
running bdist_wheel
running build
running build_ext
building 'hunspell' extension
creating build
creating build\temp.win-amd64-3.10
creating build\temp.win-amd64-3.10\Release
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC -IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
hunspell.cpp
hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2
ERROR: Failed building wheel for hunspell
Running setup.py clean for hunspell
Failed to build hunspell
Installing collected packages: hunspell
Running setup.py install for hunspell ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell'
cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd
Complete output (12 lines):
running install
running build
running build_ext
building 'hunspell' extension
creating build
creating build\temp.win-amd64-3.10
creating build\temp.win-amd64-3.10\Release
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC
-IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
hunspell.cpp
hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell' Check the logs for full command output.
|
[
"I tried an older version and successfully installed.\npip install hunspell==0.3.4\n\n",
"Collecting cyhunspell\nUsing cached CyHunspell-1.3.4.tar.gz (2.7 MB)\nPreparing metadata (setup.py) ... done\nRequirement already satisfied: cacheman>=2.0.6 in c:\\users\\abdul rehman\\appdata\\local\\programs\\python\\python310\\lib\\site-packages (from cyhunspell) (2.0.8)\nRequirement already satisfied: future>=0.16.0 in c:\\users\\abdul rehman\\appdata\\local\\programs\\python\\python310\\lib\\site-packages (from cacheman>=2.0.6->cyhunspell) (0.18.2)\nRequirement already satisfied: psutil>=2.1.0 in c:\\users\\abdul rehman\\appdata\\roaming\\python\\python310\\site-packages (from cacheman>=2.0.6->cyhunspell) (5.9.4)\nRequirement already satisfied: six>=1.10.0 in c:\\users\\abdul rehman\\appdata\\roaming\\python\\python310\\site-packages (from cacheman>=2.0.6->cyhunspell) (1.16.0)\nBuilding wheels for collected packages: cyhunspell\nBuilding wheel for cyhunspell (setup.py) ... error\nerror: subprocess-exited-with-error\n× python setup.py bdist_wheel did not run successfully.\n│ exit code: 1\n╰─> [40 lines of output]\nC:\\Users\\Abdul Rehman\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\setuptools\\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead\nwarnings.warn(\nrunning bdist_wheel\nrunning build\nrunning build_py\ncreating build\ncreating build\\lib.win-amd64-cpython-310\ncreating build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\platform.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell_init_.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pyx -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.hpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncreating build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\test.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\test.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncreating build\\lib.win-amd64-cpython-310\\libs\ncreating build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc11-x64.lib -> 
build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc11-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x64.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\nrunning build_ext\nbuilding 'hunspell.hunspell' extension\nerror: Microsoft Visual C++ 14.0 or greater is required. Get it with \"Microsoft C++ Build Tools\": https://visualstudio.microsoft.com/visual-cpp-build-tools/\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nERROR: Failed building wheel for cyhunspell\nRunning setup.py clean for cyhunspell\nFailed to build cyhunspell\nInstalling collected packages: cyhunspell\nRunning setup.py install for cyhunspell ... error\nerror: subprocess-exited-with-error\n× Running setup.py install for cyhunspell did not run successfully.\n│ exit code: 1\n╰─> [42 lines of output]\nC:\\Users\\Abdul Rehman\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\setuptools\\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead\nwarnings.warn(\nrunning install\nC:\\Users\\Abdul Rehman\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\setuptools\\command\\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.\nwarnings.warn(\nrunning build\nrunning build_py\ncreating build\ncreating build\\lib.win-amd64-cpython-310\ncreating build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\platform.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell_init_.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pyx -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.hpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncreating build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\test.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\test.dic -> 
build\\lib.win-amd64-cpython-310\\dictionaries\ncreating build\\lib.win-amd64-cpython-310\\libs\ncreating build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc11-x64.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc11-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x64.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\nrunning build_ext\nbuilding 'hunspell.hunspell' extension\nerror: Microsoft Visual C++ 14.0 or greater is required. Get it with \"Microsoft C++ Build Tools\": https://visualstudio.microsoft.com/visual-cpp-build-tools/\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nerror: legacy-install-failure\n× Encountered error while trying to install package.\n╰─> cyhunspell\nnote: This is an issue with the package mentioned above, not pip.\nhint: See above for output from the failure.\n"
] |
[
0,
0
] |
[] |
[] |
[
"hunspell",
"python"
] |
stackoverflow_0071396413_hunspell_python.txt
|
Q:
C# Console.WriteLine but text contains ""
I want to print text with Console.WriteLine, but the text itself contains double quotes ("").
Example:
Console.WriteLine("example: "this is an example" example");
How can I avoid this problem?
I tried single quotes ('') and other symbols, but that didn't work.
A:
Use the backslash character to escape the quotation marks, as shown:
Console.WriteLine("example: \"this is an example\" example");
Escaping with a backslash is how many languages mark special characters; here it keeps the quotation mark from ending the string literal prematurely.
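For completeness, here is a minimal sketch of two alternative ways to write the same string without backslash escapes (the raw string literal form assumes C# 11 or later):
// Verbatim string: double each embedded quote instead of escaping it.
Console.WriteLine(@"example: ""this is an example"" example");
// Raw string literal (C# 11+): embedded quotes need no escaping at all.
Console.WriteLine("""example: "this is an example" example""");
All three variants print the same output: example: "this is an example" example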
|
C# Console.WriteLine but text contains ""
|
I want to print text with Console.WriteLine, but the text itself contains double quotes ("").
Example:
Console.WriteLine("example: "this is an example" example");
How can I avoid this problem?
I tried single quotes ('') and other symbols, but that didn't work.
|
[
"Use the backslash character to mark the quotations, as shown:\nConsole.WriteLine(\"example: \\\"this is an example\\\" example\");\n\nThis is used in languages to mark special characters, in this case marking the quotation so that it doesn't end the string prematurely.\n"
] |
[
1
] |
[] |
[] |
[
"c#",
"console.writeline"
] |
stackoverflow_0074660963_c#_console.writeline.txt
|