Q:
I am trying to print the ages greater than 18 but it says Uncaught TypeError: age
Uncaught TypeError: age
Refer to the image above.
It should print the ages greater than 18, but it shows the error instead.
A:
It's probably because of the <= in your for loop.
For i = 4, there is no arr[4], so it's undefined.
Replace the <= with just < and it should work.
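For illustration, a minimal sketch of the corrected loop (the array name and values here are assumptions, since the original code was only shown in an image):
// hypothetical data; the point is the loop bound
const ages = [20, 15, 32, 18];
for (let i = 0; i < ages.length; i++) { // '<' instead of '<=': the last valid index is length - 1
  if (ages[i] > 18) {
    console.log(ages[i]);
  }
}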
Q:
Importing SVG in svg.js
I have created a little sandbox to test this out; according to the docs I should be able to import an SVG with svg.js like this: https://playcode.io/1024624
mounted() {
  this.$nextTick(() => {
    if(this.svg) {
      this.paper = SVG(this.svg).addTo('#paper');
    } else {
      this.paper = SVG('paper');
    }
  });
}
If you look in the console it throws an error, so that can't be the correct way of doing it. I have managed to import it using the following: https://playcode.io/1024624?v=2
mounted() {
  this.$nextTick(() => {
    if(this.svg) {
      this.paper = SVG('paper');
      this.paper.svg(this.svg, true);
    } else {
      this.paper = SVG('paper');
    }
  });
}
But if you inspect the SVG, it inserts the SVG inside another SVG, which means every time it is saved and reloaded the image gets bigger, and there are multiple elements with the same ID, which I believe is why I can't query the elements correctly.
Any help is appreciated.
A:
I have tried everything to get this to import the SVG as one, but I don't think it's possible. I have, however, figured out a solution that does the above. Instead of passing the SVG as one to the server to save and then trying to reload it, I send over all the direct descendants and store them in JSON format. On load I then loop through all the children and add them back to a fresh instance of SVG.
Save:
...
elements: this.paper.children().map(child => { // map (not forEach) so the resulting array is actually assigned
  return {
    elm: child.svg(),
    id: child.node.id
  }
});
...
Load:
$children.forEach(child => {
  this.paper.svg(child.elm);
});
Q:
_tkinter.TclError: image "score6" doesn't exist
Hello, I've been trying to solve this problem but can't find anything; I tried dictionaries and exec. How can I use a string value as a variable name? When I define a variable name in a string and try to make a button with that image, it shows the error _tkinter.TclError: image "score6" doesn't exist, but if I manually type in the image variable name, the error doesn't show.
img = 'score' + str(correct) #here I make the variable name #the scores can be from 0-9
self.rez = Button(window, relief="sunken", image=img, bd=0, bg='#cecece',activebackground='#cecece')
self.rez.place(x=520, y=330)
#this is where images are defined(this is outside the class)
score0 = ImageTk.PhotoImage(Image.open("scores/09.png"))
score1 = ImageTk.PhotoImage(Image.open("scores/19.png"))
score2 = ImageTk.PhotoImage(Image.open("scores/29.png"))
score3 = ImageTk.PhotoImage(Image.open("scores/39.png"))
score4 = ImageTk.PhotoImage(Image.open("scores/49.png"))
score5 = ImageTk.PhotoImage(Image.open("scores/59.png"))
So how can I use a string value as a variable name?
A:
You could do this using eval
img = eval('score' + str(correct))
but this is dangerous if correct is provided by the user. A better approach is to use a list
images = [ImageTk.PhotoImage(Image.open("scores/09.png")),
          ImageTk.PhotoImage(Image.open("scores/19.png")),
          ImageTk.PhotoImage(Image.open("scores/29.png")),
          ImageTk.PhotoImage(Image.open("scores/39.png")),
          ImageTk.PhotoImage(Image.open("scores/49.png")),
          ImageTk.PhotoImage(Image.open("scores/59.png"))]
img = images[correct]
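Since the question mentions having tried dictionaries, a dict works the same way; here is a minimal sketch of that approach (file names are taken from the question, and the 0-5 range is an assumption based on the snippet):
# hypothetical sketch: map each score to its PhotoImage
images = {n: ImageTk.PhotoImage(Image.open(f"scores/{n}9.png")) for n in range(6)}
img = images[correct]  # a KeyError here means no image was defined for that score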
A:
Firstly, you'll probably want to use pathlib to work with the absolute paths to your image files:
from pathlib import Path
Then I think it might make more sense to put your image filenames into a list...
image_dir = Path(r'C:\<path>\<to>\scores') # the folder containing the images
images = [ # list of individual image file names
    "09.png",
    "19.png",
    "29.png",
    "39.png",
    "49.png",
    "59.png",
    ...
] # etc.
And then define a function that can handle fetching these images as needed
def set_image(correct): # I assume 'correct' is an integer
    img = ImageTk.PhotoImage(
        Image.open(
            # open the correct image (-1 to accommodate zero-indexing)
            image_dir.joinpath(images[correct - 1])
        )
    )
    return img
Then you can update your button's image like so, for example:
self.rez.configure(image=img)
Q:
How to add search by another field in many2one?
Hi, I have a customer field whose default search is by name, and I want to add a search by barcode to that field as well.
I have tried adding barcode (partner_id.barcode) to the domain as below, but it still doesn't work (model = sale.order):
@api.model
def _name_search(self, name, args=None, operator='ilike', limit=100, name_get_uid=None):
    if self._context.get('sale_show_partner_name'):
        if operator == 'ilike' and not (name or '').strip():
            domain = []
        elif operator in ('ilike', 'like', '=', '=like', '=ilike'):
            domain = expression.AND([
                args or [],
                ['|', '|', ('name', operator, name), ('partner_id.name', operator, name), ('partner_id.barcode', operator, name)]
            ])
        return self._search(domain, limit=limit, access_rights_uid=name_get_uid)
    return super(SaleOrder, self)._name_search(name, args=args, operator=operator, limit=limit, name_get_uid=name_get_uid)
I have also tried the following in the res.partner model. It can search customers by barcode, but not by name:
@api.model
def name_search(self, name, args=None, operator='ilike', limit=100):
    if not self.env.context.get('display_barcode', True):
        return super(ResPartnerInherit, self).name_search(name, args, operator, limit)
    else:
        args = args or []
        recs = self.browse()
        if not recs:
            recs = self.search([('barcode', operator, name)] + args, limit=limit)
        return recs.name_get()
What should I do if I want to find a customer by both name and barcode?
If anyone knows, please let me know.
Best regards
A:
The barcode field in res.partner is a property field stored in the ir.property model, which is named Company Properties in Odoo; you can access it with developer mode from Settings -> Technical -> Company Properties.
The _name_search method of res.partner lets you search any Many2one partner relation field in any model by one of these fields: display_name, email, reference and vat. You can override it to add barcode as below:
from odoo import api, models
from odoo.osv.expression import get_unaccent_wrapper
import re


class ResPartner(models.Model):
    _inherit = 'res.partner'

    @api.model
    def _name_search(self, name, args=None, operator='ilike', limit=100, name_get_uid=None):
        self = self.with_user(name_get_uid) if name_get_uid else self
        # as the implementation is in SQL, we force the recompute of fields if necessary
        self.recompute(['display_name'])
        self.flush()
        print(args)
        if args is None:
            args = []
        order_by_rank = self.env.context.get('res_partner_search_mode')
        if (name or order_by_rank) and operator in ('=', 'ilike', '=ilike', 'like', '=like'):
            self.check_access_rights('read')
            where_query = self._where_calc(args)
            self._apply_ir_rules(where_query, 'read')
            from_clause, where_clause, where_clause_params = where_query.get_sql()
            from_str = from_clause if from_clause else 'res_partner'
            where_str = where_clause and (" WHERE %s AND " % where_clause) or ' WHERE '
            print(where_clause_params)
            # search on the name of the contacts and of its company
            search_name = name
            if operator in ('ilike', 'like'):
                search_name = '%%%s%%' % name
            if operator in ('=ilike', '=like'):
                operator = operator[1:]

            unaccent = get_unaccent_wrapper(self.env.cr)

            fields = self._get_name_search_order_by_fields()

            query = """SELECT res_partner.id
                         FROM {from_str}
                    LEFT JOIN ir_property trust_property ON (
                        trust_property.res_id = 'res.partner,'|| {from_str}."id"
                        AND trust_property.name = 'barcode')
                        {where} ({email} {operator} {percent}
                         OR {display_name} {operator} {percent}
                         OR {reference} {operator} {percent}
                         OR {barcode} {operator} {percent}
                         OR {vat} {operator} {percent})
                         -- don't panic, trust postgres bitmap
                     ORDER BY {fields} {display_name} {operator} {percent} desc,
                              {display_name}
                    """.format(from_str=from_str,
                               fields=fields,
                               where=where_str,
                               operator=operator,
                               email=unaccent('res_partner.email'),
                               display_name=unaccent('res_partner.display_name'),
                               reference=unaccent('res_partner.ref'),
                               barcode=unaccent('trust_property.value_text'),
                               percent=unaccent('%s'),
                               vat=unaccent('res_partner.vat'), )

            where_clause_params += [search_name] * 4  # for email / display_name / reference / barcode
            where_clause_params += [re.sub('[^a-zA-Z0-9\-\.]+', '', search_name) or None]  # for vat
            where_clause_params += [search_name]  # for order by
            if limit:
                query += ' limit %s'
                where_clause_params.append(limit)
            print(query)
            print(where_clause_params)
            self.env.cr.execute(query, where_clause_params)
            return [row[0] for row in self.env.cr.fetchall()]

        return super(ResPartner, self)._name_search(name, args, operator=operator, limit=limit, name_get_uid=name_get_uid)
Q:
TS2532: Object is possibly 'undefined' inside an array
I'm beginning my path in TypeScript and have a problem I can't solve. I'm trying to access one index of an array inside the return of an API call. In the console the value is printed perfectly, but this error message appears.
This is the interface I made:
interface Data {
  list: [{
    main: {
      temp: number;
      temp_min: number;
      temp_max: number;
    }
    weather: [{
      main: string;
      description: string;
    }]
    clouds: [{
      all: number;
    }]
    dt_txt: string;
  }]
  dt: number;
}
And that is the console.log I'm using:
data?.list[1].main.temp_min
This is the error that appears:
TS2532: Object is possibly 'undefined'.
109 |
110 | <>
> 111 | {console.log(data?.list[1].main.temp_min)}
| ^^^^^^^^^^^^^
112 | {console.log(data?.list[3]?.main)}
113 |
114 | </>
And that is the return value from the console.log:
Could you guys help me?
A:
Access the items using optional chaining (?.):
data?.list?.[1]?.main.temp_min
Also, update the interface like this
interface Data {
  list: {
    main: {
      temp: number;
      temp_min: number;
      temp_max: number;
    }
    weather: {
      main: string;
      description: string;
    }[]
    clouds: {
      all: number;
    }[]
    dt_txt: string;
  }[]
  dt: number;
}
A:
Within an array, it is not guaranteed that you will have something at index 1.
Scenarios:
[undefined, undefined, <object>]
[<object>]
When possible, best practice is not to access arrays by index directly but to use methods like .map or .forEach to programmatically go through each element.
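For instance, a minimal sketch of iterating instead of indexing (data is from the question; the entry name is mine):
// no fixed-index access, so no element can be missing
data?.list.forEach((entry) => {
  console.log(entry.main.temp_min);
});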
A:
In TS you have to make sure that the data at index i in an array is not undefined or null before accessing the properties at that index.
So you have 2 options to fix your issue:
Add ? to check if your data is undefined or not:
console.log(data?.list[1]?.main?.temp_min);
Add if condition:
if(data && !!data.list && data.list[1] && data.list[1].main) {
  console.log(data.list[1].main.temp_min);
}
Q:
Deployment exception on Payara
I have been sitting with a problem deploying an application to Payara 4.1.1.171.
The deployment goes through up to a point, where it fails with an exception.
Stack Trace below:
Exception while loading the app : CDI deployment failure:Exception List with 2 exceptions:
Exception 0 :
org.jboss.weld.exceptions.DeploymentException: WELD-001408: Unsatisfied dependencies for type IterableProvider<ComponentInvocationHandler> with qualifiers @Default
at injection point [BackedAnnotatedParameter] Parameter 1 of [BackedAnnotatedConstructor] @Inject private org.glassfish.api.invocation.InvocationManagerImpl(@Optional IterableProvider<ComponentInvocationHandler>)
at org.glassfish.api.invocation.InvocationManagerImpl.<init>(InvocationManagerImpl.java:91)
And lower down in the trace.
Exception 1 :
org.jboss.weld.exceptions.DeploymentException: WELD-001408: Unsatisfied dependencies for type Logger with qualifiers @Default
at injection point [BackedAnnotatedField] @Inject org.glassfish.api.admin.AdminCommandLock.logger
at org.glassfish.api.admin.AdminCommandLock.logger(AdminCommandLock.java:0)
I have read a lot about CDI and possible solutions to the problem, but none currently addresses this issue.
The application is currently deployed on another server where it is running, but for some odd reason it will not deploy to this server. I have also upgraded as well as downgraded the server, all with exactly the same problem.
A:
I managed to resolve the issue. It was caused by a custom thread pool executor service I had packaged before. This package contained the glassfish-api library, causing a conflict during class loading; removing it resolved the issue.
A:
I looked at the test project I opened, keeping in mind that it's loading something that causes Payara to throw this error. There are actually two dependencies involved. They are:
<dependency>
  <groupId>jakarta.servlet</groupId>
  <artifactId>jakarta.servlet-api</artifactId>
  <version>6.0.0</version>
  <scope>provided</scope>
</dependency>

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-web</artifactId>
  <version>6.0.0</version>
</dependency>
When only the first one, jakarta.servlet-api, is included, Payara deploys the app just fine. However, when the second one, spring-web, is included, Payara throws the error "CDI is not available."
This is the simplest web app I could think of. It consists of one Java file:
package com.test;
import java.io.PrintWriter;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
/**
 * @author Glenn Barnard
 *
 * @version 3.0, 10-27-2020
 */
public class TestServlet extends HttpServlet {

    private static final long serialVersionUID = 4106307982613351195L;

    public void init() throws ServletException {
        super.init();
    }

    @Override
    public void destroy() {
    }

    @Override
    protected void service(HttpServletRequest request, HttpServletResponse response) {

        String responseText = "Welcome to the Test Servlet";

        response.setContentType("text/html");
        PrintWriter out = null;
        try {
            out = response.getWriter();
            out.println(responseText);
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        finally {
            if (out != null)
                out.close();
        }
    }
}
And the POM file:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>bw</groupId>
  <artifactId>testwebproject</artifactId>
  <version>Test</version>
  <name>TestWeb</name>
  <packaging>war</packaging>
  <url>http://maven.apache.org</url>

  <dependencies>
    <dependency>
      <groupId>jakarta.servlet</groupId>
      <artifactId>jakarta.servlet-api</artifactId>
      <version>6.0.0</version>
      <scope>provided</scope>
    </dependency>

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>6.0.0</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
          <!-- or whatever version you use -->
          <source>17</source>
          <target>17</target>
        </configuration>
      </plugin>

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>3.3.2</version>
      </plugin>
    </plugins>
  </build>
</project>
Given that Spring's web MVC framework uses injection, it makes sense that it needs CDI.
Q:
Warning, FIREBASE_CONFIG environment variable is missing. Initializing firebase-admin will fail
When I follow the Firebase documentation, I cannot connect to Firebase.
Has anybody faced this issue?
https://firebase.google.com/docs/firestore/quickstart
https://firebase.google.com/docs/firestore/manage-data/add-data
My code:
index.js
var admin = require("firebase-admin");
var serviceAccount = require("./key/serviceAccountKey.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://sampleLink.firebaseio.com",
});

const functions = require("firebase-functions");

var db = admin.firestore();

var data = {
  name: "Los Angeles",
  state: "CA",
  country: "USA",
};

// Add a new document in collection "cities" with ID 'LA'
var setDoc = db.collection("cities").doc("LA").set(data);
setDoc;
package.json
{
  "name": "functions",
  "description": "Cloud Functions for Firebase",
  "scripts": {
    "serve": "firebase serve --only functions",
    "shell": "firebase functions:shell",
    "start": "npm run shell",
    "deploy": "firebase deploy --only functions",
    "logs": "firebase functions:log"
  },
  "dependencies": {
    "firebase": "^5.7.3",
    "firebase-admin": "~6.0.0",
    "firebase-functions": "^2.1.0"
  },
  "private": true
}
console
tktktk:functions dev$ node index.js
Warning, FIREBASE_CONFIG environment variable is missing. Initializing firebase-admin will fail
The behavior for Date objects stored in Firestore is going to change
AND YOUR APP MAY BREAK.
To hide this warning and ensure your app does not break, you need to add the
following code to your app before calling any other Cloud Firestore methods:
const firestore = new Firestore();
const settings = {/* your settings... */ timestampsInSnapshots: true};
firestore.settings(settings);
With this change, timestamps stored in Cloud Firestore will be read back as
Firebase Timestamp objects instead of as system Date objects. So you will also
need to update code expecting a Date to instead expect a Timestamp. For example:
// Old:
const date = snapshot.get('created_at');
// New:
const timestamp = snapshot.get('created_at');
const date = timestamp.toDate();
Please audit all existing usages of Date when you enable the new behavior. In a
future release, the behavior will change to the new behavior, so if you do not
follow these steps, YOUR APP MAY BREAK.
My env is below:
$ firebase --version
6.3.0
$ node -v
v8.12.0
$ npm -v
6.4.1
A:
That FIREBASE_CONFIG warning hints that the path to the JSON is missing (or wrong).
Either set up FIREBASE_CONFIG, as it demands, or set GOOGLE_APPLICATION_CREDENTIALS as an environment variable and run gcloud auth application-default login; then you can use admin.credential.applicationDefault() instead of admin.credential.cert(serviceAccount).
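A minimal sketch of that second option (assuming GOOGLE_APPLICATION_CREDENTIALS already points at a valid service account key file):
var admin = require("firebase-admin");
// credentials are picked up from the GOOGLE_APPLICATION_CREDENTIALS environment variable
admin.initializeApp({
  credential: admin.credential.applicationDefault(),
});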
A:
The proper way to run a cloud function locally is via the Functions shell (see https://firebase.google.com/docs/functions/local-emulator). If you want to run it directly like $ node index.js, then you have to set the required environment variables yourself, or otherwise firebase-functions is going to complain.
However, note that the above message is just a warning; your code runs despite it. If you're not seeing any data written to Firestore, that's probably because you're not handling the promise returned by the Firestore set() method.
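For example, a small sketch of handling that promise (db and data are as defined in the question's index.js; the writeCity wrapper name is hypothetical):
// awaiting set() lets the write finish and surfaces any error
async function writeCity() {
  try {
    await db.collection("cities").doc("LA").set(data);
    console.log("Document written");
  } catch (err) {
    console.error("Write failed:", err);
  }
}
writeCity();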
A:
For me, the problem was this line:
const functions = require('firebase-functions');
I just deleted it from index.js. I don't use Firebase Cloud Functions anymore, but I had forgotten to remove that line.
This is where my problem was coming from.
So maybe you, too, imported this package in a file that doesn't need it.
A:
I had the exact same issue.
It turns out it has to do with Node version 10.
I removed Node 10 and went back to Node 8, and everything worked like a charm...
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
Just hit
firebase deploy --only functions
in Node v8.
A:
About your problem: just now, thanks to your code, I was able to fix my own.
What I did:
const admin = require('firebase-admin');
var serviceAccount = require("./serviceAccount.json");
admin.initializeApp({ credential: admin.credential.cert(serviceAccount) });
const db = admin.firestore();
I'm using a Cloud Firestore db, not a Realtime Database, so I don't pass the second argument to initializeApp; I just put the db config in the environment file.
export const environment = {
  production: false,
  firebaseConfig : {
    apiKey: 'XXXXXXXXXXX',
    authDomain: 'XXXXXXXXXX',
    databaseURL: 'XXXXXXXXXX',
    projectId: 'XXXXXXXXX',
    storageBucket: 'XXXXXXXXXXX',
    messagingSenderId: 'XXXXXXXXXXX',
    appId: 'XXXXXXXXXXXX',
    measurementId: 'XXXXXXX'
  }
};
A:
This worked for me:
admin.initializeApp({
  credential: admin.credential.cert({
    "type": "service_account",
    "project_id": "took from Firebase Generated Private Key",
    "private_key_id": "took from Firebase Generated Private Key",
    "private_key": "took from Firebase Generated Private Key",
    "client_email": "took from Firebase Generated Private Key",
    "client_id": "took from Firebase Generated Private Key",
    "auth_uri": "took from Firebase Generated Private Key",
    "token_uri": "took from Firebase Generated Private Key",
    "auth_provider_x509_cert_url": "took from Firebase Generated Private Key",
    "client_x509_cert_url": "took from Firebase Generated Private Key"
  }),
  databaseURL: "https://test-dia.firebaseio.com"
});
A:
The correct way to remove the warning as of 2021 is to add firebase-functions-test to your dev-dependencies, and call it in your jest config like so:
// jest.preset.js
const test = require('firebase-functions-test')();
// ...
Read more about testing in "offline mode" here:
https://firebase.google.com/docs/functions/unit-testing#offline-mode
A:
If you are trying to run a Cloud Function locally and want to use the cloud authentication and database:
Steps:
1 - Go to your Firebase console and copy your credentials data as JSON, for example:
{
  "apiKey": "<YOUR-API-KEY>",
  "authDomain": "<YOUR-authDomain>",
  "databaseURL": "<YOUR-databaseURL>",
  "projectId": "...",
  "storageBucket": "...",
  "messagingSenderId": "...",
  "appId": "..."
}
2 - In your project folder, save this as a JSON file, 'my_credentials.json' for example.
3 - Run Function Locally: (note the cat)
$ FIREBASE_CONFIG=$(cat my_credentials.json) npm run local
Note (!):
Add the 'local' script to your package.json file:
"scripts": {
...
"local": "npx functions-framework --target=<YOUR-APP-NAME> --signature-type=<YOUR-FUNCTION-CALL-METHOD-EXAMPLE-http>"
},
Q:
Gnu Make: how to use the pattern rule
I have this sample (simplified) makefile:
all: a a.e b b.e
.SUFFIXES:
a a.e:
	touch $@
b: a
	ln -sf $(notdir $<) $@
b.e: a.e
	ln -sf $(notdir $<) $@
clean:
	rm -f a* b*
and it works.
I would like to use Pattern Rules as follows:
all: a a.e b b.e
.SUFFIXES:
a a.e:
	touch $@
b%: a%
	ln -sf $(notdir $<) $@
clean:
	rm -f a* b*
but it fails:
$ make
touch a
touch a.e
make: *** No rule to make target 'b', needed by 'all'. Stop.
I cannot figure out why, and I don't know how to make it work.
A:
Does this info from the manual give you your answer:
The target is a pattern for matching file names; the ‘%’ matches any nonempty substring, while other characters match only themselves.
(emphasis added)? Because ‘%’ must match a nonempty substring, the pattern b% can match b.e (stem .e) but can never match plain b, whose stem would be empty; hence the "No rule to make target 'b'" error.
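A sketch of one way around this, keeping the pattern rule for the suffixed files plus an explicit rule for the bare b (an illustration, not the only possible fix):
all: a a.e b b.e
.SUFFIXES:
a a.e:
	touch $@
# '%' needs at least one character, so 'b%' only ever covers 'b.e' here
b%: a%
	ln -sf $(notdir $<) $@
# plain 'b' would have an empty stem, so it keeps its own explicit rule
b: a
	ln -sf $(notdir $<) $@
clean:
	rm -f a* b*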
Q:
Clicking a button does not take action
test('test', async ({ page }) => {
  const [page0] = await Promise.all([
    page.goto('https://*********'),
    page.locator('#popUpCookies').click(),
    page.getByRole('button',{name:'ACEPTAR'}).click(),
  ]);
});
I was trying to make an automation step to click (accept) a button in a cookies pop-up. But the action page.getByRole('button',{name:'ACEPTAR'}).click() just puts the cursor over the button and does not click the element. I appreciate your help in advance.
A:
As these are dependent steps/actions, they need to be executed sequentially, not in parallel. Please try the code below:
test('test', async ({ page }) => {
  await page.goto('https://*********')
  await page.locator('#popUpCookies').click()
  await page.getByRole('button',{name:'ACEPTAR'}).click()
})
Q:
Simple recursive function f#
This is an example from my professor's book, but when I try to run it in F# it doesn't work. Can someone point out what is wrong here?
let rec readNonZeroValue a =
    let a = int (System.Console.ReadLine ())
    match a with
    | 0 ->
        printfn "Error: zero value entered. Try again"
        readNonZeroValue ()
    | _ ->
        a

printfn "Please enter a non-zero value"
let b = readNonZeroValue ()
printfn "You typed: %A" b
I am a beginner, so sorry for asking such a simple question.
The point of the code is simply for the user to be able to type in a number and have it printed to the terminal, for any number other than 0.
I have another, very similar piece of code that actually works; the only difference is that it takes a string instead:
let rec progLang a =
    printfn "Please enter the name of a programming language"
    let a = string (System.Console.ReadLine())
    match a with
    | "Fsharp" ->
        printfn "%A is cool" a
        progLang ()
    | "quit" -> a
    | _ ->
        printfn "I don't know %A" a
        progLang ()
A:
Let's start with the function that works.
let rec progLang a =
    printfn "Please enter the name of a programming language"
    let a = string (System.Console.ReadLine())
    match a with
    | "Fsharp" ->
        printfn "%A is cool" a
        progLang ()
    | "quit" -> a
    | _ ->
        printfn "I don't know %A" a
        progLang ()
"let rec progLang a" is a bit odd because the variable is inferred to be of type Unit, so the code may as well say
let rec progLang () =
    printfn "Please enter the name of a programming language"
    ... etc ...
So for the one that doesn't work, you need to ensure the indentation is correct. Also, I think the code you have pasted is a function plus some code that calls the function; the function should be this (if you're learning, it may be an idea to put the type of the function in a comment, which I do quite a lot).
// Unit -> int
let rec readNonZeroValue a =
    let a = int (System.Console.ReadLine ())
    match a with
    | 0 ->
        printfn "Error: zero value entered. Try again"
        readNonZeroValue ()
    | _ ->
        a
So it's a function that takes unit and returns an int, and you could potentially call it with code like this:
// unit -> unit
let codeThatCalls () =
    printfn "Please enter a non-zero value"
    let b = readNonZeroValue ()
    printfn "You typed: %A" b
The moral of the story is that INDENTATION is important and can completely change the meaning of your code.
Q:
How to get header text and body text inline with each other
I have two headers and two paragraphs, but the second header and its paragraph end up below the first instead of beside it:
Header
waspeen: 3304.07
ananas: 24
appel: 3962.0
Header
waspeen: 3304.07
ananas: 30
appel: 3962.0
So how do I get everything in line with each other,
so that it will look like:
Header Header
waspeen: 3304.07 waspeen: 3304.07
ananas: 30 ananas: 24
appel: 3962.0 appel: 3962.0
This is the CSS:
#gallery-text-left {
  /* Added */
  width: 50%;
}

#gallery-paragraph-1 {
  margin-right: 50px;
  border-radius: 50px;
  padding-left: 50px;
  display: inline;
  /* Added */
}

#gallery-paragraph-2 {
  margin-right: 50px;
  border-radius: 4px;
  padding-left: 50px;
  display: inline;
}

#gallery-text-right {
  /* Added */
  width: 50%;
}
And the HTML:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
  <link rel="stylesheet" type="text/css" href="{% static 'main/css/custom-style.css' %}" />
  <link rel="stylesheet" type="text/css" href="{% static 'main/css/bootstrap.css' %}" />
</head>
<body>
  <div class="container center">
    <div id="gallery-text">
      <div id="gallery-text-left" style="float:left">
        <p id="gallery-text-quote-left" style="font-family:Century Gothic; color:#006600"><b> Header</b></p>
        <p id="gallery-paragraph-1" style="font-family:Georgia">
          {% for key, value in content %}
          <span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br>
          {% endfor %}
        </p>
      </div>
      <div id="gallery-text-right" style="float:left">
        <p id="gallery-text-quote-right" style="font-family:Century Gothic; color:#006600"><b>Header</b></p>
        <p id="gallery-paragraph-2" style="font-family:Georgia">
          {% for key, value in content_excel %}
          <span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br>
          {% endfor %}
        </p>
      </div>
    </div>
  </div>
</body>
</html>
A:
You are looking for flexbox (alternatively you could use grid); the following code should work:
#gallery-text-left {
  /* Added */
  width: 50%;
  display: flex;
  flex-direction: column;
}

#gallery-text {
  /* Added */
  width: 100%;
  display: flex;
}

#gallery-paragraph-1 {
  border-radius: 50px;
  display: flex;
  flex-direction: column;
}

#gallery-paragraph-2 {
  border-radius: 4px;
  display: flex;
  flex-direction: column;
}

#gallery-text-right {
  /* Added */
  display: flex;
  flex-direction: column;
  width: 50%;
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<link rel="stylesheet" type="text/css" href="{% static 'main/css/custom-style.css' %}" />
<link rel="stylesheet" type="text/css" href="{% static 'main/css/bootstrap.css' %}" />
</head>
<body>
<div class="container center">
<div id="gallery-text">
<div id="gallery-text-left">
<p id="gallery-text-quote-left" style="font-family:Century Gothic; color:#006600"><b> Header</b></p>
<p id="gallery-paragraph-1" style="font-family:Georgia">
{% for key, value in content %}
<span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br> {% endfor %}
</p>
</div>
<div id="gallery-text-right">
<p id="gallery-text-quote-right" style="font-family:Century Gothic; color:#006600"><b>Header</b></p>
<p id="gallery-paragraph-2" style="font-family:Georgia">
{% for key, value in content_excel %}
<span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br> {% endfor %}
</p>
</div>
</div>
</div>
</body>
</html>
|
How to get header text and body text inline with each other
|
I have two headers and two paragraphs. But the first item of the paragraph is outlined over the other lines:
Header
waspeen: 3304.07
ananas: 24
appel: 3962.0
Header
waspeen: 3304.07
ananas: 30
appel: 3962.0
So how to get everything in line with each other?
so that it will look like:
Header Header
waspeen: 3304.07 waspeen: 3304.07
ananas: 30 ananas: 24
appel: 3962.0 appel: 3962.0
this is css:
#gallery-text-left {
/* Added */
width: 50%;
}
#gallery-paragraph-1 {
margin-right: 50px;
border-radius: 50px;
padding-left: 50px;
display: inline;
/* Added */
}
#gallery-paragraph-2 {
margin-right: 50px;
border-radius: 4px;
padding-left: 50px;
display: inline;
}
#gallery-text-right {
/* Added */
width: 50%;
}
and html:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<link rel="stylesheet" type="text/css" href="{% static 'main/css/custom-style.css' %}" />
<link rel="stylesheet" type="text/css" href="{% static 'main/css/bootstrap.css' %}" />
</head>
<body>
<div class="container center">
<div id="gallery-text">
<div id="gallery-text-left" style="float:left">
<p id="gallery-text-quote-left" style="font-family:Century Gothic; color:#006600"><b> Header</b></p>
<p id="gallery-paragraph-1" style="font-family:Georgia">
{% for key, value in content %}
<span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br>
{% endfor %}
</p>
</div>
<div id="gallery-text-right" style="float:left">
<p id="gallery-text-quote-right" style="font-family:Century Gothic; color:#006600"><b>Header</b></p>
<p id="gallery-paragraph-2" style="font-family:Georgia">
{% for key, value in content_excel %}
<span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br>
{% endfor %}
</p>
</div>
</div>
</div>
</body>
</html>
|
[
"you are looking for flexbox, or alternatively you could use grid, the following code should work.\n\n\n#gallery-text-left {\n /* Added */\n width: 50%;\n display: flex;\n flex-direction: column;\n}\n\n#gallery-text {\n /* Added */\n width: 100%;\n display: flex;\n}\n\n#gallery-paragraph-1 {\n border-radius: 50px;\n display: flex;\n flex-direction: column;\n}\n\n#gallery-paragraph-2 {\n border-radius: 4px;\n display: flex;\n flex-direction: column;\n}\n\n#gallery-text-right {\n /* Added */\n display: flex;\n flex-direction: column;\n width: 50%;\n}\n<!DOCTYPE html>\n<html lang=\"en\">\n\n<head>\n <meta charset=\"UTF-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>Document</title>\n <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js\"></script>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"{% static 'main/css/custom-style.css' %}\" />\n <link rel=\"stylesheet\" type=\"text/css\" href=\"{% static 'main/css/bootstrap.css' %}\" />\n</head>\n\n<body>\n\n <div class=\"container center\">\n <div id=\"gallery-text\">\n\n <div id=\"gallery-text-left\">\n <p id=\"gallery-text-quote-left\" style=\"font-family:Century Gothic; color:#006600\"><b> Header</b></p>\n\n <p id=\"gallery-paragraph-1\" style=\"font-family:Georgia\">\n\n {% for key, value in content %}\n <span {% if key in diff_set %} style=\"color: red;\" {% endif %}>{{ key }}: {{value}}</span><br> {% endfor %}\n </p>\n </div>\n\n\n <div id=\"gallery-text-right\" style=\"float:left\">\n <p id=\"gallery-text-quote-right\" style=\"font-family:Century Gothic; color:#006600\"><b>Header</b></p>\n <p id=\"gallery-paragraph-2\" style=\"font-family:Georgia\">\n {% for key, value in content_excel %}\n <span {% if key in diff_set %} style=\"color: red;\" {% endif %}>{{ key }}: {{value}}</span><br> {% endfor %}\n </p>\n </div>\n </div>\n </div>\n</body>\n\n</html>\n\n\n\n"
] |
[
1
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0074657870_css_html.txt
|
Q:
Gcc collect2: fatal error: cannot find 'ld'
I'm going through the tutorial on making an OS on http://wiki.osdev.org/Bare_Bones. When I try to link boot.o and kernel.o using this command: i686-elf-gcc -T linker.ld -o myos.bin -ffreestanding -O2 -nostdlib boot.o kernel.o -lgcc , I just get this error:
collect2: fatal error: cannot find 'ld'
compilation terminated.
I just installed a fresh Ubuntu 15.10 with gcc-5.2.1 and binutils-2.25.1.
I have searched the internet for answers but nothing helped.
A:
I also once got the same error during a pentest while I was trying to compile my exploit on the victim server.
In my case, the directory where the "ld" program was located had not been defined in the PATH environment variable, so I simply added it.
eg. export PATH=$PATH:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin
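For the cross-compiler setup in the question, the missing program is more likely the target-specific i686-elf-ld rather than the system ld. A quick check might look like this (the install prefix $HOME/opt/cross is an assumption; use whatever prefix you built binutils with):
# Verify the cross linker is installed and visible to the shell
which i686-elf-ld || echo "i686-elf-ld not found in PATH"
# Add the cross-toolchain bin directory to PATH (prefix is an assumption)
export PATH="$HOME/opt/cross/bin:$PATH"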
A:
I had this error while hacking a remote machine and trying to use gcc to compile an exploit on the victim machine.
I simply copied the program ld to /tmp/, the working directory where I was compiling my exploit exploit.c, by running
cp /usr/bin/ld /tmp/ld
followed by the original gcc compile command, and the compile worked.
A:
I searched a lot to fix this issue and nothing worked. Finally, I uninstalled MinGW and reinstalled it, edited the environment variables again, and then uninstalled and reinstalled the VS Code extensions; after that it worked.
A:
May I have a little attention from anyone here who could help me with set-up issues of Visual Studio Code on Windows 10 and with configuring my setup values?
I installed extensions such as Microsoft C/C++ IntelliSense and Extension Pack, Code Runner, and the MSYS2 mingw64 compiler. After building a task, it could not build or generate an "exe" file for the "cpp" file to be compiled. On pre-launch task execution, it keeps warning like this: "The preLaunchTask terminated with exit code -1" (negative 1), but there is "No problem" detailed regarding my cpp program.
Another thing: the TERMINAL details that "g++.exe: fatal error: cannot execute 'as': CreateProcess: No such file or directory; compilation terminated."
And lastly, when I RUN the program, it reports "'g++' is not recognized as an internal or external command, operable program or batch file." Also: collect2.exe: fatal error: cannot find 'ld'
compilation terminated.
|
Gcc collect2: fatal error: cannot find 'ld'
|
I'm going through the tutorial on making an OS on http://wiki.osdev.org/Bare_Bones. When I try to link boot.o and kernel.o using this command: i686-elf-gcc -T linker.ld -o myos.bin -ffreestanding -O2 -nostdlib boot.o kernel.o -lgcc , I just get this error:
collect2: fatal error: cannot find 'ld'
compilation terminated.
I just installed a fresh Ubuntu 15.10 with gcc-5.2.1 and binutils-2.25.1.
I have searched the internet for answers but nothing helped.
|
[
"I also got once the same error during a pentest while I was trying to compile my exploit on the victim server.\nIn my case, the directory where the \"ld\" program was located had not been defined in the PATH environment variable, so I simply added it.\neg. export PATH=$PATH:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin\n",
"I had this error while hacking a remote machine and trying to use gcc to compile an exploit on the victim machine.\nI simply copied the program ld to /tmp/, the working directory where i was compiling my exploit exploit.c by running \ncp /usr/bin/ld /tmp/ld \nfollowed by the original gcc compile command and the compile worked.\n",
"I searched a lot to fix this issue and nothing worked but lastly i uninstalled MinGw and reinstalled it then did edited the environment variables again and then uninstalled and reinstalled vs code extensions then it worked.\n",
"May i have a little attention from anyone here who could help me regarding set-up issues of Visual Studio Code on Windows 10 and configure my set up values.\nI installed EXTENSIONS such as Microsoft C/C++ intelliSense and EXTENSION Pack and Code Runner, MSYS2 mingw64 compiler. After building a task, it could not build or generate \"exe\" file of the \"cpp\" file to be compiled. On prelaunch task execution, it keeps warning like this, \"The preLaunchTask terminated with exit code - 1\" which is negative 1, but there is \"No problem\" detailed regarding my cpp program.\nAnother thing, TERMINAL details that \"g++.exe: fatal error: cannot execute 'as' :CreateProcess:No such file directory compilation terminated.\nAnd lastly, when i RUN the program, it detailed 'g++' is not recognized as an internal or external command, operable program or batch file.Also, collect2. exe: fatal error : cannot find 'ld'\ncompilation terminated.\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"c",
"cross_compiling",
"gcc",
"ubuntu_15.10"
] |
stackoverflow_0035970824_c_cross_compiling_gcc_ubuntu_15.10.txt
|
Q:
Navigate to screen from push notification flutter_local_notification package
I am using the flutter_local_notification package to handle notifications from a third-party server (not Firebase Cloud Messaging). Because I am using Firebase but not Firebase Messaging, I am using the onSelectNotification function of the flutter_local_notification package.
This is the function that I pass to onSelectNotification:
static _selectNotification(String payload, StreamChatClient client, RemoteMessage message) {
debugPrint('notification payload: $payload');
if(payload.contains('livestream')) {
Utils.db.getLiveRoom(payload.split(":")[1]).then((liveRoom) {
Navigator.push(
NavigationService.navigatorKey.currentContext!,
MaterialPageRoute<void>(builder: (context) => LiveRoomChat(liveRoom: liveRoom)),
);
});
}
else {
List<String> ids = message.data['channel_id'].toString().split('_');
String receiverId = '';
if(ids[0] == Utils.user?.uid) {
receiverId = ids[1];
}
else {
receiverId = ids[0];
}
Navigator.push(
NavigationService.navigatorKey.currentContext!,
MaterialPageRoute<void>(builder: (context) => MessageApi(
sourceType: Utils.friends.containsKey(receiverId) ? SourceType.friends : SourceType.justMet,
receiverId: receiverId,
channelId: payload.split(":")[1],
streamToken: Utils.user?.streamToken ?? '',
client: client
)),
);
}
}
I have a global navigator key which I have defined in a NavigationService class, and I also assign this navigator key in main.dart. The notification handling above works on iOS, but it does not work on Android because NavigationService.navigatorKey.currentContext is always null on Android. Does anyone know why this is the case on Android, and what is the way to handle it?
A:
The issue you are experiencing with the NavigationService.navigatorKey.currentContext being null on Android is likely due to the fact that the Navigator widget is not available when the onSelectNotification callback is executed.
In Flutter, the Navigator widget is responsible for managing the navigation stack and providing access to the current context of the app. When the onSelectNotification callback is executed, the Navigator widget may not be available yet, and therefore the currentContext property will be null.
To fix this issue, you can try using the Navigator.pushNamed method instead of the Navigator.push method. The pushNamed method allows you to navigate to a named route, which is defined in the MaterialApp widget in your main.dart file. This way, you do not need to rely on the Navigator widget being available in the onSelectNotification callback, and you can navigate to the desired screen using the named route instead.
Here is an example of how you can use the pushNamed method to navigate to a named route from the onSelectNotification callback:
static _selectNotification(String payload, StreamChatClient client, RemoteMessage message) {
debugPrint('notification payload: $payload');
if(payload.contains('livestream')) {
Utils.db.getLiveRoom(payload.split(":")[1]).then((liveRoom) {
Navigator.pushNamed(
NavigationService.navigatorKey.currentContext!,
'/live-room-chat',
arguments: liveRoom,
);
});
}
else {
List<String> ids = message.data['channel_id'].toString().split('_');
String receiverId = '';
if(ids[0] == Utils.user?.uid) {
receiverId = ids[1];
}
else {
receiverId = ids[0];
}
Navigator.pushNamed(
NavigationService.navigatorKey.currentContext!,
'/message-api',
arguments: {
        'sourceType': Utils.friends.containsKey(receiverId) ? SourceType.friends : SourceType.justMet,
        // remaining arguments reconstructed from the MessageApi call in the question
        'receiverId': receiverId,
        'channelId': payload.split(":")[1],
        'streamToken': Utils.user?.streamToken ?? '',
        'client': client,
      },
    );
  }
}
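For pushNamed to resolve these names, the routes must also be registered on the MaterialApp that owns the global navigator key. A minimal sketch of that registration (the route names and the LiveRoom argument mirror the snippet above; the rest is an assumption, not a required API):
MaterialApp(
  navigatorKey: NavigationService.navigatorKey,
  onGenerateRoute: (settings) {
    // '/live-room-chat' expects a LiveRoom instance as its argument
    if (settings.name == '/live-room-chat') {
      final liveRoom = settings.arguments as LiveRoom;
      return MaterialPageRoute(builder: (_) => LiveRoomChat(liveRoom: liveRoom));
    }
    return null; // unhandled names fall through to onUnknownRoute
  },
);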
|
Navigate to screen from push notification flutter_local_notification package
|
I am using the flutter_local_notification package to handle notifications from a third-party server (not Firebase Cloud Messaging). Because I am using Firebase but not Firebase Messaging, I am using the onSelectNotification function of the flutter_local_notification package.
This is the function that I pass to onSelectNotification:
static _selectNotification(String payload, StreamChatClient client, RemoteMessage message) {
debugPrint('notification payload: $payload');
if(payload.contains('livestream')) {
Utils.db.getLiveRoom(payload.split(":")[1]).then((liveRoom) {
Navigator.push(
NavigationService.navigatorKey.currentContext!,
MaterialPageRoute<void>(builder: (context) => LiveRoomChat(liveRoom: liveRoom)),
);
});
}
else {
List<String> ids = message.data['channel_id'].toString().split('_');
String receiverId = '';
if(ids[0] == Utils.user?.uid) {
receiverId = ids[1];
}
else {
receiverId = ids[0];
}
Navigator.push(
NavigationService.navigatorKey.currentContext!,
MaterialPageRoute<void>(builder: (context) => MessageApi(
sourceType: Utils.friends.containsKey(receiverId) ? SourceType.friends : SourceType.justMet,
receiverId: receiverId,
channelId: payload.split(":")[1],
streamToken: Utils.user?.streamToken ?? '',
client: client
)),
);
}
}
I have a global navigator key which I have defined in a NavigationService class, and I also assign this navigator key in main.dart. The notification handling above works on iOS, but it does not work on Android because NavigationService.navigatorKey.currentContext is always null on Android. Does anyone know why this is the case on Android, and what is the way to handle it?
|
[
"The issue you are experiencing with the NavigationService.navigatorKey.currentContext being null on Android is likely due to the fact that the Navigator widget is not available when the onSelectNotification callback is executed.\nIn Flutter, the Navigator widget is responsible for managing the navigation stack and providing access to the current context of the app. When the onSelectNotification callback is executed, the Navigator widget may not be available yet, and therefore the currentContext property will be null.\nTo fix this issue, you can try using the Navigator.pushNamed method instead of the Navigator.push method. The pushNamed method allows you to navigate to a named route, which is defined in the MaterialApp widget in your main.dart file. This way, you do not need to rely on the Navigator widget being available in the onSelectNotification callback, and you can navigate to the desired screen using the named route instead.\nHere is an example of how you can use the pushNamed method to navigate to a named route from the onSelectNotification callback:\n static _selectNotification(String payload, StreamChatClient client, RemoteMessage message) {\n debugPrint('notification payload: $payload');\n\n if(payload.contains('livestream')) {\n Utils.db.getLiveRoom(payload.split(\":\")[1]).then((liveRoom) {\n Navigator.pushNamed(\n NavigationService.navigatorKey.currentContext!,\n '/live-room-chat',\n arguments: liveRoom,\n );\n });\n }\n else {\n List<String> ids = message.data['channel_id'].toString().split('_');\n String receiverId = '';\n if(ids[0] == Utils.user?.uid) {\n receiverId = ids[1];\n }\n else {\n receiverId = ids[0];\n }\n\n Navigator.pushNamed(\n NavigationService.navigatorKey.currentContext!,\n '/message-api',\n arguments: {\n 'sourceType': Utils.friends.containsKey(receiverId) ? SourceType.friends : SourceType.\n\n"
] |
[
0
] |
[] |
[] |
[
"flutter",
"flutter_local_notification",
"push_notification"
] |
stackoverflow_0074601473_flutter_flutter_local_notification_push_notification.txt
|
Q:
Next Auth.js - I can't get token with getToken({req})
I can't get the token with getToken:
These variables are OK:
NEXTAUTH_SECRET=secret
NEXTAUTH_URL=http://localhost:3000
Here is my [...nextauth].js - I can do console.log(token) and it works well
import NextAuth from "next-auth";
import GoogleProvider from "next-auth/providers/google";
...
jwt: {
secret: process.env.JWT_SECRET,
encryption: true,
},
secret: process.env.NEXTAUTH_SECRET,
callbacks: {
async redirect({ url, baseUrl }) {
return Promise.resolve(url);
},
async jwt({ token, user, account, profile, isNewUser }) {
return token;
},
async session({ session, user, token }) {
return session;
},
},
});
API section (I think getToken doesn't work well):
import { getToken } from "next-auth/jwt";
const secret = process.env.NEXTAUTH_SECRET;
export default async (req, res) => {
const token = await getToken({ req, secret, encryption: true });
console.log(token);
if (token) {
// Signed in
console.log("JSON Web Token", JSON.stringify(token, null, 2));
} else {
// Not Signed in
res.status(401);
}
res.end();
};
A:
This might be a bug in the function, but the only way I got it to work was to use getToken to get the raw JWT token and then use the jsonwebtoken package to verify and decode it:
import { getToken } from "next-auth/jwt";
import jwt from "jsonwebtoken";
const secret = process.env.NEXT_AUTH_SECRET;
const token = await getToken({
req: req,
secret: secret,
raw: true,
});
const payload = jwt.verify(token, process.env.NEXT_AUTH_SECRET);
console.log(payload);
A:
This worked for me:
const secret = process.env.SECRET
const token = await getToken({ req:req ,secret:secret});
Also check [...nextauth].js; if you're using
adapter: MongoDBAdapter(clientPromise),
then you don't have a JWT token; you have to set
session: {
strategy: "jwt",
},
Notice this will save the session locally and not in the database.
more info here: https://next-auth.js.org/configuration/options#session
A:
This issue still exists, but unlike other answers, I simply had to pass the NEXTAUTH_SECRET that I had set in my .env file to my getToken() function, without setting raw: true.
import { getToken } from "next-auth/jwt";
const secret = process.env.NEXTAUTH_SECRET;
export default async (req, res) => {
const token = await getToken({ req: req, secret: secret });
if (token) {
// Signed in
console.log("JSON Web Token", JSON.stringify(token, null, 2));
} else {
// Not Signed in
res.status(401);
}
res.end();
};
|
Next Auth.js - I can't get token with getToken({req})
|
I can't get the token with getToken:
These variables are OK:
NEXTAUTH_SECRET=secret
NEXTAUTH_URL=http://localhost:3000
Here is my [...nextauth].js - I can do console.log(token) and it works well
import NextAuth from "next-auth";
import GoogleProvider from "next-auth/providers/google";
...
jwt: {
secret: process.env.JWT_SECRET,
encryption: true,
},
secret: process.env.NEXTAUTH_SECRET,
callbacks: {
async redirect({ url, baseUrl }) {
return Promise.resolve(url);
},
async jwt({ token, user, account, profile, isNewUser }) {
return token;
},
async session({ session, user, token }) {
return session;
},
},
});
API section (I think getToken doesn't work well):
import { getToken } from "next-auth/jwt";
const secret = process.env.NEXTAUTH_SECRET;
export default async (req, res) => {
const token = await getToken({ req, secret, encryption: true });
console.log(token);
if (token) {
// Signed in
console.log("JSON Web Token", JSON.stringify(token, null, 2));
} else {
// Not Signed in
res.status(401);
}
res.end();
};
|
[
"This might be a bug in the function but the only way i got it to work was to use getToken to get the raw jwt token and then use jsonwebtoken package to verify and decode it\nimport { getToken } from \"next-auth/jwt\";\nimport jwt from \"jsonwebtoken\";\n\nconst secret = process.env.NEXT_AUTH_SECRET;\n const token = await getToken({\n req: req,\n secret: secret,\n raw: true,\n });\nconst payload = jwt.verify(token, process.env.NEXT_AUTH_SECRET);\nconsole.log(payload);\n\n",
"this worked for me:\n const secret = process.env.SECRET\n const token = await getToken({ req:req ,secret:secret});\n\nalso check [...nextauth].js, if you're using\nadapter: MongoDBAdapter(clientPromise),\n\nthen you dont have JWT token, you have to set\n session: {\n strategy: \"jwt\",\n },\n\nnotice this will save the session locally and not on the database\nmore info here: https://next-auth.js.org/configuration/options#session\n",
"This issue still exists, but unlike other answers, I simply had to pass the NEXTAUTH_SECRET that I had set on my .env file to my getToken() function without setting raw: true.\nimport { getToken } from \"next-auth/jwt\";\nconst secret = process.env.NEXTAUTH_SECRET;\n\nexport default async (req, res) => {\n const token = await getToken({ req: req, secret: secret });\n if (token) {\n // Signed in\n console.log(\"JSON Web Token\", JSON.stringify(token, null, 2));\n } else {\n // Not Signed in\n res.status(401);\n }\n res.end();\n};\n\n"
] |
[
6,
0,
0
] |
[] |
[] |
[
"next.js",
"next_auth",
"reactjs"
] |
stackoverflow_0071363829_next.js_next_auth_reactjs.txt
|
Q:
Python (NumPy): Memory efficient array multiplication with fancy indexing
I'm looking to do fast matrix multiplication in python, preferably NumPy, of an array A with another array B of repeated matrices by using a third array I of indices. This can be accomplished using fancy indexing and matrix multiplication:
from numpy.random import rand, randint
A = rand(1000,5,5)
B = rand(40000000,5,1)
I = randint(low=0, high=1000, size=40000000)
A[I] @ B
However, this creates the intermediate array A[I] of shape (40000000, 5, 5) which overflows the memory. It seems highly inefficient to have to repeat a small set of matrices for multiplication, and this is essentially a more general version of broadcasting such as A[0:1] @ B which has no issues.
Are there any alternatives?
I have looked at NumPy's einsum function but have not seen any support for utilizing an index vector in the call.
A:
If you're open to another package, you could wrap it up with dask.
from numpy.random import rand, randint
from dask import array as da
A = da.from_array(rand(1000,5,5))
B = da.from_array(rand(40000000,5,1))
I = da.from_array(randint(low=0, high=1000, size=40000000))
fancy = A[I] @ B
After you've finished manipulating it, bring the result into memory using fancy.compute().
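If adding a dependency isn't an option, a plain-NumPy fallback is to do the multiplication in chunks so the fancy-indexed intermediate stays small. A minimal sketch (the chunk size of 1,000,000 is an arbitrary assumption; tune it to your memory budget):
import numpy as np
out = np.empty((I.shape[0], A.shape[1], B.shape[2]))
for start in range(0, I.shape[0], 1_000_000):
    sl = slice(start, start + 1_000_000)
    # only a (chunk, 5, 5) intermediate is materialized per iteration
    out[sl] = A[I[sl]] @ B[sl]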
|
Python (NumPy): Memory efficient array multiplication with fancy indexing
|
I'm looking to do fast matrix multiplication in python, preferably NumPy, of an array A with another array B of repeated matrices by using a third array I of indices. This can be accomplished using fancy indexing and matrix multiplication:
from numpy.random import rand, randint
A = rand(1000,5,5)
B = rand(40000000,5,1)
I = randint(low=0, high=1000, size=40000000)
A[I] @ B
However, this creates the intermediate array A[I] of shape (40000000, 5, 5) which overflows the memory. It seems highly inefficient to have to repeat a small set of matrices for multiplication, and this is essentially a more general version of broadcasting such as A[0:1] @ B which has no issues.
Are there any alternatives?
I have looked at NumPy's einsum function but have not seen any support for utilizing an index vector in the call.
|
[
"If you're open to another package, you could wrap it up with dask.\nfrom numpy.random import rand, randint\nfrom dask import array as da\n\nA = da.from_array(rand(1000,5,5))\nB = da.from_array(rand(40000000,5,1))\nI = da.from_array(randint(low=0, high=1000, size=40000000))\n\nfancy = A[I] @ B\n\n\nAfter finished manipulating, then bring it into memory using fancy.compute()\n"
] |
[
1
] |
[] |
[] |
[
"memory",
"numpy",
"python",
"vectorization"
] |
stackoverflow_0074657420_memory_numpy_python_vectorization.txt
|
Q:
How can I take the elevation out of a json object so I am left with longitude and latitude, but not altitude?
I have a use case for importing coordinates into a system. The coordinates are provided to me in a JSON format with each point having latitude, longitude, and elevation (sometimes). I'm using @jq to format the json file and remove everything but the coordinates. I have tried and googled methods to cycle through the array and remove any elevations, but no luck. I'm currently manually removing them using vim and reading through the coordinates. I would like to script it so I can use the system's API to fully automate receiving the coordinates and applying them to the system.
TIA.
The data looks like this when it arrives:
{ [ 48.2725225, 12.6538725, 595.2270812 ], [ 48.2725226, 12.6654544 ] }
I need it to be formatted like this without the elevations:
{ [ 48.2725225, 12.6538725 ], [ 48.2725226, 12.6654544 ] }
I've run through multiple data filters and wrote loops to iterate through each element of the array and remove the 3rd number.
A:
Your input
{ [ 48.2725225, 12.6538725, 595.2270812 ], [ 48.2725226, 12.6654544 ] }
is not valid JSON. Curly object braces {} demand a comma-separated list of key-value pairs with a colon : in between.
If instead you had an array of arrays (both enclosed within square brackets []) like this:
[ [ 48.2725225, 12.6538725, 595.2270812 ], [ 48.2725226, 12.6654544 ] ]
you could use map on the outer array to individually process each inner array. That processing could then be to slice the array, reducing it to its first two elements .[:2]:
map(.[:2])
[ [48.2725225,12.6538725], [48.2725226,12.6654544] ]
Demo
If, however, you only have a stream of arrays like this (notice the lack of braces and commas compared to the outer array from above):
[ 48.2725225, 12.6538725, 595.2270812 ] [ 48.2725226, 12.6654544 ]
you can directly proceed with the slicing, no outer mapping is needed:
.[:2]
[48.2725225,12.6538725] [48.2725226,12.6654544]
Demo
Also notice that the output format provided here matches the input format (outer array or stream). Of course, you can convert one into another if you want (see the --slurp option or the item iterator .[] for that).
If your input format is neither, you'd have to apply other means to turn it into one of these.
|
How can I take the elevation out of a json object so I am left with longitude and latitude, but not altitude?
|
I have a use case for importing coordinates into a system. The coordinates are provided to me in a JSON format with each point having latitude, longitude, and elevation (sometimes). I'm using @jq to format the json file and remove everything but the coordinates. I have tried and googled methods to cycle through the array and remove any elevations, but no luck. I'm currently manually removing them using vim and reading through the coordinates. I would like to script it so I can use the system's API to fully automate receiving the coordinates and applying them to the system.
TIA.
The data looks like this when it arrives:
{ [ 48.2725225, 12.6538725, 595.2270812 ], [ 48.2725226, 12.6654544 ] }
I need it to be formatted like this without the elevations:
{ [ 48.2725225, 12.6538725 ], [ 48.2725226, 12.6654544 ] }
I've run through multiple data filters and wrote loops to iterate through each element of the array and remove the 3rd number.
|
[
"Your input\n{ [ 48.2725225, 12.6538725, 595.2270812 ], [ 48.2725226, 12.6654544 ] }\n\nis not valid JSON. Curly object braces {} demand a comma-separated list of key-value pairs with a colon : in between.\n\nIf instead you had an array of arrays (both enclosed within square brackets []) like this:\n[ [ 48.2725225, 12.6538725, 595.2270812 ], [ 48.2725226, 12.6654544 ] ]\n\nyou could use map on the outer array to individually process each inner array. That processing could then be to slice the array, reducing it to its first two elements .[:2]:\nmap(.[:2])\n\n[ [48.2725225,12.6538725], [48.2725226,12.6654544] ]\n\nDemo\n\nIf, however, you only have a stream of arrays like this (notice the lack of braces and commas compared to the outer array from above):\n[ 48.2725225, 12.6538725, 595.2270812 ] [ 48.2725226, 12.6654544 ]\n\nyou can directly proceed with the slicing, no outer mapping is needed:\n.[:2]\n\n[48.2725225,12.6538725] [48.2725226,12.6654544]\n\nDemo\n\nAlso notice that the output format provided here matches the input format (outer array or stream). Of course, you can convert one into another if you want (see the --slurp option or the item iterator .[] for that).\nIf your input format is neither, you'd have to apply other means to turn it into one of these.\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"jq",
"json"
] |
stackoverflow_0074657564_arrays_jq_json.txt
|
Q:
How can i interact with Hardhat smart contract in Angular Project?
I created a Hardhat project and deployed a smart contract locally. I also interact with it via an interact.js file.
Now I want to interact with this contract in an Angular project. How can I do that?
NonFunToken.sol
pragma solidity ^0.8.17;
import { ERC721 } from "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/Ownable.sol";
contract NonFunToken is ERC721, Ownable {
// Constructor will be called on contract creation
constructor() ERC721("NonFunToken", "NONFUN") {}
// Allows minting of a new NFT
function mintCollectionNFT(address collector, uint256 tokenId) public onlyOwner() {
_safeMint(collector, tokenId);
}
}
///////////////////////////////////////////////////
deploy.js
// We require the Hardhat Runtime Environment explicitly here. This is optional
// but useful for running the script in a standalone fashion through `node <script>`.
//
// You can also run a script with `npx hardhat run <script>`. If you do that, Hardhat
// will compile your contracts, add the Hardhat Runtime Environment's members to the
// global scope, and execute the script.
const { ethers } = require("hardhat");
async function main() {
// Get the contract owner
const contractOwner = await ethers.getSigners();
console.log(`Deploying contract from: ${contractOwner[0].address}`);
// Hardhat helper to get the ethers contractFactory object
const NonFunToken = await ethers.getContractFactory('NonFunToken');
// Deploy the contract
console.log('Deploying NonFunToken...');
const nonFunToken = await NonFunToken.deploy();
await nonFunToken.deployed();
console.log(`NonFunToken deployed to: ${nonFunToken.address}`)
}
// We recommend this pattern to be able to use async/await everywhere
// and properly handle errors.
main()
.then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exitCode = 1;
});
///////////////////////////////////////////
// scripts/interact.js
const { ethers } = require("hardhat");
async function main() {
console.log('Getting the non fun token contract...\n');
const contractAddress = '0x5FbDB2315678afecb367f032d93F642f64180aa3';
const nonFunToken = await ethers.getContractAt('NonFunToken', contractAddress);
const signers = await ethers.getSigners();
// name()
console.log('Querying NFT collection name...');
const name = await nonFunToken.name();
console.log(`Token Collection Name: ${name}\n`);
// symbol()
console.log('Querying NFT collection symbol...');
const symbol = await nonFunToken.symbol();
console.log(`Token Collection Symbol: ${symbol}\n`);
// Mint new NFTs from the collection using custom function mintCollectionNFT()
console.log('Minting a new NFT from the collection to the contractOwner...');
const contractOwner = signers[0].address;
const initialMintCount = 10; // Number of NFTs to mint
let initialMint = [];
for (let i = 1; i <= initialMintCount; i++) {
let tx = await nonFunToken.mintCollectionNFT(signers[0].address, i.toString());
await tx.wait(); // wait for this tx to finish to avoid nonce issues
initialMint.push(i.toString());
}
console.log(`${symbol} NFT with tokenIds ${initialMint} and minted to: ${contractOwner}\n`);
// balanceOf()
console.log(`Querying the balance count of contractOwner ${contractOwner}...`);
let contractOwnerBalances = await nonFunToken.balanceOf(contractOwner);
console.log(`${contractOwner} has ${contractOwnerBalances} NFTs from this ${symbol} collection\n`);
// ownerOf()
const NFT1 = initialMint[0];
console.log(`Querying the owner of ${symbol}#${NFT1}...`);
const owner = await nonFunToken.ownerOf(NFT1);
console.log(`Owner of NFT ${symbol} ${NFT1}: ${owner}\n`);
// safeTransferFrom()
const collector = signers[1].address;
console.log(`Transferring ${symbol}#${NFT1} to collector ${collector}...`);
// safeTransferFrom() is overloaded (ie. multiple functions with same name) hence differing syntax
await nonFunToken["safeTransferFrom(address,address,uint256)"](contractOwner, collector, NFT1);
console.log(`${symbol}#${NFT1} transferred from ${contractOwner} to ${collector}`);
console.log(`Querying the owner of ${symbol}#${NFT1}...`);
let NFT1Owner = await nonFunToken.ownerOf(NFT1);
console.log(`Owner of ${symbol}#${NFT1}: ${NFT1Owner}\n`);
// approve()
console.log(`Approving contractOwner to spend collector ${symbol}#${NFT1}...`);
// Creates a new instance of the contract connected to the collector
const collectorContract = nonFunToken.connect(signers[1]);
await collectorContract.approve(contractOwner, NFT1);
console.log(`contractOwner ${contractOwner} has been approved to spend collector ${collector} ${symbol}#${NFT1}\n`);
// getApproved()
console.log(`Getting the account approved to spend ${symbol}#${NFT1}...`);
let NFT1Spender = await nonFunToken.getApproved(NFT1);
console.log(`${NFT1Spender} has the approval to spend ${symbol}#${NFT1}\n`);
// safeTransferFrom() with valid approve()
console.log(`Transferring ${symbol}#${NFT1} from collector ${collector} to contractOwner ${contractOwner} using contractOwner wallet...`);
// Calling the safeTransferFrom() using the contractOwner instance
await nonFunToken["safeTransferFrom(address,address,uint256)"](collector, contractOwner, NFT1);
NFT1Owner = await nonFunToken.ownerOf(NFT1);
console.log(`Owner of ${symbol}#${NFT1}: ${NFT1Owner}\n`);
// setApprovalForAll()
console.log(`Approving collector to spend all of contractOwner ${symbol} NFTs...`);
// Using the contractOwner contract instance as the caller of the function
await nonFunToken.setApprovalForAll(collector, true) // The second parameter can be set to false to remove operator
console.log(`collector ${collector} has been approved to spend all of contractOwner ${contractOwner} ${symbol} NFTs\n`);
// isApprovedForAll()
console.log(`Checking if collector has been approved to spend all of contractOwner ${symbol} NFTs`);
const approvedForAll = await nonFunToken.isApprovedForAll(contractOwner, collector);
console.log(`Is collector ${collector} approved to spend all of contractOwner ${contractOwner} ${symbol} NFTs: ${approvedForAll}\n`);
// safeTransferFrom() with valid isApprovedForAll()
console.log(`Validating collector has approval to transfer all of contractOwner NFTs...`);
// contractOwner NFT count before transfer
contractOwnerBalances = await nonFunToken.balanceOf(contractOwner);
console.log(`BEFORE: ${contractOwner} has ${contractOwnerBalances} NFTs from this ${symbol} collection`);
let collectorBalances = await nonFunToken.balanceOf(collector);
console.log(`BEFORE: ${collector} has ${collectorBalances} NFTs from this ${symbol} collection`);
console.log(`Collector transferring all contractOwner NFTs to collector wallet`);
for (let i = 0; i < initialMint.length; i++) {
// Using the collector wallet to call the transfer
await collectorContract["safeTransferFrom(address,address,uint256)"](contractOwner, collector, initialMint[i]);
}
console.log(`NFT transfer completed`);
contractOwnerBalances = await nonFunToken.balanceOf(contractOwner);
console.log(`AFTER: ${contractOwner} has ${contractOwnerBalances} NFTs from this ${symbol} collection`);
collectorBalances = await nonFunToken.balanceOf(collector);
console.log(`AFTER: ${collector} has ${collectorBalances} NFTs from this ${symbol} collection`);
}
main()
.then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exitCode = 1;
});
A:
Your angular app is usually a separate project to your contract projects. You'll also usually not use hardhat inside your Angular app.
To use a contract inside your Frontend dApp (e.g. written in Angular), you need to:
Import ethers (not from Hardhat)
import { ethers } from "ethers";
Inject your credentials from the Wallet
const provider = new ethers.providers.Web3Provider(ethereum);
const signer = provider.getSigner();
Create an object of your contract using its ABI
const contract = new ethers.Contract(CONTRACT_ADDRESS, CONTRACT_ABI, signer);
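From there you can call the contract's functions directly; for example, using the mintCollectionNFT() function from the contract above (a sketch, assuming the signer is the contract owner and CONTRACT_ABI comes from your Hardhat build artifacts):
// Read-only call
const name = await contract.name();
// State-changing call: mint tokenId 1 to the connected wallet
const tx = await contract.mintCollectionNFT(await signer.getAddress(), 1);
await tx.wait(); // wait for the transaction to be mined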
I recommend doing a tutorial on dApps, e.g. Buildspace: Build your first Ethereum dApp is a good one.
|
How can i interact with Hardhat smart contract in Angular Project?
|
I created a Hardhat project and deployed a smart contract locally. I also interact with it via an interact.js file.
Now I want to interact with this contract in an Angular project. How can I do that?
NonFunToken.sol
pragma solidity ^0.8.17;
import { ERC721 } from "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/Ownable.sol";
contract NonFunToken is ERC721, Ownable {
// Constructor will be called on contract creation
constructor() ERC721("NonFunToken", "NONFUN") {}
// Allows minting of a new NFT
function mintCollectionNFT(address collector, uint256 tokenId) public onlyOwner() {
_safeMint(collector, tokenId);
}
}
///////////////////////////////////////////////////
deploy.js
// We require the Hardhat Runtime Environment explicitly here. This is optional
// but useful for running the script in a standalone fashion through `node <script>`.
//
// You can also run a script with `npx hardhat run <script>`. If you do that, Hardhat
// will compile your contracts, add the Hardhat Runtime Environment's members to the
// global scope, and execute the script.
const { ethers } = require("hardhat");
async function main() {
// Get the contract owner
const contractOwner = await ethers.getSigners();
console.log(`Deploying contract from: ${contractOwner[0].address}`);
// Hardhat helper to get the ethers contractFactory object
const NonFunToken = await ethers.getContractFactory('NonFunToken');
// Deploy the contract
console.log('Deploying NonFunToken...');
const nonFunToken = await NonFunToken.deploy();
await nonFunToken.deployed();
console.log(`NonFunToken deployed to: ${nonFunToken.address}`)
}
// We recommend this pattern to be able to use async/await everywhere
// and properly handle errors.
main()
.then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exitCode = 1;
});
///////////////////////////////////////////
// scripts/interact.js
const { ethers } = require("hardhat");
async function main() {
console.log('Getting the non fun token contract...\n');
const contractAddress = '0x5FbDB2315678afecb367f032d93F642f64180aa3';
const nonFunToken = await ethers.getContractAt('NonFunToken', contractAddress);
const signers = await ethers.getSigners();
// name()
console.log('Querying NFT collection name...');
const name = await nonFunToken.name();
console.log(`Token Collection Name: ${name}\n`);
// symbol()
console.log('Querying NFT collection symbol...');
const symbol = await nonFunToken.symbol();
console.log(`Token Collection Symbol: ${symbol}\n`);
// Mint new NFTs from the collection using custom function mintCollectionNFT()
console.log('Minting a new NFT from the collection to the contractOwner...');
const contractOwner = signers[0].address;
const initialMintCount = 10; // Number of NFTs to mint
let initialMint = [];
for (let i = 1; i <= initialMintCount; i++) {
let tx = await nonFunToken.mintCollectionNFT(signers[0].address, i.toString());
await tx.wait(); // wait for this tx to finish to avoid nonce issues
initialMint.push(i.toString());
}
console.log(`${symbol} NFT with tokenIds ${initialMint} and minted to: ${contractOwner}\n`);
// balanceOf()
console.log(`Querying the balance count of contractOwner ${contractOwner}...`);
let contractOwnerBalances = await nonFunToken.balanceOf(contractOwner);
console.log(`${contractOwner} has ${contractOwnerBalances} NFTs from this ${symbol} collection\n`);
// ownerOf()
const NFT1 = initialMint[0];
console.log(`Querying the owner of ${symbol}#${NFT1}...`);
const owner = await nonFunToken.ownerOf(NFT1);
console.log(`Owner of NFT ${symbol} ${NFT1}: ${owner}\n`);
// safeTransferFrom()
const collector = signers[1].address;
console.log(`Transferring ${symbol}#${NFT1} to collector ${collector}...`);
// safeTransferFrom() is overloaded (ie. multiple functions with same name) hence differing syntax
await nonFunToken["safeTransferFrom(address,address,uint256)"](contractOwner, collector, NFT1);
console.log(`${symbol}#${NFT1} transferred from ${contractOwner} to ${collector}`);
console.log(`Querying the owner of ${symbol}#${NFT1}...`);
let NFT1Owner = await nonFunToken.ownerOf(NFT1);
console.log(`Owner of ${symbol}#${NFT1}: ${NFT1Owner}\n`);
// approve()
console.log(`Approving contractOwner to spend collector ${symbol}#${NFT1}...`);
// Creates a new instance of the contract connected to the collector
const collectorContract = nonFunToken.connect(signers[1]);
await collectorContract.approve(contractOwner, NFT1);
console.log(`contractOwner ${contractOwner} has been approved to spend collector ${collector} ${symbol}#${NFT1}\n`);
// getApproved()
console.log(`Getting the account approved to spend ${symbol}#${NFT1}...`);
let NFT1Spender = await nonFunToken.getApproved(NFT1);
console.log(`${NFT1Spender} has the approval to spend ${symbol}#${NFT1}\n`);
// safeTransferFrom() with valid approve()
console.log(`Transferring ${symbol}#${NFT1} from collector ${collector} to contractOwner ${contractOwner} using contractOwner wallet...`);
// Calling the safeTransferFrom() using the contractOwner instance
await nonFunToken["safeTransferFrom(address,address,uint256)"](collector, contractOwner, NFT1);
NFT1Owner = await nonFunToken.ownerOf(NFT1);
console.log(`Owner of ${symbol}#${NFT1}: ${NFT1Owner}\n`);
// setApprovalForAll()
console.log(`Approving collector to spend all of contractOwner ${symbol} NFTs...`);
// Using the contractOwner contract instance as the caller of the function
await nonFunToken.setApprovalForAll(collector, true) // The second parameter can be set to false to remove operator
console.log(`collector ${collector} has been approved to spend all of contractOwner ${contractOwner} ${symbol} NFTs\n`);
// isApprovedForAll()
console.log(`Checking if collector has been approved to spend all of contractOwner ${symbol} NFTs`);
const approvedForAll = await nonFunToken.isApprovedForAll(contractOwner, collector);
console.log(`Is collector ${collector} approved to spend all of contractOwner ${contractOwner} ${symbol} NFTs: ${approvedForAll}\n`);
// safeTransferFrom() with valid isApprovedForAll()
console.log(`Validating collector has approval to transfer all of contractOwner NFTs...`);
// contractOwner NFT count before transfer
contractOwnerBalances = await nonFunToken.balanceOf(contractOwner);
console.log(`BEFORE: ${contractOwner} has ${contractOwnerBalances} NFTs from this ${symbol} collection`);
let collectorBalances = await nonFunToken.balanceOf(collector);
console.log(`BEFORE: ${collector} has ${collectorBalances} NFTs from this ${symbol} collection`);
console.log(`Collector transferring all contractOwner NFTs to collector wallet`);
for (let i = 0; i < initialMint.length; i++) {
// Using the collector wallet to call the transfer
await collectorContract["safeTransferFrom(address,address,uint256)"](contractOwner, collector, initialMint[i]);
}
console.log(`NFT transfer completed`);
contractOwnerBalances = await nonFunToken.balanceOf(contractOwner);
console.log(`AFTER: ${contractOwner} has ${contractOwnerBalances} NFTs from this ${symbol} collection`);
collectorBalances = await nonFunToken.balanceOf(collector);
console.log(`AFTER: ${collector} has ${collectorBalances} NFTs from this ${symbol} collection`);
}
main()
.then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exitCode = 1;
});
|
[
"Your angular app is usually a separate project to your contract projects. You'll also usually not use hardhat inside your Angular app.\n\nTo use a contract inside your Frontend dApp (e.g. written in Angular), you need to:\n\nImport ethers (not from Hardhat)\n\nimport { ethers } from \"ethers\";\n\n\nInject your credentials from the Wallet\n\nconst provider = new ethers.providers.Web3Provider(ethereum);\nconst signer = provider.getSigner();\n\n\nCreate an object of your contact using ABI\n\nconst contract = new ethers.Contract(CONTRACT_ADDRESS, CONTRACT_ABI, signer);\n\n\nI recommend doing a tutorial on dApps, e.g. Buildspace: Build your first Ethereum dApp is a good one.\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"blockchain",
"hardhat",
"web3js"
] |
stackoverflow_0074646671_angular_blockchain_hardhat_web3js.txt
|
Q:
Mac run python script with double click
I'm trying to make an Automator application to run a Python script so I can double-click the icon and start the script.
It doesn't give me an error, but it does nothing.
#!/bin/bash
echo Running Script
python /Desktop/test.py
echo Script ended
I also tried with a Shell script .sh with the same code.
It was working before with the .sh until I updated to macOS Ventura.
I also installed Anaconda and Python, but I'm not sure how to point to the Anaconda environment.
any help would be great
Thank you
A:
The shell that Automator uses has different PATH, setopt, and other values from your shell in Terminal. Therefore the script in your Automator app won't run exactly the same as it does in your shell.
To make it work, you need to use an absolute path of the python command.
Run this command in your terminal to get the absolute path.
which python
Afterwards, replace python in your script with that result.
#!/bin/bash
echo Running Script
/Absolute/path/somewhere/python /Desktop/test.py
echo Script ended
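Note that the script path itself is probably also wrong: /Desktop/test.py is an absolute path from the root of the disk, so unless the file really lives there, it should point into your home directory (a sketch; adjust the filename to yours):
#!/bin/bash
echo Running Script
# $HOME expands to /Users/<you>; use the absolute python path from `which python`
/Absolute/path/somewhere/python "$HOME/Desktop/test.py"
echo Script ended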
To debug by showing a window with the script result, add a "Run AppleScript" action in your Automator app with the code below.
It's not perfect, but it's useful.
Also, in your picture, you can see the result by clicking the Results button.
on run {input, parameters}
display dialog input as text
end run
|
Mac run python script with double click
|
I'm trying to make an Automator application to run a Python script so I can double-click the icon and start the script.
It doesn't give me an error, but it does nothing.
#!/bin/bash
echo Running Script
python /Desktop/test.py
echo Script ended
I also tried with a Shell script .sh with the same code.
It was working before with the .sh until I updated to macOS Ventura.
I also installed Anaconda and Python, but I'm not sure how to point to the Anaconda environment.
any help would be great
Thank you
|
[
"A shell of Automator has different PATH, setopt, and other values from your shell in terminal. Therefore the script in your Automator won't work exactly the same in your shell.\nTo make it work, you need to use an absolute path of the python command.\nRun this command in your terminal to get the absolute path.\nwhich python\n\nAfter, fix python in your script with the previous result.\n#!/bin/bash\n\necho Running Script\n\n/Absolute/path/somewhere/python /Desktop/test.py\n\necho Script ended\n\nTo debug with showing a window of the script result\nAdd \"Run Applescript\" in your automator app with the code below.\nIt's not perfect but useful.\nAlso, in your picture, you can see the result by click the Results button.\non run {input, parameters}\n display dialog input as text\nend run\n\n"
] |
[
0
] |
[] |
[] |
[
"automator",
"macos",
"python"
] |
stackoverflow_0074657989_automator_macos_python.txt
|
Q:
How to remove Google Tag Manager ID using an in-house Chrome Extension?
We have staff working from home all over the UK, and we need to stop their page views from being tracked by all the tracking held in Google Tag Manager. An IP block won't work due to the number of staff and their variable IPs.
I'm looking for a way to remove the Google Tag Manager ID, or the code in full, before it's fired.
I'm thinking an in-house built Chrome Extension will do this, but for the life of me I can't find out how.
I know how Extensions are built.
Also, are extensions fired after the DOM has fully loaded? If so, then this option isn't workable.
Any tips or assistance would be greatly appreciated.
Thanks
{
"name": " GTM Blocker",
"version": "1.0.0",
"description": "To block all tracking data",
"manifest_version": 3,
"author": "John Bell",
"action":{
"default_popup": "index.html",
"default_title": " GTM Blocker"
},
"icons": { "16": "images/favicon.ico",
"48": "images/favicon.ico",
"128": "images/favicon.ico" }
}
I did find this plugin https://github.com/tommyrharper/gtm-disabler-chrome
But it doesn't seem to work.
I added the extension and then did the following...
<meta name="GTM-Blocker" content="enabled" />
<!-- Google Tag Manager -->
<script>
const GTMBlocker = document.querySelector('meta[name="GTM-Blocker"]');
const GTMBlockerEnabled = GTMBlocker.content === 'enabled'
if (GTMBlockerEnabled) { // DO NOT CONNECT TO GOOGLE TAG MANAGER
alert("It does not fire");
}else{
alert("It fires");
(function(w, d, s, l, i) {
w[l] = w[l] || [];
w[l].push({
'gtm.start': new Date().getTime(),
event: 'gtm.js'
});
var f = d.getElementsByTagName(s)[0],
j = d.createElement(s),
dl = l != 'dataLayer' ? '&l=' + l : '';
j.async = true;
j.src =
'https://www.googletagmanager.com/gtm.js?id=' + i + dl;
f.parentNode.insertBefore(j, f);
})(window, document, 'script', 'dataLayer', '<?php echo set_header_codes(TAG_MANGER_CODE); ?>');
}
</script>
<!-- End Google Tag Manager -->
A:
Ok, you're likely approaching this from a wrong direction. However, I will be orderly:
Extensions are capable of preventing any network requests.
Extensions are there even before the dom ready.
Pretty much any adblocker will, too, block GTM. And for some adblockers, you can set exactly what to block, then ask your employees to use that.
Adblockers may be an overkill there. There are different extensions that allow overriding resources. I'm talking about... errr, well, it's called exactly that: resource override. You can easily override your GTM endpoint with a non-existent GTM, or with a different container that you want your employees to use. I would definitely do the latter. But, there's a better solution.
A better solution would be a GTM solution, of course. If you're ready to send your employees to some plugin installation, then it's likely easier to ask them to visit a certain page on your site where you would set a cookie and a local storage flag, marking them as employees. The cookie can be set only on that page (and on any lower env page), but you will reset the expiry date for the cookie every time a user hits any prod page of the site, to keep the cookie alive. Local storage is a backup here in case they delete the cookie.
Don't forget mobile browsers don't have extensions. They still have cookies though. Very useful.
The above can be done in GTM, but it can also be done on the front-end. Now you just don't serve the GTM snippet to those who have the cookie/local storage flag. Or you do it more elegantly, in GTM: make a blocking trigger based on that cookie, then add it as a blocker to every tag, or to the tags you want to censor like this.
But even that is far from perfect.
A better solution would be to forcefully direct all the tracking from the employees to your lower environment GA instance. Or AA, or wherever you send your data to. Easier done than it looks: you would just have to serve your measurement id through a JS variable (or a lookup table) rather than hardcode it, and have the cookie-based condition there. You still just block all your third-party pixels though; you don't want employees triggering them.
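As an illustration, the front-end could expose the id on the dataLayer and let a GTM variable read it from there (the cookie name and measurement ids below are assumptions, not real values):
// Decide which GA property to load based on the employee flag
var isEmployee = document.cookie.indexOf('employee_flag=1') !== -1 ||
                 localStorage.getItem('employee_flag') === '1';
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  gaMeasurementId: isEmployee ? 'G-LOWERENV000' : 'G-PROD0000000'
});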
Now this is better. Not perfect though.
A perfect solution would be having a separate lower environment container that would be conditionally served on all lower environments and even on prod, when a person is an employee.
Now, if you have so many employees that you feel like making an extension, then you might as well have them on a VPN. Or, at least, you have an IT department that controls the machines through group policies and Active Directory. Through them, IT can override clients' local hosts files and firewalls, either blocking or rerouting certain endpoints to different ones. This approach will require collaboration from IT, but it will not rely on employees following your guidance, and it will cover all work devices.
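A hosts-file override of that kind can be as small as one line pushed out via group policy (illustrative only; blackholing this endpoint will break GTM on every site, which is the intent here):
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
0.0.0.0 www.googletagmanager.com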
Sorry for all the details here, I just recently worked on a similar task, so memories are fresh. The latter solution was the best I could conjure up for a corp with over 50k employees.
|
How to remove Google Tag Manager ID using an in-house Chrome Extension?
|
We have staff working from home all over the UK, and we need to stop their page views from being tracked by all the tracking held in Google Tag Manager. An IP block won't work due to the number of staff and their variable IPs.
I'm looking for a way to remove the Google Tag Manager ID, or the code in full, before it's fired.
I'm thinking an in-house built Chrome Extension will do this, but for the life of me I can't find out how.
I know how Extensions are built.
Also, are extensions fired after the DOM has fully loaded? If so, then this option isn't workable.
Any tips or assistance would be greatly appreciated.
Thanks
{
"name": " GTM Blocker",
"version": "1.0.0",
"description": "To block all tracking data",
"manifest_version": 3,
"author": "John Bell",
"action":{
"default_popup": "index.html",
"default_title": " GTM Blocker"
},
"icons": { "16": "images/favicon.ico",
"48": "images/favicon.ico",
"128": "images/favicon.ico" }
}
I did find this plugin https://github.com/tommyrharper/gtm-disabler-chrome
But it doesn't seem to work.
I added the extension and then did the following...
<meta name="GTM-Blocker" content="enabled" />
<!-- Google Tag Manager -->
<script>
const GTMBlocker = document.querySelector('meta[name="GTM-Blocker"]');
const GTMBlockerEnabled = GTMBlocker.content === 'enabled'
if (GTMBlockerEnabled) { // DO NOT CONNECT TO GOOGLE TAG MANAGER
alert("It does not fire");
}else{
alert("It fires");
(function(w, d, s, l, i) {
w[l] = w[l] || [];
w[l].push({
'gtm.start': new Date().getTime(),
event: 'gtm.js'
});
var f = d.getElementsByTagName(s)[0],
j = d.createElement(s),
dl = l != 'dataLayer' ? '&l=' + l : '';
j.async = true;
j.src =
'https://www.googletagmanager.com/gtm.js?id=' + i + dl;
f.parentNode.insertBefore(j, f);
})(window, document, 'script', 'dataLayer', '<?php echo set_header_codes(TAG_MANGER_CODE); ?>');
}
</script>
<!-- End Google Tag Manager -->
|
[
"Ok, you're likely approaching this from a wrong direction. However, I will be orderly:\n\nExtensions are capable of preventing any network requests.\nExtensions are there even before the dom ready.\nPretty much any adblocker will, too, block GTM. And for some adblockers, you can set exactly what to block, then ask your employees to use that.\n\nAdblockers may be an overkill there. There are different extensions that allow overriding resources. I'm talking about... errr, well, it's called exactly that: resource override. You can easily override your GTM endpoint with a non-existent GTM, or with a different container that you want your employees to use. I would definitely do the latter. But, there's a better solution.\nA better solution would be a GTM solution, of course. If you're ready to send your employees to some plugin installation, then it's likely easier to ask them to visit certain page on your site where you would set a cookie and a local storage, flagging them as employees. The cookie can be set only on that page (and on any lower env page), but you will reset the expiry date for the cookie every time a user hits any prod page of the site, to keep the cookie. Local storage is a backup here if they delete the cookie or something.\nDon't forget mobile browsers don't have extensions. They still have cookies though. Very useful.\nThe above can be done in GTM, but also it can be done on the front-end. Now you just don't serve the GTM snippet to those who have the cookie/local storage flag. Or you do it more elegantly, in GTM. Making a blocking trigger based on that cookie. Then adding it as a blocker to every tag, or to tags you want to censor like this.\nBut even that is far from perfect.\nA better solution would be to forcefully direct all the tracking from the employees to your lower environment GA instance. Or AA, or wherever you send your data to. Easier done that it looks: you would just have to serve your measurement id through a JS variable (or a lookup table) rather than hardcode it, and have the cookie-based condition there. You still just block all your third party pixels though. Don't want employees triggering them\nNow this is better. Not perfect though.\nA perfect solution would be having a separate lower environment container that would be conditionally served on all lower environments and even on prod, when a person is an employee.\nNow, if you have that many employees that you feel like making an extension, then you must as well have them on VPN. Or, at least, you have an IT department who controls the machines through group policies and Active Directory. Now, through them, the IT can override client's local host files and firewalls, either blocking or rerouting certain endpoints to different endpoints. This approach will require collaboration from IT, but it will not rely on employees following your guidance and it will cover all work devices.\nSorry for all the details here, I just recently worked on a similar task, so memories are fresh. The latter solution was the best I could conjure up for a corp with over 50k employees.\n"
] |
[
0
] |
[] |
[] |
[
"google_analytics",
"google_chrome",
"google_chrome_extension",
"google_tag_manager"
] |
stackoverflow_0074657510_google_analytics_google_chrome_google_chrome_extension_google_tag_manager.txt
|
Q:
How to perform SUM in calculated column such that the SUM is performed for that row rather than across all rows?
I'm trying to learn about calculated columns and measures.
In the data table, when I create a calculated column with the following DAX, SUM(Sale[salesamt]), every row shows the total sum across all rows.
For the SUM to apply to each row, rather than across all rows, which technique should I use: SUMX or CALCULATE?
Should there be a context transition, and if so, must we use CALCULATE or SUMX?
A:
A calculated column (as opposed to a measure) will operate on the data in a single row without the use of any aggregate functions.
SUMX would be used in a measure where you want to iterate over the table, resolve some expression on each row, and then sum those results into a single value.
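A minimal sketch of the difference, using the Sale[salesamt] column from the question (the column and measure names are illustrative):
-- Calculated column: evaluated in row context, no aggregation needed
LineAmount = Sale[salesamt]

-- Measure: SUMX iterates Sale, evaluates the expression per row, then sums the results
Total Sales = SUMX ( Sale, Sale[salesamt] )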
|
How to perform SUM in calculated column such that the SUM is performed for that row rather than across all rows?
|
I'm trying to learn about calculated column and measures.
In the data table, when I create a calculated column with following DAX SUM(Sale[salesamt]), then for each row it will do a SUM showing the total sum across all rows.
For SUM to be applicable for each row, rather than all rows, what is the technique to use: It is to use SUMX or to use CALCULATE?
Should there be context transition - if so then we must use CALCULATE or SUMX?
|
[
"A calculated column (as opposed to a measure) will operate on the data in a single row without the use of any aggerate functions.\n\nSUMX would be used in a measure where you wanted to iterate over the table resolve some expression on each row and then sum those results into a single value.\n\n"
] |
[
1
] |
[] |
[] |
[
"dax",
"powerbi"
] |
stackoverflow_0074653247_dax_powerbi.txt
|
Q:
Saving a JSON file after filtering with jq
I've filtered values in a json file that I obtained from Google Maps API with the following command:
jq '.[] | select(.vicinity|contains("Bern"))' Bern.json > Bern_test.json
(I only want to have places in Bern)
I tried loading the Bern_test.json file into Tableau to visualize it, but Tableau complains that the format is wrong. An online JSON validator also says something is wrong:
Error: Parse error on line 38:
... Wohlen bei Bern"} { "business_status"
----------------------^
Expecting 'EOF', '}', ',', ']', got '{'
Here is the "broken" part (if it helps):
"reference": "ChIJ-abNilA5jkcRAh6Jj7dLx_c",
"scope": "GOOGLE",
"types": [
"restaurant",
"food",
"point_of_interest",
"establishment"
],
"vicinity": "Aumattweg 22, Wohlen bei Bern"
} {
"business_status": "OPERATIONAL",
"geometry": {
"location": {
"lat": 46.9684408,
"lng": 7.377661100000001
I don't know why it got broken, or how I can properly save a JSON file after I've done some magic with jq. Can anyone help?
A:
Your current filter produces a stream of JSON objects rather than a single JSON value. To fix that, wrap your filter in [...]:
jq '[.[] | select(.vicinity|contains("Bern"))]' Bern.json > Bern_test.json
or use the predefined map function to achieve the same result.
jq 'map(select(.vicinity|contains("Bern")))' Bern.json > Bern_test.json
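For what it's worth, map is defined in jq's builtins in terms of the first form, so the two commands are equivalent by construction:
def map(f): [.[] | f];

You can verify that the output file now holds a single array:
jq 'type' Bern_test.json   # prints "array"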
|
Saving a json file after after filtering with jq
|
I've filtered values in a json file that I obtained from Google Maps API with the following command:
jq '.[] | select(.vicinity|contains("Bern"))' Bern.json > Bern_test.json
(I only want to have places in Bern)
I tried loading the Bern_test.json file to Tableau to visualize it, but Tableau complains that the format is wrong. json validator online also says something is wrong:
Error: Parse error on line 38:
... Wohlen bei Bern"} { "business_status"
----------------------^
Expecting 'EOF', '}', ',', ']', got '{'
Here is the "broken" part (if it helps):
"reference": "ChIJ-abNilA5jkcRAh6Jj7dLx_c",
"scope": "GOOGLE",
"types": [
"restaurant",
"food",
"point_of_interest",
"establishment"
],
"vicinity": "Aumattweg 22, Wohlen bei Bern"
} {
"business_status": "OPERATIONAL",
"geometry": {
"location": {
"lat": 46.9684408,
"lng": 7.377661100000001
I don't know why it got broken, or how I can properly save a json file after I've done some magic with jq. Can anyone help?
|
[
"Your current filter produces a stream of JSON objects rather than a single JSON value. To fix that, wrap your filter in [...]:\njq '[.[] | select(.vicinity|contains(\"Bern\"))]' Bern.json > Bern_test.json\n\nor use the predefined map function to achieve the same result.\njq 'map(select(.vicinity|contains(\"Bern\")))' Bern.json > Bern_test.json\n\n"
] |
[
2
] |
[] |
[] |
[
"jq",
"json"
] |
stackoverflow_0074657818_jq_json.txt
|
Q:
How to push Unity project (files together over 1GB) to the GitHub without splitting it?
I am trying to push my Unity project to GitHub and this error appears:
batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
I want to have my entire project on one repo without splitting it. Is there any way to do it?
There is this topic but I'm not really sure what those answers say. If they are correct, can someone simplify them for me?
Also, someone here said that once a month you get 1 GB on Git LFS, but on which day of the month does this happen?
A:
If you need to distribute large files within your repository, you can create releases on GitHub.com. Releases allow you to package software, release notes, and links to binary files for other people to use; see the GitHub documentation. There is no limit to the total size of the binary files in the release or the bandwidth used to deliver them. However, each individual file must be smaller than 2 GB.
Instead, if you've got really large binary files, besides git-lfs, which keeps a lightweight pointer to your file, you might want to consider git-annex to store the data outside of the repository. I think that git-lfs refreshes your bandwidth limit one month after you reach the limit.
That being said, in my opinion, the best option would be to store on GitHub only the source code of your project and put all large asset files, typical of a game (meshes, textures, audio files, etc.), on a separate storage service (Amazon S3, Google Cloud Storage, etc.). Only push their relative .meta files and add a pre-processing script that downloads and injects your dependencies inside your project before opening it in the editor. This could be a magefile or more simply a shell/bash script.
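A minimal sketch of such a script (the bucket URL, archive name, and target folder are all hypothetical):
#!/usr/bin/env bash
set -euo pipefail
# Download the versioned asset archive and unpack it into the project.
ASSETS_URL="https://my-game-assets.s3.amazonaws.com/assets-2022-12-01.zip"  # hypothetical
curl -L "$ASSETS_URL" -o /tmp/assets.zip
unzip -o /tmp/assets.zip -d Assets/ExternalAssets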
In this way, you can organize (multiple) archived packages that you can update manually and use your own versioning method (renaming the archives with the date they were created, the dependencies versions, etc.).
This gives you more granular control over the files you will inject, since git cannot generate meaningful diffs or merge binary files in any way that makes sense. So all merges, rebases, or cherry-picks involving a change to a binary file will require you to make a manual conflict resolution on that binary file, and you would not break GitHub guidelines on this. Quoting:
We recommend repositories remain small, ideally less than 1 GB, and
less than 5 GB is strongly recommended. Smaller repositories are
faster to clone and easier to work with and maintain. If your
repository excessively impacts our infrastructure, you might receive
an email from GitHub Support asking you to take corrective action.
|
How to push Unity project (files together over 1GB) to the GitHub without splitting it?
|
I am trying to push my Unity project to GitHub and this error appears:
batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
I want to have my entire project on one repo without splitting it. Is there any way to do it?
There is this topic but I'm not really sure what that answers say. If they are correct can someone simplify them to me?
Also here someone said that once a month you get 1GB on git lfs but on witch day of the month this happen?
|
[
"If you need to distribute large files within your repository, you can create releases on GitHub.com. Releases allow you to package software, release notes, and links to binary files, for other people to use. see the GitHub documentation. There is no limit to the total size of the binary files in the release or the bandwidth used to deliver them. However, each individual file must be smaller than 2 GB.\nInstead, if you've got really large binary files, besides git-lfs which has a lightweight pointer to your file you might wanna consider git-annex to store the data outside of the repository. I think that git-lfs refreshes your bandwidth limit one month after you reached the limit.\nThat being said, In my opinion, the best option would be to store on GitHub only the source code of your project and add all large assets files, typical of a game (meshes, textures, audio files, etc.) on a separate database (Amazon S3, Google Cloud, etc.). Only push their relative .meta files and add a pre-processing script that downloads and injects your dependencies inside your project before opening it in the editor. This could be a magefile or more simply a shell/bash script.\nIn this way, you can organize (multiple) archived packages that you can update manually and use your own versioning method (renaming the archives with the date they were created, the dependencies versions, etc.).\nThis gives you more granular control over the files you will inject as git cannot generate meaningful diffs, or merge binary files in any way that could make sense. So all merges, rebases, or cherrypicks involving a change to a binary file will involve you making a manual conflict resolution on that binary file and you would not break GitHub guidelines on this. Quoting:\n\nWe recommend repositories remain small, ideally less than 1 GB, and\nless than 5 GB is strongly recommended. Smaller repositories are\nfaster to clone and easier to work with and maintain. If your\nrepository excessively impacts our infrastructure, you might receive\nan email from GitHub Support asking you to take corrective action.\n\n"
] |
[
2
] |
[] |
[] |
[
"git",
"git_lfs",
"github",
"unity3d",
"version_control"
] |
stackoverflow_0074657680_git_git_lfs_github_unity3d_version_control.txt
|
Q:
Chrome dev tools fails to show response even when the content returned has header Content-Type: text/html; charset=UTF-8
Why do my Chrome developer tools show
Failed to show response data
in response when the content returned is of type text/html?
What is the alternative to see the returned response in developer tools?
A:
I think this only happens when you have 'Preserve log' checked and you are trying to view the response data of a previous request after you have navigated away.
For example, I viewed the Response to loading this Stack Overflow question. You can see it.
The second time, I reloaded this page but didn't look at the Headers or Response. I navigated to a different website. Now when I look at the response, it shows 'Failed to load response data'.
This is a known issue, that's been around for a while, and debated a lot.
A:
As described by Gideon, this is a known issue with Chrome that has been open for more than 5 years with no apparent interest in fixing it.
Unfortunately, in my case, the window.onunload = function() { debugger; } workaround didn't work either. So far the best workaround I've found is to use Firefox, which does display response data even after a navigation. The Firefox devtools also have a lot of nice features missing in Chrome, such as syntax highlighting the response data if it is html and automatically parsing it if it is JSON.
A:
For those who are getting the error while requesting JSON data:
If you are requesting JSON data, the JSON might be too large, and that is what causes the error to happen.
My solution is to open the request link in a new tab (a GET request from the browser),
then copy the data into an online JSON viewer that parses it automatically, and work on it there.
A:
As described by Gideon, this is a known issue.
You can use window.onunload = function() { debugger; } as a workaround.
But you can also add a breakpoint in the Sources tab, which can solve your problem,
like this:
A:
If you make an AJAX request with fetch, the response isn't shown unless it's read with .text(), .json(), etc.
If you just do:
r = fetch("/some-path");
the response won't be shown in dev tools.
It shows up after you run:
r.then(r => r.text())
A:
"Failed to show response data" can also happen if you are doing crossdomain requests and the remote host is not properly handling the CORS headers. Check your js console for errors.
A:
For those who receive this error while requesting large JSON data, it is, as mentioned by Blauhirn, not a solution to just open the request in a new tab if you are using authentication headers and the like.
Fortunately, Chrome does have other options, such as Copy -> Copy as cURL.
Running this call from the command line through cURL will be an exact replica of the original call.
I added > ~/result.json to the last part of the command to save the result to a file.
Otherwise it will be output to the console.
A:
For those coming here from Google, and for whom the previous answers do not solve the mystery...
If you use XHR to make a server call, but do not return a response, this error will occur.
Example (from Nodejs/React but could equally be js/php):
App.tsx
const handleClickEvent = () => {
fetch('/routeInAppjs?someVar=someValue&nutherVar=summat_else', {
method: 'GET',
mode: 'same-origin',
credentials: 'include',
headers: {
'content-type': 'application/json',
dataType: 'json',
},
}).then((response) => {
console.log(response)
});
}
App.js
app.route('/getAllPublicDatasheets').get(async function (req, res) {
const { someVar, nutherVar } = req.query;
console.log('Ending here without a return...')
});
The console will then report:
Failed to show response data
To fix, add the return response to the bottom of your route (server-side):
res.json('Adding this below the console.log in App.js route will solve it.');
A:
I had the same problem and none of the answers worked. Finally I noticed I had made a huge mistake and had chosen 'Other', as you can see.
Now this seems like a dumb mistake, but the thing is that even after removing and reinstalling Chrome the problem remained (settings are not uninstalled by default when removing Chrome), so it took me a while until I found this and chose 'All' again!
This happened because my backend doesn't handle the OPTIONS method and because I had clicked on 'Other' by mistake, which caused me to spend a couple of days trying answers!
A:
Bug still active.
This happens when JS becomes the initiator for a new page (200) or a redirect (301/302).
One possible way to work around it is to disable JavaScript on the request.
E.g., in Puppeteer you can use page.setJavaScriptEnabled(false) while intercepting requests (page.on('request')).
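A minimal Puppeteer sketch of that workaround (the URL is illustrative):
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setJavaScriptEnabled(false);   // prevent JS-initiated navigations/redirects
  await page.setRequestInterception(true);
  page.on('request', request => request.continue());
  const response = await page.goto('https://example.com');  // illustrative URL
  console.log(await response.text());       // the body stays readable
  await browser.close();
})();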
A:
Another possibility is that the server does not handle the OPTIONS request.
A:
One workaround is to use Postman with the same request URL, headers, and payload.
It will give the response for sure.
A:
For me, the issue happens when the returned JSON file is too large.
If you just want to see the response, you can get it with the help of Postman. See the steps below:
Copy the request with all information (including URL, headers, token, etc.) from the Chrome debugger through Chrome Developer Tools -> Network tab -> find the request -> right click on it -> Copy -> Copy as cURL.
Open Postman, Import -> Raw text, paste the content. Postman will recreate the same request. Then run the request and you should see the JSON response.
(Screenshot of importing cURL into Postman: https://i.stack.imgur.com/dL9Qo.png)
If you want to reduce the size of the API response, maybe you can return fewer fields in the response. For mongoose, you can easily do this by providing a field name list when calling the find() method.
For example, convert the method from:
const users = await User.find().lean();
To:
const users = await User.find({}, '_id username email role timecreated').lean();
In my case, there is a field called description, which is a large string. After removing it from the field list, the response size is reduced from 6.6 MB to 404 KB.
A:
As long as the body of the Response is not consumed within your code (using .json() or .text(), for instance), it won't be displayed in the preview tab of Chrome dev tools.
|
Chrome dev tools fails to show response even the content returned has header Content-Type:text/html; charset=UTF-8
|
Why does my Chrome developer tools show
Failed to show response data
in response when the content returned is of type text/html?
What is the alternative to see the returned response in developer tools?
|
[
"I think this only happens when you have 'Preserve log' checked and you are trying to view the response data of a previous request after you have navigated away.\nFor example, I viewed the Response to loading this Stack Overflow question. You can see it.\n\nThe second time, I reloaded this page but didn't look at the Headers or Response. I navigated to a different website. Now when I look at the response, it shows 'Failed to load response data'.\n\nThis is a known issue, that's been around for a while, and debated a lot.\n",
"As described by Gideon, this is a known issue with Chrome that has been open for more than 5 years with no apparent interest in fixing it.\nUnfortunately, in my case, the window.onunload = function() { debugger; } workaround didn't work either. So far the best workaround I've found is to use Firefox, which does display response data even after a navigation. The Firefox devtools also have a lot of nice features missing in Chrome, such as syntax highlighting the response data if it is html and automatically parsing it if it is JSON.\n",
"For the ones who are getting the error while requesting JSON data:\nIf your are requesting JSON data, the JSON might be too large and that what cause the error to happen.\nMy solution is to copy the request link to new tab (get request from browser)\ncopy the data to JSON viewer online where you have auto parsing and work on it there.\n",
"As described by Gideon, this is a known issue.\nFor use window.onunload = function() { debugger; } instead.\nBut you can add a breakpoint in Source tab, then can solve your problem.\nlike this:\n\n",
"If you make an AJAX request with fetch, the response isn't shown unless it's read with .text(), .json(), etc.\nIf you just do:\n r = fetch(\"/some-path\");\n\nthe response won't be shown in dev tools.\nIt shows up after you run:\nr.then(r => r.text())\n\n",
"\"Failed to show response data\" can also happen if you are doing crossdomain requests and the remote host is not properly handling the CORS headers. Check your js console for errors.\n",
"For the once who receive this error while requesting large JSON data it is, as mentioned by Blauhirn, not a solution to just open the request in new tab if you are using authentication headers and suchlike.\nForturnatly chrome does have other options such as Copy -> Copy as curl.\nRunning this call from the commandoline through cURL will be a exact replicate of the original call.\nI added > ~/result.json to the last part of the commando to save the result to a file.\nOtherwise it will be outputted to the console.\n",
"For those coming here from Google, and for whom the previous answers do not solve the mystery...\nIf you use XHR to make a server call, but do not return a response, this error will occur.\nExample (from Nodejs/React but could equally be js/php):\nApp.tsx\nconst handleClickEvent = () => {\n fetch('/routeInAppjs?someVar=someValue&nutherVar=summat_else', {\n method: 'GET',\n mode: 'same-origin',\n credentials: 'include',\n headers: {\n 'content-type': 'application/json',\n dataType: 'json',\n },\n }).then((response) => {\n console.log(response)\n });\n}\n\nApp.js\napp.route('/getAllPublicDatasheets').get(async function (req, res) {\n const { someVar, nutherVar } = req.query;\n console.log('Ending here without a return...')\n});\n\nConsole.log will here report:\nFailed to show response data\n\nTo fix, add the return response to bottom of your route (server-side):\nres.json('Adding this below the console.log in App.js route will solve it.');\n\n",
"I had the same problem and none of the answers worked, finally i noticed i had made a huge mistake and had chosen other as you can see\n\nNow this seems like a dumb mistake but the thing is even after removing and reinstalling chrome the problem had remained (settings are not uninstalled by default when removing chrome) and so it took me a while until I found this and choose All again...!\nThis happened because my backend doesn't handle OPTIONS method and because I had clicked on other by mistake which caused me to spend a couple days trying answers!\n",
"Bug still active.\nThis happens when JS becomes the initiator for new page(200), or redirect(301/302)\n1 possible way to fix it - it disable JavaScript on request.\nI.e. in puppeteer you can use: page.setJavaScriptEnabled(false) while intercepting request(page.on('request'))\n",
"another possibility is that the server does not handle the OPTIONS request.\n",
"One workaround is to use Postman with same request url, headers and payload.\nIt will give response for sure.\n",
"For me, the issue happens when the returned JSON file is too large.\nIf you just want to see the response, you can get it with the help of Postman. See the steps below:\n\nCopy the request with all information(including URL, header, token, etc) from chrome debugger through Chrome Developer Tools->Network Tab->find the request->right click on it->Copy->Copy as cURL.\nOpen postman, import->Rawtext, paste the content. Postman will recreate the same request. Then run the request you should see the JSON response.\n[Import cURL in postmain][1]: https://i.stack.imgur.com/dL9Qo.png\n\nIf you want to reduce the size of the API response, maybe you can return fewer fields in the response. For mongoose, you can easily do this by providing a field name list when calling the find() method.\nFor exmaple, convert the method from:\nconst users = await User.find().lean();\n\nTo:\nconst users = await User.find({}, '_id username email role timecreated').lean();\n\nIn my case, there is field called description, which is a large string. After removing it from the field list, the response size is reduced from 6.6 MB to 404 KB.\n",
"As long as the body of the Response is not consumed within your code (using .json() or .text() for instance), it won't be displayed in the preview tab of Chrome dev tools\n"
] |
[
331,
75,
70,
37,
30,
17,
3,
2,
1,
0,
0,
0,
0,
0
] |
[
"Use firefox, it always display the response and give the same tools that chrome does.\n"
] |
[
-2
] |
[
"google_chrome",
"google_chrome_devtools",
"http"
] |
stackoverflow_0038924798_google_chrome_google_chrome_devtools_http.txt
|
Q:
Migrating to Spring Boot 3 with ActiveMQ "Classic"
I am trying to migrate to Spring Boot 3 with the new namespace jakarta.xx instead of javax.xx but the ActiveMQ "Classic" client has not been updated and was deprecated. Is there a way to continue using the old ActiveMQ client?
I tried the new ActiveMQ Artemis client but it seems like they are not interoperable with the ActiveMQ "Classic" server.
Including the old ActiveMQ client results in not being able to use JMSTemplate for configuration because JMSTemplate uses jakarta.xx and expects a ConnectionFactory from jakarta.xx not javax.xx
A:
As you noticed, there is no ActiveMQ client that supports the Jakarta-namespace JMS API (in fact, none that supports JMS 2.0), so you really need to move to something else, such as an ActiveMQ Artemis broker with the ActiveMQ Artemis client, or the Qpid JMS AMQP client v2.1.0, both of which support JMS 2.0 and use the Jakarta APIs.
If you are dead set on sticking with ActiveMQ 5.x, you can try using the Qpid JMS v2.1.0 client, which does use the Jakarta JMS API, but you will need to be somewhat careful, as the 5.x broker doesn't support JMS 2.0, so some parts of the API can trigger unexpected behaviors. The AMQP support in the 5.x broker is not as fully integrated and JMS 2.0 aware as that of the Artemis broker, so you can encounter issues you wouldn't see if you moved on to the Artemis broker.
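As a sketch, the Qpid JMS client mentioned above would be pulled in with a Maven dependency along these lines (the version is the one named above; check Maven Central for the current release):
<dependency>
    <groupId>org.apache.qpid</groupId>
    <artifactId>qpid-jms-client</artifactId>
    <version>2.1.0</version>
</dependency>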
|
Migrating to Spring Boot 3 with ActiveMQ "Classic"
|
I am trying to migrate to Spring Boot 3 with the new namespace jakarta.xx instead of javax.xx but the ActiveMQ "Classic" client has not been updated and was deprecated. Is there a way to continue using the old ActiveMQ client?
I tried the new ActiveMQ Artemis client but it seems like they are not interoperable with the ActiveMQ "Classic" server.
Including the old ActiveMQ client results in not being able to use JMSTemplate for configuration because JMSTemplate uses jakarta.xx and expects a ConnectionFactory from jakarta.xx not javax.xx
|
[
"As you noticed there is no ActiveMQ client that supports the Jakarta namespace JMS dependency or in fact none that supports JMS 2.0 so you really need to move to something else such as an ActiveMQ Artemis broker and the ActiveMQ Artemis client or Qpid JMS AMQP client v2.1.0 which both support JMS 2.0 and use the Jakarta APIs.\nIf you are dead set on sticking with ActiveMQ 5.x you can try using the Qpid JMS v2.1.0 client which does use the Jakarata JMS API but you will need to be somewhat careful as the 5.x broker doesn't support JMS 2.0 so some parts of the API can trigger unexpected behaviors. The AMQP support in the 5.x broker is not as fully integrated and JMS 2.0 aware as that of the Artemis broker so you can encounter issues you wouldn't see if you moved on to the Artemis broker.\n"
] |
[
2
] |
[] |
[] |
[
"activemq",
"activemq_artemis",
"spring_boot"
] |
stackoverflow_0074653414_activemq_activemq_artemis_spring_boot.txt
|
Q:
For Loop with ifelse statement to create variable
I am trying to use a for loop with a nested ifelse statement to generate an indicator variable in a dataframe; however, I'm fairly new to using for loops. Other questions I've found seem to cover situations more complex than my dataset, so the answers haven't been ideal for my situation.
Essentially, I have survey recipients and names of their bosses, and I need to identify which recipients are also listed as bosses.
I have a vector of boss names, and I know these names are also survey recipients.
For example (names have been changed):
bossrecip<-c("Tamira Hughes", "John Legend", "Robert Collins")
Then the column that includes the recipients' full names, which I cleaned to be formatted in the same way as the boss names, is the column "RecipientFullName" in my SurveyData.
RecipientFullName<-c("Gosha Jennings", "Robert Stew", "John Legend")
both_recip_boss<-0
SurveyData<-data.frame(RecipientFullName, both_boss_recip)
"both_recip_boss" is where I would like to put a 1 for if the recipient is also a boss, and keep it as a 0 if they are just a recipient
The for-loop I have tried that I think I am the closest with is
for (b in bossrecip) {
ifelse(b==SurveyData$RecipientFullName | SurveyData$both_recip_boss==1,
SurveyData$both_recip_boss<-1,
SurveyData$both_recip_boss<-0)
}
I included the OR statement because I don't want the following names in b to overwrite the previous loop work. However, this just gives me one row with a 1, when I know there should be at least 91 ones in my full dataset. I'm sure I'm messing up something with the logic of for-loops, but I'm uncertain what it is.
I'd be very grateful for any advice and insight into what I am doing incorrectly. Thank you!
A:
No need for a loop. Using %in% you could do:
SurveyData$both_recip_boss <- +(SurveyData$RecipientFullName %in% bossrecip)
SurveyData
#> RecipientFullName both_recip_boss
#> 1 Gosha Jennings 0
#> 2 Robert Stew 0
#> 3 John Legend 1
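For what it's worth, the unary + here just coerces the logical vector returned by %in% to 0/1 integers; an equivalent, more explicit form would be:
SurveyData$both_recip_boss <- as.integer(SurveyData$RecipientFullName %in% bossrecip)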
A:
I rearranged the loop logic to show an approach.
However, R is a vectorized language, and many of its benefits in computation speed and codability come from vectorizing code or using the internal loop-replacement functions (such as the apply family).
bossrecip<-c("Tamira Hughes", "John Legend", "Robert Collins")
SurveyData<-data.frame(RecipientFullName=c("Gosha Jennings", "Robert Stew", "John Legend"),
both_boss_recip=0)
for (i in 1:nrow(SurveyData)){
SurveyData$both_boss_recip[i]<-ifelse(SurveyData$RecipientFullName[i] %in% bossrecip,
1,
0)
}
SurveyData
RecipientFullName both_boss_recip
1 Gosha Jennings 0
2 Robert Stew 0
3 John Legend 1
|
For Loop with ifelse statement to create variable
|
I am trying to use a for loop with a nested ifelse statement to generate an indicator variable in a dataframe. I'm fairly new to using for-loops however. Other questions I've found seem to be more complex than my dataset, so the answers haven't been ideal for my situation.
Essentially, I have survey recipients and names of their bosses, and I need to identify which recipients are also listed as bosses.
I have a vector of the boss names in which I know these names are also survey recipients.
For example (names have been changed):
bossrecip<-c("Tamira Hughes", "John Legend", "Robert Collins")
Then the column that includes the recipients full name, which I cleaned to be formatted in the same way as the boss names, is column "RecipientFullName" in my SurveyData.
RecipientFullName<-c("Gosha Jennings", "Robert Stew", "John Legend")
both_recip_boss<-0
SurveyData<-data.frame(RecipientFullName, both_boss_recip)
"both_recip_boss" is where I would like to put a 1 for if the recipient is also a boss, and keep it as a 0 if they are just a recipient
The for-loop I have tried that I think I am the closest with is
for (b in bossrecip) {
ifelse(b==SurveyData$RecipientFullName | SurveyData$both_recip_boss==1,
SurveyData$both_recip_boss<-1,
SurveyData$both_recip_boss<-0)
}
I included the OR statement because I don't want the following names in b to overwrite the previous loop work. However, this just gives me one row with a 1, when I know there should be at least 91 ones in my full dataset. I'm sure I'm messing up something with the logic of for-loops, but I'm uncertain what it is.
I'd be very grateful for any advice and insight into what I am doing incorrectly. Thank you!
|
[
"No need for a loop. Using %in% you could do:\nSurveyData$both_recip_boss <- +(SurveyData$RecipientFullName %in% bossrecip)\n\nSurveyData\n#> RecipientFullName both_recip_boss\n#> 1 Gosha Jennings 0\n#> 2 Robert Stew 0\n#> 3 John Legend 1\n\n",
"I rearranged the loop logic to show an approach.\nHowever, R is a vectorized language, and much of the benefits in computation speed and codability come from vectorizing code or using the internal loop replacement functions (such as the apply family)\nbossrecip<-c(\"Tamira Hughes\", \"John Legend\", \"Robert Collins\") \n\nSurveyData<-data.frame(RecipientFullName=c(\"Gosha Jennings\", \"Robert Stew\", \"John Legend\"),\n both_boss_recip=0)\n\nfor (i in 1:nrow(SurveyData)){\n SurveyData$both_boss_recip[i]<-ifelse(SurveyData$RecipientFullName[i] %in% bossrecip, \n 1, \n 0)\n}\nSurveyData\n\n\n RecipientFullName both_boss_recip\n1 Gosha Jennings 0\n2 Robert Stew 0\n3 John Legend 1\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"for_loop",
"if_statement",
"r"
] |
stackoverflow_0074657722_for_loop_if_statement_r.txt
|
Q:
SPARQL - What does a single colon do in a prefix?
I've often seen a SPARQL query starting with this prefix:
PREFIX : <http://dbpedia.org/resource/>
But what exactly does it mean to use only a colon ":" in a prefix? I usually see it with an abbreviation in front of the colon, like for example here:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
Is there a reason to write it this way, and if so, what is the function of the prefix?
I would imagine that this covers any abbreviations that were not otherwise assigned, but unfortunately I've not found anything specific about this on the Internet.
A:
There’s no special functionality involved. It’s a regular prefix label, which happens to be empty.
SPARQL: Prefixed Names (bold emphasis mine):
The PREFIX keyword associates a prefix label with an IRI. A prefixed name is a prefix label and a local part, separated by a colon ":". A prefixed name is mapped to an IRI by concatenating the IRI associated with the prefix and the local part. The prefix label or the local part may be empty.
So these three snippets are equivalent:
SELECT * WHERE {
?person a <http://xmlns.com/foaf/0.1/Person> .
}
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT * WHERE {
?person a foaf:Person .
}
PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT * WHERE {
?person a :Person .
}
Using an empty prefix label in SPARQL queries might make sense if all, or almost all, IRIs come from the same ontology, because the query might become more readable then.
|
SPARQL - What does a single colon do in a prefix?
|
I've often seen a SPARQL query starting with this prefix:
PREFIX : <http://dbpedia.org/resource/>
But what exactly does it mean to use only a colon ":" in a prefix? I usually know it as putting another abbreviation in front of it. Like for example here:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
Is there a reason to write it this way, and if so, what is the function of the prefix?
I would imagine that this would cover any other abbreviations that were not assigned. But unfortunately I've not found anything specific about this on the Internet
|
[
"There’s no special functionality involved. It’s a regular prefix label, which happens to be empty.\nSPARQL: Prefixed Names (bold emphasis mine):\n\nThe PREFIX keyword associates a prefix label with an IRI. A prefixed name is a prefix label and a local part, separated by a colon \":\". A prefixed name is mapped to an IRI by concatenating the IRI associated with the prefix and the local part. The prefix label or the local part may be empty.\n\nSo these three snippets are equivalent:\nSELECT * WHERE {\n ?person a <http://xmlns.com/foaf/0.1/Person> .\n}\n\nPREFIX foaf: <http://xmlns.com/foaf/0.1/>\nSELECT * WHERE {\n ?person a foaf:Person .\n}\n\nPREFIX : <http://xmlns.com/foaf/0.1/>\nSELECT * WHERE {\n ?person a :Person .\n}\n\nUsing an empty prefix label in SPARQL queries might make sense if all, or almost all, IRIs come from the same ontology, because the query might become more readable then.\n"
] |
[
1
] |
[] |
[] |
[
"dbpedia",
"sparql",
"sparqlwrapper"
] |
stackoverflow_0074655646_dbpedia_sparql_sparqlwrapper.txt
|
Q:
Set bits using node-redis with a custom offset
I am having a bit of an issue figuring out the API. How can I go about using the API to send a request to Redis to execute something such as "bitfield somebf SET i4 #0 5"? i.e., set the first 4 bits at offset 0 to a 5 (0101). Currently, I execute setbit() 4 times setting the bits manually, which is rather inconvenient.
A:
The snippet of code that you are looking for is:
await client.bitField('key', [{
operation: 'SET',
encoding: 'i4',
offset: '#0',
value: 5
}])
The various commands and their arguments for Node Redis are not terribly well documented. My personal life hack for when I need to figure one of the more complex ones out—which happens quite a bit—is to take a look at the source code for the tests.
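A more complete sketch, assuming node-redis v4 run as an ES module (the key name comes from the question):
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// SET i4 #0 5 -- write 5 into the first 4-bit slot
await client.bitField('somebf', [
  { operation: 'SET', encoding: 'i4', offset: '#0', value: 5 }
]);

// GET i4 #0 -- read it back; bitField returns an array of results
const [value] = await client.bitField('somebf', [
  { operation: 'GET', encoding: 'i4', offset: '#0' }
]);
console.log(value); // 5

await client.disconnect();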
|
Set bits using node-redis with a custom offset
|
I am having a bit of an issue figuring out the API. How can I go about using the API to send a request to Redis to execute something such as "bitfield somebf SET i4 #0 5"? i.e., set the first 4 bits at offset 0 to a 5 (0101). Currently, I execute setbit() 4 times setting the bits manually, which is rather inconvenient.
|
[
"The snippet of code that you are looking for is:\nawait client.bitField('key', [{\n operation: 'SET',\n encoding: 'i4',\n offset: '#0',\n value: 5\n}])\n\nThe various commands and their arguments for Node Redis are not terribly well documented. My personal life hack for when I need to figure one of the more complex ones out—which happens quite a bit—is to take a look at the source code for the tests.\n"
] |
[
1
] |
[] |
[] |
[
"node.js",
"redis"
] |
stackoverflow_0074650914_node.js_redis.txt
|
Q:
JavaScript play audio segment
I need to call a script and specify 3 parameters: audio file ID, audio start time, and audio end time (optional).
With zero prior JS knowledge, I'm unable to get this script I stitched together to work flawlessly. The main bug now is that if EventListener is active and a segment is still playing and the user clicks on another segment, it should cancel active EventListener and assign a new one if end time is specified.
Any help would be great.
<!DOCTYPE html>
<script>
function audio_play(fileID, sTime, eTime){
var audio = document.getElementById(fileID);
var segmentEnd;
audio.pause();
audio.currentTime = 0;
audio.addEventListener('timeupdate', function (){
if (segmentEnd && (audio.currentTime >= segmentEnd)) {
audio.pause();
segmentEnd = 0;
audio.currentTime = 0;
}
console.log(audio.currentTime);
}, false);
audio.currentTime = sTime;
segmentEnd = eTime;
audio.play();
}
</script>
<body>
<html lang="en">
<audio id="id1" src="1.mp3" controls preload></audio>
<audio id="id2" src="2.mp3" controls preload></audio>
<p><a href="javascript:audio_play('id1', 15, 20);">File 1 / Segment (15-20)</a></p>
<p><a href="javascript:audio_play('id1', 60, 65);">File 1 / Segment (60-65)</a></p>
<p><a href="javascript:audio_play('id1', 20);">File 1 / Segment (20 ...)</a></p>
<br>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (15-20)</a></p>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (30-35)</a></p>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (40 ...)</a></p>
</body>
</html>
I tried to add a temporary variable to interrupt the EventListener. It didn't work.
A:
As an experiment, I put this exact question into https://chat.openai.com/chat. This is the response I got back:
Here is a revised version of your code that should work. I added a new variable called segmentStart to keep track of the starting time of the segment that is currently playing. I also added a new conditional statement to check if the segmentStart value is not 0, which means an audio segment is already playing. If that is the case, the function removes the timeupdate event listener, sets the segmentStart and segmentEnd variables to 0, and then plays the new segment.
<!DOCTYPE html>
<script>
function audio_play(fileID, sTime, eTime){
var audio = document.getElementById(fileID);
var segmentEnd;
var segmentStart = 0;
if (segmentStart !== 0) {
audio.removeEventListener('timeupdate', playSegment);
segmentStart = 0;
segmentEnd = 0;
}
audio.pause();
audio.currentTime = 0;
function playSegment() {
if (segmentEnd && (audio.currentTime >= segmentEnd)) {
audio.pause();
segmentEnd = 0;
audio.currentTime = 0;
}
}
audio.addEventListener('timeupdate', playSegment);
segmentStart = sTime;
segmentEnd = eTime;
audio.currentTime = sTime;
audio.play();
}
</script>
<body>
<html lang="en">
<audio id="id1" src="1.mp3" controls preload></audio>
<audio id="id2" src="2.mp3" controls preload></audio>
<p><a href="javascript:audio_play('id1', 15, 20);">File 1 / Segment (15-20)</a></p>
<p><a href="javascript:audio_play('id1', 60, 65);">File 1 / Segment (60-65)</a></p>
<p><a href="javascript:audio_play('id1', 20);">File 1 / Segment (20 ...)</a></p>
<br>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (15-20)</a></p>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (30-35)</a></p>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (40 ...)</a></p>
</body>
</html>
I hope this helps! Let me know if you have any questions.
It looks legit. Let me know if that works.
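One caveat with the generated code above: segmentStart is a local variable re-initialized to 0 on every call, so the removeEventListener branch can never run, and every call attaches a brand-new playSegment listener. A minimal corrected sketch that keeps one handler per audio element:
const handlers = {};  // one timeupdate handler per audio element id

function audio_play(fileID, sTime, eTime) {
  const audio = document.getElementById(fileID);

  // drop the previous segment handler, if any, so old segments can't pause the new one
  if (handlers[fileID]) {
    audio.removeEventListener('timeupdate', handlers[fileID]);
  }

  const onTimeUpdate = () => {
    if (eTime && audio.currentTime >= eTime) {
      audio.pause();
      audio.removeEventListener('timeupdate', onTimeUpdate);
      delete handlers[fileID];
    }
  };

  handlers[fileID] = onTimeUpdate;
  audio.addEventListener('timeupdate', onTimeUpdate);

  audio.pause();
  audio.currentTime = sTime;
  audio.play();
}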
|
JavaScript play audio segment
|
I need to call a script and specify 3 parameters, audio file ID, audio start time, audio end time (optional).
With zero prior JS knowledge, I'm unable to get this script I stitched together to work flawlessly. The main bug now is that if EventListener is active and a segment is still playing and the user clicks on another segment, it should cancel active EventListener and assign a new one if end time is specified.
Any help would be great.
<!DOCTYPE html>
<script>
function audio_play(fileID, sTime, eTime){
var audio = document.getElementById(fileID);
var segmentEnd;
audio.pause();
audio.currentTime = 0;
audio.addEventListener('timeupdate', function (){
if (segmentEnd && (audio.currentTime >= segmentEnd)) {
audio.pause();
segmentEnd = 0;
audio.currentTime = 0;
}
console.log(audio.currentTime);
}, false);
audio.currentTime = sTime;
segmentEnd = eTime;
audio.play();
}
</script>
<body>
<html lang="en">
<audio id="id1" src="1.mp3" controls preload></audio>
<audio id="id2" src="2.mp3" controls preload></audio>
<p><a href="javascript:audio_play('id1', 15, 20);">File 1 / Segment (15-20)</a></p>
<p><a href="javascript:audio_play('id1', 60, 65);">File 1 / Segment (60-65)</a></p>
<p><a href="javascript:audio_play('id1', 20);">File 1 / Segment (20 ...)</a></p>
<br>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (15-20)</a></p>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (30-35)</a></p>
<p><a href="javascript:audio_play('id2', 20);">File 2 / Segment (40 ...)</a></p>
</body>
</html>
I tried to add temporary variable to interrupt EventListener. It didn't work.
|
[
"As an experiment, I put this exact question into https://chat.openai.com/chat. This is the response I got back:\n\nHere is a revised version of your code that should work. I added a new variable called segmentStart to keep track of the starting time of the segment that is currently playing. I also added a new conditional statement to check if the segmentStart value is not 0, which means an audio segment is already playing. If that is the case, the function removes the timeupdate event listener, sets the segmentStart and segmentEnd variables to 0, and then plays the new segment.\n\n<!DOCTYPE html>\n<script>\n function audio_play(fileID, sTime, eTime){\n var audio = document.getElementById(fileID);\n var segmentEnd;\n var segmentStart = 0;\n \n if (segmentStart !== 0) {\n audio.removeEventListener('timeupdate', playSegment);\n segmentStart = 0;\n segmentEnd = 0;\n }\n \n audio.pause();\n audio.currentTime = 0;\n \n function playSegment() {\n if (segmentEnd && (audio.currentTime >= segmentEnd)) {\n audio.pause();\n segmentEnd = 0;\n audio.currentTime = 0;\n }\n }\n \n audio.addEventListener('timeupdate', playSegment);\n segmentStart = sTime;\n segmentEnd = eTime;\n audio.currentTime = sTime;\n audio.play();\n }\n</script>\n\n<body>\n <html lang=\"en\">\n <audio id=\"id1\" src=\"1.mp3\" controls preload></audio>\n <audio id=\"id2\" src=\"2.mp3\" controls preload></audio>\n\n <p><a href=\"javascript:audio_play('id1', 15, 20);\">File 1 / Segment (15-20)</a></p>\n <p><a href=\"javascript:audio_play('id1', 60, 65);\">File 1 / Segment (60-65)</a></p>\n <p><a href=\"javascript:audio_play('id1', 20);\">File 1 / Segment (20 ...)</a></p>\n <br>\n <p><a href=\"javascript:audio_play('id2', 20);\">File 2 / Segment (15-20)</a></p>\n <p><a href=\"javascript:audio_play('id2', 20);\">File 2 / Segment (30-35)</a></p>\n <p><a href=\"javascript:audio_play('id2', 20);\">File 2 / Segment (40 ...)</a></p>\n</body>\n</html>\n\n\nI hope this helps! Let me know if you have any questions.\n\nIt looks legit. Let me know if that works.\n"
] |
[
0
] |
[] |
[] |
[
"audio",
"javascript",
"segment"
] |
stackoverflow_0074594916_audio_javascript_segment.txt
|
Q:
How to show object with Property does not exist on type in template
I have an object with type WithBalance | WithoutBalance:
withBalance : { balance:number, name:string } withoutBalance : { name : string}
<span>{{object?.balance ?? 0}} </span>
But when I try to do so I get error Property 'balance' does not exist on type WithoutBalance.
How to solve this problem?
A:
Optional chaining (?.) lets us write code where TypeScript can immediately stop running some expressions if we run into a null or undefined. So in your case you have used object?.balance, which means: if object is not null, try to access the balance property. So if object is of the WithoutBalance type, it raises an error like this: Property 'balance' does not exist on type WithoutBalance.
You can use in operator in your component like this:
getBalance (){
if ("balance" in this.object) {
return this.object.balance
}
return 0;
}
<span>
{{ getBalance()}}
</span>
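An alternative sketch using a user-defined type guard, so TypeScript narrows the union for you (the type shapes are taken from the question):
interface WithBalance { balance: number; name: string; }
interface WithoutBalance { name: string; }

function hasBalance(obj: WithBalance | WithoutBalance): obj is WithBalance {
  return 'balance' in obj;
}

// in the component:
getBalance(): number {
  return hasBalance(this.object) ? this.object.balance : 0;
}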
A:
You should consider using type narrowing in the Angular template. Check out this answer.
In such cases, either introduce a new property named type in the objects, with the name of the type as a string, and add a condition; or check for the property inside the object in the template, as in the answer above.
Solution 1
withBalance : { balance:number, name:string, type:"withBalance" } withoutBalance : { name : string, type:"withoutBalance"}
<span>{{object.type=="withBalance"? object.balance: 0}} </span>
Solution 2
getBalance (obj:any){
if ("balance" in obj) {
return obj.balance
}
return 0;
}
<span>
{{ getBalance(object)}}
</span>
A:
You can simply do this:
// in your data.model.ts define the type
export interface Balance {
name: string;
balance?: number
}
// this is where you are going to instantiate the model
const withBalance: Balance = {
name: 'with balance',
balance: 50
}
const balanceExample: Balance = {
name: 'Without balance',
// balance is optional here so you can either add it or leave it
}
// in your template
<span> {{balanceExample.balance != null ? balanceExample.balance : 0}} </span>
It is important to use an explicit != null check (or ??) rather than a truthiness test such as balanceExample.balance || 0, because balance is of type number and a legitimate value of 0 is falsy, so a truthiness test would treat it as missing. != null (and ??) only treat null and undefined as missing.
A:
A workaround:
<span>{{$any(object).balance ?? 0}} </span>
This will silence the error. Use it if you are sure that your model is right.
|
How to show object with Property does not exist on type in template
|
I have object with type WithBalance | WithoutBalance
withBalance : { balance:number, name:string } withoutBalance : { name : string}
<span>{{object?.balance ?? 0}} </span>
But when I try to do so I get error Property 'balance' does not exist on type WithoutBalance.
How to solve this problem?
|
[
"Optional chaining (?.) lets us write code where TypeScript can immediately stop running some expressions if we run into a null or undefined. So in your case you have used object?.balance which means if object is not null try to access the balance property. So if object is in withoutBalance type it raises an error like this: Property 'balance' does not exist on type WithoutBalance.\nYou can use in operator in your component like this:\ngetBalance (){\n if (\"balance\" in this.object) {\n return this.object.balance\n }\n return 0;\n }\n\n<span>\n {{ getBalance()}}\n</span>\n\n",
"You should consider using Type Narrowing in the angular template. Check out this answer\nIn such cases either introduce a new property named type in the objects with the name of the type as a string and add a condition or either check for the property inside the object in the template like the above answer.\nSolution 1\nwithBalance : { balance:number, name:string, type:\"withBalance\" } withoutBalance : { name : string, type:\"withoutBalance\"}\n\n\n<span>{{object.type==\"withBalance\"? object.balance: 0}} </span>\n\nSolution 2\ngetBalance (obj:any){\n if (\"balance\" in obj) {\n return obj.balance\n }\n return 0;\n }\n\n<span>\n {{ getBalance(object)}}\n</span>\n\n",
"You can simply do this:\n// in your data.model.ts define the type\nexport interface Balance {\n name: string;\n balance?: number\n}\n\n// this is where you are going to instantiate the model\nconst withBalance: Balance = {\n name: 'with balance',\n balance: 50\n}\n\nconst balanceExample: Balance = {\n name: 'Without balance',\n // balance is optional here so you can either add it or leave it\n}\n\n// in your template\n<span> {{balanceExample.balance == null ? balanceExample.balance : 0}} </span>\n\n\nIt is important to use == null instead of withoutBalance.balance??0 because balance is of type number and if you do actually have a balance and its value happens to be 0 or 1 typescript will convert it into a boolean so using == null will ensure that it will only be true if balance is null or undefined.\n",
"A workaround:\n<span>{{$any(object).balance ?? 0}} </span>\n\nThis will silent the error. Use it if you are sure that your model is right\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"angular",
"typescript"
] |
stackoverflow_0069553185_angular_typescript.txt
|
Q:
SQL Decode format numbers only
I want to format amounts to salary format, e.g. 10000 becomes 10,000, so I use to_char(amount, '99,999,99')
SELECT SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0)) Salary,
SUM(DECODE(e.element_name,'Transportation Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Transportation,
SUM(DECODE(e.element_name,'GOSI Processing',to_char(v.screen_entry_value,'99,999,99'),0)) GOSI,
SUM(DECODE(e.element_name,'Housing Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Housing
FROM values v,
values_types vt,
elements e
WHERE vt.value_type = 'Amount'
This gives the error "invalid number" because not all values are numbers unless value_type equals 'Amount', but I guess DECODE checks all values anyway, although as far as I know execution begins with FROM, then WHERE, then SELECT. What's going wrong here?
A:
You said you added decode(...), but it looks like you might have actually added sum(decode(...)).
You are converting your values to strings with to_char(v.screen_entry_value,'99,999,99'), so your decode() generates a string - the default 0 will be converted to '0' - giving you a value like '1,234,56'. Then you are aggregating those, so sum() has to implicitly convert those strings to numbers - and it is throwing the error when it tries to do that:
select to_number('1,234,56') from dual
will also get "ORA-01722: invalid number", unless you supply a similar format mask so it knows how to interpret it. You could do that, e.g.:
SUM(to_number(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0),'99,999,99'))
... but it's maybe more obvious that something is strange, and even if you did, you would end up with a number, not a formatted string.
So instead of doing:
SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0))
you should format the result after aggregating:
to_char(SUM(DECODE(e.element_name,'Basic Salary',v.screen_entry_value,0)),'99,999,99')
fiddle with dummy tables, data and joins.
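A quick illustration of the failure and the fix, runnable against DUAL:
SELECT to_char(123456, '99,999,99') FROM dual;        -- ' 1,234,56'
SELECT to_number('1,234,56') FROM dual;               -- ORA-01722: invalid number
SELECT to_number('1,234,56', '99,999,99') FROM dual;  -- 123456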
|
SQL Decode format numbers only
|
I want to format amounts to salary format, e.g. 10000 becomes 10,000, so I use to_char(amount, '99,999,99')
SELECT SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0)) Salary,
SUM(DECODE(e.element_name,'Transportation Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Transportation,
SUM(DECODE(e.element_name,'GOSI Processing',to_char(v.screen_entry_value,'99,999,99'),0)) GOSI,
SUM(DECODE(e.element_name,'Housing Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Housing
FROM values v,
values_types vt,
elements e
WHERE vt.value_type = 'Amount'
this gives error invalid number because not all values are numbers until value_type is equal to Amount but I guess decode check all values anyway although what I know is that the execution begins with from then where then select, what's going wrong here?
|
[
"You said you added decode(...), but it looks like you might have actually added sum(decode(...)).\nYou are converting your values to strings with to_char(v.screen_entry_value,'99,999,99'), so your decode() generates a string - the default 0 will be converted to '0' - giving you a value like '1,234,56'. Then you are aggregating those, so sum() has to implicitly convert those strings to numbers - and it is throwing the error when it tries to do that:\nselect to_number('1,234,56') from dual\n\nwill also get \"ORA-01722: invalid number\", unless you supply a similar format mask so it knows how to interpret it. You could do that, e.g.:\nSUM(to_number(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0),'99,999,99'))\n\n... but it's maybe more obvious that something is strange, and even if you did, you would end up with a number, not a formatted string.\nSo instead of doing:\nSUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0))\n\nyou should format the result after aggregating:\nto_char(SUM(DECODE(e.element_name,'Basic Salary',v.screen_entry_value,0)),'99,999,99')\n\nfiddle with dummy tables, data and joins.\n"
] |
[
0
] |
[] |
[] |
[
"decode",
"oracle",
"sql"
] |
stackoverflow_0074657322_decode_oracle_sql.txt
|
Q:
Bitbucket Pipelines - AWSCLI - ERROR: Could not find a version that satisfies the requirement botocore==1.29.21 (from awscli)
Today we faced a specific error with a dependency of the AWS CLI called botocore.
We are using pip with a Bitbucket cache, as in this YAML below:
- step: &build-and-publish
name: Build and Publish
services:
- docker
caches:
- pip
script:
- pip3 install awscli
During the building process, the following error happens: ERROR: Could not find a version that satisfies the requirement botocore==1.29.21 (from awscli) .
A:
Our solution was to upgrade pip using this line in the bitbucket-pipelines.yml file: pip install --upgrade pip.
The final bitbucket-pipelines.yml:
- step: &build-and-publish
name: Build and Publish
services:
- docker
caches:
- pip
script:
- pip install --upgrade pip
- pip3 install awscli
We don't know if it's a bug on Bitbucket Pipelines or something wrong with our project. We are still investigating it. But maybe this solution can be helpful if it's a bug.
A:
awscli 1.27.21, which depends on botocore 1.29.21, and botocore itself were simultaneously published (uploaded to pypi.org) yesterday around 20:16 UTC.
My guess is there is a small time window with non-zero chance of pypi.org not having consistently distributed this info globally yet.
Just try again now.
|
Bitbucket Pipelines - AWSCLI - ERROR: Could not find a version that satisfies the requirement botocore==1.29.21 (from awscli)
|
Today we have faced an specific error about a dependency of aws cli called botocore.
We are using pip from a bitbucket caches like in this yml below:
- step: &build-and-publish
name: Build and Publish
services:
- docker
caches:
- pip
script:
- pip3 install awscli
During the building process, the following error happens: ERROR: Could not find a version that satisfies the requirement botocore==1.29.21 (from awscli) .
|
[
"Our Solution was to upgrade pip using this code in the bitbucket-pipelines.yml file: pip install --upgrade pip .\nThe final bitbucket-pipelines.yml:\n - step: &build-and-publish\n name: Build and Publish\n services:\n - docker\n caches:\n - pip\n script:\n - pip install --upgrade pip \n - pip3 install awscli\n\nWe don't know if it's a bug on Bitbucket Pipelines or something wrong with our project. We are still investigating it. But maybe this solution can be helpful if it's a bug.\n",
"awscli 1.27.21 depending on botocore 1.29.21 were simultaneously published (uploaded to pypi.org) yesterday around 20:16 UTC.\nMy guess is there is a small time window with non-zero chance of pypi.org not having consistently distributed this info globally yet.\nJust try again now.\n"
] |
[
0,
0
] |
[] |
[] |
[
"aws_cli",
"bitbucket",
"bitbucket_pipelines",
"pi",
"pypi"
] |
stackoverflow_0074657673_aws_cli_bitbucket_bitbucket_pipelines_pi_pypi.txt
|
Q:
Compare arrA(str) and arrB(obj) and return a list of objects from arrB that match both its property keys to arrA's strings
Need to find the most efficient way to compare arrA and arrB and return an array of arrB's elements that match both keys to strings found in arrA.
There will only ever be two properties per object in arrB, but the number of elements in arrA will shift dramatically depending on the number of elements in arrB. Also, arrB's number of elements is unknown.
const arrA = ["green", "blue", "orange"]
const arrB = [
{ orange: 4, green: 4},
{ green: 0, yellow: 0},
{ yellow: 1, orange: 4 },
{ blue: 2, green: 1 },
{ blue: 2, yellow: 1 },
{ green: 3, yellow: 2 },
{ green: 1, blue: 3},
{ green: 5, yellow: 2 },
{ green: 5, blue: 2}
]
The result would look like:
var arrC= [
{orange: 4, green: 4 },
{blue: 2, green: 1 },
{green: 1, blue: 3 },
{green: 5, blue: 2 }
]
I've attempted a mix of ideas along the lines of the options below, but clearly none of them work because they are a mix of code and pseudocode:
const compare = (arrA, arrB) => {
const color_obj = arrB[i]
(color_obj) => {
const [[color1, val1], [color2, val2]] = Object.entries(color_obj)
for (let i=0; i<arrA; i++) {
if(arrA[i] === color1 && arrA[i+1] === color2 || arrA[i] === color2 && arrA[i+1] === color1)
filteredColorObjects + arrb[obj]
}
}
}
Or
const compare = arrA.filter((e) => Object.entries(arrB.includes(e[1] && e[2]), [])
Or
const compare = arrB.filter(o => arrA.includes(o.arrB[i])).map(o => o[key] && o.arrB[i])).map(o => o[key]);
Or
const compare = arrA.filter((e) {
return !!arrB.find(function(o) {
return arrB[i] === e;
});
A:
Try this:
var arrC = []
arrB.forEach( e => {
if(Object.keys(e).every(i => arrA.includes(i))){
arrC.push(e);
}
})
.forEach: iterates over all elements of the array (arrB)
Object.keys(e): gets all keys of the object e as an array
.every: iterates over all elements of the array (Object.keys(e)) and returns true only if all iterations returned true.
So here we are iterating over the keys of each object inside arrB.
.includes: returns true if the value we pass exists in the array
Now, if all the keys of the object are in arrA, we push the object to arrC.
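The same logic can also be written as a single filter expression; using a Set for the lookup keeps it fast when arrA grows (a minimal sketch using the arrays from the question):
const allowed = new Set(arrA);
const arrC = arrB.filter(e => Object.keys(e).every(k => allowed.has(k)));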
|
Compare arrA(str) and arrB(obj) and return a list of objects from arrB that match both its property keys to arrA's strings
|
Need to find the most efficient way to compare arrA and arrB and return an array of arrB's elements whose two property keys both match strings found in arrA.
There will only ever be two properties per object in arrB, but the number of elements in arrA will shift dramatically depending on the number of elements in arrB. Also, arrB's number of elements is unknown.
const arrA = ["green", "blue", "orange"]
const arrB = [
{ orange: 4, green: 4},
{ green: 0, yellow: 0},
{ yellow: 1, orange: 4 },
{ blue: 2, green: 1 },
{ blue: 2, yellow: 1 },
{ green: 3, yellow: 2 },
{ green: 1, blue: 3},
{ green: 5, yellow: 2 },
{ green: 5, blue: 2}
]
The result would look like:
var arrC= [
{orange: 4, green: 4 },
{blue: 2, green: 1 },
{green: 1, blue: 3 },
{green: 5, blue: 2 }
]
I've attempted a mix of ideas that look like varying options from below but clearly none of them function because they are a mix of code and pseudocode:
const compare = (arrA, arrB) => {
const color_obj = arrB[i]
(color_obj) => {
const [[color1, val1], [color2, val2]] = Object.entries(color_obj)
for (let i=0; i<arrA; i++) {
if(arrA[i] === color1 && arrA[i+1] === color2 || arrA[i] === color2 && arrA[i+1] === color1)
filteredColorObjects + arrb[obj]
}
}
}
Or
const compare = arrA.filter((e) => Object.entries(arrB.includes(e[1] && e[2]), [])
Or
const compare = arrB.filter(o => arrA.includes(o.arrB[i])).map(o => o[key] && o.arrB[i])).map(o => o[key]);
Or
const compare = arrA.filter((e) {
return !!arrB.find(function(o) {
return arrB[i] === e;
});
|
[
"Try this:\nvar arrC = []\narrB.forEach( e => {\n if(Object.keys(e).every(i => arrA.includes(i))){\n arrC.push(e);\n }\n})\n\n.forEach: iterates over all elements of the array (arrB)\nObject.keys(e): gets all keys in the object e as array\n.every: iterates over all elements of the array (Object.keys(e)) and return true only if all iterations returned true.\n#So here we are iterating over the keys of the objects inside arrB.\n.includes: will return true if the value we send exists in the array\nNow if all the keys of the object are in arrA, we push the object to arrC.\n"
] |
[
0
] |
[] |
[] |
[
"arrays",
"comparison",
"intersection",
"javascript",
"object"
] |
stackoverflow_0074657810_arrays_comparison_intersection_javascript_object.txt
|
Q:
How can I print the intermediate results in CPLEX?
Other than decision variables, there can be other variables used in the subject to block of CPLEX. We may have to use them for calculations and analysis. How can I print those other variables to an Excel sheet?
A:
Any array of dvar or data can be written into Excel
See https://github.com/AlexFleischerParis/oplexcel/blob/main/write3Darray.mod
range A=1..2;
range B=1..3;
range C=1..4;
dvar int X[A][B][C];
subject to
{
forall(a in A,b in B,c in C) X[a][b][c]==a*b*c;
}
tuple someTuple{
int a;
int b;
int c;
int value;
};
{someTuple} someSet = {<i,j,k,X[i][j][k]> | i in A, j in B, k in C};
https://github.com/AlexFleischerParis/oplexcel/blob/main/write3Darray.dat
SheetConnection sheet("write3Darray.xlsx");
someSet to SheetWrite(sheet,"A1:D24");
|
How can I print the intermediate results in CPLEX?
|
Other than decision variables, there can be other variables used in the subject to block of CPLEX. We may have to use them for calculations and analysis. How can I print those other variables to an Excel sheet?
|
[
"Any array of dvar or data can be written into Excel\nSee https://github.com/AlexFleischerParis/oplexcel/blob/main/write3Darray.mod\nrange A=1..2;\nrange B=1..3;\nrange C=1..4;\n\n\ndvar int X[A][B][C];\n\nsubject to\n{\nforall(a in A,b in B,c in C) X[a][b][c]==a*b*c;\n}\n\ntuple someTuple{\nint a;\nint b;\nint c;\nint value;\n};\n\n\n{someTuple} someSet = {<i,j,k,X[i][j][k]> | i in A, j in B, k in C}; \n\nhttps://github.com/AlexFleischerParis/oplexcel/blob/main/write3Darray.dat\nSheetConnection sheet(\"write3Darray.xlsx\");\n\nsomeSet to SheetWrite(sheet,\"A1:D24\"); \n\n"
] |
[
0
] |
[] |
[] |
[
"conflict",
"cplex",
"php",
"var",
"variables"
] |
stackoverflow_0074656389_conflict_cplex_php_var_variables.txt
|
Q:
Trying to apply fit_transofrm() function from sklearn.compose.ColumnTransformer class on array but getting "tuple index out of range" error
I am a beginner in ML/AI and am trying to do pre-processing on a dataset of digits that I've made myself. I want to apply one-hot encoding to my categorical variable (which is the dependent one; I don't know if that is important) but I get a "tuple index out of range" error. I searched the internet and the only solution offered was the reshape() function, but it didn't help, or maybe I am not using it correctly.
Here is my dataset: the first 28 columns are the digits encoded as 1s and 0s, and the last column contains the digits themselves
Here is my code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
#Data Preprocessing
dataset = pd.read_csv('dataset_cisla_polia2.csv',header = None,sep = ';')
X = dataset.iloc[:, 0:28].values
y = dataset.iloc[:, 29].values
print(X)
print(y)
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough')
y = np.array(ct.fit_transform(y))
I am expecting to get variable y to be like this:
digit 1 is encoded that way = [1 0 0 0 0 0 0 0 0 0 ],
digit 2 is encoded that way = [0 1 0 0 0 0 0 0 0 0 ]
and so on..
A:
This is because ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough') will one-hot encode the column of index 29.
You are fit-transforming y which only has 1 column. You can change the 29 to 0.
ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[0])],remainder = 'passthrough')
Edit
You also need to change the iloc so the numpy array keeps a 2-D column structure.
y = dataset.iloc[:, [29]].values
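As a side note, since only a single column is being encoded here, a plain OneHotEncoder works without the ColumnTransformer wrapper. A minimal sketch, assuming y holds the digit labels from column 29:
from sklearn.preprocessing import OneHotEncoder

y = dataset.iloc[:, [29]].values          # keep the 2-D (n_samples, 1) shape
encoder = OneHotEncoder(sparse=False)     # use sparse_output=False on scikit-learn >= 1.2
y = encoder.fit_transform(y)              # one row per sample, one column per digit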
|
Trying to apply fit_transofrm() function from sklearn.compose.ColumnTransformer class on array but getting "tuple index out of range" error
|
I am a beginner in ML/AI and am trying to do pre-processing on a dataset of digits that I've made myself. I want to apply one-hot encoding to my categorical variable (which is the dependent one; I don't know if that is important) but I get a "tuple index out of range" error. I searched the internet and the only solution offered was the reshape() function, but it didn't help, or maybe I am not using it correctly.
Here is my dataset: the first 28 columns are the digits encoded as 1s and 0s, and the last column contains the digits themselves
Here is my code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
#Data Preprocessing
dataset = pd.read_csv('dataset_cisla_polia2.csv',header = None,sep = ';')
X = dataset.iloc[:, 0:28].values
y = dataset.iloc[:, 29].values
print(X)
print(y)
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough')
y = np.array(ct.fit_transform(y))
I am expecting to get variable y to be like this:
digit 1 is encoded that way = [1 0 0 0 0 0 0 0 0 0 ],
digit 2 is encoded that way = [0 1 0 0 0 0 0 0 0 0 ]
and so on..
|
[
"This is because ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough') will one-hot encode the column of index 29.\nYou are fit-transforming y which only has 1 column. You can change the 29 to 0.\nct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[0])],remainder = 'passthrough')\n\nEdit\nYou also need to change the iloc to keep the numpy array as column structure.\ny = dataset.iloc[:, [29]].values\n\n"
] |
[
0
] |
[] |
[] |
[
"artificial_intelligence",
"data_preprocessing",
"machine_learning",
"python",
"scikit_learn"
] |
stackoverflow_0074657678_artificial_intelligence_data_preprocessing_machine_learning_python_scikit_learn.txt
|
Q:
Elastica get all results order by matched
I am new to Elastica and I would like to get all products, but sorted by favorites.
In my model poductDocument, I added a collection field to store the ids of the users who added the product to their favorites:
class poductDocument implements DocumentInterface
{
private int $id;
private string $label;
private Collection $userIdsWhoAddedThisProductToFavorite;
public function getId(): int
{
return $this->id;
}
public function setId(int $id): self
{
$this->id = $id;
return $this;
}
public function getLabel(): string
{
return $this->label;
}
public function setLabel(string $label): self
{
$this->label = $label;
return $this;
}
public function getUserIdsWhoAddedThisProductToFavorite(): Collection
{
return $this->userIdsWhoAddedThisProductToFavorite;
}
public function setUserIdsWhoAddedThisProductToFavorite(array $data): self
{
$this->userIdsWhoAddedThisProductToFavorite = new ArrayCollection($data);
return $this;
}
}
And my mapping :
settings:
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 60s
mappings:
dynamic: false
properties:
id:
type: integer
label:
type: keyword
fields:
autocomplete:
type: text
analyzer: app_autocomplete
search_analyzer: standard
text:
type: text
analyzer: french
fielddata: true
user_ids_who_added_this_product_to_favorite:
type: integer
And in my custom filter i used Query term to find my favorite products
public function applySort(Query $query, Query\BoolQuery $boolQuery): void
{
$termQuery = new Query\Term();
$termQuery->setTerm('user_ids_who_added_this_product_to_favorite', $this->getUser()->getId());
$boolQuery->addMust($termQuery);
}
This code works but gives me just the favorite products. What I would like is to get all my products, sorted with the favorites first.
For example, if I have 4 products and products 1 and 2 are favorites, my code gives me:
product 1
product 2
and I'd like the result to be:
product 1
product 2
product 3
product 4
Any help please
A:
The idea here is to use addShould instead of addMust, so that matching a favorite boosts the product's score rather than acting as a hard filter that excludes everything else:
$termQuery = new Query\Term();
$termQuery->setTerm('user_ids_who_added_this_product_to_favorite', $this->getUser()->getId());
$boolQuery->addShould($termQuery);
$filterBoolQuery = new Query\BoolQuery();
$matchQuery = new Query\MatchQuery('user_ids_who_added_this_product_to_favorite', $this->getUser()->getId());
$filterBoolQuery->addMust($matchQuery);
|
Elastica get all results order by matched
|
I am new to Elastica and I would like to get all products, but sorted by favorites.
In my model poductDocument, I added a collection field to store the ids of the users who added the product to their favorites:
class poductDocument implements DocumentInterface
{
private int $id;
private string $label;
private Collection $userIdsWhoAddedThisProductToFavorite;
public function getId(): int
{
return $this->id;
}
public function setId(int $id): self
{
$this->id = $id;
return $this;
}
public function getLabel(): string
{
return $this->label;
}
public function setLabel(string $label): self
{
$this->label = $label;
return $this;
}
public function getUserIdsWhoAddedThisProductToFavorite(): Collection
{
return $this->userIdsWhoAddedThisProductToFavorite;
}
public function setUserIdsWhoAddedThisProductToFavorite(array $data): self
{
$this->userIdsWhoAddedThisProductToFavorite = new ArrayCollection($data);
return $this;
}
}
And my mapping :
settings:
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 60s
mappings:
dynamic: false
properties:
id:
type: integer
label:
type: keyword
fields:
autocomplete:
type: text
analyzer: app_autocomplete
search_analyzer: standard
text:
type: text
analyzer: french
fielddata: true
user_ids_who_added_this_product_to_favorite:
type: integer
And in my custom filter i used Query term to find my favorite products
public function applySort(Query $query, Query\BoolQuery $boolQuery): void
{
$termQuery = new Query\Term();
$termQuery->setTerm('user_ids_who_added_this_product_to_favorite', $this->getUser()->getId());
$boolQuery->addMust($termQuery);
}
This code works but gives me just the favorite products. What I would like is to get all my products, sorted with the favorites first.
For example, if I have 4 products and products 1 and 2 are favorites, my code gives me:
product 1
product 2
and I'd like the result to be:
product 1
product 2
product 3
product 4
Any help please
|
[
" $termQuery = new Query\\Term();\n $termQuery->setTerm('user_ids_who_added_this_product_to_favorite', $this->getUser()->getId());\n $boolQuery->addShould($termQuery);\n\n $filterBoolQuery = new Query\\BoolQuery();\n $matchQuery = new Query\\MatchQuery('user_ids_who_added_this_product_to_favorite', $this->getUser()->getId());\n $filterBoolQuery->addMust($matchQuery);\n\n"
] |
[
0
] |
[] |
[] |
[
"elastica",
"elasticsearch",
"php",
"symfony"
] |
stackoverflow_0074654250_elastica_elasticsearch_php_symfony.txt
|
Q:
python socket recv timeout does not time-out
Synopsis: server hangs on socket.recv() even though a socket.settimeout() was set.
Have a simple socket based server that loops over commands (simple text messages) sent from client(s) until it receives an 'end' command. I simulate a broken client by
- making a connection
- sending a command
- receive the result
- exit (never sending the 'end' command to server)
The whole system worked perfectly when the server/client protocol was adhered to, but with the broken client simulation the server is NOT timing out on the recv.
Relevant server code (simplified):
ip = '192.168.77.170'
port = 12345
args = { app specific data }
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) as serverSock:
serverSock.bind((ip, port))
serverSock.listen()
while True:
sock, addr = serverSock.accept()
thread = threading.Thread(target=session, args=(sock, args))
thread.name = "client {}".format(addr)
thread.daemon = True
thread.start()
def session(sock, args):
sess = threading.local()
sess.sock = sock
sess.__dict__.update(args)
print("-" * 60)
print("Session opened from={addr}.".format(addr=sock.getpeername()))
sock.settimeout(10.0)
while True:
try:
print('about to wait on recv', sock.gettimeout())
cmd = recvString(sock)
print('got cmd =', cmd)
if cmd.startswith('foo'): doFoo(sess, cmd)
elif cmd.startswith('bar'): doBar(sess, cmd)
elif cmd.startswith('baz'): doBaz(sess, cmd)
elif cmd.startswith('end'): break
else:
raise Exception("Protocol Error: bad command '{}'".format(cmd))
except TimeoutError as err:
print("Error protocol timeout")
break
finally:
try:
sock.close()
except:
pass
print("Session closed.")
def recvString(sock):
buff = bytearray()
while True:
b = sock.recv(1)
if b == b'\x00': break
buff += b
return buff.decode() if len(buff) else ''
When running with broken client I get
> about to wait on recv cmd 10.0
> got cmd = foo
> about to wait on recv cmd 10.0
waits forever (and has very high CPU consumption to add insult to injury)
I've RTFM'd thoroughly and multiple times, and looked at other similar SO postings and all indicate that a simple settimeout() should work. Can't see what I'm doing wrong. Any help greatly appreciated.
A:
while True:
b = sock.recv(1)
if b == b'\x00': break
buff += b
If the peer closes the connection then sock.recv(1) will return b'', i.e. no data. This situation is not accounted for.
As a result there will be an endless and busy loop where sock.recv(1) will return with b'', only to be called again and immediately return with b'' etc. This also explains the high CPU.
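A minimal fix is to treat the empty read as a closed connection and raise, so the session loop can break out (a sketch; the session loop should then also catch ConnectionError next to TimeoutError):
def recvString(sock):
    buff = bytearray()
    while True:
        b = sock.recv(1)
        if b == b'':                 # peer closed the connection
            raise ConnectionError("peer closed connection")
        if b == b'\x00':
            break
        buff += b
    return buff.decode() if len(buff) else ''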
|
python socket recv timeout does not time-out
|
Synopsis: server hangs on socket.recv() even though a socket.settimeout() was set.
Have a simple socket based server that loops over commands (simple text messages) sent from client(s) until it receives an 'end' command. I simulate a broken client by
- making a connection
- sending a command
- receive the result
- exit (never sending the 'end' command to server)
The whole system worked perfectly when the server/client protocol was adhered to, but with the broken client simulation the server is NOT timing out on the recv.
Relevant server code (simplified):
ip = '192.168.77.170'
port = 12345
args = { app specific data }
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) as serverSock:
serverSock.bind((ip, port))
serverSock.listen()
while True:
sock, addr = serverSock.accept()
thread = threading.Thread(target=session, args=(sock, args))
thread.name = "client {}".format(addr)
thread.daemon = True
thread.start()
def session(sock, args):
sess = threading.local()
sess.sock = sock
sess.__dict__.update(args)
print("-" * 60)
print("Session opened from={addr}.".format(addr=sock.getpeername()))
sock.settimeout(10.0)
while True:
try:
print('about to wait on recv', sock.gettimeout())
cmd = recvString(sock)
print('got cmd =', cmd)
if cmd.startswith('foo'): doFoo(sess, cmd)
elif cmd.startswith('bar'): doBar(sess, cmd)
elif cmd.startswith('baz'): doBaz(sess, cmd)
elif cmd.startswith('end'): break
else:
raise Exception("Protocol Error: bad command '{}'".format(cmd))
except TimeoutError as err:
print("Error protocol timeout")
break
finally:
try:
sock.close()
except:
pass
print("Session closed.")
def recvString(sock):
buff = bytearray()
while True:
b = sock.recv(1)
if b == b'\x00': break
buff += b
return buff.decode() if len(buff) else ''
When running with broken client I get
> about to wait on recv cmd 10.0
> got cmd = foo
> about to wait on recv cmd 10.0
waits forever (and has very high CPU consumption to add insult to injury)
I've RTFM'd thoroughly and multiple times, and looked at other similar SO postings and all indicate that a simple settimeout() should work. Can't see what I'm doing wrong. Any help greatly appreciated.
|
[
"while True:\n b = sock.recv(1)\n if b == b'\\x00': break\n buff += b\n\nIf the peer closes the connection then sock.recv(1) will return b'', i.e. no data. This situation is not accounted for.\nAs a result there will be an endless and busy loop where sock.recv(1) will return with b'', only to be called again and immediately return with b'' etc. This also explains the high CPU.\n"
] |
[
0
] |
[] |
[] |
[
"python_3.x",
"recv",
"settimeout",
"sockets",
"timeout"
] |
stackoverflow_0074657526_python_3.x_recv_settimeout_sockets_timeout.txt
|
Q:
How to align items to right side in MarkdownString of VSCode
While making a VSCode extension I have a requirement to show decorations in a file. When you hover over a decoration you see the default hover with some information. I am using the following code to create the hover using MarkdownString. Even after using "float:right;" on the span, the View Comment section won't move to the far right of the hover; it just stays on the left. Can someone help me make it right-aligned? Screenshot added below for reference.
const myContent = new MarkdownString(`<span style='float:right;'><a href='#'>View Comment</a></span>`);
myContent.isTrusted = true;
myContent.supportHtml = true;
const decoration = { range, hoverMessage:myContent };
Screenshot
I want the highlighted red box area to be at the far right of the hover. [The image used is just for reference.]
A:
After trying the same thing, I found this question still unanswered.
The MarkdownString documentation states that:
When supportHtml is true, the markdown render will also allow a safe subset of html tags and attributes to be rendered. See https://github.com/microsoft/vscode/blob/6d2920473c6f13759c978dd89104c4270a83422d/src/vs/base/browser/markdownRenderer.ts#L296 for a list of all supported tags and attributes.
Looking at the code linked, we can see that while "span" is allowed the "style" and "class" attributes, they are very strictly filtered and only allow some of the vscode built-ins to be used.
So, while styling elements with inline CSS or even a custom class might be possible, there is another approach to this - using Markdown tables, which get translated into HTML tables and allow custom alignment.
For example, what I ended up using was something like:
new MarkdownString(`
| |
| ---: |
| the very long line we want the below link to right-align to |
| [Link text](https://linktarget "Link hover message") |
`)
The idea came from the github issue linked in the the source file mentioned above.
A couple of notes:
codicons are still supported inside Markdown tables;
the HTML table generated does not expand to 100% of the hover message box, so if you want to right-align the link text with a longer line, they have to be in the same Markdown table;
Markdown table detection is very picky about spaces and newlines, you might have to play a bit with the formatting of your string for vscode to correctly transform it into a HTML table;
the Markdown table header cannot be omitted, at least that was the conclusion of my testing; it can however be empty;
the resulting HTML table might add some small invisible borders, I haven't investigated, but my alignment with other rows in the hover seemed 1px off;
trying to bypass the Markdown table creation and writing my own HTML table broke codicons support, although I haven't investigated that too much either;
|
How to align items to right side in MarkdownString of VSCode
|
While making a VSCode extension I have a requirement to show decorations in a file. When you hover over a decoration you see the default hover with some information. I am using the following code to create the hover using MarkdownString. Even after using "float:right;" on the span, the View Comment section won't move to the far right of the hover; it just stays on the left. Can someone help me make it right-aligned? Screenshot added below for reference.
const myContent = new MarkdownString(`<span style='float:right;'><a href='#'>View Comment</a></span>`);
myContent.isTrusted = true;
myContent.supportHtml = true;
const decoration = { range, hoverMessage:myContent };
Screenshot
I want the highlighted red box area to be at the far right of the hover. [The image used is just for reference.]
|
[
"After trying the same thing, I found this question still unanswered.\nThe MarkdownString documentation states that:\n\nWhen supportHtml is true, the markdown render will also allow a safe subset of html tags and attributes to be rendered. See https://github.com/microsoft/vscode/blob/6d2920473c6f13759c978dd89104c4270a83422d/src/vs/base/browser/markdownRenderer.ts#L296 for a list of all supported tags and attributes.\n\nLooking at the code linked, we can see that while \"span\" is allowed the \"style\" and \"class\" attributes, they are very strictly filtered and only allow some of the vscode built-ins to be used.\nSo, while styling elements with inline CSS or even a custom class might be possible, there is another approach to this - using Markdown tables, which get translated into HTML tables and allow custom alignment.\nFor example, what I ended up using was something like:\nnew MarkdownString(`\n| |\n| ---: |\n| the very long line we want the below link to right-align to |\n| [Link text](https://linktarget \"Link hover message\") |\n`)\n\nThe idea came from the github issue linked in the the source file mentioned above.\nA couple of notes:\n\ncodicons are still supported inside Markdown tables;\nthe HTML table generated does not expand to 100% of the hover message box, so if you want to right-align the link text with a longer line, they have to be in the same Markdown table;\nMarkdown table detection is very picky about spaces and newlines, you might have to play a bit with the formatting of your string for vscode to correctly transform it into a HTML table;\nthe Markdown table header cannot be omitted, at least that was the conclusion of my testing; it can however be empty;\nthe resulting HTML table might add some small invisible borders, I haven't investigated, but my alignment with other rows in the hover seemed 1px off;\ntrying to bypass the Markdown table creation and writing my own HTML table broke codicons support, although I haven't investigated that too much either;\n\n"
] |
[
0
] |
[] |
[] |
[
"visual_studio_code",
"vscode_api",
"vscode_extensions"
] |
stackoverflow_0073376933_visual_studio_code_vscode_api_vscode_extensions.txt
|
Q:
How does the selection of letters work in the replace function with /[aeiou]/g?
I wrote code that selects vowels and replaces them with ""
function disemvowel(str) {
str = str.replace(/[aeiouAEIOU]/g, "");
return str;
}
console.log(disemvowel("This website is for losers LOL!"));
But I am not quite sure how this part of the code works:
/[aeiouAEIOU]/g: why are the vowels inside [], what does the g do, and what about the //?
Another question: how could I select both lower and upper case letters at once instead of writing out [aeiouAEIOU]?
A:
Read this.
/[aeiouAEIOU]/g
is short for
new RegExp( '[aeiouAEIOU]', 'g' )
It constructs a RegExp object which represents a regular expression and some flags. A regular expression defines a set of strings. String.prototype.replace can use this definition to identify the substrings to replace.
The regex pattern
[aeiouAEIOU]
defines the following set of strings: a, e, i, o, u, A, E, I, O, U.
The g flag tells String.prototype.replace to replace all instances of those substrings (not just the first).
Note that /[aeiou]/ig would be sufficient. The i flag makes operations case-insensitive. (This may have a performance penalty.)
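For example, both of these produce the same result on the sample string (a quick sketch):
"This website is for losers LOL!".replace(/[aeiouAEIOU]/g, ""); // "Ths wbst s fr lsrs LL!"
"This website is for losers LOL!".replace(/[aeiou]/gi, "");     // same output, via the case-insensitive flag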
A:
The code inside the replace method is a Regular Expression or Regex.
As described by MDN:
Regular expressions are patterns used to match character combinations in strings. In JavaScript, regular expressions are also objects. These patterns are used with the exec() and test() methods of RegExp, and with the match(), matchAll(), replace(), replaceAll(), search(), and split() methods of String.
If you want to learn more about regex, I recommend taking a look at this website. It helps you to write and visualise the patterns.
|
How does the selection of letters work in the replace function with /[aeiou]/g?
|
I wrote code that selects vowels and replaces them with ""
function disemvowel(str) {
str = str.replace(/[aeiouAEIOU]/g, "");
return str;
}
console.log(disemvowel("This website is for losers LOL!"));
But I am not quite sure how this part of the code works:
/[aeiouAEIOU]/g: why are the vowels inside [], what does the g do, and what about the //?
Another question: how could I select both lower and upper case letters at once instead of writing out [aeiouAEIOU]?
|
[
"Read this.\n/[aeiouAEIOU]/g\n\nis short for\nnew RegExp( '[aeiouAEIOU]', 'g' )\n\nIt constructs a RegExp object which represents a regular expression and some flags. A regular expression defines a set of strings. String.prototype.replace can use this definition to identify the substrings to replace.\nThe regex pattern\n[aeiouAEIOU]\n\ndefines the following set of strings: a, e, i, o, u, A, E, I, O, U.\nThe g flag tells String.prototype.replace to replaces all instances of those substrings (not just the first).\nNote that /[aeiou]/ig would be sufficient. The i flag makes operations case-insensitive. (This may have a performance penalty.)\n",
"The code inside the replace method is a Regular Expression or Regex.\nAs described by MDN:\n\nRegular expressions are patterns used to match character combinations in strings. In JavaScript, regular expressions are also objects. These patterns are used with the exec() and test() methods of RegExp, and with the match(), matchAll(), replace(), replaceAll(), search(), and split() methods of String.\n\nIf you want to learn more about regex, I recommend taking a look at this website. It helps you to write and visualise the patterns.\n"
] |
[
2,
1
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0074657952_javascript.txt
|
Q:
why does .map() log out the array correctly the first time but say 'undefined' the second time?
`
export default async function handler(req, res) {
if (req.method === 'POST') {
try {
const bodyItems = req.body.items;
console.log(bodyItems)
const renderCartItems = bodyItems?.map( singleProduct => {
return {
price_data: {
currency: 'usd',
product_data: {
name: singleProduct.name
},
unit_amount: singleProduct.price
},
quantity: singleProduct.qty,
}
})
console.log(renderCartItems, "mapped")
// Create Checkout Sessions from body params.
const session = await stripe.checkout.sessions.create({
line_items: renderCartItems,
mode: 'payment',
success_url: `http://localhost:3000/done`,
cancel_url: `http://localhost:3000`,
});
console.log(session.url)
res.redirect(303, session.url);
} catch (err) {
res.status(err.statusCode || 500).json(err.message);
}
} else {
res.setHeader('Allow', 'POST');
res.status(405).end('Method Not Allowed');
}
}
`
this is the error
"The `line_items` parameter is required in payment mode."
and this is what got logged by my console.log statements
[
{
name: 'sped',
qty: 1,
price: 5000,
img: { _type: 'image', asset: [Object] },
ogprice: 5000
}
]
[
{
price_data: { currency: 'usd', product_data: [Object], unit_amount: 5000 },
quantity: 1
}
] mapped
undefined
undefined mapped
Anybody know how to fix this problem? It seems to get the right data on the first run, but on the second one it returns undefined. I tried using react strict mode but it still runs twice
A:
Taking a step back, the log output your shared appears to indicate your handler is running twice, once with req.body.items as you expect, and once where that's undefined.
I'd suggest adding logging at the start of handler to understand what this extra request is, and also examine your browser network logs to see if the endpoint is being called twice in the client-side code. You might find this is a React error where you're calling your API with unstable data when you don't intend to.
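Until the duplicate request is tracked down, a defensive guard keeps Stripe from being called with missing data (a sketch of the idea, not a fix for the root cause):
const bodyItems = req.body.items;
if (!Array.isArray(bodyItems) || bodyItems.length === 0) {
  res.status(400).json({ error: 'items missing from request body' });
  return;
}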
A:
It looks like the map function is being called on bodyItems twice, but the second time bodyItems is undefined, resulting in renderCartItems also being undefined.
To fix this, you could check if bodyItems is defined before calling map on it, like this:
const renderCartItems = bodyItems ? bodyItems.map(singleProduct => {
// Code to map the items goes here
}) : [];
This way, renderCartItems will always be an array, even if bodyItems is undefined.
Alternatively, you could avoid calling map on bodyItems twice by using a temporary variable to store the result of the first map call, like this:
const mappedBodyItems = bodyItems.map(singleProduct => {
// Code to map the items goes here
});
const renderCartItems = mappedBodyItems;
This way, you only call map once, and renderCartItems will always have the correct value.
|
why does .map() log out the array correctly the first time but say 'undefined' the second time?
|
`
export default async function handler(req, res) {
if (req.method === 'POST') {
try {
const bodyItems = req.body.items;
console.log(bodyItems)
const renderCartItems = bodyItems?.map( singleProduct => {
return {
price_data: {
currency: 'usd',
product_data: {
name: singleProduct.name
},
unit_amount: singleProduct.price
},
quantity: singleProduct.qty,
}
})
console.log(renderCartItems, "mapped")
// Create Checkout Sessions from body params.
const session = await stripe.checkout.sessions.create({
line_items: renderCartItems,
mode: 'payment',
success_url: `http://localhost:3000/done`,
cancel_url: `http://localhost:3000`,
});
console.log(session.url)
res.redirect(303, session.url);
} catch (err) {
res.status(err.statusCode || 500).json(err.message);
}
} else {
res.setHeader('Allow', 'POST');
res.status(405).end('Method Not Allowed');
}
}
`
this is the error
"The `line_items` parameter is required in payment mode."
and this is what got logged by my console.log statements
[
{
name: 'sped',
qty: 1,
price: 5000,
img: { _type: 'image', asset: [Object] },
ogprice: 5000
}
]
[
{
price_data: { currency: 'usd', product_data: [Object], unit_amount: 5000 },
quantity: 1
}
] mapped
undefined
undefined mapped
Anybody know how to fix this problem? It seems to get the right data on the first run, but on the second one it returns undefined. I tried using react strict mode but it still runs twice
|
[
"Taking a step back, the log output your shared appears to indicate your handler is running twice, once with req.body.items as you expect, and once where that's undefined.\nI'd suggest adding logging at the start of handler to understand what this extra request is, and also examine your browser network logs to see if the endpoint is being called twice in the client-side code. You might find this is a React error where you're calling your API with unstable data when you don't intend to.\n",
"It looks like the map function is being called on bodyItems twice, but the second time bodyItems is undefined, resulting in renderCartItems also being undefined.\nTo fix this, you could check if bodyItems is defined before calling map on it, like this:\nconst renderCartItems = bodyItems ? bodyItems.map(singleProduct => {\n // Code to map the items goes here\n}) : [];\n\nThis way, renderCartItems will always be an array, even if bodyItems is undefined.\nAlternatively, you could avoid calling map on bodyItems twice by using a temporary variable to store the result of the first map call, like this:\nconst mappedBodyItems = bodyItems.map(singleProduct => {\n // Code to map the items goes here\n});\nconst renderCartItems = mappedBodyItems;\n\nThis way, you only call map once, and renderCartItems will always have the correct value.\n"
] |
[
1,
0
] |
[] |
[] |
[
"dictionary",
"javascript",
"next",
"reactjs",
"stripe_payments"
] |
stackoverflow_0074657846_dictionary_javascript_next_reactjs_stripe_payments.txt
|
Q:
How to really count repeated strings in a string in Python?
I am sorry that I am not sure my question is correct or clear enough. However, I hope the example below can explain my question:
As you can see
print("abbbbbbc".count("bbb")) # output is 2
But I want the result to be 4, because bbbbbb has 6 characters and can be broken down as below:
bbb---
-bbb--
--bbb-
---bbb
I couldn't figure out which function I can use to solve this. What I did was count combinations with a for loop, and it doesn't look right at all.
Thanks for helping.
A:
You can implement your own logic, like this:
a = "abbbbbbc"
b = "bbb"
count = 0
for i in range(len(a) - len(b) + 1):
if a[i:i+len(b)] == b:
count += 1
print(count)
OR
count = 0
for i in range(len(a)):
if a[i:].startswith(b):
count += 1
print(count)
OR
count = sum([1 if a[i:].startswith(b) else 0 for i in range(len(a))])
print(count)
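A regex lookahead gives the same overlapping count, because a zero-width match lets the scan continue at the very next character (a sketch; re.escape guards against metacharacters in b):
import re

count = len(re.findall('(?=' + re.escape(b) + ')', a))
print(count)  # 4 for a = "abbbbbbc", b = "bbb"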
|
How to really count repeated strings in a string in Python?
|
I am sorry that I am not sure my question is correct or clear enough. However, I hope the example below can explain my question:
As you can see
print("abbbbbbc".count("bbb")) # output is 2
But I want the result to be 4, because bbbbbb has 6 characters and can be broken down as below:
bbb---
-bbb--
--bbb-
---bbb
I couldn't figure out which function I can use to solve this. What I did was count combinations with a for loop, and it doesn't look right at all.
Thanks for helping.
|
[
"You can implement your own logic, like this:\na = \"abbbbbbc\"\nb = \"bbb\"\n\ncount = 0\nfor i in range(len(a) - len(b)):\n if a[i:i+len(b)] == b:\n count += 1\nprint(count)\n\nOR\ncount = 0\nfor i in range(len(a)):\n if a[i:].startswith(b):\n count += 1\nprint(count)\n\nOR\ncount = sum([1 if a[i:].startswith(b) else 0 for i in range(len(a))])\nprint(count)\n\n"
] |
[
1
] |
[] |
[] |
[
"count",
"python",
"string"
] |
stackoverflow_0074657986_count_python_string.txt
|
Q:
Jenkins downstream error not propagated to upstream
I have a pipeline to build which will trigger another pipeline for running the (long) tests. My idea is to have the downstream test job run without waiting (to allow frequent build jobs). However, when I add wait: false to the build job command a failure will not change the status of the build job:
JenkinsfileBuild
node {
def app
stage('Clone repository') {
}
stage('Merge main') {
}
stage('Build image') {
}
stage('Upload image') {
sh "echo would push image here"
}
}
stage('Test Image') {
build job: "../${env.JOB_NAME.split("/")[0]}Test/${env.BRANCH_NAME}", wait: false
}
JenkinsfileTest
node {
stage('Run Test') {
sh "exit -1";
}
}
I was expecting that the failing test job would mark the build job as failed as it does, when wait: true is set.
What am I missing?
A:
Add the propagate: true option to the build step. Note that you also need wait: true; the upstream job can only pick up the downstream result if it waits for it.
build job: "../${env.JOB_NAME.split("/")[0]}Test/${env.BRANCH_NAME}", wait: true, propagate: true
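Note that with wait: false the build step returns before the downstream job finishes, so there is no result to propagate at all. If you want the upstream job to keep control of its own status instead of failing via propagate, you can set propagate: false and inspect the result yourself (a sketch of that pattern):
def downstream = build job: "../${env.JOB_NAME.split("/")[0]}Test/${env.BRANCH_NAME}",
                       wait: true, propagate: false
if (downstream.result != 'SUCCESS') {
    currentBuild.result = 'FAILURE'
}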
|
Jenkins downstream error not propagated to upstream
|
I have a pipeline to build which will trigger another pipeline for running the (long) tests. My idea is to have the downstream test job run without waiting (to allow frequent build jobs). However, when I add wait: false to the build job command a failure will not change the status of the build job:
JenkinsfileBuild
node {
def app
stage('Clone repository') {
}
stage('Merge main') {
}
stage('Build image') {
}
stage('Upload image') {
sh "echo would push image here"
}
}
stage('Test Image') {
build job: "../${env.JOB_NAME.split("/")[0]}Test/${env.BRANCH_NAME}", wait: false
}
JenkinsfileTest
node {
stage('Run Test') {
sh "exit -1";
}
}
I was expecting that the failing test job would mark the build job as failed as it does, when wait: true is set.
What am I missing?
|
[
"Add propagate: true option to the build step. And you need to wait for the Job in order to get the result.\nbuild job: \"../${env.JOB_NAME.split(\"/\")[0]}Test/${env.BRANCH_NAME}\", wait: true, propagate: true\n\n"
] |
[
0
] |
[] |
[] |
[
"jenkins",
"jenkins_pipeline"
] |
stackoverflow_0074657976_jenkins_jenkins_pipeline.txt
|
Q:
Export SQL Script from SQLAlchemy
I am using SQLAlchemy with ORM and DeclarativeMeta to connect to my Database.
Is there a way to generate or export a .sql file that contains all the CREATE TABLE commands?
Thank you!
I tried to get that information from my Meta Object or even from my SQLAlchemy Engine but they don't hold information like that.
Even Meta.metadata.create_all() does not return a string or anything similar.
A:
Found an answer in the SQLAlchemy documentation.
from sqlalchemy.schema import CreateTable
print(CreateTable(my_mysql_table).compile(mysql_engine))
CREATE TABLE my_table (
id INTEGER(11) NOT NULL AUTO_INCREMENT,
...
)ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
SQLAlchemy Documentation!
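To export the DDL for every table registered on the declarative metadata, you can loop over sorted_tables, which yields the tables in dependency order. A minimal sketch, assuming Base is your declarative base and engine is your engine:
from sqlalchemy.schema import CreateTable

with open('schema.sql', 'w') as f:
    for table in Base.metadata.sorted_tables:
        f.write(str(CreateTable(table).compile(engine)) + ';\n')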
|
Export SQL Script from SQLAlchemy
|
I am using SQLAlchemy with ORM and DeclarativeMeta to connect to my Database.
Is there a way to generate or export a .sql file that contains all the CREATE TABLE commands?
Thank you!
I tried to get that information from my Meta Object or even from my SQLAlchemy Engine but they don't hold information like that.
Even Meta.metadata.create_all() does not return a string or anything similar.
|
[
"Found an answers in the documentation of sqlalchemy.\nfrom sqlalchemy.schema import CreateTable\nprint(CreateTable(my_mysql_table).compile(mysql_engine))\n\nCREATE TABLE my_table (\nid INTEGER(11) NOT NULL AUTO_INCREMENT,\n...\n)ENGINE=InnoDB DEFAULT CHARSET=utf8mb4\n\nSQLAlchemy Documentation!\n"
] |
[
1
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0074657665_python_sqlalchemy.txt
|
Q:
Create MUI TextField Next to Label
I've searched all over, but I can't make it work. I want to create a TextField element next to a label
(and not anywhere inside or touching the border of the TextField), like this one:
Desired result:
Label <____> TextField
Such a scenario is not covered in the documentation.
The best I could get was to use the startAdornment prop of the TextField component.
InputProps={{
startAdornment: (
<InputAdornment position="start">Name</InputAdornment>
)
}}
I also tried to use the FormControlLabel component and pass the TextField as its control, although then the TextField can no longer be fullWidth:
<FormControlLabel
control={<TextField
variant="standard"
margin="normal"
id="tags"
helperText="service."
fullWidth
/>}
label="Start"
labelPlacement="start"
/>
But it doesn't seem to work properly and I'm not even sure if I should use a FormControlLabel with a TextField.
Thank you in advance. This seems like something very basic, but I can't get it to work and I found no other related problems posted.
A:
You can apply a flex style to your form, or wrap your TextField in a div with an InputLabel beside it and add styles to match the desired result, for example like this:
import React from "react";
import { makeStyles } from "@material-ui/core/styles";
import TextField from "@material-ui/core/TextField";
import InputLabel from "@material-ui/core/InputLabel";
import { FormHelperText } from "@material-ui/core";
const useStyles = makeStyles((theme) => ({
root: {
display: "flex"
},
TextFieldDiv: {
marginLeft: "15px"
},
labelName: {
paddingTop: "10px"
},
optionalTag: {
fontSize: "13px"
},
labelTag: {
textAlign: "right"
}
}));
export default function BasicTextFields() {
const classes = useStyles();
return (
<form className={classes.root} noValidate autoComplete="off">
<InputLabel className={classes.labelName}>
<div className={classes.labelTag}>Name</div>{" "}
<div className={classes.optionalTag}>(optional)</div>
</InputLabel>
<div className={classes.TextFieldDiv}>
<TextField id="standard-basic" />
<FormHelperText>The service name</FormHelperText>
</div>
</form>
);
}
Here is a sandbox example.
A:
I tried the above solutions and they do not work in MUI v5.
You can use Grid.
<Grid container direction="row" alignItems="center" spacing={2}>
<Grid item>
<div>Label</div>
</Grid>
<Grid item>
<div>Input</div>
</Grid>
</Grid>
You can adjust spacing and alignment accordingly.
|
Create MUI TextField Next to Label
|
I've searched all over, but I can't make it work. I want to create a TextField element next to a label
(and not anywhere inside or touching the border of the TextField), like this one:
Desired result:
Label <____> TextField
Such a scenario is not covered in the documentation.
The best I could get was to use the startAdornment prop of the TextField component.
InputProps={{
startAdornment: (
<InputAdornment position="start">Name</InputAdornment>
)
}}
I also tried to use the FormControlLabel component and pass the TextField as its control, although then the TextField can no longer be fullWidth:
<FormControlLabel
control={<TextField
variant="standard"
margin="normal"
id="tags"
helperText="service."
fullWidth
/>}
label="Start"
labelPlacement="start"
/>
But it doesn't seem to work properly and I'm not even sure if I should use a FormControlLabel with a TextField.
Thank you in advance. This seems like something very basic, but I can't get it to work and I found no other related problems posted.
|
[
"you can apply a flex style to your Form or wrap your Textfield in a div with an InputLabelin it like and add style to match the desire result for example like this:\nimport React from \"react\";\nimport { makeStyles } from \"@material-ui/core/styles\";\nimport TextField from \"@material-ui/core/TextField\";\nimport InputLabel from \"@material-ui/core/InputLabel\";\nimport { FormHelperText } from \"@material-ui/core\";\nconst useStyles = makeStyles((theme) => ({\n root: {\n display: \"flex\"\n },\n TextFieldDiv: {\n marginLeft: \"15px\"\n },\n labelName: {\n paddingTop: \"10px\"\n },\n optionalTag: {\n fontSize: \"13px\"\n },\n labelTag: {\n textAlign: \"right\"\n }\n}));\n\nexport default function BasicTextFields() {\n const classes = useStyles();\n\n return (\n <form className={classes.root} noValidate autoComplete=\"off\">\n <InputLabel className={classes.labelName}>\n <div className={classes.labelTag}>Name</div>{\" \"}\n <div className={classes.optionalTag}>(optional)</div>\n </InputLabel>\n <div className={classes.TextFieldDiv}>\n <TextField id=\"standard-basic\" />\n <FormHelperText>The service name</FormHelperText>\n </div>\n </form>\n );\n}\n\n\nhere a sandBox Example\n",
"I tried the above solutions and these are not working in Mui-v5.\nYou can use Grid.\n<Grid container direction=\"row\" alignItems=\"center\" spacing={2}>\n <Grid item>\n <div>Label</div>\n </Grid>\n <Grid item>\n <div>Input</div>\n </Grid>\n</Grid>\n\nYou can adjust spacing and alignment accordingly.\n"
] |
[
0,
0
] |
[] |
[] |
[
"material_ui",
"next.js",
"reactjs"
] |
stackoverflow_0066444419_material_ui_next.js_reactjs.txt
|
Q:
java.sql.SQLException: No value specified for parameter 2 error
When I run my project and type in the username and password, I encounter the error "No value specified for parameter 2", even though all parameters should be filled in.
String user = username.getText();
String pass = password.getText();
String option = options.getSelectedItem().toString();
if( user.equals("awd") || pass.equals("awd") ) {
JOptionPane.showMessageDialog(rootPane, "Enter Username and Password","Login Error",1);
password.setText(null);
username.setText(null);
} else {
try {
con = Connector.getConnection();
pst = con.prepareStatement("SELECT * FROM logins WHERE username=? and password=?");
pst.setString(1, user);
pst.setString(2, pass);
rs = pst.executeQuery();
if(rs.next()){
String s1 = rs.getString("options");
String un = rs.getString("username");
if(option.equalsIgnoreCase("Admin") && s1.equalsIgnoreCase("admin")){
Main mn = new Main(un);
mn.setVisible(true);
setVisible(false);
}
if(option.equalsIgnoreCase("Employee") && s1.equalsIgnoreCase("employee")){
OwnerFrame ow = new OwnerFrame(un);
ow.setVisible(true);
setVisible(false);
}
else{
JOptionPane.showMessageDialog(rootPane, "Username or Password are not Match", "Login Error",1);
}
A:
SQL Bind parameters are one of the few places where we count from 1. This
pst.setString(0, user);
pst.setString(1, pass);
should be
pst.setString(1, user);
pst.setString(2, pass);
|
java.sql.SQLException: No value specified for parameter 2 error
|
When I run my project and type in the username and password, I encounter the error "No value specified for parameter 2", even though all parameters should be filled in.
String user = username.getText();
String pass = password.getText();
String option = options.getSelectedItem().toString();
if( user.equals("awd") || pass.equals("awd") ) {
JOptionPane.showMessageDialog(rootPane, "Enter Username and Password","Login Error",1);
password.setText(null);
username.setText(null);
} else {
try {
con = Connector.getConnection();
pst = con.prepareStatement("SELECT * FROM logins WHERE username=? and password=?");
pst.setString(1, user);
pst.setString(2, pass);
rs = pst.executeQuery();
if(rs.next()){
String s1 = rs.getString("options");
String un = rs.getString("username");
if(option.equalsIgnoreCase("Admin") && s1.equalsIgnoreCase("admin")){
Main mn = new Main(un);
mn.setVisible(true);
setVisible(false);
}
if(option.equalsIgnoreCase("Employee") && s1.equalsIgnoreCase("employee")){
OwnerFrame ow = new OwnerFrame(un);
ow.setVisible(true);
setVisible(false);
}
else{
JOptionPane.showMessageDialog(rootPane, "Username or Password are not Match", "Login Error",1);
}
|
[
"SQL Bind parameters are one of the few places where we count from 1. This\npst.setString(0, user);\npst.setString(1, pass);\n\nshould be\npst.setString(1, user);\npst.setString(2, pass);\n\n"
] |
[
-1
] |
[] |
[] |
[
"database",
"java",
"jframe"
] |
stackoverflow_0074658019_database_java_jframe.txt
|
Q:
AWS CDK Jest Unit Test Resource Has DeletionPolicy
In the AWS CDK, I can write a Jest unit test to test if a resource has a specific property. But how do I test a resource DeletionPolicy value which is NOT a property?
cdk.out/example.template.json (simplified)
"AppsUserPool8FD9D0C0": {
"Type": "AWS::Cognito::UserPool",
"Properties": {
"UserPoolName": "test",
...
},
"UpdateReplacePolicy": "Retain",
"DeletionPolicy": "Retain",
"Metadata": {}
}
Jest unit test passes for property (simplified)
expect(stack).toHaveResourceLike('AWS::Cognito::UserPool', {
"UserPoolName": "test"
});
Jest unit test fails for DeletionPolicy (simplified)
expect(stack).toHaveResourceLike('AWS::Cognito::UserPool', {
"DeletionPolicy": "Retain"
});
A:
You can use the following example
https://github.com/aws/aws-cdk/blob/775a0c930a680f8a52bb4a40084d07492f7f9fee/packages/%40aws-cdk/aws-cloudformation/test/test.resource.ts#L57
You can use haveResource() with the parameter ResourcePart.CompleteDefinition.
Snippet from the example:
expect(stack).to(haveResource('AWS::CloudFormation::CustomResource', {
DeletionPolicy: 'Retain',
UpdateReplacePolicy: 'Retain',
}, ResourcePart.CompleteDefinition));
A:
Here's an updated snippet confirmed working on CDK version: 1.107.0
import { ResourcePart } from '@aws-cdk/assert';
test('stack has correct policies', async () => {
expect(stack).toHaveResource('AWS::Cognito::UserPool', {
DeletionPolicy: 'Retain',
UpdateReplacePolicy: 'Retain',
}, ResourcePart.CompleteDefinition);
});
A:
Updated snippet for CDK 2.x
const template = Template.fromStack(stack);
template.hasResource('AWS::Cognito::UserPool', {
DeletionPolicy: 'Retain',
UpdateReplacePolicy: 'Retain',
});
|
AWS CDK Jest Unit Test Resource Has DeletionPolicy
|
In the AWS CDK, I can write a Jest unit test to test if a resource has a specific property. But how do I test a resource DeletionPolicy value which is NOT a property?
cdk.out/example.template.json (simplified)
"AppsUserPool8FD9D0C0": {
"Type": "AWS::Cognito::UserPool",
"Properties": {
"UserPoolName": "test",
...
},
"UpdateReplacePolicy": "Retain",
"DeletionPolicy": "Retain",
"Metadata": {}
}
Jest unit test passes for property (simplified)
expect(stack).toHaveResourceLike('AWS::Cognito::UserPool', {
"UserPoolName": "test"
});
Jest unit test fails for DeletionPolicy (simplified)
expect(stack).toHaveResourceLike('AWS::Cognito::UserPool', {
"DeletionPolicy": "Retain"
});
|
[
"You can use the following example\nhttps://github.com/aws/aws-cdk/blob/775a0c930a680f8a52bb4a40084d07492f7f9fee/packages/%40aws-cdk/aws-cloudformation/test/test.resource.ts#L57\nYou can use haveResouce() with parameter ResourcePart.CompleteDefinition\nsnippet from the example\n expect(stack).to(haveResource('AWS::CloudFormation::CustomResource', {\n DeletionPolicy: 'Retain',\n UpdateReplacePolicy: 'Retain',\n }, ResourcePart.CompleteDefinition));\n\n",
"Here's an updated snippet confirmed working on CDK version: 1.107.0\nimport { ResourcePart } from '@aws-cdk/assert';\n\ntest('stack has correct policies', async () => {\n expect(stack).toHaveResource('AWS::Cognito::UserPool', {\n DeletionPolicy: 'Retain',\n UpdateReplacePolicy: 'Retain',\n }, ResourcePart.CompleteDefinition);\n});\n\n",
"Updated snippet for CDK 2.x\nconst template = Template.fromStack(stack);\ntemplate.hasResource('AWS::Cognito::UserPool', {\n DeletionPolicy: 'Retain',\n UpdateReplacePolicy: 'Retain',\n});\n\n"
] |
[
5,
2,
0
] |
[] |
[] |
[
"aws_cdk",
"jestjs"
] |
stackoverflow_0067842828_aws_cdk_jestjs.txt
|
Q:
How to remove margins from PDF? (Generated using WeasyPrint)
I am trying to render a PDF document within my Flask application. For this, I am using the following HTML template:
<!DOCTYPE html>
<html>
<head>
<style>
@page {
margin:0
}
h1 {
color:white;
}
.header{
background: #0a0045;
height: 250px;
}
.center {
position: relative;
top: 50%;
left: 50%;
-ms-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
text-align:center;
}
</style>
</head>
<body>
<div class="header">
<div class="center">
<h1>Name</h1>
</div>
</div>
</body>
</html>
I keep getting white margins at the top and right/left of my header section:
Is there a way to remove them?
Edit:
Below is the code used to generate the PDF file using WeasyPrint in my Flask app:
def generate_pdf(id):
element = Element.query.filter_by(id=id).first()
attri_dict = get_element_attri_dict_for_tpl(element)
html = render_template('element.html', attri_dict=attri_dict)
pdf = HTML(string=html).write_pdf()
destin_loc = app.config['ELEMENTS_FOLDER']
timestamp = dt.datetime.now().strftime('%Y%m%d%H%M%S')
file_name = '_'.join(['new_element', timestamp])
path_to_new_file = destin_loc + '/%s.pdf' % file_name
f = open(path_to_new_file, 'wb')
f.write(pdf)
filename = return_latest_element_path()
return send_from_directory(directory=app.config['ELEMENTS_FOLDER'],
filename=filename,
as_attachment=True)
A:
Maybe you forgot the ";" and/or the "mm" unit. This works:
@page {
size: A4; /* Change from the default size of A4 */
margin: 0mm; /* Set margin on each page */
}
A:
WeasyPrint uses three sources of CSS; one of them is the default user agent stylesheet
(https://doc.courtbouillon.org/weasyprint/stable/api_reference.html#supported-features)
That defines:
body {
display: block;
margin: 8px;
}
Make sure to override that margin on the body tag and you will lose the white margins.
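Putting the two answers together, a minimal stylesheet that removes both the page margin and the default body margin could look like this:
@page {
    size: A4;
    margin: 0mm;
}
body {
    margin: 0;
}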
|
How to remove margins from PDF? (Generated using WeasyPrint)
|
I am trying to render a PDF document within my Flask application. For this, I am using the following HTML template:
<!DOCTYPE html>
<html>
<head>
<style>
@page {
margin:0
}
h1 {
color:white;
}
.header{
background: #0a0045;
height: 250px;
}
.center {
position: relative;
top: 50%;
left: 50%;
-ms-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
text-align:center;
}
</style>
</head>
<body>
<div class="header">
<div class="center">
<h1>Name</h1>
</div>
</div>
</body>
</html>
I keep getting white margins at the top and right/left of my header section:
Is there a way to remove them?
Edit:
Below is the code used to generate the PDF file using WeasyPrint in my Flask app:
def generate_pdf(id):
element = Element.query.filter_by(id=id).first()
attri_dict = get_element_attri_dict_for_tpl(element)
html = render_template('element.html', attri_dict=attri_dict)
pdf = HTML(string=html).write_pdf()
destin_loc = app.config['ELEMENTS_FOLDER']
timestamp = dt.datetime.now().strftime('%Y%m%d%H%M%S')
file_name = '_'.join(['new_element', timestamp])
path_to_new_file = destin_loc + '/%s.pdf' % file_name
f = open(path_to_new_file, 'wb')
f.write(pdf)
filename = return_latest_element_path()
return send_from_directory(directory=app.config['ELEMENTS_FOLDER'],
filename=filename,
as_attachment=True)
|
[
"Maybe you forgot \" ; \" or/and \" mm \",\nit works:\n@page {\n size: A4; /* Change from the default size of A4 */\n margin: 0mm; /* Set margin on each page */\n }\n\n",
"The weasyprint uses 3 sources of css, one of them is default user agent stylesheet\n(https://doc.courtbouillon.org/weasyprint/stable/api_reference.html#supported-features)\nThat defines:\nbody {\n display: block;\n margin: 8px;\n}\n\nmake sure to override that margin on tag and you will loose the margin.\n"
] |
[
13,
0
] |
[] |
[] |
[
"flask",
"margin",
"pdf",
"python",
"weasyprint"
] |
stackoverflow_0058175484_flask_margin_pdf_python_weasyprint.txt
|
Q:
getting the sum of products between two columns in SQL
My situation is this: I have an order_details table containing entities representing the
"content" of an order; each row has
a product_id column ---> foreign key referencing the type of product
an order_id column ----> foreign key represents which order the product belongs to
a qty column ----------> representing the quantity of the product present in the order
each product row instead has:
a product name column ----> representing the product name
a co2_value --------------> representing the value of co2 removed by buying that product
each table (the orders table too) has its own id column as the primary key for its rows.
the problem is the following:
I need a SQL query to find the total co2_value of an order by adding the co2_values of the products belonging to that order, each multiplied by its quantity.
I tried this query first:
SELECT SUM(co2_value) FROM products
INNER JOIN order_details
ON products.id = order_details.product_id AND order_details.order_id = ". $this->id;
The problem with this is that I'm getting only the sum of the co2_values of the products, without multiplying them by the quantity. Is there a way to do that in a single query?
I'll leave the link to migration here
A:
select
sum(co2_value * qty)
from
products
inner join order_details on
products.id = order_details.product_id
and order_details.order_id = ". $this->id;
You can simply multiply co2_value by its quantity inside the SUM, as shown in the query above.
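If you ever need the total for every order at once rather than a single order, a grouped variant (a sketch using the same assumed column names from the question) would be:
SELECT order_details.order_id, SUM(co2_value * qty) AS total_co2
FROM products
INNER JOIN order_details ON products.id = order_details.product_id
GROUP BY order_details.order_id;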
|
getting the sum of products between two columns in SQL
|
My situation is this: I have an order_details table containing entities representing the
"content" of an order; each row has
a product_id column ---> foreign key referencing the type of product
an order_id column ----> foreign key representing which order the product belongs to
a qty column ----------> representing the quantity of the product present in the order
each product row instead has:
a product name column ----> representing the product name
a co2_value --------------> representing the value of co2 removed buying that product
each table (the orders table too) has its own id column as the primary key for its rows.
the problem is the following:
I need a sql query to find the total co2_value of an order adding all the co2_values of the products belonging to a specific order, multiplied by their quantity.
I tried this query first:
SELECT SUM(co2_value) FROM products
INNER JOIN order_details
ON products.id = order_details.product_id AND order_details.order_id = ". $this->id;
The problem with this is that I'm getting only the sum of the co2_values of the products without multiplying them by the quantity. Is there a way to do that in a single query?
I'll leave the link to migration here
|
[
"select\n sum(co2_value * qty)\nfrom\n products\ninner join order_details on\n products.id = order_details.product_id\n and order_details.order_id = \". $this->id;\n\nYou can simply select the co2_value with respect to its quantity by multiplying it using the above query\n"
] |
[
1
] |
[] |
[] |
[
"database",
"sql"
] |
stackoverflow_0074657947_database_sql.txt
|
Q:
Docker for Windows error: "Hardware assisted virtualization and data execution protection must be enabled in the BIOS"
I've installed Docker and I'm getting this error when I run the GUI:
Hardware assisted virtualization and data execution protection must
be enabled in the BIOS
Seems like a bug since Docker works like a charm from the command line, but I'm wondering if anyone has a clue about why this is happening?
Before you ask, yes, I've enabled virtualization in the BIOS and the Intel Processor Identification Utility confirms that it's activated. Docker, docker-machine and docker-compose all work from the command line, Virtualbox works, running Docker from a Debian or Ubuntu VM works.
There's just this weird issue about the GUI.
My specs:
Windows 10 Pro x64 Anniversary Edition
Intel core i5-6300HQ @ 2.30GHz
A:
If the features described are enabled, the problem is with Hyper-V that is disabled or Hypervisor agent not running.
SOLUTION A (If Hyper-V is totally disabled or not installed)
Open PowerShell as administrator and
Enable Hyper-V with
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All
SOLUTION B (If Hyper-V feature is already enabled but doesn't work)
Enable Hypervisor with
bcdedit /set hypervisorlaunchtype auto
Now restart the system and try again.
SOLUTION C
If the problem persists, Hyper-V on your system is probably corrupted, so
Go in Control Panel -> [Programs] -> [Windows Features] and completely uncheck all Hyper-V related components. Restart the system.
Enable Hyper-V again. Restart.
NOTE 1:
Hyper-V needs hardware virtualization as prerequisite. Make sure your PC supports it, if yes and still won't work, there is the possibility your BIOS is not configured correctly and this feature is disabled. In this case, check, enable it and try again. The virtualization features could be reported under different names according the platform used (e.g if you don't see any option that uses virtualization label explicitly, on AMD you have to check SVM feature state, on Intel the VT-x feature state).
NOTE 2:
Hyper-V can be installed only with some version e.g.:
Windows 10 Enterprise; Windows 10 Professional; Windows 10 Education.
Hyper-V cannot be installed on cheaper or mobile Windows versions e.g.:
Windows 10 Home; Windows 10 Mobile; Windows 10 Mobile Enterprise.
A:
Below is the solution that worked for me; please follow these steps:
Open PowerShell as administrator or CMD prompt as administrator
Run this command in PowerShell-> bcdedit /set hypervisorlaunchtype auto
Now restart the system and try again.
cheers.
A:
In my case I had to enable virtualization in the BIOS setting.
Restart PC
While you are on the 'restart' screen press any of these keys and you enter the bios settings in windows: esc, f1, f2, f3, f4, f8 or delete
For intel based systems:
press f7 (advanced mode)
go to advanced
CPU configuration
enable virtualization
And after all above steps, it finally works :-)
A:
I uninstalled Intel HAXM and VirtualBox, Docker now runs
A:
Note: If your version of Windows supports Hyper-V, you can install docker directly by selecting Use Hyper-V during installation.
However, if your Windows does not have this support, follow the solution below.
I had a similar problem.
I have enabled Intel Virtual Technology in the bios settings.
Then I updated the Linux kernel from here.
and it worked
My specs:
Microsoft Windows 10 Home x64 Single Language
Intel(R) Core(TM) i5-7300 @ 2.50GHz
A:
For me, all I had to do was uninstall VMware.
Docker is now running
A:
Open the task manager and click on the performance tab. If virtualization is disabled, you need to follow the instructions here to enable it: https://blogs.technet.microsoft.com/canitpro/2015/09/08/step-by-step-enabling-hyper-v-for-use-on-windows-10/
A:
Try these steps
Run this command in powershell
bcdedit /set hypervisorlaunchtype auto
Restart your PC
Now try docker --version in cmd line
A:
Can you try enabling Hyper-V manually, and potentially creating and running a Hyper-V VM manually? Details:
https://docs.docker.com/docker-for-windows/#/what-to-know-before-you-install
https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install
A:
Enable the Hyper-V role through Settings
Right click on the Windows button/Icon and select ‘Apps and Features’.
1- Select Programs and Features on the right under related settings.
2- Select Turn Windows Features on or off.
3- Select Hyper-V and click OK.
A:
If the solution above does not work, then
go to the command prompt and type systeminfo. Check the Hyper-V Requirements section.
If all listed Hyper-V requirements have a value of Yes, your system can run the Hyper-V role.
In my case, "Virtualization Enabled In Firmware" was No,
so I enabled it in the system BIOS by turning on Virtualization Technology on my HP laptop.
Please visit this link to enable it:
https://2nwiki.2n.cz/pages/viewpage.action?pageId=75202968
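As a quick way to pull out just those lines, you can filter the output (an illustrative cmd.exe one-liner; the exact labels depend on your systeminfo output):
systeminfo | findstr /C:"Hyper-V" /C:"Virtualization"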
A:
follow the steps below:
go to: windows setting => Update & Security => Recovery => Advanced Startup and click on : Restart Now.
Troubleshoot => Advanced Option => UEFI Firmware => Restart.
go to Bios => configuration => Virtualization technology => enable it.
save the change and it will work.
A:
Try this in PowerShell(admin enabled):
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
This will install HyperVisor without management tools, and then you can run Docker after this.
A:
In my case I had to uninstall hyper-v, restart pc, and run docker again.
A:
I also use vagrant. It appears I can only use 1 thing at a time. Uninstalling vagrant/virtualBox allowed me to run docker and vise versa
A:
I have tried many suggestions above but Docker kept complaining about the hardware-assisted virtualization error. Virtualization was enabled in the BIOS, and Hyper-V was installed and enabled. After some trial and error, I eventually downloaded the coreinfo tool and found out that the hypervisor was not actually enabled. Using ISE (64-bit) as admin, I ran the command from Solution B above, which enabled the hypervisor successfully (checked via coreinfo -v again). After a restart, Docker is now running successfully.
A:
If everything is fine with the BIOS options, try force-disabling and re-enabling all Hyper-V features; this solved my issue:
--cmd
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
--restart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
A:
In my case, even though I tried all the solutions mentioned above, nothing worked for me. So I decided to uninstall Docker and install it again.
Now in the process, I noticed that I had not checked Use Windows containers instead of Linux containers (this can be changed after installation) in my previous installation, and that is why I got the problem above and the solutions still did not fix it. So be sure to check it before you run Docker Desktop, or uninstall it and install it again with this option checked.
A:
For me, disabling and then enabling Virtualisation in BIOS helped, strangely.
A:
It helped me:
Disable components Virtual Machine Platform and Windows Subsystem for Linux
Restart
Enable components
Restart
I think my problem was related to a beta version of WSL2.
I had tried to install the Android subsystem but deleted it some time ago, so only the beta WSL2 was left
A:
Issue for me was solved when I uninstalled Cygwin.
A:
I tried many of the suggestions here, but did not manage to get it running. What worked for me in the end was to go straight into the BIOS to activate it. The following article was of great help:
https://www.nextofwindows.com/how-to-enable-configure-and-use-hyper-v-on-windows-10
A:
@Silverstorm
I had Hyperv installed and virtualization enabled in my BIOS.
But SOLUTION A didn't work for me.
However, SOLUTION B worked like a charm.
SOLUTION B (If Hyper-V feature is already enabled but doesn't work)
Enable Hypervisor with
bcdedit /set hypervisorlaunchtype auto
Now restart the system and try again.
A:
Besides the original answer, I have done the following:
Disable Hyper-V in Windows Features
Turn virtualization off and on in the BIOS
Log back into Windows and enable Hyper-V. I was prompted that there were updates for Hyper-V and I did the update. Restart when prompted.
It worked!
A:
If the problem persists, Hyper-V on your system is probably corrupted, so
Go in Control Panel -> [Programs] -> [Windows Features] and completely uncheck all Hyper-V related components. Restart the system.
Enable Hyper-V again. Restart.
A:
I had the same issue after installing VMware; I uninstalled it, but this didn't fix the issue.
Solution for me: in "Turn windows features on or off" I turned off:
hyper-v
containers
windows subsystem for linux
then restart
After the restart I got this message from docker:
I ran the command as stated in the message
Enable-WindowsOptionalFeature -Online -FeatureName $("VirtualMachinePlatform", "Microsoft-Windows-Subsystem-Linux")
Then restart and voilà, Docker was back with WSL2
A:
In my case virtualization was disabled, so I needed to change some settings in my BIOS.
Please check the following link; I think it will help you with the BIOS setup:
https://support.bluestacks.com/hc/en-us/articles/4409279876621-How-to-enable-Virtualization-VT-on-Windows-11-for-BlueStacks-5
In the BIOS the settings depend on your system manufacturer, so please find the setting accordingly.
Hope it helps you and saves your time.
Thanks :)
A:
I don't know how this works, and I don't even know what these commands do; I don't know what the hypervisor is or what it does that interferes with Docker, and I don't know what the nx in the second command means, which the command is apparently turning off. I had these commands saved on my computer as "Turn VT-x off" (yet another thing I don't know; I think it's related to Virtualization Technology, which I don't know what that does/is either). But nothing else worked for me (including the accepted answer, all of whose solutions I tested, and other upvoted answers, although I didn't read all of them) except running both of these. It is completely up to you to test these; I do not guarantee any fixes, but it worked for me. I put it here because I thought it might be helpful for someone else like me who also didn't find the other answers to be that helpful:
bcdedit /set hypervisorlaunchtype auto
bcdedit /set nx AlwaysOff
shutdown /s
|
Docker for Windows error: "Hardware assisted virtualization and data execution protection must be enabled in the BIOS"
|
I've installed Docker and I'm getting this error when I run the GUI:
Hardware assisted virtualization and data execution protection must
be enabled in the BIOS
Seems like a bug since Docker works like a charm from the command line, but I'm wondering if anyone has a clue about why this is happening?
Before you ask, yes, I've enabled virtualization in the BIOS and the Intel Processor Identification Utility confirms that it's activated. Docker, docker-machine and docker-compose all work from the command line, Virtualbox works, running Docker from a Debian or Ubuntu VM works.
There's just this weird issue about the GUI.
My specs:
Windows 10 Pro x64 Anniversary Edition
Intel core i5-6300HQ @ 2.30GHz
|
[
"If the features described are enabled, the problem is with Hyper-V that is disabled or Hypervisor agent not running.\nSOLUTION A (If Hyper-V is totally disabled or not installed)\n\nOpen PowerShell as administrator and\n\nEnable Hyper-V with\ndism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All\n\n\nSOLUTION B (If Hyper-V feature is already enabled but doesn't work)\nEnable Hypervisor with\nbcdedit /set hypervisorlaunchtype auto\n\nNow restart the system and try again.\nSOLUTION C\nIf the problem persists, probably Hyper-V on your system is corrupted, so\n\nGo in Control Panel -> [Programs] -> [Windows Features] and completely uncheck all Hyper-V related components. Restart the system.\n\nEnable Hyper-V again. Restart.\n\n\nNOTE 1:\nHyper-V needs hardware virtualization as prerequisite. Make sure your PC supports it, if yes and still won't work, there is the possibility your BIOS is not configured correctly and this feature is disabled. In this case, check, enable it and try again. The virtualization features could be reported under different names according the platform used (e.g if you don't see any option that uses virtualization label explicitly, on AMD you have to check SVM feature state, on Intel the VT-x feature state).\nNOTE 2:\nHyper-V can be installed only with some version e.g.:\n\nWindows 10 Enterprise; Windows 10 Professional; Windows 10 Education.\n\nHyper-V cannot be installed on cheaper or mobile Windows versions e.g.:\n\nWindows 10 Home; Windows 10 Mobile; Windows 10 Mobile Enterprise.\n\n",
"Below is working solution for me, please follow these steps\n\nOpen PowerShell as administrator or CMD prompt as administrator \nRun this command in PowerShell-> bcdedit /set hypervisorlaunchtype auto\nNow restart the system and try again.\n\ncheers.\n",
"In my case I had to enable virtualization in the BIOS setting.\n\nRestart PC\nWhile you are on the 'restart' screen press any of these keys and you enter the bios settings in windows: esc, f1, f2, f3, f4, f8 or delete\nFor intel based systems:\n\n\npress f7 (advanced mode)\ngo to advanced\ncpa configuration\nenable virtualization\n\n\nAnd after all above steps, it finally works :-)\n",
"I uninstalled Intel HAXM and VirtualBox, Docker now runs\n",
"Note: If your version of Windows supports Hyper-V, you can install docker directly by selecting Use Hyper-V during installation.\nHowever, if your Windows does not have this support, follow the solution below.\n\nI had a similar problem.\nI have enabled Intel Virtual Technology in the bios settings.\n\nThen I updated the Linux kernel from here.\nand it worked\nMy specs:\n\nMicrosoft Windows 10 Home x64 Single Language\nIntel(R) Core(TM) i5-7300 @ 2.50GHz\n\n",
"For me, all I had to do it uninstalling VMware.\nDocker now is running\n",
"Open the task manager and click on the performance tab. If virtualization is disabled, you need to follow the instructions here to enable it: https://blogs.technet.microsoft.com/canitpro/2015/09/08/step-by-step-enabling-hyper-v-for-use-on-windows-10/\n",
"Try these steps\n\nRun this command in powershell\nbcdedit /set hypervisorlaunchtype\nauto\n\n\nRestart your PC\nNow try docker --version in cmd line\n\n",
"Can you try enabling Hyper-V manually, and potentially creating and running a Hyper-V VM manually? Details:\n\nhttps://docs.docker.com/docker-for-windows/#/what-to-know-before-you-install\nhttps://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install\n\n",
"Enable the Hyper-V role through Settings\nRight click on the Windows button/Icon and select ‘Apps and Features’.\n1- Select Programs and Features on the right under related settings.\n2- Select Turn Windows Features on or off.\n2- Select Hyper-V and click OK.\n\n",
"If solution above does not work, then\nGo to command prompt and type systeminfo. check Hyper-V Requirements section.\nIf all listed Hyper-V requirements have a value of Yes, your system can run the Hyper-V role.\nIn my case virtualization enable in Firmware was NO.\nSo, I did enabled in system bios by making Virtualization Technology enabled in my HP laptop.\nPlease visit this link to enable it:\nhttps://2nwiki.2n.cz/pages/viewpage.action?pageId=75202968\n",
"follow the steps bellow:\n\ngo to: windows setting => Update & Security => Recovery => Advanced Startup and click on : Restart Now.\nTroubleshoot => Advanced Option => UEFI Firmware => Restart.\ngo to Bios => configuration => Virtualization technology => enable it.\nsave change and it will works.\n\n\n\n",
"Try this in PowerShell(admin enabled): \nEnable-WindowsOptionalFeature –Online -FeatureName Microsoft-Hyper-V –All -NoRestart\n\nThis will install HyperVisor without management tools, and then you can run Docker after this.\n",
"In my case I had to uninstall hyper-v, restart pc, and run docker again.\n",
"I also use vagrant. It appears I can only use 1 thing at a time. Uninstalling vagrant/virtualBox allowed me to run docker and vise versa \n",
"I have tried many suggestions above but docker keeps complaining about hardware assisted virtualization error. Virtualization is enabled in BIOS, and also Hyper-V is installed and enabled. After a few try and errors, I eventually downloaded coreinfo tool and found out that Hypervisor was not actually enabled. Using ISE (64 bit) as admin and run command from above Solution B and that enables Hypervisor successfully (checked via coreinfo -v again). After restart, docker is now running successfully.\n",
"If everything is fine with BIOS option I just forced disabling and enabling all HyperV features and this solved my issue\n--cmd \nDisable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All\n--restart\nEnable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V –All\n",
"In my case even though I used all the solutions mentioned above but nothing worked for me. So I decided to uninstall docker and install it again.\nNow in the process, I have noticed that I did not check Use Windows containers instead of Linux containers (this can be changed after installation) in my previous installation, and that is why I got the problem above and the solutions still did not fix it. So ensure to check it before you run desktop docker or uninstall it and install it again by checking this option.\n\n",
"For me, disabling and then enabling Virtualisation in BIOS helped, strangely.\n",
"It helped me:\n\nDisable components Virtual Machine Platform and Windows Subsystem for Linux\nRestart\nEnable components\nRestart\n\nI think my problem was related to beta version of WSL2.\nI tried to install android subsystem. But I have deleted it some time ago. So, there was only beta WSL2 left\n",
"Issue for me was solved when I uninstalled Cygwin. \n",
"I tried many of the suggestions here, but did not manage to get it running. What worked for me in the end was to go straight in to the BIOS to activate it. The following article was of great help:\nhttps://www.nextofwindows.com/how-to-enable-configure-and-use-hyper-v-on-windows-10\n",
"@Silverstorm \nI had Hyperv installed and virtualization enabled in my BIOS.\nBut SOLUTION A didn't work for me.\nHowever, SOLUTION B worked like a charm.\nSOLUTION B (If Hyper-V feature is already enabled but doesn't work)\nEnable Hypervisor with\nbcdedit /set hypervisorlaunchtype auto\nNow restart the system and try again.\n",
"Besides the original answer, I have done the following:\n\nDisable Hyper-V in Windows Features\nTurning virtualization off and on in BIOS\nLog back in windows, enabled Hyper-V. I was prompted there are updates for Hyper-V and I did the update. Restart when prompted.\nIt worked!\n\n",
"If the problem persists probably Hyper-V on your system is corrupted, so\nGo in Control Panel -> [Programs] -> [Windows Features] and completely uncheck all Hyper-V related components. Restart the system.\nEnable Hyper-V again. Restart.\n",
"I had the same issue after installing VMWare, I uninstalled it but this didn't fix the issue.\nSolution for me: in \"Turn windows features on or off\" I turned off:\n\nhyper-v\ncontainers\nwindows subsystem for linux\n\nthen restart\nAfter the restart I got this message from docker:\n\nI ran the ran the command as said in the message\nEnable-WindowsOptionalFeature -Online -FeatureName $(\"VirtualMachinePlatform\", \"Microsoft-Windows-Subsystem-Linux\")\n\nThen restart and voilà, Docker was back with WSL2\n",
"In my case virtualization is disabled so I need to do some configuration in my bios,\nPlease check following link I think it will help you to make bios setup\nhttps://support.bluestacks.com/hc/en-us/articles/4409279876621-How-to-enable-Virtualization-VT-on-Windows-11-for-BlueStacks-5\nIn bios the setting are dependent on your system manufacture so please find setting accordingly.\nHope it will help you and save your time.\nThanks :)\n",
"I don't know how this works, and I don't even know what these commands do, I don't know what is hypervisor or what it does that it interferes with Docker, and I don't know what the nx means in the second command which it is apparently turning it off. I had these commands saved on my computer as \"Turn VT-x off\" (yet another thing that I don't know what it is, I think it's related to Virtualization Technology which I don't know what that does/is either). But nothing else worked for me (including the accepted answer (which I tested all of it's solutions) and other upvoted answers, although I didn't read all of them), except running both of these. It is completely up to you to test these, I do not guarantee any fixes to you, but it worked for me, I put it in here because I thought it might be helpful for someone else like me who also didn't find other answers to be that helpful:\nbcdedit /set hypervisorlaunchtype auto\n\nbcdedit /set nx AlwaysOff\n\nshutdown /s\n\n"
] |
[
560,
56,
30,
13,
12,
8,
6,
5,
3,
3,
3,
3,
2,
2,
2,
2,
2,
2,
2,
2,
1,
1,
1,
1,
1,
1,
1,
0
] |
[
"I had to uninstall VirtualBox to get it to work, such a pity!\n"
] |
[
-1
] |
[
"docker",
"windows"
] |
stackoverflow_0039684974_docker_windows.txt
|
Q:
Webscrape specific information
I have been trying to scrape the two pieces of information shown in the screenshots from the link below, but I couldn't manage it in either Google Sheets or Excel. Does anyone have any idea about it? Much appreciated, thank you.
https://www.aircanada.com/ca/en/aco/home/fly/flight-information/flight-status-results.html#/flight-status-results?method=byfn&date=01-15-2023&fn=887
A:
There may be no straightforward way to grab that information with a spreadsheet formula.
To understand why, see How to know if Google Sheets IMPORTDATA, IMPORTFEED, IMPORTHTML or IMPORTXML functions are able to get data from a resource hosted on a website?
|
Webscrape specific information
|
I have been trying to scrape the two pieces of information shown in the screenshots from the link below, but I couldn't manage it in either Google Sheets or Excel. Does anyone have any idea about it? Much appreciated, thank you.
https://www.aircanada.com/ca/en/aco/home/fly/flight-information/flight-status-results.html#/flight-status-results?method=byfn&date=01-15-2023&fn=887
|
[
"There may be no straightforward way to grab that information with a spreadsheet formula.\nTo understand why, see How to know if Google Sheets IMPORTDATA, IMPORTFEED, IMPORTHTML or IMPORTXML functions are able to get data from a resource hosted on a website?\n"
] |
[
2
] |
[] |
[] |
[
"excel",
"google_sheets",
"web",
"web_scraping"
] |
stackoverflow_0074655697_excel_google_sheets_web_web_scraping.txt
|
Q:
Count the digits in a number
I have written a function called count_digit:
# Write a function which takes a number as an input
# It should count the number of digits in the number
# And check if the number is a 1 or 2-digit number then return True
# Return False for any other case
def count_digit(num):
if (num/10 == 0):
return 1
else:
return 1 + count_digit(num / 10);
print(count_digit(23))
I get 325 as output. Why is that and how do I correct it?
A:
Convert the integer to a string, and then use the len() method on the converted string, unless you also want to accept floats as input and not integers exclusively.
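A minimal sketch of that approach, assuming negative inputs should be handled by ignoring the minus sign:
def count_digit(num):
    # str() turns e.g. 23 into "23"; abs() drops a leading minus sign
    return len(str(abs(num)))

print(count_digit(23))  # prints 2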
A:
This is Python 3 behaviour: / performs float division, not integer division.
Change your code to:
def count_digit(num):
if (num//10 == 0):
return 1
else:
return 1 + count_digit(num // 10)
print(count_digit(23))
A:
Recursive
def count_digit(n):
if n == 0:
return 0
return count_digit(n // 10) + 1
Easy
def count_digit(n):
return len(str(abs(n)))
A:
Assuming you always pass an integer to the function and you are asking for a mathematical answer, this might be your solution:
import math
def count_digits(number):
return int(math.log10(abs(number))) + 1 if number else 1
if __name__ == '__main__':
print(count_digits(15712))
# prints: 5
A:
Here is an easy solution in Python:
num = 23
temp = num  # make a copy of the integer
count = 0

# This while loop keeps running until temp becomes zero,
# so each iteration must shrink temp toward 0.
while temp:
    temp = temp // 10  # floor division by 10 removes the last digit of the number
    count += 1         # count each removed digit until the number becomes zero and the loop ends
print(count)
|
Count the digits in a number
|
I have written a function called count_digit:
# Write a function which takes a number as an input
# It should count the number of digits in the number
# And check if the number is a 1 or 2-digit number then return True
# Return False for any other case
def count_digit(num):
if (num/10 == 0):
return 1
else:
return 1 + count_digit(num / 10);
print(count_digit(23))
I get 325 as output. Why is that and how do I correct it?
|
[
"convert the integer to a string, and then use the len() method on the converted string. Unless you also consider taking floats as input too, and not integers exclusively.\n",
"This is a Python3 behaviour. / returns float and not integer division.\nChange your code to:\ndef count_digit(num):\n if (num//10 == 0):\n return 1\n else:\n return 1 + count_digit(num // 10)\n\nprint(count_digit(23))\n\n",
"Recursive\ndef count_digit(n):\n if n == 0:\n return 0\n return count_digit(n // 10) + 1\n\nEasy\ndef count_digit(n):\n return len(str(abs(n)))\n\n",
"Assuming you always send integer to function and you are asking for a mathematical answer, this might be your solution:\nimport math\n\n\ndef count_digits(number):\n return int(math.log10(abs(number))) + 1 if number else 1\n\n\nif __name__ == '__main__':\n print(count_digits(15712))\n # prints: 5\n\n",
"Here is an easy solution in python;\nnum=23\ntemp=num #make a copy of integer\ncount=0\n\n#This while loop will run unless temp is not zero.,so we need to do something to make this temp ==0\n\nwhile(temp): \n temp=temp//10 # \"//10\" this floor division by 10,remove last digit of the number.,means we are removing last digit and assigning back to the number;unless we make the number 0\n count+=1 #after removing last digit from the number;we are counting it;until the number becomes Zero and while loop becomes False\nprint(count)\n\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"count",
"filter",
"python"
] |
stackoverflow_0070258942_count_filter_python.txt
|
Q:
How to Hash the XML file using SHA-256 in Java?
I am working on an ESign process for XML.
The signing steps include:
Hash the new invoice body using SHA-256
Encode the hashed invoice using base64
A further step is to generate the digital signature.
A:
This may help you
public String getXMLInvHash(String xmlDocument) {
String xmlHasHString = "";
try {
Transformer transformer = this.getTransformer();
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
StreamResult xmlOutput = new StreamResult(byteArrayOutputStream);
transformer.transform(new StreamSource(new StringReader(xmlDocument)), xmlOutput);
String canString = canonicalizeXml(byteArrayOutputStream.toByteArray());
byte[] shaByte = hashStringToBytes(canString);
xmlHasHString = Base64.getEncoder().encodeToString(shaByte);
} catch (TransformerException e) {
e.printStackTrace();
}
return xmlHasHString;
}
private String canonicalizeXml(final byte[] xmlDocument) {
String canonicString = "";
try {
Init.init();
Canonicalizer canon = Canonicalizer.getInstance("http://www.w3.org/2006/12/xml-c14n11");
// System.out.println("Can : " + new String(canon.canonicalize(xmlDocument)));
canonicString = new String(canon.canonicalize(xmlDocument));
} catch (ParserConfigurationException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (SAXException e) {
e.printStackTrace();
} catch (InvalidCanonicalizerException e) {
e.printStackTrace();
} catch (CanonicalizationException e) {
e.printStackTrace();
}
return canonicString;
}
private Transformer getTransformer() {
TransformerFactory transformerFactory = null;
Transformer transformer = null;
try {
transformerFactory = TransformerFactory.newInstance();
transformer = transformerFactory.newTransformer();
transformer.setOutputProperty("encoding", "UTF-8");
transformer.setOutputProperty("indent", "no");
transformer.setOutputProperty("omit-xml-declaration", "yes");
} catch (TransformerException e) {
e.printStackTrace();
}
return transformer;
}
private byte[] hashStringToBytes(String input) {
MessageDigest digest = null;
try {
digest = MessageDigest.getInstance("SHA-256");
} catch (NoSuchAlgorithmException e) {
e.printStackTrace();
}
final byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));
return hash;
}
|
How to Hash the XML file using SHA-256 in Java?
|
I am working on an ESign process for XML.
The signing steps include:
Hash the new invoice body using SHA-256
Encode the hashed invoice using base64
A further step is to generate the digital signature.
|
[
"This may help you\npublic String getXMLInvHash(String xmlDocument) {\n String xmlHasHString = \"\";\n try {\n Transformer transformer = this.getTransformer();\n ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();\n StreamResult xmlOutput = new StreamResult(byteArrayOutputStream);\n transformer.transform(new StreamSource(new StringReader(xmlDocument)), xmlOutput);\n String canString = canonicalizeXml(byteArrayOutputStream.toByteArray());\n byte[] shaByte = hashStringToBytes(canString);\n xmlHasHString = Base64.getEncoder().encodeToString(shaByte);\n } catch (TransformerException e) {\n e.printStackTrace();\n }\n return xmlHasHString;\n }\n\n private String canonicalizeXml(final byte[] xmlDocument) {\n String canonicString = \"\";\n try {\n Init.init();\n Canonicalizer canon = Canonicalizer.getInstance(\"http://www.w3.org/2006/12/xml-c14n11\");\n // System.out.println(\"Can : \" + new String(canon.canonicalize(xmlDocument)));\n canonicString = new String(canon.canonicalize(xmlDocument));\n } catch (ParserConfigurationException e) {\n e.printStackTrace();\n } catch (IOException e) {\n e.printStackTrace();\n } catch (SAXException e) {\n e.printStackTrace();\n } catch (InvalidCanonicalizerException e) {\n e.printStackTrace();\n } catch (CanonicalizationException e) {\n e.printStackTrace();\n }\n return canonicString;\n }\n\n private Transformer getTransformer() {\n TransformerFactory transformerFactory = null;\n Transformer transformer = null;\n try {\n transformerFactory = TransformerFactory.newInstance();\n transformer = transformerFactory.newTransformer();\n transformer.setOutputProperty(\"encoding\", \"UTF-8\");\n transformer.setOutputProperty(\"indent\", \"no\");\n transformer.setOutputProperty(\"omit-xml-declaration\", \"yes\");\n\n } catch (TransformerException e) {\n e.printStackTrace();\n }\n return transformer;\n }\n\n private byte[] hashStringToBytes(String input) {\n MessageDigest digest = null;\n try {\n digest = MessageDigest.getInstance(\"SHA-256\");\n } catch (NoSuchAlgorithmException e) {\n e.printStackTrace();\n }\n final byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));\n return hash;\n }\n\n"
] |
[
1
] |
[] |
[] |
[
"java",
"sha256",
"xml"
] |
stackoverflow_0074320103_java_sha256_xml.txt
|
Q:
Is it possible to create a single sql statement to fetch a list of people where their birthday celebration will fall in next 60 days
I have a table that holds some names and dates of birth.
Reference data:
ABC, 1990-11-23
BCD, 1998-10-21
CDE, 1997-05-02
DEF, 2000-10-15
EFG, 1999-01-10
FGH, 1987-01-15
GHI, 1989-12-19
HIJ, 1986-12-09
I need a SQL query to get the birthday celebrations that are going to happen during the next 60 days, ordered by celebration date.
This is the query that I used till now.
SELECT *
FROM `friends`
WHERE DATE_FORMAT(`dob`, '%m%d') >= DATE_FORMAT(CURDATE(), '%m%d')
  AND DATE_FORMAT(`dob`, '%m%d') <= DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 60 DAY), '%m%d')
ORDER BY DATE_FORMAT(`dob`, '%m%d');
It works OK if it runs during January to October. During November and December, the condition DATE_FORMAT(dob, '%m%d') <= DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 60 DAY), '%m%d') no longer holds. For example, the resulting comparison will be like 1209 <= 0131, which fails.
The result that I expect to get when executed on Dec 2, 2022 is
HIJ, 1986-12-09
GHI, 1989-12-19
EFG, 1999-01-10
FGH, 1987-01-15
How do I do this in one single query?
A:
The thread mentioned in the comment to your question uses things like adding 365.25 days to get this to work. I think this solution might be more reliable.
You can construct this year's birthday by extracting the month and day from the date of birth, and concatenating the current year to it using STR_TO_DATE.
Then you can check using a CASE statement if this year's birthday has already passed, in which case you add a year to that birthday, because that will be the next birthday for name. Then you can check if the result of that CASE statement is BETWEEN today and 60 days from now.
I used a CTE to make it clearer to read. DBfiddle here.
WITH cte as (
SELECT
-- First determine this year's birthday (in the year of the current date)
-- by constructing it from the current year, month of birth and day of birth
STR_TO_DATE(
CONCAT(YEAR(CURDATE()),'-', MONTH(dob), '-', DAY(dob)),
'%Y-%m-%d') AS this_years_birthday,
name,
dob
FROM friends
)
SELECT cte.name, cte.dob
FROM cte
WHERE
-- If the birthday is still in this year
-- Use this year's birthday
-- else add a year to this year's birthday
-- Then filter it to be between today and 60 days from now
CASE WHEN this_years_birthday >= CURDATE()
THEN this_years_birthday
ELSE DATE_ADD(this_years_birthday, INTERVAL 1 YEAR) END
BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 60 DAY)
ORDER BY MONTH(cte.dob) DESC
name | dob
-----|------------
GHI  | 1989-12-19
HIJ  | 1986-12-09
EFG  | 1999-01-10
FGH  | 1987-01-15
|
Is it possible to create a single sql statement to fetch a list of people where their birthday celebration will fall in next 60 days
|
I have a table that holds some names and dates of birth.
Reference data:
ABC, 1990-11-23
BCD, 1998-10-21
CDE, 1997-05-02
DEF, 2000-10-15
EFG, 1999-01-10
FGH, 1987-01-15
GHI, 1989-12-19
HIJ, 1986-12-09
I need a SQL query to get the birthday celebrations that are going to happen during the next 60 days, ordered by celebration date.
This is the query that I used till now.
SELECT *
FROM `friends`
WHERE DATE_FORMAT(`dob`, '%m%d') >= DATE_FORMAT(CURDATE(), '%m%d')
  AND DATE_FORMAT(`dob`, '%m%d') <= DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 60 DAY), '%m%d')
ORDER BY DATE_FORMAT(`dob`, '%m%d');
It works OK if it runs during January to October. During November and December, the condition DATE_FORMAT(dob, '%m%d') <= DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 60 DAY), '%m%d') no longer holds. For example, the resulting comparison will be like 1209 <= 0131, which fails.
The result that I expect to get when executed on Dec 2, 2022 is
HIJ, 1986-12-09
GHI, 1989-12-19
EFG, 1999-01-10
FGH, 1987-01-15
How do I do this in one single query?
|
[
"The thread mentioned in the comment to your question uses things like adding 365.25 days to get this to work. I think this solution might be more reliable.\nYou can construct this years' birthday by extracting the month and day from the date of birth, and concatenating the current year to it using STR_TO_DATE.\nThen you can check using a CASE statement if this years' birthday has already passed, in which case you add a year to that birthday, because that will be the next birthday for name. Then you can check if the result of that CASE statement is BETWEEN today and 60 days from now.\nI used a CTE to make it clearer to read. DBfiddle here.\nWITH cte as (\n SELECT\n -- First determine this years (year of current date) birthday\n -- by constructing it from the current year, month of birth and day of birth\n STR_TO_DATE(\n CONCAT(YEAR(CURDATE()),'-', MONTH(dob), '-', DAY(dob)), \n '%Y-%m-%d') AS this_years_birthday, \n name, \n dob\n FROM friends\n)\nSELECT cte.name, cte.dob\nFROM cte\nWHERE \n -- If the birthday is still in this year\n -- Use this years' birthday\n -- else add a year to this years' birthday\n -- Then filter it to be between today and 60 days from now\n CASE WHEN this_years_birthday >= CURDATE() \n THEN this_years_birthday \n ELSE DATE_ADD(this_years_birthday, INTERVAL 1 YEAR) END \n BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 60 DAY)\nORDER BY MONTH(cte.dob) DESC\n\n\n\n\n\nname\ndob\n\n\n\n\nGHI\n1989-12-19\n\n\nHIJ\n1986-12-09\n\n\nEFG\n1999-01-10\n\n\nFGH\n1987-01-15\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"mysql",
"sql"
] |
stackoverflow_0074657653_mysql_sql.txt
|
Q:
How can I create for each category a horizontal bar plot that consists of shares
I have the following df
type eur_d asia_d amer_d
0 cat1 0.58 0.30 0.12
1 cat2 0.50 0.29 0.21
2 cat3 0.50 0.30 0.20
3 cat4 0.42 0.31 0.27
4 cat5 0.42 0.37 0.20
5 cat6 0.60 0.21 0.19
6 cat7 0.26 0.50 0.24
7 cat8 0.54 0.17 0.30
8 cat9 0.46 0.25 0.29
Ideally I want to create 9 horizontal bars of the same length that show the share of Europe, Asia, and America for each category in different colors.
A:
If you mean stacked horizontal bar chart, this can help.
import matplotlib.pyplot as plt

df.plot.barh(x="type", stacked=True, figsize=(10, 5))
plt.show()
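To make the snippet fully self-contained, the frame from the question can be rebuilt first (values copied from the question; the call is repeated for completeness):
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "type": [f"cat{i}" for i in range(1, 10)],
    "eur_d": [0.58, 0.50, 0.50, 0.42, 0.42, 0.60, 0.26, 0.54, 0.46],
    "asia_d": [0.30, 0.29, 0.30, 0.31, 0.37, 0.21, 0.50, 0.17, 0.25],
    "amer_d": [0.12, 0.21, 0.20, 0.27, 0.20, 0.19, 0.24, 0.30, 0.29],
})
df.plot.barh(x="type", stacked=True, figsize=(10, 5))
plt.show()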
|
How can I create for each category a horizontal bar plot that consists of shares
|
I have the following df
type eur_d asia_d amer_d
0 cat1 0.58 0.30 0.12
1 cat2 0.50 0.29 0.21
2 cat3 0.50 0.30 0.20
3 cat4 0.42 0.31 0.27
4 cat5 0.42 0.37 0.20
5 cat6 0.60 0.21 0.19
6 cat7 0.26 0.50 0.24
7 cat8 0.54 0.17 0.30
8 cat9 0.46 0.25 0.29
Ideally I want to create 9 horizontal bars of the same length that show the share of Europe, Asia, and America for each category in different colors.
|
[
"If you mean stacked horizontal bar chart, this can help.\ndf.plot.barh(x=\"type\", stacked=True, figsize=(10, 5))\nplt.show()\n\n\n"
] |
[
2
] |
[] |
[] |
[
"matplotlib",
"pandas",
"python"
] |
stackoverflow_0074658027_matplotlib_pandas_python.txt
|
Q:
Input placeholder text not showing in Safari
I have a Mailchimp sign-up form with an <input> element that has placeholder text. On Chrome, Firefox, and Microsoft Edge it displays fine but on Safari (desktop and mobile) the placeholder text does not show. It is blank. Code below:
<input type="email" placeholder="Enter your email" onfocus="this.placeholder='Enter your email'" name="EMAIL" id="mce-EMAIL" class="required email" required="" aria-required="true">
I used web-kit fixes to remove the background color from the autocomplete, have white text, and add text styles.
input:-webkit-autofill,
input:-webkit-autofill:active,
input:-webkit-autofill:focus,
input:-webkit-autofill:hover,
select:-webkit-autofill,
select:-webkit-autofill:active,
select:-webkit-autofill:focus,
select:-webkit-autofill:hover,
textarea:-webkit-autofill,
textarea:-webkit-autofill:active,
textarea:-webkit-autofill:focus,
textarea:-webkit-autofill:hover {
background-color: #ED3A38 !important;
transition: background-color 5000s ease-in-out 0s !important;
-webkit-text-fill-color: #FFFFFF !important;
font-size: 6vw !important;
}
As well as followed this guide to change the placeholder text colour and size:
input:-webkit-autofill::first-line,
input:-internal-autofill-previewed,
::-webkit-input-placeholder,
::-moz-placeholder,
:-ms-input-placeholder,
::-ms-input-placeholder,
:-moz-placeholder {
-webkit-text-fill-color: #FFFFFF !important;
font-size: 6vw !important;
letter-spacing: -0.057em;
}
::placeholder {
color: #FFFFFF !important;
opacity: 1 !important;
}
I'm stuck on why this isn't working. Is there something I am missing?
This is the live example in production: link
A:
You can use this code:
<textarea class="concerncommentArea" placeholder="Place your comment here"></textarea>
A:
Posting for future reference. For some reason in Safari, it displays under
Shadow Content (User Agent) and there was an overflow applied on the input::placeholder element.
I overrode that with the following code:
input::placeholder {
overflow: visible;
}
This seems to work. Strange... but if it works, it works.
|
Input placeholder text not showing in Safari
|
I have a Mailchimp sign-up form with an <input> element that has placeholder text. On Chrome, Firefox, and Microsoft Edge it displays fine but on Safari (desktop and mobile) the placeholder text does not show. It is blank. Code below:
<input type="email" placeholder="Enter your email" onfocus="this.placeholder='Enter your email'" name="EMAIL" id="mce-EMAIL" class="required email" required="" aria-required="true">
I used web-kit fixes to remove the background color from the autocomplete, have white text, and add text styles.
input:-webkit-autofill,
input:-webkit-autofill:active,
input:-webkit-autofill:focus,
input:-webkit-autofill:hover,
select:-webkit-autofill,
select:-webkit-autofill:active,
select:-webkit-autofill:focus,
select:-webkit-autofill:hover,
textarea:-webkit-autofill,
textarea:-webkit-autofill:active,
textarea:-webkit-autofill:focus,
textarea:-webkit-autofill:hover {
background-color: #ED3A38 !important;
transition: background-color 5000s ease-in-out 0s !important;
-webkit-text-fill-color: #FFFFFF !important;
font-size: 6vw !important;
}
As well as followed this guide to change the placeholder text colour and size:
input:-webkit-autofill::first-line,
input:-internal-autofill-previewed,
::-webkit-input-placeholder,
::-moz-placeholder,
:-ms-input-placeholder,
::-ms-input-placeholder,
:-moz-placeholder {
-webkit-text-fill-color: #FFFFFF !important;
font-size: 6vw !important;
letter-spacing: -0.057em;
}
::placeholder {
color: #FFFFFF !important;
opacity: 1 !important;
}
I'm stuck on why this isn't working. Is there something I am missing?
This is the live example in production: link
|
[
"you can use this code\n\n\n<textarea class=\"concerncommentArea\" placeholder=\"Place you comment here\"></textarea>\n\n\n\n",
"Posting for future reference. For some reason in Safari, it displays under\nShadow Content (User Agent) and there was an overflow applied on the input::placeholder element.\n\nI overrode that with the following code:\ninput::placeholder {\n overflow: visible;\n}\n\nThis seems to work. Strange... but if it works, it works.\n"
] |
[
0,
0
] |
[] |
[] |
[
"css",
"html",
"webkit"
] |
stackoverflow_0074637393_css_html_webkit.txt
|
Q:
Django: python manage.py migrate does nothing at all
I just started learning Django, and as I try to apply my migrations the first problem occurs. I start the server up, type
python manage.py migrate
and nothing happens. No error, no crash, just no response.
Performing system checks...
System check identified no issues (0 silenced).
You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.
May 01, 2017 - 11:36:27
Django version 1.11, using settings 'website.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
python manage.py migrate
And that's the end of my terminal feed.
I thought maybe it just looks like nothing happens, but no. The changes weren't applied and I can't proceed any further. Any ideas on what's going on?
A:
Well, you say that you first start the server and then type in the commands. That's also what the terminal feed you shared shows.
Do not run the server if you want to run management commands using manage.py.
Hit Ctrl+C to exit the server and then run your migration commands, it will work.
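For example, a typical session looks like this (stop the dev server with Ctrl+C first):
python manage.py migrate
python manage.py runserver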
A:
Try:
python manage.py makemigrations
python manage.py migrate
A:
@adam-karolczak and all
If there are multiple Django projects, it can happen that DJANGO_SETTINGS_MODULE is set to some other app in the environment variables; the current project's manage.py will then not point to the current project's settings, hence the error.
So, confirm that DJANGO_SETTINGS_MODULE in fact points to the settings.py of the current project.
Close the project if it's running, viz. Ctrl+C.
You can also check that the server is not running (Linux) by
ps -ef | grep runserver
Then kill the process IDs if they exist.
Once you have confirmed the settings.py in DJANGO_SETTINGS_MODULE is for the project you are having issues with,
run the following; it should resolve the problem.
python manage.py makemigrations
python manage.py migrate
Hope it helps.
A:
I was getting the same error.
Running these two commands in the terminal
python manage.py makemigrations
python manage.py migrate
and then
python manage.py runserver
solved my issues.
Thanks
A:
Have you tried it with the app name as a parameter?
python manage.py makemigrations <app_name>
A:
I had the same issue and the problem was that there was a pg_dump script running at the same time I was trying to migrate. After the dump was completed, migrations ran successfully.
A:
Check that the app exists in INSTALLED_APPS; if not, add it
Check the model for default attributes
Run these two commands in the terminal
python manage.py makemigrations
python manage.py migrate
A:
First exit the currently running web server by typing Ctrl + C
Then run python manage.py migrate
The warning is due to the initial database not being configured, i.e. the migrations not having been applied.
|
Django: python manage.py migrate does nothing at all
|
I just started learning Django, and as I try to apply my migrations the first problem occurs. I start the server up, type
python manage.py migrate
and nothing happens. No error, no crash, just no response.
Performing system checks...
System check identified no issues (0 silenced).
You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.
May 01, 2017 - 11:36:27
Django version 1.11, using settings 'website.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
python manage.py migrate
And that's the end of my terminal feed.
I thought maybe it just looks like nothing happens, but no. The changes weren't applied and I can't proceed any further. Any ideas on what's going on?
|
[
"Well, you say that you first start the server and then type in the commands. That's also what the terminal feed you shared shows. \nDo not run the server if you want to run management commands using manage.py.\nHit Ctrl+C to exit the server and then run your migration commands, it will work.\n",
"Try: \npython manage.py makemigrations\npython manage.py migrate\n\n",
"@adam-karolczak n all\nIf there are multiple DJANGO Projects, it can happen that DJANGO_SETTINGS_MODULE is set to some other app in environment varibles, the current project manage.py will not point to current project settings thus the error.\nSo, confirm DJANGO_SETTINGS_MODULE in fact points to the settings.py of current project.\nClose the project if its running viz. ctrl+C. \nYou can also check the server is not running ( linux ) by\nps -ef | grep runserver\n\nThen kill the process ids if they exist.\nIf you confirmed settings.py in DJANGO_MODULE_SETTINGS is for the project you are having issue.\nRun the following it should resolve.\npython manage.py makemigrations\npython manage.py migrate\n\nHope it helps.\n",
"I was getting the same error \nrunning this 2 command in terminal\n python manage.py makemigrations\n python manage.py migrate\n\nand then\n python manage.py runserver\n\nsolved my issues.\nThanks\n",
"Have you tried with parameter?\npython manage.py makemigrations <app_name>\n",
"I had the same issue and the problem was that there was a pg_dump script running at the same time I was trying to migrate. After the dump was completed, migrations ran successfully. \n",
"\nCheck that INSTALL_APPS app exists, if not add it\n\nChecks the model for default attributes\n\nRunning this 2 command in terminal\npython manage.py makemigrations\npython manage.py migration\n\n\n",
"\nFirst exit of the present web server by typing Ctrl + C\nThen run python manage.py migrate\n\nThe Warning is due to not configuring the initial database or migrating.\n"
] |
[
12,
10,
2,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"django",
"python",
"python_3.5",
"python_3.x"
] |
stackoverflow_0043718536_django_python_python_3.5_python_3.x.txt
|
Q:
pyqt5_tools designer.exe does not exist
I have installed PyQt5 with the command pip install pyqt5 pyqt5-tools. Then I wanted to find the path to designer.exe. However, I could not find it in the C:\Users\User\AppData\Local\Programs\Python\Python38\Lib\site-packages\pyqt5_tools directory. These are the contents of that folder.
A:
using the pip install pyqt5-tools method I found the designer on this path:
C:\Users\user\AppData\Local\Programs\Python\Python39\Lib\site-packages\qt5_applications\Qt\bin
A:
On my system QT Designer is saved under C:\Users\User\AppData\Local\Qt Designer
EDIT:
It seems like I installed QT Designer differently.
You can use pip install PyQt5Designer.
Then it should be in the path I gave.
A:
pip install pyqt5-tools
Check the path your_python_installed\Lib\site-packages\qt5_applications\Qt\bin\designer.exe
A:
pip install pyqt5-tools
I found designer.exe in:
%APPDATA%\Roaming\Python\[Version]\site-packages\qt5_applications\Qt\bin
A:
I found it on path
venv\Lib\site-packages\qt5_applications\Qt\bin
A:
If this helps someone: in my case there were two "similar" PyQt5 folders inside site-packages, one starting with pyqt5 and the other with just qt5; I found the designer app in the qt5_applications folder.
A:
I searched the site-packages folder and found out that the designer.exe now exists under the ./Lib/site-packages/qt5_applications/Qt/bin
I am currently using Python 3.9.13
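If you are unsure where pip placed it, a small standard-library search can locate it for you (a sketch; looking for designer.exe assumes Windows):
import pathlib
import site

# Walk every site-packages directory and print any designer.exe found
for root in site.getsitepackages():
    for hit in pathlib.Path(root).rglob("designer.exe"):
        print(hit)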
|
pyqt5_tools designer.exe does not exist
|
I have installed PyQt5 with the command pip install pyqt5 pyqt5-tools. Then I wanted to find the path to designer.exe. However, I could not find it in the C:\Users\User\AppData\Local\Programs\Python\Python38\Lib\site-packages\pyqt5_tools directory. These are the contents of that folder.
|
[
"using the pip install pyqt5-tools method I found the designer on this path:\nC:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python39\\Lib\\site-packages\\qt5_applications\\Qt\\bin\n",
"On my system QT Designer is saved under C:\\Users\\User\\AppData\\Local\\Qt Designer\nEDIT:\nIt seems like I installed QT Designer differently.\nYou can use pip install PyQt5Designer.\nThen it should be in the path I gave.\n",
"pip install pyqt5-tools\nCheck the path your_python_installed\\Lib\\site-packages\\qt5_applications\\Qt\\bin\\designer.exe\n",
"pip install pyqt5-tools\n\nI Found designer.exe in:\n%APPDATA%\\Roaming\\Python\\[Version]\\site-packages\\qt5_applications\\Qt\\bin\n\n",
"I found it on path\nvenv\\Lib\\site-packages\\qt5_applications\\Qt\\bin\n",
"If this helps someone, in my case there were two \"similar\" of pyqt5 folders inside site-packages, ones starts with pyqt5 and others with just qt5, I found the designer app in the folder qt5_aplications.\n",
"I searched the site-packages folder and found out that the designer.exe now exists under the ./Lib/site-packages/qt5_applications/Qt/bin \nI am currently using Python 3.9.13\n"
] |
[
10,
4,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"pip",
"python"
] |
stackoverflow_0065007143_pip_python.txt
|
Q:
React-tooltip doesn't show on conditional render
react-tooltip is awesome, but I'm running into issues with a conditionally rendered component.
I have a refresh icon that only shows when props.refreshImages is true:
Topbar.js
import React from 'react'
export default function Topbar (props) {
  return (
    <div>
      {props.refreshImages &&
        <i data-tip="Refresh the images" className="fas fa-sync-alt" ></i>}
    </div>
  )
}
App.js
import React from 'react'
import Topbar from './Topbar'
import ReactTooltip from 'react-tooltip'
export default function App() {
  return (
    <>
      <ReactTooltip />
      <Topbar refreshImages={true}/>
    </>
  )
}
Simplified example. But when the refresh icon is hidden and shown again (props.refreshImages is false and then true) the tooltips don't display.
I've already tried moving <ReactTooltip /> into Topbar.js and running ReactTooltip.rebuild() on every render, none have worked. For props.refreshImages I'm actually using Redux.
Thanks in advance for the help.
A:
You need to rebuild your tooltips with ReactTooltip.rebuild after a render, not before it.
Assuming you are using functional components with hooks, you can do so with useEffect hooks
import React, { useEffect } from 'react'
import ReactTooltip from 'react-tooltip'
import Topbar from './Topbar'

export default function App(props) {
  useEffect(() => {
    ReactTooltip.rebuild();
  }, [props.refreshImages])
  return (
    <>
      <ReactTooltip />
      <Topbar refreshImages={props.refreshImages}/>
    </>
  )
}
or with a class component you would write the logic in componentDidUpdate
componentDidUpdate(prevProps) {
if(prevProps.showItem !== this.props.showItem) {
ReactTooltip.rebuild();
}
}
Sample demo
A:
ReactTooltip.rebuild();
Add this in the componentDidUpdate function; whenever the state updates, the function will be called and will rebuild the tooltip.
|
React-tooltip doesn't show on conditional render
|
react-tooltip is awesome, but I'm running into issues with a conditionally rendered component.
I have a refresh icon that only shows when props.refreshImages is true:
Topbar.js
import React from 'react'
export default function Topbar (props) {
  return (
    <div>
      {props.refreshImages &&
        <i data-tip="Refresh the images" className="fas fa-sync-alt" ></i>}
    </div>
  )
}
App.js
import React from 'react'
import Topbar from './Topbar'
import ReactTooltip from 'react-tooltip'
export default function App() {
  return (
    <>
      <ReactTooltip />
      <Topbar refreshImages={true}/>
    </>
  )
}
Simplified example. But when the refresh icon is hidden and shown again (props.refreshImages is false and then true) the tooltips don't display.
I've already tried moving <ReactTooltip /> into Topbar.js and running ReactTooltip.rebuild() on every render, none have worked. For props.refreshImages I'm actually using Redux.
Thanks in advance for the help.
|
[
"You need to rebuild your tooltips with ReactTooltip.rebuild post a render and not before it.\nAssuming you are using functional components with hooks, you can do so with useEffect hooks\nexport default function App(props) {\n useEffect(() => {\n ReactTooltip.rebuild();\n }, [props.refreshImages])\n return (\n <ReactTooltip />\n <Topbar refreshImages={props.refreshImages}/>\n )\n}\n\nor with a class component you would write the logic in componentDidUpdate\n componentDidUpdate(prevProps) {\n if(prevProps.showItem !== this.props.showItem) {\n ReactTooltip.rebuild();\n }\n }\n\nSample demo\n",
"ReactTooltip.rebuild();\nadd this in componentDidUpdate function whenever state will update the function will callup and rebuild the Tooltip\n"
] |
[
20,
0
] |
[] |
[] |
[
"javascript",
"react_redux",
"react_tooltip",
"reactjs"
] |
stackoverflow_0062043514_javascript_react_redux_react_tooltip_reactjs.txt
|
Q:
Understanding concepts. Check if a member is static
Let's say we have a simple concept like:
template <typename T>
concept MyConcept = requires {
T::value == 42;
};
In my understanding the concept says that if the code T::value == 42 is valid for the type T, it passes. So value MUST be a static member, right?
I have a struct
struct Test { int value = 0; };
and the next template function
template <MyConcept T>
void call() { /*...*/ }
and when I try to do this:
int main()
{
call<Test>();
}
It works!
And the question is: why does it work? Test::value == 42 is not valid code for the type Test.
I found a method to fix it like:
template <typename T>
concept MyConcept = requires {
*(&T::value) == 42;
};
And it "works" as expected:
<source>:6:20: note: the required expression '((* & T::value) == 42)' is invalid
And this concept works for the static value only, as it should be.
But why does the T::value == 42 work?
godbolt: https://godbolt.org/z/d3GPETEq9
UPD: + example https://godbolt.org/z/d8qfzK9b6
A:
And this concept works for the static value only, as it should be. But why does the T::value == 42 work?
Because there's actually an exception in the rule you think will cause this to fail.
The rule, from [expr.prim.id.general]/3 is, emphasis mine:
An id-expression that denotes a non-static data member or non-static member function of a class can only be used:
as part of a class member access in which the object expression refers to the member's class51 or a class derived from that class, or
to form a pointer to member ([expr.unary.op]), or
if that id-expression denotes a non-static data member and it appears in an unevaluated operand.
[Example 3:
struct S {
int m;
};
int i = sizeof(S::m); // OK
int j = sizeof(S::m + 42); // OK
— end example]
That third bullet right there: T::value is usable as an unevaluated operand. All the expressions you check in a requirement are unevaluated. So T::value works for non-static data members just fine.
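If the goal is a concept that accepts only static members, one way (a sketch of my own, not part of the answer above) is to inspect the type of &T::value instead: taking the address of a static member yields an ordinary pointer, while a non-static member yields a pointer-to-member:
#include <type_traits>

template <typename T>
concept HasStaticValue = requires {
    T::value;  // the member must exist (valid even for non-static, per the rule above)
    requires !std::is_member_object_pointer_v<decltype(&T::value)>;
};

struct S1 { static constexpr int value = 42; };  // satisfies the concept
struct S2 { int value = 0; };                    // rejected: &S2::value is int S2::*

static_assert(HasStaticValue<S1>);
static_assert(!HasStaticValue<S2>);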
|
Understanding concepts. Check if a member is static
|
Let's say we have a simple concept like:
template <typename T>
concept MyConcept = requires {
T::value == 42;
};
In my understanding the concept says that if the code T::value == 42 is valid for the type T, the check passes. So value MUST be a static member, right?
I have a struct
struct Test { int value = 0; };
and the next template function
template <MyConcept T>
void call() { /*...*/ }
and when I try to do this:
int main()
{
call<Test>();
}
It works!
And the question is: why does it work? Test::value == 42 is not valid code for the type Test.
I found a method to fix it like:
template <typename T>
concept MyConcept = requires {
*(&T::value) == 42;
};
And it "works" as expected:
<source>:6:20: note: the required expression '((* & T::value) == 42)' is invalid
And this concept works for the static value only, as it should be.
But why does the T::value == 42 work?
godbolt: https://godbolt.org/z/d3GPETEq9
UPD: + example https://godbolt.org/z/d8qfzK9b6
|
[
"\nAnd this concept works for the static value only, as it should be. But why does the T::value == 42 work?\n\nBecause there's actually an exception in the rule you think will cause this to fail.\nThe rule, from [expr.prim.id.general]/3 is, emphasis mine:\n\nAn id-expression that denotes a non-static data member or non-static member function of a class can only be used:\n\nas part of a class member access in which the object expression refers to the member's class51 or a class derived from that class, or\nto form a pointer to member ([expr.unary.op]), or\nif that id-expression denotes a non-static data member and it appears in an unevaluated operand.\n\n[Example 3:\nstruct S {\n int m;\n};\nint i = sizeof(S::m); // OK\nint j = sizeof(S::m + 42); // OK\n\n— end example]\n\nThat third bullet right there: T::value is usable as an unevaluated operand. All the expressions you check in a requirement are unevaluated. So T::value works for non-static data members just fine.\n"
] |
[
3
] |
[] |
[] |
[
"c++",
"c++20",
"c++_concepts",
"c++_templates",
"templates"
] |
stackoverflow_0074654983_c++_c++20_c++_concepts_c++_templates_templates.txt
|
Q:
Postgres CloudSQL Migration: ERROR: permission denied to set parameter "log_min_duration_statement"
I am trying to use the Database Migration Service to migrate an existing database into CloudSQL.
When I start the migration, I receive the following error:
finished setup replication with errors: [api_production]: error importing schema: failed to restore schema: stderr=pg_restore: while PROCESSING TOC: pg_restore: from TOC entry 3997; 0 0 DATABASE PROPERTIES api_production postgres pg_restore: error: could not execute query: ERROR: permission denied to set parameter "log_min_duration_statement" Command was: ALTER DATABASE api_production SET log_min_duration_statement TO '500ms'; pg_restore: warning: errors ignored on restore: 1 , stdout=
How can I continue the migration, ignoring the SET PARAMETER statement?
A:
I have finally been able to start the migration by resetting the parameters on the source database (setting them to match was insufficient).
This can be done from a postgres console on the source database:
reset log_min_duration_statement;
ALTER DATABASE <database_name> RESET log_min_duration_statement;
ALTER DATABASE postgres RESET log_min_duration_statement;
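As a quick sanity check before retrying the migration, this query (my own addition, using the standard pg_db_role_setting catalog) lists any per-database overrides that survived the reset; an empty result means you are clear:
-- List remaining per-database overrides of log_min_duration_statement
SELECT d.datname, cfg
FROM pg_db_role_setting s
JOIN pg_database d ON d.oid = s.setdatabase
CROSS JOIN LATERAL unnest(s.setconfig) AS cfg
WHERE cfg LIKE 'log_min_duration_statement=%';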
|
Postgres CloudSQL Migration: ERROR: permission denied to set parameter "log_min_duration_statement"
|
I am trying to use the Database Migration Service to migrate an existing database into CloudSQL.
When I start the migration, I receive the following error:
finished setup replication with errors: [api_production]: error importing schema: failed to restore schema: stderr=pg_restore: while PROCESSING TOC: pg_restore: from TOC entry 3997; 0 0 DATABASE PROPERTIES api_production postgres pg_restore: error: could not execute query: ERROR: permission denied to set parameter "log_min_duration_statement" Command was: ALTER DATABASE api_production SET log_min_duration_statement TO '500ms'; pg_restore: warning: errors ignored on restore: 1 , stdout=
How can I continue the migration, ignoring the SET PARAMETER statement?
|
[
"I have finally been able to start the migration by resetting the parameters on the source database (setting them to match was insufficient).\nThis can be done from a postgres console on the source database:\nreset log_min_duration_statement;\nALTER DATABASE <database_name> RESET log_min_duration_statement;\nALTER DATABASE postgres RESET log_min_duration_statement;\n\n"
] |
[
0
] |
[
"The Database Migration Service is a managed Google Cloud service, it has restricted access to certain system procedures and tables that require advanced privileges.The 'postgres' user is the most privileged user available in Cloud SQL, but it is not a Postgres superadmin. See public docs for more information about PostgreSQL Users.\nThere are some other parameters that you could run the \"ALTER Database\" command to change; however, \"log_statement\" and \"log_min_duration_statement\" are unfortunately not examples of these parameters.\nThe PostgreSQL documentation also documents this in particular \"Certain variables cannot be set this way, or can only be set by a superuser.\"\nHowever, you can change the particular setting in Console via the Flags on the database edit screen and remove these statements from the Migration job,to avoid these failure errors.\nPlease refer to the documentation to know more about configuring database flags.\n"
] |
[
-1
] |
[
"google_cloud_sql",
"migration",
"postgresql"
] |
stackoverflow_0074489411_google_cloud_sql_migration_postgresql.txt
|
Q:
Spring SAML Sample application returns Could not initialize class org.apache.commons.ssl.TrustMaterial
I have been trying to get the Spring SAML Sample application up and running, but have been struggling for days, and searching the internet with no success. I have followed all the steps in the Quick start guide.... when I click the 'Start single sign-on' button, I get redirected to SSOCircle, I log in, and get redirected back to the sample application, but it returns the following error:
Message:
Could not initialize class org.apache.commons.ssl.TrustMaterial
StackTrace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.commons.ssl.TrustMaterial
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
The stack trace from Tomcat is as follows:
java.io.IOException: DerInputStream.getLength(): lengthTag=109, too big.
at sun.security.util.DerInputStream.getLength(DerInputStream.java:561)
at sun.security.util.DerValue.init(DerValue.java:365)
at sun.security.util.DerValue.<init>(DerValue.java:320)
at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1220)
at java.security.KeyStore.load(KeyStore.java:1214)
at org.apache.commons.ssl.KeyStoreBuilder.tryJKS(KeyStoreBuilder.java:450)
at org.apache.commons.ssl.KeyStoreBuilder.parse(KeyStoreBuilder.java:416)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:207)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:160)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:165)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:170)
at org.apache.commons.ssl.TrustMaterial.<clinit>(TrustMaterial.java:83)
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
java.io.IOException: DerInputStream.getLength(): lengthTag=109, too big.
at sun.security.util.DerInputStream.getLength(DerInputStream.java:561)
at sun.security.util.DerValue.init(DerValue.java:365)
at sun.security.util.DerValue.<init>(DerValue.java:320)
at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1220)
at java.security.KeyStore.load(KeyStore.java:1214)
at org.apache.commons.ssl.KeyStoreBuilder.tryJKS(KeyStoreBuilder.java:450)
at org.apache.commons.ssl.KeyStoreBuilder.parse(KeyStoreBuilder.java:416)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:207)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:160)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:165)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:170)
at org.apache.commons.ssl.TrustMaterial.<clinit>(TrustMaterial.java:83)
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
java.security.KeyStoreException: failed to extract any certificates or private keys - maybe bad password?
at org.apache.commons.ssl.KeyStoreBuilder.parse(KeyStoreBuilder.java:436)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:207)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:160)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:165)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:170)
at org.apache.commons.ssl.TrustMaterial.<clinit>(TrustMaterial.java:83)
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Any help would be much appreciated!
A:
You're most likely hitting a bug in the underlying OpenSAML and SSL library which presumes that file JAVA_HOME/lib/security/cacerts or JAVA_HOME/lib/security/jssecacerts is present and can be read as a JKS or PKCS12 keystore. In your case the file is probably corrupted.
Please try updating the cacerts file in your JDK with the file from the original installation. Make sure you can read it using keytool -list -keystore cacerts with either an empty password or password "changeit".
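To reproduce the failure outside Tomcat, here is a minimal sketch (my own addition) of what the commons-ssl static initializer effectively does; if this small program throws, TrustMaterial will fail to initialize in the same way:
import java.io.FileInputStream;
import java.security.KeyStore;

public class CacertsCheck {
    public static void main(String[] args) throws Exception {
        String path = System.getProperty("java.home") + "/lib/security/cacerts";
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, "changeit".toCharArray()); // default cacerts password
        }
        System.out.println("Loaded " + ks.size() + " trusted entries from " + path);
    }
}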
A:
Same issue; upgrading to not-yet-commons-ssl-0.3.16.jar from the 0.3.9 bundled with the saml-sample fixed it.
A:
I'm on a Mac with Java 1.6; here's what I found.
TrustMaterial.java runs this static init code:
String pathToCacerts = javaHome + "/lib/security/cacerts";
String pathToJSSECacerts = javaHome + "/lib/security/jssecacerts";
TrustMaterial cacerts = null;
TrustMaterial jssecacerts = null;
try {
File f = new File(pathToCacerts);
if (f.exists()) {
cacerts = new TrustMaterial(pathToCacerts);
}
}
catch (Exception e) {
e.printStackTrace();
}
try {
File f = new File(pathToJSSECacerts);
if (f.exists()) {
jssecacerts = new TrustMaterial(pathToJSSECacerts);
}
}
catch (Exception e) {
e.printStackTrace();
}
CACERTS = cacerts;
JSSE_CACERTS = jssecacerts;
if (JSSE_CACERTS != null) {
DEFAULT = JSSE_CACERTS;
} else {
DEFAULT = CACERTS;
}
As mentioned above, there is a bug in assuming that the JAVA_HOME/lib/security/... files are valid keystores. If neither of these files is a valid keystore, both CACERTS and JSSE_CACERTS end up null, and this line (line 127) causes the NPE because JSSE_CACERTS is null:
this.jks = CACERTS != null ? CACERTS.jks : JSSE_CACERTS.jks;
So, why are both null?
When I look at mine on my filesystem:
file /Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts
I get this:
/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts: broken symbolic link to /System/Library/Java/Support/CoreDeploy.bundle/Contents/Home/lib/security/cacerts
That's a symlink to an invalid cacerts keystore. What I did was get a good copy of a JDK1.6 keystore via this command:
sudo find / -name 'cacerts' 2>/dev/null
/some/other/path/to/cacerts
Then, do file /some/other/path/to/cacerts to make sure you get a valid file:
/some/other/path/to/cacerts: Java KeyStore
Copy that to /Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts to replace your broken symlink and verify it's good:
file /Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts
/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts: Java KeyStore
Once that's a valid keystore, this code will work.
What a pain in the ass.
A:
I got this (or a very similar) problem with OpenJDK 11; it turns out to be this bug:
https://github.com/narupley/not-going-to-be-commons-ssl/issues/5
defaultKeystoreType was pkcs12 (apparently the default in newer Java versions), and there seems to be a bug in not-going-to-be-commons-ssl 0.3.19 that causes this problem, since pkcs12, unlike jks, requires a password to read the certificates.
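One possible workaround (my own sketch; treat it as an assumption and verify it against your commons-ssl version) is to force the JVM's default keystore type back to jks before TrustMaterial's static initializer runs, for example very early in application startup:
import java.security.KeyStore;
import java.security.Security;

public class KeystoreTypeWorkaround {
    public static void main(String[] args) {
        // Must run before the TrustMaterial class is first referenced
        Security.setProperty("keystore.type", "jks");
        System.out.println(KeyStore.getDefaultType()); // prints "jks"
    }
}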
A:
Check the JRE Installed Extensions: https://docs.oracle.com/javase/tutorial/ext/basics/install.html
In my case the problem was that I had another bcprov jar inside the external JRE library path "/jre/lib/ext" that caused a conflict with the bcprov in the Maven pom file; after removing it from "/jre/lib/ext" the problem was fixed.
|
Spring SAML Sample application returns Could not initialize class org.apache.commons.ssl.TrustMaterial
|
I have been trying to get the Spring SAML Sample application up and running, but have been struggling for days, and searching the internet with no success. I have followed all the steps in the Quick start guide.... when I click the 'Start single sign-on' button, I get redirected to SSOCircle, I log in, and get redirected back to the sample application, but it returns the following error:
Message:
Could not initialize class org.apache.commons.ssl.TrustMaterial
StackTrace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.commons.ssl.TrustMaterial
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
The stack trace from Tomcat is as follows:
java.io.IOException: DerInputStream.getLength(): lengthTag=109, too big.
at sun.security.util.DerInputStream.getLength(DerInputStream.java:561)
at sun.security.util.DerValue.init(DerValue.java:365)
at sun.security.util.DerValue.<init>(DerValue.java:320)
at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1220)
at java.security.KeyStore.load(KeyStore.java:1214)
at org.apache.commons.ssl.KeyStoreBuilder.tryJKS(KeyStoreBuilder.java:450)
at org.apache.commons.ssl.KeyStoreBuilder.parse(KeyStoreBuilder.java:416)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:207)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:160)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:165)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:170)
at org.apache.commons.ssl.TrustMaterial.<clinit>(TrustMaterial.java:83)
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
java.io.IOException: DerInputStream.getLength(): lengthTag=109, too big.
at sun.security.util.DerInputStream.getLength(DerInputStream.java:561)
at sun.security.util.DerValue.init(DerValue.java:365)
at sun.security.util.DerValue.<init>(DerValue.java:320)
at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1220)
at java.security.KeyStore.load(KeyStore.java:1214)
at org.apache.commons.ssl.KeyStoreBuilder.tryJKS(KeyStoreBuilder.java:450)
at org.apache.commons.ssl.KeyStoreBuilder.parse(KeyStoreBuilder.java:416)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:207)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:160)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:165)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:170)
at org.apache.commons.ssl.TrustMaterial.<clinit>(TrustMaterial.java:83)
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
java.security.KeyStoreException: failed to extract any certificates or private keys - maybe bad password?
at org.apache.commons.ssl.KeyStoreBuilder.parse(KeyStoreBuilder.java:436)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:207)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:160)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:165)
at org.apache.commons.ssl.TrustMaterial.<init>(TrustMaterial.java:170)
at org.apache.commons.ssl.TrustMaterial.<clinit>(TrustMaterial.java:83)
at org.opensaml.xml.security.x509.X509Util.decodeCertificate(X509Util.java:351)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificate(KeyInfoHelper.java:201)
at org.opensaml.xml.security.keyinfo.KeyInfoHelper.getCertificates(KeyInfoHelper.java:176)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.extractCertificates(InlineX509DataProvider.java:192)
at org.opensaml.xml.security.keyinfo.provider.InlineX509DataProvider.process(InlineX509DataProvider.java:126)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChild(BasicProviderKeyInfoCredentialResolver.java:300)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfoChildren(BasicProviderKeyInfoCredentialResolver.java:256)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.processKeyInfo(BasicProviderKeyInfoCredentialResolver.java:190)
at org.opensaml.xml.security.keyinfo.BasicProviderKeyInfoCredentialResolver.resolveFromSource(BasicProviderKeyInfoCredentialResolver.java:149)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.security.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:275)
at org.springframework.security.saml.trust.MetadataCredentialResolver.retrieveFromMetadata(MetadataCredentialResolver.java:123)
at org.opensaml.security.MetadataCredentialResolver.resolveFromSource(MetadataCredentialResolver.java:178)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:57)
at org.opensaml.xml.security.credential.AbstractCriteriaFilteringCredentialResolver.resolve(AbstractCriteriaFilteringCredentialResolver.java:37)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:98)
at org.opensaml.xml.signature.impl.ExplicitKeySignatureTrustEngine.validate(ExplicitKeySignatureTrustEngine.java:49)
at org.springframework.security.saml.websso.AbstractProfileBase.verifySignature(AbstractProfileBase.java:267)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertionSignature(WebSSOProfileConsumerImpl.java:419)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.verifyAssertion(WebSSOProfileConsumerImpl.java:292)
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:214)
at org.springframework.security.saml.SAMLAuthenticationProvider.authenticate(SAMLAuthenticationProvider.java:82)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.saml.SAMLProcessingFilter.attemptAuthentication(SAMLProcessingFilter.java:84)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:195)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.saml.metadata.MetadataGeneratorFilter.doFilter(MetadataGeneratorFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:107)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:76)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:934)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:90)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:515)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1012)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:642)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1555)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Any help would be much appreciated!
|
[
"You're most likely hitting a bug in the underlying OpenSAML and SSL library which presumes that file JAVA_HOME/lib/security/cacerts or JAVA_HOME/lib/security/jssecacerts is present and can be read as a JKS or PKCS12 keystore. In your case the file is probably corrupted.\nPlease try updating the cacerts file in your JDK with the file from the original installation. Make sure you can read it using keytool -list -keystore cacerts with either an empty password or password \"changeit\".\n",
"Same issue, upgraded to not-yet-commons-ssl-0.3.16.jar from the bundled 3.9 of saml-sample and it worked.\n",
"I'm on a mac with Java 1.6 - here's what I found:\nTrustMaterial.java is running static init code -> \n String pathToCacerts = javaHome + \"/lib/security/cacerts\";\n String pathToJSSECacerts = javaHome + \"/lib/security/jssecacerts\";\n TrustMaterial cacerts = null;\n TrustMaterial jssecacerts = null;\n try {\n File f = new File(pathToCacerts);\n if (f.exists()) {\n cacerts = new TrustMaterial(pathToCacerts);\n }\n }\n catch (Exception e) {\n e.printStackTrace();\n }\n try {\n File f = new File(pathToJSSECacerts);\n if (f.exists()) {\n jssecacerts = new TrustMaterial(pathToJSSECacerts);\n }\n }\n catch (Exception e) {\n e.printStackTrace();\n }\n\n CACERTS = cacerts;\n JSSE_CACERTS = jssecacerts;\n if (JSSE_CACERTS != null) {\n DEFAULT = JSSE_CACERTS;\n } else {\n DEFAULT = CACERTS;\n }\n\nNow, above there is a bug mentioned about assuming JAVA_HOME/lib/security/... files are valid keystores. If neither of these files are valid keystores both CACERTS and JSSE_CACERTS are null and this line at line 127 causes the NPE because JSSE_CACERTS is null:\nthis.jks = CACERTS != null ? CACERTS.jks : JSSE_CACERTS.jks;\n\nSo, why are both null?\nWhen I look at mine on my filesystem:\nfile /Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts\n\nI get this:\n\n\n/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts: broken symbolic link to /System/Library/Java/Support/CoreDeploy.bundle/Contents/Home/lib/security/cacerts\n\n\nThat's a symlink to an invalid cacerts keystore. What I did was get a good copy of a JDK1.6 keystore via this command:\nsudo find / -name 'cacerts' 2>/dev/null\n\n\n/some/other/path/to/cacerts\n\n\nThen, do file /some/other/path/to/cacerts to make sure you get a valid file:\n\n\n/some/other/path/to/cacerts: Java KeyStore\n\n\nCopy that to /Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts to replace your broken symlink and verify it's good:\nfile /Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts\n\n\n/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/security/cacerts: Java KeyStore\n\n\nOnce that's a valid keystore, this code will work.\nWhat a pain in the ass.\n",
"I got this or very similar problem with openjdk11, turns out to be this bug:\nhttps://github.com/narupley/not-going-to-be-commons-ssl/issues/5\ndefaultKeystoreType was pkcs12 (seems to be in newer java version) and there seems to be a bug in not-going-to-be-commons 0.3.19 that causes this problem since pkcs12 as opposed to jks requires a password to see the certificates.\n",
"check jre Installed Extensions https://docs.oracle.com/javase/tutorial/ext/basics/install.html\nIn my case the problem was that I had had another bcprov jar inside the external jre library path \"/jre/lib/ext\" that was caused a conflict with the bcprov in the maven pom file,** after removing** from \"/jre/lib/ext\" the **problem **was fixed\n"
] |
[
6,
2,
2,
0,
0
] |
[] |
[] |
[
"java",
"saml_2.0",
"spring",
"spring_saml"
] |
stackoverflow_0027792138_java_saml_2.0_spring_spring_saml.txt
|
Q:
How to use a variable in Adobe's pdf embed API as URL-value?
I use the Adobe PDF Embed API (https://www.adobe.io/apis/documentcloud/dcsdk/docs.html?view=view) to display pdfs within modals on a site of mine.
As I want the modals to only change in one tiny detail (the file-url of the pdf displayed there) I wanted to use the filename dynamically. So I did that:
document.addEventListener("adobe_dc_view_sdk.ready", function() {
var adobeDCView = new AdobeDC.View({
clientId: "xyz",
divId: "adobe-dc-view"
});
adobeDCView.previewFile({
content: {
location: {
var model_filename_chosen = "https://www.URL.com/files/" +
var model_filename;
// Does get printed correctly
console.log(model_filename_chosen);
//doesn't get parsed at all
url: model_filename_chosen
}
},
metaData: {
fileName: "Something"
}
}, {
});
});
And that in the header before it
function openFahrzeugModal(data) {
x = new bootstrap.Modal(document.getElementById("modalFahrzeug"));
x.toggle();
$('#input_model_hidden').val(data);
var model_filename = data;
console.log(data);
}
And the trigger for those then looks something like this:
<a onclick="openFahrzeugModal('myfile1.pdf')">
So any log does get printed correctly but the pdf isn't shown at all, the modal opens up correctly. The variable does get printed in other elements of the modal correctly but within the Adobe embed-thing the result is empty. I do use the same domain for the code and the file and my API-key is valid. As soon as I enter a static URL (the same basically as the one that gets printed on the console) the pdf gets shown correctly.
Why is that and what would I need to fix?
A:
It's a timing issue. The code you have on top will run as soon as our library is loaded. What you want instead is for the previewFile code to run only on user input. I'd modify openFahrzeugModal such that it will run that part. Something like this:
function openFahrzeugModal(data) {
x = new bootstrap.Modal(document.getElementById("modalFahrzeug"));
x.toggle();
$('#input_model_hidden').val(data);
var model_filename = data;
var model_filename_chosen = "https://www.URL.com/files/" + model_filename;
adobeDCView.previewFile({
content: {
location: {
url: model_filename_chosen
}
},
metaData: {
fileName: "Something"
}
}, {
});
}
I typed that by hand so it may not be perfect. Do know however that you don't want to run your click event until the library is ready. Normally I'd assign the click handler in JS, not HTML, and do it inside the event handler for adobe_dc_view_sdk.ready.
A:
Thanks Raymond, very good help with that script.
I copied and fixed it
function OpenPdf(data) {
let adobeDCView = new AdobeDC.View({clientId: ADOBE_KEY, divId: "pdfDisplay" });
var model_filename = data;
var model_filename_chosen = "your URL" + model_filename;
adobeDCView.previewFile({
content: {location: {url: model_filename_chosen}},
metaData: {fileName: "yournamefile.pdf" }
});
}
I'm a ColdFusion developer and have a JSON array, and in it I have the PDF file name.
I did this loop with the call of the function
<p id="navLinks" style="display:none">
<cfloop index="i" from="1" to="#arrayLen(res.contracts)#">
<cfoutput>
Periodo: #res.contratos[i].period#> -- <button onclick="OpenPdf('#res.contracts[i].document#')">Open contract</button><br>
</cfoutput>
</cfloop>
</p>
<div id="pdfDisplay"></div>
But in another JSON I have a field where I have the PDF file in base64 format. Can the PDF API read binary variables?
Or do I have to write it to disk first and later pass the PDF's URL?
I would like to hide the PDF URL for security reasons.
Currently I have this, which opens the file in a new browser tab, but I want to use the PDF API.
<cfset myPDF = ToBinary(cfData.content)>
<cfcontent type="application/pdf" variable="#myPDF#">
Regards
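On the base64 question above: the Embed API can also take file content through a promise that resolves to an ArrayBuffer, so the PDF URL never has to be exposed. A minimal sketch, assuming the ADOBE_KEY and pdfDisplay div from the answer; the promise-based content option is an assumption to verify against the current Embed API docs:
function openPdfFromBase64(base64Data) {
    // decode the base64 payload into raw bytes
    const binary = atob(base64Data);
    const bytes = new Uint8Array(binary.length);
    for (let i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    const adobeDCView = new AdobeDC.View({ clientId: ADOBE_KEY, divId: "pdfDisplay" });
    adobeDCView.previewFile({
        // content.promise must resolve to an ArrayBuffer
        content: { promise: Promise.resolve(bytes.buffer) },
        metaData: { fileName: "document.pdf" }
    });
}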
|
How to use a variable in Adobe's pdf embed API as URL-value?
|
I use the Adobe PDF Embed API (https://www.adobe.io/apis/documentcloud/dcsdk/docs.html?view=view) to display pdfs within modals on a site of mine.
As I want the modals to only change in one tiny detail (the file-url of the pdf displayed there) I wanted to use the filename dynamically. So I did that:
document.addEventListener("adobe_dc_view_sdk.ready", function() {
var adobeDCView = new AdobeDC.View({
clientId: "xyz",
divId: "adobe-dc-view"
});
adobeDCView.previewFile({
content: {
location: {
var model_filename_chosen = "https://www.URL.com/files/" +
var model_filename;
// Does get printed correctly
console.log(model_filename_chosen);
//doesn't get parsed at all
url: model_filename_chosen
}
},
metaData: {
fileName: "Something"
}
}, {
});
});
And that in the header before it
function openFahrzeugModal(data) {
x = new bootstrap.Modal(document.getElementById("modalFahrzeug"));
x.toggle();
$('#input_model_hidden').val(data);
var model_filename = data;
console.log(data);
}
And the trigger for those then looks something like this:
<a onclick="openFahrzeugModal('myfile1.pdf')">
So any log does get printed correctly but the pdf isn't shown at all, the modal opens up correctly. The variable does get printed in other elements of the modal correctly but within the Adobe embed-thing the result is empty. I do use the same domain for the code and the file and my API-key is valid. As soon as I enter a static URL (the same basically as the one that gets printed on the console) the pdf gets shown correctly.
Why is that and what would I need to fix?
|
[
"It's a timing issue. The code you have on top will run as soon as our library is loaded. What you want instead is for the previewFile code to run only on user input. I'd modify openFahrzeugModal such that it will run that part. Something like this:\nfunction openFahrzeugModal(data) {\n x = new bootstrap.Modal(document.getElementById(\"modalFahrzeug\"));\n x.toggle();\n $('#input_model_hidden').val(data);\n var model_filename = data;\n var model_filename_chosen = \"https://www.URL.com/files/\" + model_filename;\n\n adobeDCView.previewFile({\n content: {\n location: {\n url: model_filename_chosen\n }\n },\n metaData: {\n fileName: \"Something\"\n }\n }, {\n });\n }\n\nI typed that by hand so it may not be perfect. Do know however that you don't want to run your click event until the library is ready. Normally I'd assign the click handler in JS, not HTML, and do it inside the event handler for adobe_dc_view_sdk.ready.\n",
"Thanks Raymond, very good help with that script.\nI copy and fix it\nfunction OpenPdf(data) {\n let adobeDCView = new AdobeDC.View({clientId: ADOBE_KEY, divId: \"pdfDisplay\" });\n var model_filename = data;\n var model_filename_chosen = \"your URL\" + model_filename;\n adobeDCView.previewFile({\n content: {location: {url: model_filename_chosen}},\n metaData: {fileName: \"yournamefile.pdf\" }\n });\n }\n\nI'm a Coldfusion developer and have json array and in this one a have the pdf file name.\nI did this loop with te call of the function\n<p id=\"navLinks\" style=\"display:none\">\n<cfloop index=\"i\" from=\"1\" to=\"#arrayLen(res.contracts)#\"> \n\n <cfoutput>\n Periodo: #res.contratos[i].period#> -- <button onclick=\"OpenPdf('#res.contracts[i].document#')\">Open contract</button><br> \n </cfoutput>\n</cfloop> \n</p> \n\n<div id=\"pdfDisplay\"></div>\n\nBut in another json I have a field where I have the pdf file in base64 format. The PDF API can read binary variables?\nOr I have to write it on disk first and later pass the PDF's URL?\nI would like to hide the PDF URL for security reasons.\nActually I have this and open the file in a new tab browser but I want to use the PDF API.\n<cfset myPDF = ToBinary(cfData.content)>\n<cfcontent type=\"application/pdf\" variable=\"#myPDF#\">\n\nRegards\n"
] |
[
0,
0
] |
[] |
[] |
[
"adobe_embed_api",
"javascript"
] |
stackoverflow_0068578436_adobe_embed_api_javascript.txt
|
Q:
Handling variables in SQL Server
I have a procedure that needs to handle up to 60 different variables.
The variables have a standardized naming convention.
@TextParameter1 varchar(443) = NULL,
@TextParameter2 varchar(443) = NULL,
@TextParameter3 varchar(443) = NULL
I need to be able to check which variables are NULL and which aren't, and then handle the values of the non-null variables.
I tried using dynamic SQL to iterate over the variables by making the first portion of the variable name a string and iterating through the numbers on the end.
declare @rownum int = 1
while @rownum <= 60
declare @var_sql nvarchar(max) = 'INSERT INTO #slicer
SELECT IDENTITY(Int, 1, 1) AS rowkey, value
FROM STRING_SPLIT(CAST(@DetailQueryTextParameter' + CAST(@rownum AS nvarchar(3)) AS varchar(4000)), '^')'
execute @var_sql
Set @rownum = @rownum + 1
This will return an error claiming that @DetailQueryTextParameter needs to be declared first. What is the best way to handle all of these variables? I could do it by writing a line of code for every single variable, but it seems like there is a better way. Can I insert the variable names into a table and iterate from there?
A:
(The below was updated to include the STRING_SPLIT().)
You can use a VALUES subselect (more precisely, a table value constructor) to combine all of your parameters into a single collection. You can then filter out the null values and pass the remaining values into STRING_SPLIT() using a CROSS APPLY.
DECLARE
@TextParameter1 varchar(443) = NULL,
@TextParameter2 varchar(443) = 'aaa^bbb',
@TextParameter3 varchar(443) = NULL,
@TextParameter4 varchar(443) = 'xxx^yyy^zzz',
@TextParameter5 varchar(443) = NULL
SELECT A.*, B.Value AS SplitValue
FROM (
VALUES
(1, @TextParameter1),
(2, @TextParameter2),
(3, @TextParameter3),
(4, @TextParameter4),
(5, @TextParameter5)
) A(Parameter, Value)
CROSS APPLY STRING_SPLIT(A.Value, '^') B
WHERE A.Value IS NOT NULL
Which would yield results like:
Parameter | Value       | SplitValue
----------|-------------|-----------
2         | aaa^bbb     | aaa
2         | aaa^bbb     | bbb
4         | xxx^yyy^zzz | xxx
4         | xxx^yyy^zzz | yyy
4         | xxx^yyy^zzz | zzz
See this db<>fiddle.
|
Handling variables in SQL Server
|
I have a procedure that needs to handle up to 60 different variables.
The variables have a standardized naming convention.
@TextParameter1 varchar(443) = NULL,
@TextParameter2 varchar(443) = NULL,
@TextParameter3 varchar(443) = NULL
I need to be able to check which variables are NULL and which aren't, and then handle the values of the non-null variables.
I tried using dynamic SQL to iterate over the variables by making the first portion of the variable name a string and iterating through the numbers on the end.
declare @rownum int = 1
while @rownum <= 60
declare @var_sql nvarchar(max) = 'INSERT INTO #slicer
SELECT IDENTITY(Int, 1, 1) AS rowkey, value
FROM STRING_SPLIT(CAST(@DetailQueryTextParameter' + CAST(@rownum AS nvarchar(3)) AS varchar(4000)), '^')'
execute @var_sql
Set @rownum = @rownum + 1
This will return an error claiming that @DetailQueryTextParameter needs to be declared first. What is the best way to handle all of these variables? I could do it by writing a line of code for every single variable, but it seems like there is a better way. Can I insert the variable names into a table and iterate from there?
|
[
"(The below was updated to include the SPLIT_STRING().)\nYou can use a VALUES subselect (not sure if that is the proper term) to combine all of your parameters into a single collection. You can then filter out the null values and pass the remaining values into STRING_SPLIT() using a CROSS APPLY.\nDECLARE\n @TextParameter1 varchar(443) = NULL, \n @TextParameter2 varchar(443) = 'aaa^bbb',\n @TextParameter3 varchar(443) = NULL,\n @TextParameter4 varchar(443) = 'xxx^yyy^zzz',\n @TextParameter5 varchar(443) = NULL\n \nSELECT A.*, B.Value AS SplitValue\nFROM (\n VALUES\n (1, @TextParameter1),\n (2, @TextParameter2),\n (3, @TextParameter3),\n (4, @TextParameter4),\n (5, @TextParameter5)\n) A(Parameter, Value)\nCROSS APPLY STRING_SPLIT(A.Value, '^') B\nWHERE A.Value IS NOT NULL\n\nWhich would yield results like:\n\n\n\n\nParameter\nValue\nSplitValue\n\n\n\n\n2\naaa^bbb\naaa\n\n\n2\naaa^bbb\nbbb\n\n\n4\nxxx^yyy^zzz\nxxx\n\n\n4\nxxx^yyy^zzz\nyyy\n\n\n4\nxxx^yyy^zzz\nzzz\n\n\n\n\nSee this db<>fiddle.\n"
] |
[
2
] |
[] |
[] |
[
"loops",
"sql",
"sql_server",
"stored_procedures",
"variables"
] |
stackoverflow_0074657330_loops_sql_sql_server_stored_procedures_variables.txt
|
Q:
Match comments unless the initiating character is surrounded by unescaped quotes
With a regex: How can I match comments which begin with a semicolon unless the semicolon is surrounded on both sides by unescaped quotes, as shown below (the green blocks denote the matched comments )?:
Note that the dquotes can be escaped by doubling them up "".
Such escaped dquotes behave as completely different characters, i.e. they do not have the ability to surround the semicolon and disable its comment-starting function.
Also, unbalanced dquotes are treated as escaped dquotes.
With Bubble's help, I have gotten as far as the regex below, which fails to correctly treat a trailing escaped dquote in the last test vector line.
^(?>(?:""[^""\n]*""|[^;""\n]+)*)""?[^"";\n]*(;.*)
See it run here.
Test vectors (the same as in the color-coded diagram above):
Peekaboo ; A comment starts with a semicolon and continues till the EOL
Unless the semicolon is surrounded by dquotes ”Don’t do it ; here” ;but match me; once
Im not surrounded ”so pay attention to me” ; ”peekaboo”
Im not surrounded ”so pay attention” to;me” ; ”peekaboo”
Im not surrounded ”so pay attention to me ; peekaboo
Dquote escapes a dquote so ”dont pay attention to ””me;here”” buster” do it ; here
Don’t pay attention to ”””me;here””” but do ””it;here””
and ”dont do ””it;here””” either ;peekaboo
but "pay attention to "it;here"" ;not here though
Simon said ”I like goats” then he added ”and sheep;” ;a good comment is ”here
Simon said ”I like goats” then he added ”and sheep;” dont do it here
Simon said ””I like goats;”peekaboo
Simon said ”I like goats;””peekaboo
A:
The task is to find comments starting with a ; semicolon outside quotes considering "" escaped quotes and a potential non-closed quote before. This approach works for the test cases provided so far.
Updated pattern: A shorter and more efficient variant without alternation.
^((?>(?:(?:[^"\n;]*"[^"\n]*")+(?!"))?[^"\n;]*)"?[^"\n;]*);.*
New demo at regex101
This pattern works without alternation and uses a negative lookahead to check for the last valid double quote. In both patterns the atomic group mimics possessive quantifiers to prevent any backtracking and keep the balance. Using possessive quantifiers the pattern would look like this regex101 demo. [^";\n]*"?[^";\n]* is the part that is allowing an optional non-closed quote.
Previous pattern: This also turned out to be reliable but is a little bit slower.
^((?>(?:(?:[^;"\n]*"(?>(?:[^"\n]+|"")*)")+)?)[^";\n]*"?[^";\n]*);.*
Old demo at regex101
"(([^"]+|"")*)" consumes either " ... " or "". This gets repeated any amount of times with any [^;"]* characters that are not ; or " in between. All that is done inside an atomic group. Having matched the quoted parts with any non semicolons in between due to use of an atomic group there is no way back. After finally allowing an optional non-closed " either a ; will be found or it fails.
The first capturing group $1 contains the part up to the targeted ; comment-start. To remove the comment, replace the full match with the captured part. If needed capture (.*) to a second group.
regex-part         | matches
-------------------|--------
(?>...)            | denotes an atomic group, used to prevent any further backtracking
[^...]             | a negated character class matches a single character not among those listed
(...) and (?:...)  | capturing and non-capturing groups (the latter for repetition or alternation)
quantifiers: ? * + | ? matches zero or one (optional), * any amount and + one or more
If replacements are done on single lines, all the \n newlines can be dropped from either pattern.
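Since the question is tagged C#, applying the updated pattern there could look like the following sketch; the only change is that quotes are doubled inside the verbatim string literal:
using System;
using System.Text.RegularExpressions;

class CommentStripper
{
    // the updated pattern from above, with " doubled for a verbatim string
    static readonly Regex CommentRx = new Regex(
        @"^((?>(?:(?:[^""\n;]*""[^""\n]*"")+(?!""))?[^""\n;]*)""?[^""\n;]*);.*",
        RegexOptions.Multiline);

    // replace the whole match with group 1, i.e. everything before the comment
    static string StripComment(string line) => CommentRx.Replace(line, "$1");

    static void Main()
    {
        Console.WriteLine(StripComment("Peekaboo ; A comment"));         // Peekaboo
        Console.WriteLine(StripComment("say \"don't;here\" ;but here")); // say "don't;here"
    }
}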
|
Match comments unless the initiating character is surrounded by unescaped quotes
|
With a regex: How can I match comments which begin with a semicolon unless the semicolon is surrounded on both sides by unescaped quotes, as shown below (the green blocks denote the matched comments )?:
Note that the dquotes can be escaped by doubling them up "".
Such escaped dquotes behave as completely different characters, i.e. they do not have the ability to surround the semicolon and disable its comment-starting function.
Also, unbalanced dquotes are treated as escaped dquotes.
With Bubble's help, I have gotten as far as the regex below, which fails to correctly treat a trailing escaped dquote in the last test vector line.
^(?>(?:""[^""\n]*""|[^;""\n]+)*)""?[^"";\n]*(;.*)
See it run here.
Test vectors (the same as in the color-coded diagram above):
Peekaboo ; A comment starts with a semicolon and continues till the EOL
Unless the semicolon is surrounded by dquotes ”Don’t do it ; here” ;but match me; once
Im not surrounded ”so pay attention to me” ; ”peekaboo”
Im not surrounded ”so pay attention” to;me” ; ”peekaboo”
Im not surrounded ”so pay attention to me ; peekaboo
Dquote escapes a dquote so ”dont pay attention to ””me;here”” buster” do it ; here
Don’t pay attention to ”””me;here””” but do ””it;here””
and ”dont do ””it;here””” either ;peekaboo
but "pay attention to "it;here"" ;not here though
Simon said ”I like goats” then he added ”and sheep;” ;a good comment is ”here
Simon said ”I like goats” then he added ”and sheep;” dont do it here
Simon said ””I like goats;”peekaboo
Simon said ”I like goats;””peekaboo
|
[
"The task is to find comments starting with a ; semicolon outside quotes considering \"\" escaped quotes and a potential non-closed quote before. This approach works for yet provided test cases.\nUpdated pattern: A shorter and more efficient variant without alternation.\n^((?>(?:(?:[^\"\\n;]*\"[^\"\\n]*\")+(?!\"))?[^\"\\n;]*)\"?[^\"\\n;]*);.*\n\nNew demo at regex101\nThis pattern works without alternation and uses a negative lookahead to check for the last valid double quote. In both patterns the atomic group mimics possessive quantifiers to prevent any backtracking and keep the balance. Using possessive quantifiers the pattern would look like this regex101 demo. [^\";\\n]*\"?[^\";\\n]* is the part that is allowing an optional non-closed quote.\n\nPrevious pattern: This turned out to be reliable yet but is a little bit slower.\n^((?>(?:(?:[^;\"\\n]*\"(?>(?:[^\"\\n]+|\"\")*)\")+)?)[^\";\\n]*\"?[^\";\\n]*);.*\n\nOld demo at regex101\n\"(([^\"]+|\"\")*)\" consumes either \" ... \" or \"\". This gets repeated any amount of times with any [^;\"]* characters that are not ; or \" in between. All that is done inside an atomic group. Having matched the quoted parts with any non semicolons in between due to use of an atomic group there is no way back. After finally allowing an optional non-closed \" either a ; will be found or it fails.\n\nThe first capturing group $1 contains the part up to the targeted ; comment-start. To remove the comment, replace the full match with the captured part. If needed capture (.*) to a second group.\n\n\n\n\nregex-part\nmatches\n\n\n\n\n(?>...)\ndenotes an atomic group, used to prevent any further backtracking\n\n\n[^...]\na negated character class matches a single character not in the listed\n\n\n(...) and (?:...)\ncapturing and non capturing groups (latter for repitition or alternation)\n\n\nquantifiers: ? * +\n? matches zero or one (optional), * any amount and + one or more\n\n\n\n\nIf replacements are done on single lines, all the \\n newlines can be dropped from either pattern.\n"
] |
[
4
] |
[] |
[] |
[
".net",
"c#",
"powershell",
"regex"
] |
stackoverflow_0074636553_.net_c#_powershell_regex.txt
|
Q:
How to read large data from a database using WSO2 Dataservice?
How can I read millions of rows from a database using a dataservice? It returns an exception if I select more than 1100000 rows.
The exception is below:
Trying to submit a response to an already closed connection : http-incoming-4
Select * from users;
should return all rows .
A:
Returning millions of rows directly into any integration layer is not recommended. We need to leverage some other mechanisms such as cursors to return paged data so that we can keep the integration layer focused on integrations and not data transfer.
Assuming you are connecting to a MSSQL database, try to leverage SQL Cursors - https://www.sqlservertutorial.net/sql-server-stored-procedures/sql-server-cursor/
A:
An alternative solution is to use pagination with the WSO2 DataService: first create a query in the DataService that takes start and end records, like SELECT * FROM [table_name] WHERE [key_column] BETWEEN :Start AND :End ([key_column] standing in for an indexed key), and add those as parameters when you call the API; for example start = 1, end = 100 and increment = 100.
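If the backend is SQL Server (as assumed in the first answer), the paged query itself can also use OFFSET/FETCH (SQL Server 2012 and later) instead of BETWEEN on a key column; a sketch, where users and id are placeholders for the real table and an indexed key:
SELECT *
FROM users
ORDER BY id
OFFSET :Start ROWS
FETCH NEXT :PageSize ROWS ONLY;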
|
How to read large data from a database using WSO2 Dataservice?
|
How can I read millions of rows from a database using a dataservice? It returns an exception if I select more than 1100000 rows.
The exception is below:
Trying to submit a response to an already closed connection : http-incoming-4
Select * from users;
should return all rows .
|
[
"Returning millions of data directly into any integration layer is not recommended. We need to leverage some other mechanisms such as cursors to return paged data so that we can keep integration layer focusing on integrations and not data transfer.\nAssuming you are connecting to a MSSQL database, try to leverage SQL Cursors - https://www.sqlservertutorial.net/sql-server-stored-procedures/sql-server-cursor/\n",
"The alternate solution is by using pagination with WSO2 DataService, you first create a query in DataService which would take start and end records like below SELECT * from [table_name] where between :Start AND :End , so add those as parameters when you call the API. for example start = 1, end = 100 and increment = 100.\n"
] |
[
3,
1
] |
[] |
[] |
[
"java",
"sql",
"wso2",
"wso2_enterprise_integrator",
"wso2_esb"
] |
stackoverflow_0074643363_java_sql_wso2_wso2_enterprise_integrator_wso2_esb.txt
|
Q:
interpolate string from index ranges
I have an array of index ranges and a string:
const ranges = [[2,5], [11, 14]]
const str = 'brave new world'
I'm trying to write a function to interpolate and wrap the characters at those ranges.
const magic = (str, ranges, before='<bold>', after='</bold>') => {
// magic here
return 'br<bold>ave</bold> new w<bold>orl</bold>d'
}
A:
Assuming your ranges are in the order they need to be applied, then:
1. Iterate over the ranges.
2. Add the part of the string before the start of the range to the result.
3. Wrap the middle part in the before and after.
4. Remember which is the last index you processed, then repeat 2.-4. for as many ranges as there are.
5. Add the rest of the string after the last range to the result.
const magic = (str, ranges, before='<bold>', after='</bold>') => {
let result = "";
let lastIndex = 0;
for(const [start, end] of ranges) {
result += str.slice(lastIndex, start);
const wrap = str.slice(start, end);
result += before + wrap + after;
lastIndex = end;
}
result += str.slice(lastIndex);
return result;
}
const ranges = [[2,5], [11, 14]]
const str = 'brave new world'
console.log(magic(str, ranges));
A:
Sure, here's one way you can solve this problem:
const magic = (str, ranges, before='<bold>', after='</bold>') => {
// Create an array of characters from the input string
const chars = str.split('');
// Iterate over the ranges
for (const [start, end] of ranges) {
// Insert the before and after strings at the start and end indices
chars.splice(end, 0, after);
chars.splice(start, 0, before);
}
// Join the characters and return the resulting string
return chars.join('');
}
const ranges = [[2, 5], [11, 14]];
const str = 'brave new world';
const wrappedString = magic(str, ranges);
console.log(wrappedString); // "br<bold>ave</bold> new w<bold>orl</bold>d"
I hope this helps! Let me know if you have any questions.
A:
If you make sure you go through the ranges in reverse order (call .reverse() on the array if it helps, or sort the array if necessary), then it can be this simple:
// Wraps one range
function wrap(str, [i, j]) {
return str.substring(0, i) + '<b>' + str.substring(i, j) + '</b>' + str.substring(j);
}
[[2, 5], [11, 14]].reverse().reduce(wrap, 'brave new world')
So I'd write magic like this:
function magic(str, ranges, b='<bold>', a='</bold>') {
const wrap = (str, [i, j]) => str.substring(0, i) + b +
str.substring(i, j) + a + str.substring(j);
return ranges.sort(([i], [j]) => j - i).reduce(wrap, str);
}
|
interpolate string from index ranges
|
I have an array of index ranges and a string:
const ranges = [[2,5], [11, 14]]
const str = 'brave new world'
I'm trying to write a function to interpolate and wrap the characters at those ranges.
const magic = (str, ranges, before='<bold>', after='</bold>') => {
// magic here
return 'br<bold>ave</bold> new w<bold>orl</bold>d'
}
|
[
"Assuming your ranges are in the order they need to be applied, then:\n\nIterate over the ranges.\nAdd the part of the string previous the start of the range to the result.\nWrap the middle part in the before and after.\nRemember which is the last index you processed then repeat 2.-4. for as many ranges there are.\nAdd the rest of the string after the last range to the result.\n\n\n\nconst magic = (str, ranges, before='<bold>', after='</bold>') => {\n let result = \"\";\n let lastIndex = 0;\n \n for(const [start, end] of ranges) {\n result += str.slice(lastIndex, start);\n const wrap = str.slice(start, end);\n result += before + wrap + after;\n lastIndex = end;\n }\n \n result += str.slice(lastIndex);\n \n return result;\n}\n\nconst ranges = [[2,5], [11, 14]]\nconst str = 'brave new world'\nconsole.log(magic(str, ranges));\n\n\n\n",
"Sure, here's one way you can solve this problem:\n\n\nconst magic = (str, ranges, before='<bold>', after='</bold>') => {\n // Create an array of characters from the input string\n const chars = str.split('');\n\n // Iterate over the ranges\n for (const [start, end] of ranges) {\n // Insert the before and after strings at the start and end indices\n chars.splice(end, 0, after);\n chars.splice(start, 0, before);\n }\n\n // Join the characters and return the resulting string\n return chars.join('');\n}\n\nconst ranges = [[2, 5], [11, 14]];\nconst str = 'brave new world';\nconst wrappedString = magic(str, ranges);\n\nconsole.log(wrappedString); // \"br<bold>ave</bold> new w<bold>orl</bold>d\"\n\n\n\nI hope this helps! Let me know if you have any questions.\n",
"If you make sure you go through the ranges in reverse order (call .reverse() on the array if it helps, or sort the array if necessary), then it can be this simple:\n// Wraps one range\nfunction wrap(str, [i, j]) {\n return str.substring(0, i) + '<b>' + str.substring(i, j) + '</b>' + str.substring(j);\n}\n[[2, 5], [11, 14]].reverse().reduce(wrap, 'brave new world')\n\nSo I'd write magic like this:\nfunction magic(str, ranges, b='<bold>', a='</bold>') {\n const wrap = (str, [i, j]) => str.substring(0, i) + b + \n str.substring(i, j) + a + str.substring(j);\n return ranges.sort(([i], [j]) => j - i).reduce(wrap, str);\n}\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"javascript",
"string",
"string_interpolation"
] |
stackoverflow_0074657670_javascript_string_string_interpolation.txt
|
Q:
Parse data in a column to create 2 other columns - substring
This is an MSSQL query question.
I'm trying to use the substring function in a SQL query to get parts of a column to create other columns, but is there a way to look for characters instead of telling it where to start and how many characters to take?
In the below data, I always want to grab the numbers that are between the ' '. I then want to put them in columns called "Write" and "Prev".
Input Data:
Write '8' to '/FOUNDRY::[Foundry_Muller]F26:30'. Previous value was '9.0'
Results:
Write = 8
Prev = 9.0
A:
This is mighty fugly but it works (give me a proper regex any day). The 'with' statement is a common table expression and is used here to just set up test data, like a temp table (a great way to set up data for examples here). The meat is the query below that.
Select the substring starting at the pattern "Write '" + 7 to land on the first digit. The length to return is the distance from that starting point to the position of the pattern "' to '". So, this allows for a variable-length "Write" value as long as the format of the string stays the same.
with tbl(str) as (
select 'Input Data: Write ''8989'' to ''/FOUNDRY::[Foundry_Muller]F26:30''. Previous value was ''229.0'''
)
select substring(str, (patindex('%Write ''%', str)+7), patindex('%'' to ''%', str)-(patindex('%Write ''%', str)+7)) as write_val,
substring(str, (patindex('%Previous value was ''%', str)+20),len(str)-(patindex('%Previous value was ''%', str)+20)) as prev_val
from tbl;
|
Parse data in a column to create 2 other columns - substring
|
This is an MSSQL query question.
I'm trying to use the substring function in a SQL query to get parts of a column to create other columns, but is there a way to look for characters instead of telling it where to start and how many characters to take?
In the below data, I always want to grab the numbers that are between the ' '. I then want to put them in columns called "Write" and "Prev".
Input Data:
Write '8' to '/FOUNDRY::[Foundry_Muller]F26:30'. Previous value was '9.0'
Results:
Write = 8
Prev = 9.0
|
[
"This is mighty fugly but it works (give me a proper regex any day). The 'with' statement is a common table expression and is used here to just set up test data, like a temp table (a great way to set up data for examples here). The meat is the query below that.\nSelect the substring starting at the pattern \"Write '\" + 7 to get you at the first digit. The length to return is that number from the starting point of the pattern of \"' to'\". So, this allows for a variable length \"Write\" value as long as the format of the string stays the same.\nwith tbl(str) as (\nselect 'Input Data: Write ''8989'' to ''/FOUNDRY::[Foundry_Muller]F26:30''. Previous value was ''229.0'''\n)\nselect substring(str, (patindex('%Write ''%', str)+7), patindex('%'' to ''%', str)-(patindex('%Write ''%', str)+7)) as write_val,\n substring(str, (patindex('%Previous value was ''%', str)+20),len(str)-(patindex('%Previous value was ''%', str)+20)) as prev_val\nfrom tbl;\n\n"
] |
[
0
] |
[] |
[] |
[
"sql",
"sql_server",
"substring"
] |
stackoverflow_0074656606_sql_sql_server_substring.txt
|
Q:
PHP verify valid UUID
I found the following Javascript function in another answer:
function createUUID() {
var s = [];
var hexDigits = "0123456789abcdef";
for (var i = 0; i < 36; i++) {
s[i] = hexDigits.substr(Math.floor(Math.random() * 0x10), 1);
}
s[14] = "4";
s[19] = hexDigits.substr((s[19] & 0x3) | 0x8, 1);
s[8] = s[13] = s[18] = s[23] = "-";
var uuid = s.join("");
return uuid;
}
This creates an RFC valid UUID/GUID in javascript.. What I want to know is if there is a way to validate the string once it arrives to the PHP side. I found a regex that would potentially validate everything roughly that format as true:
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/
but with that regex, any string of the same length and format would pass validation.
Is there any way to add some sort of seed to the javascript function and then verify that in the PHP or if there was a way for PHP to validate the number that was passed matches with the correct format or something?
Any help is greatly appreciated.
A:
According to Wikipedia the format of UUID v4 is:
Version 4 UUIDs have the form xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx
where x is any hexadecimal digit and y is one of 8, 9, A, or B. e.g.
f47ac10b-58cc-4372-a567-0e02b2c3d479.
The corresponding regex is:
/^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i
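A quick check of that version-4 pattern from PHP could look like this sketch; the i modifier keeps it case-insensitive:
<?php
$uuid = 'f47ac10b-58cc-4372-a567-0e02b2c3d479';

// preg_match returns 1 when the string matches the version-4 UUID format
$isV4 = preg_match(
    '/^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i',
    $uuid
) === 1;

var_dump($isV4); // bool(true)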
A:
I use regex validation along with string validation to check if the given string is in valid UUID format.
if (!is_string($uuid) || (preg_match('/^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/', $uuid) !== 1)) {
return false;
}
You can see the full gist here.
A:
You should use regex. For this, you can use this quick regex:
/^[a-f\d]{8}(-[a-f\d]{4}){4}[a-f\d]{8}$/i
It ignores the case of chars, and works the same as some longer regexes.
You can find a full explanation here.
To use it, you can do:
if(is_string($uuid) && preg_match('/^[a-f\d]{8}(-[a-f\d]{4}){4}[a-f\d]{8}$/i', $uuid)) {
// your code here
}
|
PHP verify valid UUID
|
I found the following Javascript function in another answer:
function createUUID() {
var s = [];
var hexDigits = "0123456789abcdef";
for (var i = 0; i < 36; i++) {
s[i] = hexDigits.substr(Math.floor(Math.random() * 0x10), 1);
}
s[14] = "4";
s[19] = hexDigits.substr((s[19] & 0x3) | 0x8, 1);
s[8] = s[13] = s[18] = s[23] = "-";
var uuid = s.join("");
return uuid;
}
This creates an RFC valid UUID/GUID in javascript.. What I want to know is if there is a way to validate the string once it arrives to the PHP side. I found a regex that would potentially validate everything roughly that format as true:
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/
but with that regex, any string of the same length and format would pass validation.
Is there any way to add some sort of seed to the javascript function and then verify that in the PHP or if there was a way for PHP to validate the number that was passed matches with the correct format or something?
Any help is greatly appreciated.
|
[
"According to Wikipedia the format of UUID v4 is:\n\nVersion 4 UUIDs have the form xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx\n where x is any hexadecimal digit and y is one of 8, 9, A, or B. e.g.\n f47ac10b-58cc-4372-a567-0e02b2c3d479.\n\nThe corresponding regex is:\n/^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i\n\n",
"I use regex validation along with string validation to check if the given string is in valid UUID format.\nif (!is_string($uuid) || (preg_match('/^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/', $uuid) !== 1)) {\n return false;\n}\n\nYou can see the full gist here.\n",
"You should use regex. For this, you can use this quick regex:\n/^[a-f\\d]{8}(-[a-f\\d]{4}){4}[a-f\\d]{8}$/i\n\nIt ignore the case of chars, and works the same a some regex which are longer.\nYou can find full explaination here.\nTo use it, you can do:\nif(is_string($uuid) && preg_match('/^[a-f\\d]{8}(-[a-f\\d]{4}){4}[a-f\\d]{8}$/i', $uuid)) {\n // your code here\n}\n\n"
] |
[
20,
8,
0
] |
[] |
[] |
[
"php",
"regex",
"uuid"
] |
stackoverflow_0012808597_php_regex_uuid.txt
|
Q:
How to correctly set permission for a localhost server in Fedora using Laravel?
I am attempting to run a laravel app on a local server in https mode in a Fedora 36 OS, but I am given this message
The stream or file "/var/www/compagnon-be/storage/logs/laravel.log"
could not be opened in append mode: Failed to open stream: Permission
denied The exception occurred while attempting to log
It seems to me that my permissions are correct
My DocumentRoot is /var/www/compagnon-be/public
I used these commands from /var/www
sudo chown -R $USER:apache compagnon-be
and
sudo chmod -R 775 compagnon-be
ls -l returns this (muser being my user)
[jaaf@localhost www]$ ls -l
total 12
drwxr-xr-x. 2 root root 4096 17 juin 13:13 cgi-bin
drwxrwxr-x. 14 muser apache 4096 2 déc. 06:32 compagnon-be
drwxr-xr-x. 4 root root 4096 1 déc. 06:52 html
[jaaf@localhost www]$
What is wrong?
A:
The trouble was coming from SELinux.
I tried
sudo restorecon -R -v /var/www/compagnon-be
After that the message changed to
file_put_contents(/var/www/compagnon-be/storage/framework/views/dc2fe5ffc0c4db448244e2a441f79c65b3812ff5.php):
Failed to open stream: Permission denied
Then I decided to install the setroubleshoot package in my Fedora distribution and launched sealert.
Refreshing the page triggered an alert and sealert gave me the commands to use
It was:
Vous devez modifier l'étiquette sur (You must change label on) « /var/www/compagnon-be/storage/framework/views »
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/compagnon-be/storage/framework/views'
# restorecon -v '/var/www/compagnon-be/storage/framework/views'
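Since Laravel also writes logs and caches, the same labeling is usually needed for the whole storage directory and bootstrap/cache; a sketch, assuming the paths above:
# label everything under storage and bootstrap/cache as writable web content
sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/compagnon-be/storage(/.*)?'
sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/compagnon-be/bootstrap/cache(/.*)?'
sudo restorecon -Rv /var/www/compagnon-be/storage /var/www/compagnon-be/bootstrap/cache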
|
How to correctly set permission for a localhost server in Fedora using Laravel?
|
I am attempting to run a laravel app on a local server in https mode in a Fedora 36 OS, but I am given this message
The stream or file "/var/www/compagnon-be/storage/logs/laravel.log"
could not be opened in append mode: Failed to open stream: Permission
denied The exception occurred while attempting to log
It seems to me that my permissions are correct
My DocumentRoot is /var/www/compagnon-be/public
I used these commands from /var/www
sudo chown -R $USER:apache compagnon-be
and
sudo chmod -R 775 compagnon-be
ls -l returns this (muser being my user)
[jaaf@localhost www]$ ls -l
total 12
drwxr-xr-x. 2 root root 4096 17 juin 13:13 cgi-bin
drwxrwxr-x. 14 muser apache 4096 2 déc. 06:32 compagnon-be
drwxr-xr-x. 4 root root 4096 1 déc. 06:52 html
[jaaf@localhost www]$
What is wrong?
|
[
"The trouble was coming from selinux.\nI tried\nsudo restorecon -R -v /var/www/compagnon-be\n\nAfter that the message changed to\n\nfile_put_contents(/var/www/compagnon-be/storage/framework/views/dc2fe5ffc0c4db448244e2a441f79c65b3812ff5.php):\nFailed to open stream: Permission denied\n\nThen I decided to install setroubleshoot package in my Fedora distribution and launched sealert\nRefreshing the page triggered an alert and sealert gave me the commands to use\nIt was:\nVous devez modifier l'étiquette sur (You must change label on) « /var/www/compagnon-be/storage/framework/views »\n# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/compagnon-be/storage/framework/views'\n\n# restorecon -v '/var/www/compagnon-be/storage/framework/views'\n\n"
] |
[
0
] |
[] |
[] |
[
"apache",
"fedora",
"localhost"
] |
stackoverflow_0074651562_apache_fedora_localhost.txt
|
Q:
How can I iterate over a dataframe containing multiple conditions (using iterrows()) and flag columns based on these conditions using np.where?
I am trying to generate flags based on multiple conditions.
I would like to do the following in a more iterable way:
# sample dataframe
data = [[1, 1980.0, 2000.0]]
df = pd.DataFrame(data, columns=["Item", "year1", "start_year"])
df
Item year1 year2
1 1980.0 2000.0
# assign flag based on condition
df = df.assign(year_flag=lambda x: np.where(x["year1"] < 1985, True, False))
df
Item year1 year2 year_flag
1 1980.0 2000.0 True
The way I would like to do this is the following:
# create a dataframe containing conditions and flags I'd like to generate
data = [
[
"year1",
"(df['year1'] < 1985)",
],
[
"year2",
"((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10))",
],
]
condition_df = pd.DataFrame(data, columns=["column", "condition"])
condition_df
column condition
year1 (df['year1'] < 1985)
year2 ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10))
# iterate through rows in condition_df to generate conditions + flags
for idx, row in condition_df.iterrows():
col = row["column"]
condition = row["condition"]
flag_col_name = f"{col}_flag"
df = df.assign(flag_col_name=lambda df: np.where(condition, True, False))
Unfortunately this results in the following error:
I am assuming this is because the condition is a string and thus 1985 is also a string (could be wrong though). Is there any way I can use this method to flag a dataframe? Or an alternative method that might be more successful?
Thank you !!!
A:
The problem is that the condition is being passed as a string instead of an evaluated boolean expression. I tried removing the quotes in the conditions themselves, and no errors occur. I also added one more line to the dataframe for verification. The flags are displayed as they should; the last line gives False.
import pandas as pd
import numpy as np
data = [[1, 1980.0, 2000.0], [2, 1990, 3000.0]]
df = pd.DataFrame(data, columns=["Item", "year1", "start_year"])
data = [["year1", (df['year1'] < 1985), ],
["year2", ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10)), ], ]
condition_df = pd.DataFrame(data, columns=["column", "condition"])
for idx, row in condition_df.iterrows():
col = row["column"]
condition = row["condition"]
flag_col_name = f"{col}_flag"
# unpack a dict so the new column gets the dynamic name instead of the literal "flag_col_name"
df = df.assign(**{flag_col_name: np.where(condition, True, False)})
print(df)
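If the conditions should stay as strings (as in the original condition_df), pandas can evaluate them directly with DataFrame.eval, which resolves bare column names; a minimal sketch along those lines:
import pandas as pd

df = pd.DataFrame([[1, 1980.0, 2000.0], [2, 1990.0, 3000.0]],
                  columns=["Item", "year1", "start_year"])

# conditions kept as strings; bare names refer to columns of df
conditions = {
    "year1_flag": "year1 < 1985",
    "year2_flag": "(year1 < 1985) | ((start_year - year1) < 10)",
}

for flag_col, expr in conditions.items():
    # eval returns a boolean Series for comparison expressions
    df[flag_col] = df.eval(expr)

print(df)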
|
How can I iterate over a dataframe containing multiple conditions (using iterrows()) and flag columns based on these conditions using np.where?
|
I am trying to generate flags based on multiple conditions.
I would like to do the following in a more iterable way:
# sample dataframe
data = [[1, 1980.0, 2000.0]]
df = pd.DataFrame(data, columns=["Item", "year1", "start_year"])
df
Item year1 year2
1 1980.0 2000.0
# assign flag based on condition
df = df.assign(year_flag=lambda x: np.where(x["year1"] < 1985, True, False))
df
Item year1 year2 year_flag
1 1980.0 2000.0 True
The way I would like to do this is the following:
# create a dataframe containing conditions and flags I'd like to generate
data = [
[
"year1",
"(df['year1'] < 1985)",
],
[
"year2",
"((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10))",
],
]
condition_df = pd.DataFrame(data, columns=["column", "condition"])
condition_df
column condition
year1 (df['year1'] < 1985)
year2 ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10))
# iterate through rows in condition_df to generate conditions + flags
for idx, row in condition_df.iterrows():
col = row["column"]
condition = row["condition"]
flag_col_name = f"{col}_flag"
df = df.assign(flag_col_name=lambda df: np.where(condition, True, False))
Unfortunately this results in the following error:
I am assuming this is because the condition is a string and thus 1985 is also a string (could be wrong though). Is there any way I can use this method to flag a dataframe? Or an alternative method that might be more successful?
Thank you !!!
|
[
"In my opinion, a number is expected, and the resulting value is a string. I tried removing the quotes in the conditions themselves. No errors occur. I also added one more line to the dataframe for verification. The flags are displayed as they should. The last line gives: False.\nimport pandas as pd\nimport numpy as np\n\ndata = [[1, 1980.0, 2000.0], [2, 1990, 3000.0]]\n\ndf = pd.DataFrame(data, columns=[\"Item\", \"year1\", \"start_year\"])\n\n\ndata = [[\"year1\", (df['year1'] < 1985), ],\n [\"year2\", ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10)), ], ]\n\ncondition_df = pd.DataFrame(data, columns=[\"column\", \"condition\"])\n\n\nfor idx, row in condition_df.iterrows():\n col = row[\"column\"]\n condition = row[\"condition\"]\n\n flag_col_name = f\"{col}_flag\"\n\n df = df.assign(flag_col_name=lambda df: np.where(condition, True, False))\n print(df)\n\n"
] |
[
0
] |
[] |
[] |
[
"conditional_statements",
"dataframe",
"loops",
"pandas",
"python"
] |
stackoverflow_0074636740_conditional_statements_dataframe_loops_pandas_python.txt
|
Q:
I want to extract ALL frames from a 30fps video (so 30 frames per 1 second of video) using ffmpeg
I am new to using ffmpeg, but I need to extract all frames of a short (<10 second) video while maintaining the quality. Does anyone have code for this?
I have tried using:
C:\Users\taylo>ffmpeg -i test_video.mp4 %04d.png
But it could not find my video anyway (it was stored in the downloads folder).
EDIT:
I fixed this problem by setting my directory to my Videos folder (Windows 11) and putting my "test_video.mp4" in that folder.
C:\Users\(name)\>cd .\Videos
I am currently using two lines of code to extract these frames:
C:\Users\(name)\Videos>ffmpeg -i test_video.mp4 -r 30/1 out%03d.png
AND
C:\Users\(name)\Videos>ffmpeg -i test_video.mp4 out%03d.png
Does anyone know the difference between the two? I extracted a 4 second video at 30 fps and figured I would get ~120 frames but am getting slightly more at ~145 with both methods. I'm assuming this is accounting for milliseconds?
A:
Put the video inside FFmpeg folder. Copy this code inside a .bat file in the same folder and run it:
echo off
mkdir outputs
for %%a in ("*.mp4") do ffmpeg -i "%%a" "outputs\%%~na %%03d.png"
pause
This would extract the frames of all the .mp4 files available inside the FFmpeg folder, to a folder called "outputs" in the same directory; with the PNG format. The name of each file would be in this format: "source name 000.png".
For a 2 seconds long video with 15 fps, this code created 30 PNG files (2x15=30).
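On the follow-up in the question: without -r, ffmpeg derives the output rate from the input stream, while -r 30/1 forces a constant 30 fps output, duplicating or dropping frames when the source does not match. Getting ~145 images from a nominal 4-second clip usually means the clip is slightly longer than 4 seconds or its rate is not exactly 30 fps. To dump each decoded frame exactly once, passthrough mode is commonly suggested (-vsync 0 here; newer builds spell it -fps_mode passthrough):
ffmpeg -i test_video.mp4 -vsync 0 out%03d.png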
|
I want to extract ALL frames from a 30fps video (so 30 frames per 1 second of video) using ffmpeg
|
I am new to using ffmpeg, but I need to extract all frames of a short (<10 second) video while maintaining the quality. Does anyone have code for this?
I have tried using:
C:\Users\taylo>ffmpeg -i test_video.mp4 %04d.png
But it could not find my video anyway (it was stored in the downloads folder).
EDIT:
I fixed this problem by setting my directory to my Videos folder (Windows 11) and putting my "test_video.mp4" in that folder.
C:\Users\(name)\>cd .\Videos
I am currently using two lines of code to extract these frames:
C:\Users\(name)\Videos>ffmpeg -i test_video.mp4 -r 30/1 out%03d.png
AND
C:\Users\(name)\Videos>ffmpeg -i test_video.mp4 out%03d.png
Does anyone know the difference between the two? I extracted a 4 second video at 30 fps and figured I would get ~120 frames but am getting slightly more at ~145 with both methods. I'm assuming this is accounting for milliseconds?
|
[
"Put the video inside FFmpeg folder. Copy this code inside a .bat file in the same folder and run it:\necho off\nmkdir outputs\nfor %%a in (\"*.mp4\") do ffmpeg -i \"%%a\" \"outputs\\%%~na %%03d.png\"\npause\n\nThis would extract the frames of all the .mp4 files available inside the FFmpeg folder, to a folder called \"outputs\" in the same directory; with the PNG format. The name of each file would be in this format: \"source name 000.png\".\nFor a 2 seconds long video with 15 fps, this code created 30 PNG files (2x15=30).\n"
] |
[
0
] |
[] |
[] |
[
"extract",
"ffmpeg",
"video_processing"
] |
stackoverflow_0074605153_extract_ffmpeg_video_processing.txt
|
Q:
Amplitude on competitors' websites
I am going to start learning Amplitude Analytics. Could you tell me whether Amplitude can be used on my competitors' websites, to see how their customers behave with them?
I have yet to try this and find out.
A:
Well even if that was possible, that would be pretty unethical.
But it's not possible. Amplitude, as well as any other tracking script, has to be deployed to a page in order for it to start being tracked.
You surely can deploy it locally on your version of the page, but then you'd only be able to track yourself.
For competitor assessment, you typically use your broad market assessment tools that give you approximate numbers based on analysis of things like behavior of people using certain extensions, or estimations based on search engine result page rankings by search frequency of particular keywords, or self-declared revenue numbers. There are many legal ways to get an estimation.
|
Amplitude on competitors' websites
|
I am going to start learning Amplitude Analytics. Could you tell me whether Amplitude can be used on my competitors' websites, to see how their customers behave with them?
I have yet to try this and find out.
|
[
"Well even if that was possible, that would be pretty unethical.\nBut it's not possible. Amplitude, as well as any other tracking script, has to be deployed to a page in order for it to start being tracked.\nYou surely can deploy it locally on your version of the page, but then you'd only be able to track yourself.\nFor competitor assessment, you typically use your broad market assessment tools that give you approximate numbers based on analysis of things like behavior of people using certain extensions, or estimations based on search engine result page rankings by search frequency of particular keywords, or self-declared revenue numbers. There are many legal ways to get an estimation.\n"
] |
[
0
] |
[] |
[] |
[
"amplitude_analytics",
"analytics"
] |
stackoverflow_0074657820_amplitude_analytics_analytics.txt
|
Q:
Flutter - How to get context when not in a class with a build method
I'm writing code for a PaginatedDataTable, but I get lost when I try to use showDialog().
What I want is when the user presses a row in the DataTable, a dialog box appears with information that row contains. The problem I'm having, is the context for the Dialog box, because it requires context, and the DataRow class doesn't innately have context.
In order to recognize that a row has been selected, you have to use onSelectChanged() in the DataRow class. Here, I'm trying to call the dialog box, but I have to use a Global navigator key.
class CalendarResultsDataSource extends DataTableSource {
final List<CalendarResult> _calendarResults;
CalendarResultsDataSource(this._calendarResults);
@override
DataRow getRow(int index) {
assert(index >= 0);
if (index >= _calendarResults.length) return null;
final CalendarResult calendarResult = _calendarResults[index];
return DataRow.byIndex(
index: index,
selected: calendarResult.selected,
onSelectChanged: (bool value) {
print(value);
print(calendarResult.agency);
Dialogs.showAuditInfo(
navigatorKey.currentState.overlay.context, calendarResult); //<-- this feels hacky
},
cells: <DataCell>[
DataCell(Container(
height: double.infinity,
width: double.infinity,
color: index.isEven
? ColorDefs.colorAlternatingDark
: ColorDefs.colorDarkBackground,
child: Padding(
padding: const EdgeInsets.fromLTRB(6.0, 4.0, 4.0, 4.0),
child: Center(
child: Text('${calendarResult.getDateFormatted()}',
style: ColorDefs.textBodyWhite15)),
))), etc. etc.
]);
in the dataRow class.
So, I created a navigator key in main() like this:
final navigatorKey = GlobalKey<NavigatorState>();
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
navigatorKey: navigatorKey,
home: Scaffold(body: AwesomeAppCodeHere())
);
}
}
Then, I'm referring to that key in the onSelectChanged() method like this:
showDialog<void>(
context: navigatorKey.currentState.overlay.context,
builder: (BuildContext context) {
return AlertDialog(
content: StatefulBuilder(
builder: (BuildContext context, StateSetter setState) {
return DialogInnards();
},
),
);
},
);
But, it feels hacky. The only other solution I can think of is to pass context to the DataRow class from the PaginatedDataTable widget, but then I'd have to modify the flutter DataRow widget directly.
So, what's the best way to get context in a class without an exposed context?
I did see the 'get' package: might work... https://pub.dev/packages/get but isn't that overkill for something that is straightforward? (but just escaping me?)
A:
I'm dealing with the same thing, and I've found a workaround.
You can declare a BuildContext attribute in your DataTableSource implementation class.
class EventData extends DataTableSource {
final BuildContext context;
EventData({required this.context});
final List<Map<String, dynamic>> _data = List.generate(
200,
(index) => {
"price": Random().nextInt(1000),
});
@override
DataRow? getRow(int index) {
return DataRow(cells: [
DataCell(Center(
// We can know use the context in this method.
child: Text("${_data[index]['price']} €",
style: Theme.of(context)
.textTheme
.headline6!
.copyWith(fontWeight: FontWeight.bold)),
)),
]);
}
@override
bool get isRowCountApproximate => false;
@override
int get rowCount => _data.length;
@override
int get selectedRowCount => 0;
}
This attribute has to be provided by your UI and can now be used in the DataTableSource class's methods.
class PaginatedExample extends StatelessWidget {
const PaginatedExample({super.key});
@override
Widget build(BuildContext context) {
return PaginatedDataTable(
source: EventData(context: context), //The context can be provided here.
columns: const [
DataColumn(label: Text('Price')),
],
);
}
}
The drawback of this solution is that the EventData class will be re-instantiated on each UI rebuild.
A:
As said in the docs, DataTableSource is a long-lived object that should be reused across builds of the PaginatedDataTable, so you should not pass context to the DataTableSource object IMO.
I prefer to pass a callback that will be called once a row is tapped.
Here is a simplified example of what I use in my app:
class PlayerDataSource extends DataTableSource {
/// When no other user is selected, this callback will be called from the onTap event on a Datarow.
final void Function(Player player)? onTap;
/// Allow the pagined data table to know if it should show the checkbox
final void Function(bool displayCheckbox)? onSelect;
PlayerDataSource({this.onTap, this.onSelect});
List<Player> displayedPlayers = [];
int _selectedCount = 0;
@override
int get rowCount => displayedPlayers.length;
@override
bool get isRowCountApproximate => false;
@override
int get selectedRowCount => _selectedCount;
@override
DataRow? getRow(int index) {
assert(index >= 0);
if (index >= displayedPlayers.length) {
return null;
}
final player = displayedPlayers[index];
return DataRow.byIndex(
index: index,
selected: player.selected,
onSelectChanged: (isSelected) {
if (_selectedCount == 0) {
onTap?.call(player);
return;
}
if (_selectedCount == 1 && isSelected != null && !isSelected) {
onSelect?.call(false);
}
selectOne(player, isSelected ?? false);
},
onLongPress: _selectedCount == 0
? () {
onSelect?.call(true);
selectOne(player, true);
}
: null,
cells: [
// Wrap Text with builder to access context and get localizations
DataCell(Builder(builder: (context) {
final local = AppLocalizations.of(context)!;
return Text(player.hasSub ? local.yes : local.no);
})),
DataCell(Text('0')),
],
);
}
}
PS: If you need context inside a cell, you could wrap your cell child widget (here a Text widget) inside a Builder widget
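For the original dialog use case, the onTap callback can then be wired up wherever a BuildContext is available. A minimal sketch, assuming the CalendarResultsDataSource from the question is given a hypothetical onTap parameter like the one in the answer above:
class PollPage extends StatelessWidget {
  const PollPage({super.key, required this.results});
  final List<CalendarResult> results;

  @override
  Widget build(BuildContext context) {
    return PaginatedDataTable(
      // The data source stays context-free; the page supplies the context here.
      source: CalendarResultsDataSource(results,
          onTap: (result) => Dialogs.showAuditInfo(context, result)),
      columns: const [DataColumn(label: Text('Date'))],
    );
  }
}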
|
Flutter - How to get context when not in a class with a build method
|
I'm writing code for a PaginatedDataTable, but I get lost when I try to use showDialog().
What I want is when the user presses a row in the DataTable, a dialog box appears with information that row contains. The problem I'm having, is the context for the Dialog box, because it requires context, and the DataRow class doesn't innately have context.
In order to recognize that a row has been selected, you have to use onSelectChanged() in the DataRow class. Here, I'm trying to call the dialog box, but I have to use a Global navigator key.
class CalendarResultsDataSource extends DataTableSource {
final List<CalendarResult> _calendarResults;
CalendarResultsDataSource(this._calendarResults);
@override
DataRow getRow(int index) {
assert(index >= 0);
if (index >= _calendarResults.length) return null;
final CalendarResult calendarResult = _calendarResults[index];
return DataRow.byIndex(
index: index,
selected: calendarResult.selected,
onSelectChanged: (bool value) {
print(value);
print(calendarResult.agency);
Dialogs.showAuditInfo(
navigatorKey.currentState.overlay.context, calendarResult); //<-- this feels hacky
},
cells: <DataCell>[
DataCell(Container(
height: double.infinity,
width: double.infinity,
color: index.isEven
? ColorDefs.colorAlternatingDark
: ColorDefs.colorDarkBackground,
child: Padding(
padding: const EdgeInsets.fromLTRB(6.0, 4.0, 4.0, 4.0),
child: Center(
child: Text('${calendarResult.getDateFormatted()}',
style: ColorDefs.textBodyWhite15)),
))), etc. etc.
]);
in the dataRow class.
So, I created a navigator key in main() like this:
final navigatorKey = GlobalKey<NavigatorState>();
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
navigatorKey: navigatorKey,
home: Scaffold(body: AwesomeAppCodeHere())
);
}
}
Then, I'm referring to that key in the onSelectChanged() method like this:
showDialog<void>(
context: navigatorKey.currentState.overlay.context,
builder: (BuildContext context) {
return AlertDialog(
content: StatefulBuilder(
builder: (BuildContext context, StateSetter setState) {
return DialogInnards();
},
),
);
},
);
But, it feels hacky. The only other solution I can think of is to pass context to the DataRow class from the PaginatedDataTable widget, but then I'd have to modify the flutter DataRow widget directly.
So, what's the best way to get context in a class without an exposed context?
I did see the 'get' package: might work... https://pub.dev/packages/get but isn't that overkill for something that is straightforward? (but just escaping me?)
|
[
"I'm dealing with the same thing, and I've found a workaround.\nYou can declare a BuildContext attribute in your DataTableSource implementation class.\n class EventData extends DataTableSource {\n final BuildContext context;\n EventData({required this.context});\n final List<Map<String, dynamic>> _data = List.generate(\n 200,\n (index) => {\n \"price\": Random().nextInt(1000),\n });\n \n @override\n DataRow? getRow(int index) {\n return DataRow(cells: [\n DataCell(Center(\n // We can know use the context in this method.\n child: Text(\"${_data[index]['price']} €\",\n style: Theme.of(context)\n .textTheme\n .headline6!\n .copyWith(fontWeight: FontWeight.bold)),\n )),\n ]);\n }\n \n @override\n bool get isRowCountApproximate => false;\n \n @override\n int get rowCount => _data.length;\n \n @override\n int get selectedRowCount => 0;\n }\n\nThis attribute has to be provided by your UI and can now be used in the DataTableSource class's methods.\nclass PaginatedExample extends StatelessWidget {\n const PaginatedExample({super.key});\n\n @override\n Widget build(BuildContext context) {\n return PaginatedDataTable(\n source: EventData(context: context), //The context can be provided here.\n columns: const [\n DataColumn(label: Text('Price')),\n ],\n );\n }\n}\n\nThe drawback of this solution is that the EventData class will be re-instanciated at each UI rebuilds.\n",
"As said in the doc DataTableSource is long lived object that should be reused across build from the PaginatedDataTable, so you should not pass context to the DataTableSource object IMO.\nI prefer to pass callback that will be called once a row is tapped.\nHere a simplified example of what I use in my app :\nclass PlayerDataSource extends DataTableSource {\n\n /// When no other user is selected, this callback will be called from the onTap event on a Datarow.\n final void Function(Player player)? onTap;\n\n /// Allow the pagined data table to know if it should show the checkbox\n final void Function(bool displayCheckbox)? onSelect;\n\n PlayerDataSource({this.onTap, this.onSelect});\n\n List<Player> displayedPlayers = [];\n int _selectedCount = 0;\n\n @override\n int get rowCount => displayedPlayers.length;\n\n @override\n bool get isRowCountApproximate => false;\n\n @override\n int get selectedRowCount => _selectedCount;\n\n @override\n DataRow? getRow(int index) {\n assert(index >= 0);\n if (index >= displayedPlayers.length) {\n return null;\n }\n final player = displayedPlayers[index];\n return DataRow.byIndex(\n index: index,\n selected: player.selected,\n onSelectChanged: (isSelected) {\n if (_selectedCount == 0) {\n onTap?.call(player);\n return;\n }\n if (_selectedCount == 1 && isSelected != null && !isSelected) {\n onSelect?.call(false);\n }\n selectOne(player, isSelected ?? false);\n },\n onLongPress: _selectedCount == 0\n ? () {\n onSelect?.call(true);\n selectOne(player, true);\n }\n : null,\n cells: [\n // Wrap Text with builder to access context and get localizations\n DataCell(Builder(builder: (context) {\n final local = AppLocalizations.of(context)!;\n return Text(player.hasSub ? local.yes : local.no);\n })),\n DataCell(Text('0')),\n ],\n );\n }\n}\n\n\nPS : If you need context inside a cell, you could wrap you cell child widget (here a Text widget) inside a Builder widget\n"
] |
[
0,
0
] |
[] |
[] |
[
"flutter"
] |
stackoverflow_0062054100_flutter.txt
|
Q:
Next Auth Azure Ad B2C signout problem session kills on app but not on azure AD
I am integrating Next Auth with Azure AD B2C. I am able to create a login session when I log in or sign up on Azure AD, but when I sign out using Next Auth I am not signed out of Azure AD, and it automatically signs me in again until the Azure AD session expires, which takes one day; after that I get the option to sign in again.
I tried following the documentation but got no result; any help would be appreciated! Next Auth handles sign-in and sign-up fine, and the session in my app is killed on signout, but if the Azure AD session stays alive it defeats the purpose of MFA (multi-factor authentication), since the session can be reused to sign in to my app without credentials.
A:
You can either..
Force users to re-enter their credentials on each login
Reference: Next-Auth "Additional parameters" documentation
signIn("azure-ad-b2c", null, { prompt: "login" })
Defer calling signOut() until after you redirect to B2C, as B2C handles clearing its session
Reference: Benjamin Fox Blog, Azure B2C with Next-Auth
<button
href={`https://${process.env.AUTH_TENANT_NAME}.b2clogin.com/${process.env.AUTH_TENANT_NAME}.onmicrosoft.com/${process.env.USER_FLOW}/oauth2/v2.0/logout?post_logout_redirect_uri=${process.env.NEXTAUTH_URL}/auth/signout`}
>
Sign Out
</button>
where the /auth/signout page calls Next-Auth's signOut()
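A minimal sketch of what that /auth/signout page might look like (the path and file name are assumptions matching the button above, not something the linked post prescribes):
// pages/auth/signout.tsx
import { useEffect } from "react";
import { signOut } from "next-auth/react";

export default function SignOutPage() {
  useEffect(() => {
    // B2C has already ended its own session before redirecting here,
    // so this only clears the local Next-Auth session.
    signOut({ callbackUrl: "/" });
  }, []);
  return <p>Signing out...</p>;
}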
|
Next Auth Azure Ad B2C signout problem session kills on app but not on azure AD
|
I am integrating Next Auth with Azure AD B2C. I am able to create a login session when I log in or sign up on Azure AD, but when I sign out using Next Auth I am not signed out of Azure AD, and it automatically signs me in again until the Azure AD session expires, which takes one day; after that I get the option to sign in again.
I tried following the documentation but got no result; any help would be appreciated! Next Auth handles sign-in and sign-up fine, and the session in my app is killed on signout, but if the Azure AD session stays alive it defeats the purpose of MFA (multi-factor authentication), since the session can be reused to sign in to my app without credentials.
|
[
"You can either..\n\nForce users to re-enter their credentials on each login\nReference: Next-Auth \"Additional parameters\" documentation\nsignIn(\"azure-ad-b2c\", null, { prompt: \"login\" })\n\n\nDefer calling signOut() until after you redirect to B2C, as B2C handles clearing its session\nReference: Benjamin Fox Blog, Azure B2C with Next-Auth\n\n\n\n\n<button \n href={`https://${process.env.AUTH_TENANT_NAME}.b2clogin.com/${process.env.AUTH_TENANT_NAME}.onmicrosoft.com/${process.env.USER_FLOW}/oauth2/v2.0/logout?post_logout_redirect_uri=${process.env.NEXTAUTH_URL}/auth/signout`}\n>\n Sign Out\n</button>\n\n\n\nwhere the /auth/signout page calls Next-Auth's signOut()\n"
] |
[
0
] |
[] |
[] |
[
"azure_active_directory",
"azure_ad_b2c",
"next.js",
"next_auth"
] |
stackoverflow_0074557856_azure_active_directory_azure_ad_b2c_next.js_next_auth.txt
|
Q:
Rails: how to require at least one field not to be blank
I know I can require a field by adding validates_presence_of :field to the model. However, how do I require at least one field to be mandatory, while not requiring any particular field?
thanks in advance
-- Deb
A:
You can use:
validate :any_present?
def any_present?
if %w(field1 field2 field3).all?{|attr| self[attr].blank?}
errors.add :base, "Error message"
end
end
EDIT: updated from original answer for Rails 3+ as per comment.
But you have to provide field names manually.
You could get all content columns of a model with Model.content_columns.map(&:name), but it will include created_at and updated_at columns too, and that is probably not what you want.
A:
Here's a reusable version:
class AnyPresenceValidator < ActiveModel::Validator
def validate(record)
unless options[:fields].any?{|attr| record[attr].present?}
record.errors.add(:base, :blank)
end
end
end
You can use it in your model with:
validates_with AnyPresenceValidator, fields: %w(field1 field2 field3)
A:
Add a validate method to your model:
def validate
if field1.blank? and field2.blank? and field3.blank? # ...
errors.add_to_base("You must fill in at least one field")
end
end
A:
I believe something like the following may work
class MyModel < ActiveRecord::Base
validate do |my_model|
my_model.my_validation
end
def my_validation
errors.add_to_base("Your error message") if self.blank?
#or self.attributes.blank? - not sure
end
end
A:
Going further with @Votya's correct answer, here is a way to retrieve all columns besides created_at and updated_at (and optionally, any others you want to throw out):
# Get all column names as an array and reject the ones we don't want
Model.content_columns.map(&:name).reject {|i| i =~ /(created|updated)_at/}
For example:
1.9.3p327 :012 > Client.content_columns.map(&:name).reject {|i| i =~ /(created|updated)_at/}
=> ["primary_email", "name"]
A:
If you only have two fields, this will get the job done:
validates :first_name, presence: true, if: -> { nick_name.blank? }
validates :nick_name, presence: true, if: -> { first_name.blank? }
(A lambda is needed here: a bare :nick_name.blank? would be evaluated once when the class loads, so the condition would never actually check the record.)
This does not scale up well with more fields, but when you only have two, this is perhaps clearer than a custom validation method.
n.b. If they omit both, the error message will appear more restrictive than you intend. (e.g. First Name is required. Nick Name is required.) ¯\_(ツ)_/¯
|
Rails: how to require at least one field not to be blank
|
I know I can require a field by adding validates_presence_of :field to the model. However, how do I require at least one field to be mandatory, while not requiring any particular field?
thanks in advance
-- Deb
|
[
"You can use:\nvalidate :any_present?\n\ndef any_present?\n if %w(field1 field2 field3).all?{|attr| self[attr].blank?}\n errors.add :base, \"Error message\"\n end\nend\n\nEDIT: updated from original answer for Rails 3+ as per comment.\nBut you have to provide field names manually.\nYou could get all content columns of a model with Model.content_columns.map(&:name), but it will include created_at and updated_at columns too, and that is probably not what you want.\n",
"Here's a reusable version:\nclass AnyPresenceValidator < ActiveModel::Validator\n def validate(record)\n unless options[:fields].any?{|attr| record[attr].present?}\n record.errors.add(:base, :blank)\n end\n end\nend\n\nYou can use it in your model with:\nvalidates_with AnyPresenceValidator, fields: %w(field1 field2 field3)\n\n",
"Add a validate method to your model:\ndef validate\n if field1.blank? and field2.blank? and field3.blank? # ...\n errors.add_to_base(\"You must fill in at least one field\")\n end\nend\n\n",
"I believe something like the following may work\nclass MyModel < ActiveRecord::Base\n validate do |my_model|\n my_model.my_validation\n end\n\n def my_validation \n errors.add_to_base(\"Your error message\") if self.blank? \n #or self.attributes.blank? - not sure\n end\nend\n\n",
"Going further with @Votya's correct answer, here is a way to retrieve all columns besides created_at and updated_at (and optionally, any others you want to throw out):\n# Get all column names as an array and reject the ones we don't want\nModel.content_columns.map(&:name).reject {|i| i =~ /(created|updated)_at/}\n\nFor example:\n 1.9.3p327 :012 > Client.content_columns.map(&:name).reject {|i| i =~ /(created|updated)_at/}\n => [\"primary_email\", \"name\"]\n\n",
"If you only have two fields, this will get the job done:\nvalidates :first_name, presence: true, if: :nick_name.blank?\nvalidates :nick_name, presence: true, if: :first_name.blank?\n\nThis does not scale up well with more fields, but when you only have two, this is perhaps clearer than a custom validation method.\nn.b. If they omit both, the error message will appear more restrictive than you intend. (e.g. First Name is required. Nick Name is required.) ¯\\(ツ)/¯\n"
] |
[
36,
9,
8,
2,
1,
0
] |
[] |
[] |
[
"models",
"ruby_on_rails",
"validation"
] |
stackoverflow_0002823628_models_ruby_on_rails_validation.txt
|
Q:
How can I use static_assert to call a function that receives an array as pointer and length?
I have a function that receives an array, and I want to test it using static_assert():
// This is the function I want to test:
constexpr static int find_minimum(const int arr[], size_t size);
// the ony way I have found is to define another function:
constexpr static int helper(std::initializer_list<int> lst)
{
return find_minimum(lst.begin(), lst.size());
}
// and then call:
static_assert(2 == helper({2,3,4}));
This works as expected, but is there a way to do this without the helper function?
A:
What I'd do is define the test array as a constexpr variable:
constexpr int testarray[] = {2, 3, 4};
static_assert(2 == find_minimum(testarray, std::size(testarray))); // std::size (from <iterator>) returns the element count; sizeof(testarray) would give the size in bytes
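If changing the signature is an option, a reference-to-array parameter lets the compiler deduce the length so it cannot go out of sync. A minimal sketch (find_minimum_arr is a hypothetical wrapper, not part of the original code):
#include <cstddef>

template <std::size_t N>
constexpr int find_minimum_arr(const int (&arr)[N])
{
    return find_minimum(arr, N); // N is the element count, deduced at compile time
}

static_assert(2 == find_minimum_arr(testarray));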
|
How can I use static_assert to call a function that receives an array as pointer and length?
|
I have a function that receives an array, and I want to test it using static_assert():
// This is the function I want to test:
constexpr static int find_minimum(const int arr[], size_t size);
// the only way I have found is to define another function:
constexpr static int helper(std::initializer_list<int> lst)
{
return find_minimum(lst.begin(), lst.size());
}
// and then call:
static_assert(2 == helper({2,3,4}));
This works as expected, but is there a way to do this without the helper function?
|
[
"What I'd do is define the test array as a constexpr variable:\nconstexpr int testarray[] = {2, 3, 4};\nstatic_assert(2 == find_minimum(testarray, sizeof(testarray)));\n\n"
] |
[
2
] |
[] |
[] |
[
"c++",
"static_assert"
] |
stackoverflow_0074658045_c++_static_assert.txt
|
Q:
Laravel - sort array from other class by SQL table column
I am calling an array of all the comments of a poll by using the following code:
$poll = Poll::find($id);
return view('pages.poll', ['poll' => $poll, 'comments' => $poll->comments]);
and the links between Comments and Polls are the following:
Comment.php
public function poll() {
return $this->belongsTo(Poll::class, 'poll_id');
}
Poll.php
public function comments() {
return $this->hasMany(Comment::class, 'poll_id');
}
Finally, I would like to sort the array comments coming from $poll->comments by the column likes in the Comment table, something like DB::table('comment')->orderBy('likes')->get();.
Is there any way to do that?
A:
$poll->comments->sortBy('likes')
A:
There's a number of ways you can do this.
Add orderBy('likes') directly to your comments relationship:
Poll.php:
public function comments() {
return $this->hasMany(Comment::class, 'poll_id')->orderBy('likes');
}
Now, any time you access $poll->comments, they will be automatically sorted by the likes column. This is useful if you always want comments in this order (and it can still be overridden using the approaches below)
"Eager Load" comments with the correct order:
In your Controller:
$poll = Poll::with(['comments' => function ($query) {
return $query->orderBy('likes');
})->find($id);
return view('pages.poll', [
'poll' => $poll,
'comments' => $poll->comments
]);
with(['comments' => function ($query) { ... }]) adjusts the subquery used to load comments and applies the ordering for this instance only. Note: Eager Loading for a single record generally isn't necessary, but can be useful as you don't need to define an extra variable, don't need to use load, etc.
Manually Load comments with the correct order:
In your Controller:
$poll = Poll::find($id);
$comments = $poll->comments()->orderBy('likes')->get();
return view('pages.poll', [
'poll' => $poll,
'comments' => $comments
]);
Similar to eager loading, but assigned to its own variable.
Use sortBy('likes'):
In your Controller:
$poll = Poll::find($id);
return view('pages.poll', [
'poll' => $poll,
'comments' => $poll->comments->sortBy('likes')
]);
Similar to the above approaches, but uses PHP's sorting instead of database-level sorting, which can be significantly less efficient depending on the number of rows.
https://laravel.com/docs/9.x/eloquent-relationships#eager-loading
https://laravel.com/docs/9.x/eloquent-relationships#constraining-eager-loads
https://laravel.com/docs/9.x/collections#method-sortby
https://laravel.com/docs/9.x/collections#method-sortbydesc
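If the intent is to show the most-liked comments first, each approach above has a descending variant. A small sketch (using the same models as the question):
$poll = Poll::with(['comments' => fn ($query) => $query->orderByDesc('likes')])->find($id);

// or, with the collection approach (values() re-indexes after sorting):
$comments = $poll->comments->sortByDesc('likes')->values();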
|
Laravel - sort array from other class by SQL table column
|
I am calling an array of all the comments of a poll by using the following code:
$poll = Poll::find($id);
return view('pages.poll', ['poll' => $poll, 'comments' => $poll->comments]);
and the links between Comments and Polls are the following:
Comment.php
public function poll() {
return $this->belongsTo(Poll::class, 'poll_id');
}
Poll.php
public function comments() {
return $this->hasMany(Comment::class, 'poll_id');
}
Finally, I would like to sort the array comments coming from $poll->comments by the column likes in the Comment table, something like DB::table('comment')->orderBy('likes')->get();.
Is there any way to do that?
|
[
"$poll->comments->sortBy('likes')\n\n",
"There's a number of ways you can do this.\n\nAdd orderBy('likes') directly to your comments relationship:\n\nPoll.php:\npublic function comments() {\n return $this->hasMany(Comment::class, 'poll_id')->orderBy('likes');\n}\n\nNow, any time you access $poll->comments, they will be automatically sorted by the likes column. This is useful if you always want comments in this order (and it can still be overridden using the approaches below)\n\n\"Eager Load\" comments with the correct order:\n\nIn your Controller:\n$poll = Poll::with(['comments' => function ($query) {\n return $query->orderBy('likes');\n})->find($id);\n\nreturn view('pages.poll', [\n 'poll' => $poll,\n 'comments' => $poll->comments\n]);\n\nwith(['comments' => function ($query) { ... }]) adjusts the subquery used to load comments and applies the ordering for this instance only. Note: Eager Loading for a single record generally isn't necessary, but can be useful as you don't need to define an extra variable, don't need to use load, etc.\n\nManually Load comments with the correct order:\n\nIn your Controller:\n$poll = Poll::find($id);\n$comments = $poll->comments()->orderBy('likes')->get();\n\nreturn view('pages.poll', [\n 'poll' => $poll,\n 'comments' => $comments\n]);\n\nSimilar to eager loading, but assigned to its own variable.\n\nUse sortBy('likes'):\n\nIn your Controller:\n$poll = Poll::find($id);\n\nreturn view('pages.poll', [\n 'poll' => $poll,\n 'comments' => $poll->comments->sortBy('likes')\n]);\n\nSimilar to the above approaches, but uses PHP's sorting instead of database-level sorting, which can be significantly less efficient depending on the number of rows.\nhttps://laravel.com/docs/9.x/eloquent-relationships#eager-loading\nhttps://laravel.com/docs/9.x/eloquent-relationships#constraining-eager-loads\nhttps://laravel.com/docs/9.x/collections#method-sortby\nhttps://laravel.com/docs/9.x/collections#method-sortbydesc\n"
] |
[
0,
0
] |
[] |
[] |
[
"laravel",
"php"
] |
stackoverflow_0074657195_laravel_php.txt
|
Q:
How can I create a tooltip to show a chart series in Slate?
I built a chart and it turned out chaotic because it had too many series in the legend. Therefore, I want to use the tooltips to show the series title, so that it looks tidy. But I am not sure what is the best way to achieve that.
The goal is that when you hover over a color band on the bar chart the correct title shows up. Below is an example of what it currently looks like, with too many series in the legend.
A:
You can turn on tooltips using the miscellaneous tab of your chart menu. Then, input the value of {{w_chart.hover.series}}.
|
How can I create a tooltip to show a chart series in Slate?
|
I built a chart and it turned out chaotic because it had too many series in the legend. Therefore, I want to use the tooltips to show the series title, so that it looks tidy. But I am not sure what is the best way to achieve that.
The goal is that when you hover over a color band on the bar chart the correct title shows up. Below is an example of what it currently looks like, with too many series in the legend.
|
[
"You can turn on tooltips using the miscellaneous tab of your chart menu. Then, input the value of {{w_chart.hover.series}}.\n\n"
] |
[
0
] |
[] |
[] |
[
"foundry_slate",
"palantir_foundry"
] |
stackoverflow_0074658202_foundry_slate_palantir_foundry.txt
|
Q:
PyCharm login window not opening in browser
I am currently trying to log in to my JetBrains account in PyCharm; however, the browser pop-up window never appears:
If I click on the "Troubles?" link, it opens a new window that offers to enter an IDE authorization key before almost instantly changing to an "Unable to complete authorization process" one:
I then followed all the instructions in the help article, but without success.
Here's a list of other things I tried:
Changing my default browser (Chrome, Firefox)
Rebooting my computer
Reinstalling PyCharm
I also can't log in with the activation code since I only have an education license.
Thanks in advance.
A:
Turns out my company actually had a proxy, which blocked the authentication process.
I simply set up the "Proxy settings" in the Licenses Window and it solved the problem.
|
PyCharm login window not opening in browser
|
I am currently trying to log in to my JetBrains account in PyCharm; however, the browser pop-up window never appears:
If I click on the "Troubles?" link, it opens a new window that offers to enter an IDE authorization key before almost instantly changing to an "Unable to complete authorization process" one:
I then followed all the instructions in the help article, but without success.
Here's a list of other things I tried:
Changing my default browser (Chrome, Firefox)
Rebooting my computer
Reinstalling PyCharm
I also can't log in with the activation code since I only have an education license.
Thanks in advance.
|
[
"Turns out my company actually had a proxy, which blocked the authentication process.\nI simply set up the \"Proxy settings\" in the Licenses Window and it solved the problem.\n"
] |
[
0
] |
[] |
[] |
[
"pycharm"
] |
stackoverflow_0074548092_pycharm.txt
|
Q:
Plot only the high density data with geom_hex
I create a geom_hex plot using ggplot with the following code:
ggplot(aes(x= inpol, y= polnameref), data =dfbin) +
geom_hex()
Now I want to produce the same graph but only with the high density data (data with counts in the top 10th percentile). I want the x and y axis scales, the size and colour of the hexagons to stay the same.
The way that I have tried is to create a count column using hexbin. I then subset the dataframe for only the high counts and plot.
h <- hexbin(dfbin,IDs=TRUE)
cdf <- h@count[match(h@cID, h@cell)]
dfbin$count <- cdf
dfbin <- subset(dfbin, count >= quantile(dfbin$count, 0.9, na.rm = T))
ggplot(aes(x= inpol, y= polnameref), data =dfbin) +
geom_hex()+ylim(37,80)+xlim(17,65)
This is not good. The hexagons have moved places. Also I only want the light blue ones. It has reset the colour scale. Please help. The dataframe before the subset is:
inpol polnameref
17.42676391 38.25
17.5048993 38.25
17.77483251 38.87
17.39251595 37.63
17.31113196 36.88
17.43306337 40.85
17.35539298 39.93
17.6139507 38.9
17.56161397 40.37
17.73130075 40.5
17.36926083 41.79
17.56821045 40.6
17.69573547 42.78
17.64254034 41.98
17.62787969 40.67
18.10980992 42.85
18.0725116 41.21
18.01466172 40.23
17.95309804 42.56
17.75885604 41.43
18.23903621 42.29
18.07178862 43.25
17.99967031 42.54
20.02494924 42.29
18.51730294 41.73
21.14507957 43.54
20.48827122 45.06
20.3843049 40.77
18.36868867 41.63
19.53477601 41.6
20.80472381 41.75
19.96617762 41.27
19.98848821 41.77
20.02480735 43.51
19.89288626 42.46
20.32677396 42.05
19.66313026 43.08
20.39302774 42.09
20.19763967 41.7
20.42255334 41.43
20.17807805 42.04
20.28806701 42.88
20.40751273 42.66
19.9050178 41.58
20.25198356 41.88
20.20938488 41.67
19.90404658 42.03
19.71477348 42.17
19.68338152 40.98
19.67046427 41.33
19.90519354 41.66
19.71399623 41.72
19.99590555 43.29
20.01490779 41.47
19.70521077 41.96
19.58479633 42.32
19.7858919 41.09
19.48444803 41.93
19.59917189 41.37
19.6844288 42.26
19.45611171 42.2
19.8283006 40.72
19.43650149 42.81
19.45118314 41.68
19.62734881 42.28
19.5755179 42.53
19.30813031 41.44
19.63158924 42.24
19.80666451 42.13
19.79966307 42.74
19.7888629 42.23
19.76426397 42.24
19.27794406 44.97
19.68915206 43.15
19.40696523 41.38
19.31461079 42.82
19.52188744 42.67
19.70922269 41.72
19.74247564 42.05
19.59442694 42.44
19.50884562 42.99
19.47578724 41.87
19.53253826 40.89
19.31143528 42.09
20.06161687 42.74
21.27860503 43.51
21.31562355 44.91
21.63930143 42.82
21.50771959 43.63
21.45356499 44.07
21.58179896 42.72
21.59739348 43.92
21.56413994 44.72
21.94329194 43.01
22.28233901 44.25
21.95846238 43.41
22.37496129 44.35
21.96430959 45.55
22.01903662 44.36
21.78941674 46.48
21.70367342 45.49
21.79222265 44.46
21.78528912 46.66
22.00845113 44.71
21.53516507 45.45
22.17443148 45.84
22.26811528 45.85
22.03964574 46.15
22.04461583 45.24
21.98978594 45.69
22.49674076 45.64
22.10263468 45.19
21.88414276 45.56
22.00298657 45.73
22.15120982 45.63
21.85503077 46.32
22.12098045 45.29
22.14342396 46.42
22.42330741 46.99
22.3987982 46.38
22.44525934 47.62
22.32275898 46.42
22.00197921 46.82
22.38612131 47.01
22.16875464 46.65
22.44285689 46.94
22.47604651 47.34
22.39620617 47.87
22.52473436 48.47
22.21109105 46.91
22.36805704 46.31
22.33986031 47.12
22.25803183 48.03
22.69138134 47.22
22.29976395 47.06
22.33845646 47.05
22.25282078 47.4
22.27895141 47.02
22.55944832 47.4
22.40500726 46.32
22.51406224 47.59
22.37466457 46.83
22.34895404 46.05
22.4610081 46.3
22.34985953 47.3
22.32471725 45.96
22.40504599 46.25
22.34311997 45.39
22.41149132 46.94
22.47033791 45.13
22.21412718 45.44
22.46292823 44.53
22.38438769 44.74
22.1645396 44.46
22.32260146 44.94
22.43829913 46.31
22.34439617 47.86
22.54660265 44.98
22.51751905 46
22.41903293 47.46
22.4024444 45.62
22.64624141 44.83
22.34402646 45.13
22.64857615 46.75
22.628904 46.39
22.03869574 45.26
22.51592044 45.14
22.48721371 46.08
22.59074569 45.41
22.44825156 45.34
22.52951537 48.48
22.55646667 46.85
22.66958166 46.83
22.42873404 47.42
22.54312474 45.71
22.59117514 46.37
22.61624115 47.08
22.68121763 47.2
22.40023094 46.7
22.66173108 45.54
22.37639571 46.56
22.95107637 47.63
23.0379301 47.94
22.72865707 47.62
22.63720396 47.09
22.80593724 48.41
22.72360911 47.59
22.52995956 47
22.45949931 46.79
22.39673331 47.63
22.45988863 46.9
22.70019946 48.56
22.48648674 46.61
22.07587756 47.49
22.60669807 48.34
22.45930076 47.6
22.29895314 47.59
22.27287001 48.23
22.52669014 48.28
22.46398129 48.34
22.12840597 47.15
22.4984977 48.44
22.42495701 48.4
22.55233865 47.04
22.49739963 48.38
22.70725596 47.58
22.36992699 48.49
22.7392806 48.31
22.49460089 48.2
23.03010242 49.61
22.53344863 48.33
22.64603909 47.96
22.81655928 48.7
22.7326182 49.14
22.78408409 49.43
22.52245748 49.4
22.50190243 48.83
22.67152808 51.71
22.73431867 48.66
23.10417711 49.69
22.93608008 50.27
23.41445591 51.35
22.70795555 49.57
23.12595828 49.54
22.89295606 49.9
22.6075502 51.06
21.87559709 51.46
22.6702465 50.35
22.71228948 50.25
23.61207318 49.6
22.65804417 50.71
22.3070406 51.06
22.39958213 49.52
23.5079138 49.97
22.16422586 50.86
22.38312254 49.46
22.04896729 50.63
23.54793995 49.99
20.38162757 49.71
21.38200519 50.54
21.43333337 49.64
21.59500658 50.17
22.03506269 49.49
21.5043559 48.34
21.70015439 49.68
21.80207113 48.69
21.74517136 49.83
21.8609921 49.57
21.67641705 49.55
21.34903313 49.47
21.41308062 48.49
21.63805781 48.97
21.67864117 50.46
21.68675143 48.54
21.62867732 49.51
21.16976344 48.99
21.70387895 47.62
21.477848 49.49
21.9543765 48.85
21.58896211 47.72
21.26003266 49.11
21.35601366 48.58
22.08436729 47.48
22.70243304 45.53
41.2294227 45.61
26.77883723 49.24
36.47645963 48.8
36.8336389 47.23
38.02198933 47.61
38.22273383 45.69
33.14825745 47.6
34.05349207 48.82
34.09730751 47.71
33.11988209 48.16
31.91505055 46.63
31.26936121 44.89
30.23171054 46.84
28.27728546 46.22
26.10252999 45.6
26.56643397 45.63
26.75523342 41.97
27.13260062 46.2
27.22019207 46.03
27.41993138 46.94
27.21529427 46.89
28.11137671 45.27
27.77674591 45.7
27.50241325 46.15
27.31882784 44.74
27.16689129 44.8
27.03591275 44.1
26.53485741 44.66
27.77045211 44.56
26.27928195 44.53
26.04376496 45.8
26.21254574 46.06
26.08243168 45.68
26.12646966 46.19
25.99029321 45.23
25.84735928 46.34
26.19874778 47.84
25.92992329 45.95
26.1647604 47.36
26.22273734 45.69
25.61298895 45.81
25.86863522 44.88
25.54572722 46.39
25.42494037 45.47
25.67121093 46.19
25.22826636 45.27
25.09830832 47.43
25.04649523 46.48
25.15377705 47.37
24.33327451 45.59
24.40491887 44.72
24.74121548 45.12
24.28010796 47.22
24.47110852 44.83
24.59174212 45.93
25.15500895 45.82
25.1112664 46.48
24.84030969 46.43
24.21852486 46.79
24.32798221 47.29
23.70098056 48.09
23.92662019 47.45
24.48271273 48.53
23.99476941 47.91
24.2861086 48.64
24.10086564 49.4
23.84419737 49.03
24.0335352 50.6
24.01178619 50.13
23.67764051 48.53
23.99742496 49.88
23.71466042 49.83
23.76999156 50.26
23.92642523 50.49
23.93898223 49.09
23.46873289 49.58
23.69176247 50.11
24.21102825 48.65
23.71533163 50.24
24.00964442 50.7
23.66545989 50.38
23.83280933 50.75
23.72293199 50.2
23.96290657 51.18
23.96997352 50.89
23.97610611 50.03
24.18524842 51.21
24.44730965 51.28
24.2232263 52.41
24.30968902 52.33
24.18934039 51.98
24.76317041 53.03
24.05379701 53.32
23.99848849 52.23
23.70820653 54.38
23.9988556 55.27
23.92546417 55.85
23.85557109 55.57
24.22741257 54.53
23.34826666 56.29
26.29360721 56.3
43.43981175 75.88
43.43981175 76.15
39.8001966 75.25
39.8001966 76.22
36.34802927 76.09
36.34802927 78.46
33.93190822 78.25
33.93190822 76.6
31.75796316 76.87
31.05566519 78.99
30.98690611 76.77
30.27141269 77.13
30.31065104 76.14
29.99500575 78.37
29.8957148 79.58
29.72999871 77.46
29.49784144 78.35
29.40197634 77.24
29.13645113 79.73
28.91445928 76.69
28.62765673 76.31
28.63034445 76.95
28.61705689 77.64
28.26823504 76.28
28.23742028 76.3
28.39297285 78.13
27.97140031 77.08
27.93802977 75.13
27.74082305 74.32
27.7799947 75.57
27.85760704 76.29
27.80616237 72.74
27.79257959 73.14
27.36234485 71.33
28.017087 73.31
27.20914893 73.78
27.42008989 72.33
27.89608946 73.2
27.60453589 74.37
28.08962092 72.34
27.5358951 71.74
27.9662519 75.09
28.06823197 72.55
28.01134255 71.87
28.12264824 70.65
27.78466708 72.28
27.97271606 69.82
28.12213267 68.82
28.01118324 70.24
27.93769772 68.46
28.10709973 69.37
28.05272866 69.24
27.90094894 69.94
28.36038596 67.39
28.67551687 66.45
28.54934442 68.08
28.35537894 66.08
28.39971046 66.63
28.31906936 66.15
28.41096835 67.26
28.47796523 67.42
28.55833231 67.68
28.71797685 66.2
28.97345694 66.43
28.4854693 64.5
28.37992181 64.28
28.50348589 63.54
27.9512602 64.1
28.29111012 64.35
28.45047576 64.84
28.34868189 64.05
28.34377334 66.04
28.18122859 64.08
28.48492086 64.98
28.20915728 65.17
28.26514517 65.77
28.42044105 65.43
28.18027759 66.08
28.52852303 64.96
28.53784806 66.17
28.29702085 65.51
28.52164566 64.77
28.35108176 64.5
28.52678661 62.02
28.23239935 61.4
28.33353525 62.61
28.3918793 65.79
28.34785098 62.43
28.90676545 65.3
28.86436173 63.42
28.9000225 61.56
28.82531793 61.44
29.13606601 62.29
29.24239893 62.69
30.41295185 62.72
35.29279472 62.92
40.07560578 63.94
39.85047507 62.58
50.88022001 61.68
64.94580963 65.5
61.19567058 64.58
62.02008307 65.03
61.5814782 64.13
60.4794675 63.72
59.44409362 63.79
59.30328613 63.12
58.44514404 63.24
57.9451571 63.23
57.5200673 64.43
56.84759076 62.92
56.27230232 64.49
55.8781137 63.98
55.64307744 62.88
54.67155688 63.24
53.65561172 61.65
53.42913519 63.13
52.54638999 64.15
52.03003247 61.31
51.80103998 62.99
50.95914709 62.6
50.73235272 61.01
51.94559811 64.35
48.87871926 62.41
49.94153858 61.76
49.01587109 62.06
48.36524521 62.33
48.6064699 62.92
47.91984551 62.23
47.73724565 61.86
47.46788407 63.99
47.00676788 60.22
46.78894496 63.7
46.24485433 63.68
45.83904756 62.4
45.95075114 63.9
46.70190761 63.62
43.67902274 65.3
44.81360341 64.09
44.9267436 63.47
44.872148 61.92
44.80256577 65.51
44.30309988 62.88
43.84937295 63.7
44.29863103 64.55
43.46762553 62.95
43.52759219 65.72
42.94900577 66.19
43.02078156 66.81
43.60463837 64.76
43.57744781 61.5
43.5355533 61.4
42.9092125 59.05
42.91910179 62.53
42.9216313 65.57
42.51388867 62.17
42.41387841 66.49
42.13209007 66.71
43.57322997 65.3
43.89428494 63.22
43.26211357 65.08
41.47403191 61.21
40.95912227 64.11
43.1017305 66.42
40.6548483 64.67
42.53598348 64.05
42.7936141 63.7
44.72244582 64.42
44.38661171 61.25
45.03052152 65.03
46.9433944 68.38
48.15864844 68.08
49.22129432 64.63
49.11202257 64.92
50.59884737 65.97
51.89563567 66.55
51.63078624 67.32
A:
Set a limit on the color scale:
scale_fill_gradient(
limits = c(custom_limit, NA),
na.value = NA
)
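A minimal sketch of how this fits the code from the question, assuming the cutoff is taken from the hexbin counts so that all hexagons keep their positions and low-count bins simply render as NA (note the gradient is then stretched over the remaining range):
library(ggplot2)
library(hexbin)

h <- hexbin(dfbin, IDs = TRUE)
custom_limit <- quantile(h@count, 0.9)  # top 10th percentile of bin counts

ggplot(dfbin, aes(x = inpol, y = polnameref)) +
  geom_hex() +
  scale_fill_gradient(limits = c(custom_limit, NA), na.value = NA)

Because the full data frame is still passed to geom_hex(), the binning (and hence hexagon placement) is unchanged; only bins below the cutoff become invisible.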
|
Plot only the high density data with geom_hex
|
I create a geom_hex plot using ggplot with the following code:
ggplot(aes(x= inpol, y= polnameref), data =dfbin) +
geom_hex()
Now I want to produce the same graph but only with the high density data (data with counts in the top 10th percentile). I want the x and y axis scales, the size and colour of the hexagons to stay the same.
The way that I have tried is to create a count column using hexbin. I then subset the dataframe for only the high counts and plot.
h <- hexbin(dfbin,IDs=TRUE)
cdf <- h@count[match(h@cID, h@cell)]
dfbin$count <- cdf
dfbin <- subset(dfbin, count >= quantile(dfbin$count, 0.9, na.rm = T))
ggplot(aes(x= inpol, y= polnameref), data =dfbin) +
geom_hex()+ylim(37,80)+xlim(17,65)
This is not good. The hexagons have moved places. Also I only want the light blue ones. It has reset the colour scale. Please help. The dataframe before the subset is:
inpol polnameref
17.42676391 38.25
17.5048993 38.25
17.77483251 38.87
17.39251595 37.63
17.31113196 36.88
17.43306337 40.85
17.35539298 39.93
17.6139507 38.9
17.56161397 40.37
17.73130075 40.5
17.36926083 41.79
17.56821045 40.6
17.69573547 42.78
17.64254034 41.98
17.62787969 40.67
18.10980992 42.85
18.0725116 41.21
18.01466172 40.23
17.95309804 42.56
17.75885604 41.43
18.23903621 42.29
18.07178862 43.25
17.99967031 42.54
20.02494924 42.29
18.51730294 41.73
21.14507957 43.54
20.48827122 45.06
20.3843049 40.77
18.36868867 41.63
19.53477601 41.6
20.80472381 41.75
19.96617762 41.27
19.98848821 41.77
20.02480735 43.51
19.89288626 42.46
20.32677396 42.05
19.66313026 43.08
20.39302774 42.09
20.19763967 41.7
20.42255334 41.43
20.17807805 42.04
20.28806701 42.88
20.40751273 42.66
19.9050178 41.58
20.25198356 41.88
20.20938488 41.67
19.90404658 42.03
19.71477348 42.17
19.68338152 40.98
19.67046427 41.33
19.90519354 41.66
19.71399623 41.72
19.99590555 43.29
20.01490779 41.47
19.70521077 41.96
19.58479633 42.32
19.7858919 41.09
19.48444803 41.93
19.59917189 41.37
19.6844288 42.26
19.45611171 42.2
19.8283006 40.72
19.43650149 42.81
19.45118314 41.68
19.62734881 42.28
19.5755179 42.53
19.30813031 41.44
19.63158924 42.24
19.80666451 42.13
19.79966307 42.74
19.7888629 42.23
19.76426397 42.24
19.27794406 44.97
19.68915206 43.15
19.40696523 41.38
19.31461079 42.82
19.52188744 42.67
19.70922269 41.72
19.74247564 42.05
19.59442694 42.44
19.50884562 42.99
19.47578724 41.87
19.53253826 40.89
19.31143528 42.09
20.06161687 42.74
21.27860503 43.51
21.31562355 44.91
21.63930143 42.82
21.50771959 43.63
21.45356499 44.07
21.58179896 42.72
21.59739348 43.92
21.56413994 44.72
21.94329194 43.01
22.28233901 44.25
21.95846238 43.41
22.37496129 44.35
21.96430959 45.55
22.01903662 44.36
21.78941674 46.48
21.70367342 45.49
21.79222265 44.46
21.78528912 46.66
22.00845113 44.71
21.53516507 45.45
22.17443148 45.84
22.26811528 45.85
22.03964574 46.15
22.04461583 45.24
21.98978594 45.69
22.49674076 45.64
22.10263468 45.19
21.88414276 45.56
22.00298657 45.73
22.15120982 45.63
21.85503077 46.32
22.12098045 45.29
22.14342396 46.42
22.42330741 46.99
22.3987982 46.38
22.44525934 47.62
22.32275898 46.42
22.00197921 46.82
22.38612131 47.01
22.16875464 46.65
22.44285689 46.94
22.47604651 47.34
22.39620617 47.87
22.52473436 48.47
22.21109105 46.91
22.36805704 46.31
22.33986031 47.12
22.25803183 48.03
22.69138134 47.22
22.29976395 47.06
22.33845646 47.05
22.25282078 47.4
22.27895141 47.02
22.55944832 47.4
22.40500726 46.32
22.51406224 47.59
22.37466457 46.83
22.34895404 46.05
22.4610081 46.3
22.34985953 47.3
22.32471725 45.96
22.40504599 46.25
22.34311997 45.39
22.41149132 46.94
22.47033791 45.13
22.21412718 45.44
22.46292823 44.53
22.38438769 44.74
22.1645396 44.46
22.32260146 44.94
22.43829913 46.31
22.34439617 47.86
22.54660265 44.98
22.51751905 46
22.41903293 47.46
22.4024444 45.62
22.64624141 44.83
22.34402646 45.13
22.64857615 46.75
22.628904 46.39
22.03869574 45.26
22.51592044 45.14
22.48721371 46.08
22.59074569 45.41
22.44825156 45.34
22.52951537 48.48
22.55646667 46.85
22.66958166 46.83
22.42873404 47.42
22.54312474 45.71
22.59117514 46.37
22.61624115 47.08
22.68121763 47.2
22.40023094 46.7
22.66173108 45.54
22.37639571 46.56
22.95107637 47.63
23.0379301 47.94
22.72865707 47.62
22.63720396 47.09
22.80593724 48.41
22.72360911 47.59
22.52995956 47
22.45949931 46.79
22.39673331 47.63
22.45988863 46.9
22.70019946 48.56
22.48648674 46.61
22.07587756 47.49
22.60669807 48.34
22.45930076 47.6
22.29895314 47.59
22.27287001 48.23
22.52669014 48.28
22.46398129 48.34
22.12840597 47.15
22.4984977 48.44
22.42495701 48.4
22.55233865 47.04
22.49739963 48.38
22.70725596 47.58
22.36992699 48.49
22.7392806 48.31
22.49460089 48.2
23.03010242 49.61
22.53344863 48.33
22.64603909 47.96
22.81655928 48.7
22.7326182 49.14
22.78408409 49.43
22.52245748 49.4
22.50190243 48.83
22.67152808 51.71
22.73431867 48.66
23.10417711 49.69
22.93608008 50.27
23.41445591 51.35
22.70795555 49.57
23.12595828 49.54
22.89295606 49.9
22.6075502 51.06
21.87559709 51.46
22.6702465 50.35
22.71228948 50.25
23.61207318 49.6
22.65804417 50.71
22.3070406 51.06
22.39958213 49.52
23.5079138 49.97
22.16422586 50.86
22.38312254 49.46
22.04896729 50.63
23.54793995 49.99
20.38162757 49.71
21.38200519 50.54
21.43333337 49.64
21.59500658 50.17
22.03506269 49.49
21.5043559 48.34
21.70015439 49.68
21.80207113 48.69
21.74517136 49.83
21.8609921 49.57
21.67641705 49.55
21.34903313 49.47
21.41308062 48.49
21.63805781 48.97
21.67864117 50.46
21.68675143 48.54
21.62867732 49.51
21.16976344 48.99
21.70387895 47.62
21.477848 49.49
21.9543765 48.85
21.58896211 47.72
21.26003266 49.11
21.35601366 48.58
22.08436729 47.48
22.70243304 45.53
41.2294227 45.61
26.77883723 49.24
36.47645963 48.8
36.8336389 47.23
38.02198933 47.61
38.22273383 45.69
33.14825745 47.6
34.05349207 48.82
34.09730751 47.71
33.11988209 48.16
31.91505055 46.63
31.26936121 44.89
30.23171054 46.84
28.27728546 46.22
26.10252999 45.6
26.56643397 45.63
26.75523342 41.97
27.13260062 46.2
27.22019207 46.03
27.41993138 46.94
27.21529427 46.89
28.11137671 45.27
27.77674591 45.7
27.50241325 46.15
27.31882784 44.74
27.16689129 44.8
27.03591275 44.1
26.53485741 44.66
27.77045211 44.56
26.27928195 44.53
26.04376496 45.8
26.21254574 46.06
26.08243168 45.68
26.12646966 46.19
25.99029321 45.23
25.84735928 46.34
26.19874778 47.84
25.92992329 45.95
26.1647604 47.36
26.22273734 45.69
25.61298895 45.81
25.86863522 44.88
25.54572722 46.39
25.42494037 45.47
25.67121093 46.19
25.22826636 45.27
25.09830832 47.43
25.04649523 46.48
25.15377705 47.37
24.33327451 45.59
24.40491887 44.72
24.74121548 45.12
24.28010796 47.22
24.47110852 44.83
24.59174212 45.93
25.15500895 45.82
25.1112664 46.48
24.84030969 46.43
24.21852486 46.79
24.32798221 47.29
23.70098056 48.09
23.92662019 47.45
24.48271273 48.53
23.99476941 47.91
24.2861086 48.64
24.10086564 49.4
23.84419737 49.03
24.0335352 50.6
24.01178619 50.13
23.67764051 48.53
23.99742496 49.88
23.71466042 49.83
23.76999156 50.26
23.92642523 50.49
23.93898223 49.09
23.46873289 49.58
23.69176247 50.11
24.21102825 48.65
23.71533163 50.24
24.00964442 50.7
23.66545989 50.38
23.83280933 50.75
23.72293199 50.2
23.96290657 51.18
23.96997352 50.89
23.97610611 50.03
24.18524842 51.21
24.44730965 51.28
24.2232263 52.41
24.30968902 52.33
24.18934039 51.98
24.76317041 53.03
24.05379701 53.32
23.99848849 52.23
23.70820653 54.38
23.9988556 55.27
23.92546417 55.85
23.85557109 55.57
24.22741257 54.53
23.34826666 56.29
26.29360721 56.3
43.43981175 75.88
43.43981175 76.15
39.8001966 75.25
39.8001966 76.22
36.34802927 76.09
36.34802927 78.46
33.93190822 78.25
33.93190822 76.6
31.75796316 76.87
31.05566519 78.99
30.98690611 76.77
30.27141269 77.13
30.31065104 76.14
29.99500575 78.37
29.8957148 79.58
29.72999871 77.46
29.49784144 78.35
29.40197634 77.24
29.13645113 79.73
28.91445928 76.69
28.62765673 76.31
28.63034445 76.95
28.61705689 77.64
28.26823504 76.28
28.23742028 76.3
28.39297285 78.13
27.97140031 77.08
27.93802977 75.13
27.74082305 74.32
27.7799947 75.57
27.85760704 76.29
27.80616237 72.74
27.79257959 73.14
27.36234485 71.33
28.017087 73.31
27.20914893 73.78
27.42008989 72.33
27.89608946 73.2
27.60453589 74.37
28.08962092 72.34
27.5358951 71.74
27.9662519 75.09
28.06823197 72.55
28.01134255 71.87
28.12264824 70.65
27.78466708 72.28
27.97271606 69.82
28.12213267 68.82
28.01118324 70.24
27.93769772 68.46
28.10709973 69.37
28.05272866 69.24
27.90094894 69.94
28.36038596 67.39
28.67551687 66.45
28.54934442 68.08
28.35537894 66.08
28.39971046 66.63
28.31906936 66.15
28.41096835 67.26
28.47796523 67.42
28.55833231 67.68
28.71797685 66.2
28.97345694 66.43
28.4854693 64.5
28.37992181 64.28
28.50348589 63.54
27.9512602 64.1
28.29111012 64.35
28.45047576 64.84
28.34868189 64.05
28.34377334 66.04
28.18122859 64.08
28.48492086 64.98
28.20915728 65.17
28.26514517 65.77
28.42044105 65.43
28.18027759 66.08
28.52852303 64.96
28.53784806 66.17
28.29702085 65.51
28.52164566 64.77
28.35108176 64.5
28.52678661 62.02
28.23239935 61.4
28.33353525 62.61
28.3918793 65.79
28.34785098 62.43
28.90676545 65.3
28.86436173 63.42
28.9000225 61.56
28.82531793 61.44
29.13606601 62.29
29.24239893 62.69
30.41295185 62.72
35.29279472 62.92
40.07560578 63.94
39.85047507 62.58
50.88022001 61.68
64.94580963 65.5
61.19567058 64.58
62.02008307 65.03
61.5814782 64.13
60.4794675 63.72
59.44409362 63.79
59.30328613 63.12
58.44514404 63.24
57.9451571 63.23
57.5200673 64.43
56.84759076 62.92
56.27230232 64.49
55.8781137 63.98
55.64307744 62.88
54.67155688 63.24
53.65561172 61.65
53.42913519 63.13
52.54638999 64.15
52.03003247 61.31
51.80103998 62.99
50.95914709 62.6
50.73235272 61.01
51.94559811 64.35
48.87871926 62.41
49.94153858 61.76
49.01587109 62.06
48.36524521 62.33
48.6064699 62.92
47.91984551 62.23
47.73724565 61.86
47.46788407 63.99
47.00676788 60.22
46.78894496 63.7
46.24485433 63.68
45.83904756 62.4
45.95075114 63.9
46.70190761 63.62
43.67902274 65.3
44.81360341 64.09
44.9267436 63.47
44.872148 61.92
44.80256577 65.51
44.30309988 62.88
43.84937295 63.7
44.29863103 64.55
43.46762553 62.95
43.52759219 65.72
42.94900577 66.19
43.02078156 66.81
43.60463837 64.76
43.57744781 61.5
43.5355533 61.4
42.9092125 59.05
42.91910179 62.53
42.9216313 65.57
42.51388867 62.17
42.41387841 66.49
42.13209007 66.71
43.57322997 65.3
43.89428494 63.22
43.26211357 65.08
41.47403191 61.21
40.95912227 64.11
43.1017305 66.42
40.6548483 64.67
42.53598348 64.05
42.7936141 63.7
44.72244582 64.42
44.38661171 61.25
45.03052152 65.03
46.9433944 68.38
48.15864844 68.08
49.22129432 64.63
49.11202257 64.92
50.59884737 65.97
51.89563567 66.55
51.63078624 67.32
|
[
"Set a limit on the color scale:\nscale_fill_gradient(\n limits = c(custom_limit, NA), \n na.value = NA\n)\n\n"
] |
[
0
] |
[] |
[] |
[
"density_plot",
"ggplot2",
"hex",
"r"
] |
stackoverflow_0067602243_density_plot_ggplot2_hex_r.txt
|
Q:
Loki query to show all logs
I'm trying to test our Loki log data source. From the Queries I've been executing nothing is returned.
It's possible that the logs are in a different format to what I'm expecting, or that no logs are ingested by Loki, and my pipeline is broken somewhere.
Is there a Loki query that returns all the logs?
I've looked through documentation, and so far, I haven't found any such Loki query. Any other queries to help debug would be appreciated!
A:
You can use a match-all regex together with a stream you have for all your logs.
For example if you collect a stream named host for all your incoming logs you'd query for:
{host=~ ".*"}
You should note that at present a stream selector is always required for querying logs.
A:
{host=~ ".*"} doesn't work for me. Use {host=~ ".+"} That should work always.
|
Loki query to show all logs
|
I'm trying to test our Loki log data source. From the Queries I've been executing nothing is returned.
It's possible that the logs are in a different format to what I'm expecting, or that no logs are ingested by Loki, and my pipeline is broken somewhere.
Is there a Loki query that returns all the logs?
I've looked through documentation, and so far, I haven't found any such Loki query. Any other queries to help debug would be appreciated!
|
[
"You can use a match-all regex together with a stream you have for all your logs.\nFor example if you collect a stream named host for all your incoming logs you'd query for:\n{host=~ \".*\"}\n\nYou should note that at present a stream selector is always required for querying logs.\n",
"{host=~ \".*\"} doesn't work for me. Use {host=~ \".+\"} That should work always.\n"
] |
[
5,
0
] |
[] |
[] |
[
"grafana_loki",
"loki"
] |
stackoverflow_0070777336_grafana_loki_loki.txt
|
Q:
cURL POST : how to send data as BODY of the request (not POST param)
I need to contact a WebService:
this WS accepts only POST.
For authenticating I have to send some JSON in the BODY of request
while in the HEADER I have to send the WS method I want to call.
This is a valid request sent using CLI (WS answers correctly)
curl -X POST -k -H 'Operation: TPLGetCardData' -H 'card_num: 123456789' -i 'https://example.com/ws.aspx' --data '{
"auth": [
{
"Timestamp": 1669910083,
"SenderIdentifier": "XXX-XXX-XXXX",
"ConnectionKey": "XXXX"
}
]
}'
This is the PHP code I've written, but I receive an error from the WS
$data = '{
"auth": [
{
"Timestamp": 1669910083,
"SenderIdentifier": "XXX-XXX-XXXX",
"ConnectionKey": "XXXX"
}
]
}';
$cURLConnection = curl_init();
curl_setopt($cURLConnection, CURLOPT_URL, 'https://example.com/ws.aspx');
curl_setopt($cURLConnection, CURLOPT_RETURNTRANSFER, true);
curl_setopt($cURLConnection, CURLOPT_POST, true);
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, http_build_query($data));
//curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, $data);
curl_setopt($cURLConnection, CURLOPT_HTTPHEADER, array('Operation: TPLGetCardData', 'card_num: 123456789'));
//curl_setopt($cURLConnection, CURLOPT_VERBOSE , true);
$result = curl_exec($cURLConnection);
curl_close($cURLConnection);
$jsonArrayResponse - json_decode($result);
print_r('RESULT is <pre>'.$result.'</pre>');
If I send the request with
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, $data)
the error is "no credentials"
if I send the request with
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, http_build_query($data));
the error is "wrong credentials"
I don't understand what the difference is between what I send with the curl CLI command and what I send with PHP.
If someone could help me, it would be really appreciated.
:::EDIT:::
Sorry, it turned out that the problem was on the WS side; my request was OK... 2 days lost finding a nonexistent problem.
A:
Try this by encoding the data before passing it to curl:
$data = json_encode(array(
    "Timestamp" => "1669910083",
    "SenderIdentifier" => "XXX-XXX-XXXX",
    "ConnectionKey" => "XXXX",
    "ShortCode" => "600981"
));
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, $data);
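For completeness: the working CLI request wraps the credentials in an "auth" array, so a body that matches it exactly would look more like the sketch below. The Content-Type header is an assumption on my part; many JSON endpoints require it:
$data = json_encode(array(
    "auth" => array(
        array(
            "Timestamp" => 1669910083,
            "SenderIdentifier" => "XXX-XXX-XXXX",
            "ConnectionKey" => "XXXX"
        )
    )
));
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, $data);
curl_setopt($cURLConnection, CURLOPT_HTTPHEADER, array(
    'Content-Type: application/json',
    'Operation: TPLGetCardData',
    'card_num: 123456789'
));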
A:
I apologize to everyone who helped, but it turned out that the problem was on the WS side; my request was OK...
2 days lost finding a nonexistent problem.
Sorry
|
cURL POST : how to send data as BODY of the request (not POST param)
|
I need to contact a WebService:
this WS accepts only POST.
For authenticating I have to send some JSON in the BODY of request
while in the HEADER I have to send the WS method I want to call.
This is a valid request sent using CLI (WS answers correctly)
curl -X POST -k -H 'Operation: TPLGetCardData' -H 'card_num: 123456789' -i 'https://example.com/ws.aspx' --data '{
"auth": [
{
"Timestamp": 1669910083,
"SenderIdentifier": "XXX-XXX-XXXX",
"ConnectionKey": "XXXX"
}
]
}'
This is the PHP code I've written, but I receive an error from the WS
$data = '{
"auth": [
{
"Timestamp": 1669910083,
"SenderIdentifier": "XXX-XXX-XXXX",
"ConnectionKey": "XXXX"
}
]
}';
$cURLConnection = curl_init();
curl_setopt($cURLConnection, CURLOPT_URL, 'https://example.com/ws.aspx');
curl_setopt($cURLConnection, CURLOPT_RETURNTRANSFER, true);
curl_setopt($cURLConnection, CURLOPT_POST, true);
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, http_build_query($data));
//curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, $data);
curl_setopt($cURLConnection, CURLOPT_HTTPHEADER, array('Operation: TPLGetCardData', 'card_num: 123456789'));
//curl_setopt($cURLConnection, CURLOPT_VERBOSE , true);
$result = curl_exec($cURLConnection);
curl_close($cURLConnection);
$jsonArrayResponse - json_decode($result);
print_r('RESULT is <pre>'.$result.'</pre>');
If I send the request with
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, $data)
the error is "no credentials"
if I send the request with
curl_setopt($cURLConnection, CURLOPT_POSTFIELDS, http_build_query($data));
the error is "wrong credentials"
I don't understand which is the difference between what I send with curl CLI command and what I send with PHP.
If someone could help me, it will be really apreciated
:::EDIT:::
Sorry, it came out that the problem was on the WS side, my request was OK...2 days lost in finding a non existing problem.
|
[
"Try this by encoding the data before passing it to curl\n$data = json_encode(array(\n \"Timestamp\" => \"1669910083\",\n \"SenderIdentifier\" => \"XXX-XXX-XXXX\",\n \"ConnectionKey\" => \"XXXX\"\n \"ShortCode\" => \"600981\"\n));\n\ncurl_setopt($cURLConnection, CURLOPT_POSTFIELDS, $data);\n\n",
"I apologize to everyone who helped, but it came out that the problem was on the WS side, my request was OK...\n2 days lost in finding a non existing problem.\nSorry\n"
] |
[
0,
0
] |
[] |
[] |
[
"php",
"php_curl"
] |
stackoverflow_0074645189_php_php_curl.txt
|
Q:
Next.js 13 - Error: The "target" property is no longer supported in next.config.js, Even next.config.js does not have the target property
I upgraded my next.js app to Next.js 13 and pushed the new version to AWS Amplify. The build failed due to this error: The "target" property is no longer supported in next.config.js
Error: The "target" property is no longer supported in next.config.js. See more info here https://nextjs.org/docs/messages/deprecated-target-config at Object.loadConfig [as default (/codebuild/output/src405507991/src/assistian/node_modules/next/dist/server/config.js:97:19)
Here is my next.config.js with no target:
/** @type {import('next').NextConfig} */
module.exports = {
webpack(config) {
config.module.rules.push({
test: /\.svg$/i,
issuer: /\.[jt]sx?$/,
use: ['@svgr/webpack'],
})
return config
}
}
Any opinion on what is going wrong?
A:
Appears to be unsupported at this time as per https://github.com/vercel/next.js/issues/41932
A:
Amplify currently supports NextJS versions greater than version 11, via a migration. See here: https://docs.aws.amazon.com/amplify/latest/userguide/update-app-nextjs-version.html
|
Next.js 13 - Error: The "target" property is no longer supported in next.config.js, Even next.config.js does not have the target property
|
I upgraded my next.js app to Next.js 13 and pushed the new version to AWS Amplify. The build failed due to this error: The "target" property is no longer supported in next.config.js
Error: The "target" property is no longer supported in next.config.js. See more info here https://nextjs.org/docs/messages/deprecated-target-config at Object.loadConfig [as default (/codebuild/output/src405507991/src/assistian/node_modules/next/dist/server/config.js:97:19)
Here is my next.config.js with no target:
/** @type {import('next').NextConfig} */
module.exports = {
webpack(config) {
config.module.rules.push({
test: /\.svg$/i,
issuer: /\.[jt]sx?$/,
use: ['@svgr/webpack'],
})
return config
}
}
Any opinion on what is going wrong?
|
[
"Appears to be unsupported at this time as per https://github.com/vercel/next.js/issues/41932\n",
"Amplify currently supports NextJS versions greater than version 11, via a migration. See here: https://docs.aws.amazon.com/amplify/latest/userguide/update-app-nextjs-version.html\n"
] |
[
2,
0
] |
[] |
[] |
[
"next.js"
] |
stackoverflow_0074216515_next.js.txt
|
Q:
How to implement an updateTodo function in a todo list?
I want to update a todo item in the todo list, but I have some trouble understanding this function. For example, why does setEdit in the submitUpdate function set the id and value back to the same initial values as in the useState above? Can anybody help me understand this function better? Thank you so much!
This is the function:
TodoList.js:
const updateTodo = (todoId, newValue) => {
if (!newValue.text || /^\s*$/.test(newValue.text)) {
return;
}
setTodos(prev => prev.map(item => (item.id === todoId ? newValue : item)));
};
Todo.js:
const [edit, setEdit] = useState({
id: null,
value: ''
});
const submitUpdate = value => {
updateTodo(edit.id, value);
setEdit({
id: null,
value: ''
});
};
if (edit.id) {
return <TodoForm edit={edit} onSubmit={submitUpdate} />;
}
A:
Because after you update the todo via the updateTodo() function, you need to clear what is stored in state for future use, hence setting it back to what it was initialised to in the useState declaration. This reset is usually bundled right after the actual update, as good state-management practice.
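To see the full cycle, here is a minimal sketch of how an item typically enters edit mode; the Edit button and the todo variable are assumptions, not part of the original code:
// somewhere inside the rendered todo item:
<button onClick={() => setEdit({ id: todo.id, value: todo.text })}>
  Edit
</button>

While edit.id is set, the component renders <TodoForm edit={edit} onSubmit={submitUpdate} />; once submitUpdate resets edit.id to null, the component falls back to its normal list rendering.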
|
How to implement an updateTodo function in a todo list?
|
I wanna update the todo item in the todo list but I have some troubles understanding this function, like why does setState in the function submitTodo still set the id and value the same like in the useState above...can anybody help me to understand this function better? thank you so much!
This is the function:
TodoList.js:
const updateTodo = (todoId, newValue) => {
if (!newValue.text || /^\s*$/.test(newValue.text)) {
return;
}
setTodos(prev => prev.map(item => (item.id === todoId ? newValue : item)));
};
Todo.js:
const [edit, setEdit] = useState({
id: null,
value: ''
});
const submitUpdate = value => {
updateTodo(edit.id, value);
setEdit({
id: null,
value: ''
});
};
if (edit.id) {
return <TodoForm edit={edit} onSubmit={submitUpdate} />;
}
|
[
"Because after you update the todo via updateTodo() function, you'll need to clear what is stored in state for future use, hence setting it back to what it was initialised to in the useState declaration. This is usually bundled after the actual update, as a good state management practice.\n"
] |
[
0
] |
[] |
[] |
[
"onsubmit",
"reactjs",
"setstate"
] |
stackoverflow_0070051548_onsubmit_reactjs_setstate.txt
|
Q:
Write the attributes of an object into a txt file
So I am trying to write all the attributes of an object called item to a text file.
I was able to access the information with:
>>> print(*vars(item).values())
125001 John Smith 12 First Road London N1 55 74
but when I try to write it to the text file:
with open('new_student_data.txt', 'w') as f:
f.writelines(*vars(item).values())
it throws an error, as writelines() only takes one argument.
How can I write all the attributes of item to a single line in the text file?
A:
with open('new_student_data.txt', 'w') as f:
for i in vars(item).values():
f.write(f"{i}\n")
If file.writelines only takes an iterable and doesn't support *args, you can always iterate over your list and write it with file.write.
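If you specifically want everything on a single line, as the question asks, a small variant of the same idea:
with open('new_student_data.txt', 'w') as f:
    f.write(' '.join(str(v) for v in vars(item).values()))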
A:
You can just print as follows:
with open('new_student_data.txt', 'w') as f:
print(*vars(item).values(), file=f)
A:
Every object exposes its attributes as a dictionary via its __dict__ attribute. This way you get the attribute names and values as a dictionary, and then you can simply iterate through it and write it to a txt file:
with open('params.txt', 'w') as f:
    for param, value in item.__dict__.items():
        if not param.startswith("__"):
            f.write(f"{param}:{value}\n")

Here I am also getting rid of any default attributes that start with "__". Hope this helps
|
Write the attributes of an object into a txt file
|
So i am trying to write to a text file with all the attributes of an object called item.
I was able to access the information with:
>>>print(*vars(item).values()])
125001 John Smith 12 First Road London N1 55 74
but when i try write it into the text file:
with open('new_student_data.txt', 'w') as f:
f.writelines(*vars(item).values())
it throws an error as writelines() only takes one argument.
how can i write all the attributes of item to a single line in the text file?
|
[
"with open('new_student_data.txt', 'w') as f:\n for i in vars(item).values():\n f.write(f\"{i}\\n\")\n\nIf file.writelines only takes an iterable and doesn't support *args, you can always iterate over your list and write it with file.write.\n",
"You can just print as follows:\nwith open('new_student_data.txt', 'w') as f:\n print(*vars(item).values(), file=f)\n\n",
"Every class is capable of outputting a dictionary by calling __dict__ method. This way you are getting the attribute name and the value as a dictionary, then you could simply iterate through it and write it to a txt file:\nwith open('params.txt'), 'w') as f:\n for param, value in params.__dict__.items():\n if not param.startswith(\"__\"):\n f.write(f\"{param}:{value}\\n\")\n\nHere I am also getting rid of default class attributes that start with \"__\". Hope this helps\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0071604746_python.txt
|
Q:
Error is attached to wrong input field when a form object involves two models with the same attribute name
I'm working on a form object that updates a User record together with its associated models, like Address and Phone, via nested attributes.
Both the Address and Phone models have a number attribute, so I name the HTML input and form object attribute differently for each one.
<%= simple_form_for @user_form, method: :patch do |f| %>
<%= f.input :number %>
<%= f.input :phone %>
<% end %>
controller:
def edit
@user_form = UserForm.new(@user)
end
def update
@user_form = UserForm.new(@user, update_params)
if @user_form.save
redirect_to :users_path
else
render :edit
end
end
def update_params
params.require(:user_form).permit(:number, :phone)
end
Simplified form object:
class UserForm
include ActiveModel::Model
include ActiveModel::Validations::Callbacks
attr_accessor :number, :phone
attr_reader :user
validate :validate_children
def initialize(user, update_params = nil)
@user = user
super(update_params)
end
def save
user.assign_attributes(user_params)
return false if invalid?
user.save
end
def user_params
{}.tap do |p|
p[:address_attributes] = {number: number}
p[:phone_attributes] = {number: phone}
end.compact
end
def validate_children
promote_errors(user.associated_model.errors) if user.associated_model.invalid?
end
def promote_errors(child_errors)
child_errors.each do |attribute, message|
errors.add(attribute, message)
end
end
end
I delegate validation to the actual Address and Phone models where there is a
validates :number, presence: true
so if validations on either models kick in, the errors are promoted up to the form object and displayed on the invalid input field.
@user_form.errors.keys
=> [:"phone.number", :number]
@user_form.errors.full_messages
=> ["Number can't be blank"]
The problem is that if, for example, the :phone field is left blank when the form is submitted, the error is displayed in the :number input field of my form even though the invalid attribute is the Phone number. In other words, the Phone number error is rendered in the Address number field, because both form inputs are named the same (f.input :number).
Is there a way to change that, or to promote the errors differently, so this doesn't happen? The code above is obviously dumbed down; let me know if specifics are needed.
A:
That seems like a lot of unnecessary complexity.
The way you would typically set this up in Rails is to use fields_for (and the SF wrapper simple_fields_for):
<%= form_with(model: @user) do |form| %>
<%= form.fields_for(:phone) do |phone_fields| %>
<pre><%= phone_fields.object.errors.full_messages.inspect %></pre>
<% end %>
<%= form.fields_for(:number) do |number_fields| %>
<pre><%= number_fields.object.errors.full_messages.inspect %></pre>
<% end %>
<% end %>
These methods yield a form builder and calling object gets the model instance wrapped by the form builder. This makes it easy to either display the errors together or per attribute and get it right - this is especially true if you're dealing with a one to many or many to many instead of one to one.
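For fields_for to build and save those nested records, the User model typically needs the associations and the nested attributes declared. A minimal sketch, assuming one-to-one associations:
class User < ApplicationRecord
  has_one :address
  has_one :phone

  # lets address_attributes / phone_attributes flow through on update
  accepts_nested_attributes_for :address, :phone
end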
|
Error is attached to wrong input field when a form object involves two models with the same attribute name
|
I'm working on a form object that involves updating a User record with it's many associated models like Address and Phone via nested attributes.
Both Address and Phone models have a number attribute, so I name each HTML input and form object attribute differently for each one.
<%= simple_form_for @user_form, method: :patch do |f| %>
<%= f.input :number %>
<%= f.input :phone %>
<% end %>
controller:
def edit
@user_form = UserForm.new(@user)
end
def update
@user_form = UserForm.new(@user, update_params)
if @customer_update.save
redirect_to :users_path
else
render :edit
end
end
def update_params
params.require(:user_form).permit(:number, :phone)
end
Simplified form object:
class UserForm
include ActiveModel::Model
include ActiveModel::Validations::Callbacks
attr_accessor :number
:phone
attr_reader :user
validate :validate_children
def initialize(user, update_params = nil)
@user = user
super(update_params)
end
def save
user.assign_attributes(user_params)
return false if invalid?
user.save
end
def user_params
{}.tap do |p|
p[:address_attributes] = {number: number}
p[:phone_attributes] = {number: phone}
end.compact
end
def validate_children
promote_errors(user.associated_model.errors) if associated_model.invalid?
end
def promote_errors(child_errors)
child_errors.each do |attribute, message|
errors.add(attribute, message)
end
end
end
I delegate validation to the actual Address and Phone models where there is a
validates :number, presence: true
so if validations on either models kick in, the errors are promoted up to the form object and displayed on the invalid input field.
@user_form.errors.keys
=> [:"phone.number", :number]
@user_form.errors.full_messages
=> ["Number can't be blank"]
The problem is that if for example the :phone field is left blank when the form is submited, because the actual attribute that is invalid is the Phone number, the error is displayed in the :number input field in my form, or in other words the Phone number error is rendered in the Address number field because the form input is called the same (f.input :number).
Is there a way to change that or promote the errors differently so this doesn't happen? The code above is obviously dumbed down, let me know if specifics are needed.
|
[
"That seems like a lot of unnecessary complexity.\nThe way you would typically set this up in Rails is to use fields_for (and the SF wrapper simple_fields_for):\n<%= form_with(model: @user) do |form| %>\n <%= form.fields_for(:phone) do |phone_fields| %>\n <pre><%= phone_fields.object.errors.full_messages.inspect %></pre>\n <% end %>\n\n <%= form.fields_for(:number) do |number_fields| %>\n <pre><%= number_fields.object.errors.full_messages.inspect %></pre>\n <% end %>\n<% end %>\n\nThese methods yield a form builder and calling object gets the model instance wrapped by the form builder. This makes it easy to either display the errors together or per attribute and get it right - this is especially true if you're dealing with a one to many or many to many instead of one to one.\n"
] |
[
0
] |
[] |
[] |
[
"activerecord",
"error_handling",
"forms",
"ruby_on_rails",
"validation"
] |
stackoverflow_0074649477_activerecord_error_handling_forms_ruby_on_rails_validation.txt
|
Q:
Notepad++ macro in menu bar with icon
I'm looking for a way to add a macro as an icon in the toolbar.
It seems possible to bind keys to the macro, customize the toolbar with Customize Toolbar and use toolbarIcons.xml to customize it. But none of those options provide an option to add a recorded macro as an icon to the toolbar in Notepad++.
Any suggestions?
A:
As you said, this can be achieved with the CustomizeToolbar plugin. Just follow these steps:
Install CustomizeToolbar plugin (you can find it here on Sourceforge)
Restart notepad++
Enable custom buttons checking Plugin/Customize Toolbar/Custom Buttons:
Notepad++ will ask you to restart:
Open %APPDATA%\Notepad++\plugins\config\CustomizeToolbar.btn and add this line: Macro,YourMacroName,,, replacing YourMacroName with the name of your macro
Restart notepad++
A new button should appear in the toolbar:
A:
Adaptation of answer 1: on a German installation, the menu name "Makros" (as written in the menu bar) worked for me; "Makro" or "Macro" caused an error.
|
Notepad++ macro in menu bar with icon
|
I'm looking for a way to add a macro as an icon in the toolbar.
It seems possible to bind keys to the macro, customize the toolbar with Customize Toolbar and use toolbarIcons.xml to customize it. But none of those options provide an option to add a recorded macro as an icon to the toolbar in Notepad++.
Any suggestions?
|
[
"As you said this can be achieved with CustomizeToolbar plugin. Just follow tyhese steps:\n\nInstall CustomizeToolbar plugin (you can find it here on Sourceforge) \nRestart notepad++\nEnable custom buttons checking Plugin/Customize Toolbar/Custom Buttons:\n\nNotepad++ will ask you to restart:\n\nOpen %APPDATA%\\Notepad++\\plugins\\config\\CustomizeToolbar.btn and add this line: Macro,YourMacroName,,, replacing YourMacroName with the name of your macro\nRestart notepad++\nA new button should appear in the toolbar:\n\n\n",
"Answer 1 adaption: In German as written in menue bar \"Makros\" worked for me. Makro or Macro caused an error\n"
] |
[
5,
0
] |
[] |
[] |
[
"customization",
"macros",
"notepad++"
] |
stackoverflow_0047350950_customization_macros_notepad++.txt
|
Q:
How to Check Internal Memory Storage count in android?
I want to get the internal storage's total and used memory and show them in a progress bar, like in the image below, in my app. How do I apply a custom progress bar in Android? (I'm using this in a download app.)
A:
Here's how it can be accomplished:
TestActivity.java:
public class TestActivity extends Activity {
/**
* Called when the activity is first created.
*/
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final TextView occupiedSpaceText = (TextView)findViewById(R.id.occupiedSpace);
final TextView freeSpaceText = (TextView)findViewById(R.id.freeSpace);
final ProgressBar progressIndicator = (ProgressBar)findViewById(R.id.indicator);
final float totalSpace = DeviceMemory.getInternalStorageSpace();
final float occupiedSpace = DeviceMemory.getInternalUsedSpace();
final float freeSpace = DeviceMemory.getInternalFreeSpace();
final DecimalFormat outputFormat = new DecimalFormat("#.##");
if (null != occupiedSpaceText) {
occupiedSpaceText.setText(outputFormat.format(occupiedSpace) + " MB");
}
if (null != freeSpaceText) {
freeSpaceText.setText(outputFormat.format(freeSpace) + " MB");
}
if (null != progressIndicator) {
progressIndicator.setMax((int) totalSpace);
progressIndicator.setProgress((int)occupiedSpace);
}
}
/**
* From question http://stackoverflow.com/questions/2652935/android-internal-phone-storage by Lazy Ninja
*/
public static class DeviceMemory {
public static float getInternalStorageSpace() {
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
//StatFs statFs = new StatFs("/data");
float total = ((float)statFs.getBlockCount() * statFs.getBlockSize()) / 1048576;
return total;
}
public static float getInternalFreeSpace() {
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
//StatFs statFs = new StatFs("/data");
float free = ((float)statFs.getAvailableBlocks() * statFs.getBlockSize()) / 1048576;
return free;
}
public static float getInternalUsedSpace() {
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
//StatFs statFs = new StatFs("/data");
float total = ((float)statFs.getBlockCount() * statFs.getBlockSize()) / 1048576;
float free = ((float)statFs.getAvailableBlocks() * statFs.getBlockSize()) / 1048576;
float busy = total - free;
return busy;
}
}
}
layout/main.xml:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent">
<ProgressBar
android:id="@+id/indicator"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
style="@android:style/Widget.ProgressBar.Horizontal"
android:progressDrawable="@drawable/memory_indicator_progress" />
<TextView
android:id="@+id/occupiedSpace"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentBottom="true"
android:gravity="left"
android:layout_marginLeft="5dp"
android:textColor="@android:color/black"
android:textStyle="bold" />
<TextView
android:id="@+id/freeSpace"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_alignParentBottom="true"
android:gravity="right"
android:layout_marginRight="5dp"
android:textColor="@android:color/black"
android:textStyle="bold" />
</RelativeLayout>
drawable/memory_indicator_progress.xml:
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@android:id/background">
<shape android:shape="rectangle">
<solid android:color="@android:color/holo_green_light" />
</shape>
</item>
<item android:id="@android:id/progress">
<clip>
<shape android:shape="rectangle">
<solid android:color="@android:color/darker_gray" />
</shape>
</clip>
</item>
</layer-list>
I'm not sure if it's exactly what you're looking for, but on my Xperia V with Android 4.1 I get the following picture:
On Your device colors might be different due to different platform.
A:
Use android.os.Environment to find the internal directory, then use android.os.StatFs to call the Unix statfs system call on it. Shamelessly stolen from the Android settings app:
File path = Environment.getDataDirectory();
StatFs stat = new StatFs(path.getPath());
long blockSize = stat.getBlockSize();
long availableBlocks = stat.getAvailableBlocks();
return Formatter.formatFileSize(this, availableBlocks * blockSize);
From this answer
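Note that getBlockSize(), getBlockCount() and getAvailableBlocks() are deprecated since API 18; on newer API levels the Long variants (used in the answer below) do the same job. A sketch against the same /data directory:
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
long totalBytes = statFs.getBlockCountLong() * statFs.getBlockSizeLong();
long freeBytes  = statFs.getAvailableBlocksLong() * statFs.getBlockSizeLong();
long usedBytes  = totalBytes - freeBytes;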
A:
public class Utils {
public static float getInternalStorageSpace() {
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
return ((float) statFs.getBlockCountLong() * statFs.getBlockSizeLong()) / 1048576;
}
public static float getInternalUsedSpace() {
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
float total = ((float) statFs.getBlockCountLong() * statFs.getBlockSizeLong()) / 1048576;
float free = ((float) statFs.getAvailableBlocksLong() * statFs.getBlockSizeLong()) / 1048576;
return total - free;
}
public static float getUsedSpace() {
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
return ((float) statFs.getAvailableBlocksLong() * statFs.getBlockSizeLong());
}
public static float getStorageSpace() {
StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());
return ((float) statFs.getBlockCountLong() * statFs.getBlockSizeLong());
}
public static String readableFileTotalSize() {
if (getStorageSpace() <= 0) return "0";
final String[] units = new String[]{"B", "kB", "MB", "GB", "TB"};
int digitGroups = (int) (Math.log10(getStorageSpace()) / Math.log10(1024));
return new DecimalFormat("#,##0.#").format(getStorageSpace() / Math.pow(1024, digitGroups)) + " " + units[digitGroups];
}
public static String readableFileUseSize() {
if (getUsedSpace() <= 0) return "0";
final String[] units = new String[]{"B", "kB", "MB", "GB", "TB"};
int digitGroups = (int) (Math.log10(getUsedSpace()) / Math.log10(1024));
return new DecimalFormat("#,##0.#").format(getUsedSpace() / Math.pow(1024, digitGroups)) + " " + units[digitGroups];
}
}
<androidx.appcompat.widget.LinearLayoutCompat
android:layout_width="match_parent"
android:orientation="vertical"
android:layout_marginTop="@dimen/_10sdp"
android:paddingStart="@dimen/_5sdp"
android:paddingBottom="@dimen/_100sdp"
android:paddingEnd="@dimen/_5sdp"
android:layout_height="wrap_content"
tools:ignore="UnusedAttribute">
<TextView
android:text="@string/file_manager"
android:layout_width="match_parent"
android:textColor="?colorOnBackground"
android:textSize="@dimen/_13sdp"
android:layout_margin="@dimen/_5sdp"
android:includeFontPadding="false"
android:fontFamily="@font/roboto_bold"
android:layout_height="wrap_content"/>
<androidx.constraintlayout.widget.ConstraintLayout
android:id="@+id/cl_internal"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="@dimen/_7sdp"
android:paddingStart="@dimen/_10sdp"
android:paddingEnd="@dimen/_10sdp"
android:background="@drawable/bg_internal">
<RelativeLayout
android:id="@+id/rl_support"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="@dimen/_8sdp"
android:layout_marginBottom="@dimen/_8sdp"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toBottomOf="parent">
<com.google.android.material.progressindicator.CircularProgressIndicator
android:id="@+id/pb_progress"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
app:indicatorSize="@dimen/_45sdp"
app:trackColor="?android:colorBackground"
app:indicatorColor="?colorPrimary"/>
<ImageView
android:id="@+id/iv_image"
android:layout_width="@dimen/_25sdp"
android:layout_height="@dimen/_25sdp"
android:layout_centerInParent="true"
android:layout_marginBottom="@dimen/_8sdp"
android:contentDescription="@string/list_sort"
android:src="@drawable/ic_sdcard" />
</RelativeLayout>
<TextView
android:id="@+id/tv_title"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginStart="@dimen/_10sdp"
android:layout_marginEnd="@dimen/_10sdp"
android:text="@string/internal_storage"
android:singleLine="true"
android:textSize="@dimen/_13sdp"
android:textColor="?android:textColorPrimary"
android:fontFamily="@font/roboto_medium"
android:ellipsize="middle"
android:includeFontPadding="true"
android:layout_marginTop="@dimen/_13sdp"
app:layout_constraintBottom_toTopOf="@+id/tv_content"
app:layout_constraintLeft_toRightOf="@+id/rl_support"
app:layout_constraintRight_toLeftOf="@+id/iv_menu"
app:layout_constraintTop_toTopOf="parent" />
<TextView
android:id="@+id/tv_content"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginStart="@dimen/_10sdp"
android:layout_marginEnd="@dimen/_10sdp"
android:singleLine="true"
android:fontFamily="@font/roboto_regular"
android:text="@string/app_name"
android:textSize="@dimen/_10sdp"
android:includeFontPadding="true"
android:layout_marginBottom="@dimen/_13sdp"
android:textColor="?android:textColorHint"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toRightOf="@+id/rl_support"
app:layout_constraintTop_toBottomOf="@+id/tv_title"
app:layout_constraintRight_toLeftOf="@+id/iv_menu"/>
<ImageView
android:id="@+id/iv_menu"
android:layout_width="@dimen/_20sdp"
android:layout_height="@dimen/_20sdp"
android:src="@drawable/ic_arrow_right"
android:layout_marginTop="@dimen/_8sdp"
android:layout_marginBottom="@dimen/_8sdp"
android:background="?actionBarItemBackground"
android:contentDescription="@string/list_sort"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toBottomOf="parent"/>
</androidx.constraintlayout.widget.ConstraintLayout>
</androidx.appcompat.widget.LinearLayoutCompat>
final float totalSpace = Utils.getInternalStorageSpace();
final float occupiedSpace = Utils.getInternalUsedSpace();
CircularProgressIndicator indicator = binding.pbProgress;
indicator.setMax((int) totalSpace);
indicator.setProgress((int)occupiedSpace);
String total = Utils.readableFileUseSize()+" free / "+Utils.readableFileTotalSize();
binding.tvContent.setText(total);
|
How to Check Internal Memory Storage count in android?
|
i Want to Iternal storage get Memory and used memory in progressbar in like in image below in my app.how to apply please help me custome progressbar used in android.and i used download app.
|
[
"Here's how it can be accomplished:\nTestActivity.java:\npublic class TestActivity extends Activity {\n\n /**\n * Called when the activity is first created.\n */\n @Override\n public void onCreate(Bundle savedInstanceState) {\n\n super.onCreate(savedInstanceState);\n setContentView(R.layout.main);\n\n final TextView occupiedSpaceText = (TextView)findViewById(R.id.occupiedSpace);\n final TextView freeSpaceText = (TextView)findViewById(R.id.freeSpace);\n final ProgressBar progressIndicator = (ProgressBar)findViewById(R.id.indicator);\n final float totalSpace = DeviceMemory.getInternalStorageSpace();\n final float occupiedSpace = DeviceMemory.getInternalUsedSpace();\n final float freeSpace = DeviceMemory.getInternalFreeSpace();\n final DecimalFormat outputFormat = new DecimalFormat(\"#.##\");\n\n if (null != occupiedSpaceText) {\n occupiedSpaceText.setText(outputFormat.format(occupiedSpace) + \" MB\");\n }\n\n if (null != freeSpaceText) {\n freeSpaceText.setText(outputFormat.format(freeSpace) + \" MB\");\n }\n\n if (null != progressIndicator) {\n progressIndicator.setMax((int) totalSpace);\n progressIndicator.setProgress((int)occupiedSpace);\n }\n }\n\n /**\n * From question http://stackoverflow.com/questions/2652935/android-internal-phone-storage by Lazy Ninja\n */\n public static class DeviceMemory {\n\n public static float getInternalStorageSpace() {\n StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());\n //StatFs statFs = new StatFs(\"/data\");\n float total = ((float)statFs.getBlockCount() * statFs.getBlockSize()) / 1048576;\n return total;\n }\n\n public static float getInternalFreeSpace() {\n StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());\n //StatFs statFs = new StatFs(\"/data\");\n float free = ((float)statFs.getAvailableBlocks() * statFs.getBlockSize()) / 1048576;\n return free;\n }\n\n public static float getInternalUsedSpace() {\n StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());\n //StatFs statFs = new StatFs(\"/data\");\n float total = ((float)statFs.getBlockCount() * statFs.getBlockSize()) / 1048576;\n float free = ((float)statFs.getAvailableBlocks() * statFs.getBlockSize()) / 1048576;\n float busy = total - free;\n return busy;\n }\n }\n}\n\nlayout/main.xml:\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\">\n\n\n <ProgressBar\n android:id=\"@+id/indicator\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_alignParentBottom=\"true\"\n style=\"@android:style/Widget.ProgressBar.Horizontal\"\n android:progressDrawable=\"@drawable/memory_indicator_progress\" />\n\n <TextView\n android:id=\"@+id/occupiedSpace\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_alignParentLeft=\"true\"\n android:layout_alignParentBottom=\"true\"\n android:gravity=\"left\"\n android:layout_marginLeft=\"5dp\"\n android:textColor=\"@android:color/black\"\n android:textStyle=\"bold\" />\n <TextView\n android:id=\"@+id/freeSpace\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_alignParentRight=\"true\"\n android:layout_alignParentBottom=\"true\"\n android:gravity=\"right\"\n android:layout_marginRight=\"5dp\"\n android:textColor=\"@android:color/black\"\n android:textStyle=\"bold\" />\n</RelativeLayout>\n\ndrawable/memory_indicator_progress.xml:\n<layer-list 
xmlns:android=\"http://schemas.android.com/apk/res/android\">\n\n <item android:id=\"@android:id/background\">\n <shape android:shape=\"rectangle\">\n <solid android:color=\"@android:color/holo_green_light\" />\n </shape>\n </item>\n <item android:id=\"@android:id/progress\">\n <clip>\n <shape android:shape=\"rectangle\">\n <solid android:color=\"@android:color/darker_gray\" />\n </shape>\n </clip>\n </item>\n\n</layer-list>\n\nI'm not sure if it's exactly what You're looking for, but on my xperia v with 4.1 android I get the following picture:\n\nOn Your device colors might be different due to different platform.\n",
"Use android.os.Environment to find the internal directory, then use android.os.StatFs to call the Unix statfs system call on it. Shamelessly stolen from the Android settings app:\nFile path = Environment.getDataDirectory();\nStatFs stat = new StatFs(path.getPath());\nlong blockSize = stat.getBlockSize();\nlong availableBlocks = stat.getAvailableBlocks();\nreturn Formatter.formatFileSize(this, availableBlocks * blockSize);\n\nFrom this answer\n",
"public class Utils {\n\npublic static float getInternalStorageSpace() {\n StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());\n return ((float) statFs.getBlockCountLong() * statFs.getBlockSizeLong()) / 1048576;\n}\n\n\npublic static float getInternalUsedSpace() {\n StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());\n float total = ((float) statFs.getBlockCountLong() * statFs.getBlockSizeLong()) / 1048576;\n float free = ((float) statFs.getAvailableBlocksLong() * statFs.getBlockSizeLong()) / 1048576;\n return total - free;\n}\n\npublic static float getUsedSpace() {\n StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());\n return ((float) statFs.getAvailableBlocksLong() * statFs.getBlockSizeLong());\n}\n\npublic static float getStorageSpace() {\n StatFs statFs = new StatFs(Environment.getDataDirectory().getAbsolutePath());\n return ((float) statFs.getBlockCountLong() * statFs.getBlockSizeLong());\n}\n\n\npublic static String readableFileTotalSize() {\n if (getStorageSpace() <= 0) return \"0\";\n final String[] units = new String[]{\"B\", \"kB\", \"MB\", \"GB\", \"TB\"};\n int digitGroups = (int) (Math.log10(getStorageSpace()) / Math.log10(1024));\n return new DecimalFormat(\"#,##0.#\").format(getStorageSpace() / Math.pow(1024, digitGroups)) + \" \" + units[digitGroups];\n}\n\npublic static String readableFileUseSize() {\n if (getUsedSpace() <= 0) return \"0\";\n final String[] units = new String[]{\"B\", \"kB\", \"MB\", \"GB\", \"TB\"};\n int digitGroups = (int) (Math.log10(getUsedSpace()) / Math.log10(1024));\n return new DecimalFormat(\"#,##0.#\").format(getUsedSpace() / Math.pow(1024, digitGroups)) + \" \" + units[digitGroups];\n}\n\n}\n <androidx.appcompat.widget.LinearLayoutCompat\n android:layout_width=\"match_parent\"\n android:orientation=\"vertical\"\n android:layout_marginTop=\"@dimen/_10sdp\"\n android:paddingStart=\"@dimen/_5sdp\"\n android:paddingBottom=\"@dimen/_100sdp\"\n android:paddingEnd=\"@dimen/_5sdp\"\n android:layout_height=\"wrap_content\"\n tools:ignore=\"UnusedAttribute\">\n\n <TextView\n android:text=\"@string/file_manager\"\n android:layout_width=\"match_parent\"\n android:textColor=\"?colorOnBackground\"\n android:textSize=\"@dimen/_13sdp\"\n android:layout_margin=\"@dimen/_5sdp\"\n android:includeFontPadding=\"false\"\n android:fontFamily=\"@font/roboto_bold\"\n android:layout_height=\"wrap_content\"/>\n\n <androidx.constraintlayout.widget.ConstraintLayout\n android:id=\"@+id/cl_internal\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_marginTop=\"@dimen/_7sdp\"\n android:paddingStart=\"@dimen/_10sdp\"\n android:paddingEnd=\"@dimen/_10sdp\"\n android:background=\"@drawable/bg_internal\">\n\n\n <RelativeLayout\n android:id=\"@+id/rl_support\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_marginTop=\"@dimen/_8sdp\"\n android:layout_marginBottom=\"@dimen/_8sdp\"\n app:layout_constraintStart_toStartOf=\"parent\"\n app:layout_constraintTop_toTopOf=\"parent\"\n app:layout_constraintBottom_toBottomOf=\"parent\">\n\n <com.google.android.material.progressindicator.CircularProgressIndicator\n android:id=\"@+id/pb_progress\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:layout_centerInParent=\"true\"\n app:indicatorSize=\"@dimen/_45sdp\"\n app:trackColor=\"?android:colorBackground\"\n app:indicatorColor=\"?colorPrimary\"/>\n\n <ImageView\n 
android:id=\"@+id/iv_image\"\n android:layout_width=\"@dimen/_25sdp\"\n android:layout_height=\"@dimen/_25sdp\"\n android:layout_centerInParent=\"true\"\n android:layout_marginBottom=\"@dimen/_8sdp\"\n android:contentDescription=\"@string/list_sort\"\n android:src=\"@drawable/ic_sdcard\" />\n\n </RelativeLayout>\n\n <TextView\n android:id=\"@+id/tv_title\"\n android:layout_width=\"0dp\"\n android:layout_height=\"wrap_content\"\n android:layout_marginStart=\"@dimen/_10sdp\"\n android:layout_marginEnd=\"@dimen/_10sdp\"\n android:text=\"@string/internal_storage\"\n android:singleLine=\"true\"\n android:textSize=\"@dimen/_13sdp\"\n android:textColor=\"?android:textColorPrimary\"\n android:fontFamily=\"@font/roboto_medium\"\n android:ellipsize=\"middle\"\n android:includeFontPadding=\"true\"\n android:layout_marginTop=\"@dimen/_13sdp\"\n app:layout_constraintBottom_toTopOf=\"@+id/tv_content\"\n app:layout_constraintLeft_toRightOf=\"@+id/rl_support\"\n app:layout_constraintRight_toLeftOf=\"@+id/iv_menu\"\n app:layout_constraintTop_toTopOf=\"parent\" />\n\n <TextView\n android:id=\"@+id/tv_content\"\n android:layout_width=\"0dp\"\n android:layout_height=\"wrap_content\"\n android:layout_marginStart=\"@dimen/_10sdp\"\n android:layout_marginEnd=\"@dimen/_10sdp\"\n android:singleLine=\"true\"\n android:fontFamily=\"@font/roboto_regular\"\n android:text=\"@string/app_name\"\n android:textSize=\"@dimen/_10sdp\"\n android:includeFontPadding=\"true\"\n android:layout_marginBottom=\"@dimen/_13sdp\"\n android:textColor=\"?android:textColorHint\"\n app:layout_constraintBottom_toBottomOf=\"parent\"\n app:layout_constraintLeft_toRightOf=\"@+id/rl_support\"\n app:layout_constraintTop_toBottomOf=\"@+id/tv_title\"\n app:layout_constraintRight_toLeftOf=\"@+id/iv_menu\"/>\n\n\n <ImageView\n android:id=\"@+id/iv_menu\"\n android:layout_width=\"@dimen/_20sdp\"\n android:layout_height=\"@dimen/_20sdp\"\n android:src=\"@drawable/ic_arrow_right\"\n android:layout_marginTop=\"@dimen/_8sdp\"\n android:layout_marginBottom=\"@dimen/_8sdp\"\n android:background=\"?actionBarItemBackground\"\n android:contentDescription=\"@string/list_sort\"\n app:layout_constraintEnd_toEndOf=\"parent\"\n app:layout_constraintTop_toTopOf=\"parent\"\n app:layout_constraintBottom_toBottomOf=\"parent\"/>\n\n </androidx.constraintlayout.widget.ConstraintLayout>\n\n </androidx.appcompat.widget.LinearLayoutCompat>\n\n\n\n\n\n final float totalSpace = Utils.getInternalStorageSpace();\n final float occupiedSpace = Utils.getInternalUsedSpace();\n CircularProgressIndicator indicator = binding.pbProgress;\n indicator.setMax((int) totalSpace);\n indicator.setProgress((int)occupiedSpace);\n\n String total = Utils.readableFileUseSize()+\" free / \"+Utils.readableFileTotalSize();\n\n binding.tvContent.setText(total);\n\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"android",
"custom_controls",
"download",
"progress_bar"
] |
stackoverflow_0015517878_android_custom_controls_download_progress_bar.txt
|
Q:
Finding max values in a text file read in using Python
I have a text file containing only numbers. There are gaps between the sets of numbers, and the problem asks that the file be read through, the numbers within each group summed, and then the top three group totals found and added together.
I've found a way to read through the file and calculate the sum of the largest set, but cannot find the second or third.
I've pasted my code here:my coding attempt
and my results text file content here: List of values in the text file
A:
Create a list to store the group totals.
Read the file a line at a time. Try to convert each line to int. If that fails then you're at a group separator so append zero to the group_totals list
Sort the list and print the last 3 items
FILENAME = '/Users/dan/Desktop/day1 copy.txt'
group_totals = [0]
with open(FILENAME) as data:
for line in data:
try:
group_totals[-1] += int(line)
except ValueError:
group_totals.append(0)
print(sorted(group_totals)[-3:])
Output:
[740, 1350, 2000]
Note:
This code implicitly assumes that there are at least 3 groups of values
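To get the final number the question asks for (the three largest totals added together), sum that same slice:
print(sum(sorted(group_totals)[-3:]))  # 740 + 1350 + 2000 = 4090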
|
Finding max values in a text file read in using Python
|
I have a text file containing only numbers. There are gaps in the sets of numbers and the problem asks that the file is read through, adds the numbers within each group then finds the top three values in the list and adds them together.
I've found the way to read through the file and calculate the sum of the largest set but cannot find the second or third.
I've pasted my code here:my coding attempt
and my results text file content here: List of values in the text file
|
[
"Create a list to store the group totals.\nRead the file a line at a time. Try to convert each line to int. If that fails then you're at a group separator so append zero to the group_totals list\nSort the list and print the last 3 items\nFILENAME = '/Users/dan/Desktop/day1 copy.txt'\n\ngroup_totals = [0]\n\nwith open(FILENAME) as data:\n for line in data:\n try:\n group_totals[-1] += int(line)\n except ValueError:\n group_totals.append(0)\n print(sorted(group_totals)[-3:])\n\nOutput:\n[740, 1350, 2000]\n\nNote:\nThis code implicitly assumes that there are at least 3 groups of values\n"
] |
[
0
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0074657939_file_python.txt
|
Q:
flutter-desktop-embedding how to build exe file
In flutter-desktop-embedding, I am in a Windows environment. I can run the project, but I don't know how to build an exe file. I want to know what to do.
A:
If you flutter build or flutter run a desktop project, you're already building a .exe; that's what's being launched by flutter run. You can find it in the build directory of the project (e.g., build\windows\x64\Debug\Runner\Flutter Desktop Example.exe for the FDE example app).
A:
First check the build options with flutter build -h
Then run flutter build windows
Then you can find the build in the following path \build\windows\x64\Release\Runner
Although it is created in the release folder it is still just a debug build.
A:
You can find the release version of your App after running flutter build windows in \build\windows\runner\Release\
A:
For distribution, or to run the app on another PC, you need to place three Visual Studio 2019 runtime DLLs in the folder containing your .exe file.
The files are:
msvcp140.dll
vcruntime140.dll
vcruntime140_1.dll
The path for these files is:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE
A:
Just run flutter build windows
After it has finished compiling the executable, you will find your .exe file in build\windows\runner\Release\.
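Combining the answers, the typical sequence on Windows looks like the sketch below; the config step is only needed on older Flutter versions where desktop support is not enabled by default, and the exact output path varies by Flutter version as the answers above show:
flutter config --enable-windows-desktop
flutter build windows
REM output lands under build\windows\runner\Release\ (or build\windows\x64\...\Runner on some setups)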
|
flutter-desktop-embedding how to build exe file
|
in flutter-desktop-embedding , I am a windows environment, I can run it, but I don't know how to build an exe file. I want to know what to do.
|
[
"If you flutter build or flutter run a desktop project, you're already building a .exe; that's what's being launched by flutter run. You can find it in the build directory of the project (e.g., build\\windows\\x64\\Debug\\Runner\\Flutter Desktop Example.exe for the FDE example app).\n",
"First check the build options with flutter build -h\nThen run flutter build windows\nThen you can find the build in the following path \\build\\windows\\x64\\Release\\Runner\nAlthough it is created in the release folder it is still just a debug build.\n",
"You can find the release version of your App after running flutter build windows in \\build\\windows\\runner\\Release\\\n",
"but for distribution or run on other pc you need three files of visual studio 2019 to add in folder where you have your .exe file\nfiles are\nmsvcp140.dll\nvcruntime140.dll\nvcruntime140_1.dll\npath for these file is\nC:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\Common7\\IDE\n",
"Just run flutter build windows\nAfter it is done compiling and making the executable file you will find in build\\windows\\runner\\Release\\ your .exe file type should be listed.\n"
] |
[
9,
8,
4,
4,
0
] |
[] |
[] |
[
"flutter",
"flutter_desktop"
] |
stackoverflow_0057032406_flutter_flutter_desktop.txt
|
Q:
Create Column Based On Aggregation of Other Columns - Pyspark
I want to create a column whose values are equal to another column's when certain conditions are met. I want the column first to have the value of the column share when the columns gender, week and type are the same.
I have the following dataframe:
+------+----+----+-------------+-------------------+
|gender|week|type| share| units|
+------+----+----+-------------+-------------------+
| Male| 37|Polo| 0.01| 1809.0|
| Male| 37|Polo| 0.1| 2327.0|
| Male| 37|Polo| 0.15| 2982.0|
| Male| 37|Polo| 0.2| 3558.0|
| Male| 38|Polo| 0.01| 1700.0|
| Male| 38|Polo| 0.1| 2245.0|
| Male| 38|Polo| 0.15| 2900.0|
| Male| 38|Polo| 0.2| 3477.0|
I want the output to be:
+------+----+----+-------------+-------------------+---------+
|gender|week|type| share| units| first|
+------+----+----+-------------+-------------------+---------+
| Male| 37|Polo| 0.01| 1809.0| 1809.0|
| Male| 37|Polo| 0.1| 2327.0| 1809.0|
| Male| 37|Polo| 0.15| 2982.0| 1809.0|
| Male| 37|Polo| 0.2| 3558.0| 1809.0|
| Male| 38|Polo| 0.01| 1700.0| 1700.0|
| Male| 38|Polo| 0.1| 2245.0| 1700.0|
| Male| 38|Polo| 0.15| 2900.0| 1700.0|
| Male| 38|Polo| 0.2| 3477.0| 1700.0|
How can I implement this?
A:
I found the answer, so I'm posting it here.
I used a window function:
m_window = Window.partitionBy(["gender","week","type"]).orderBy("share")

Then I create the column using the first function over that window:
df.withColumn("first", first("units").over(m_window))
|
Create Column Based On Aggregation of Other Columns - Pyspark
|
I want to create a column whose values are equal to another column's when certain conditions are met. I want the column first to have the value of the column share when the columns gender, week and type are the same.
I have the following dataframe:
+------+----+----+-------------+-------------------+
|gender|week|type| share| units|
+------+----+----+-------------+-------------------+
| Male| 37|Polo| 0.01| 1809.0|
| Male| 37|Polo| 0.1| 2327.0|
| Male| 37|Polo| 0.15| 2982.0|
| Male| 37|Polo| 0.2| 3558.0|
| Male| 38|Polo| 0.01| 1700.0|
| Male| 38|Polo| 0.1| 2245.0|
| Male| 38|Polo| 0.15| 2900.0|
| Male| 38|Polo| 0.2| 3477.0|
I want the output to be:
+------+----+----+-------------+-------------------+---------+
|gender|week|type| share| units| first|
+------+----+----+-------------+-------------------+---------+
| Male| 37|Polo| 0.01| 1809.0| 1809.0|
| Male| 37|Polo| 0.1| 2327.0| 1809.0|
| Male| 37|Polo| 0.15| 2982.0| 1809.0|
| Male| 37|Polo| 0.2| 3558.0| 1809.0|
| Male| 38|Polo| 0.01| 1700.0| 1700.0|
| Male| 38|Polo| 0.1| 2245.0| 1700.0|
| Male| 38|Polo| 0.15| 2900.0| 1700.0|
| Male| 38|Polo| 0.2| 3477.0| 1700.0|
How can I implement this?
|
[
"I found the answer out so I will be posting it here.\nI used a window function:\nm_window = Window.partitionBy([\"gender\",\"week\",\"type\"]).orderBy(\"share\")\nThen I create a column using the function first and over window like this:\ndf.withColumn(\"first\", first(\"units\").over(m_window))\n"
] |
[
1
] |
[] |
[] |
[
"conditional_statements",
"dataframe",
"pyspark",
"python"
] |
stackoverflow_0074655040_conditional_statements_dataframe_pyspark_python.txt
|
Q:
Databinding in Angular, understanding in detail
I am new to Angular and started learning it recently. I came across the concept of databinding in Angular. I was able to understand the syntax, but there were some questions I couldn't find an answer to. These are my queries:
When we export a class from the component's TS file, we can use the class properties in the HTML file. For example, databinding a class property to an HTML element works. But how does this HTML element know the class or the class attribute? How does the HTML file have access to it?
Why exactly are we exporting a class for a component to be used? Is the component a class too? If yes, then when we use the component, are we calling that class, and does this lead to rendering the HTML and CSS mentioned in the component?
Please let me know.
A:
I believe you might have to do a deeper dive into how Angular works.
Here is a link to your first question that might help.
It might seem daunting at first, but trust me, it's worth it eventually:
https://www.infragistics.com/products/ignite-ui-angular/angular/components/general/wpf-to-angular-guide/two-way-binding
A:
Answering your question in details requires having an in-depth knowledge about how Angular internally works, but here's a starting point:
I've generated a component using angular CLI:
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-example',
templateUrl: './example.component.html',
styleUrls: ['./example.component.scss']
})
export class ExampleComponent implements OnInit {
public myProperty: number = 0;
constructor() { }
ngOnInit(): void {
}
}
So:
Is the component a class too?
Yes, as you can see from the export class keywords, your component is first of all a regular JS class, which implements the OnInit interface, has an empty constructor and defines a public variable.
If yes, then when we use the component are we calling that class?
Exactly: Angular does a bit of magic (see the next point of the answer), so whenever it finds an html tag named <app-example></app-example>, it creates an ExampleComponent instance and replaces that tag with the html you defined in example.component.html
and this leads to rendering the HTML and CSS mentioned in the component?
The magic happens just above the class definition: Angular heavily relies on Typescript Decorators, which are a (still) experimental feature of Typescript. Decorators allow you (or Angular, in our case) to alter the behaviour of a class, for example by intercepting method calls, property changes (did you just say databinding?) and constructor parameters (and this is how Angular's dependency injection works).
In the @Component decorator, which is attached to the ExampleComponent class below it, you're defining three things:
the selector, or tag name that Angular will search in the DOM and replace with your component's html
Where to find your component's html, which will be linked to each of your ExampleComponent instance
Stylesheets for your component's html
So, when a property on your component changes (for example myProperty), Angular intercepts that change thanks to the @Component decorator you've defined (to understand how, do a little research on decorators), and will re-render its html. Inserting that property value into a paragraph like <p>{{myProperty}}</p> is just a matter of string replacement.
So, now you have the answer to your first question:
But how does this HTML element know the class or the class attribute? How does the HTML file have access to it?
It's not the html that knows to which component it belongs; it's the component (or rather Angular, which handles that component) that knows which html it has to render and which css it needs to apply.
Why exactly are we exporting a class for a component to be used?
This is simply to let Angular know that we have defined a component. Exporting something from a .ts file makes it available on the rest of the project, and particularly on the AppModule file, where you will find your ExampleComponent among the declarations array:
@NgModule({
declarations: [
AppComponent,
ExampleComponent
],
// Something else
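To make the binding concrete, here is a hypothetical template for ExampleComponent (example.component.html) that uses the property defined above; the markup is an illustration, not part of the original question:
<!-- interpolation: Angular re-renders this whenever myProperty changes -->
<p>{{ myProperty }}</p>
<!-- event binding: updating the property triggers change detection -->
<button (click)="myProperty = myProperty + 1">Increment</button>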
|
Databinding in Angular, understanding in detail
|
I am new to Angular and I just started learning it recently. I came across the concept of Databinding in Angular. I was able to understand the syntax and stuff but there were some questions that I couldn't find an answer for. These are the queries I had:
When we export a class from the component TS file, we can use the class properties in HTML file. For eg: Databinding a class property to a HTML element works. But how does this HTML element know the class or the class attribute? How does the HTML file have access to it?
Why exactly are we exporting a class for a component to be used? Is the component a class too? If yes, then wehen we use the component are we calling that class and this leads to rendering the HTML and CSS mentioned in the component?
Please let me know.
|
[
"i believe you might have to do a deeper dive into how angular works.\nHere is a link to your first question that might help.\nIt might seem daunting at first, but trust me, its worth it eventually\nhttps://www.infragistics.com/products/ignite-ui-angular/angular/components/general/wpf-to-angular-guide/two-way-binding\n",
"Answering your question in details requires having an in-depth knowledge about how Angular internally works, but here's a starting point:\nI've generated a component using angular CLI:\nimport { Component, OnInit } from '@angular/core';\n\n@Component({\n selector: 'app-example',\n templateUrl: './example.component.html',\n styleUrls: ['./example.component.scss']\n})\nexport class ExampleComponent implements OnInit {\n\n public myProperty: number = 0;\n\n constructor() { }\n\n ngOnInit(): void {\n }\n\n}\n\nSo:\n\nIs the component a class too?\n\nYes, as you can see from the labels \"export class\", your component is first of all a regular JS class, which implements the OnInit interface, has an empty constructor and defines a public variable.\n\nIf yes, then when we use the component are we calling that class?\n\nExactly: Angular does a bit of magic (see the next point of the answer), so whenever finds a html tag named <app-example></app-example>, it creates an ExampleComponent instance and replaces that tag with the html you defined on example.component.html\n\nand this leads to rendering the HTML and CSS mentioned in the component?\n\nThe magic happens just above the class definition: Angular heavily relies on Typescript Decorators, which are an (still) experimental feature of Typescript. Decorators allows you (or Angular in our case) to alter the behaviour of a class, for example by intercepting methods call, property changes (did you just say databinding?) and constructor parameters (and this is how Angular's dependency injection works).\nIn the @Component decorators, which is linked to the below ExampleComponent class, you're defining three things:\n\nthe selector, or tag name that Angular will search in the DOM and replace with your component's html\nWhere to find your component's html, which will be linked to each of your ExampleComponent instance\nStylesheets for your component's html\n\nSo, when a property on your component changes (for example myProperty), Angular intercepts that change thanks to the @Component decorators you've defined (to understand how, do a little search about decorators), and will re-render his html. Inserting that property value on a paragraph like <p>{{myProperty}}</p> is just a matter of string replacement.\nSo, now you have the answer to your first question:\n\nBut how does this HTML element know the class or the class attribute? How does the HTML file have access to it?\n\nIt's not the html that knows which component it belongs, it's the component (or Angular that handles that component) that knows which html has to render and which css needs to apply.\n\nWhy exactly are we exporting a class for a component to be used?\n\nThis is simply to let Angular know that we have defined a component. Exporting something from a .ts file makes it available on the rest of the project, and particularly on the AppModule file, where you will find your ExampleComponent among the declarations array:\n@NgModule({\n declarations: [\n AppComponent,\n ExampleComponent\n ],\n// Something else\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"angular",
"javascript"
] |
stackoverflow_0074651255_angular_javascript.txt
|
Q:
Django, looping through openweather icons always displays the last icon instead of appended city icon
I am trying to build out a weather app using openweather api and what I want to do is replace the icon png's with my own customized icon set.
In order to do this, I have referenced the openweather api png codes as seen here: https://openweathermap.org/weather-conditions. I have written some code that states if this code equals '01d' then replace the icon code with my custom data image src.
The issue is when looping through (after I have added a city), I am always appending the last image which in this case is the data code for '50n' rather than the correct weather code for that city.
here is the code in my views.py:
def weather(request):
url = 'http://api.openweathermap.org/data/2.5/weather?q={}&units=metric&appid=<MYAPPKEY>'
cities = City.objects.all()
weather_data = []
for city in cities:
city_weather = requests.get(url.format(city)).json()
weather = {
'city' : city,
'temperature' : city_weather['main']['temp'],
'description' : city_weather['weather'][0]['description'],
'icon' : city_weather['weather'][0]['icon'],
}
icon = weather['icon']
if icon == '01d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'
elif icon == '01n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01n.svg'
elif icon == '02d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02d.svg'
elif icon == '02n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02n.svg'
elif icon == '03d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03d.svg'
elif icon == '03n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03n.svg'
elif icon == '04d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04d.svg'
elif icon == '04n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04n.svg'
elif icon == '09d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09d.svg'
elif icon == '09n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09n.svg'
elif icon == '10d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10d.svg'
elif icon == '10n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10n.svg'
elif icon == '11d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11d.svg'
elif icon == '11n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11n.svg'
elif icon == '13d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13d.svg'
elif icon == '13n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13n.svg'
elif icon == '50d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50d.svg'
elif icon == '50n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50n.svg'
weather_data.append(weather)
context = {'weather_data' : weather_data, 'icon': icon}
return render(request, 'weather/weather.html', context)
What am I doing wrong or am I missing something?
A:
icon = weather['icon']
This sets a variable icon to reference the string inside the dictionary.
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'
This reassigns that variable to a URL string. It does NOT change the dictionary like you might think.
context = {'weather_data' : weather_data, 'icon': icon}
After the loop, you set a single value in the context which will be the last icon url.
Two suggestions:
Don't reassign a variable to mean something else. Use two different variables instead. So instead of
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'
do
icon_url = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'
Better yet, store the url in each dictionary:
weather['icon_url'] = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'
Now you will have a url for the icon in each city.
You can build the URL directly from the name of the icon without all the if statements. Do this just once instead of 50 times:
weather['icon_url'] = f'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/{icon}.svg'
Alternatively, you can do this directly in the template. Since you didn't share your template, I can't give any details beyond this vague hint.
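For completeness, here is a minimal sketch of the whole revised view with suggestion 2 applied (assuming the same City model, requests import, and weather/weather.html template as in the question; this is an illustration, not the original author's code):
def weather(request):
    url = 'http://api.openweathermap.org/data/2.5/weather?q={}&units=metric&appid=<MYAPPKEY>'
    weather_data = []
    for city in City.objects.all():
        city_weather = requests.get(url.format(city)).json()
        icon = city_weather['weather'][0]['icon']
        weather_data.append({
            'city': city,
            'temperature': city_weather['main']['temp'],
            'description': city_weather['weather'][0]['description'],
            # one f-string replaces the entire if/elif chain
            'icon_url': f'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/{icon}.svg',
        })
    # every city now carries its own icon_url, so nothing leaks from the last loop iteration
    return render(request, 'weather/weather.html', {'weather_data': weather_data})
The template can then loop over weather_data and use each item's icon_url directly.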
|
Django, looping through openweather icons always displays the last icon instead of appended city icon
|
I am trying to build out a weather app using openweather api and what I want to do is replace the icon png's with my own customized icon set.
In order to do this, I have referenced the openweather api png codes as seen here: https://openweathermap.org/weather-conditions. I have written some code that states if this code equals '01d' then replace the icon code with my custom data image src.
The issue is when looping through (after I have added a city), I am always appending the last image which in this case is the data code for '50n' rather than the correct weather code for that city.
here is the code in my views.py:
def weather(request):
url = 'http://api.openweathermap.org/data/2.5/weather?q={}&units=metric&appid=<MYAPPKEY>'
cities = City.objects.all()
weather_data = []
for city in cities:
city_weather = requests.get(url.format(city)).json()
weather = {
'city' : city,
'temperature' : city_weather['main']['temp'],
'description' : city_weather['weather'][0]['description'],
'icon' : city_weather['weather'][0]['icon'],
}
icon = weather['icon']
if icon == '01d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'
elif icon == '01n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01n.svg'
elif icon == '02d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02d.svg'
elif icon == '02n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02n.svg'
elif icon == '03d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03d.svg'
elif icon == '03n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03n.svg'
elif icon == '04d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04d.svg'
elif icon == '04n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04n.svg'
elif icon == '09d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09d.svg'
elif icon == '09n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09n.svg'
elif icon == '10d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10d.svg'
elif icon == '10n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10n.svg'
elif icon == '11d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11d.svg'
elif icon == '11n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11n.svg'
elif icon == '13d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13d.svg'
elif icon == '13n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13n.svg'
elif icon == '50d':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50d.svg'
elif icon == '50n':
icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50n.svg'
weather_data.append(weather)
context = {'weather_data' : weather_data, 'icon': icon}
return render(request, 'weather/weather.html', context)
What am I doing wrong or am I missing something?
|
[
" icon = weather['icon']\n\nThis sets a variable icon to reference the string inside the dictionary.\n icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\nThis reassigns that variable to a URL string. It does NOT change the dictionary like you might think.\n context = {'weather_data' : weather_data, 'icon': icon}\n\nAfter the loop, you set a single value in the context which will be the last icon url.\nTwo suggestions:\n\nDon't reassign a variable to mean something else. Use two different variables instead. So instead of\n icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\ndo\n icon_url = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\n\nBetter yet, store the url in each dictionary:\n weather['icon_url'] = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\nNow you will have a url for the icon in each city.\n\nYou can build the URL directly from the name of the icon without all the if statements. Do this just once instead of 50 times:\n weather['icon_url'] = f'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/{icon}.svg'\n\nAlternatively, you can do this directly in the template. Since you didn't share your template, I can't give any details beyond this vague hint.\n\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0074658135_django_python.txt
|
Q:
What causes this arithmetic discrepancy between numpy and MATLAB and how can I force either behavior in Python?
I tried to normalize a probability distribution of the form $p_k := 2^{-k^2}$ for $k \in \{1,\dots,n\}$ for $n = 8$ in numpy/Python 3.8 along the following lines, using an equivalent of MATLAB's num2hex a la C++ / Python Equivalent of Matlab's num2hex. The sums of the normalized distributions differ in Python and MATLAB R2020a.
If $n < 8$, there is no discrepancy.
What is going on, and how can I force Python to produce the same result as MATLAB for $n > 7$? It's hard for me to tell which of these is IEEE 754 compliant (maybe both, with a discrepancy in grouping that affects a carry[?]) but I need the MATLAB behavior.
I note that there are still discrepancies in rounding between numpy and MATLAB per Differences between Matlab and Numpy and Python's `round` function (which I verified myself) but not sure this has any bearing.
import numpy as np
import struct # for MATLAB num2hex equivalent below
n = 8
unnormalizedPDF = np.array([2**-(k**2) for k in range(1,n+1)])
# MATLAB num2hex equivalent a la https://stackoverflow.com/questions/24790722/
num2hex = lambda x : hex(struct.unpack('!q', struct.pack('!d',x))[0])
hexPDF = [num2hex(unnormalizedPDF[k]/np.sum(unnormalizedPDF)) for k in range(0,n)]
print(hexPDF)
# ['0x3fec5862805436a4',
# '0x3fbc5862805436a4',
# '0x3f6c5862805436a4',
# '0x3efc5862805436a4',
# '0x3e6c5862805436a4',
# '0x3dbc5862805436a4',
# '0x3cec5862805436a4',
# '0x3bfc5862805436a4']
hexPDFSum = num2hex(np.sum(unnormalizedPDF/np.sum(unnormalizedPDF)))
print(hexPDFSum)
# 0x3ff0000000000000
Here is the equivalent in MATLAB:
n = 8;
unnormalizedPDF = 2.^-((1:n).^2);
num2hex(unnormalizedPDF/sum(unnormalizedPDF))
% ans =
%
% 8×16 char array
%
% '3fec5862805436a4'
% '3fbc5862805436a4'
% '3f6c5862805436a4'
% '3efc5862805436a4'
% '3e6c5862805436a4'
% '3dbc5862805436a4'
% '3cec5862805436a4'
% '3bfc5862805436a4'
num2hex(sum(unnormalizedPDF/sum(unnormalizedPDF)))
% ans =
%
% '3fefffffffffffff'
Note that the unnormalized distributions are exactly the same, but the sums of their normalizations differ by a single bit. If I use $n = 7$, everything agrees (both give 0x3fefffffffffffff), and both give the same results for $n < 7$ as well.
A:
According to the manual, numpy.sum uses pairwise summation to get more precision. Another common algorithm is Kahan summation.
Anyway, I wouldn't count too much on Numpy and MATLAB giving the same result up to the last bit, as there might be operation reordering if computations are made in parallel. See this for the kind of problem that can arise.
However, we can cheat a little bit and force Python to do the sum without the extra precision:
hexPDFSum = num2hex(np.sum(np.hstack((np.reshape(unnormalizedPDF / np.sum(unnormalizedPDF), (n, 1)), np.zeros((n, 1)))), 0)[0])
hexPDFSum
'0x3fefffffffffffff'
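To see the same effect without the hstack trick, a plain left-to-right Python loop also avoids NumPy's pairwise summation; assuming MATLAB's sum performs sequential accumulation here, this sketch should reproduce its bit pattern:
normalized = unnormalizedPDF / np.sum(unnormalizedPDF)
total = 0.0
for value in normalized:  # sequential accumulation, no pairwise tree
    total += value
print(num2hex(total))     # expected: '0x3fefffffffffffff' for n = 8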
|
What causes this arithmetic discrepancy between numpy and MATLAB and how can I force either behavior in Python?
|
I tried to normalize a probability distribution of the form $p_k := 2^{-k^2}$ for $k \in \{1,\dots,n\}$ for $n = 8$ in numpy/Python 3.8 along the following lines, using an equivalent of MATLAB's num2hex a la C++ / Python Equivalent of Matlab's num2hex. The sums of the normalized distributions differ in Python and MATLAB R2020a.
If $n < 8$, there is no discrepancy.
What is going on, and how can I force Python to produce the same result as MATLAB for $n > 7$? It's hard for me to tell which of these is IEEE 754 compliant (maybe both, with a discrepancy in grouping that affects a carry[?]) but I need the MATLAB behavior.
I note that there are still discrepancies in rounding between numpy and MATLAB per Differences between Matlab and Numpy and Python's `round` function (which I verified myself) but not sure this has any bearing.
import numpy as np
import struct # for MATLAB num2hex equivalent below
n = 8
unnormalizedPDF = np.array([2**-(k**2) for k in range(1,n+1)])
# MATLAB num2hex equivalent a la https://stackoverflow.com/questions/24790722/
num2hex = lambda x : hex(struct.unpack('!q', struct.pack('!d',x))[0])
hexPDF = [num2hex(unnormalizedPDF[k]/np.sum(unnormalizedPDF)) for k in range(0,n)]
print(hexPDF)
# ['0x3fec5862805436a4',
# '0x3fbc5862805436a4',
# '0x3f6c5862805436a4',
# '0x3efc5862805436a4',
# '0x3e6c5862805436a4',
# '0x3dbc5862805436a4',
# '0x3cec5862805436a4',
# '0x3bfc5862805436a4']
hexPDFSum = num2hex(np.sum(unnormalizedPDF/np.sum(unnormalizedPDF)))
print(hexPDFSum)
# 0x3ff0000000000000
Here is the equivalent in MATLAB:
n = 8;
unnormalizedPDF = 2.^-((1:n).^2);
num2hex(unnormalizedPDF/sum(unnormalizedPDF))
% ans =
%
% 8×16 char array
%
% '3fec5862805436a4'
% '3fbc5862805436a4'
% '3f6c5862805436a4'
% '3efc5862805436a4'
% '3e6c5862805436a4'
% '3dbc5862805436a4'
% '3cec5862805436a4'
% '3bfc5862805436a4'
num2hex(sum(unnormalizedPDF/sum(unnormalizedPDF)))
% ans =
%
% '3fefffffffffffff'
Note that the unnormalized distributions are exactly the same, but the sums of their normalizations differ by a single bit. If I use $n = 7$, everything agrees (both give 0x3fefffffffffffff), and both give the same results for $n < 7$ as well.
|
[
"According to the manual, numpy.sum uses pairwise summation to get more precision. Another common algorithm is Kahan summation.\nAnyway, I wouldn't count too much on Numpy and MATLAB giving the same result up to the last bit, as there might me operation reordering if computations are made in parallel. See this for the kind of problem that can arise.\nHowever, we can cheat a little bit and force Python to do the sum without the extra precision:\nhexPDFSum = num2hex(np.sum(np.hstack((np.reshape(unnormalizedPDF / np.sum(unnormalizedPDF), (n, 1)), np.zeros((n, 1)))), 0)[0])\nhexPDFSum\n'0x3fefffffffffffff'\n\n"
] |
[
3
] |
[] |
[] |
[
"ieee_754",
"matlab",
"numpy",
"python",
"python_3.x"
] |
stackoverflow_0074658068_ieee_754_matlab_numpy_python_python_3.x.txt
|
Q:
Using PySpark to read in datalake table and can't parse timestamp column in Synapse Analytics
I can read in the datalake table and print schema but if I try and display data I get the following error. I am working within Synapse Analytics using a PySpark Notebook and Apache Spark Pool.
See error message:
You may get a different result due to the upgrading of Spark 3.0: Fail to parse '10/27/2022 1:14:31 PM' in the new parser.
You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
I don't want to use the LEGACY version.
I've tried converting using the following code
df = df.withColumn("SinkCreatedOn",to_date(col("SinkCreatedOn"),"M/dd/yyyy h:m:s"))
df = df.withColumn("SinkModifiedOn",to_date(col("SinkModifiedOn"),"M/dd/yyyy h:m:s"))
I've also tried converting the suspect columns to StringType() or DateType() but no luck.
Any help appreciated
Thank you
A:
Try the script with the date format below
df = df1.withColumn("SinkCreatedOn",to_date(col("SinkCreatedOn"),"MM/dd/yyyy h:mm:s a"))
I repro'd the same with sample input. Below is the approach.
Code:
df1 = spark.createDataFrame(
    data=[("1", "Arpit", "10/27/2022 1:14:31 PM"),
          ("2", "Anand", "10/28/2022 1:14:31 PM"),
          ("3", "Mike", "10/29/2022 1:14:31 PM")],
    schema=["id", "Name", "SinkCreatedOn"])
df1.printSchema()
from pyspark.sql.functions import *
df_output = df1.withColumn("SinkCreatedOn",to_date(col("SinkCreatedOn"),"MM/dd/yyyy h:mm:s a"))
df1.show()
df_output.show()
(output screenshots for df1 and df_output omitted)
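Note that to_date truncates the value to a date and drops the time of day. If the timestamp component matters, a hedged variant of the same repro using to_timestamp (with the sample frame df1 from above) would be:
from pyspark.sql.functions import to_timestamp, col
# keeps both the date and the 1:14:31 PM part instead of truncating to a date
df_ts = df1.withColumn("SinkCreatedOn", to_timestamp(col("SinkCreatedOn"), "MM/dd/yyyy h:mm:ss a"))
df_ts.show(truncate=False)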
|
Using PySpark to read in datalake table and can't parse timestamp column in Synapse Analytics
|
I can read in the datalake table and print schema but if I try and display data I get the following error. I am working within Synapse Analytics using a PySpark Notebook and Apache Spark Pool.
See error message:
You may get a different result due to the upgrading of Spark 3.0: Fail to parse '10/27/2022 1:14:31 PM' in the new parser.
You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
I don't want to use the LEGACY version.
I've tried converting using the following code
df = df.withColumn("SinkCreatedOn",to_date(col("SinkCreatedOn"),"M/dd/yyyy h:m:s"))
df = df.withColumn("SinkModifiedOn",to_date(col("SinkModifiedOn"),"M/dd/yyyy h:m:s"))
I've also tried converting the suspect columns to StringType() or DateType() but no luck.
Any help appreciated
Thank you
|
[
"Try the script with below date format\ndf = df1.withColumn(\"SinkCreatedOn\",to_date(col(\"SinkCreatedOn\"),\"MM/dd/yyyy h:mm:s a\"))\nI repro'd the same with sample input. Below is the approach.\nCode:\ndf1=spark.createDataFrame(\ndata = [ (\"1\",\"Arpit\",\"10/27/2022 1:14:31 PM\"),(\"2\",\"Anand\",\"10/28/2022 1:14:31 PM\"),(\"3\",\"Mike\",\"10/29/2022 1:14:31 PM\")],\nschema=[\"id\",\"Name\",\"SinkCreatedOn\"])\ndf1.printSchema()\nfrom pyspark.sql.functions import *\ndf_output = df1.withColumn(\"SinkCreatedOn\",to_date(col(\"SinkCreatedOn\"),\"MM/dd/yyyy h:mm:s a\"))\ndf1.show()\ndf_output.show()\n\ndf1\n\ndf_output\n\n"
] |
[
0
] |
[] |
[] |
[
"apache_spark",
"azure_data_lake_gen2",
"azure_synapse",
"pyspark"
] |
stackoverflow_0074277979_apache_spark_azure_data_lake_gen2_azure_synapse_pyspark.txt
|
Q:
I made a list which I wanted to receive data in order
The server list only takes one input and uses the for loop to duplicate it
Server starting [Listining] Server is listning to 192.168.129.254 NEW Connection - ('192.168.129.254', 64225) connected[ACTIVE CONNECTIONS] 1
['hello', 'hello', 'hello', 'hello', 'hello']
These 5 'hello' entries in the list came from a single piece of data sent by the client
I want the list to save 5 different inputs sent from the client, like:
['input1', 'input2', 'input3', 'input4', 'input5']
here is the code
def handle_client(conn, addr):
print(f"NEW Connection - {addr} connected")
connected = True
while connected:
msg_length = conn.recv(HEADER).decode(FORMAT)
if msg_length:
msg_length = int(msg_length)
msg = conn.recv(msg_length).decode(FORMAT)
if msg == DISCONNECT_MSG:
connected = False
list1 = []
for i in range(5):
values = msg
list1.append(values)
print(list1)
conn.close()
A:
I want the list to save 5 different inputs sent from client
Then you have to rearrange your code a bit:
list1 = []
while connected:
msg_length = conn.recv(HEADER).decode(FORMAT)
if msg_length:
msg_length = int(msg_length)
msg = conn.recv(msg_length).decode(FORMAT)
if msg == DISCONNECT_MSG:
connected = False
if len(list1) < 5:
values = msg
list1.append(values)
print(list1)
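If the intent is to stop collecting after the fifth message, a small sketch of the same handler shape (reusing the question's HEADER, FORMAT and DISCONNECT_MSG constants; the stop condition is an assumption about the desired behaviour) could be:
    list1 = []
    while connected and len(list1) < 5:
        msg_length = conn.recv(HEADER).decode(FORMAT)
        if msg_length:
            msg = conn.recv(int(msg_length)).decode(FORMAT)
            if msg == DISCONNECT_MSG:
                connected = False
            else:
                list1.append(msg)  # one entry per received message
    print(list1)                   # e.g. ['input1', 'input2', 'input3', 'input4', 'input5']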
|
I made a list which I wanted to receive data in order
|
The server list only takes one input and uses the for loop to duplicate it
Server starting [Listining] Server is listning to 192.168.129.254 NEW Connection - ('192.168.129.254', 64225) connected[ACTIVE CONNECTIONS] 1
['hello', 'hello', 'hello', 'hello', 'hello']
These 5 'hello' entries in the list came from a single piece of data sent by the client
I want the list to save 5 different inputs sent from the client, like:
['input1', 'input2', 'input3', 'input4', 'input5']
here is the code
def handle_client(conn, addr):
print(f"NEW Connection - {addr} connected")
connected = True
while connected:
msg_length = conn.recv(HEADER).decode(FORMAT)
if msg_length:
msg_length = int(msg_length)
msg = conn.recv(msg_length).decode(FORMAT)
if msg == DISCONNECT_MSG:
connected = False
list1 = []
for i in range(5):
values = msg
list1.append(values)
print(list1)
conn.close()
|
[
"\nI want the list to save 5 different inputs sent from client\n\nThen you have to rearrange your code a bit:\n list1 = []\n while connected:\n msg_length = conn.recv(HEADER).decode(FORMAT)\n if msg_length:\n msg_length = int(msg_length)\n msg = conn.recv(msg_length).decode(FORMAT)\n if msg == DISCONNECT_MSG:\n connected = False\n if len(list1) < 5:\n values = msg\n list1.append(values)\n print(list1)\n\n"
] |
[
0
] |
[] |
[] |
[
"list",
"python",
"server",
"sockets"
] |
stackoverflow_0074656985_list_python_server_sockets.txt
|
Q:
OpenJDK Java 17 docker image
We are upgrading our microservices in docker to use Java 17 and previously we used the base image openjdk:11-jre-slim. What is the corresponding image for Java 17?
There doesn't seem to be an openjdk:17-jre-slim? In fact there don't seem to be any recent jre images - just jdks. The 11-jre-slim image seems to be around 75MB - is there a suitable similarly sized Java 17 image?
We have also used alpine images in the past too.
A:
The Oracle image is freely available from Java 17: openjdk:17-oracle
Dockerfile:
FROM openjdk:17-oracle
openjdk:17-jdk-slim also produces a lightweight image
Dockerfile:
FROM openjdk:17-jdk-slim
A:
If you are looking for the tiniest Docker images with Alpine Linux and OpenJDK, have a look at Liberica JDK containers at DockerHub https://hub.docker.com/r/bellsoft/liberica-openjdk-alpine
The images have Alpine and Liberica Lite, which is optimized in size to be used on microservices. It is also recommended by the Spring team https://spring.io/quickstart
A:
You can try this (eclipse-temurin:17-jre-alpine) which is around 50MB of compressed size
https://hub.docker.com/layers/eclipse-temurin/library/eclipse-temurin/17-jre-alpine/images/sha256-839f3208bfc22f17bf57391d5c91d51c627d032d6900a0475228b94e48a8f9b3?context=explore
I still couldn't find an OpenJDK jre image
A:
An update on this - looking again at the Eclipse Adoptium issue mentioned above (https://github.com/adoptium/temurin-build/issues/2683) the more recent comments indicate they have now started producing JRE images.
We have switched to using eclipse-temurin:17-jre-focal. There is also a (slightly larger) 17-jre-centos7 and a smaller 17-jre-alpine, but we now need some libraries that aren't in alpine.
A:
In your Dockerfile link:
FROM openjdk:17-alpine
|
OpenJDK Java 17 docker image
|
We are upgrading our microservices in docker to use Java 17 and previously we used the base image openjdk:11-jre-slim. What is the corresponding image for Java 17?
There doesn't seem to be an openjdk:17-jre-slim? In fact there don't seem to be any recent jre images - just jdks. The 11-jre-slim image seems to be around 75MB - is there a suitable similarly sized Java 17 image?
We have also used alpine images in the past too.
|
[
"Oracle image is freely available from Java-17 openjdk:17-oracle\nDockerfile:\nFROM openjdk:17-oracle\n\nopenjdk:17-jdk-slim also creates lightweight image\nDockerfile:\nFROM openjdk:17-jdk-slim\n\n",
"If you are looking for the tiniest Docker images with Alpine Linux and OpenJDK, have a look at Liberica JDK containers at DockerHub https://hub.docker.com/r/bellsoft/liberica-openjdk-alpine\nThe images have Alpine and Liberica Lite, which is optimized in size to be used on microservices. It is also recommended by the Spring team https://spring.io/quickstart\n",
"You can try this (eclipse-temurin:17-jre-alpine) which is around 50MB of compressed size\nhttps://hub.docker.com/layers/eclipse-temurin/library/eclipse-temurin/17-jre-alpine/images/sha256-839f3208bfc22f17bf57391d5c91d51c627d032d6900a0475228b94e48a8f9b3?context=explore\nI couldn't sill find an OpenJDK jre image\n",
"An update on this - looking again at the Eclipse Adoptium issue mentioned above (https://github.com/adoptium/temurin-build/issues/2683) the more recent comments indicate they have now started producing JRE images.\nWe have switched to using eclipse-temurin:17-jre-focal. There is also a (slightly larger) 17-jre-centos7 and a smaller 17-jre-alpine, but we now need some libraries that aren't in alpine.\n",
"In your Dockerfile link:\nFROM openjdk:17-alpine\n\n"
] |
[
19,
6,
5,
3,
2
] |
[
"In your Dockerfile add:\n\n\nFROM openjdk:17\nADD target/*.jar app.jar\nENTRYPOINT [\"java\",\"-jar\",\"app.jar\"]\n\n\n\n"
] |
[
-2
] |
[
"docker",
"java",
"java_17"
] |
stackoverflow_0069525199_docker_java_java_17.txt
|
Q:
Python: converting timestamp to date time not working
I am requesting data from the api.etherscan.io website. For this, I require a free API key. I am getting information for the following wallet addresses 0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5, 0xc508dbe4866528db024fb126e0eb97595668c288. Below is the code I am using:
wallet_addresses = ['0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5', '0xc508dbe4866528db024fb126e0eb97595668c288']
page_number = 0
df_main = pd.DataFrame()
while True:
for address in wallet_addresses:
url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}'
output = requests.get(url).text
df_temp = pd.DataFrame(json.loads(output)['result'])
df_temp['wallet_address'] = address
df_main = df_main.append(df_temp)
page_number += 1
df_main['timeStamp'] = pd.to_datetime(df_main['timeStamp'], unit='s')
if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime.date(2022, 1, 1):
pass
Note that you need your own (free) ether_api.
What I want to do is get data from today's date, all the way back to 2022-01-01 which is what I am trying to achieve in the if statement.
However, the above gives me an error: ValueError: unit='s' not valid with non-numerical val='2022-09-19 18:14:47'
How can this be done? I've tried multiple methods to get pandas datetime to work, but all of them gave me errors.
A:
Here you go, it's working without an error:
page_number = 0
df_main = pd.DataFrame()
while True:
for address in wallet_addresses:
url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}'
output = requests.get(url).text
df_temp = pd.DataFrame(json.loads(output)['result'])
df_temp['wallet_address'] = address
page_number += 1
df_temp['timeStamp'] = pd.to_datetime(df_temp['timeStamp'], unit='s')
df_main = df_main.append(df_temp)
if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime(2022, 1, 1).date():
pass
Wrong append
So, what has happened here? As suggested in the first comment under the question, we checked the type of the first record in df_main with type(df_main['timeStamp'].iloc[0]). With IPython and Jupyter Notebook one can inspect what is happening with df_main right after receiving the error, once it has been populated by the last for loop iteration that failed.
Otherwise, if one uses PyCharm or any other IDE with a debugger, the contents of df_main can be revealed via debugging.
What we were missing is that df_main = df_main.append(df_temp) is placed in slightly the wrong place. On the first iteration it works well: pd.to_datetime(df_main['timeStamp'], unit='s') gets a str type holding a Linux epoch and converts it to pandas._libs.tslibs.timestamps.Timestamp.
But on the next iteration df_main['timeStamp'] already has the Timestamp type and gets appended with str values, so we get a column with mixed types. E.g.:
type(df_main['timeStamp'].iloc[0]) == type(df_main['timeStamp'].iloc[-1])
This results in False. Hence, when trying to convert a Timestamp to a Timestamp, one gets the error featured in the question.
To mitigate this we can place .append() below the conversion and do the conversion on df_temp instead of df_main; this way we only append Timestamps to the resulting DataFrame, and the code below with the if clause works fine.
As a side note
Another small change I've made was datetime.date(2022, 1, 1). This change was not needed, but the way one works with datetime depends on how this library was imported, so it's worth mentioning:
import datetime
datetime.date(2022, 1, 1)
datetime.datetime(2022, 1, 1).date()
from datetime import datetime
datetime(2022, 1, 1).date()
All of the above is legit and will produce the same result. The first import imports the module; the second imports the type.
Alternative solution
Conversion to Timestamp takes time. If the API provides Linux epoch dates, why not use those dates for the comparison? Let's add this near where you define wallet_addresses:
reference_date = "01/01/2021"
reference_date = int(time.mktime(datetime.datetime.strptime(reference_date, "%d/%m/%Y").timetuple()))
This will result in 1609448400. See this other Stack Overflow question as a reference.
This integer can now be compared with timestamps provided by the API. The only thing left is to cast str to int. We can have your code left intact with some minor changes at the end:
<< Your code without changes >>
df_main['timeStamp'] = df_main['timeStamp'].astype(int)
if min(df_main['timeStamp']) < reference_date:
pass
To make a benchmark I've changed while True: to for _ in range(0,4): to limit the infinite cycle, results are as follows:
Initial solution took 11.6 s to complete
Alternative solution took 8.85 s to complete
It's 30% faster. Casting str to int takes less time than conversion to TimeStamps, I would call this a preferable solution.
Future warning
FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
It makes sense to comply with this warning. df_main = df_main.append(df_temp) has to be changed to df_main = pd.concat([df_main, df_temp]).
As of the current 1.5.0 version it's already deprecated. Time to upgrade!
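Putting both fixes together, a minimal sketch of the loop body (same url, wallet_addresses, page_number and ether_api as in the question; illustrative, not a drop-in replacement):
page_number = 0
df_main = pd.DataFrame()
for address in wallet_addresses:
    url = f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}'
    df_temp = pd.DataFrame(json.loads(requests.get(url).text)['result'])
    df_temp['wallet_address'] = address
    # convert while the column is still all-str, then concatenate instead of append
    df_temp['timeStamp'] = pd.to_datetime(df_temp['timeStamp'], unit='s')
    df_main = pd.concat([df_main, df_temp])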
|
Python: converting timestamp to date time not working
|
I am requesting data from the api.etherscan.io website. For this, I require a free API key. I am getting information for the following wallet addresses 0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5, 0xc508dbe4866528db024fb126e0eb97595668c288. Below is the code I am using:
wallet_addresses = ['0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5', '0xc508dbe4866528db024fb126e0eb97595668c288']
page_number = 0
df_main = pd.DataFrame()
while True:
for address in wallet_addresses:
url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}'
output = requests.get(url).text
df_temp = pd.DataFrame(json.loads(output)['result'])
df_temp['wallet_address'] = address
df_main = df_main.append(df_temp)
page_number += 1
df_main['timeStamp'] = pd.to_datetime(df_main['timeStamp'], unit='s')
if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime.date(2022, 1, 1):
pass
Note that you need your own (free) ether_api.
What I want to do is get data from today's date, all the way back to 2022-01-01 which is what I am trying to achieve in the if statement.
However, the above gives me an error: ValueError: unit='s' not valid with non-numerical val='2022-09-19 18:14:47'
How can this be done? I've tried multiple methods to get pandas datetime to work, but all of them gave me errors.
|
[
"Here you go, it's working without an error:\npage_number = 0\ndf_main = pd.DataFrame()\nwhile True:\n for address in wallet_addresses:\n url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}'\n output = requests.get(url).text\n df_temp = pd.DataFrame(json.loads(output)['result'])\n df_temp['wallet_address'] = address\n page_number += 1\n df_temp['timeStamp'] = pd.to_datetime(df_temp['timeStamp'], unit='s')\n df_main = df_main.append(df_temp)\n if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime(2022, 1, 1).date():\n pass\n\n\nWrong append\nSo, what has happened here. As suggested in the first comment under question we acknowledged the type of first record in df_main with type(df_main['timeStamp'].iloc[0]). With IPython and Jupyter-Notebook one can look what is happening with df_main just after receiving an error with it being populated on the last for loop iteration that failed.\nOtherwise if one uses PyCharm or any other IDE with a possibility to debug, the contents of df_main can be revealed via debug.\nWhat we were missing, is that df_main = df_main.append(df_temp) is placed in a slightly wrong place. On first iteration it works well, pd.to_datetime(df_main['timeStamp'], unit='s') gets an str type with Linux epoch and gets converted to pandas._libs.tslibs.timestamps.Timestamp.\nBut on next iteration df_main['timeStamp'] already has the Timestamp type and it gets appended with str type, so we get a column with mixed type. E.g.:\ntype(df_main['timeStamp'].iloc[0]) == type(df_main['timeStamp'].iloc[-1])\n\nThis results with False. Hence when trying to convert Timestamp to Timestamp one gets an error featured in question.\nTo mitigate this we can place .append() below the conversion and do this conversion on df_temp instead of df_main, this way we will only append Timestamps to the resulting DataFrame and the code below with if clause will work fine.\nAs a side note\nAnother small change I've made was datetime.date(2022, 1, 1). This change was not needed, but the way one works with datetime depends on how this library was imported, so it's worth mentioning:\nimport datetime\ndatetime.date(2022, 1, 1)\ndatetime.datetime(2022, 1, 1).date()\n\nfrom datetime import datetime\ndatetime(2022, 1, 1).date()\n\nAll the above is legit and will produce the same. On the first import module gets imported, on the second one type gets imported.\n\nAlternative solution\nConversion to Timestamp takes time. If the API provides Linux epoch dates, why not use this date for comparison? Let's add this somewhere where you define wallet_addresses:\nreference_date = \"01/01/2021\"\nreference_date = int(time.mktime(datetime.datetime.strptime(reference_date, \"%d/%m/%Y\").timetuple()))\n\nThis will result in 1609448400. Other stack overflow question as reference.\nThis integer can now be compared with timestamps provided by the API. The only thing left is to cast str to int. We can have your code left intact with some minor changes at the end:\n<< Your code without changes >>\n df_main['timeStamp'] = df_main['timeStamp'].astype(int)\n if min(df_main['timeStamp']) < reference_date:\n pass\n\nTo make a benchmark I've changed while True: to for _ in range(0,4): to limit the infinite cycle, results are as follows:\n\nInitial solution took 11.6 s to complete\nAlternative solution took 8.85 s to complete\n\nIt's 30% faster. 
Casting str to int takes less time than conversion to TimeStamps, I would call this a preferable solution.\nFuture warning\nFutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.\n\nIt makes sense to comply with this warning. df_main = df_main.append(df_temp) has to be changed to df_main = pd.concat([df_main, df_temp]).\nAs for current 1.5.0 version it's already deprecated. Time to upgrade!\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"datetime",
"etherscan",
"pandas",
"python"
] |
stackoverflow_0074657344_dataframe_datetime_etherscan_pandas_python.txt
|
Q:
beginner troubles
Making a guessing game (following along in a course). I keep getting an attribute error trying to append my guess to the guesses list. I was prompted to say getting warmer if the current guess was closer than the last guess. I set guesses = 0 and within the while loop I tried to append with guesses.append(cg), where cg = current guess
import random
correct = random.randint(1,100)
print(correct)
guesses = 0
cg = int(input('Welcome to GUESSER guess here: '))
while True:
if cg > 100 or cg < 0:
print('out of bounds')
continue
if cg == correct:
print(f'It took {len(guesses)} to guess right. nice.')
break
if abs(cg - correct) <= 10: #first guess
print('warm.')
else:
print('cold.')
guesses.append(cg)
if guesses[-2]: #after first guess
if abs(correct - guesses[-2]) > abs(correct - cg):
print('warmer')
guesses.append(cg)
else:
print ('colder')
guesses.append(cg)
pass
A:
You assign an integer to guesses here:
guesses = 0
So the interpreter is right saying you CANNOT append to int. Define it as a list:
guesses = []
But there's more:
You ask for input BEFORE the loop, so it happens only once; after that the loop is infinite, because no new input is ever provided
If you need only the current and previous value you don't need a list at all, just 3 integers (current, previous, and a counter to print the number of guesses if you wish to do that) - see the sketch after this list
If you want to stick to the list you'll get an IndexError next, since there is no guesses[-2] element during the 1st iteration (and you don't check the length of the list before trying to access it)
Do NOT call variables something like "cg" - it means nothing, and abbreviations depend on a context (which you might or might not have). This is a simple program, so you can instantly see that it's probably "current_guess", but that's not the case in general; the IDE should make your life easier and let you auto-insert a name once it's defined, so if somebody says descriptive names are time consuming they are plainly wrong
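A minimal sketch of the "3 integers" idea from the list above (variable names here are illustrative, not from the question):
import random

correct = random.randint(1, 100)
previous_guess = None
attempts = 0

while True:
    current_guess = int(input('Welcome to GUESSER guess here: '))
    if not 0 <= current_guess <= 100:
        print('out of bounds')
        continue
    attempts += 1
    if current_guess == correct:
        print(f'It took {attempts} guesses to get it right. nice.')
        break
    if previous_guess is None:
        # first guess: no previous value to compare against
        print('warm.' if abs(current_guess - correct) <= 10 else 'cold.')
    else:
        print('warmer' if abs(current_guess - correct) < abs(previous_guess - correct) else 'colder')
    previous_guess = current_guess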
A:
import random

win_num = random.randint(1, 100)
guess_list = []

while True:
    guess = int(input("Enter Guess Here: \n"))

    if guess < 1 or guess > 100:
        print('OOB, try again: ')
        continue

    # compare player guess to number
    if guess == win_num:
        print(f'you did it in {len(guess_list)} congrats')
        break

    # if guess is wrong, add the guess to the list
    guess_list.append(guess)

    # compare against the previous guess once there is one
    if len(guess_list) >= 2:
        if abs(win_num - guess) < abs(win_num - guess_list[-2]):
            print('warmer')
        else:
            print('colder')
    else:
        if abs(win_num - guess) <= 10:
            print('warm')
        else:
            print('cold')
|
beginner troubles
|
Making a guessing game (following along in a course). I keep getting an attribute error trying to append my guess to the guesses list. I was prompted to say getting warmer if the current guess was closer than the last guess. I set guesses = 0 and within the while loop I tried to append with guesses.append(cg), where cg = current guess
import random
correct = random.randint(1,100)
print(correct)
guesses = 0
cg = int(input('Welcome to GUESSER guess here: '))
while True:
if cg > 100 or cg < 0:
print('out of bounds')
continue
if cg == correct:
print(f'It took {len(guesses)} to guess right. nice.')
break
if abs(cg - correct) <= 10: #first guess
print('warm.')
else:
print('cold.')
guesses.append(cg)
if guesses[-2]: #after first guess
if abs(correct - guesses[-2]) > abs(correct - cg):
print('warmer')
guesses.append(cg)
else:
print ('colder')
guesses.append(cg)
pass
|
[
"You assign an integer to guesses here:\nguesses = 0\n\nSo the interpreter is right saying you CANNOT append to int. Define it as a list:\nguesses = []\n\nBut there's more:\n\nYou ask for input BEFORE the loop, so it happens only once, later the loop is infinite, cause no new input is ever provided\nIf you need only current and previous value you don't need a list at all, rather 3 integers (current, previous, counter - to print number of guesses if you wish to do that)\nIf you want to stick to the list you'll get an IndexError next, since there is no guesses[-2] element during 1st iteration (and you don't check the length of the list before trying to access that)\nDo NOT call variables like \"cg\" it means nothing, abbreviations depend on a context (which you might have or might not have), now it's a simple program and you can instantly see that it's probably \"current_guess\", but that's not the case in general, the IDE should make your life easier and give you possibility to auto insert once defined name, so if somebody says it's time consuming they are plainly wrong\n\n",
"\nwhile True:\n guess = int(input(\"Enter Guess Here: \\n\"))\n \n if guess < 1 or guess > 100:\n print('OOB, try again: ')\n continue\n \n # compare player guess to number\n if guess == win_num:\n print(f'you did it in {len(guess_list)} congrats')\n break\n \n #if guess is wrong add to guess to list\n guess_list.append(guess)\n \n #\n if guess_list[-2]:\n if abs(win_num - guess) < abs(win_num - guess_list[-2]):\n print('warmer')\n else:\n print('colder')\n \n \n else:\n if abs(win_num - guess) <= 10:\n print('warm')\n else:\n print('cold')\n \n \n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"error_handling",
"list",
"python"
] |
stackoverflow_0074648333_error_handling_list_python.txt
|
Q:
Why is my int array printing random numbers when getting the numbers from a .txt file?
The problem is that when I finish reading the file and close it, then try to print the numbers from the array, I get random numbers, as you can see in the picture below.
However, when I print them after storing them in the array during reading the file, it works just fine!
I used the following code to read the file and store the data:
typedef struct clientsStruct {
int id;
char actions[SIZE];
}clients;
char line[] = "";
long balances[SIZE];
int bi; // size of balances
int accountsNumber = 0;
int clientsNumber = 0;
void readFile() {
// remove trailing whitespaces
line[strlen(line) - 1] = '\0';
char splitter[] = " ";
char *ptr = strtok(line, splitter);
while(ptr != NULL) {
// if "balance" word is matched, it means we're getting account details
if (strcmp(ptr, "balance") == 0) {
accountsNumber++; // increase the number of accounts
ptr = strtok(NULL, splitter);
balances[accountsNumber] = strtol(ptr, NULL, 0);
bi++;
printf("**************** balance: %d - accountsNumber: %d****************\n", (int)balances[ii], accountsNumber);
}
// count the clients based on the existence of "client" word
else if (strstr(ptr, "client") != NULL) {
// printf(ptr + strlen(ptr) - 1);
clientsNumber++; // increase the number of clients
ptr = strtok(NULL, splitter);
// get the series of transitions made by client-n
while (ptr) {
clients[jj].id = clientsNumber;
strncat(clients[jj].actions, ptr, strlen(ptr));
ptr = strtok(NULL, splitter);
if (ptr != NULL)
strncat(clients[jj].actions, " ", 2);
}
jj++;
}
ptr = strtok(NULL, splitter);
}
}
The .txt file has the following format:
account1 balance 1000
account2 balance 2000
account3 balance 3000
account4 balance 4000
client1 deposit account2 1000
client2 withdraw account1 300 deposit account4 200
client3 deposit account3 500 withdraw account4 400 withdraw account1 100
client4 withdraw account1 40000 withdraw account2 800
NOTE: I get the correct output of the balances (after finishing reading the file) when I run the program using CMD, while I get it wrong if I use CLION. Also, I use pthread along with the code, but I commented the threading out and the result remains the same!
I appreciate any assistance!
A:
Various issues:
Too small
char line[] = ""; is only a 1-character array. As a string, it is only suitable for "".
It certainly should be larger, and the code that populates line[] should be size-aware.
Wrong way to remove white-spaces
line[strlen(line) - 1] = '\0' assumes the last character is a white-space.
The code below invites a hacker exploit. Consider what happens if line[0] was populated with a null character that was read.
// remove trailing whitespaces
line[strlen(line) - 1] = '\0'; // bad
Alternative to lop off a potential trailing '\n':
line[strcspn(line, "\n")] = '\0';
Questionable code
strncat(clients[jj].actions, ptr, strlen(ptr)); risks buffer overflow.
|
Why is my int array printing random numbers when getting the numbers from a .txt file?
|
The problem is that when I finish reading the file and close it, then try to print the numbers from the array, I get random numbers, as you can see in the picture below.
However, when I print them after storing them in the array during reading the file, it works just fine!
I used the following code to read the file and store the data:
typedef struct clientsStruct {
int id;
char actions[SIZE];
}clients;
char line[] = "";
long balances[SIZE];
int bi; // size of balances
int accountsNumber = 0;
int clientsNumber = 0;
void readFile() {
// remove trailing whitespaces
line[strlen(line) - 1] = '\0';
char splitter[] = " ";
char *ptr = strtok(line, splitter);
while(ptr != NULL) {
// if "balance" word is matched, it means we're getting account details
if (strcmp(ptr, "balance") == 0) {
accountsNumber++; // increase the number of accounts
ptr = strtok(NULL, splitter);
balances[accountsNumber] = strtol(ptr, NULL, 0);
bi++;
printf("**************** balance: %d - accountsNumber: %d****************\n", (int)balances[ii], accountsNumber);
}
// count the clients based on the existence of "client" word
else if (strstr(ptr, "client") != NULL) {
// printf(ptr + strlen(ptr) - 1);
clientsNumber++; // increase the number of clients
ptr = strtok(NULL, splitter);
// get the series of transitions made by client-n
while (ptr) {
clients[jj].id = clientsNumber;
strncat(clients[jj].actions, ptr, strlen(ptr));
ptr = strtok(NULL, splitter);
if (ptr != NULL)
strncat(clients[jj].actions, " ", 2);
}
jj++;
}
ptr = strtok(NULL, splitter);
}
}
The .txt file has the following format:
account1 balance 1000
account2 balance 2000
account3 balance 3000
account4 balance 4000
client1 deposit account2 1000
client2 withdraw account1 300 deposit account4 200
client3 deposit account3 500 withdraw account4 400 withdraw account1 100
client4 withdraw account1 40000 withdraw account2 800
NOTE: I get the correct output of the balances (after finishing reading the file) when I run the program using CMD, while I get it wrong if I use CLION. Also, I use pthread along with the code, but I commented the threading out and the result remains the same!
I appreciate any assistance!
|
[
"Various issues:\nToo small\nchar line[] = \"\"; is only a 1 character array. As a string, only suitable for \"\".\nCertainly should be larger and the code that populates line[] should be size aware.\nWrong way to remove white-spaces\nline[strlen(line) - 1] = '\\0' assumes the last character is a white-space.\nBelow code is a hacker exploit. Consider if line[0] was populated with a null character that was read.\n// remove trailing whitespaces\nline[strlen(line) - 1] = '\\0'; // bad\n\nAlternative to lop off a potential trailing '\\n':\nline[strcspn(line, \"\\n\")] = '\\0';\n\nQuestionable code\nstrncat(clients[jj].actions, ptr, strlen(ptr)); risks buffer overflow.\n"
] |
[
0
] |
[] |
[] |
[
"c",
"file_io",
"locking",
"mutex",
"pthreads"
] |
stackoverflow_0074655211_c_file_io_locking_mutex_pthreads.txt
|
Q:
"Call to undefined function trailingslashit()" wordpress PHP fatal error when trying to update plugins
We have a wordpress installation on provider wpengine. When we try to update some plugins we get the fatal PHP error in subject. The provider support do not know how to help us. This is the call stack of the error:
"PHP Fatal error: Uncaught Error: Call to undefined function trailingslashit() in /nas/content/live/sillaindustrie/wp-includes/class-wp-textdomain-registry.php:103\nStack trace:\n#0 /nas/content/live/sillaindustrie/wp-includes/l10n.php(784): WP_Textdomain_Registry->set('default', 'it_IT', '/nas/content/li...')\n#1 /nas/content/live/sillaindustrie/wp-includes/load.php(1401): load_textdomain('default', '/nas/content/li...', 'it_IT')\n#2 /nas/content/live/sillaindustrie/wp-includes/load.php(278): wp_load_translations_early()\n#3 /nas/content/live/sillaindustrie/wp-settings.php(74): wp_maintenance()\n#4 /nas/content/live/sillaindustrie/wp-config.php(67): require_once('/nas/content/li...')\n#5 /nas/content/live/sillaindustrie/wp-load.php(50): require_once('/nas/content/li...')\n#6 /nas/content/live/sillaindustrie/wp-blog-header.php(13): require_once('/nas/content/li...')\n#7 /nas/content/live/sillaindustrie/index.php(17): require('/nas/content/li...')\n#8 {main}\n thrown in /nas/content/live/sillaindustrie/wp-includes/class-wp-textdomain-registry.php on line 103, referer: https://silla.industries/wp-admin/update-core.php?action=do-plugin-upgrade"
It seems to be related to the WPML plugin or something similar. Any suggestions?
Thanks
G.
I tried to update the WordPress plugins, but I cannot understand the source of the error. Maybe it is a plugin incompatibility, but I don't know how to track it down.
A:
Check you have installed php-psr.
I had this error as I hadn't installed php8.1-psr
|
"Call to undefined function trailingslashit()" wordpress PHP fatal error when trying to update plugins
|
We have a WordPress installation on the provider WP Engine. When we try to update some plugins we get the fatal PHP error in the subject. The provider's support does not know how to help us. This is the call stack of the error:
"PHP Fatal error: Uncaught Error: Call to undefined function trailingslashit() in /nas/content/live/sillaindustrie/wp-includes/class-wp-textdomain-registry.php:103\nStack trace:\n#0 /nas/content/live/sillaindustrie/wp-includes/l10n.php(784): WP_Textdomain_Registry->set('default', 'it_IT', '/nas/content/li...')\n#1 /nas/content/live/sillaindustrie/wp-includes/load.php(1401): load_textdomain('default', '/nas/content/li...', 'it_IT')\n#2 /nas/content/live/sillaindustrie/wp-includes/load.php(278): wp_load_translations_early()\n#3 /nas/content/live/sillaindustrie/wp-settings.php(74): wp_maintenance()\n#4 /nas/content/live/sillaindustrie/wp-config.php(67): require_once('/nas/content/li...')\n#5 /nas/content/live/sillaindustrie/wp-load.php(50): require_once('/nas/content/li...')\n#6 /nas/content/live/sillaindustrie/wp-blog-header.php(13): require_once('/nas/content/li...')\n#7 /nas/content/live/sillaindustrie/index.php(17): require('/nas/content/li...')\n#8 {main}\n thrown in /nas/content/live/sillaindustrie/wp-includes/class-wp-textdomain-registry.php on line 103, referer: https://silla.industries/wp-admin/update-core.php?action=do-plugin-upgrade"
It seems to be related to the WPML plugin or something similar. Any suggestions?
Thanks
G.
I tried to update the WordPress plugins, but I cannot understand the source of the error. Maybe it is a plugin incompatibility, but I don't know how to track it down.
|
[
"Check you have installed php-psr.\nI had this error as I hadn't installed php8.1-psr\n"
] |
[
0
] |
[] |
[] |
[
"php",
"plugins",
"wordpress"
] |
stackoverflow_0074653740_php_plugins_wordpress.txt
|